Data Modeling Best Practices

Hi friends!
When designing a system, what are the best practices for data modeling? Please share a few tips. Thanks.
With regards,
Rekha

Hi,
The link below can be useful:
BI Data Modeling and Frontend Design
You can also get the best-practice (configuration) guides from service.sap.com.
Best practices:
http://help.sap.com/bestpractices
http://help.sap.com/bp_bblibrary/600/html/
Regards,
Satya

Similar Messages

  • Data Model Best Practices for Large Data Models

    We are currently rolling out Hyperion IR 11.1.x and are trying to establish best practices for BQYs and how to display these models to our end users.
    So far, we have created an OCE file that limits the selectable tables to only those that are within the model.
    Then, we created a BQY that brings in the tables to a data model, created metatopics for the main tables and integrated the descriptions via lookups in the meta topics.
    This seems to be OK; however, any time I try to add items to a query, as soon as I add columns from different tables the application freezes, consumes a large amount of memory, and then closes itself.
    Obviously this isn't acceptable to hand to our end users as is, so I'm asking for suggestions.
    Are there settings I can change to get around this memory issue? Do I need to use a smaller model?
    And in general, how are you all deploying this tool to your users? Our users are accustomed to a pre-built data model where they can just click to add the fields they want and hit submit. How do I get close to that ideal with this tool?
    Thanks for any help/advice.

    I answered my own question. In the case of the large data model, the tool by default was attempting to calculate every possible join path to get from Table A to Table B (even though there is a direct join between them).
    In the data model options, I changed the join setting to use the join path with the least number of topics. This skipped the extraneous steps and allowed me to proceed as normal.
    Hope this helps anyone else who bumps into this issue.

  • Best practice for BI-BO 4.0 data model

    Dear all,
    we are planning to upgrade BOXI 3.1 to BO 4.0 next year and would like to know whether best practices exist for the BI data model. We found some general BO 4.0 presentations, and it seems that enhancements and changes have been implemented; our goal is to better understand which BI data model best fits the BO 4.0 solution.
    Have you found any documentation or links to BI-BO 4.0 best practices to share?
    Thanks in advance

    Have a look at this document:
    http://www.sdn.sap.com/irj/sdn/index?rid=/library/uuid/f06ab3a6-05e6-2c10-7e91-e62d6505e4ef#rating
    Regards
    Aban

  • SDO_PC, multiple SRIDs - best practice for data model?

    Hi,
    I am using UTM and am getting data covering two zones.
    All my existing data is from zone A.
    Tables:
    pointcloud
    pointcloud_blk
    Now I am getting data with very few points from zone A and most points from zone B. It was agreed that the data delivery will be in the SRID for zone B.
    So I tested whether this would work. I had two point clouds, one with SRID A and another with SRID B. As soon as I put the SRID B point cloud in, I could NO LONGER QUERY the point cloud with SRID A.
    So it seems to be necessary to use at least another pointcloud_blk table, e.g. pointcloud_blk_[srid].
    Question: does one pointcloud_blk per SRID suffice, or do I also need a pointcloud table per SRID? The pointcloud table seems interesting only because of its EXTENT column, but on the other hand that could be queried by a function, since there are only 10 or so records (point clouds) in it.
    Please share your best practices: what works and what does not.

    It is necessary to have one pointcloud_blk table for each SRID since there is a spatial index on that table.
    As for the PointCloud table itself, it is up to you. You can have pointclouds with different SRIDs in that table.
    But if you want to create a spatial index on it, you have to use a function-based index so that the index sees one SRID for the table.
    Since this table usually does not have many rows, this should work fine with one table for different SRIDs.
    siva
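    As a rough illustration of that layout (just a sketch, not an Oracle API; the class, the naming convention, and the SRID values below are made up):

    import java.util.HashMap;
    import java.util.Map;

    // Illustrative registry for the "one pointcloud_blk table per SRID" layout discussed
    // above. The single pointcloud table keeps point clouds in mixed SRIDs; a spatial
    // index on it would then need a function-based index so the index sees one SRID.
    public class PointCloudBlockTables {
        private final Map<Integer, String> blkTableBySrid = new HashMap<>();

        // Resolve (or register) the blocks table for a given SRID,
        // following the pointcloud_blk_[srid] naming idea from the question.
        public String blockTableFor(int srid) {
            return blkTableBySrid.computeIfAbsent(srid, s -> "POINTCLOUD_BLK_" + s);
        }

        public static void main(String[] args) {
            PointCloudBlockTables tables = new PointCloudBlockTables();
            System.out.println(tables.blockTableFor(32632)); // zone A (SRID made up) -> POINTCLOUD_BLK_32632
            System.out.println(tables.blockTableFor(32633)); // zone B (SRID made up) -> POINTCLOUD_BLK_32633
        }
    }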

  • Best practices: Subversion and Data Modeler

    Hello, I'm looking for some best practices regarding Subversion and Data Modeler.
    A team of 10 analysts creates several releases of our product over time.
    Within one release you'll find several change requests.
    The application itself contains about 700 tables, so performance is important.
    I want to establish a lean working method where analysts can focus on their job: design.
    So far my idea is to create one trunk containing the DB model; let's call it v17.00.
    An analyst could create their designs in separate projects grouped by change request, e.g. CR1234.
    When development starts I would compare the trunk model with their change request to generate the alter script.
    Afterwards I would import their design (CR1234) into the trunk.
    Note: it's possible that a change request gets cancelled; that's why I opt for a design per change request.
    This way of working seems much leaner than setting up branches and merging.
    My opinion, being a novice Subversion user, is that setting up branches and merging is "more complex" and might cause frustration for designers.
    Does anyone have a similar setup or advice?
    kr
    chris

    Hi Sam,
    Let me add my two cents here, when speaking about MAN deployments the name of the game is MPLS, so I guess you are using the same on your Cat 6500s and connecting your customers on 3550s using Vlans.
    Regarding your questions:
    a) Upgrading Ethernet to L3 for traffic shaping: this is basically done at the 3550, so I suppose that's what you intend to do. Also, you will be letting spokes talk only to the hub site, so inter-VLAN routing, at least between the hub and each spoke, will be required. The other way is to configure P2P circuits between the hub site with VLAN mapping (per spoke) and the spoke sites with port mapping; in this scenario inter-VLAN routing is not a necessity.
    b) Security: this depends on the exact architecture you have deployed. In my case I have simply installed a gateway router with BGP peering to the PEs; a separate VRF along with redistribution does the trick.
    Hope I addressed the query correctly; let me know if that helped.
    Cheers
    ~sultan

  • Best practice in SAP BW master data management and transport

    Hi SAP BW gurus,
    I would like to know the best practice for SAP BW master data transport. For example, if I updated my attributes in development, which 'required only' BW objects should I transport?
    Appreciate any advice.
    Thank you,
    Eric

    Hi Vishnu,
    Thanks for the reply, but that answer would be more suitable if I were implementing a new BW system. What I'm looking for is more about daily operational maintenance and transport (a BW system that has been live for a while).
    Regards,
    Eric

  • Best practice on extending the SIEBEL data model

    Can anyone point me to a reference document, or provide from their experience, a simple best practice on extending the Siebel data model for business-unique data? Basically I am looking for some simple rules - based on either use-case characteristics (need to sort and filter by, need to update frequently, ...) or data characteristics (transient, changes frequently, ...) - to tell me whether I should extend the tables, leverage the 'x' tables, or do something else.
    Preferably they would be prescriptive and tell me the limits of the different options from a use perspective.
    Thanks

    Accepting the given that Siebel's vanilla data model will always work best, here are some things to keep in mind if you need to add something to meet a process that the business is unwilling to adapt:
    1) Avoid re-using existing business component fields and table columns that you don't need for their original purpose. This is a dangerous practice that is likely to haunt you at upgrade time, or (worse yet) might be linked to some mysterious out-of-the-box automation that you don't know about because it is hidden in class-specific user properties.
    2) Be aware that X tables add a join to your queries, so if you are mapping one business component field to ATTRIB_01 and adding it to your list applets, you are potentially putting an unnecessary load on your database. X tables are best used for fields that are going to be displayed in only one or two places, so the join would not normally be included in your queries.
    3) Always use a prefix (usually X_ ) to denote extension columns when you do create them.
    4) Don't forget to map EIM extensions to the extension columns you create. You do not want to have to go through a schema change and release cycle just because the business wants you to import some data to your extension column.
    5) Consider whether you need a conversion to populate the new column in existing database records, especially if you are configuring a default value in your extension column.
    6) During upgrades, take the time to re-evaluate your need for the extension column, taking into account the inevitable enhancements to the vanilla data model. For example, you may find, as we did, that the new version of the S_ADDR_ORG table had an ADDR_LINE_3 column, and our X_ADDR_ADDR3 column was no longer necessary. (Of course, re-configuring all your business components to use the new vanilla column can also be quite an ordeal.)
    Good luck!
    Jim

  • OBIEE Best Practice Data Model/Repository Design for Objectives/Targets

    Hello World!
    We are faced with a design question that has become somewhat difficult and we need some help. We want to be able to compare side-by-side actual measures with their corresponding objectives/targets. Sounds simple. But, our objectives are static (not able to be aggregated) with multi-dimensionality and multi-levels. We need some best practice tips on how to design our data model and repository properly so that we can see the objective/target for a measure regardless of the dimensions that are used in the criteria and regardless of the level.
    Here are some more details:
    Example of existing objective table:

    Dimension1 | Dimension2 | Dimension3 | Obj1 | Obj2 | Quarter
    NULL       | NULL       | NULL       | .99  | 1.8  | 1Q13
    DIM1VAL1   | NULL       | NULL       | .99  | 2.4  | 1Q13
    DIM1VAL1   | DIM2VAL1   | NULL       | .98  | 2.41 | 1Q13
    DIM1VAL1   | DIM2VAL1   | DIM3VAL1   | .97  | 2.3  | 1Q13
    DIM1VAL1   | NULL       | DIM3VAL1   | .96  | 1.9  | 1Q13
    NULL       | DIM2VAL1   | NULL       | .97  | 2.2  | 1Q13
    NULL       | DIM2VAL1   | DIM3VAL1   | .95  | 2.0  | 1Q13
    NULL       | NULL       | DIM3VAL1   | .94  | 3.1  | 1Q13
    - Right now we have quarterly objectives set using 3 different dimensions. So, if an author were to add one or more (or zero) dimensions to their criteria for a given measure, they could get back a different objective. They could add Dimension1 and get 99%. They could add Dimension1 and Dimension2 and get 98%. They could add all three dimensions and get 97%. They could add zero dimensions (highest grain) and get 99%. Using our existing structure, if we were to add a new dimension to the mix, the possible combinations would grow dramatically. (Not flexible.) A small sketch of this lookup behavior is included at the end of this post.
    - We would like our final solution to be flexible enough so that we could view objectives with altogether different dimensions and possibly get different objectives.
    - We currently have 3 fact tables with 3+ conformed dimension tables and a few unique dimension tables.
    Could anyone share a similar situation where you have implemented a data model structure with the proper repository joins to handle showing side-by-side objectives/targets where the objectives were static and could be displayed at differing levels with flexible dimensions as described?
    Any help would be greatly appreciated.
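    As a rough, non-OBIEE illustration of the grain matching described above (just a sketch; the class and record layout are made up, and the sample rows simply mirror the first rows of the table above):

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class ObjectiveLookup {
        // One row of the objective table: an absent dimension key plays the role of NULL,
        // i.e. "not constrained by this dimension".
        record ObjectiveRow(Map<String, String> dims, double obj1, double obj2, String quarter) {}

        // Return the row whose non-NULL dimensions are exactly the dimensions used in the criteria.
        static ObjectiveRow findObjective(List<ObjectiveRow> rows,
                                          Map<String, String> criteria, String quarter) {
            for (ObjectiveRow row : rows) {
                if (row.quarter().equals(quarter) && row.dims().equals(criteria)) {
                    return row;
                }
            }
            return null;  // no objective defined at this grain
        }

        public static void main(String[] args) {
            List<ObjectiveRow> rows = List.of(
                new ObjectiveRow(Map.of(), 0.99, 1.8, "1Q13"),                          // zero dimensions
                new ObjectiveRow(Map.of("Dimension1", "DIM1VAL1"), 0.99, 2.4, "1Q13"),
                new ObjectiveRow(Map.of("Dimension1", "DIM1VAL1",
                                        "Dimension2", "DIM2VAL1"), 0.98, 2.41, "1Q13"));

            Map<String, String> criteria = new HashMap<>();
            criteria.put("Dimension1", "DIM1VAL1");
            criteria.put("Dimension2", "DIM2VAL1");
            System.out.println(findObjective(rows, criteria, "1Q13").obj1());  // prints 0.98
        }
    }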

    Hi, yes, this suggestion is nice. First configure the sensors (activity or variable), then configure the sensor action as a JMS topic, which will in turn insert the data into a DB. Alternatively, when you configure the sensor action as a DB, the data goes to the Oracle Reports schema. Is there any chance of altering the DB, i.e. of changing the config files so that the data does not go to that Reports schema and instead goes to a custom schema created by a user? I don't know if it can be done. My problem is that when I configure the JMS topic for sensor actions, I see blank data coming through; for some reason or other the data is not getting posted. I have used an ESB with a routing service based on the schema which I am monitoring. Can anyone help?

  • Best book or online education on SQL Developer Data Modeler 3.0

    Hi,
    I don't see any OBE (Oracle By Example) tutorials for SQL Developer Data Modeler the way we have for SQL Developer. Would one of you please suggest the best book that explains everything about the Data Modeler tool, or any other online tutorial for that matter? I am new to this and have been asked to work extensively on ER (logical), relational, and physical models right away; I will have to start right from the ER/logical model and work forward, as well as do reverse and forward engineering. Thanks for the help, friends.

    Did you look at the Data Modeler web page?
    There are some online demos and documentation there.
    Regards,
    Ivan Zahariev

  • Best practice - creating functions in data model vs. RTF template

    Just a general question. Is there a best practice of creating functions in the data model vs. creating the functions in the data template?
    For example, is it more efficient to sum two fields in my SQL query or to create a function in the template that sums the two fields? Just curious if there is any performance benefit of one over the other.
    Thanks!

    Anything you push down to the DB (SQL) will be faster than processing it outside.

  • Best approach for Data Modelling.

    Hello Experts
    I am building a Customer Scorecard involving SD and Marketing in BI 7.0.
    There are a couple of existing DSOs, some pushing data into InfoCubes and some not. All the reporting is happening from a MultiProvider sitting on top of these data targets.
    The team has a primitive design which says that additional DSOs should be created to extract data from the above-mentioned DSOs, based only on the objects that are needed for Customer Scorecard reporting.
    This means I am creating a couple of DSOs as per the current design that is in place.
    When I suggested only creating a Customer Scorecard MultiProvider on top of the already existing data targets (avoiding recreating additional DSOs and the hassle of loading and activating them and then loading the data into InfoCubes) and then creating the BEx queries on top of that, the lead expressed his concerns about the impact it could have on the existing data model and the subsequent transports once the model is complete.
    What is the best practice to handle a situation like this? I see there are 3 ways to go ahead with this:
    1. Do as the lead said: create additional DSOs extracting data from the required existing DSOs, push this data into one InfoCube, and then create a MultiProvider on top of this (be aware that there is another similar data model I need to create which will also be embedded into this MultiProvider) and create BEx reports from there.
    2. Create only the InfoCubes, which will extract data from the already existing DSOs (avoiding creation of additional DSOs), and then create a MultiProvider from which BEx reports are created.
    3. Only create a MultiProvider on top of all the required, already existing DSOs and InfoCubes, checking whether reporting needs aggregated data or not, and then create BEx reports from there (avoiding creation of additional DSOs and InfoCubes).
    Note: We use Rev-Track to do the Transports.
    Which one do you think would be the best way to go and what could be the implications? Eventually, the reporting is done in WAD.
    Thanks for your time in advance.
    Cheers,
    Chandu

    Hi,
    Cases 1 and 2 are similar, but it purely depends on user needs.
    I think you already know the difference between a DSO and a cube:
    DSO - holds detailed-level data
    Cube - holds aggregated data
    As per your needs, use only one target; there is no need to use a DSO ---> cube flow for the existing flows.
    You can decide whether you want a DSO only or a cube only.
    Case 3: if your requirement can be met with the existing DSOs, and at reporting level you can manage to get the required output, then you can go with it. But my guess is that with the existing targets your requirement won't be met.
    About transports:
    You can create one Rev-Track request and assign multiple transports to it.
    You can add and release the transports one by one rather than all at once.
    If you release them all at once you may get inconsistency issues and the transport requests won't be released.
    Thanks

  • What's the best way to model the following requirements in a data model?

    I need a data model to represent the following scenario, but I just can't get my head around it.
    Class A has many Class B's. Class B's can have many Class C's and vice versa. So ultimately the relationship for the above 3 classes is as follows:
    Class A -- (1 : M) --> Class B --- (M : M) ---> Class C
    And Class C must know about its parent reference (Class B), and the same applies to Class B, which needs to know who owns it (Class A).
    What's the best way to construct this? A tree? A graph?
    Or just simply:
    Class A has a list of Class B, and Class B has a list of Class C.
    But wouldn't this create tight dependencies?

    Theresonly1 wrote:
    Basically:
    A's own multiple B's (B's need to know who owns them) BUT a B can only be owned by one A
    B's own multiple C's AND C's can be owned by multiple B's (this is a many-to-many relationship, right?) - again, C's need to know who owns them

    I'd reckon that you'd need some references. First, figure out the names of each tier of class. I would say maybe A is a principal/school, B's are teachers (because typically teachers only teach under one principal / in one school), and C's are students (because teachers can have multiple students, and each student has multiple teachers). So now that you have the names, make some base classes. If I understand your problem correctly, A's really don't need to know whom they own, but B's need to know who owns them, so A's don't need to have much of anything. B's have a reference to the A that owns them, but not much else, because they don't need to know whom they own. C's own nothing, but they are owned by multiple B's, so they have an array of references to each of the B's that own them. I'd use an ArrayList, considering each could have a different number of B's, but you could do it with an array if you tried. I'll leave it up to you how you implement everything, but here are some guides to how I might do it:
    // Each public class would go in its own .java file.
    public class Principal {
    }

    public class Teacher {
        public Principal owner;               // a Teacher is owned by exactly one Principal
        public Teacher(Principal owner) {
            this.owner = owner;
        }
    }

    public class Student {
        public Teacher[] owners;              // a Student can be owned by several Teachers
        public Student(Teacher... owner) {    // varargs arrive as an array
            owners = owner;
        }
        public void addOwner(Teacher newOwner) {
            // copy the old array into a temporary one with an extra spot,
            // add newOwner to the end of that, then make owners reference that array
            Teacher[] tmp = new Teacher[owners.length + 1];
            System.arraycopy(owners, 0, tmp, 0, owners.length);
            tmp[owners.length] = newOwner;
            owners = tmp;
        }
    }
    In Student, that is how you allow an undetermined number of parameters to be used (varargs), and they do come in as an array. I hope this helps you!
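    A quick usage sketch of the classes above (illustrative only; it assumes the three classes compile as given):

    public class OwnershipDemo {
        public static void main(String[] args) {
            Principal principal = new Principal();
            Teacher t1 = new Teacher(principal);    // each teacher knows its principal
            Teacher t2 = new Teacher(principal);
            Student s = new Student(t1);            // a student starts with one owner
            s.addOwner(t2);                         // and can gain more later
            System.out.println(s.owners.length);                 // prints 2
            System.out.println(s.owners[0].owner == principal);  // prints true
        }
    }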

  • Best FREE data modeling tool

    What is the best FREE data modeling tool?  Thanks.
    Kalman Toth Database & OLAP Architect
    SQL Server 2014 Database Design
    New Book / Kindle: Beginner Database Design & SQL Programming Using Microsoft SQL Server 2014

    Hi Kalman,
    According to my knowledge, Microsoft Office Visio is helpful for building data models. For more information, please review this article:
    Create a Database Model (also known as an Entity Relationship diagram).
    Besides, you can also use third-party tools such as
    Erwin or SQL Power Architect to design SQL Server databases. However, Microsoft cannot make any representations regarding the quality, safety, or suitability of any third-party software or information.
    There is also a discussion about free data modeling tool in the following thread for your reference.
    https://social.msdn.microsoft.com/Forums/en-US/b70d2cdb-dc7f-4e89-a0ae-9dbf5687199e/free-data-modelling-tool?forum=databasedesign
    Thanks,
    Lydia Zhang

  • BCS Data Model (which one is the best)

    Hi,
    We are implementing SEM-BCS 6.0 with SAP BI 7.0.
    We have 2 possibilities for a data model and we need help to decide which one is the best.
    The first possibility is:
    1)       SAP ECC 6.0:
    a.      Define 2 (two) charts of accounts (company chart of accounts and consolidation chart of accounts) that are mapped/linked in transaction FSP0;
    b.      Define Balance and an Income Statement hierarchy for the group/consolidation chart of accounts;
    2)       BW (BI 7.0)
    a.      In BW we also have both charts of accounts, with a Balance and an Income Statement hierarchy for the group/consolidation chart of accounts. These were replicated from the source system SAP ECC;
    3)       SEM-BCS 6.0
    a.      In BCS we update the Financial Statement Items automatically based on (equal to) the consolidation chart of accounts (and not based on the Balance and Income Statement);
    b.      For reporting purposes, we will define BW workbooks, based on the group Balance and Income Statement, for consolidation reporting. In summary, we will have the Balance and Income Statement hierarchies for the group/consolidation chart of accounts, created in SAP ECC and replicated to BW, in order to report consolidated data.
    The second possibility:
    1)       SAP ECC 6.0:
    In R/3 we only have the company chart of accounts. For consolidation purposes we will define Balance and Income Statement item hierarchies, one for legal reporting and a second one for management reporting, in which the superior nodes (items) are aggregations of the G/L accounts (nodes below).
    2)       BW (BI 7.0)
    In BW we will replicate the Balance and Income Statement that was created in SAP ECC.
    3)       SEM-BCS 6.0
    In BCS we would have to:
    i.      Automatically create the Financial Statement Items based on the items of the Balance and Income Statement hierarchies, which would be a concatenation of the two item hierarchies.
    Or
    ii.      Create 2 Financial Statement Items, where each one would depend on either the Balance or the Income Statement hierarchy.
    Which of the possibilities is best?
    Having two charts of accounts (company and consolidation), or having just one chart of accounts (company) plus a consolidated balance item hierarchy, and creating the financial statement items in BCS based on that hierarchy rather than on a consolidation chart of accounts?
    In our opinion the first scenario is the best one, because it seems more flexible, and we intend to use SEM-BPS next year for budgeting purposes. Furthermore, we think that with the first scenario we can have an FS chart of accounts (in BCS) that does not depend on reporting structures (based on the Balance and Income Statements).
    Is this interpretation correct?
    Thanks

    Hi Ricardo,
    Yes, the first one is the best choice and is the way to go.
    SAP has provided some how-to documents, which are available in the Service Marketplace. These documents suggest almost the same path as your first choice.
    Thanks

  • Transferring data between two tables - Best practice advice

    Hi!
    I need advice on best practice since I am new to ABAP thinking.
    I have two tables. I am going to transfer data from table1 and update the corresponding master data table with the data in table1.
    What is the best way of doing this? The maximum amount of data to be transferred is 300,000 rows in table1.
    I can only think of one, simple, way, which is to read all the rows into an internal table and then update all the rows in the master data table.
    Is there a better way of doing this?
    thanks in advance,
    regards
    Baran

    Hi!
    1. The update will be done a couple of times per week.
    2. Yes, the fields are the same.
    3. Both tables are SAP dictionary tables. One is a staging table and the other is a master data table. Our problem is that we want to add a custom field to a standard master data table. We add an extra field to the staging table and the same field to the corresponding master data table, but the standard API does not support transferring data between custom fields, so we are developing our own code to do this.
    After the standard code has transferred the standard fields from the staging table to the master data table, we are going to transfer our field by updating all the rows in the standard table (a rough sketch of this kind of update is shown after this post).
    thanks
    regards
    Baran
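    The update described above would be written in ABAP in the real system; purely as a language-neutral sketch of the same idea (copy the custom field from the staging table into the matching master data rows), here is a JDBC version. All table and column names (ZSTAGING, ZMASTER, OBJ_KEY, ZZCUSTOM), the connection URL, and the batch size are hypothetical:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class CustomFieldTransfer {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details; replace with the real database.
            try (Connection con = DriverManager.getConnection("jdbc:somedb://host/db", "user", "pw")) {
                con.setAutoCommit(false);
                try (PreparedStatement read = con.prepareStatement(
                         "SELECT OBJ_KEY, ZZCUSTOM FROM ZSTAGING");              // read key + custom field from staging
                     PreparedStatement write = con.prepareStatement(
                         "UPDATE ZMASTER SET ZZCUSTOM = ? WHERE OBJ_KEY = ?")) { // update the matching master row
                    ResultSet rs = read.executeQuery();
                    int pending = 0;
                    while (rs.next()) {
                        write.setString(1, rs.getString("ZZCUSTOM"));
                        write.setString(2, rs.getString("OBJ_KEY"));
                        write.addBatch();
                        if (++pending % 10000 == 0) {   // flush in chunks so memory stays bounded
                            write.executeBatch();
                            con.commit();
                        }
                    }
                    write.executeBatch();               // flush the remainder
                    con.commit();
                }
            }
        }
    }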
