BW data modelling scenario

Hi BW gurus,
I would like to know how we decide whether a given piece of data should be a key figure or a characteristic in a dimension table?
Regards,
Tushar.

Hi Tushar,
Key figures are the facts: usually the values, amounts, or quantities you are trying to measure. Characteristics hold the unique values of the business entities that define a fact. Depending on what your data represents, you will need to make this decision. See this document for full details:
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/6ce7b0a4-0b01-0010-52ac-a6e813c35a84
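To make the distinction concrete, here is a small sketch (hypothetical field names, not actual BW InfoObjects): the numeric measures are the key figures, and the descriptive keys you slice them by are the characteristics.

```java
import java.util.List;

public class StarSchemaSketch {
    // Characteristics (customer, material, calDay) describe the business
    // entities that define a fact; key figures (quantity, revenue) are the
    // values you measure and aggregate.
    record SalesFact(String customer, String material, String calDay,
                     double quantity, double revenue) {}

    // Queries sum up key figures; characteristics are what you filter
    // and group by.
    static double totalRevenue(List<SalesFact> facts) {
        return facts.stream().mapToDouble(SalesFact::revenue).sum();
    }

    public static void main(String[] args) {
        List<SalesFact> facts = List.of(
            new SalesFact("C100", "M-42", "2009-05-20", 10, 250.0),
            new SalesFact("C100", "M-42", "2009-05-21", 4, 100.0));
        System.out.println(totalRevenue(facts)); // prints 350.0
    }
}
```

If a field is something you would ever sum, average, or count in a report, it leans key figure; if you would filter or group by it, it leans characteristic.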
Hope this helps...

Similar Messages

  • Simple data modelling scenario

    I have Org dimension A (Grand Total -> Primary -> Dept -> Detail) and Fact B (connected to A via the Detail column, A.Detail = B.Detail). There's a column MoreDetail in B that needs to be added to Org A. This shouldn't be a problem with a simple join in an opaque view/select based on Detail = Detail. The only problem is that the updated dimension would have MoreDetail as the lowest grain. Is this possible w/o heavy reorganization? Thanks

    1) change keys for fact table and dim table (if using details)
    2) add moredetail to the hierarchy
    3) set the level to moredetail wherever the level was detail.
    If you are concerned about changes in reports, you can use a small trick: rename details as lessdetails and moredetails as details. (Be wary of the aliases created in the presentation layer.)

  • Oracle BI Publisher 11g - Known Data Model Limitations

    Hi
    I'm new to BI Publisher 11g. I'm attempting to find out what the known limitations of BI Publisher 11g are.
    In particular, I would like to know the maximum number of queries allowed per data model or if a report may be based on multiple data models.
    Scenario:
    My organization requires an electronic reporting book (PDF document) effectively consisting of 40-50 data tables and graphs. Someone has sold IT and the business on the idea that BI Publisher is the tool that delivers this type of functionality. Having used the tool for the last couple of days, it's starting to look like that's not the case.
    How do I achieve my ultimate goal of combining disparate report output into 1 huge document?
    Do I develop each reporting section as a separate data model/report pairs outputting XML files and then combine all XML files into 1 target document?
    If anyone can please give me some advice on how to go about this, it would be greatly appreciated.
    Thanks
    Greg

    This is related to the browser; try it in some other browser, or else check your BI Publisher settings and data source configurations. Mostly this will happen if you changed something in the BLAF plus rich CSS.

  • Using CVS in SQL Developer for Data Modeler changes.

    Hi,
    I am fairly new to SQL Developer Data Modeler and associated version control mechanisms.
    I am prototyping the storage of database designs and version control for the same, using the Data Modeler within SQL Developer. I have SQL Developer version 3.1.07.42 and I have also installed the CVS extension.
    I can connect to our CVS server through sspi protocol and external CVS executable and am able to check out modules.
    Below is the scenario where I am facing some issue:
    I open the design from the checked out module and make changes and save it. In the File navigator, I look for the files that have been modified or added newly.
    This behaves rather inconsistently, in the sense that even after clicking the refresh button it sometimes does not get refreshed. Next I try to look for the changes in the Pending Changes (CVS) window. According to other posts, I am supposed to look at the View - Data Modeler - Pending Changes window for data modeler changes, but that always shows up empty (I am not sure if it is only tied to Subversion). I do, however, see the modified files/files to be added to CVS under the Versioning - CVS - Pending Changes window. The issue is that when I click the refresh button in that window, all the files just vanish and all the counts show 0. Strangely, if I go to Tools - Preferences - Versioning - CVS and just click OK, the Pending Changes window gets populated again (the counts are inconsistent at times).
    I believe this issue is fixed and should work correctly in 3.1.07.42 but it does not seem to be case.
    Also, I am not sure if I can use the CVS functionality available in SQL Developer for the data modeler, or whether I should be using an external client such as WinCvs for check-in/check-out.
    Please help.
    Thanks

    Hi Joop,
    I think you will find that in Data Modeler's Physical Model tree the same icons are used for temporary Tables and Materialized Views as in SQL Developer.
    David

  • Balanced Scorecard and its corresponding BW Data Model?

    Hi...
    I am doing a Strategy Management scenario in SAP SEM and have developed measures. Next, when creating the BW data model, I am facing a problem in identifying the key figures that I should be using in the cube. I am providing the list of measures; please tell me the base key figures that I should include in the cube. It will be of great help.
    Measures:
    Economic Profit, Capital Charge Method
    Net Operating Profit after Tax
    Earnings Before Interest and Tax
    Earnings Before Interest, Taxes, Depreciation & Amortization
    Shareholder Value Added
    Return on Equity
    Return on Assets
    Internal Rate of Return
    Present Value of Investment Projects
    Annual Net Income (Balance Sheet)
    Asset Turns
    Present Value of Investment Projects
    Net Sales
    Sales Growth
    No. of Projects Targeted for Sales Growth
    Net Sales from New Customers
    Net Sales from New Products
    Sales Growth from New Products
    Sales Share of New Products
    Sales Share of New Customers
    Share of Customers' Total Expenditure
    Degree of Overlap Own Prod. List  / Competition
    Share of the Addressed Customer Segments
    Share of Customers' Total Expenditure
    Sales Growth from New Products
    Sales Share of New Customers
    Number of Newly Developed Applications
    Share of R&D Expense for Applied Research
    Share of R&D Expense for Basic Research
    Number of Development Teams with Customer Involvement
    Share of Customers with Partnership
    Number of Development Teams with Customer Involvement
    Average Length of Service
    Training Investment Per FTE
    Training Staffing Factor
    Annual Investment in Technologies
    Average Length of Service
    Employer-Initiated Turnover Rate
    Employee-Initiated Turnover Rate
    % Employees with Remuneration Related to Goal Achievement

    Samik,
    The measures you have given can be classified as:
    Finance
    Economic Profit, Capital Charge Method
    Net Operating Profit after Tax
    Earnings Before Interest and Tax
    Earnings Before Interest, Taxes, Depreciation & Amortization
    Shareholder Value Added
    Return on Equity
    Return on Assets
    Internal Rate of Return
    Present Value of Investment Projects
    Annual Net Income (Balance Sheet)
    Asset Turns
    Present Value of Investment Projects
    Net Sales
    Sales Growth
    Customer service
    No. of Projects Targeted for Sales Growth
    Net Sales from New Customers
    Net Sales from New Products
    Sales Growth from New Products
    Sales Share of New Products
    Sales Share of New Customers
    Share of Customers' Total Expenditure
    Degree of Overlap Own Prod. List / Competition
    Share of the Addressed Customer Segments
    Share of Customers' Total Expenditure
    Sales Growth from New Products
    Sales Share of New Customers
    Business Process
    Number of Newly Developed Applications
    Share of R&D Expense for Applied Research
    Share of R&D Expense for Basic Research
    Number of Development Teams with Customer Involvement
    Share of Customers with Partnership
    Number of Development Teams with Customer Involvement
    Internal learning
    Average Length of Service
    Training Investment Per FTE
    Training Staffing Factor
    Annual Investment in Technologies
    Average Length of Service
    Employer-Initiated Turnover Rate
    Employee-Initiated Turnover Rate
    % Employees with Remuneration Related to Goal Achievement
    Accordingly, look for the base key figures for the same. I would not put a finger on many, but essentially for Finance look at FI data, especially GL postings, cost center postings, AR/AP, etc.
    For Customer Service, CRM data would help, along with Sales data from SD: invoices, orders, etc.
    Internal learning - HR
    Business Processes - depends on the system
    Also, the key figures depend on the business rather than on a generic business model that would fit all. First you will have to study the business process and understand the business rules and operating conditions before proceeding.
    Arun
    Assign points if helpful

  • Experiences on first usage of Oracle SQL Developer Data Modeling

    Hi @ll,
    Having worked with Quest Toad Data Modeler 2.25 for over a year, I'm searching for a replacement with the ability to create ALTER TABLE... statements. Today I downloaded the standalone version and tried to compare my local database against our development server.
    Our usage scenario is that the development database can be changed by each developer. We have a trigger activated for monitoring (and logging) database changes. Currently the tested and released changes will be merged into the Toad data model manually by me.
    Using the Toad model, we create DDL scripts for the SAP supported databases: Oracle, MaxDB, MS-SQL and DB2.
    I'd like to facilitate this process.
    h3. Test 1
    1. Import the current model from the development database (User A), save it as XML
    2. Import the current model from my local database (User B), save it as XML
    3. Compare the XML models
    h4. Results
    a) each table is displayed as modified, although no difference is displayed in any column
    b) in the DDL, source tables are renamed with a "bcp_" prefix
    c) for NVARCHAR2, data types are changed (length from 756 to 510, 1800 to 1200)
    h3. Test 2
    1. Import the current model from the development database (User A), save it as XML
    2. Compare the imported model with the XML
    h4. Results 2
    a) The field attribute "Mandatory" changed from "true" to "false"
    h3. Test 3
    1. Import from database
    2. Compare against an XML schema
    h4. Results 3
    a) Comparing shows the modified tables only - this works nearly as expected
    b) still different lengths for datatype NVARCHAR2
    h3. Wish List
    a) the Open File dialog always opens in "My Files", better would be the last opened directory
    b) While starting the comparing, allow changing "source" and "target"
    c) Allow comparing two database schemas
    d) Support for MaxDB?
    Overall, this new tool looks very promising and I'm looking forward to testing the next versions ;-)

    Hi Christian,
    thanks for trying Oracle Data Modeling.
    EAR2 is released and the NVarchar2 problem is fixed there. Some problems related to your area of interest (applying differences to the database) are also fixed.
    On your observations:
    1) each table is displayed as modified, although no difference is displayed in any column - the column ordering could have changed, or some of the table properties; you can check this by clicking on the table node and looking at the details tab for more information
    2) in the DDL, source tables are renamed with a "bcp_" prefix - this is the typical rename, create, copy pattern for applying changes that require restructuring of a table and intermediate preservation of content. If it's not enough, you can try the "Advanced DDL" option, which gives more control over the whole process: a self-controlled script with logging, restarting, an execution window, and error masking. You have the option to unload a table into the file system and load it back after the original table is recreated (LOB columns are not supported - we can add support if there is demand for that). A transformation function can be defined for columns with a changed data type. There are a few words about that in the "Data Modeling overview" document (p.15) - http://www.oracle.com/technology/products/database/sql_developer/pdf/sqldeveloperdatamodelingoverview.pdf
    Regards,
    Philip
    Edited by: Philip Stoyanov on Nov 26, 2008 1:40 PM

  • Oracle SQL Data Modeler -COMPARE/MERGE

    Hi all,
    I am trying to compare/merge or just trying to merge a relation model with another in Oracle SQL Developer DATA MODELER.
    Scenario:
    I have imported the HR schema from the data dictionary into 3 parts.
    Table employee is alone imported in one design model's relational model(say DataModelerDesign1- DMD1).
    Table Departments,Locations and Countries in another design model's relational model(say DMD2).
    Table Job_history,Jobs and Regions in another design model's relational model(say DMD3).
    Now, I tried to merge all this into one design model's relational model, here into DMD3.
    Requirement:
    I want all of these relational models' tables to get merged with exact mappings, all connected to each other as they are in the HR schema. But they are all getting merged as separate entities, not connected to each other, when taken from the Compare/Merge option. How should I do this task?
    Issues:
    1) I can never see anything in the compare model when I try to click on view compare mapping. Can we ever see any data here?
    2) In a real-world scenario, when would we try to merge a table into another or split it? Some foreign key violations are happening here. Is it possible to meet our requirement during the merge itself, instead of creating relationships between entities manually in the main relational model (DMD3 in the example here)?

    I have found on occasions the diagram pdf would be missing a few relationship lines. Usually it happens after I have been doing a lot of work in the tool or printing a bunch of diagrams. Seems to be a memory leak of some sort. If I close Data Modeler, re-open it, then print to PDF, the diagram is fine.

  • Data Model Extension

    Hi Techies,
    Could you please look at the below scenario and provide your valuable inputs.
    Scenario: I am trying to load the Cost Centers from ECC to MDG via transaction MDMGX. The CR is not being created, data is not being loaded to the MDG Cost Center tables, and the logs show the error message (Message type MDG_COST_CENTERS: error in XML transformation).
    Root cause identified: The issue was coming from the ZZWERKS field in the file extracted from the CSKS table. The import program is not able to map this field since it doesn't exist in the MDG data model.
    Confirmation: Tried loading a single cost center without the ZZWERKS field in the XML, and the data got loaded to the MDG tables via MDMGX.
    Solution: Extended the data model 0G with a new field ZZ_PLANT and activated the data model. Generated the respective structures for Cost Centers.
    Issue: Even though I extended the data model, when I try to load the CC data, it shows the same 'Error in XML Transformation' message, but this time the CR gets created and a very minimal amount of data (not all the data from ECC) is loaded to the MDG tables.
    Also, when checking the mapping of the new fields using the SMT tool, I couldn't create the mapping between the source and target structures, because the target structure doesn't show the newly added ZZWERKS field to map with, whereas the extended field ZZ_PLANT in the 0G model can be seen in the source structures.
    Could you please help if I am missing anything here?
    Thanks,
    Anusha

    Hi Shankar,
    Thanks for your response.
    In my scenario, my target structure is MDGF_COST_CTR_RPLCTN_REQ_COST. In ECC, the field ZZWERKS is already added, so the target structure wouldn't inherit the ECC structure while mapping. Do we have to add it separately? This target structure has proxy-generated internal structures for the Cost Center data. Could you please check and advise?
    Thanks,
    Anusha

  • Does xcelsius access data model in MS sql server 2000

    I have the following scenario: I need to create a dashboard which will be accessed through a Microsoft Office SharePoint server portal. This dashboard will be used for analysis of Agencies and Underwriters; they should be able to drill down to policy-level details.
    The data will originate from a Data Warehouse (Relational Schema) which resides on a Microsoft SQL server 2000 platform.
    Q) Does Xcelsius work off a relational model or does the data model need to be converted to a Dimensional Star schema?  Do you have to create Cubes?
    A)
    Q) It was mentioned in the book that Xcelsius can connect to a Microsoft SharePoint server using a web part; however, there is no information in the help menu on how to do that task.
    A)
    My thought) I was also thinking that the data model built on MS SQL Server 2000 could be used to create a Universe, which could then link to Live Office using Query as a Web Service (QAAWS) and be used to build the Xcelsius model. (Does one have to go this route, or can one access the data model created on SQL Server 2000 directly?)
    Any Feedback)

    I need to create a dashboard which will be accessed through a Microsoft Office SharePoint server portal. This dashboard will be used for analysis of Agencies and Underwriters; they should be able to drill down to policy level details.
    The data will originate from a Data Warehouse (Relational Schema) which resides on a Microsoft SQL server 2000 platform.
    Q) Does Xcelsius work off a relational model or does the data model need to be converted to a Dimensional Star schema? Do you have to create Cubes?
    A) Xcelsius works on an X/Y (column/row) table model, strictly based on the Excel structure. Though the underlying data model is a table, it is not inherently relational, as it does not impose any Boyce-Codd Normal Form requirements.
    Xcelsius works primarily with data embedded in an integrated Excel workbook. Data can be input to the workbook by a variety of methods, from importing a premade Excel workbook into the model, to retrieving data from flat files (.txt, .xml, etc.), to getting data from a connection such as a web service.
    There is no capability to connect directly, 1:1, to a database source and execute SQL queries. An intermediary is required to retrieve data from a database, such as a web service or an XML data connection.

  • Complex AM Data Model

    Hi there,
    I'm facing a situation that is very new for me, so I'd appreciate it if some of you could give me some tips; I've been trying for a couple of days now.
    Scenario:
    1 Entity;
    5 (just five) View Objects
    8 (yes, eight) View Links
    Here is the most complex data model that I have ever faced:
    MasterA-->Detail1-->Detail2-->Detail3-->Detail4
    +--->Detail4 ( because of FK not present in Detail3 )
    MasterB-->Detail2-->Detail3-->Detail4
    +----Detail4 (again FK not present in Detail3)
    +---Detail3-->Detail4 (again for the same reason )
    So MasterA and MasterB must control everything; just as an example, MasterA == Employee and MasterB == Current Period.
    Only Detail4 is editable and has an entity associated with it; it should have 1 line per period (MasterB) and per employee (MasterA).
    Testing the Application Module, everything seems to be fine.
    In my JSP + Struts application I have one JSP for setting the current row of MasterB, and MasterA is set at login time.
    Then I have another JSP page of which Detail2, Detail3 and Detail4 are part. The user is supposed to navigate Detail2 and Detail3 and edit or create Detail4.
    Why is Detail4 not affected when the user issues setCurrentRowWithKey on Detail3 or Detail2?
    Strangely, if I submit a create event on Detail4, all the FKs seem to be OK.

    Marcos,
    Can you verify whether the problem occurs if you use Immediate mode instead of the default Batch Mode?
    Here's something you can read about the two modes...
    http://www.oracle.com/technology/products/jdev/collateral/papers/10g/adftoystore/readme.html#batchmode

  • BW data model and impacts to HANA memory consumption

    Hi All,
    As I consider how to create BW models where HANA is the DB for a BW application, it makes sense to move the reporting target from Cubes to DSOs. The next logical progression of thought is that the DSO should store the lowest granularity of data (document level). So a consolidated data model that reports on cross-functional data would combine sales, inventory and purchasing data, all stored at document level. In this scenario:
    Will a single report execution that requires data from all 3 DSOs use more memory than with the 3 DSOs aggregated, say, at site/day/material? In other words, does lower granularity data mean higher memory consumption per report execution?
    I'm thinking that more memory is required to aggregate the data in HANA before sending it to BW. Is aggregation still necessary to manage execution memory usage?
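    As a rough illustration of that trade-off (made-up data, nothing HANA-specific), aggregating document-level rows up to the site/day/material grain collapses the number of rows a report execution has to touch:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class GranularitySketch {
    // One row per document line: the lowest granularity a DSO could store
    record DocLine(String site, String day, String material, String docNo, double qty) {}

    // Roll document-level rows up to the site/day/material grain
    static Map<List<String>, Double> aggregate(List<DocLine> lines) {
        return lines.stream().collect(Collectors.groupingBy(
            l -> List.of(l.site(), l.day(), l.material()),
            Collectors.summingDouble(DocLine::qty)));
    }

    public static void main(String[] args) {
        List<DocLine> docs = List.of(
            new DocLine("S1", "2013-01-01", "M1", "D001", 5),
            new DocLine("S1", "2013-01-01", "M1", "D002", 3),
            new DocLine("S1", "2013-01-02", "M1", "D003", 7));
        // 3 document rows collapse to 2 aggregate rows; a report at the
        // aggregated grain touches fewer rows per execution
        System.out.println(docs.size() + " rows -> " + aggregate(docs).size() + " rows");
    }
}
```

    The gap between document-level and aggregated row counts grows with real data volumes, which is exactly the memory question above.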
    Regards,
    Dae Jin

    Let  me rephrase.
    I got an EarlyWatch report that said the dimensions on one of my cubes were too big. I ran SAP_INFOCUBE_DESIGNS in SE38 in my development box and that confirmed it.
    So, I redesigned the cube, reactivated it and reloaded it.  I then ran SAP_INFOCUBE_DESIGNS again.  The cube doesn't even show up on it.  I suspect I have to trigger something in BW to make it populate for that cube.  How do I make that happen manually?
    Thanks.
    Dave

  • Appreciated if someone can share your good data modeling tips/experience!

    Note: we don't want to read SAP Help links, and we are tired of reading and understanding SAP Help links; just share your own experience of good data modelling tips to improve data load and query performance.
    We will give reward points and thanks in advance!

    One more (no giving away any more golden eggs)..
    Remember that BW validation is referential integrity and that R/3 is dynpro integrity
    Remember this when using compound info objects (ever wondered why some info objects that you would consider absolutely essential are not in content cubes?)
    Think about the following scenario..
    Does R/3 let you save a billing document without passing it through accounting?
    Can that process fail?
    Can it fail because the customer doesn't exist in Accounting?
    Now imagine you have put 0CUST_COMPC (Customer company code) into your Sales Info cube - and have validation switched ON for SID checking
    As soon as that SD billing document comes down the line - infopackage failure
    Next scenario..
    Can you save a purchase order with a blank material..?
    Then fancy putting 0MAT_PLANT on your purchasing cube?
    Even with validation switched off you are going to have a SID failure on upload (Plant filled - material initial - SID doesn't exist and never will do)

  • Need help with Data Model for Private Messaging

    Sad to say, but it looks like I just really screwed up the design of my Private Messaging (PM) module...  *sigh*
    What looked good on paper doesn't seem to be practical in application.
    I am hoping some of you Oracle gurus can help me come up with a better design!!
    Here is my current design...
    member -||-----0<- private_msg_recipient ->0------||- private_msg
    MEMBER table
    - id
    - email
    - username
    - first_name
    PRIVATE_MSG_RECIPIENT table
    - id
    - member_id_to
    - message_id
    - flag
    - created_on
    - updated_on
    - read_on
    - deleted_on
    - purged_on
    PRIVATE_MSG table
    - id
    - member_id_from
    - subject
    - body
    - flag
    - sent_on
    - updated_on
    - sender_deleted_on
    - sender_purged_on
    ***Short explanation of how the application currently works...
    - Sender creates a PM and sends it to a Recipient.
    - The PM appears in the Sender's "Sent" folder in my website
    - The PM also appears in the Recipient's "Incoming" folder.
    - If the Recipient deletes the PM, I set "deleted_on" and my code moves the PM from Recipient's "Inbox" to the "Trash" folder.  (Record doesn't actually move!)
    - If the Recipient "permanently deletes" the PM from his/her "Trash", I set "purged_on" and my code removes the PM from the Recipient's Message Center.  (Record still in database!)
    - If the Sender deletes the PM, I set "sender_deleted_on" and my code moves the PM from the Sender's "Sent" folder to the "Trash" folder.  (Record doesn't actually move!)
    - If the Sender "permanently deletes" the PM from his/her "Trash", I set "sender_purged_on" and my code removes the PM from the Sender's Message Center.  (Record still in database!)
    Here are my problems...
    1.) I can't store PM's forever.
    2.) Because of my design, the Sender really owns the PM, and if I add code to REMOVE the PM from the database once it has a "sender_purged_on" value, then that would in essence remove the PM from the Recipient's Inbox as well!!
    In order to remove a PM from the database, I would have to make sure that *both* the Recipient has a "purged_on" value and the Sender has a "sender_purged_on" value.  (Lots of application logic for something which should be simple?!)
    I am wondering if I need to change my Data Model to something that allows autonomy when it comes to the Sender and/or the Recipient deleting the PM for good...
    On the other hand, I believe I did a good job of normalizing the data.  And my current Data Model is the most efficient when it comes to saving storage space and not having dups.
    Maybe I do indeed just need to write application logic - or a cron job - which checks to make sure that *both* the Sender and Recipient have deleted the PM before it actually flushes it out of my database to free up space?!
    Of course, if one party sits on their PM's forever, then I can never clear things out of my database to free up space...
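    For what it's worth, that cron-style check could be sketched roughly like this (hypothetical names; each record stands for a message joined with its recipient row, carrying the "sender_purged_on" and "purged_on" timestamps from the two tables):

```java
import java.time.Instant;
import java.util.List;

public class PmCleanupSketch {
    // Joined projection of PRIVATE_MSG and PRIVATE_MSG_RECIPIENT:
    // only the two purge timestamps matter for cleanup
    record Pm(long id, Instant senderPurgedOn, Instant recipientPurgedOn) {}

    // A message may be physically deleted only once BOTH parties purged it
    static List<Long> deletableIds(List<Pm> msgs) {
        return msgs.stream()
            .filter(m -> m.senderPurgedOn() != null && m.recipientPurgedOn() != null)
            .map(Pm::id)
            .toList();
    }

    public static void main(String[] args) {
        Instant now = Instant.now();
        List<Pm> msgs = List.of(
            new Pm(1, now, now),    // both purged: safe to delete
            new Pm(2, now, null),   // recipient still has it: keep
            new Pm(3, null, null)); // nobody purged: keep
        System.out.println(deletableIds(msgs)); // prints [1]
    }
}
```

    The "one party sits on their PMs forever" problem can be handled by also treating very old messages as purged (e.g. older than some retention window), but that is a policy decision, not a schema one.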
    What should I do??
    Some expert advice would be welcome!!
    Sincerely,
    Debbie

    rp0428,
    I think I am starting to see my evil ways and where I went wrong... 
    > Unfortunately his design is just as denormalized as yours
    I see that now.  My bad!!
    > the last two columns have NOTHING to do with the message itself so do NOT belong in a normalized table.
    > And his design:
    >
    > Same comment - those last two columns also have NOTHING to do with the message itself.
    Right.
    > The message table should just have columns directly related to the message. It is a list of unique messages: no more, no less.
    Right.
    > Mark gave you hints to the proper normalized design using an INTERSECT table.
    > that table might list: sender, recipient, sender_delete_flag, recipient_delete_flag.
    > As mark suggested you could also have one or two DATEs related to when the delete flags were set. I would just make the columns DATE fields.
    >
    > Once both date columns have a value you can delete the message (or delete all messages older than 30+ days).
    >
    > When both flags are set you can delete the message itself that references the sender and the message sent.
    Okay, how does this revised design look...
    MEMBER --||-----0<-- PM_DISTRIBUTION -->0-------||-- PRIVATE_MSG
    MEMBER table
    - id
    - email
    - username
    - first_name
    and so on...
    PM_DISTRIBUTION table (Maybe you can think of a better name??)
    - id
    - private_msg_id
    - sender_id
    - recipient_id
    - sender_flag
    - sender_deleted_on
    - sender_purged_on
    - recipient_flag
    - recipient_read_on
    - recipient_deleted_on
    - recipient_purged_on
    PRIVATE_MSG
    - id
    - subject
    - body
    - sent_on
    Is that what you were describing to me?
    Quickly reflecting on this new design...
    1.) It should now be in 3rd Normal Form, right?
    2.) It should allow the Sender and Recipient to freely and independently "delete" or "purge" a PM with no impact on the other party, right?
    Here are a few Potential Issues that I see, though...
    a.) What is to stop there from being TWO SENDERS of a PM?
    In retrospect, that is why I originally stuck "member_id_from" in the PRIVATE_MSG table!!  The logic being, that a PM only ever has *one* Sender.
    I guess I would have to add either Application Logic, or Database Logic, or both to ensure that a given PM never has more than one Sender, right?
    b.) If the design above is what you were hinting at, and if it is thus "correct", then is there any conflict with my Business Rule: "Any given User shall only be allowed 100 Messages between his/her Incoming, Sent and Trash folders."
    Because the Sender is no longer "tightly bound" to the PRIVATE_MSG, in my scenario above...
    Debbie could send 100 PM's, hit her quota, then turn around and delete and purge all 100 Sent PM's and that should in no way impact the 100 PM's sitting in other Users' Inboxes, right??
    I think this works like I want...
    Sincerely,
    Debbie
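    On potential issue (a), here is one sketch of an application-side guard for the "single Sender per PM" rule (hypothetical names; a database-level constraint could serve the same purpose):

```java
import java.util.List;

public class SingleSenderCheck {
    // One PM_DISTRIBUTION row per recipient of a message
    record Distribution(long pmId, long senderId, long recipientId) {}

    // The "one Sender per PM" rule: every distribution row for a given
    // private_msg_id must carry the same sender_id
    static boolean hasSingleSender(long pmId, List<Distribution> rows) {
        return rows.stream()
            .filter(r -> r.pmId() == pmId)
            .map(Distribution::senderId)
            .distinct()
            .count() <= 1;
    }

    public static void main(String[] args) {
        List<Distribution> rows = List.of(
            new Distribution(1, 10, 20),
            new Distribution(1, 10, 21),  // same sender, second recipient: OK
            new Distribution(2, 10, 20),
            new Distribution(2, 11, 21)); // two senders for PM 2: violation
        System.out.println(hasSingleSender(1, rows) + " " + hasSingleSender(2, rows));
        // prints: true false
    }
}
```

    Running this check before inserting a distribution row would keep the rule intact even though sender_id no longer lives on PRIVATE_MSG.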

  • Architeture influence MDM Data Modeling??

    Hi,
    I understand that if I have MDM SP05 then I can utilize some good functionalities of EP, Java Web Dynpro or SAP NW CE, web services and BPM, and if I integrate the same applications with MDM SP03 then I will lose these functionalities. Will this have any impact/influence on my data model design decisions? What will change?
    Cheers,
    Rajesh

    Hi Rajesh,
    I don't think you have to think from that perspective. When you say Data Modelling - it is related to repository structure i.e. fields and tables.
    Scenario 1 - You don't have any information about upcoming version
    In this case, since you have no idea what the new version would consist of, there is no point in thinking about data modelling. You can go ahead with the most optimized model based on your current requirements.
    Scenario 2 - You have some idea about the new features of the new version
    Still, I don't think you need to consider data modelling, because new features might come in the form of new Java classes, new iViews, or maybe the introduction of new data types, etc. In all cases you cannot change your current data model.
    It's always better to have POC and Live on same version so that things can be reused.
    I have just shared my views and didn't provide any solution
    Regards,
    Jitesh

  • What's the best way to model the following requirements in a data model?

    I need a data model to represent the following scenario, but I just can't get my head around it.
    Class A has many Class B's. Class B's can have many Class C's and vice versa. So ultimately the relationship for the above 3 classes is as follows:
    Class A -- (1 : M) --> Class B --- (M : M) ---> Class C
    And Class C must know about its parent referenced (Class B) and also same applies to Class B where it needs to know who owns it (Class A).
    What's the best way to construct this? A tree? A graph?
    Or just simply:
    Class A has list of Class B and Class B has list of Class C.
    But wouldn't this be tight dependencies?

    Theresonly1 wrote:
    Basically:
    A's own multiple B's (B's need to know who owns them), BUT a B can only be owned by one A.
    B's own multiple C's AND C's can be owned by multiple B's (this is a many-to-many relationship, right?). Again, C's need to know who owns them.

    I'd reckon that you'd need some references. First, figure out the names of each tier of class. I would say maybe A is a principal/school, B's are teachers (because typically teachers only teach under one principal/in one school), and C's are students (because teachers can have multiple students, and each student has multiple teachers). So now that you have the names, make some base classes. If I understand your problem correctly, A's really don't need to know who they own, so A's don't need much of anything. B's have a reference to the A that owns them, but not much else, because they don't need to know who they own. C's own nothing, but they are owned by multiple B's, so they hold a collection of references to each of the B's that own them. I'd use an ArrayList, considering each could have a different number of B's, but you could do it with an array if you tried. I'll leave it up to you how you implement everything, but here are some guides to how I might do it:
    public class Principal {
    }

    public class Teacher {
        public Principal owner;
        public Teacher(Principal owner) {
            this.owner = owner;
        }
    }

    public class Student {
        public Teacher[] owners;
        public Student(Teacher... owners) {
            this.owners = owners;
        }
        public void addOwner(Teacher newOwner) {
            // copy the old array into a temporary one with an extra slot,
            // put newOwner in that slot, then point owners at the new array
            Teacher[] tmp = java.util.Arrays.copyOf(owners, owners.length + 1);
            tmp[owners.length] = newOwner;
            owners = tmp;
        }
    }

    In Student, the 'Teacher... owners' parameter (varargs) is how you allow an undetermined number of parameters, and inside the method it comes out as an array. I hope this helps you!
