Simple data modelling scenario

I have an Org dimension A (Grand Total -> Primary -> Dept -> Detail) and a Fact B that connects to A via the Detail column (A.Detail = B.Detail). There is a column MoreDetail in B that needs to be added to Org A. This shouldn't be a problem with a simple join in an opaque view/select based on Detail = Detail. The only problem is that the updated dimension would have MoreDetail as the lowest grain. Is this possible without heavy reorganization? Thanks

1) Change the keys for the fact table and the dim table (if they use Detail).
2) Add MoreDetail to the hierarchy.
3) Set the level to MoreDetail wherever the level was Detail.
If you are concerned about changes in reports, you can use a small trick: rename Detail to LessDetail and MoreDetail to Detail (be wary of the aliases created in the presentation layer). A minimal sketch of such an opaque view follows below.
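Purely as an illustration, here is a minimal sketch of the opaque-view join described above. The object names (org_a, fact_b, primary_org) are assumptions, not the poster's actual tables; the DISTINCT subquery keeps the dimension from fanning out if a Detail/MoreDetail pair repeats in the fact:

    -- Assumed names: org_a is the dimension, fact_b the fact table.
    CREATE OR REPLACE VIEW org_a_extended AS
    SELECT a.primary_org,
           a.dept,
           a.detail,
           b.more_detail          -- becomes the new lowest-grain level
    FROM   org_a a
    JOIN  (SELECT DISTINCT detail, more_detail
           FROM   fact_b) b
      ON   a.detail = b.detail;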

Similar Messages

  • Simple Data Model or Load in BI 7.0

Hi there, please help me get clear on this and firm up the data model. Please answer step by step, e.g. 1, 2, 3.
1. I am trying to load the two purchasing cubes 0PUR_C01 and 0PUR_C02 from the DataSources 2LIS_02_SCL (schedule line), 2LIS_02_ITM and 2LIS_02_HDR.
2. Now the question is that I see confusion here on how to load and design the data model. Please tell me how.
Option 1: create a transformation from DataSource 2LIS_02_SCL to InfoSource 2LIS_02_SCL, then create a transformation from InfoSource 2LIS_02_SCL to cube 0PUR_C01, and repeat the same thing for the other DataSources.
Then create an InfoPackage from the DS to load the PSA, and create the DTP to load from the DS to the cube.
3. Now I see people making a transfer structure for IS 2LIS_02_SCL, connecting it to DS 2LIS_02_SCL, and then going to the cube and creating update rules from IS 2LIS_02_SCL to the cube, as in 3.5. Why is this needed in BI 7.0 if we are going by transformation from DS to IS and IS to cube, and then doing the DTP?
Please clarify the right approach to solve the requirement.
4. I also see the way suggested by SAP, http://help.sap.com/saphelp_nw04s/helpdata/en/44/0243dd8ae1603ae10000000a1553f6/content.htm, to make two transformations from IS to IS in case 3. Why is this required, since SAP provides different InfoSources?
5. Why not create the DS-to-IS transformation for each DS, such as 2LIS_02_SCL and 2LIS_02_ITM, and a DTP to the cube separately for each DS?
Please help.
Thanks
Soniya Kapoor


  • BW data modelling scenario

    Hi BW gurus,
I would like to know how we decide whether a given piece of data should be a key figure or a characteristic in a dimension table.
    Regards,
    Tushar.

    Hi Tushar,
Key figures are the facts, usually values, amounts or quantities that you are trying to measure, whereas the characteristics hold the unique values of the business entities that define a fact. Depending on what your data is doing, you will need to make this decision. See this document for full details:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/6ce7b0a4-0b01-0010-52ac-a6e813c35a84
    Hope this helps...
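To make the distinction concrete, here is a generic star-schema sketch (illustrative only, not from the thread): the characteristics key and describe the dimension rows, while the key figures are the numeric measures on the fact.

    -- Characteristics: descriptive values that define the grain of a fact row.
    CREATE TABLE dim_material (
      material_id    NUMBER PRIMARY KEY,
      material_name  VARCHAR2(40),
      material_group VARCHAR2(20)
    );

    -- Key figures: measurable quantities you aggregate (sum, average, ...).
    CREATE TABLE fact_sales (
      material_id  NUMBER REFERENCES dim_material (material_id),
      calendar_day DATE,
      quantity     NUMBER,        -- key figure
      revenue      NUMBER(15,2)   -- key figure
    );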

  • Issue in creating a custom data model from BP

    Hi Team
We have a requirement to create a custom data model by copying data model BP. I have successfully created the new data model ZP and have copied the search UI from BP. The issue is that when I search for a business partner, ideally it should not display any entries, because I have just created the data model; but it is taking entries from BP and displaying them. Please let me know how to map the data model to the search UI. I don't see the option USMD_MODEL here.

    Hi Imran,
actually that is not an issue but a designed feature. I'm afraid that you need to re-think your whole project. The explanation is rather simple:
Data model BP in MDG is a so-called re-use area data model. This means that active data (records that are currently not stored in a change request) are saved in existing SAP ERP database tables, like BUT000 for business partner master data and LFA1 or KNA1 for vendor or customer master data.
If you copy data model BP to ZP, you still refer to the same active area. You will always find the same active objects, no matter which data model you are actually using for the user interface. The only difference will occur for objects currently being processed in a change request; in that case a separation between BP and ZP is possible. But this won't help to solve your issue.
    From SAP side I can only recommend not to copy BP but to find a different way of integrating your project needs into BP.
    Best regards
    Michael

  • Need help with Data Model for Private Messaging

    Sad to say, but it looks like I just really screwed up the design of my Private Messaging (PM) module...  *sigh*
    What looked good on paper doesn't seem to be practical in application.
    I am hoping some of you Oracle gurus can help me come up with a better design!!
    Here is my current design...
    member -||-----0<- private_msg_recipient ->0------||- private_msg
    MEMBER table
    - id
    - email
    - username
    - first_name
    PRIVATE_MSG_RECIPIENT table
    - id
    - member_id_to
    - message_id
    - flag
    - created_on
    - updated_on
    - read_on
    - deleted_on
    - purged_on
    PRIVATE_MSG table
    - id
    - member_id_from
    - subject
    - body
    - flag
    - sent_on
    - updated_on
    - sender_deleted_on
    - sender_purged_on
    ***Short explanation of how the application currently works...
    - Sender creates a PM and sends it to a Recipient.
- The PM appears in the Sender's "Sent" folder on my website.
- The PM also appears in the Recipient's "Incoming" folder.
- If the Recipient deletes the PM, I set "deleted_on" and my code moves the PM from the Recipient's "Inbox" to the "Trash" folder.  (Record doesn't actually move!)
- If the Recipient "permanently deletes" the PM from his/her "Trash", I set "purged_on" and my code removes the PM from the Recipient's Message Center.  (Record still in database!)
- If the Sender deletes the PM, I set "sender_deleted_on" and my code moves the PM from the Sender's "Sent" folder to the "Trash" folder.  (Record doesn't actually move!)
- If the Sender "permanently deletes" the PM from his/her "Trash", I set "sender_purged_on" and my code removes the PM from the Sender's Message Center.  (Record still in database!)
    Here are my problems...
    1.) I can't store PM's forever.
    2.) Because of my design, the Sender really owns the PM, and if I add code to REMOVE the PM from the database once it has a "sender_purged_on" value, then that would in essence remove the PM from the Recipient's Inbox as well!!
In order to remove a PM from the database, I would have to make sure that *both* the Recipient has a "purged_on" value and the Sender has a "sender_purged_on" value.  (Lots of application logic for something which should be simple?!)
I am wondering if I need to change my Data Model to something that gives me autonomy when it comes to the Sender and/or the Recipient deleting the PM for good...
On the other hand, I believe I did a good job of normalizing the data.  And my current Data Model is the most efficient when it comes to saving storage space and not having dups.
Maybe I do indeed just need to write application logic - or a cron job - which checks to make sure that *both* the Sender and Recipient have deleted the PM before it actually flushes it out of my database to free up space?!
    Of course, if one party sits on their PM's forever, then I can never clear things out of my database to free up space...
    What should I do??
    Some expert advice would be welcome!!
    Sincerely,
    Debbie

    rp0428,
    I think I am starting to see my evil ways and where I went wrong... 
    > Unfortunately his design is just as denormalized as yours
    I see that now.  My bad!!
    > the last two columns have NOTHING to do with the message itself so do NOT belong in a normalized table.
    > And his design:
    >
    > Same comment - those last two columns also have NOTHING to do with the message itself.
    Right.
    > The message table should just have columns directly related to the message. It is a list of unique messages: no more, no less.
    Right.
    > Mark gave you hints to the proper normalized design using an INTERSECT table.
    > that table might list: sender, recipient, sender_delete_flag, recipient_delete_flag.
    > As mark suggested you could also have one or two DATEs related to when the delete flags were set. I would just make the columns DATE fields.
    >
    > Once both date columns have a value you can delete the message (or delete all messages older than 30+ days).
    >
    > When both flags are set you can delete the message itself that references the sender and the message sent.
    Okay, how does this revised design look...
    MEMBER --||-----0<-- PM_DISTRIBUTION -->0-------||-- PRIVATE_MSG
    MEMBER table
    - id
    - email
    - username
    - first_name
    and so on...
    PM_DISTRIBUTION table (Maybe you can think of a better name??)
    - id
    - private_msg_id
    - sender_id
    - recipient_id
    - sender_flag
    - sender_deleted_on
    - sender_purged_on
    - recipient_flag
    - recipient_read_on
    - recipient_deleted_on
    - recipient_purged_on
    PRIVATE_MSG
    - id
    - subject
    - body
    - sent_on
    Is that what you were describing to me?
    Quickly reflecting on this new design...
    1.) It should now be in 3rd Normal Form, right?
    2.) It should allow the Sender and Recipient to freely and independently "delete" or "purge" a PM with no impact on the other party, right?
    Here are a few Potential Issues that I see, though...
a.) What is to stop there being TWO SENDERS of a PM?
In retrospect, that is why I originally stuck "member_id_from" in the PRIVATE_MSG table!!  The logic being that a PM only ever has *one* Sender.
I guess I would have to add either Application Logic, or Database Logic, or both, to ensure that a given PM never has more than one Sender, right?
    b.) If the design above is what you were hinting at, and if it is thus "correct", then is there any conflict with my Business Rule: "Any given User shall only be allowed 100 Messages between his/her Incoming, Sent and Trash folders."
    Because the Sender is no longer "tightly bound" to the PRIVATE_MSG, in my scenario above...
    Debbie could send 100 PM's, hit her quota, then turn around and delete and purge all 100 Sent PM's and that should in no way impact the 100 PM's sitting in other Users' Inboxes, right??
    I think this works like I want...
    Sincerely,
    Debbie
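For reference, a minimal DDL sketch of the revised design above, with assumed Oracle types (the flag columns are omitted for brevity, and the MEMBER table is as described in the post). The unique constraint stops duplicate deliveries of the same message; truly guaranteeing a single Sender per PM (question a) would still take a trigger, or moving the sender back onto PRIVATE_MSG. The purge job at the end shows the two-step delete discussed above:

    CREATE TABLE private_msg (
      id      NUMBER        PRIMARY KEY,
      subject VARCHAR2(200),
      body    CLOB,
      sent_on DATE          DEFAULT SYSDATE
    );

    CREATE TABLE pm_distribution (
      id                   NUMBER PRIMARY KEY,
      private_msg_id       NUMBER NOT NULL REFERENCES private_msg (id),
      sender_id            NUMBER NOT NULL REFERENCES member (id),
      recipient_id         NUMBER NOT NULL REFERENCES member (id),
      sender_deleted_on    DATE,
      sender_purged_on     DATE,
      recipient_read_on    DATE,
      recipient_deleted_on DATE,
      recipient_purged_on  DATE,
      CONSTRAINT pm_dist_uq UNIQUE (private_msg_id, recipient_id)
    );

    -- Purge job: drop deliveries purged by BOTH parties, then any
    -- message that no longer has a delivery row left.
    DELETE FROM pm_distribution
    WHERE  sender_purged_on IS NOT NULL
    AND    recipient_purged_on IS NOT NULL;

    DELETE FROM private_msg m
    WHERE  NOT EXISTS (SELECT 1
                       FROM   pm_distribution d
                       WHERE  d.private_msg_id = m.id);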

  • Some observations on my first use of Data Modeler Beta

    First of all, I can see this tool has a lot of promise.
    I hope Oracle keeps at it, it could turn into a real winner if all the features I see being worked on mature.
    Thanks!
    Here are a few observations on things that I found non-obvious or tedious to do.
    1. When designing an entity, I want to give it a name, a definition, attributes and keys. I want that process to be quick and require the minimum amount of mouse-clicking/navigation fiddling as possible. The current way of defining the attribute's datatype and size is painfully slow. I have to click to get a pop-up. Then I have to click to choose from a set of categories. Then I have to click on a dropdown list. If I try to use the down-arrow on the dropdown list, it works, but not if I go past the one I want. The up-arrow won't take me backwards in the list, so more clicking. It's just a nasty, slow interface to do a simple task that I have to do a thousand times in a data model. If I need to change the size of something, back I go thru the entire process all over again.
    That makes it doubly slow to work in the most natural way, which is to list the attributes and the datatypes, then come back and refine the sizes once the model is maturing and relatively stable.
2. Adding an additional attribute requires a mouse click instead of a down-arrow. That means I have to take my hands off the keyboard to add a new attribute. Maybe there is some shortcut key that does that, but I have better things to do than memorize non-standard keyboard mappings. Make the down/up arrows work as they should.
    3. Adding the comment that describes the attributes is not quite as slow, but still requires more keystrokes/mouse movements than it should. It's hard enough to get developers to document their attributes, don't discourage them.
    4. I can't see the list of attributes, data types, sizes, key/mandatory settings, and the comment at one time while editing the entity. Makes it harder to grasp what all the attributes mean at a glance, which slows down the modeling and the comprehension of an existing model.
    5. All the entities I created had primary keys with columns in them. But when I tried to have it build a physical model, it complained that some relationships had no columns in them. For the life of me, I couldn't figure out how to fix that. Never had that problem in any other case tool.
    6. Getting it to generate DDL was awkward to find. Make it something obvious, like a button on the toolbar that says "Generate DDL".
    7. Apostrophes in the Comments in RDBMS are not escaped, so the generated DDL won't run.
    8. For the ease of use/speed of use testing on high-volume key tasks, make the developers do the task 1000 times in a row. Make them use long names that require typing, not table A with columns c1, c2 and c3. Long before they get to iteration 1000 they will have many ideas on how to make that task easier and faster to do.
    9. Make developers use names of things that are the maximum length allowed. For example, for a table name in oracle, the max length of the name is 30 characters. The name of one testing table should be AMMMMMMMMMMMMMMMMMMMMMMMMMMMMZ. That's a capital A followed by 28 capital M's and a capital Z. For numbers, use the pattern 1555559. If the developers can't see the A and Z or 1 and 9 in the display area for the name in the default layout for the window, they did the display layout wrong. For places where the text can be really long, choose a "supported visible length" for that field and enter data in the pattern AMMMMMMMMMMMMMMMQMMMMMMMMZ, where Q is placed at the supported visible length. if the A and Q don't show, the layout is wrong.
10. SQL Developer has quite a few truly gooberish UI interaction designs and I can see some of that carrying over to the Data Modeler tool. I really recommend getting a Windows UI expert to design the user interface, not a Java expert. I've seen a lot of very productive Windows user interfaces and extremely few Java interfaces suitable for high-speed data entry. Give the UI expert the authority to tell the Java programmers "I don't want to hear about Java coding internals - make the user interface perform this way." I think the technical limitations in Java UIs are much smaller than the mindset limitations I've seen in all too many programmers. That, and making the developers use their code 1000 times in a row to perform key tasks, will cause the UI to get streamlined considerably.
    Thanks, and keep up the good work!

    Dear David,
Again, thank you for your valuable and highly appreciated feedback. Please find below a more elaborate answer to your observations:
    *1. When designing an entity, I want to give it a name, a definition, attributes and keys.*
    In the new Early Adopter Release, a "SQL developer like" property window has been added for Entities and Tables. For each attribute/column you will have the ability to add name, datatype, Primary Key, Mandatory and comment from one and the same screen
    *2. Adding an additional attribute requires a mouse click instead of a down-arrow.*
    An enhancement request has been created
    *3. Adding the comment that describes the attributes is not quite as slow, but still requires more keystrokes/mouse movements than it should.*
    In the new Early Adopter Release, a "SQL developer like" property window has been added for Entities and Tables. For each attribute/column you will have the ability to add name, datatype, PK, M and comment from one and the same screen
    *4. I can't see the list of attributes, data types, sizes, key/mandatory settings, and the comment at one time while editing the entity. Makes it harder to grasp what all the attributes mean at a glance, which slows down the modeling and the comprehension of an existing model.*
    See former answers. For meaning of attributes you can also use the Glossary and Naming Standardization facilities: see Tools Option menu, Glossary and General Options for naming standards
    *5. All the entities I created had primary keys with columns in them. But when I tried to have it build a physical model, it complained that some relationships had no columns in them. For the life of me, I couldn't figure out how to fix that. Never had that problem in any other case tool.*
A bug report has been created. The issue will most probably be solved in the next Early Adopter release.
    *6. Getting it to generate DDL was awkward to find. Make it something obvious, like a button on the toolbar that says "Generate DDL".*
    An enhancement request has been created.
    *7. Apostrophes in the Comments in RDBMS are not escaped, so the generated DDL won't run.*
    A bug report has been created
    *8. For the ease of use/speed of use testing on high-volume key tasks, make the developers do the task 1000 times in a row. Make them use long names that require typing, not table A with columns c1, c2 and c3. Long before they get to iteration 1000 they will have many ideas on how to make that task easier and faster to do.*
I apologize, but I don't clearly understand what you mean by ease of use/speed of use here.
    *9. Make developers use names of things that are the maximum length allowed.*
Our relational model is for use with not just Oracle, but also DB2, SQL Server and, in the future, maybe other database systems, which means that we can't tailor it to just one of them. However, you can set maximum name lengths by right-clicking on the diagram and selecting Model Properties, where you can set naming options. Here you can also use the Glossary and Naming Standardization facilities: see the Tools Options menu, Glossary and General Options for naming standards.
    *10. SQL Developer has quite a few truly gooberish UI interaction designs and I can see some of that carrying over to the Data Modeler tool.*
    Fully agree. As you will see in our next Early Adopter release we have started to use SQL Developer like UI objects.
    Edited by: René De Vleeschauwer on 17-nov-2008 1:58

  • APEX application development with an existing data model

    Dear all,
    We - as a company - are trying to build an application in ApEx with an existing data model. The idea is that the data model that has all sorts of TAPIs and business rules defined is going to be re-used in an ApEx environment. I am actually wondering whether this is possible, wise, feasible. When I am building some simple pages on one specific table, which has approx. 35 fields, sometimes the triggers around that table fail. Before I am actually going to dive in to try and sort these problems I am wondering whether this use of a data model in an ApEx environment is wise.
    I mean, when you build an ApEx application there are usually no triggers and TAPIs available. My logic is that the ApEx application can look after validation and stuff.
Does anybody with experience have anything smart and useful to say about this? Any feedback is appreciated.
    Kind regards,
    -victorbax
    -leiderdorp, the netherlands

    Hey vik,
At my company we rarely use the standard APEX wizards because of multilinguality (on the data level) and error management issues.
We always create an API in PL/SQL and call the procedures from APEX, making APEX the 'view' layer in a kind of model-view-controller architecture.
The main thing to keep in mind is maintainability. If you're working with an existing data model with an API that works, it's probably best to just create forms and reports through APEX and then call the appropriate PL/SQL (packaged) procedure manually. This kind of architecture has the downside of being a little more work initially, but it's much easier to maintain large projects through a self-written API, and to take care of issues like error handling, logging and multilinguality there. A sketch of the pattern follows below.
    Kind Regards,
    Geert Guldentops
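As an illustration of the pattern Geert describes, here is a minimal sketch. The package, table and page-item names (order_api, orders, :P10_ORDER_ID, ...) are hypothetical:

    -- A thin PL/SQL API that owns the DML and business rules.
    CREATE OR REPLACE PACKAGE order_api AS
      PROCEDURE save_order (p_order_id IN orders.id%TYPE,
                            p_status   IN orders.status%TYPE);
    END order_api;
    /
    CREATE OR REPLACE PACKAGE BODY order_api AS
      PROCEDURE save_order (p_order_id IN orders.id%TYPE,
                            p_status   IN orders.status%TYPE) IS
      BEGIN
        UPDATE orders
        SET    status = p_status
        WHERE  id = p_order_id;
        -- error handling / logging / translation would live here
      END save_order;
    END order_api;
    /
    -- An APEX page process then just calls the API instead of
    -- relying on wizard-generated DML:
    BEGIN
      order_api.save_order (p_order_id => :P10_ORDER_ID,
                            p_status   => :P10_STATUS);
    END;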

  • Problem connecting PowerPivot's (Office 2013) Data Model deployed on SharePoint 2013 to data source.

    Hello:
    Our configurations is as follow :  SharePoint 2013 is on Server A;  SQL Server Analysis server (SQL 2014) is on DB Server B;   
    SharePoint databases (sp_ ...) and our Data Mart (SQL 2014) are on server B. All servers runs Windows 2012 OS.
    On my desktop I built a simple Excel 2013 workbook with PowerPivot Data Model that imported several tables from our Data Mart (server B above). Then created a Power View report. Locally everything works fine. 
    But when I uploaded this workbook to our SharePoint PowerPivot gallery and was trying to refresh data, I got the connection error: It’s very long but the ErrorCode is “rsCannotRetriveModel”. The end of the error message is:
    'TemporaryDataSource'.</Message><MoreInformation><Source>Microsoft.AnalysisServices.SPClient</Source>
    <Message>Call to Excel Services returned an error.</Message>
    <MoreInformation><Source></Source><Message>We were unable to refresh one or more data connections in this workbook.
    The following connections failed to refresh:ThisWorkbookDataModel</Message>
    <MoreInformation><Source>Microsoft.Office.Excel.Server.WebServices</Source><Message>
    We were unable to refresh one or more data connections in this workbook. The following connections failed to refresh:ThisWorkbookDataModel</Message></MoreInformation>
    </MoreInformation></MoreInformation></MoreInformation></MoreInformation><Warnings xmlns="http://www.microsoft.com/sql/reportingservices" /></detail>
    Our Excel Services on the SharePoint work fine and refresh data on different excel workbooks (with no Data Model) just fine.  We are using an unattended account for Excel Services to connect from SharePoint server to our databases. Found a few references
    on the topic, tried them but with no luck.
    Please advise!
    Regards
    -Jeff
    Jeff Gorvits

    Hi Jeff,
Firstly, I need to confirm whether you are refreshing the data connection in the browser, since data refresh is not supported in Office Web Apps. Please see more information in this article:
http://blogs.technet.com/b/excel_services__powerpivot_for_sharepoint_support_blog/archive/2013/01/31/powerpivot-for-sharepoint-browser-refresh-fails-data-refresh-not-supported-in-office-web-apps.aspx
Given the error "Call to Excel Services returned an error", please verify that the location of the data source, for example an Excel workbook, is registered as a trusted location with Excel Services:
https://technet.microsoft.com/en-us/library/jj219699(v=office.15).aspx
Since you are using an unattended account for Excel Services to connect from SharePoint to the databases, I wonder if the issue occurs with unattended refresh; if so, please refer to:
http://social.technet.microsoft.com/wiki/contents/articles/3870.troubleshoot-powerpivot-data-refresh.aspx#Problems_using_the_Unattended_data_refresh_account
    Regards,
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact
    [email protected] .
    Rebecca Tu
    TechNet Community Support

  • Xcode, how do you take an app from an idea to a data model, data structure, or class structure?

    Hey everyone!
I'm a beginner Xcode user and computer engineering sophomore, and have literally spent every hour of the past few weeks reading as much as I possibly can about becoming a developer.
I've found plenty of documentation and guides on how to do very specific things in Xcode and Objective-C, but am still looking for more information on one topic: how do you decide which data models/structures/controllers etc. to use?
    I mean, you personally. I've been looking for a resource on this and have not been able to find one. I just finished Apple's iOS Developers Guide and they mention "choosing a data model" but do not describe them in much detail.
    The following is what is provided in the guide:
    ● Choose a basic approach for your data model:
    ● Existing data model code—If you already have data model code written in a C-based language, you
    can integrate that code directly into your iOS apps. Because iOS apps are written in Objective-C, they
    work just fine with code written in other C-based languages. Of course, there is also benefit to writing
    an Objective-C wrapper for any non Objective-C code.
    ● Custom objects data model—A custom object typically combines some simple data (strings, numbers,
    dates, URLs, and so on) with the business logic needed to manage that data and ensure its consistency.
    Custom objects can store a combination of scalar values and pointers to other objects. For example,
    the Foundation framework defines classes for many simple data types and for storing collections of
    other objects. These classes make it much easier to define your own custom objects.
    ● Structured data model—If your data is highly structured—that is, it lends itself to storage in a
    database—use Core Data (or SQLite) to store the data. Core Data provides a simple object-oriented
    model for managing your structured data. It also provides built-in support for some advanced features
    like undo and iCloud. (SQLite files cannot be used in conjunction with iCloud.)
    ● Decide whether you need support for documents:
    The job of a document is to manage your app’s in-memory data model objects and coordinate the storage
    of that data in a corresponding file (or set of files) on disk. Documents normally connote files that the user
    created but apps can use documents to manage non user facing files too. One big advantage of using
    documents is that the UIDocument class makes interacting with iCloud and the local file system much
    simpler. For apps that use Core Data to store their content, the UIManagedDocument class provides similar
    support.
    I suppose my question boils down to, how do you decide which structures to use? If you can provide an example of an app idea and how its implemented that would be very helpful and much appreciated!
    For example, to implement the idea of an app which allows users to progress through levels of knowledge of a certain subject and rewarding them with badges and such (this is not an actual app just a whim) how would one model that?
    Thanks in advance for all your help!!!

    SgtChevelle wrote:
    how do you decide which structures to use?
    Trial and error.
I wish I had a better answer for you, but that pretty much encapsulates it. There is some, but not much, good wisdom out there, and it takes a significant amount of experience to be able to recognize it. The software development community is currently afflicted with a case of copy-and-paste-itis. And the prognosis is poor.
The solution is to be brutal to yourself and others. Focus on what you need and ignore everything else. Remember that other people have their own needs and methods, and they might not be applicable to you. Apple, for example, can hire thousands of programmers, set them to coding for six months, pick the best results, and have the end users spend their own time and money to test it. If you don't have Apple's resources and power, think twice about adopting Apple's approach. And I am talking from a macro to a micro perspective. Apple's sample and boilerplate code is just junk. Don't assume you can't do better. You can.
    Unfortunately, all this takes time and practice. You can read popular books, but never assume that anyone knows more than you do. Maybe they do and maybe they don't. It takes time to figure that out. Just do your best, ignore the naysayers, and doubt other people even more than you doubt yourself.

  • How to rename a View Link in a data model?

    In JDeveloper 9.0.3, I create a simple BC4J project with a master and a detail. The business components are properly created.
When I try to design the data model for the module, I am able to link the view for the master and the view for the detail. There is no problem renaming the view objects that I select in my data model, but I found no way to rename the view links used between the view objects.
    I always get an automatically generated name, like "FkForeignDetailLink1". This is what I see in the data model, and also in the structure of the module, in the "View Link Members" section.
    I am able to modify the properties of this view link, but not its name. Am I missing something?
    TIA

    to clarify... In the AM wizard, you are trying to rename the instances of view objects and view links?
If so, you are right, there is no way to rename view links on that panel. We are working on a better way to do that for the next release. The only way I know of to rename the view links is to shut JDev down and edit the XML for the application module. This can be dangerous if you get it wrong, so make a backup before attempting this. The hint about renaming things OUTSIDE JDeveloper I already knew, but I don't like it much.
As for the AM wizard, I just noticed that I cannot do it inside it. But that's no problem for me, as lots of details cannot be done inside the wizard, yet there is a way to customize them afterwards. My real problem is that I didn't find ANY way of doing it.
A question for you if I may. Why are you renaming the view link instances? I thought most users would use the detail view instance directly. The detail view instance would in turn look up the appropriate view link, whatever its name was. The only place where I saw the bad view link names displayed in JDev was the Structure pane of the AM, under View Link Members. But there is no way to modify them there. As for the detail view instance, I didn't find a place to get to the view link at design time (in 9.0.3.988).
    Could you be more precise, please?
    Thanks again,
    Adrian

  • Data modeling dilemma for EAV oriented problems in Data Modeler

    Hello,
We are dealing with an EAV (entity-attribute-value) oriented data structure.
It looks like this:
Entity( Entity_Id number, Entity_Name varchar2, Entity_Desc varchar2 )
An entity that lists the attributes, with some metadata on their characteristics:
Type_Of_Attribute( Attr_Id varchar2, Type_Of_Value TOV_Domain, Unit_Of_Value varchar2, Min_Value variant_type, Max_Value variant_type )
Then we have the actual data. An entity is described by a set of attributes and their values. So, in addition to the attributes in row form in Entity, there are additional attributes in columnar form.
Because of sparsity...
However, in the columnar form the challenge is the type of the values, i.e. the domains of the attributes.
For example:
weight_of_person is a number between min_number and max_number.
But another parameter, for example mood_of_person, is a string from a domain consisting of a set of strings/descriptions.
Another possibility could be a reference to some table of values (key-value), which could be modelled as a one-to-many relationship if put into Entity in row form.
But since such an attribute relates only to a few instances, or is very sparse, and to preserve the table form, it was put in columnar form:
Attribute_Of_Entity( Entity_Id, Attr_Id, Value
                                 -- when not normalized, one could also add a unit like kg or lbs or inch or piece )
My question is about good/successful modelling practice for VALUE in Attribute_Of_Entity.
I read somewhere that some databases have a feature called a variant type.
I guess the objective is to model in such a way that implementing this model is as easy as possible for issues like:
a) validating the column-oriented form when entering or updating values
b) consolidating queries when reporting
c) aggregating data when grouping, and preventing aggregation of non-comparable data.
So: should VALUE be implemented as a structure/complex type with methods, or is there some other feature that supports variability of the data along the same column of a table? In other words, a logical design that does not cause too much complexity in the relational design and table implementation, where procedures are handled as much as possible at the database level. (A sketch of one common workaround follows below.)
Thank you in advance for comments, experiences, suggestions,
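Purely illustrative, here is a minimal sketch of one common relational workaround for the missing variant type, assuming the Entity and Type_Of_Attribute tables above (and that Attr_Id is the primary key of Type_Of_Attribute): one typed column per domain, plus a CHECK constraint that exactly one of them is populated. Range validation against Min_Value/Max_Value would still need a trigger or an API layer, which is part of why the reply below steers away from EAV.

    CREATE TABLE attribute_of_entity (
      entity_id  NUMBER        NOT NULL REFERENCES entity (entity_id),
      attr_id    VARCHAR2(30)  NOT NULL REFERENCES type_of_attribute (attr_id),
      num_value  NUMBER,                -- numeric domains (e.g. weight)
      str_value  VARCHAR2(200),         -- string domains (e.g. mood)
      date_value DATE,
      ref_value  NUMBER,                -- FK into a lookup table of coded values
      CONSTRAINT aoe_pk PRIMARY KEY (entity_id, attr_id),
      CONSTRAINT aoe_one_value CHECK (
        (CASE WHEN num_value  IS NOT NULL THEN 1 ELSE 0 END +
         CASE WHEN str_value  IS NOT NULL THEN 1 ELSE 0 END +
         CASE WHEN date_value IS NOT NULL THEN 1 ELSE 0 END +
         CASE WHEN ref_value  IS NOT NULL THEN 1 ELSE 0 END) = 1)
    );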

    Hello,
    EAV is rarely a good solution. Tell us about your business problem and we might be able to show you solutions that are performant/easier to maintain/...
    https://www.simple-talk.com/opinion/opinion-pieces/bad-carma/
    Regards
    Marcus
    BTW: this question should not be asked in the forum space for the tool SQL Data Modeler. Instead ask in SQL and PL/SQL or General Database Discussions

  • Can SAP leverage MM and classification (CL-system) data-model?

    Dear all,
    At the moment I’m working for a telecom company, setting up the Functional Management department activities to manage 2nd line support calls, and general overall application functional support.
    One of the issues is simplifying the work of monitoring Item (products, articles etc.) Master Data. Below you can see the data model where Items, their classes, the characteristics of classes and prices are stored. 
The legacy system has the following data model (as an ERD): http://sapparking2007.freehostia.com/LegacyModel.JPG
    Main Product (i.e. Nokia 6110)
         - has zero, one or more Characteristics (i.e. 220v cord, 3v battery etc.)      
         - has zero, one or more Variants (i.e. Nokia 6110i,  Black, Red etc.)
         A Variant  (Nokia 6110i, Black)
              - has zero, one or more Characteristics (i.e. 200h stand by time etc.)      
              - has zero, one or more Deliveries (i.e. we sell “a package”)
              A Delivery (certain combination of a package : Nokia 6110i, Red, Prepaid, 1 year)
                   - Has zero, one or more Characteristics (i.e. prepaid simcard, free service etc.)
                   - Has zero, one or more Delivery Prices (i.e. prices)
                   A Delivery Price
- Has zero, one or more Characteristics (i.e. 1 month warranty)
                        - Has zero, one or more Prices (i.e. offer price, sales, periods)
    Well, almost perfectly normalised data-model.
    The implementation in SAP is as follows:
Main Products, Variants and Deliveries are all stored in table MARA.
Product is article type Z01A,
Variant is type KMAT,
Deliveries are of types HAVA and DIEN.
    See image:
Current data model: http://sapparking2007.freehostia.com/Model01.JPG
    The Characteristics are stored in table CABN etc…
    The prices are stored in the SD table KONP .. (price condition). etc.
The MARA table is connected to the Classes table (KSSK) via the INOB table.
Classes are connected to Classes within the KSSK table.
The field that stores the Class in KSSK and the field that stores the Class below it are two different types of fields (18C and 50CN), and they simply won't join in a SAP query.
The field that links the Item to the Class in INOB and the field that stores the Class in KSSK are also two different types of fields (18C and 50CN), and they won't join in a SAP query either.
Therefore we have created extra Z "tables" (types, actually) that do the join for us.
Functional Management personnel are not ABAP and SQL people, so we have to keep it simple.
After a training session it proved to be still too complicated. A change request to create the product structure in a separate, single table resulted in a batch job that creates the file periodically, but not all of it. And, in the end, you never know if your data is up to date.
To me it seems logical to add a (derived) field to KSSK and INOB to hold the underlying class, with the same attributes as in MARA and KSSK, so it can be properly joined. But there is too much resistance to this in conjunction with SAP releases etc., and it seems to be impossible (not sure) to define a "type table" with a derived field.
Letting programmers produce more ABAP is not a clean option either, and yes, I realise one can create a SAP query on a single table and add a long SQL statement at record level.
    So the question:
    -     Is this implementation “the best” possible in SAP?
-     How can we work around this join-unfriendly database?
    In other query tools it is possible to create “tables” as output (on the fly), where the tables can be used as input tables in a next query, and can be joined further.
    -     Is this somehow possible in SAP?
I wonder if anyone can shed some light on this approach. We have 2 FTE to monitor 2.5 million product combinations, and the propagation of this data to over 30 sales channels - daily.
    Thank you in advance for taking the time!
    Kind Regards,
    Doron.

    Ashish:
    Please search the space threads before posting.  There are several posts where this is already covered.
    regards,
    bill.

  • Best practice on extending the SIEBEL data model

Can anyone point me to a reference document, or provide from their experience, a simple best practice on extending the Siebel data model for business-unique data? Basically I am looking for some simple rules - based either on use-case characteristics (need to sort and filter by, need to update frequently, ...) or on data characteristics (transient, changes frequently, ...) - to tell me whether I should extend the tables, leverage the 'X' tables, or do something else.
    Preferably they would be prescriptive and tell me the limits of the different options from a use perspective.
    Thanks

    Accepting the given that Siebel's vanilla data model will always work best, here are some things to keep in mind if you need to add something to meet a process that the business is unwilling to adapt:
    1) Avoid re-using existing business component fields and table columns that you don't need for their original purpose. This is a dangerous practice that is likely to haunt you at upgrade time, or (worse yet) might be linked to some mysterious out-of-the-box automation that you don't know about because it is hidden in class-specific user properties.
2) Be aware that X tables add a join to your queries, so if you are mapping one business component field to ATTRIB_01 and adding it to your list applets, you are potentially putting an unnecessary load on your database. X tables are best used for fields that are going to be displayed in only one or two places, so the join would not normally be included in your queries (see the sketch after this list).
    3) Always use a prefix (usually X_ ) to denote extension columns when you do create them.
    4) Don't forget to map EIM extensions to the extension columns you create. You do not want to have to go through a schema change and release cycle just because the business wants you to import some data to your extension column.
    5) Consider whether you need a conversion to populate the new column in existing database records, especially if you are configuring a default value in your extension column.
6) During upgrades, take the time to re-evaluate your need for the extension column, taking into account the inevitable enhancements to the vanilla data model. For example, you may find, as we did, that the new version of the S_ADDR_ORG table had an ADDR_LINE_3 column, and our X_ADDR_ADDR3 column was no longer necessary. (Of course, re-configuring all your business components to use the new vanilla column can also be quite an ordeal.)
    Good luck!
    Jim
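To make point 2 concrete, here is a simplified illustration of the extra join. This is hand-written SQL against the standard one-to-one extension-table pattern (an X table joined by PAR_ROW_ID), not actual Siebel-generated code:

    -- Every query that touches the field mapped to ATTRIB_01 now
    -- carries this extra join to the 1:1 extension table.
    SELECT c.last_name,
           c.first_name,
           x.attrib_01
    FROM   s_contact c
    LEFT JOIN s_contact_x x
      ON   x.par_row_id = c.row_id;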

  • OBIEE Best Practice Data Model/Repository Design for Objectives/Targets

    Hello World!
    We are faced with a design question that has become somewhat difficult and we need some help. We want to be able to compare side-by-side actual measures with their corresponding objectives/targets. Sounds simple. But, our objectives are static (not able to be aggregated) with multi-dimensionality and multi-levels. We need some best practice tips on how to design our data model and repository properly so that we can see the objective/target for a measure regardless of the dimensions that are used in the criteria and regardless of the level.
    Here is some more details:
    Example of existing objective table.
Dimension1   Dimension2   Dimension3   Obj1   Obj2   Quarter
NULL         NULL         NULL         .99    1.8    1Q13
DIM1VAL1     NULL         NULL         .99    2.4    1Q13
DIM1VAL1     DIM2VAL1     NULL         .98    2.41   1Q13
DIM1VAL1     DIM2VAL1     DIM3VAL1     .97    2.3    1Q13
DIM1VAL1     NULL         DIM3VAL1     .96    1.9    1Q13
NULL         DIM2VAL1     NULL         .97    2.2    1Q13
NULL         DIM2VAL1     DIM3VAL1     .95    2.0    1Q13
NULL         NULL         DIM3VAL1     .94    3.1    1Q13
    - Right now we have quarterly objectives set using 3 different dimensions. So, if an author were to add one or more (or zero) dimensions to their criteria for a given measure they could get back a different objective. They could add Dimension1 and get 99%. They could add Dimension1 and Dimension2 and get 98%. They could add all three dimensions and get 97%. They could add zero dimensions (highest grain) and get 99%. Using our existing structure if we were to add a new dimension to the mix the possible combinations would grow dramatically. (Not flexible)
    - We would like our final solution to be flexible enough so that we could view objectives with altogether different dimensions and possibly get different objectives.
    - We currently have 3 fact tables with 3+ conformed dimension tables and a few unique dimension tables.
    Could anyone share a similar situation where you have implemented a data model structure with the proper repository joins to handle showing side-by-side objectives/targets where the objectives were static and could be displayed at differing levels with flexible dimensions as described?
    Any help would be greatly appreciated.

Hi, yes, this suggestion is nice. First configure the sensors (activity or variable), then configure the sensor action as a JMS topic, which will in turn insert the data into a DB. Alternatively, when you configure the sensor action as a DB, the data goes to the Oracle Reports schema. As for whether there is any chance of altering the DB - I mean, by changing config files, so that the data does not go to that Reports schema and instead goes to a custom schema created by a user - I don't know if it can be done. My problem is that when I configure the JMS topic for sensor actions, I see blank data coming; for some reason or other the data is not getting posted. I have used an ESB and a routing service based on the schema which I am monitoring. Can anyone help?

  • Error to open a data model in Report builder (Word)

    Dear all,
I'm running into trouble when I try to open a data model in Report Builder (Word). Does anyone know about this problem?
The message is: "An error has occurred. Check the settings and try again."
    Any suggestion?
    Thanks for all!

I've also got this error several times. Usually the reason is an error in the Publisher query (or data template). It's better to first test (view) that you get proper XML output in Publisher, and only after that try to create an RTF template. If this doesn't work, I usually start from the beginning: first make a very simple report, then try the template, and if it works, gradually add elements on the Publisher side. Sometimes I have no idea why it didn't work in the first place, when it then works after starting again from the simple report.
