Best practice: Base an Entity object on a table or on a PL/SQL package?

Hello,
We are going to build an application based upon services. We'd like to implement the data part of our services with the ADF BC components. Each service's data part is represented as an Application.
There are two ways to define an entity within an application:
(1) Directly based upon the database tables
(2) Based upon a PL/SQL package containing insert/update/delete functionality (as described in paragraph 26.4 of the ADF Developer's Guide for Forms/4GL Developers)
I can imagine that the latter approach gives (from a services standpoint) a cleaner separation between the data and the model layer.
What are the advantages/disadvantages in real life of basing an Entity object on a PL/SQL package?
Thanks in advance,
Regards Leon Smiers

Hello Frank,
We are going to use the ADF BC model for both JSF pages and Web Services, so I'd like to have a reusable BC model.
You mentioned ADF BC transaction management: when I base the BC model upon PL/SQL APIs, do I have to define my own transaction management?
Regards Leon
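For reference, the approach from paragraph 26.4 of the dev guide comes down to overriding the entity's doDML() so that each framework DML request calls the corresponding PL/SQL procedure instead of letting ADF BC issue its own INSERT/UPDATE/DELETE. Below is a minimal sketch only; the PRODUCTS_API package, its procedure names and the attribute names are invented for illustration. Because the calls run on the entity's own DBTransaction, the application module's commit/rollback still controls the unit of work, so you normally do not have to write your own transaction management.

    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import oracle.jbo.JboException;
    import oracle.jbo.server.EntityImpl;
    import oracle.jbo.server.TransactionEvent;

    // Entity implementation backed by a hypothetical PRODUCTS_API PL/SQL package
    // instead of direct DML against the PRODUCTS table.
    public class ProductImpl extends EntityImpl {

        @Override
        protected void doDML(int operation, TransactionEvent e) {
            // Route each framework DML request to the matching PL/SQL procedure.
            if (operation == DML_INSERT) {
                callStoredProcedure("products_api.insert_product(?, ?)",
                        new Object[] { getAttribute("ProductId"), getAttribute("ProductName") });
            } else if (operation == DML_UPDATE) {
                callStoredProcedure("products_api.update_product(?, ?)",
                        new Object[] { getAttribute("ProductId"), getAttribute("ProductName") });
            } else if (operation == DML_DELETE) {
                callStoredProcedure("products_api.delete_product(?)",
                        new Object[] { getAttribute("ProductId") });
            }
        }

        // Runs an anonymous PL/SQL block on the entity's own DBTransaction,
        // so the call takes part in the normal ADF BC transaction.
        protected void callStoredProcedure(String stmt, Object[] bindVars) {
            PreparedStatement st = null;
            try {
                st = getDBTransaction().createPreparedStatement("BEGIN " + stmt + "; END;", 0);
                if (bindVars != null) {
                    for (int i = 0; i < bindVars.length; i++) {
                        st.setObject(i + 1, bindVars[i]);
                    }
                }
                st.executeUpdate();
            } catch (SQLException ex) {
                throw new JboException(ex);
            } finally {
                if (st != null) {
                    try { st.close(); } catch (SQLException ignore) { /* nothing useful to do */ }
                }
            }
        }
    }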

Similar Messages

  • Problem installing SAP Best Practices Baseline

    I have already installed ECC 6.0 and performed the R2R build procedure.
    I am trying to finish by installing the Best Practices Baseline using the installation assistant in transaction /n/smb/bbi. However, I am missing lots of eCATT objects.
    Please help, thanks,

    hi,
    I have uploaded the BC sets, but I'm having a problem while activating the project. It says eCATT object /SMB99/CHECK_o0010_B32 does not exist, and eCATT object /SMB99/RZ11_o001_B32 does not exist in the system.
    210 eCATT objects are missing. I am thinking of an eCATT add-on installation using SAINT. BP_BLERP is the add-on for eCATT, but I'm not finding it on www.service.sap.com/swdc. Any ideas? Please tell me about the BP installation using SAINT.
    Thanks
    Rajdeep
    Message was edited by:
            rajdeep sarma

  • Best-practice for use of object styles to manage image text wrap issues when aiming at both print and EPUB output?

    I have a work-flow question about object styles, text-wrap, and preparing a long document with lots of images for dual print/EPUB output in InDesign CC 2014.
    I am sort of experienced with InDesign but new to EPUB export. I have hundreds of pages and hundreds of images so I'd like to make my EPUB learning curve, in particular, less painful.
    Let me talk you through what I'm planning and you tell me if it's stupid.
    It's kind of a storybook-look I'm going for. Single column of text (6" by 9" page) with lots of small-to-medium images on the page (one or two images per page), and the text flowing around, sometimes right, sometimes left. Sometimes around the bounding box, sometimes following the edges of the images. So in each case I'm looking to tweak image size and placement and wrap settings so that the image is as close to the relevant text as possible and the layout isn't all wonky. Lovely print page the goal. Lots of fussy trade-offs and deciding what looks best. Inevitably, this will entail local overrides of paragraph styles. So what I want to do, I guess, is get the images as closely placed as possible, before I do any of that overriding. Then I divide my production line.
    1) I set aside the uniformly-styled doc for later EPUB export. (This is wise, right? Start for EPUB export with a doc with pristine styles?)
    2) With the EPUB-bound version set aside, I finish preparing the print side, making all my little tweaks. So many pages, so many images. So many little nudges. If I go back and nudge something at the beginning everything shifts a little. It's broken up into lots of separate stories, but still ... there is no way to make this non-tedious. But what is best practice? I'm basically just doing it by hand, eyeballing it and dropping an inline anchor to some close bit of text in case of some storm, i.e. if there's a major text change my image will still be almost where it belongs. Try to get the early bits right so that I don't have to go back and change them and then mess up stuff later. Object styles don't really help me with that. Do they? I haven't found a good use for them at this stage (Obviously if I had to draw a pink line around each image, or whatever, I'd use object styles for that.)
    Now let me shift back to EPUB. Clearly I need object styles to prepare for export. I'm planning to make a left float style and a right float style and a couple of others for other cases. And I'm basically going to go through the whole doc selecting each image and styling it in whatever way seems likeliest. At this point I will change the inline anchors to above line or custom, since I'm told EPUB doesn't like the inline ones.
    I guess maybe it comes down to this. I realize I have to use object styles for images for EPUB, but for print, manual placement - to make it look just right - and an inline anchor seems best? I sort of feel like if I'm going to bother to use object styles for EPUB I should also use them for print, but maybe that's just not necessary? It feels inefficient to make so many inline anchors and then trade them for a custom thing just for EPUB. But two different outputs means two different workflows. Sometimes you just have to do it twice.
    Does this make sense? What am I missing, before I waste dozens of hours doing it wrong?

    I've moved your question to the InDesign EPUB forum for best results.

  • Best practice "changing several related objects via BDT" (Business Data Toolset) / Mehrere verbundene Objekte per BDT ändern

    Hello,
    I want to start a discussion to find a best-practice method for changing several related master data objects via BDT. At the moment we are faced with miscellaneous requirements where we have a master data object that uses the BDT framework for maintenance (in our case an insured object). While changing or creating the insured object, several related objects, e.g. a Business Partner, should also be changed or created. So I am searching for a best-practice approach to implementing such a solution.
    One idea was to call a report via SUBMIT AND RETURN in event DSAVC or DSAVE. Unfortunately, this implementation method has only poor options for handling errors. Second, it is also hard to keep the LUW together.
    Another idea is to call an additional BDT instance in the DCHCK event via FM BDT_INSTANCE_SELECT and the parameters iv_xpush_classic = 'X' and iv_xpop_classic = 'X'. So far we haven't got this solution working correctly, because there is always something missing (e.g. global memory is not transferred correctly between the two BDT instances).
    So hopefully you can report about your implementations, so that we can find a best-practice approach for such requirements.
    BR/VG
    Dominik

  • Best Practices For Portal Content Objects Transport System

    Hi All,
    I am going to prepare some documentation on the transport system for Portal content objects as part of Best Practices.
    Please help me out and send me some documents related to SAP Best Practices for transport of Portal Content Objects.
    Thanks,
    Iqbal Ahmad
    Edited by: Iqbal Ahmad on Sep 15, 2008 6:31 PM

    Hi Iqbal,
    Hope you are doing good
    Well, have a look at these links.
    http://help.sap.com/saphelp_nw04/helpdata/en/91/4931eca9ef05449bfe272289d20b37/frameset.htm
    This document gives a detailed description.
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/f570c7ee-0901-0010-269b-f743aefad0db
    Hope this helps.
    Cheers,
    Sandeep Tudumu

  • How can we create an entity object using multiple tables?

    Hi All,
    I'm a newbie to OAF.
    I'm trying to create a simple page using OAF.
    While creating an Entity object, there is an option to add the database object on which the Entity object is based.
    There we can enter only one database object.
    If I need to create an Entity object using multiple database objects, how can I add the other database objects?
    Is there any option for multiple selection of database objects there?
    Thanks in Advance

    User,
    a). You should use the OA Framework Forum (http://forums.oracle.com/forums/forum.jspa?forumID=210) for this question.
    b). Entity objects always correspond to a single table. I think you want to create a View object instead.
    c). Really, you want to be using the OA Framework forum.
    John

  • What is the best practice for inserting (unique) rows into a table with a key-column constraint when the source may contain duplicate (already existing) rows?

    My final data table has a unique key constraint on two key columns. I insert data into this table from a daily capture table (which also contains the two columns that make up the key in the final data table, but they are not constrained (not unique) in the daily capture table). I don't want to insert rows from daily capture which already exist in the final data table (based on the two key columns). Currently, what I do is select * into a #temp table from the join of the daily capture and final data tables on these two key columns. Then I delete the rows in the daily capture table which match the #temp table. Then I insert the remaining rows from daily capture into the final data table.
    Would it be possible to simplify this process by using an Instead Of trigger on the final table and just insert directly from the daily capture table? How would this look?
    What is the best practice for inserting unique (new) rows and ignoring duplicate rows (rows that already exist in both the daily capture and final data tables) in my particular operation?
    Rich P

    Please follow basic netiquette and post the DDL we need to answer this. Follow industry and ANSI/ISO standards in your data. You should follow ISO-11179 rules for naming data elements. You should follow ISO-8601 rules for displaying temporal data. We need to know the data types, keys and constraints on the table. Avoid dialect in favor of ANSI/ISO Standard SQL. And you need to read and download the PDF for:
    https://www.simple-talk.com/books/sql-books/119-sql-code-smells/
    >> My final data table contains a two key columns unique key constraint. [unh? one two-column key or two one-column keys? Sure wish you posted DDL] I insert data into this table from a daily capture table (which also contains the two columns that make up the key in the final data table but are not constrained (not unique) in the daily capture table). <<
    Then the "capture table" is not a table at all! Remember the first day of your RDBMS class? A table has to have a key. You need to fix this error. What ETL tool do you use?
    >> I don't want to insert rows from daily capture which already exist in the final data table (based on the two key columns). <<
    MERGE statement; Google it. And do not use temp tables.
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice / Data, Measurements and Standards in SQL / SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking in Sets / Trees and Hierarchies in SQL
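    To make the MERGE suggestion concrete, here is a minimal sketch, assuming SQL Server (the #temp tables above suggest it) and invented table/column names (daily_capture feeding final_data on a two-column key). Rows whose key pair already exists in final_data are simply skipped:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class MergeNewRows {
        public static void main(String[] args) throws Exception {
            // Connection details are placeholders; auto-commit is left on for brevity.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:sqlserver://dbhost:1433;databaseName=mydb", "user", "password");
                 Statement stmt = con.createStatement()) {

                // MERGE with only a WHEN NOT MATCHED branch: key pairs already
                // present in final_data are ignored, new ones are inserted.
                int inserted = stmt.executeUpdate(
                    "MERGE INTO final_data AS f " +
                    "USING (SELECT DISTINCT key_col1, key_col2, payload " +
                    "         FROM daily_capture) AS d " +
                    "   ON f.key_col1 = d.key_col1 AND f.key_col2 = d.key_col2 " +
                    " WHEN NOT MATCHED THEN " +
                    "   INSERT (key_col1, key_col2, payload) " +
                    "   VALUES (d.key_col1, d.key_col2, d.payload);");

                System.out.println(inserted + " new rows inserted");
            }
        }
    }

    Note that if daily_capture can hold the same key pair more than once with different payloads, the source query has to be deduplicated further (for example with ROW_NUMBER()); otherwise the MERGE will complain about multiple source rows matching the same target row.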

  • What is the best practice for creating primary key on fact table?

    What is the best practice for the primary key on a fact table?
    1. Using composite key
    2. Create a surrogate key
    3. No primary key
    In the documentation, I can only find: "From a modeling standpoint, the primary key of the fact table is usually a composite key that is made up of all of its foreign keys."
    http://download.oracle.com/docs/cd/E11882_01/server.112/e16579/logical.htm#i1006423
    I also found a relevant thread that states a primary key on the fact table is necessary.
    Primary Key on Fact Table.
    But if no business requirement demands uniqueness of the records and there is no materialized view, do we still need a primary key? Is there any other adverse effect of having no primary key on the fact table? And are there any benefits from not creating a primary key?

    Well, the natural combination of dimensions connected to the fact would be a natural primary key, and it would be composite.
    Having an artificial PK might simplify things a bit.
    Having no PK leads to a major mess. A fact should represent a business transaction, or some general event. If you're loading data you want to be able to identify the records that are processed. Also, without a PK, if you forget to create a unique key, access to this fact table will be slow. Plus, having no PK means that if you want to use different tools, like the Data Modeller in JBuilder or the OWB insert/update functionality, they won't work, since there's no PK. Defining a PK for every table is good practice. Not defining a PK is asking for a load of problems, from performance to functionality and data quality.
    Edited by: Cortanamo on 16.12.2010 07:12

  • Best practice for deleting multiple rows from a table, using Creator

    Hi
    Thank you for reading my post.
    What is the best practice for deleting multiple rows from a table using a RowSet?
    For example, how can I execute something like:
    delete from table1 where field1 = ? and field2 = ?
    Thank you

    Hi,
    Please go through the AppModel application which is available at: http://developers.sun.com/prodtech/javatools/jscreator/reference/codesamples/sampleapps.html
    The OnePage Table Based example shows exactly how to delete multiple rows from a data table...
    Hope this helps.
    Thanks,
    RK.
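    In plain JDBC the same idea is usually done with a batched PreparedStatement; a minimal sketch, assuming the table1/field1/field2 names from the question and a connection obtained elsewhere:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.List;

    public class MultiRowDelete {

        // Deletes one row per (field1, field2) pair and returns the number of
        // rows removed (some drivers report SUCCESS_NO_INFO instead of counts).
        public static int deleteRows(Connection con, List<Object[]> keys) throws SQLException {
            String sql = "delete from table1 where field1 = ? and field2 = ?";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                for (Object[] key : keys) {
                    ps.setObject(1, key[0]);   // field1 value
                    ps.setObject(2, key[1]);   // field2 value
                    ps.addBatch();             // queue this delete
                }
                int total = 0;
                for (int count : ps.executeBatch()) {
                    if (count > 0) {
                        total += count;
                    }
                }
                return total;
            }
        }
    }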

  • Best Practices for many Remote Objects?

    Large Object Model doing JDBC over RMI
    Please read the following and provide suggestions for a recommended approach or literature that covers this area well?
    N-Tiered Architecture
    JSP/Servlet (MVC) - Database Access Layer - Database
    Applets - JSP/Servlet Engine - Database Access Layer - Database
    Application Layer - Application Layer - Database Access Layer - Database
    I have an object model developed using Torque (a JDBC object-relational modeling framework) for over 100 tables (soon to be over 160) that I am starting to enable over RMI. I have got several remote methods up and running. For some of the simple methods, starting up has been easy. Going forward I foresee issues.
    Each table has a wrapper or data object and a peer object that have setters/getters and special methods as desired. The majority of these classes are extended from Base Objects that have basic common functionality for retrieving, creating, and manipulating with a database using SQL.
    I have started building a Remote Interface and an Implementation class that invoke the necessary methods and classes within the Object Model to successfully read from or update the database. Additionally, the methods will need to return objects that represent non-primitive serializable data objects and collections of objects.
    Going forward client applications, servlets, and jsps will be using the database in more complex and comprehensive methods over rmi. Here are a couple of things I am concerned about.
    1) When to use java.rmi.server.codebase for class loading? In my implementation, several of the remote methods will return objects (e.g. Party, Country, CountryList, AccountList). These objects are themselves composed of other objects that are not part of the client JVM. For all remote methods that return non-primitive objects, must you include the classes in the codebase for the client to operate upon them? Couldn't this be pointless, as you have abstract and extended classes all residing within the codebase? In practice, do people generally build very thin proxy objects for the peer/data objects that hold just the basic table elements and sets?
    2) Server versioning/identity - Going forward, more server classes will be enabled via RMI. Every time one wants to include more methods on an available interface, must you update the interface, create new implementation classes, and redistribute the Remote Interface and stubs/skels to client apps? Is there some sort of list lookup that a client can do to see which processes are available remotely at present (not just at initialization)? As time passes, more or fewer methods might be available.
    Any help is greatly appreciated.

    More on why other approaches would be better?
    I have implemented some proxy objects for the remote Data Objects produced by Torque. To ease the pain, I have also constructed a proxy Builder that takes the table schema and builds a Proxy Object, an interface for the proxy, and methods to copy between the Torque Data Object (which only lives on the server) and the Proxy Object (accessible by client and/or server).
    The generated methods are useable in the object implementing the Remote Interface but are themselves not remoteable. Only the Server would use these methods. Clients can only receive primitives, proxy Objects, or collections of ProxyObjects.
    This seems to be fairly light currently. I had to jump through hoops to use Torque and enable remote apps to use the proxy objects. What scaling issues will come up? Why would EJBs, with containers and all the associated concerns such as CMP vs. BMP, be a better approach?
    Methods can be updated to do several operations verses Torque and return appropriately (transactions). In this implementation the client (Servlet, mini App or App) needs the remote stub and the proxy objects (100 or so) to stand in for the Torque generated Data Objects. A much smaller and lighter set of classes, based on common JDK classes, instead of the torque classes (and necessary abstracts/objects/interfaces/exceptions for Torque).
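    For what it's worth, the proxy approach described above usually ends up looking something like the sketch below: a small Serializable value object that travels over RMI, plus a remote interface whose server-side implementation copies from the Torque data object into the proxy before returning it. All names here are illustrative, not the poster's actual classes:

    import java.io.Serializable;
    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.util.List;

    // Lightweight stand-in for the server-side Torque data object.
    // Only JDK types, so the client needs no Torque classes on its classpath.
    class PartyProxy implements Serializable {
        private static final long serialVersionUID = 1L;

        private long id;
        private String name;
        private String countryCode;

        public long getId() { return id; }
        public void setId(long id) { this.id = id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
        public String getCountryCode() { return countryCode; }
        public void setCountryCode(String countryCode) { this.countryCode = countryCode; }
    }

    // Remote interface: every return type is a primitive, a proxy, or a
    // collection of proxies, so no server-only classes leak to clients.
    interface PartyService extends Remote {
        PartyProxy findParty(long partyId) throws RemoteException;
        List<PartyProxy> findPartiesByCountry(String countryCode) throws RemoteException;
    }

    Because the client only ever sees PartyProxy, the codebase question gets simpler: the client jar carries the proxies and the remote interfaces, while the whole Torque hierarchy stays on the server.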

  • Best Practice Using Data Access Object Pattern

    I was wondering what would be the best approach with regards to implementing the Data Access Object pattern:
    Given the following database tables:
    teacher {teacher_id, teacher_name, ...}
    subject {subject_id, subject_name, ...}
    teacher_subject {teacher_id, subject_id}
    where teacher and subject hold details on teachers and subjects respectively and teacher_subject links teachers with the subjects that they currently teach, would I be better off:
    1) implementing three distinct DAO classes i.e. TeacherDAO, SubjectDAO, TeacherSubjectDAO, that correspond to each database table;
    2) implementing a single DAO class that controls access to all three tables, given that business operations often rely on access to all three tables e.g. getSubjectTeachers(Subject s), assignSubjectToTeacher(Subject s, Teacher t), getSubjectsTaughtBy(Teacher t)...
    Thanks
    Ian

    >> ...would I be better off: <<
    Depends on the system.
    If you are always going to have a teacher with subjects then all you need is one class.
    If, on the other hand, you are sometimes going to have subjects with teachers (given that you have a link table, a many-to-many relationship exists), then two DAOs would exist.
    It would be unlikely for you to have a DAO for the link table unless perhaps you are anticipating a teacher having tens of thousands of subjects or one subject having tens of thousands of teachers.
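    If a sketch helps, a common middle ground is two DAOs (TeacherDAO and SubjectDAO) with the link table handled inside whichever DAO owns the operation; a rough JDBC version of getSubjectsTaughtBy on the schema above (Teacher and Subject are assumed to be simple value objects):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;

    class Teacher {
        private final int id;
        private final String name;
        Teacher(int id, String name) { this.id = id; this.name = name; }
        int getId() { return id; }
        String getName() { return name; }
    }

    class Subject {
        private final int id;
        private final String name;
        Subject(int id, String name) { this.id = id; this.name = name; }
        int getId() { return id; }
        String getName() { return name; }
    }

    public class TeacherDAO {

        private final Connection con;

        public TeacherDAO(Connection con) {
            this.con = con;
        }

        // Resolves the many-to-many link internally: no separate DAO is needed
        // for teacher_subject unless it grows attributes of its own.
        public List<Subject> getSubjectsTaughtBy(Teacher t) throws SQLException {
            String sql = "SELECT s.subject_id, s.subject_name "
                       + "  FROM subject s "
                       + "  JOIN teacher_subject ts ON ts.subject_id = s.subject_id "
                       + " WHERE ts.teacher_id = ?";
            List<Subject> subjects = new ArrayList<>();
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setInt(1, t.getId());
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        subjects.add(new Subject(rs.getInt("subject_id"),
                                                 rs.getString("subject_name")));
                    }
                }
            }
            return subjects;
        }

        // assignSubjectToTeacher(...) would insert into teacher_subject here,
        // in the same DAO, keeping the link-table SQL out of the calling code.
    }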

  • Best practices for deploying common object services

    Hi,
    Our team has broken out from our main application around 10 services that are largely used to return objects from 10 common tables in the database. We are thinking that these services should be reusable among the 5 or so applications that we are going to have in the near future. We're now trying to decide on the best way to make these common services available to the applications, and after considering several ideas, these are the options we've come up with:
    1. Putting jars for all of the services in each application and adding entries to the sessions.xml for any TopLink project mappings that are in the jar files. We are also considering having just one jar containing many services.
    2. Exposing the services through web services and only giving the client apps the client-side code to invoke the web service. We realize this may mean a performance hit, but it would mean less code on the client.
    3. Stateless session EJBs (see the sketch after this post).
    4. A parent-application tag or some other way to make these jars available to all applications on the app server through classloading.
    5. Some sort of messaging service.
    Would appreciate some input on this, as this seems like it would be a fairly common problem.
    Thanks,
    Mark
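    As a rough illustration of option 3 above, a shared lookup service can be packaged once as a stateless session bean that each application calls through its remote interface. The sketch below uses EJB 3 annotations and an invented COUNTRIES table, rather than the native TopLink sessions.xml setup mentioned in option 1, purely to show the packaging idea:

    // File: CountryLookupService.java -- shared with the client applications
    import java.util.List;
    import javax.ejb.Remote;

    @Remote
    public interface CountryLookupService {
        List<String> findAllCountryNames();
    }

    // File: CountryLookupServiceBean.java -- deployed once on the app server
    import java.util.List;
    import javax.ejb.Stateless;
    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;

    @Stateless
    public class CountryLookupServiceBean implements CountryLookupService {

        // Persistence unit name is a placeholder for the shared mappings.
        @PersistenceContext(unitName = "commonObjects")
        private EntityManager em;

        @SuppressWarnings("unchecked")
        public List<String> findAllCountryNames() {
            // Native query against the invented COUNTRIES lookup table.
            return em.createNativeQuery(
                    "SELECT country_name FROM countries ORDER BY country_name")
                     .getResultList();
        }
    }

    The client applications then only need the interface jar and a JNDI lookup (or injection, if they run in the same container), which keeps the mapping metadata and the implementation in one deployable place.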

    DBA2008 wrote:
    >> Is this a good idea, to put the RMAN recovery catalog & OID schema in the OEM Repository DB? I am thinking just to consolidate all these schemas in one DB. <<
    Unless you are really starved for resources, I would not recommend storing the OID and OEM repositories in the same database. Both of these repositories support different products, and you risk creating unnecessary dependencies when patching or upgrading. As a completely fictitious example, what if your OID installation has a critical issue that requires a repository database upgrade to version 10.2.0.6, and the Grid Control repository database is only certified for version 10.2.0.5?
    Regards,
    John P.
    http://only4left.jpiwowar.com

  • Best Practice - Outer Join between Fact and Dim table

    Hi Gurus,
    Need some advice on the below scenario
    I have an OOTB subject area and we have around 50-60 reports based on it. In the related subject area, the Fact and Dim1 tables are joined with an inner join.
    Now I have a scenario for one report where an outer join has to be implemented between Fact and Dim1. I am against changing the OOTB subject area join, as the outer join will impact the performance of the other 50-60 reports.
    Can anyone provide any inputs on what is the best way to handle this scenario?
    Thanks

    Ok. I tried this:
    Driving table: Fact, left outer join -- didn't work.
    Driving table: Dimension D, left outer join -- didn't work either.
    In either case, I see the physical query as D left outer join on Fact F, and it omits the rows.
    And then I tried this -
    Driving table: Fact, right outer join.
    Now, this is giving me an error:
    [Sybase][ODBC Driver]Internal Error. [nQSError: 16001] ODBC error state: 00000 code: 30128 message: [Sybase][ODBC Driver]Data overflow. Increase specified column size or buffer size. [nQSError: 16011] ODBC error occurred while executing SQLExtendedFetch to retrieve the results of a SQL statement. (HY000)
    I checked all the columns; everything matched the database table type and size.
    I am pulling Fact.account number, Dimension.account name, and Fact.Measures. I am seeing this error each time I pull Fact.Account number.

  • Best practices for creating and querying a history table?

    Suppose I have a table of name-value pairs, and I want to keep track of changes to them so that I can query the value of any pair at any point in time.
    A direct approach would be to use a schema like this:
    CREATE TABLE NAME_VALUE_HISTORY (
      NAME      VARCHAR2(...),
      VALUE     VARCHAR2(...),
      MODIFIED DATE
    );
    When a name-value pair is updated, a new row is added to this table with the date of the change.
    To determine the value associated with a name at a particular point in time, one uses a query like:
      SELECT * FROM NAME_VALUE_HISTORY
      WHERE NAME = :name
        AND MODIFIED IN (SELECT MAX(MODIFIED)
                           FROM NAME_VALUE_HISTORY
                           WHERE NAME = :name AND MODIFIED <= :time)
    My question is: is there a better way to accomplish this? What indexes/hints would you recommend?
    What about a two-table approach like this one? http://pratchev.blogspot.com/2007/05/keeping-history-data-in-sql-server.html
    Edited by: user10936714 on Aug 9, 2012 8:35 AM

    user10936714 wrote:
    >> There is one advantage... recording the change of a value is just one insert, and it is also atomic without the use of transactions. <<
    At the risk of being dumb, why is that an advantage? Oracle always and everywhere uses transactions, so it's not like you're avoiding some overhead by not using transactions.
    >> If, for instance, the performance of reading the value of a name at a point in time is not important, then you can get by with just using one table - the history table. <<
    If you're not overly concerned with the performance implications of having the current data and the history data in the same table, rather than rolling your own solution, I'd be strongly tempted to use Workspace Manager to let Oracle keep track of the changes.
    You can create a table, enable versioning, and do whatever DML operations you'd like
    SQL> create table address(
      2    address_id number primary key,
      3    address    varchar2(100)
      4  );
    Table created.
    SQL> exec dbms_wm.enableVersioning( 'ADDRESS', 'VIEW_WO_OVERWRITE' );
    PL/SQL procedure successfully completed.
    SQL> insert into address values( 1, 'First Address' );
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> update address
      2     set address = 'Second Address'
      3   where address_id = 1;
    1 row updated.
    SQL> commit;
    Commit complete.
    Then you can either query the history view:
    SQL> ed
    Wrote file afiedt.buf
      1  select address_id, address, wm_createtime
      2*   from address_hist
    SQL> /
    ADDRESS_ID ADDRESS                        WM_CREATETIME
             1 First Address                  09-AUG-12 01.48.58.566000 PM -04:00
             1 Second Address                 09-AUG-12 01.49.17.259000 PM -04:00
    Or, even cooler, you can go back to an arbitrary point in time, run a query, and see the historical information. I can go back to a point between the time that I committed the first change and the second change, query the ADDRESS view, and see the old data. This is invaluable if you want to take existing queries and/or reports and run them as of certain dates in the past when you're trying to debug a problem.
    SQL> select *
      2    from address;
    ADDRESS_ID ADDRESS
             1 First Address
    You can also do things like set savepoints, which are basically named points in time that you can go back to. That lets you do things like create a savepoint for the data as soon as month-end processing is completed, so you can easily go back to "July Month End" without needing to figure out exactly what time that occurred. And you can have multiple workspaces, so different users can be working on completely different sets of changes simultaneously without interfering with each other. This was actually why Workspace Manager was originally created -- to allow users manipulating spatial data to have extremely long-running transactions that could span days or months -- and to be able to switch back and forth between the current live data and the data in each of these long-running scenarios.
    Justin

  • Best practice for initializing objects in a JSF backing bean?

    Hi,
    What is the best practice for initializing some objects in the JSF to-page backing bean before the to-page is displayed for the first time? The initialization would vary, depending on which command link was clicked in the from-page.
    Regards,
    Al Malin

    f:view has two new attributes in 1.2: beforePhase and afterPhase, which allow you to specify a phase listener method that will be called before and after the view is processed.
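    To make that concrete, here is a minimal sketch of the wiring (bean, page and parameter names are invented): the to-page's f:view points at a backing-bean method, e.g. <f:view beforePhase="#{toPage.beforePhase}">, and that method runs the one-time initialization just before the view is rendered.

    import javax.faces.context.FacesContext;
    import javax.faces.event.PhaseEvent;
    import javax.faces.event.PhaseId;

    // Backing bean for the "to" page, registered as managed bean "toPage".
    public class ToPageBean {

        private boolean initialized;
        private Object selectedItem;   // whatever the page needs; placeholder here

        // Bound via <f:view beforePhase="#{toPage.beforePhase}">; JSF calls it
        // for the phases of this view, so we act only once, right before
        // RENDER_RESPONSE on the initial request.
        public void beforePhase(PhaseEvent event) {
            if (event.getPhaseId() != PhaseId.RENDER_RESPONSE || initialized) {
                return;
            }
            FacesContext ctx = FacesContext.getCurrentInstance();
            // Read the hint that the command link on the "from" page passed
            // along (request parameter name is illustrative).
            String source = (String) ctx.getExternalContext()
                                        .getRequestParameterMap().get("sourceLink");
            selectedItem = loadFor(source);
            initialized = true;
        }

        // Placeholder for the real, link-dependent initialization logic.
        private Object loadFor(String source) {
            return source;
        }

        public Object getSelectedItem() {
            return selectedItem;
        }
    }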
