Best Practice: Loosely coupled, modular usage of entity beans

Hi out there.
I would like to develop a prototype that contains two or more modules (let's call them CarManagement and TireManagement). We all know a car has four tires, so in an EJB3 project using JPA the @Entity called Car would have a Tire attribute, which is also an entity bean.
Now I would like to make my modules exchangeable and independent. What would be the right way to alter my Car entity, which right now imports the class Tire and thereby couples the two modules tightly? How can I get around this? The sketch below shows roughly what the coupled mapping looks like today.
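A minimal sketch of the coupled starting point described above, assuming a plain unidirectional one-to-many mapping (each class would live in its own file):

import java.util.List;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.OneToMany;

// TireManagement module: the Tire entity from the example.
@Entity
public class Tire {

    @Id
    @GeneratedValue
    private Long id;
}

// CarManagement module: Car holds a direct, compile-time reference to Tire,
// which is exactly the tight coupling in question.
@Entity
public class Car {

    @Id
    @GeneratedValue
    private Long id;

    @OneToMany
    private List<Tire> tires;

    public List<Tire> getTires() {
        return tires;
    }
}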
Thanks in advance!

Hi Malrawi.
And thanks for your quick reply. I thought about interfaces too, but there are some points that argue against that:
- By using interfaces I would force a new TireManagement from another company to use my interfaces. The alternative would be to create a wrapper project of my own that implements those interfaces and delegates to the external TireManagement (see the sketch below). But I don't know yet whether all the methods I use will be implemented in some form by the external component.
- Adam Bien says in his book about Java EE 5 patterns that entities shouldn't implement interfaces.
- When I load my Car through the EntityManager there is no lazy loading, or any loading at all, of the tires, I guess. I would have to handle that on my own, right?
Did you already work with exchangeable modules, Malrawi? What are your experiences with that?
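Just to make the wrapper idea from the first bullet concrete (every name below is invented for illustration, not taken from any real external product): CarManagement would depend only on its own interface, and a thin adapter in a separate glue project would translate to whatever the external TireManagement actually offers. Each type would go in its own file.

// Owned by CarManagement; no import of any external module here.
public interface TireService {
    java.util.List<Long> findTireIdsForCar(Long carId);
}

// Hypothetical stand-in for the API the external TireManagement might expose.
public interface ExternalTireManager {
    java.util.List<Long> lookupTires(Long carId);
}

// Lives in the glue project; the only place that knows both sides.
public class ExternalTireServiceAdapter implements TireService {

    private final ExternalTireManager external;

    public ExternalTireServiceAdapter(ExternalTireManager external) {
        this.external = external;
    }

    public java.util.List<Long> findTireIdsForCar(Long carId) {
        return external.lookupTires(carId);
    }
}

One consequence of going this route is the very point raised in the last bullet: Car would then hold plain tire identifiers instead of Tire references, so the persistence provider no longer loads the relationship for you and the lookup has to go through the service.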

Similar Messages

  • Entity Beans Cache

    Hi,
    We are running our application on the WL platform 7.0. We have a number of entity beans (about 30-40) which are container managed and also use CMR.
    The max-beans-in-cache is at its default of 1000. We reach this limit of 1000 for about 10-15 of these beans within a day or two after a restart of the server. (This is a production server; we restart it occasionally for maintenance.) The memory usage of the server keeps increasing, and once the entity cache limit is reached we see that passivation keeps occurring and the heap usage is always at about 80%-95% of the maximum (total heap size is 1.5GB). We assume this could be due to the entity beans that are cached by WebLogic. We also occasionally see performance problems that are probably due to GC or passivation.
    We want to lower our memory usage and also get rid of the occasional slow response times. To do this, is there any way to flush out those beans from the entity cache which are no longer used? WebLogic doesn't seem to flush the cache, only passivate beans as and when new beans are required. Is there any setting to change this behaviour?
    Cheers
    Raja V.

    Thanks Thorick,
    We are using Database concurrency and non-read-only beans, hence I believe this patch should help us.
    Secondly, are you aware of any way to find out the memory usage of the default WLS entity bean cache?
    Cheers
    Raja
    "thorick" <[email protected]> wrote:
    >
    Hi,
    If you are using 'Database' concurrency, then support for an idle-timeout-seconds on this cache will be coming in release 7.0sp5. This feature is intended to ease heap usage for entity beans using Database/Optimistic/ReadOnly concurrency (but NOT Exclusive or read-only!). One sets max-beans-in-cache large enough to handle periodic or occasional peak loads, and idle-timeout-seconds is set to free the cache of unused beans during periods of low demand.
    If you cannot wait for sp5 and are willing to run a patch, there are patches available for 7.0sp2 and 7.0sp3. You'll have to contact your support representative about these. Refer to 'CR110440', courtesy of yours truly!
    Hope this helps
    -thorick
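    For reference, a rough sketch of where those two settings would live in weblogic-ejb-jar.xml, assuming the sp5 feature is exposed through the same idle-timeout-seconds element documented for later WebLogic releases; the bean name and numbers are placeholders only:

    <weblogic-ejb-jar>
      <weblogic-enterprise-bean>
        <ejb-name>SomeEntityEJB</ejb-name>
        <entity-descriptor>
          <entity-cache>
            <!-- sized for occasional peak load -->
            <max-beans-in-cache>2000</max-beans-in-cache>
            <!-- free beans that have sat unused for 10 minutes -->
            <idle-timeout-seconds>600</idle-timeout-seconds>
          </entity-cache>
        </entity-descriptor>
      </weblogic-enterprise-bean>
    </weblogic-ejb-jar>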

  • Experiences on SAP Best Practise packages?

    We are considering applying a specific SAP Best Practices package on top of ECC6. It will be a new ECC6 installation.
    I have read the BP FAQ pages http://help.sap.com/bp_bw370/html/faq.htm and read note 1225909 - How to apply SAP Best Practices - as well as the specific note and the specific installation guide for the required Best Practices package.
    I understand that applying a BP package is not that easy, but it should speed up the implementation process.
    If you have participated in a project that implemented SAP Best Practices on ERP, would you please share your experiences with me.
    Here are some topics for discussion:
    (BP = SAP Best Practices package)
    Could a BP implementation constrain your future EHP or SP upgrades, since BP is tightly linked to a certain EHP and SP level?
    SAP support?
    Is the installation process, including the configuration steps from the BP installation guide, longer than usual, since there can be multiple activation and configuration steps done by people other than basis consultants?
    BPs are country dependent. What if your application is used multinationally?
    What if you later find BP unnecessary?
    Did BP generate difficulties in the beginning but considerably speed up reaching the final goal compared to ECC6 without BP?
    Traps to fall into?
    Typical issues that basis staff have to watch for when preparing/installing an ERP system that will use BP?
    Positive/negative things?
    I am not saying the above statements are true or false; I just wanted to prompt you to give comments.
    Br: KimZi
    Edited by: Kimmzini Siewoinenski on Aug 11, 2009 8:35 PM

  • Business Content Best Practise

    Hi Guys
    Just a quick request -
    I have activated BC a couple of times, but each time it takes longer than it should - missing certain areas, activating far too much, etc.
    Apart from help.sap.com, does anyone have any best practice guides or docs on BC that would let me cut down on my activation effort?
    Thanks.

    Hi Scott,
    I would really recommend activating only the necessary objects (cleaning up before and after is very painful). Just collect the data structures like DataSource, InfoSource, DataStore Object, InfoCube, ..., and Queries (then all InfoObjects get collected). Then collect the linkages (transfer rules, transformations, etc.) - these can easily be found in the help menu. After the collection, activate it in batch.
    best regards clemens

  • SAP Best Practises - Installation

    Hello All,
    I installed the SAP Best Practices baseline package SAP BP-ERP 617V6 on my system a couple of weeks back. But now that I am trying to activate the Best Practices, it is asking for the solution scope files and the installation text files. When I check the Service Marketplace, it seems SAP has withdrawn the files for 617-V6 and replaced them with the latest files for 617-V7.
    Without these files I cannot proceed any further. I have already activated the business sets and Business Enterprise Functions in client 000 and created a new client from 000.
    In this regard, can anyone advise what the best way forward would be? Can I upgrade my BP version to BP-ERP 617-V7 now and proceed? Are there any special steps I need to take for the BC sets which are already activated? I am guessing I will need to do a client copy once again.
    I also went through SAP Note 1301301, point IV, which talks about upgrading the SAP BP package, but it lacks clarity on the above points.
    I have also raised a customer message on this but would really appreciate your views.
    Thanks
    S

    Hello Surajit Das,
    Have you tried looking in the Archive area at http://service.sap.com/swdc?
    SAP will generally hold the previous versions of software and components in the archive area for some time.
    Regards,
    Siddhesh

  • SAP BEST PRACTISE BI-Purchasing Volume Dashboard issue

    Hello,
    I am working on the Purchasing Volume Dashboard (from the SAP BEST PRACTISE BI package). I adapted the Crystal Reports sources of this dashboard so that they connect to our local database, successfully generated the Export.xls, imported this Export.xls file into Purchasing Volume Dashboard.xls, and added and configured the Live Office connections (ticking "Refresh before components are loaded" for each Live Office connection).
    The issue is that when I click on PREVIEW, the four components (charts and scorecard) don't refresh simultaneously.
    I have to click the REFRESH button once to have the first component refreshed with data, then click again to have the second component refreshed, then click again and again!
    I don't understand why those components don't refresh at the same time!
    Has any of you ever encountered this issue?
    many thanks for your help,
    Regards
    Christiane

    Hi,
    Make sure your web service URL is correct in the Live Office connection and also in the Xcelsius data manager.
    Did you check all connections in the refresh button properties? You may try selecting "Refresh After Components Are Loaded" in the refresh button properties Behavior tab.
    I think Xcelsius refreshes are serial, so it may be that the first component's refresh is still in progress while you are expecting the other components to refresh.
    Click "Enable Load Cursor" in the data manager's Usage tab; it will give you visibility of the refresh. If anything is refreshing you will see an hourglass.
    Cheers

  • Basics:  Best practise when using a thesaurus?

    Hi all,
    I currently use a function which returns info for a search on our website; the function is called by the Java code to return hits:
    CREATE OR REPLACE FUNCTION fn_product_search(v_search_string IN VARCHAR2)
      RETURN TYPES.ref_cursor
    AS
      wildcard_search_string VARCHAR2(100);
      search_results         TYPES.ref_cursor;
    BEGIN
      OPEN search_results FOR
        SELECT DCS_PRODUCT.product_id,
               DCS_CATEGORY.category_id,
               hazardous,
               direct_delivery,
               standard_delivery,
               DCS_CATEGORY.short_name,
               priority
          FROM DCS_CATEGORY,
               DCS_PRODUCT,
               SCS_CAT_CHLDPRD
         WHERE NOT DCS_PRODUCT.display_on_web = 'HIDE'
           AND ( contains(DCS_PRODUCT.search_terms, v_search_string, 0) > 0 )
           AND SCS_CAT_CHLDPRD.child_prd_id = DCS_PRODUCT.product_id
           AND DCS_CATEGORY.category_id = SCS_CAT_CHLDPRD.category_id
         ORDER BY SCORE(0) DESC,
               SCS_CAT_CHLDPRD.priority DESC,
               DCS_PRODUCT.display_name;
      RETURN search_results;
    END;
    I want to develop this function so that it will use a thesaurus when no data is found.
    I have been trying to find documentation that discusses 'best practice' for this type of query.
    I am not sure whether I should just include the SYN call directly in this code, or whether the use of the thesaurus should be restricted so that it is only used when the existing function does not return a hit for the search.
    I want to keep overheads and response times to an absolute minimum.
    Does anyone know the best logic to use for this?

    Hi.
    You are asking a lot ("...absolute minimum for response time...") from Oracle Text on 9.2.x.x.
    First, text queries on 9.2 are much slower than on 10.x. Second, it is a bad idea to call the query expansion functions directly from the application.
    My own experience:
    The best practice for thesaurus usage is:
    1. Write a good search string parser which adds the thesaurus expansion functions (like NT, BT, RT, SYN, ...) directly to the result string passed through to the DRG engine.
    2. Use efficient text queries: do not use direct or indirect sorts (the DOMAIN_INDEX_NO_SORT hint can help).
    3. Finally, write efficient application code. The code you show is inefficient.
    Hope this helps.
    WBR Yuri
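    To make point 1 concrete from the calling (Java) side, here is a hedged sketch under the assumption that the fallback logic lives in the application that already calls fn_product_search; the class and method names are made up. The idea: run the plain CONTAINS search first, and only if it returns no rows rebuild the search string with SYN() around each word and run it once more.

    // Hypothetical helper; assumes the raw term has already been sanitized
    // for Oracle Text query syntax (quotes, reserved operators, etc.).
    public final class ThesaurusFallback {

        private ThesaurusFallback() {
        }

        // Wraps each word in SYN(...) so Oracle Text expands it against the
        // default thesaurus; meant only for the second, no-hit fallback query.
        public static String expandWithSynonyms(String searchString) {
            StringBuilder expanded = new StringBuilder();
            for (String word : searchString.trim().split("\\s+")) {
                if (word.isEmpty()) {
                    continue;
                }
                if (expanded.length() > 0) {
                    expanded.append(" AND ");
                }
                expanded.append("SYN(").append(word).append(")");
            }
            return expanded.toString();
        }
    }

    Keeping the expansion on the no-hit path only is in line with both Yuri's parser suggestion and the original goal of not paying the thesaurus overhead on every query.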

  • Best practise - Domain model design

    Hello forum,
    we're writing an application divided into three sub projects where one of the sub projects will be realized using J2EE and the other two sub projects are stand alone fat client applications realized using Swing. So that's the background...
    And now the questions:
    After doing some research on J2EE best practice topics I found the Transfer Object pattern (http://java.sun.com/blueprints/corej2eepatterns/Patterns/TransferObject.html), which we certainly want to apply to the J2EE sub-project and to one of the standalone client applications as well. To avoid code duplication I like the "Entity Inherits Transfer Object Strategy" approach outlined in the document referenced above. But why does the entity bean inherit from the transfer object class and not vice versa? In my opinion the transfer object adds additional functionality (the coarse-grained getData() method) to the class, and isn't it a design goal in OO languages that the class that extends a base class has more functionality than the base class?
    For the standalone application we want to use a similar approach, and the first idea is to design the entities and let the TO classes extend these entities.
    If I understand it correctly, the basic idea behind all of these design schemes is the proxy pattern, but when I design it the previously mentioned way (Entity <-- EntityTO) I will have a very mighty proxy that is able to execute all operations the base class can execute.
    Any tips and comments welcome!
    Thanks in advance!
    Henning
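    For what it's worth, a bare-bones sketch of the "Entity Inherits Transfer Object Strategy" being questioned here, with an invented Customer example and all persistence mapping omitted: the TO is the plain serializable data holder, and the entity extends it, adding only persistence concerns plus the coarse-grained getData() that copies its state into a fresh TO.

    // Plain transfer object: the serializable data holder handed to clients.
    public class CustomerTO implements java.io.Serializable {
        protected Long id;
        protected String name;

        public Long getId() { return id; }
        public void setId(Long id) { this.id = id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    // Entity class (persistence mapping omitted) inherits the state and
    // accessors and adds only the coarse-grained getData() method.
    public class Customer extends CustomerTO {

        public CustomerTO getData() {
            CustomerTO to = new CustomerTO();
            to.setId(getId());
            to.setName(getName());
            return to;
        }
    }

    Flipping the direction (TO extends entity) is exactly what produces the "mighty proxy" described above, because the TO then inherits everything the entity exposes.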

    Hello Kaj,
    at first - thanks for your fast response, and sorry for coming back to this topic so late.
    After reading a bit more on patterns in general: what about avoiding inheritance completely and using the proxy pattern instead (as explained e.g. here: http://www.javaworld.com/javaworld/jw-02-2002/jw-0222-designpatterns.html), i.e. moving the design to a "has a" relationship rather than an "is a" relationship (see the sketch below)?
    In the previous post you said that the client shouldn't be aware that there are entity beans, and therefore the mentioned implementation was chosen. But if I implemented it the other way around (entity as base class, TO extends entity) and did not expose any of the methods of the entity bean, I would achieve the same effect, wouldn't I? Clients would only be able to work with the TOs.
    I have some headaches implementing it the way Sun recommends because of the serialization support required within the TOs. Implemented Sun's way, the entity bean would also be serializable, which isn't necessary because the entities are persisted using Hibernate.
    Thanks in advance
    Henning
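    And a sketch of the "has a" alternative discussed above, reusing the invented CustomerTO from the previous sketch: neither class inherits from the other; a small assembler copies the entity's state into the TO, so only the TO needs to be serializable.

    // Entity stays persistence-only: no Serializable, no TO knowledge.
    // Fields would be populated by Hibernate; constructors omitted for brevity.
    public class Customer {
        private Long id;
        private String name;

        public Long getId() { return id; }
        public String getName() { return name; }
    }

    // Separate assembler produces the TO that is handed to clients.
    public class CustomerAssembler {

        public CustomerTO toTransferObject(Customer entity) {
            CustomerTO to = new CustomerTO();
            to.setId(entity.getId());
            to.setName(entity.getName());
            return to;
        }
    }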

  • Transferring data between two tables - Best practise advice

    Hi!
    I need advice on best practice since I am new to ABAP thinking.
    I have two tables. I am going to transfer data from table1 and update the corresponding master data table with the data in table1.
    What is the best way of doing this? At most 300,000 rows in table1 will be transferred.
    I can only think of one, simple, way, which is to read all the rows into an internal table and then update all the rows in the master data table.
    Is there a better way of doing this?
    thanks in advance,
    regards
    Baran

    Hi!
    1. The update will be done a couple of times per week.
    2. Yes, the fields are the same.
    3. Both tables are SAP dictionary tables. One is a staging table and the other is a master data table. Our problem is that we want to add a custom field to a standard master data table. We added an extra field to the staging table and the same field to the corresponding master data table, but the standard API does not support transferring data between custom fields, so we are developing our own code to do this.
    After some standard code has transferred the standard fields from the staging tables to the master data tables, we are going to transfer our field by updating all the rows in the standard table.
    thanks
    regards
    Baran

  • Oracle Tuxedo Security Best Practises

    Hi,
    I am new to Oracle Tuxedo. I am searching for information about Tuxedo security best practices. I found a lot of information in the Tuxedo documentation, but if anybody has more, I am very interested.
    Such as:
    - ULOG file permissions => The Tuxedo administrator must not have write access to these files, but if I remove that right, can Tuxedo still write to them?
    - tlisten.pw => What is the encryption type, and can I add only one user password or several? Is it true that there is no user login?
    - tpsysadm and tpsysop => What are they for? Where are their passwords stored, and how can I change them?
    - Use of LLE/SSL => What is the best practice: LLE and SSL together, just LLE, or just SSL?
    Thanks a lot!
    Best regards

    Hi,
    welcome to the wonderful (and sometimes byzantine) world of Tuxedo!
    You have a couple of interesting questions and I'll try to shed some light on some of them. Disclaimer: I'll assume that you run Tuxedo on some flavor of Linux or Unix. If you're running on Windows, some of these thoughts won't make much sense to you, sorry about that.
    When I install the Tuxedo software, I usually let a dedicated user (e.g. "tuxedo") be the owner of the installed software and files (include files, FML field definitions and so on).
    When I create a Tuxedo application, I have a separate user account (e.g. "some_application") running each application. That way an application running wild cannot overwrite or delete any Tuxedo system files, nor another application's files, only its own, thanks to file system permissions. In this case, "some_application" will execute your Tuxedo servers and also needs to own the directory where the ULOG will reside (remember that the application needs to be able to create a new file every new day).
    The tlisten.pw file is not for "user" passwords; its primary use is to authenticate the different (physical) machines working together in a bridged (clustered) Tuxedo application. It is also used in conjunction with TSAM monitoring, although I have no first-hand experience with that (yet). I've had problems trying to have more than one secret in the tlisten.pw file, your mileage may vary...
    When it comes to tpsysadm and tpsysop, you should think of them more as roles than as actual users. These roles may perform special actions (such as starting/stopping/re-configuring) in your application. Depending on your security settings, any user may (try to) act as tpsysadm and/or tpsysop. Any user passwords you may have are connected to the actual users rather than to the roles tpsysadm or tpsysop. All this depends on your settings for SECURITY and AUTHSVC in your ubbconfig. There is no simple/easy answer here, I'm afraid... it all depends on how you have set up your security (USER_AUTH is a good start, but you need to supply an AUTHSVC in that case).
    When it comes to encryption, my experience is only with LLE. It simply works. Using SSL I suspect there will be more challenges setting up certificates and such things. The way I understand it, you either use LLE or SSL for a given type of communication (i.e. WSL or TDOMAIN); you can't use both simultaneously.
    Hope this helps and I may be able to elaborate further if there's a particular area that seems particularly foggy :-)
    /Per

  • Any best practise to archive POs which do not have a corresponding invoice

    Hello,
    As part of the initial implementation and conversion, we have a lot of POs / LTAs created whose corresponding invoices were never converted into SAP from the legacy system. The SAP archiving program tags those as not business complete because the invoice quantity does not match the PO quantity (there are no invoices to start with). Just flagging 'delivery complete' and 'final confirmation' on the PO does not help. Has anybody run into a similar situation, and how did you resolve it? I am reluctant to enhance the standard SAP archiving program to bypass those checks; that is my option of last resort. Any SAP-recommended note / best practice etc. would help.
    Satyajit Deb

    Where is the invoice posted?
    Was the invoice posted in the legacy system?
    Clearance of GR/IR account with MR11 will usually close such POs.

  • When granting a user or a role access to a group of pages, it is best practise to grant that access to what type of file or component?

    My question is the same: when granting a user or role access in the application, what is the best practice? How do you decide at which level to apply the role - to page definitions, XML files, or some other file that I have missed?

    As far as I am concerned, I would go for the page definition files.

  • Best practise in SAP BW master data management and transport

    Hi sap bw gurus,
    I would like to know what the best practice is for SAP BW master data transport. For example, if I update my attributes in development, what are the 'required only' BW objects I should transport?
    Appreciate advice.
    Thank you,
    Eric

    Hi Vishnu,
    Thanks for the reply, but that answer would be more suitable if I were implementing a new BW system. What I'm looking for is more about daily operational maintenance and transport (for a BW system that has been live for a while).
    Regards,
    Eric

  • What is the best practise to provide a text file for a Java class in an OSGi bundle in CQ?

    This is probably a very basic question so please bear with me.
    What is the best way to provide a .txt file to be read by a Java class in an OSGi bundle in CQ 5.5?
    I have been able to read a file called "test.txt" that I put in a structure like /src/resources/<any-sub-folder>/test.txt from my Java class at /src/main/java/com/test/mytest/Test.java using the bundle's getResource and getEntry calls, but I was not able to use context.getDataFile. How is this getDataFile method call meant to be used?
    And what if I want to read a file located in another bundle - is that possible? Or can I add the file to some repository and then access it? I am not clear on how to do this.
    I would also like to know what the best practice is if I need to provide a large data set in a flat file to be read by a Java class in CQ5.
    Please provide detailed steps or point me to a how-to guide or other helpful resources, as I am a novice.
    Thank you in advance for your time and help.
    VS

    As you can read in the OSGi Core specification (section 4.5.2), the getDataFile() method is to read/write a file in the bundle's private persistent area. It cannot be used to read files contained in the bundle. The issue Sham mentions refers to a version of Felix which is not used in CQ.
    The methods you mentioned (getResource and getEntry) are appropriate for reading files contained in a bundle.
    Reading a file from the repository is done using the JCR API. You can see a blueprint for how to do this by looking at the readFile method in http://svn.apache.org/repos/asf/jackrabbit/tags/2.4.0/jackrabbit-jcr-commons/src/main/java/org/apache/jackrabbit/commons/JcrUtils.java. Unfortunately, this method is not currently usable as it was declared incorrectly (should be a static method, but is an instance method).
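    To illustrate the getEntry() route in code, here is a minimal sketch, assuming the folder from the question ends up at the bundle root when the jar is built (the class name and path are just taken from the question):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.net.URL;

    import org.osgi.framework.BundleContext;

    public class BundleTextFileReader {

        // Reads a file packaged inside this bundle; getEntry() looks only in
        // this bundle, while getResource() also searches the classpath/fragments.
        public String readTestFile(BundleContext context) throws IOException {
            URL url = context.getBundle().getEntry("/any-sub-folder/test.txt");
            if (url == null) {
                throw new IOException("test.txt not found in this bundle");
            }
            StringBuilder content = new StringBuilder();
            BufferedReader reader = new BufferedReader(
                    new InputStreamReader(url.openStream(), "UTF-8"));
            try {
                String line;
                while ((line = reader.readLine()) != null) {
                    content.append(line).append('\n');
                }
            } finally {
                reader.close();
            }
            return content.toString();
        }
    }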
    Regards,
    Justin

  • Advice or best practise information about 1 or 2 clients in SAP R/3 DEV

    I'm searching for advice or best practice information about clients in an SAP R/3 development system.
    The reason is that we are about to refresh our SAP R/3 development system, and up to now we have had two clients on it:
    - One customizing/development client without master data, transaction data, et cetera
    - One local test client with master data, transaction data and so on
    One of our developers suggested having only one client in development, where we could customize, program and test. That client would then contain master data, transaction data, et cetera.
    What would be your advice, or what would be best practice for the development system: 1 client (with data) or 2 clients (one clean customizing client and one with data)? And what are the most important reasons for that choice?
    Maybe there is already some good (SAP) information about this specific subject, but up to now I haven't found it.

    Maybe I've asked my question too broadly. I'll try to narrow it down.
    Up to now we always had two clients on our SAP R/3 development system:
    - Client 200 - Customizing/development only. No other data in this client
    - Client 400 - Local test client with master data and transaction data. New customizing is copied from client 200 to test
    The reasons for having those two clients are:
    - It somehow feels good to have a customizing-only client
    - We've always done it this way before
    A developer suggested having only one client in our SAP R/3 development system, for the following reasons:
    - You never need to copy the customizing (transaction SCC1) first to be able to test it
    - You can work in one client and don't need to log in to the other client to test things (for example, ABAP reports)
    - For simple customizing settings (for example the product hierarchy, which we don't test every time in client 400) it is possible to forget to copy it into client 400 (the test client). With one client, you cannot forget it
    This developer's reasons seem very valid, and up to now we haven't found a convincing/compelling reason to make a good choice between one or two clients.
    Please try to convince us with good reasons to choose one or two clients.
