Best Practices for EJB--memory leak avoidance

I am researching a memory leak. From what I can tell, we do not want to put any instance variables at the class level, since this would incur a memory leak when the EJB container creates the EJB.
I am wondering if there are any more 'best practices' with EJBs that prevent memory leaks. A good site would help me too.
Russ

Thanks for your reply.
You are right. I was referring to stateless session beans.
What I was thinking was the following:
1. Making parameters final unless the parameter's state is changed inside a method.
Won't help with memory leaks. There are benefits to an object being immutable, but I don't think that it eliminates the possibility of the object being leaked.
2. All instance variables in stateless session beans should be set to null in ejbPassivate and ejbRemove, and populated in ejbCreate and ejbActivate.
Optimizing compilers don't need the hint of setting the reference to null. I don't think that helps, but I'm not 100% certain.
3. Before throwing an exception at the session bean level, clear up all the instance variables by setting them to null.
Shouldn't scope make it clear to the GC that the objects aren't needed? And if they're all primitives, how does this help with memory leaks?
I am using a profiler to identify the objects still being held onto after GC, but I was hoping others with more experience than I have would share some of their tips on how they avoid memory leaks.
I don't know where you're getting these tips, but I don't think they're helpful.
Collections and Singletons would be the places where I'd look. If you add an object to a collection that holds onto it and never lets go of the reference, the GC won't reclaim it.
Singletons aren't cleaned up, because their instance is static. Anything the Singleton refers to won't be cleaned up unless the Singleton relinquishes the reference.
Look for things like that. I think they matter more.
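To make the collection/singleton point concrete, here is a minimal hypothetical sketch (the class and method names are invented for illustration): anything added to the singleton's list stays strongly reachable for the life of the class loader, so the GC can never collect it until something explicitly removes it.

public class ListenerRegistry {
    // The static INSTANCE keeps the registry alive for the life of the class loader.
    private static final ListenerRegistry INSTANCE = new ListenerRegistry();

    // Every object added here stays strongly reachable until it is removed.
    private final java.util.List<Object> listeners = new java.util.ArrayList<Object>();

    private ListenerRegistry() {
    }

    public static ListenerRegistry getInstance() {
        return INSTANCE;
    }

    public void register(Object listener) {
        listeners.add(listener);      // leaks if nobody ever calls unregister()
    }

    public void unregister(Object listener) {
        listeners.remove(listener);   // releasing the reference lets the GC reclaim it
    }
}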

Similar Messages

  • Best Practice for EJB calls from servlet?

    Hi folks
    I could not find general rules for making calls to a stateful EJB from the web container (e.g. from a backingBean). In some books they say it is bad programming style to call them directly from a common servlet. The book says to first create an HttpSession object and call the stateful EJB from there.
    I'm a bit confused, because I'm missing a best-practice guide on where to initiate such calls.
    Can somebody please point me in the right direction?
    Kind Regards
    Bruno

    Hi Bruno,
    The main issue with the combination of stateful session beans and servlets is the servlet threading model.
    It is dangerous to store a stateful session bean reference in servlet instance state, since the servlet instance
    can be accessed concurrently, yet a stateful session bean reference is intended to be used by only one
    client.
    As you point out, one alternative is to store the reference in the HttpSession. That associates the reference
    with a particular client, which matches the stateful session bean programming model.
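    A minimal sketch of that approach (the bean, interface, and attribute names are invented, and the JNDI name is an assumption that depends on how the bean is deployed):

    // Hypothetical servlet: the stateful bean reference lives in the HttpSession,
    // never in a servlet instance field, so each client gets its own bean instance.
    public class CartServlet extends javax.servlet.http.HttpServlet {
        protected void doGet(javax.servlet.http.HttpServletRequest request,
                             javax.servlet.http.HttpServletResponse response)
                throws javax.servlet.ServletException, java.io.IOException {
            javax.servlet.http.HttpSession session = request.getSession();
            ShoppingCart cart = (ShoppingCart) session.getAttribute("cart");
            if (cart == null) {
                try {
                    // Assumed JNDI name; adjust to your container's naming conventions.
                    cart = (ShoppingCart) new javax.naming.InitialContext()
                            .lookup("java:comp/env/ejb/ShoppingCart");
                } catch (javax.naming.NamingException e) {
                    throw new javax.servlet.ServletException(e);
                }
                session.setAttribute("cart", cart);
            }
            cart.addItem(request.getParameter("item"));
        }
    }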

  • Best practice for @EJB injection in junit test (out-of-container) ?

    Hi all,
    I'd like to run a JUnit test for a Stateless bean A which has another bean B injected via the @EJB annotation. Both beans are pure EJB 3 POJOs.
    The JUnit test should run out-of-container and is not meant to be an EJB client, either.
    What is the easiest/suggested way of getting this injection to happen without explicitly having to instantiate bean B in my test setup?

    You can deal with entity beans without the container-managed scenario: you can obtain an instance of EntityManager using the EntityManagerFactory, providing the persistence.xml file and the provider (TopLink, Hibernate, ...); then you can use the entities as plain, unmanaged classes.
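    As a sketch of that out-of-container approach (the persistence unit name, the bean classes, and their setters are assumptions about your project, not a fixed recipe):

    // Hypothetical JUnit 4 test: bootstrap JPA yourself and wire bean B into bean A
    // by hand, instead of relying on the container's @EJB injection.
    public class BeanATest {
        private static javax.persistence.EntityManagerFactory emf;
        private javax.persistence.EntityManager em;
        private BeanA beanA;

        @org.junit.BeforeClass
        public static void createFactory() {
            // "testPU" must be defined in META-INF/persistence.xml with a JDBC data source.
            emf = javax.persistence.Persistence.createEntityManagerFactory("testPU");
        }

        @org.junit.Before
        public void setUp() {
            em = emf.createEntityManager();
            beanA = new BeanA();
            beanA.setBeanB(new BeanB());   // manual wiring replaces @EJB
            beanA.setEntityManager(em);    // manual wiring replaces @PersistenceContext
        }

        @org.junit.After
        public void tearDown() {
            em.close();
        }

        @org.junit.Test
        public void beanIsUsableOutsideTheContainer() {
            org.junit.Assert.assertNotNull(beanA);
        }
    }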

  • Best Practices for multiple authors using single project?

    We are having many issues, particularly with moving, renaming, and multiple check-out warnings. We have a single project with many authors and it seems like RH is not designed to work that way. There is an article in the RH devnet archive entitled "Sharing RoboHelp Project Among Multiple Authors" that says:
    "At first there may be the temptation to let every author work on every file in a project. This is certainly not a best practice. Regardless of source control, it is always best to designate certain authors as owning certain content-related sections, folders, or topics within a project - particularly at the folder level."
    This statement, and our experience, seems to indicate that RH is not a true CMS as we had envisioned. What are the best practices for this scenario, to avoid stepping on each other's toes and having problems with source control?

    I have moved this to the source control forum for the gurus there to answer.
    Meantime, I must admit I read that statement the same way as you did the first time. However, on rereading, I think what the author is saying is not that what you want cannot be done, but rather that it is best practice to guide authors to work in discrete areas.
    I will leave it to the author or another guru to give you a more complete answer.
    See www.grainge.org for RoboHelp and Authoring tips
    @petergrainge

  • Best practice for customizing EJB property after deployment

    Hi Gurus,
      What is the best practice for customizing an EJB property after deployment in NW 7.1? I have a stateless session bean and it needs to get some environment information before acting, but the information can only be known at runtime. What should I do to achieve this? I thought I could bind the property to a JNDI context, but I did not find out where to declare and change the context value. Please advise. Thanks.
    B.R.

    Hi.
    I have a similar problem. But I still can not edit the properties of the ejb-jar.xml.
    I tried to stop the web service, but the properties still remain unmodifiable.
    Could you advise me how to change them?
    We have installed SAP Server 7.0.2
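    For reference, the pattern the original question is after usually looks something like this sketch (the names are illustrative, and whether the value can be overridden after deployment depends on the server's admin tooling; this is an assumption, not NW 7.1-specific guidance):

    // Hypothetical bean reading a configurable value from its environment entries.
    @javax.ejb.Stateless
    public class ConfigurableBean {
        // Corresponds to an <env-entry> named "targetHost" in ejb-jar.xml;
        // the value is resolved from java:comp/env at runtime.
        @javax.annotation.Resource(name = "targetHost")
        private String targetHost;

        public String describeTarget() {
            return "configured to talk to " + targetHost;
        }
    }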

  • Best practice for a deplomyent (EAR containing WAR/EJB) in a productive environment

    Hi there,
    I'm looking for some hints regarding to the best practice deployment in a productive
    environment (currently we are not using a WLS-cluster);
    We are using ANT for building, packaging and (dynamic) deployment (via weblogic.Deployer)
    on the development environment and this works fine (in the meantime).
    From my point of view, I would prefer this kind of deployment not only
    for development, but also for the productive system.
    But I found some hints in some books, and these guys prefer static deployment
    for the p-system.
    My question now:
    Could anybody provide me with some links to whitepapers regarding best practice
    for a deployment into a p-system?
    What is your experience with the new two-phase deployment coming with WLS 7.0?
    Is it really a good idea to use static deployment (and what is the advantage of
    this kind of deployment)?
    THX in advance
    -Martin

    Hi Siva,
    What best practice are you looking for? If you can be specific in your question we could provide an appropriate response.
    From my Basis experience, some of the best practices:
    1) Productive landscape should have high availability to business. For this you may setup DR or HA or both.
    2) It should have backup configured for which restore has been already tested
    3) It should have all the monitoring setup viz application, OS and DB
    4) Productive client should not be modifiable
    5) Users in Production landscape should have appropriate authorization based on SOD. There should not be any SOD conflicts
    6) Transport to Production should be highly controlled. Any transport to Production should be moved only with appropriate Change Board approvals.
    7) Relevant Database and OS security parameters should be tested before golive and enabled
    8) Pre-Golive , Post Golive should have been performed on Production system
    9) EWA should be configured at least for the Production system
    10) Production system availability using DR should have been tested
    Hope this helps.
    Regards,
    Deepak Kori

  • Best practice for CPU and memory usage?

    I find my AIR application takes a lot of memory -- usually
    >170M. And what is strange is that the memory usage is
    increasing (about 4K/s) even when the application is simply sitting
    there and doing nothing. The CPU usage is supposed to be 0% when the
    application is doing nothing, but it's not (usually ~5%). So I
    wonder if there is any article about best practice on CPU/Memory
    usage.

    Those numbers indicate that your application is in fact doing
    something. Perhaps you have a timer still running, or work being
    done on an enterFrame event?

  • Best practices for ARM - please help!!!

    Hi all,
    Can you please help with any pointers / links to documents describing best practices for "who should be creating" the GRC request in the below ARM workflow in GRC 10.0?
    Create GRC request -> role approver -> risk manager -> security team
    Options are: end user / manager / functional super users / security team.
    End user and manager are not possible - we cannot train so many people. The functional team is refusing since it is a lot of work. Please help me with pointers to any best practices documents.
    Thanks!!!!

    In this case, I recommend proposing that the department managers create GRC Access Requests.  In order for the managers to comprehend the new process, you should create a separate "Role Catalog" that describes what abilities each role enables.  This Role Catalog needs to be taught to the department Managers, and they need to fully understand what tcodes and abilities are inside of each role.  From your workflow design, it looks like Role Owners should be brought into these workshops.
    You might consider a Role Catalog that the manager could filter on and make selections from.  For example, an AP manager could select "Accounts Payable" roles, and then choose from a smaller list of AP-related roles.  You could map business functions or tasks to specific technical roles.  The design flaw here, of course, is the way your technical roles have been designed.
    The point being, GRC AC 10 is not business-user friendly, so using an intuitive "Role Catalog" really helps the managers understand which technical roles they should be selecting in GRC ARs.  They can use this catalog to spit out a list of technical role names that they can then search for within the GRC Access Request.
    At all costs, avoid having end-users create ARs.  They usually select the wrong access, and the process then becomes very long and drawn out because the role owners or security stages need to mix and match the access after the fact.  You should choose a Requestor who has the highest chance of requesting the correct access.  This is usually the user's Manager, but you need to propose this solution in a way that won't scare off the manager - at the end of the day, they do NOT want to take on more work.
    If you are using SAP HR, then you can attempt HR Triggers for New User Access Requests, which automatically fill out and submit the GRC AR upon a specific HR action (New Hire, or Termination).  I do not recommend going down this path, however.  It is very confusing, time consuming, and difficult to integrate properly.
    Good luck!
    -Ken

  • Best Practices for new iMac

    I posted a few days ago re failing HDD on mid-2007 iMac. Long story short, took it into Apple store, Genius worked on it for 45 mins before decreeing it in need of new HDD. After considering the expenses of adding memory, new drive, hardware and installation costs, I got a brand new iMac entry level (21.5" screen,
    2.7 GHz Intel Core i5, 8 GB 1600 MHz DDR3 memory, 1TB HDD running Mavericks). Also got a Superdrive. I am not needing to migrate anything from the old iMac.
    I was surprised that a physical disc for the OS was not included. So I am looking for any Best Practices for setting up this iMac, specifically in the area of backup and recovery. Do I need to make a boot DVD? Would that be in addition to making a Time Machine full backup (using external G-drive)? I have searched this community and the Help topics on Apple Support and have not found any "checklist" of recommended actions. I realize the value of everyone's time, so any feedback is very appreciated.

    OS X has not been officially issued on physical media since OS X 10.6 (arguably 10.7 was issued on some USB drives, but this was a non-standard approach for purchasing and installing it).
    To reinstall the OS, your system comes with a recovery partition that can be booted to by holding the Command-R keys immediately after hearing the boot chimes sound. This partition boots to the OS X tools window, where you can select options to restore from backup or reinstall the OS. If you choose the option to reinstall, then the OS installation files will be downloaded from Apple's servers.
    If for some reason your entire hard drive is damaged and even the recovery partition is not accessible, then your system supports the ability to use Internet Recovery, which is the same thing except instead of accessing the recovery boot drive from your hard drive, the system will download it as a disk image (again from Apple's servers) and then boot from that image.
    Both of these options will require you have broadband internet access, as you will ultimately need to download several gigabytes of installation data to proceed with the reinstallation.
    There are some options available for creating your own boot and installation DVD or external hard drive, but for most intents and purposes this is not necessary.
    The only "checklist" option I would recommend for anyone with a new Mac system, is to get a 1TB external drive (or a drive that is at least as big as your internal boot drive) and set it up as a Time Machine backup. This will ensure you have a fully restorable backup of your entire system, which you can access via the recovery partition for restoring if needed, or for migrating data to a fresh OS installation.

  • JSF - Best Practice For Using Managed Bean

    I want to discuss what is the best practice for managed bean usage, especially using session scope or request scope to build database driven pages
    ---- Session Bean ----
    - In the book Core JavaServer Faces, the author mentioned that in most cases a session bean should be used, unless the processing is passed on to another handler. Since JSF can store the state on the client side, I think storing everything in session is not a big memory concern (can some expert confirm this is true?). Session objects are easy to manage and state can be shared across pages. It can make programming easy.
    In the case of a page bound to a result set, the bean usually holds a java.util.List object for the result, which is initialized in the constructor by querying the database first. However, this approach has a problem: when the user navigates to another page and comes back, the data is not refreshed. You can of course solve the problem by issuing the query every time in your getXXX method, but you need to be very careful that you don't bind this XXX property too many times. In the case of querying in getXXX, setXXX is also tricky, as you don't have a member to set. You usually don't want to persist the result-set changes in setXXX, as the changes may not be final; instead, you want to handle that in the action listener (like a save(actionEvent)).
    I would glad to see your thought on this.
    --- Request Bean ---
    A request bean is initialized every time a request is made. It sometimes drove me nuts because JSF seems not to be very consistent in updating model values. Suppose you have a page showing a parent-children list of records from the database, and you also allow the user to change the children directly. If I bind the parent to a bean called #{Parent} and bind the children to an ADF table (value="#{Parent.children}" var="rowValue"), and I set Parent as request scope, the setChildren method is never called when I submit the form. Not sure if this is just for ADF or if it is a JSF problem. But if you change the bean to session scope, everything works fine.
    I believe JSF doesn't update the bindings for all component attributes. It only updates the input component value binding. Someone please verify this is true.
    In many cases, I found a request bean is very hard to work with if there are lots of updates (I have lots of trouble updating the binding value for rendered attributes).
    However, a request bean works fine for read-only pages and simply bound forms. It definitely frees up memory quicker than a session bean.
    ----- any comments or opinions are welcome!!! ------

    I think it should be either Option 2 or Option 3.
    Option 2 would be necessary if the bean data depends on some request parameters.
    (Example: Getting customer bean for a particular customer id)
    Otherwise Option 3 seems the reasonable approach.
    But, I am also pondering on this issue. The above are just my initial thoughts.
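    One common compromise the first post circles around - query lazily in the getter, but cache the result for the rest of the request so repeated EL evaluations do not hit the database again - looks roughly like this sketch (the bean, entity, and DAO names are invented for illustration):

    // Hypothetical request-scoped backing bean with a lazily loaded, per-request cache.
    public class CustomerListBean {
        private java.util.List<Customer> customers;          // null until first access in this request
        private final CustomerDao dao = new CustomerDao();   // stand-in for real data access code

        public java.util.List<Customer> getCustomers() {
            if (customers == null) {
                customers = dao.findAll();                    // runs at most once per request
            }
            return customers;
        }
    }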

  • Best practice for upgrading task definition without deleting task instances

    best practice for upgrading task definition in production system without deleting or terminating task instances
    If I try and update a task definition with task instances running I get the following error:
    Task definition 'My Task - Add User' may not be modified while there are active task instances
    Is there a best practice to handle this? I tried to force an update through the console, but that didn't work. I tried editing the task from the debug page and got the same error.

    1) Rename the original task definition.
    2) Upload the new task definition with the original name.
    3) Later, after all the running tasks have timed out, delete the old definition.
    E.g., if your task definition is "myWorkflow":
    1) Rename "myWorkflow" to "myWorkflow-old-2009-07-28"
    2) Upload the new task definition as "myWorkflow".
    Existing tasks will stay linked to the original (renamed) workflow definition.
    New tasks will use the new definition.
    As the previous poster notes, depending on the changes you are making, letting the old task definitions stay active could have bad side-effects and might be better avoided.

  • Best practice for distributing/releasing J2EE applications.

    Hi All,
    We are developing a J2EE application and would like some information on the best
    practices to be followed for distributing/releasing J2EE applications, in general.
    In particular, the dilemma we have is centered around the generation of stub, skeleton
    and additional classes for the application.
    Most App. Servers can generate the required classes while deploying the EJBs in the
    application i.e. at install time. While some ( BEA Weblogic and IBM Websphere are
    two that we are aware of ) allow these classes to be generated before the installation
    time and the .ear file containing the additional classes is the one that is uploaded.
    For instance, say we have assembled the application "myapp.ear" . There are two ways
    in which the classes can be generated. The first is using 'ejbc' ( assume we are
    using BEA Weblogic ), which generates the stub, skeleton and additional classes for
    the application and returns the file, say, "Deployable_myapp.ear" containing all
    the necessary classes and files. This file is the one that is then installed. The
    other option is to install the file "myapp.ear" and let the Weblogic App. server
    itself, generate the required classes at the installation time.
    If the first way, of 'pre-generating' the stubs, is followed, does it require us to
    separately generate the stubs for each version of the App. Server that we support?
    I.e. if we generate a deployable file having the required classes using the 'ejbc'
    of Weblogic Ver5.1, can the same file be installed on Weblogic Ver6.1, or do we
    have to generate a separate file?
    If the second method, of 'install-time generation' of stubs, is used, what is the
    nature/magnitude of the risk that we are taking in terms of failure of the installation?
    Any links to useful resources as well as comments/suggestions will be appreciated.
    TIA
    Regards,
    Aasif

    It's much easier to distribute schema/data from an older version to a newer one than the other way around. Nearly all SQL Server deployment features support database version upgrades, and these include the "Copy Database" wizard, BACKUP/RESTORE,
    detach/attach, script generation, the Microsoft Sync framework, and a few others.
    Even if you just want to distribute schemas, you may want to distribute the entire database, and then truncate the tables to purge data.
    Backing up and restoring your database is by far the most RELIABLE method of distributing it, but it may not be practical in some cases, because you'll need to generate a new backup every time a schema change occurs - though not if you already have an automated
    backup/maintenance routine in your environment.
    As an alternative, you can use the Copy Database functionality in SSMS, although it may prove unstable in some situations, especially if you are distributing across multiple subnets and/or domains. It will also require you to purge data if/when applicable.
    Another option is to detach your database, copy its files, and then attach them in both the source and destination instances. It will generate downtime for your detached databases, so there are better methods for distribution available.
    And then there is the previously mentioned method of generating scripts for the schema, and then using an INSERT statement or the import data wizard available in SSMS (which is very practical and implements an SSIS package internally that can be saved for repeated
    executions). It works fine; it's not as practical as the other options, but it is the best way to distribute databases when their version is being downgraded.
    With all this said, there is no "best practice" for this. There are multiple features, each offering their own advantages and downfalls which allow them to align to different business requirements.

  • Best practice for mouseless ADF applications

    I am developing an ADF application where the users do not want to use the mouse.
    So I would like to know if there is a best practice for this.
    I am already using the accessKey functionality and the subform's defaultCommand.
    But I have had problems setting focus to objects on a page, like tables. I would like a button to return the focus to the table after it has performed a command like delete.
    I have implemented a solution for which I have found inspiration in several threads and other web pages (see below).
    Is this solution okay?
    Are there any problems with it?
    I would also like to know if there are better pathways to go like
    out of the box solutions,
    http://www.oracle.com/technetwork/developer-tools/adf/learnmore/79-global-template-button-strategy-360139.pdf (is there an example implementation?), or
    http://one-size-doesnt-fit-all.blogspot.dk/2010/11/adf-ui-shell-supporting-global-hotkeys.html
    in advance thanks
    Inspiration webpages
    https://blogs.oracle.com/jdevotnharvest/entry/how_to_programmatically_set_focus
    http://technology.amis.nl/2008/01/04/adf-11g-rich-faces-focus-on-field-after-button-press-or-ppr-including-javascript-in-ppr-response-and-clientlisteners-client-side-programming-in-adf-faces-rich-client-components-part-2/
    how to Commit table by writting Java code in Managed Bean?
    Table does not refresh and getting error as UIComponent is Null
    A short description of the solution:
    (jdeveloper version 11.1.1.2.0)
    --- Example where I use onSetFocus in jsff page
    <af:commandButton text="#{hrsusuiBundle.FOCUS}" id="cb10"
    partialSubmit="true" accessKey="f"
    shortDesc="Alt+Shift+F"
    actionListener="#{managedBean_clientUtils.onSetFocus}">
    <af:clientAttribute name="focusField" value="t1"/>
    </af:commandButton>
    --- Examples where I use doTableActionAndSetFocus in jsff page
    --- There has to be a binding in the jsff page for delete, commit and rollback
    <af:commandButton text="#{hrsusuiBundle.DELETE}" id="cb4"
    accessKey="x"
    shortDesc="Alt+Shift+X"
    partialSubmit="true"
    actionListener="#{managedBean_clientUtils.doTableActionAndSetFocus}">
    <af:clientAttribute name="focusField" value="t1"/>
    <af:clientAttribute name="actionField" value="Delete"/>
    </af:commandButton>
    <af:commandButton text="#{hrsusuiBundle.COMMIT}" id="cb5"
    accessKey="s" shortDesc="Alt+Shift+S"
    partialSubmit="true"
    actionListener="#{managedBean_clientUtils.doTableActionAndSetFocus}">
    <af:clientAttribute name="focusField" value="t1"/>
    <af:clientAttribute name="actionField" value="Commit"/>
    </af:commandButton>
    <af:commandButton text="#{hrsusuiBundle.ROLLBACK}" id="cb6"
    accessKey="z" shortDesc="Alt+Shift+Z"
    partialSubmit="true"
    actionListener="#{managedBean_clientUtils.doTableActionAndSetFocus}"
    immediate="true">
    <af:resetActionListener/>
    <af:clientAttribute name="focusField" value="t1"/>
    <af:clientAttribute name="actionField" value="Rollback"/>
    </af:commandButton>
    --- This is the java class I use
    --- It is published in adfc-config.xml as a request scope managedbean
    // Typical ADF 11g imports (package names may vary slightly by version).
    import javax.faces.component.UIComponent;
    import javax.faces.context.FacesContext;
    import javax.faces.event.ActionEvent;
    import oracle.adf.model.BindingContext;
    import oracle.adf.view.rich.component.rich.nav.RichCommandButton;
    import oracle.adf.view.rich.context.AdfFacesContext;
    import oracle.binding.BindingContainer;
    import oracle.binding.OperationBinding;
    import org.apache.myfaces.trinidad.render.ExtendedRenderKitService;
    import org.apache.myfaces.trinidad.util.Service;

    // JSFUtils is the poster's own helper class (not shown here).
    public class ClientUtils {

        public ClientUtils() {
        }

        // Executes the Delete/Commit/Rollback operation binding named in the button's
        // "actionField" attribute, refreshes the component named in "focusField",
        // and then moves keyboard focus back to it.
        public void doTableActionAndSetFocus(ActionEvent event) {
            RichCommandButton rcb = (RichCommandButton) event.getSource();
            String focusOn = (String) rcb.getAttributes().get("focusField");
            String actionToDo = (String) rcb.getAttributes().get("actionField");
            UIComponent component = JSFUtils.findComponentInRoot(focusOn);
            String clientId = component.getClientId(JSFUtils.getFacesContext());
            if ("Delete".equals(actionToDo) || "Commit".equals(actionToDo) || "Rollback".equals(actionToDo)) {
                BindingContainer bindings = BindingContext.getCurrent().getCurrentBindingsEntry();
                OperationBinding operationBinding = bindings.getOperationBinding(actionToDo);
                Object result = operationBinding.execute();
                AdfFacesContext.getCurrentInstance().addPartialTarget(component);
            }
            if (clientId != null) {
                makeSetFocusJavaScript(clientId);
            }
        }

        // Moves focus to the component named in the button's "focusField" attribute.
        public static String onSetFocus(ActionEvent event) {
            RichCommandButton rcb = (RichCommandButton) event.getSource();
            String focusOn = (String) rcb.getAttributes().get("focusField");
            String clientId = null;
            if (focusOn.contains(":")) {
                clientId = focusOn;
            } else {
                clientId = findComponentsClientIdInRoot(focusOn);
            }
            if (clientId != null) {
                makeSetFocusJavaScript(clientId);
            }
            return null;
        }

        private static void writeJavaScriptToClient(String script) {
            FacesContext fctx = FacesContext.getCurrentInstance();
            ExtendedRenderKitService erks =
                Service.getRenderKitService(fctx, ExtendedRenderKitService.class);
            erks.addScript(fctx, script);
        }

        public static void makeSetFocusJavaScript(String clientId) {
            if (clientId != null) {
                StringBuilder script = new StringBuilder();
                // use the client id to ensure the component is found even if it is
                // located in a naming container
                script.append("var textInput = ");
                script.append("AdfPage.PAGE.findComponentByAbsoluteId");
                script.append("('" + clientId + "');");
                script.append("if(textInput != null){");
                script.append("textInput.focus();");
                script.append("}");
                writeJavaScriptToClient(script.toString());
            }
        }

        public static String findComponentsClientIdInRoot(String id) {
            UIComponent component = JSFUtils.findComponentInRoot(id);
            return component.getClientId(JSFUtils.getFacesContext());
        }
    }

    Hi,
    I am developing an ADF application where the users do not want to use the mouse. So I would like to know if there are a best practice for this?
    Well, HTML (and this is the user interface you see) follows a tab-index navigation that you follow with "tab" and "shift+tab". Anything else is a shortcut, for which you use mnemonics (as you already do) or hotkeys (explained in http://one-size-doesnt-fit-all.blogspot.dk/2010/11/adf-ui-shell-supporting-global-hotkeys.html). There is a distinction to make between non-web environments (which I think you and your users have a background in) and client desktop environments. Browsers block some keyboard functionality for their own purposes, so you may have to find a list of keys first that work across browsers. Unlike desktop clients, which allow you to "press a button" without the button taking focus, this cannot be done on the web. So you need to be clever here, avoiding buttons altogether.
    The following paper is about JavaScript in ADF and explains the basics for what Chris Muir explains in : http://one-size-doesnt-fit-all.blogspot.dk/2010/11/adf-ui-shell-supporting-global-hotkeys.html
    http://www.oracle.com/technetwork/developer-tools/jdev/1-2011-javascript-302460.pdf
    It has the outline for how to register short cut keys that perform a specific action (e.g. register ctrl+d to delete the current row you are on, or press F11 to execute a query (similar to Oracle Forms frmres files)). However, be aware that this includes some code you have to write (actually quite some code to be honest).
    http://www.oracle.com/technetwork/developer-tools/adf/learnmore/79-global-template-button-strategy-360139.pdf (are there an example implementation?), or
    http://one-size-doesnt-fit-all.blogspot.dk/2010/11/adf-ui-shell-supporting-global-hotkeys.html
    Actually, these are implementations, as they come with example code for you to use and customize, don't they? So what more is this question asking for? Also note that global buttons don't quite have anything in common with the question you asked. I assume you want to see it as an implementation of the Forms toolbar that operates on the form or table the focus is in. This, however, does not work for the web, as there is nothing that keeps track of which component has focus and to what iterator (data block) it belongs. This would involve even more coding (though it is possibly doable).
    Frank

  • What are best practice for packaging and deploying j2EE apps to iAS?

    We've been running a set of J2EE applications on a pair of iAS SP1b for about a year and it has been quite stable.
    Recently, however, we have had a number of LDAP issues, particularly when registering and unregistering applications (registering ear files sometimes fails the 1st time but may work the 2nd time). Also, we've noticed very occasionally that old versions of classes sometimes find their way onto our machines.
    What is considered to be best practice in terms of packaging and deployment, specifically:
    1) Packaging - using the deployTool that comes with iAS6 SP1b to package is a big manual task, especially when you have 200+ jsp files. Are people out there using this or are they scripting it with a build tool such as Ant?
    2) Deploying an existing application to multiple iAS's. Are you guys unregistering the old application and then re-registering the new application? Are you shutting down iAS whilst doing the deployment?
    3) Deploying ear files can take 5 to 10 mins, is this normal?
    4) In a clustered scenario where HTTPSession is shared what are the consequences of doing deployments to data stored in session?
    thanks in advance for your replies
    Owen

    You may want to consider upgrading your application server environment to a newer service pack. There are numerous enhancements involving the deployment tool and runtime layout of your application that make clear where your application is loading its files from.
    If you've had a long-running application server environment, with lots of deployments under your belt, you might start to notice slowdowns in deployment and kjs start time. Generally this is due to garbage collecting in your iAS registry.
    You can do several things to resolve this. The most complete solution is to reinstall the application server. This will guarantee a clean LDAP registry. Of course, you've got to re-establish your configurations and redeploy your applications. When done, back up your application server install space with the application server and directory server off. You can use this backup to return to a known configuration at some future time.
    For the second method: <B>BE CAREFUL - BACKUP FIRST</B>
    There is a more exhaustive solution that involves examining your deployed components to determine the active GUIDs. You then search the NameTrans section of the registry for Applogic Servlet * and Bean * entries that represent your previously deployed components but are no longer represented in the set of active GUIDs. Record these older GUIDs, remove them from ClassImp and ClassDef, and finally remove the older entries from NameTrans.
    Best practices for deployment depend on your particular environmental needs. Many people utilize ANT as a build tool. In later versions of the application server, complete ANT scripts are included that address compiling, assembly and deployment. Ant 1.4 includes iAS specific targets and general J2EE targets. There are iAS specific targets that can be utilized with the 1.3 version. Specialized build targets are not required however to deploy to iAS.
    Newer versions of the deployment tool allow you to specify that JSPs are not to be registered automatically. This can be significant if deployment times lag. Registered JSP's however benefit more fully from the services that iAS offers.
    2) In general it is better to undeploy then redeploy. However, if you know that you're not changing GUIDs, recreating an existing application with new GUIDs, or removing registered components, you may avoid the undeploy phase.
    If you shut down the KJS processes during deployment you can eliminate some addition workload on the LDAP server which really gets pounded during deployment. This is because the KJS processes detect changes and do registry loads to repopulate their caches. This can happen many times during a deployment and does not provide any benefit.
    3) Deploying can be a lengthy process. There have been improvements in that performance from service pack to service pack, but unfortunately you won't see dramatic drops in deployment times.
    One thing you can do to reduce deployment times is to understand the type of deployment. If you have not manipulated your deployment descriptors in any way, then there is no need to deploy. Simply drop your newer bits into the runtime space of the application server. In later service packs this means exploding the package (ear, war, or jar) into the appropriate subdirectory of the APPS directory.
    4) If you've changed the classes of objects that have been placed in HTTPSession, you may find that you can no longer utilize those objects. For that reason, it is suggested that objects placed in session be kept as simple as possible in order to minimize this effect. In general, however, it is not a good idea to change a web application during the life span of a session.

  • Best practice for remote topic subscription with HA

    I'd like to create an orchestrator EJB, in cluster A, that must persist some data to a database of record and then publish a business event to a topic. I have two durable subscribers, MDBs on clusters B & C, that need to receive the events and perform some persistence on their side.
    I'd like HA so that a failure in any managed server would not interrupt the system. I can live with at-least-once delivery, but at the same time I'd like to minimize the amount of redundant message processing.
    The documentation gets a little convoluted when dealing with clustering. What is the best practice for accomplishing this task? Has anyone successfully implemented a similar solution?
    I'm using Weblogic 8.1 SP5, but I wouldn't mind hearing solutions for later versions as well.

    A managed server failure makes that server's JMS servers unavailable, which, in turn, makes the JMS server's messages unavailable until either (A) the JMS server is migrated or (B) the managed server is restarted.
    For more discussion, see my post today on the topic "distributed destinations failover - can't access messages from other node". Also, you might be interested in the circa 8.1 migration white-paper on dev2dev: http://dev2dev.bea.com/pub/a/2004/05/ClusteredJMS.html
    Tom

Maybe you are looking for