Best practice for distributing/releasing J2EE applications.

Hi All,
We are developing a J2EE application and would like some information on the best practices to be followed for distributing/releasing J2EE applications in general.
In particular, our dilemma centers on the generation of the stub, skeleton and other supporting classes for the application.
Most application servers can generate the required classes while deploying the EJBs in the application, i.e. at install time. Some (BEA WebLogic and IBM WebSphere are two that we are aware of) also allow these classes to be generated before installation, so that the .ear file containing the additional classes is the one that is uploaded.
For instance, say we have assembled the application "myapp.ear". There are two ways in which the classes can be generated. The first is to use 'ejbc' (assume we are using BEA WebLogic), which generates the stub, skeleton and supporting classes for the application and returns a file, say "Deployable_myapp.ear", containing all the necessary classes and files. This file is the one that is then installed. The other option is to install "myapp.ear" as-is and let the WebLogic application server itself generate the required classes at installation time.
If the first approach, pre-generating the stubs, is followed, does it require us to generate the stubs separately for each version of the application server that we support? I.e. if we generate a deployable file containing the required classes using the 'ejbc' of WebLogic 5.1, can the same file be installed on WebLogic 6.1, or do we have to generate a separate file?
If the second approach, install-time generation of the stubs, is used, what is the nature/magnitude of the risk we are taking in terms of the installation failing?
Any links to useful resources as well as comments/suggestions will be appreciated.
TIA
Regards,
Aasif
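
For reference, the pre-generation step described above is a command-line invocation of the WebLogic EJB compiler against the packaged archive. The exact syntax differs between WebLogic releases (later releases provide 'appc', which can process whole EAR files), so treat the line below as an illustrative sketch with made-up file names rather than exact 5.1/6.1 syntax:

java weblogic.ejbc build/myapp_ejb.jar build/deployable_myapp_ejb.jar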

Similar Messages

  • Best Practice for Distributing Databases to Customers

    I did a little searching and was surprised not to find a best-practice document for how to distribute Microsoft SQL databases. With other database formats it's common to distribute them as scripts, but that feature seems rather limited with the built-in tools Microsoft provides; there appear to be limits on the length of the script. We're looking to distribute a database several GB in size. We could detach the database or provide a backup, but that has its own disadvantages, limiting which versions of SQL Server will accept the database.
    What do you recommend and can you point me to some documentation that handles this practice?
    Thank you.

    It's much easier to distribute schema/data from an older version to a newer one than the other way around. Nearly all SQL Server deployment features support database version upgrade, including the Copy Database wizard, BACKUP/RESTORE, detach/attach, script generation, the Microsoft Sync Framework, and a few others.
    Even if you just want to distribute schemas, you may want to distribute the entire database and then truncate the tables to purge the data.
    Backing up and restoring your database is by far the most reliable method of distributing it. It may not be practical in some cases because you'll need to generate a new backup every time a schema change occurs, but that is less of an issue if you already have an automated backup/maintenance routine in your environment.
    As an alternative, you can use the Copy Database functionality in SSMS, although it can be unstable in some situations, especially if you are distributing across multiple subnets and/or domains. It will also require you to purge data if/when applicable.
    Another option is to detach your database, copy its files, and then attach them on both the source and destination instances. This causes downtime for the detached database, so there are better methods for distribution available.
    And then there is the previously mentioned method of generating scripts for the schema and then using INSERT statements or the Import Data wizard available in SSMS (which is very practical and internally builds an SSIS package that can be saved for repeated executions). It works fine and is not as practical as the other options, but it is the best way to distribute a database when its version is being downgraded.
    With all this said, there is no single "best practice" for this. There are multiple features, each with its own advantages and drawbacks, which allows them to be aligned to different business requirements.
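    As a concrete illustration of the BACKUP/RESTORE route, here is a minimal sketch that drives it from Java over JDBC. The server names, share path, credentials and logical file names ('MyDb', 'MyDb_log') are hypothetical placeholders; adapt them to your environment, and keep in mind that a backup taken on a newer SQL Server version cannot be restored on an older one.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class BackupRestoreDistribution {
        public static void main(String[] args) throws Exception {
            // Full backup on the source instance, written to a share both servers can reach.
            try (Connection src = DriverManager.getConnection(
                    "jdbc:sqlserver://sourceHost;databaseName=master;user=sa;password=***");
                 Statement st = src.createStatement()) {
                st.execute("BACKUP DATABASE MyDb TO DISK = N'\\\\share\\dist\\MyDb.bak' WITH INIT");
            }
            // Restore on the destination instance, relocating the data and log files.
            try (Connection dst = DriverManager.getConnection(
                    "jdbc:sqlserver://targetHost;databaseName=master;user=sa;password=***");
                 Statement st = dst.createStatement()) {
                st.execute("RESTORE DATABASE MyDb FROM DISK = N'\\\\share\\dist\\MyDb.bak'"
                         + " WITH MOVE 'MyDb' TO N'D:\\Data\\MyDb.mdf',"
                         + " MOVE 'MyDb_log' TO N'D:\\Data\\MyDb_log.ldf', REPLACE");
            }
        }
    }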

  • Best Practice for Distributed TREX NFS vs cluster file systems

    Hi,
    We are planning to implement a distributed TREX, using RedHat on x64, but we are wondering what the best practice or approach is to configure the "file server" used in the TREX distributed environment. The guides mention a file server, which seems to be another server connected to a SAN, exporting or sharing the file systems that have to be mounted on all the TREX systems (master, backup and slaves). However, we know that the BI Accelerator uses OCFS2 (a cluster file system) to access the storage; in the case of RedHat we have GFS or even OCFS.
    Basically we would like to know what the best practice is and how other companies are doing it for a distributed TREX environment, using either network file systems or cluster file systems.
    Thanks in advance,
    Zareh

    I would like to add one more thing: in my previous comment I assumed that it is possible to use a cluster file system with TREX because the BI Accelerator does, but maybe that is not supported; it does not seem to be clear in the TREX guides.
    That should be the initial question:
    Are cluster file system solutions supported in a plain TREX implementation?
    Thanks again,
    Zareh

  • Best practices for a multi-language application

    Hi,
    I'm planning to develop an application to work in two different countries and I'm hoping to get some feedback from this community on the best practices to follow when building the application. The application will run in two different languages (English and French) and in two different timezones.
    My doubts:
    - Which format is most appropriate for my table date fields?
    - I will build the application in English. Since APEX has the French language for the admin frontend, how can I install it, and can I reuse the translation in my applications? The interactive report regions are translated into French somewhere; how can I access that translation and use it in my application?
    Thank you

    Hello Cao,
    >> The application will run in two different languages (English and French) and in two different timezones …. It would be very helpful if I could access at least the translation of the IRR regions.
    As you mentioned, French is one of the languages natively supported by the Application Builder. As such, all the internal APEX engine messages (including those for IR) have been translated into French. To take advantage of this, you need to upload the French language into your Application Builder. The following shows you how to do that:
    http://download.oracle.com/docs/cd/E23903_01/doc/doc.41/e21673/otn_install.htm#BEHJICEB
    In your case, the relevant directory is …/apex/builder/fr/. Please pay attention to the need to set the NLS_LANG parameter properly.
    >> Which format is most appropriate for my table date fields?
    I'm not sure exactly what you mean by that. Date fields are stored in the database without any display format, and it's up to you to determine how to display them, usually by using the to_char() function.
    As you mentioned that you are going to work with two different time zones, it’s possible that the date format for these two zones are different. In this case, you can use the APEX Globalization Application Date Time Format item. As the help for this item shows, you can use a substitution string as the item value, and you can set the value of the substitution string according to the current language and its corresponding date format.
    You should also set the Automatic Time Zone field to yes. It will make your life a bit easier.
    Regards,
    Arie.
    ♦ Please remember to mark appropriate posts as correct/helpful. For the long run, it will benefit us all.
    ♦ Author of Oracle Application Express 3.2 – The Essentials and More
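    Independent of APEX, the principle in the reply above (store the date once, apply the locale- and timezone-specific format only when displaying it) can be shown with a small, purely illustrative Java sketch; the locales and zone IDs are arbitrary examples:
    import java.time.Instant;
    import java.time.ZoneId;
    import java.time.format.DateTimeFormatter;
    import java.time.format.FormatStyle;
    import java.util.Locale;

    public class LocaleDateDisplay {
        public static void main(String[] args) {
            Instant stored = Instant.now(); // the single value kept in the database, with no display format

            DateTimeFormatter en = DateTimeFormatter.ofLocalizedDateTime(FormatStyle.MEDIUM)
                    .withLocale(Locale.ENGLISH)
                    .withZone(ZoneId.of("America/Toronto"));
            DateTimeFormatter fr = DateTimeFormatter.ofLocalizedDateTime(FormatStyle.MEDIUM)
                    .withLocale(Locale.FRENCH)
                    .withZone(ZoneId.of("Europe/Paris"));

            System.out.println(en.format(stored)); // English formatting, Toronto time
            System.out.println(fr.format(stored)); // French formatting, Paris time
        }
    }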

  • What is the best practice when distributing a desktop application which uses SMO

    I have a WPF application which installs a web site. One of the installation steps is to execute a number of SQL scripts which install the website's database. The database server is not necessarily the same machine the product is installed on - in most cases it's a different server on the same network. All the scripts are generated by a build of the database project (Visual Studio 2012 DB project) - hence all of them are in sqlcmd mode.
    I can work around SQL variables (the ":variable" type of creature) by making some simple text replacements. The big problem is with the "GO" statements. At first I thought I could split the script into many sub-scripts using a regex split operation with "GO" as the separator, but this did not work; having a word like "Polygon" in the script will cause the whole operation to fail. Unfortunately I have no control over the content of the script, so if someone decides to put in a comment like /* If you do not know how to use this script then GO TO HELL */, it would break the installation process.
    I have then tried executing the whole script (with the "GO" statements) using SMO. This works well on my development machine, but once I try to do this on a server, the application falls over with missing DLLs. Now for Microsoft.SqlServer.ConnectionInfo, Microsoft.SqlServer.Smo, Microsoft.SqlServer.SqlClrProvider and Microsoft.SqlServer.SqlEnum the solution is simple - I can just set "Copy Local" to true and those DLLs will be distributed with the application.
    The big problem is with Microsoft.SqlServer.SqlClrProvider.dll. OK, I can get this library from my local machine - but I am not sure which SQL version it will be used with. I can't include more than one of those DLLs, as all of them have the same name.
    I know that the official MS line is to install the SQL feature pack (http://msdn.microsoft.com/en-us/library/ff713979.aspx). But again - I do not know which version of SQL my application will be working with, and I need to make the installation process as simple as possible.
    My question is - is there a way to distribute a desktop application (which makes use of SMO) without any prerequisites and for all versions of SQL Server? If there is no such thing, then using SMO is pretty much pointless, as it cannot be distributed - and even then it's not version agnostic... Thanks for any help!

    I agree with Olaf if you want to ensure that you can interact with any version of SQL Server.
    The problem you will run into if you are not controlling the version of SQL Server is the version of SMO. If you use SMO version 10.5 and need to deploy with SMO to SQL 2012, it will not work. SMO will always be backward compatible but not forward compatible.
    So you would have to define a maximum version of SQL Server your setup deploys to in order to control errors and failures. So if you distribute the SQL 2012 SMO, then the maximum version for the deploy would be SQL 2012.
    So using a tool that is version agnostic is the way to go; in .NET you can also use System.Data.SqlClient to execute statements against SQL Server. That is version agnostic too.
    Ben Miller - SQL Server MVP, SQL MCM 2008 - @DBADuck http://www.dbaduck.com
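    Regarding the batch-splitting issue raised in the question: "GO" is only a batch separator when it stands alone on its own line (optionally followed by a repeat count), so splitting on that pattern rather than on the bare word avoids false matches inside words like "Polygon" or inside single-line comments. The thread is about .NET, but the rule itself is language-independent; the sketch below is in Java, with a hypothetical file name, and it still would not handle a "GO" sitting alone on a line inside a /* */ block comment - that needs a real parser.
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.regex.Pattern;

    public class GoBatchSplitter {
        // GO counts as a separator only when it is the sole token on a line
        // (case-insensitive); an optional trailing repeat count is tolerated but ignored.
        private static final Pattern GO_LINE =
                Pattern.compile("(?im)^[ \\t]*GO([ \\t]+\\d+)?[ \\t]*$");

        public static void main(String[] args) throws Exception {
            String script = new String(
                    Files.readAllBytes(Paths.get("install_database.sql")),
                    StandardCharsets.UTF_8);
            for (String batch : GO_LINE.split(script)) {
                if (!batch.trim().isEmpty()) {
                    // hand each batch to JDBC / SqlClient / SMO for execution here
                    System.out.println("--- batch of " + batch.trim().length() + " characters ---");
                }
            }
        }
    }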

  • Best practice for client-server(Socket) application

    I want to build a client-server application
    1) On startup the client creates a connection to the server and keeps reading data from it
    2) The server keeps sending different messages
    3) Based on messages (async) from the server, the client view has to be changed
    I tried different approaches and ended up facing an IllegalStateException while updating the GUI
    So what is the best way to do this?
    Please give a working example.
    Thanks,
    Vijay
    Edited by: 844427 on Jan 12, 2012 12:15 AM
    Edited by: 844427 on Jan 12, 2012 12:16 AM

    Hi EJP,
    Thanks for the suggestion,
    Here is the sample code:
    public class Lobby implements LobbyModelsChangeListener {
        Stage stage;
        ListView<String> listView;
        ObservableList ol;

        public Lobby(Stage primaryStage) {
            stage = primaryStage;
            ProxyServer.startReadFromServer();             // connects to the socket server
            ProxyServer.addLobbyModelChangeListener(this); // so that any data from the server reaches the Lobby
            init();
        }

        private void init() {
            ProxyServer.getLobbyList(); // send request for the lobby list
            ol = FXCollections.observableArrayList("Loading Data...");
            ol.addListener(new ListChangeListener() {
                @Override
                public void onChanged(Change change) {
                    listView.setItems(ol);
                }
            });
            Group root = new Group();
            stage.setScene(new Scene(root));
            listView = new ListView<String>();
            listView.maxWidth(stage.getWidth());
            listView.setItems(ol);
            listView.getSelectionModel().setSelectionMode(SelectionMode.SINGLE);
            listView.setOnMouseClicked(new EventHandler<MouseEvent>() {
                @Override
                public void handle(MouseEvent t) {
                    // ListView lv = (ListView) t.getSource();
                    new NewPage(stage);
                }
            });
            root.getChildren().add(listView);
        }

        @Override
        public void updateLobby(LobbyListModel[] changes) {
            // listView.getItems().clear();
            String[] ar = new String[changes.length];
            for (int i = 0; i < changes.length; i++) {
                if (changes[i] != null) {
                    System.out.println(changes[i].getName());
                    ar[i] = changes[i].getName();
                }
            }
            ol.addAll(ar);
        }
    }

    // ProxyServer.java - implements Runnable
    public void run() {
        // ... code to read data from the server
        // build an array LobbyListModel[] ltm based on the data from the server
        fireLobbyModelChangeEvent(ltm);
    }

    void addLobbyModelChangeListener(LobbyModelsChangeListener aThis) {
        this.lobbyModelsChangeListener = aThis;
    }

    private void fireLobbyModelChangeEvent(LobbyListModel[] changes) {
        LobbyModelsChangeListener listner = (LobbyModelsChangeListener) lobbyModelsChangeListener;
        listner.updateLobby(changes);
    }
    Exception:
    java.lang.IllegalStateException: Not on FX application thread; currentThread = Thread-5
        at the line: ol.addAll(ar);
    But the ListView is getting updated with the new data... so I'm not sure if it's the right way to proceed...
    Thanks,
    Vijay
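    For the IllegalStateException above: the usual remedy in JavaFX is to hand the update from the socket-reader thread over to the FX Application Thread with Platform.runLater, instead of touching the ObservableList directly from the background thread. A minimal sketch of updateLobby written that way, reusing the classes from the post:
    @Override
    public void updateLobby(LobbyListModel[] changes) {
        final String[] ar = new String[changes.length];
        for (int i = 0; i < changes.length; i++) {
            if (changes[i] != null) {
                ar[i] = changes[i].getName();
            }
        }
        // ol backs the ListView, so it must only be modified on the FX Application Thread.
        javafx.application.Platform.runLater(new Runnable() {
            @Override
            public void run() {
                ol.addAll(ar);
            }
        });
    }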

  • Best Practice for implementing dual APEX applications environment

    Question:
    We are in the early stages of building an APEX application for Oracle SaaS consumption. The question I wanted to ask is: what would be the best way to deploy this app to production? Would it be easier just to create a new workspace in apex.oraclecorp.com and export the app definition, or to create a new instance of an APEX container? Also, if we were to create a new container, what hardware/middleware would be required?
    Follow up questions:
    1. What are you building and for what purpose?
    We are building an application for oracle employees (development, operations, and support) to assist in interacting with the CRM Saas environments. Entering bugs, tracking patch level, obtaining relevant environment contacts & urls.
    2. Who will be installing this application? Oracle customers? In their own on-premise APEX instances? Oracle Cloud?
    For the foreseeable future, there will only be the one internal install for internal use (CRM SaaS Enablement Team, DevOps).
    3. What are the database and APEX version requirements you'll have for this application?
    We do not have a particular requirement. The latest GA version would be the best candidates.
    4. Is it safe to say that there is minimal understanding & experience of APEX on your team?
    All we know is self-taught and from forum responses. Part of the problem we face is that we don’t know how to frame the questions in a way they can be understood.
    APEX container - By this I mean a fully functional APEX environment to which applications can be deployed.
    Use Case - We want to be able to make our APEX app available to the consumer (see above) and also continue to develop new features for that app for use at a later date. We are asking for information about a development model that works well for APEX apps.
    Thanks!!

    Moved the question to the internal Oracle forum:
    http://myforums.oracle.com/jive3/thread.jspa?threadID=1058413

  • Re: Best Practices for installing/running UCES Application

    I'm facing some technical challenges setting up NWDI 7.4 for UCES 6.05 development.
    First of all, out of all the required SCs for FSCM_DB, there's one (namely UMEADMIN) that cannot be checked in. I have checked in all the other SCs; the correct UMEADMIN SCA file is also in the inbox directory. So I'm not sure why it's not visible for check-in.
    Then, when setting up NWDS (v7.0.30), I highlighted all the DCs under both FSCM_DB and SAP-UCES and created a project based on those. After that, I don't see the DC fscm/bd/web/shared in the J2EE DC Explorer, but I need to modify the JSP files in there. What am I doing wrong here?
    Also, when I set up FSCM_BD as a dependent SC, I chose "Source" as the package type (instead of "Source and Archive") because I read a post somewhere. I'm not sure if that is the correct choice.
    Could the experts here please give me some pointers?
    Thanks!
    Eddy

    Hi Eddy,
    You can add FSCM_BD as an independent SC to the same track where you've added the UCES SC.
    Please select "Source and Archive" for the FSCM_BD SC as well and try once again.
    Once you check in the archives, sync your track by updating / re-importing the track configurations in NWDS.
    Hope this helps!!!
    Regards,
    Anurag

  • TestStand best practices for distribution

    We're looking for best practices for distributing TestStand systems. I've found the TestStand Style Guide, but it's a little sparse on how to set up distributed systems. We're looking for guidelines on where to put configuration data, where to put sequence files, how to manage users, and similar.
    We'll be distributing systems to various contract manufacturers in China as well as using the systems in multiple locations in-house. What have you done with distributions and what problems have you seen?
    Right now we're planning to separate the Deployment Engine from our sequence files and to put all our configuration into our distribution kit for the Deployment Engine.

    The TestStand Reference Manual provides good information on system distribution. Chapter 14: Deploying TestStand Systems covers the necessary information for distributing your TestStand application. I do have a few suggestions and caveats:
    (1) Make sure you use a workspace when distributing your files. A workspace makes it easy to package all of your files and dependencies. Moreover, the distribution wizard provides a feature that displays all the files that will be included in the installation package in an easy-to-use tree view.
    (2) TestStand currently does not look for nested DLL dependencies. So, if your code calls a DLL module that in turn calls another DLL module, be sure to include that dependent DLL in your workspace.
    (3) StationGlobals and custom data types are usually missed in installations. Be sure to include your StationGlobals.ini and MyTypes.ini files from the Cfg directory in your installation workspace.
    If you have more specific questions, please feel free to post them here!
    Good Luck!
    Tyler Tigue
    Applications Engineer
    National Instruments

  • Best Practices for Configuration Manager

    What links/documents are available that summarize the best practices for Configuration Manager?
    Applications and Packages
    Software Updates
    Operating System Deployment
    Hardware/Software Inventory

    Hi,
    I think this may help you
    system center 2012 configuration manager best practices
    SCCM 2012 task-sequence best practices
    SCCM 2012 best practices for deploying application
    Configuration Manager 2012 Implementation and Administration
    Regards, Ibrahim Hamdy

  • What are best practice for packaging and deploying j2EE apps to iAS?

    We've been running a set of J2EE applications on a pair of iAS SP1b for about a year and it has been quite stable.
    Recently however we have had a number of LDAP issues, particularly when registering and unregistering applications (registering ear files sometimes fails the 1st time but may work the 2nd time). Also, we've noticed very occasionally that old versions of classes sometimes find their way onto our machines.
    What is considered to be best practice in terms of packaging and deployment, specifically:
    1) Packaging - using the deployTool that comes with iAS6 SP1b to package is a big manual task, especially when you have 200+ jsp files. Are people out there using this or are they scripting it with a build tool such as Ant?
    2) Deploying an existing application to multiple iAS's. Are you guys unregistering old application then reregistering new application? Are you shutting down iAS whilst doing the deployment?
    3) Deploying ear files can take 5 to 10 mins, is this normal?
    4) In a clustered scenario where HTTPSession is shared what are the consequences of doing deployments to data stored in session?
    thanks in advance for your replies
    Owen

    You may want to consider upgrading your application server environment to a newer service pack. There are numerous enhancements involving the deployment tool and the run-time layout of your application that make clear where your application is loading its files from.
    If you have a long-running application server environment, with lots of deployments under your belt, you might start to notice slowdowns in deployment and KJS start time. Generally this is due to garbage that collects in your iAS registry.
    You can do several things to resolve this. The most complete solution is to reinstall the application server. This will guarantee a clean LDAP registry. Of course, you've got to re-establish your configurations and redeploy your applications. When done, back up your application server install space with the application server and directory server off. You can use this backup to return to a known configuration at some future time.
    For the second method: BE CAREFUL - BACK UP FIRST.
    There is a more exhaustive solution that involves examining your deployed components to determine the active GUIDs. You then search the NameTrans section of the registry for Applogic Servlet * and Bean * entries that represent your previously deployed components but are not represented in the set of active GUIDs. Record these older GUIDs, remove them from ClassImp and ClassDef, and finally remove the older entries from NameTrans.
    Best practices for deployment depend on your particular environmental needs. Many people utilize Ant as a build tool. In later versions of the application server, complete Ant scripts are included that address compiling, assembly and deployment. Ant 1.4 includes iAS-specific targets and general J2EE targets. There are iAS-specific targets that can be utilized with the 1.3 version. Specialized build targets are not required, however, to deploy to iAS.
    Newer versions of the deployment tool allow you to specify that JSPs are not to be registered automatically. This can be significant if deployment times lag. Registered JSPs, however, benefit more fully from the services that iAS offers.
    2) In general it is better to undeploy and then redeploy. However, if you know that you're not changing GUIDs, recreating an existing application with new GUIDs, or removing registered components, you may avoid the undeploy phase.
    If you shut down the KJS processes during deployment, you can eliminate some additional workload on the LDAP server, which really gets pounded during deployment. This is because the KJS processes detect changes and reload from the registry to repopulate their caches. This can happen many times during a deployment and does not provide any benefit.
    3) Deploying can be a lengthy process. There have been improvements in that performance from service pack to service pack, but unfortunately you won't see dramatic drops in deployment times.
    One thing you can do to reduce deployment times is to understand the type of deployment. If you have not manipulated your deployment descriptors in any way, then there is no need to deploy. Simply drop your newer bits into the run-time space of the application server. In later service packs this means exploding the package (ear, war, or jar) into the appropriate subdirectory of the APPS directory.
    4) If you've changed the classes of objects that have been placed in HTTPSession, you may find that you can no longer utilize those objects. For that reason, it is suggested that objects placed in the session be kept as simple as possible in order to minimize this effect. In general, however, it is not a good idea to change a web application during the life span of a session.

  • Best practice for mouseless ADF applications

    I am developing an ADF application where the users do not want to use the mouse.
    So I would like to know if there is a best practice for this?
    I am already using the accessKey functionality and the subforms' defaultCommand.
    But I have had problems setting focus to objects on a page, such as tables. I would like a button to return the focus to the table after it has executed a command like delete.
    I have implemented a solution for which I have found inspiration in several threads and other webpages (see below).
    Is this solution okay?
    Are there any problems with it?
    I would also like to know if there are better pathways to go like
    out of the box solutions,
    http://www.oracle.com/technetwork/developer-tools/adf/learnmore/79-global-template-button-strategy-360139.pdf (is there an example implementation?), or
    http://one-size-doesnt-fit-all.blogspot.dk/2010/11/adf-ui-shell-supporting-global-hotkeys.html
    Thanks in advance
    Inspiration webpages
    https://blogs.oracle.com/jdevotnharvest/entry/how_to_programmatically_set_focus
    http://technology.amis.nl/2008/01/04/adf-11g-rich-faces-focus-on-field-after-button-press-or-ppr-including-javascript-in-ppr-response-and-clientlisteners-client-side-programming-in-adf-faces-rich-client-components-part-2/
    how to Commit table by writting Java code in Managed Bean?
    Table does not refresh and getting error as UIComponent is Null
    A short description of the solution:
    (jdeveloper version 11.1.1.2.0)
    --- Example where I use onSetFocus in jsff page
    <af:commandButton text="#{hrsusuiBundle.FOCUS}" id="cb10"
    partialSubmit="true" accessKey="f"
    shortDesc="Alt+Shift+F"
    actionListener="#{managedBean_clientUtils.onSetFocus}">
    <af:clientAttribute name="focusField" value="t1"/>
    </af:commandButton>
    --- Examples where I use doTableActionAndSetFocus in jsff page
    --- There have to be a binding in the jsff page to delete, commit and rollback
    <af:commandButton text="#{hrsusuiBundle.DELETE}" id="cb4"
    accessKey="x"
    shortDesc="Alt+Shift+X"
    partialSubmit="true"
    actionListener="#{managedBean_clientUtils.doTableActionAndSetFocus}">
    <af:clientAttribute name="focusField" value="t1"/>
    <af:clientAttribute name="actionField" value="Delete"/>
    </af:commandButton>
    <af:commandButton text="#{hrsusuiBundle.COMMIT}" id="cb5"
    accessKey="s" shortDesc="Alt+Shift+S"
    partialSubmit="true"
    actionListener="#{managedBean_clientUtils.doTableActionAndSetFocus}">
    <af:clientAttribute name="focusField" value="t1"/>
    <af:clientAttribute name="actionField" value="Commit"/>
    </af:commandButton>
    <af:commandButton text="#{hrsusuiBundle.ROLLBACK}" id="cb6"
    accessKey="z" shortDesc="Alt+Shift+Z"
    partialSubmit="true"
    actionListener="#{managedBean_clientUtils.doTableActionAndSetFocus}"
    immediate="true">
    <af:resetActionListener/>
    <af:clientAttribute name="focusField" value="t1"/>
    <af:clientAttribute name="actionField" value="Rollback"/>
    </af:commandButton>
    --- This is the java class I use
    --- It is published in adfc-config.xml as a request scope managedbean
    public class ClientUtils {

        public ClientUtils() {
        }

        public void doTableActionAndSetFocus(ActionEvent event) {
            RichCommandButton rcb = (RichCommandButton) event.getSource();
            String focusOn = (String) rcb.getAttributes().get("focusField");
            String actionToDo = (String) rcb.getAttributes().get("actionField");
            UIComponent component = null;
            String clientId = null;
            component = JSFUtils.findComponentInRoot(focusOn);
            clientId = component.getClientId(JSFUtils.getFacesContext());
            if ("Delete".equals(actionToDo) || "Commit".equals(actionToDo) || "Rollback".equals(actionToDo)) {
                BindingContainer bindings = BindingContext.getCurrent().getCurrentBindingsEntry();
                OperationBinding operationBinding = bindings.getOperationBinding(actionToDo);
                Object result = operationBinding.execute();
                AdfFacesContext.getCurrentInstance().addPartialTarget(component);
            }
            if (clientId != null) {
                makeSetFocusJavaScript(clientId);
            }
        }

        public static String onSetFocus(ActionEvent event) {
            RichCommandButton rcb = (RichCommandButton) event.getSource();
            String focusOn = (String) rcb.getAttributes().get("focusField");
            String clientId = null;
            if (focusOn.contains(":")) {
                clientId = focusOn;
            } else {
                clientId = findComponentsClientIdInRoot(focusOn);
            }
            if (clientId != null) {
                makeSetFocusJavaScript(clientId);
            }
            return null;
        }

        private static void writeJavaScriptToClient(String script) {
            FacesContext fctx = FacesContext.getCurrentInstance();
            ExtendedRenderKitService erks = null;
            erks = Service.getRenderKitService(fctx, ExtendedRenderKitService.class);
            erks.addScript(fctx, script);
        }

        public static void makeSetFocusJavaScript(String clientId) {
            if (clientId != null) {
                StringBuilder script = new StringBuilder();
                // use the client id to ensure the component is found if it is located in
                // a naming container
                script.append("var textInput = ");
                script.append("AdfPage.PAGE.findComponentByAbsoluteId");
                script.append("('" + clientId + "');");
                script.append("if(textInput != null){");
                script.append("textInput.focus();");
                script.append("}");
                writeJavaScriptToClient(script.toString());
            }
        }

        public static String findComponentsClientIdInRoot(String id) {
            UIComponent component = null;
            String clientId = null;
            component = JSFUtils.findComponentInRoot(id);
            clientId = component.getClientId(JSFUtils.getFacesContext());
            return clientId;
        }
    }

    Hi,
    I am developing an ADF application where the users do not want to use the mouse. So I would like to know if there is a best practice for this?
    Well, HTML (and this is the user interface you see) follows tab-index navigation, which you traverse with "tab" and "shift+tab". Anything else is a shortcut, for which you use mnemonics (as you already do) or hotkeys (explained in http://one-size-doesnt-fit-all.blogspot.dk/2010/11/adf-ui-shell-supporting-global-hotkeys.html). There is a distinction to make between client desktop environments (which I think you and your users have a background in) and the web. Browsers block some keyboard functionality for their own purposes, so you may first have to find a list of keys that works across browsers. Unlike desktop clients, which allow you to "press a button" without the button taking focus, this cannot be done on the web. So you need to be clever here, avoiding buttons altogether.
    The following paper is about JavaScript in ADF and explains the basics for what Chris Muir explains in : http://one-size-doesnt-fit-all.blogspot.dk/2010/11/adf-ui-shell-supporting-global-hotkeys.html
    http://www.oracle.com/technetwork/developer-tools/jdev/1-2011-javascript-302460.pdf
    It has the outline for how to register short cut keys that perform a specific action (e.g. register ctrl+d to delete the current row you are on, or press F11 to execute a query (similar to Oracle Forms frmres files)). However, be aware that this includes some code you have to write (actually quite some code to be honest).
    http://www.oracle.com/technetwork/developer-tools/adf/learnmore/79-global-template-button-strategy-360139.pdf (is there an example implementation?), or
    http://one-size-doesnt-fit-all.blogspot.dk/2010/11/adf-ui-shell-supporting-global-hotkeys.html
    Actually these are implementations, as they come with example code for you to use and customize, don't they? So what more is this question asking for? Also note that global buttons don't have much in common with the question you asked. I assume you want to see it as an implementation of the Forms toolbar that operates on the form or table the focus is in. This however does not work on the web, as there is nothing that keeps track of which component has focus and to what iterator (data block) it belongs. This would involve even more coding (though it is possibly doable).
    Frank

  • Best practice for E-business suite 11i or R12 Application backup

    Hi,
    I'm taking RMAN backups of the database. What would be the best-practice procedure for E-Business Suite 11i or R12 application backup?
    Right now I'm taking file-level backups. Please suggest if there is anything better.
    Thanks

    Please review the following thread, it should be helpful.
    Reommended backup and recovery startegy for EBS

  • Best practice for keeping a mail session open in web application?

    Hello,
    We have a webmail-like application where users log in with their IMAP credentials, then are taken to an authenticated area of the site where they can manage different things about their email account.
    Right now the application is opening and closing a mail store connection (via a new javax.mail.Session) on each page load, based on the currently logged-in user's credentials. To me it seems like bad practice to keep opening and closing a connection on each page load.
    Are there any best practices for this situation? It would be nice to be able to open the connection to the mail server on login, then keep that connection open until the person logs out, session expires, etc.
    I can probably put the javax.mail.Session into the HTTP session, but that seems like it would break any clustering functionality of Tomcat. This would be fine if the machine the user is on didn't fail, but I'd assume that if they failed over to another machine, the mail session would be gone. Maybe keeping the mail session in the HTTP session, checking for a connection, and then first attempting to reconnect with the logged-in credentials before giving up would be a possibility?
    Any pointers would be appreciated

    If you keep the connection open across pages, you're going to need to deal with
    timeouts - from the http session and from the mail server.
    If you don't keep the connection open, you're going to need to "resynchronize"
    your view of the store/folder with each operation, in case the folder is modified
    by another session.
    The former is easier in the common cases, especially if you don't care how gracefully
    you handle failures. The latter is more difficult in the common cases, but handles
    failure better, and in particular handles clustering better. You'll need to measure it to
    see if it meets your performance and scalability requirements. You may need to mix
    the two approaches to get acceptable performance.
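    A minimal sketch of the "keep it in the HTTP session, but reconnect on demand" idea from the question, using the standard JavaMail API; the host, folder name and credential handling are placeholders, and in a real application you would avoid keeping a plain-text password in the session:
    import java.util.Properties;
    import javax.mail.Folder;
    import javax.mail.Session;
    import javax.mail.Store;

    public class MailSessionHolder implements java.io.Serializable {
        private final String host;
        private final String user;
        private final String password;   // placeholder; avoid storing plain-text credentials
        private transient Store store;   // transient: the live connection is not replicated across a cluster

        public MailSessionHolder(String host, String user, String password) {
            this.host = host;
            this.user = user;
            this.password = password;
        }

        /** Returns a connected Store, reconnecting if the previous connection was lost. */
        public synchronized Store getStore() throws Exception {
            if (store == null || !store.isConnected()) {
                Session session = Session.getInstance(new Properties());
                store = session.getStore("imap");
                store.connect(host, user, password);
            }
            return store;
        }

        public Folder openInbox() throws Exception {
            Folder inbox = getStore().getFolder("INBOX");
            inbox.open(Folder.READ_ONLY);
            return inbox;
        }
    }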

  • Best practice for auto update flex web applications

    Hi all
    is there a best practice for auto-updating Flex web applications, much in the same way AIR applications have an auto-update mechanism?
    can you please point me to the right direction?
    cheers
    Yariv

    Hey drkstr
    I'm talking about a more complex mechanism that can handle updates to modules being loaded into the application, etc.
    I can always query the server for the version and prevent loading from cache when a module needs to be updated,
    but I was hoping for something easy like the AIR auto-update feature.
