PJC deployment best practice

Hi
We have started using several PJCs to enhance our 10g webforms. The conversion has not gone live and I need to decide how to deploy the PJCs.
The two options that I can think of (sketched in formsweb.cfg terms below the list):
1. Have all PJCs packaged into one jar.
Pro - No need to change formsweb.cfg every time a new PJC is used.
2. Have separate jar files for each PJC.
Pro - Easier to maintain each PJC, e.g. if one changes you only have to retest that one rather than all of them, as with the deployment above.
Pro - Reduced jar downloads: a form only downloads the jar files it requires rather than one large jar file.
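Assuming the standard archive parameter (archive / archive_jini in 10g) and placeholder jar names, the two options would look something like this in formsweb.cfg:
# Option 1 - all PJCs in one jar
[myapp]
archive=frmall.jar,all_pjcs.jar
# Option 2 - one jar per PJC
[myapp]
archive=frmall.jar,pjc_calendar.jar,pjc_grid.jar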
Has anybody been through a similar process?
Any thoughts/comments are appreciated.
thanks
paul schweiger

Hi,
I think option 2 is preferable. However, I don't see any benefit in configuring all jar files with an application.
One thing I haven't tried yet - maybe you want to try it for me and let me know if it works.
Say you have common PJCs and application-specific PJCs. You may be able to define
commonPjc = pjc1.jar, pjc2.jar, pjc3.jar
Then in your application-specific archive definition you do
[myApp]
form=...
<the archive tag> = frmall.jar,%commonPjc%
It works for other configurations and may work for PJCs as well. This way you have a common configuration (similar to a big monolithic jar file) that you can use for each application definition. Additionally, application-specific jar files can be added to the application definition.
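For example - untested, and assuming the %variable% substitution is honored for the archive setting (archive or archive_jini, depending on the client JVM) the way it is for other parameters; the jar and form names are placeholders:
# shared by all applications
commonPjc=pjc1.jar,pjc2.jar,pjc3.jar
[myApp]
form=myapp.fmx
archive=frmall.jar,%commonPjc%
[myOtherApp]
form=other.fmx
archive=frmall.jar,%commonPjc%,otherapp_pjc.jar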
Let me know if this works.
Frank

Similar Messages

  • Grid Control deployment best practices

    Looking for this document; interested to know more about Grid Control deployment best practices, and about monitoring and managing 300+ databases.

    hi
    have a search for the following document
    MAA_WP_10gR2_EnterpriseManagerBestPractices.pdf
    regards
    Alan

  • Jdev101304 SU5 - ADF Faces - Web app deployment best practice|configuration

    Hi Everybody:
    1.- We have several web applications that provide a service/product used for public administration purposes.
    2.- The apps are using ADF Faces and ADF BC.
    3.- All of the apps are participating in JavaSSO.
    4.- The web apps are deployed on on-demand servers.
    5.- We have noticed that, with the increase of users on these dates, the sessions created by the middle tier in the database stay inactive but are never destroyed or removed.
    6.- Even when we only sign into the apps using JavaSSO and perform no transactions (like inserting or deleting something), we query v$session in the database, and the number of inactive sessions keeps increasing until the server collapses.
    So, we want to know if this is an issue with the configuration of the Application Module's properties, and whether there are some "best practices" you could provide us to configure a web application and avoid this behavior.
    The only configuration we found recommended for web apps is to set jbo.locking.mode to optimistic, but this doesn't correct the "increasing inactive sessions" problem.
    Please help us find some documentation or another resource to correctly configure our apps.
    Thanks in advance.
    Edited by: alopez on Jan 8, 2009 12:27 PM

    hi alopez
    Maybe this can help, "Understanding Application Module Pooling Concepts and Configuration Parameters"
    see http://www.oracle.com/technology/products/jdev/tips/muench/ampooling/index.html
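    A minimal sketch of the kind of pool-cleanup and connection parameters that article covers - they are set in the application module configuration (bc4j.xcfg); the values here are purely illustrative:
    jbo.ampool.minavailablesize=0
    jbo.ampool.maxinactiveage=600000
    jbo.ampool.timetolive=3600000
    jbo.ampool.monitorsleepinterval=30000
    jbo.doconnectionpooling=true
    Whether tuning these alone releases the inactive database sessions depends on how the connections are acquired, so treat this only as a starting point alongside the article.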
    success
    Jan Vervecken

  • Oracle Identity Manager - automated builds and deployment/Best practice

    Is there a best practice for the directory structure of the repository in a version control system?
    Do you recommend keeping the whole xellerate folder plus a separate structure for XML files and Java code? (Considering the fact that multiple upgrades can occur over time.)
    How is custom code merged into the main application?
    How does deployment to the WebLogic application server occur? (Do you create your own script, or is there an out-of-the-box script that can be reused?)
    I would appreciate any guidance regarding this matter.
    Thank you for your help.

    Hi,
    You can use any IDE (Eclipse, Netbeans) for development.
    To get started with the OIM APIs using Eclipse, please follow these steps:
    1. Creating the working folder structure
    2. Adding the jar/configuration files needed
    3. Creating a java project in Eclipse
    4. Writing a sample java class that will call the API's
    5. Debugging the code with Eclipse debugger
    6. API Reference
    1. Creating the working folder structure
    The following structure must be created in the home directory of your project (Separate project home for each project):
    <PROJECT_HOME>
    \ bin
    \ config
    \ ext
    \ lib
    \ log
    \ src
    The folders will store:
    src - source code of your project
    bin - compiled code of your project
    config - configuration files for the API and any of your custom configuration files
    ext - external libraries (3rd party)
    lib - OIM API libraries
    log - local logging folder
    2. Adding the jar/configuration files needed
    The easiest way to perform this task is to copy all the files from the OIM Design Console folders into the corresponding <PROJECT_HOME> folders.
    That is:
    <XEL_DESIGN_CONSOLE_HOME>/config -> <PROJECT_HOME>/config
    <XEL_DESIGN_CONSOLE_HOME>/ext -> <PROJECT_HOME>/ext
    <XEL_DESIGN_CONSOLE_HOME>/lib -> <PROJECT_HOME>/lib
    3. Creating a java project in Eclipse
    + Start Eclipse platform
    + Select File->New->Project from the menu on top
    + Select Java Project and click Next
    + Type in a project name (For example OIM_API_TEST)
    + In the Contents panel select "Create project from existing source",
    click Browse and select your <PROJECT_HOME> folder
    + Click Finish to exit the wizard
    At this point the project is created and you should be able to browse through it in Package Explorer.
    Setting src in the build path:
    + In Package Explorer right click on project name and select Properties
    + Select Java Build Path in the left and Source tab in the right
    + Click Add Folder and select your src folder
    + Click OK
    4. Writing a sample Java class that will call the API's
    + In Package Explorer, right click on src and select New->Class.
    + Type the name of the class as FirstAPITest
    + Click Finish
    Put the following sample code in the class:
    import java.util.Hashtable;
    import com.thortech.xl.util.config.ConfigurationClient;
    import Thor.API.tcResultSet;
    import Thor.API.tcUtilityFactory;
    import Thor.API.Operations.tcUserOperationsIntf;
    public class FirstAPITest {
        public static void main(String[] args) {
            try {
                System.out.println("Startup...");
                System.out.println("Getting configuration...");
                // Reads the connection settings from <XL.HomeDir>/config/xlconfig.xml
                ConfigurationClient.ComplexSetting config =
                    ConfigurationClient.getComplexSettingByPath("Discovery.CoreServer");
                System.out.println("Login...");
                Hashtable env = config.getAllSettings();
                tcUtilityFactory ioUtilityFactory = new tcUtilityFactory(env, "xelsysadm", "welcome1");
                System.out.println("Getting utility interfaces...");
                tcUserOperationsIntf moUserUtility =
                    (tcUserOperationsIntf) ioUtilityFactory.getUtility("Thor.API.Operations.tcUserOperationsIntf");
                // Find all users whose first name is "System" and print their keys
                Hashtable mhSearchCriteria = new Hashtable();
                mhSearchCriteria.put("Users.First Name", "System");
                tcResultSet moResultSet = moUserUtility.findUsers(mhSearchCriteria);
                for (int i = 0; i < moResultSet.getRowCount(); i++) {
                    moResultSet.goToRow(i);
                    System.out.println(moResultSet.getStringValue("Users.Key"));
                }
                System.out.println("Done");
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
    Replace the "welcome1" with your own password.
    + save the class
    To run the example class perform the following steps:
    + In the menu on top, click Run and open the "Create, Manage, and Run Configurations" wizard. (In the menu this can be either "Run..." or "Open Run Dialog...", depending on the version of Eclipse used.)
    + Right click on Java Application and select New
    + Click on arguments tab
    + Paste the following in VM arguments box:
    -Djava.security.manager -DXL.HomeDir=.
    -Djava.security.policy=config\xl.policy
    -Djava.security.auth.login.config=config\authwl.conf
    -DXL.ClientClassName=%CLIENT_CLASS%
    (please replace the URL, in ./config/xlconfig.xml, to your application server if not running on localhost or not using the default port)
    + Click Apply
    + Click Run
    At this point your class is executed. If everything is correct, you will see the following output in the Eclipse console:
    Startup...
    Getting configuration...
    Login...
    log4j:WARN No appenders could be found for logger (com.opensymphony.oscache.base.Config).
    log4j:WARN Please initialize the log4j system properly.
    Getting utility interfaces...
    1
    Done
    Regards,
    Sunny Ajmera

  • SOA OSB Deployment best practices in Production environment.

    Hi All
    I just wanted to know the best practices followed in a production environment for deploying OSB and SOA code. As you are aware, both require libraries from either (JDev or SOA Suite) and (OEPE and OSB). Should one rip out the libraries and package them with the ANT scripts (I am not sure, but SOA would require its internal ANT scripts and a lot of libraries to be bundled; OSB requires only a few OEPE and OSB libraries), or do we simply use one of the below:
    1) Use the production run time (SOA Server and OSB Server) to build and deploy the code. OEPE would not be present here, so we would just have to deploy the already created sbconfig.jar (we would build this in a local environment where OEPE and OSB are installed). The code is checked out from a repository and transferred to this Linux machine.
    2) Use a Windows machine (which has access to the prod environment) and have JDeveloper, OEPE and OSB installed to build/deploy the code to the production server. The code is checked out from a repository.
    Please let us know your personal experiences with deployment in PROD. Thanks a lot!

    There are two approaches for deployment of OSB and SOA code.
    1. Use a machine specifically for build and deployment which has access to all production environments (where deployment needs to be done). Install all the required software (OEPE, OSB, etc.) and use remote deployment for deploying the code.
    2. Bundle all the build- and deployment-related libraries, ship them as a deployment package to the target server and proceed with the deployment.
    The most commonly followed approach is approach #1.
    Regards
    Vivek

  • Portal server deployment best practices

    Does anyone out there know the right way to deploy Portal Server into a production environment, instead of manually copying all the folders and running the necessary commands? Is there a better way to deploy Portal Server? Are there any best practices that I should follow for deploying Portal Server?

    From the above, what I understood is that you would like to transfer your existing Portal Server configuration to the new one. I don't think there is an easy method to do it.
    One way you can do it is by taking an "ldif" backup from the existing Portal Server.
    For that, first install Portal Server on the new box and then export the existing Portal Server's directory using
    # /opt/netscape/directory4/slapd-<host>/db2ldif /tmp/profile.ldif
    Edit the /tmp/profile.ldif file and modify <hostname> and <Domain name> to the new system values.
    Copy this file to the new server and import it using
    # /opt/netscape/directory4/slapd-<host>/ldif2db -i /tmp/profile.ldif
    and also copy the file "slapd.user_at.conf" under /opt/netscape/directory4/slapd-<hostname>/config to the new system.
    Restarting the server lets you access the portal server with the configuration of the old one.

  • CCT deployment best practices Question.

    Using the packager to deploy 50+ seats, would it be best practice to perform a team-wide uninstall of currently installed CS5.5 assets prior to package deployment?  Or can these assets remain on end user systems?  Please advise.  Thanks!

    We're faced with that scenario more times than we like. Depending on who you ask, it's "We need to remove old versions to reduce unnecessary issues" on  the IT side, and "Touch my CS5.5 and I will beat you" on the user side. LOL
    Our position is to remove old versions, so we're testing the Adobe cleaner tool, hoping to be able to script removal of all previous versions. Of course the litmus test is to get CS4 installed on a test Mac for testing.
    http://www.adobe.com/support/contact/cscleanertool.html
    Man, if there's anything this exercise has done, it's remind us how much progress Jody/Karl have made in the last few years. Installing CS4 (pre-AAMEE days) is like having toothpicks shoved into your retina. It's sooo much better today.
    Don

  • SCCM 2012 Update deployment best practices?

    I have recently upgraded our environment from SCCM 2007 to 2012. In switching over from WSUS to SCCM Updates, I am having to learn how the new deployments work. I've got the majority of it working just fine: Microsoft Updates, Adobe Updates (via SCUP), etc.
    A few users have complained that their systems seem to be using more processing power during the update scans... I am wondering what the best practices are for this...
    I am deploying all Windows 7 updates (32 and 64 bit) to a collection with all Windows 7 computers (32 and 64 bit)
    I am deploying all Windows 8 updates (32 and 64 bit) to a collection with all Windows 8 computers (32 and 64 bit)
    I am deploying all office updates (2010, and 2013) to all computers
    I am deploying all Adobe updates to all computers... etc.
    I'm wondering if it is best to be more granular than that? For example: should I deploy Windows 7 32-bit patches to only Windows 7 32-bit machines? Should I deploy Office 2010 Updates only to computers with Office 2010?
    It's certainly easier to deploy most things to everyone and let the update scan take care of it... but I'm wondering if I'm being too general?

    I haven't considered cleaning it up yet because the server has only been active for a few months, and I only connected the bulk of our domain computers to it a few weeks ago (550 PCs).
    I checked several PCs, some that were complaining and some not. I'm not familiar with what the standard size of that file should be, but they seemed to range from 50M to 130M. My own is 130M, but mine is 64-bit and the others are not. Not sure if that makes a difference.
    I briefly read over that website. I'm confused; it was my impression that WSUS is no longer used and only needs to be installed so SCCM can use some of its functions for its own purposes. I thought the PCs no longer even connected to it.
    I'm running the WSUS cleanup wizard now, but I'm not sure it'll clean anything because I've never approved a single update in it. I do everything in the Software Update Point in SCCM, and I've been removing expired and superseded updates fairly regularly.
    The wizard just finished, a few thousand updates deleted, disk space freed: 0 MB.
    I found a script here on TechNet that's supposed to clean out old updates:
    http://blogs.technet.com/b/configmgrteam/archive/2012/04/12/software-update-content-cleanup-in-system-center-2012-configuration-manager.aspx
    Haven't had the chance to run it yet.

  • Export and Deployment - Best Practices for RAR and CUP

    Hi Experts,
    I wanted to know what, in your opinion, is the best practice for deployment of GRC in a 3-system landscape.
    We have a development landscape which connects to all our environments - Dev-QA-Prod.
    Is it recommended to have just the production client connected to the production boxes only and use Dev/QA for the other environments, or is it a good idea to have Prod and QA in sync?
    In my opinion it looks like a good idea to have QA and PROD the same, as it would make exports easier... Maybe I am wrong.
    What according to you all is a good recommended practice here?
    Thanks,
    Chinmaya

    Hi Chinmaya,
    it depends on how many clusters you have in your landscape.
    If it is something like 5 DEV boxes connecting to 5 QAS boxes, and so on,
    then best practice will be to have separate DEV - QAS - PRD boxes for GRC, if money (h/w) is no constraint for the organization.
    Rather than later asking SAP for deletion scripts for deleting sandbox or dev connectors,
    it is best to have separate boxes for each.
    Also, for the future, whenever you do rule changes in RAR and config changes in CUP, it is best to test in QAS first, as CUP will become very critical for your organization post go-live.
    And the good part will be that the management report will reflect true data for PRD only.
    regards,
    Surpreet

  • Deployment best practice

    Looking to set up a deploy-to-live strategy for a relatively new team, the options are:
    option-1) Build in the test environment using Ant tasks and move the war, jar etc. to live, with the live-specific properties in a separate file.
    option-2) Make an Ant task that reads properties files and compiles a war, jar, ear specific to the environment. The only issue here is that the war file will be specific to the environment.
    What are the pros & cons of each?
    Any suggestions will be greatly appreciated.
    TIA,
    Raj

    Thanks for the links.
    The particular item we were concerned about was how to handle environment-specific values.
    Seems like this is the best way to go...
    1) Keep the application binaries/byte-code independent of the environment
    2) Factor the environment-specific values into a separate resource XML or properties file
    Based on that, the way to go about it would be to build the jars/wars etc. in the test environment and copy them to live instead of rebuilding them in live (see the sketch below).
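    A minimal sketch of that idea in Java (the system property, file name and keys here are hypothetical): the same binaries are deployed to every environment and read their environment-specific values from an external properties file supplied at startup.
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;
    public class EnvConfig {
        public static void main(String[] args) throws IOException {
            // e.g. started with -Dapp.config=/opt/app/conf/live.properties
            String path = System.getProperty("app.config", "test.properties");
            Properties props = new Properties();
            try (FileInputStream in = new FileInputStream(path)) {
                props.load(in);
            }
            // Hypothetical keys - the binaries themselves never hard-code these values
            System.out.println("db.url = " + props.getProperty("db.url"));
            System.out.println("smtp.host = " + props.getProperty("smtp.host"));
        }
    }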
    Cheers,
    Raj

  • Deployment - best practices

    There are a lot of questions/answers about deployment strategies in this forum.
    I solved a lot of problems and got answers to my questions,
    but there are some remaining problems.
    My problem:
    I developed a project.
    There are several mappings, source/target tables, process flows and one main flow combined with a scheduler.
    What is the best method/way to deploy the whole project without the OWB client?
    With OMB scripts I can deploy the mappings and process flows...
    But how can I define new connections (for example the production connections)?
    How can I deploy the target tables?
    How can I configure the new connection for the tables, mappings etc.?
    How can I deploy the scheduling part and start it (without the OWB client software)?
    Thanks a lot..

    OMB encompasses pretty much every UI interaction, so you can also deploy tables and create new locations, connectors and schedules.
    You can create locations and connectors via OMB
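    # Create the two locations - X is defined to connect through a database link, Y is a plain Oracle location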
    OMBCREATE LOCATION 'X' SET PROPERTIES (TYPE,VERSION,CONNECTION_TYPE) VALUES ('ORACLE_DATABASE','11.2','DATABASE_LINK')
    OMBCREATE LOCATION 'Y' SET PROPERTIES (TYPE,VERSION) VALUES ('ORACLE_DATABASE','11.2')
    # 1.a. To create a connector and let OWB generate a database link....
    OMBCREATE CONNECTOR 'Y/X' SET PROPERTIES (DATABASE_LINK_NAME) VALUES ('LINKNAMEHERE') SET REF LOCATION 'X'
    # 1.b. To create a connector that uses an existing database link of yours ....
    OMBCREATE CONNECTOR 'Y/X' SET PROPERTIES (DATABASE_LINK_NAME) VALUES ('LINKNAMEHERE') SET REF LOCATION 'X'
    # To add the location to a module and configure the module to use the location
    OMBALTER ORACLE_MODULE '<modulename>' ADD REFERENCE LOCATION '<locationname>' SET AS DEFAULT
    OMBALTER ORACLE_MODULE '<modulename>' SET PROPERTIES (DB_LOCATION) VALUES ('<locationname>')
    Schedules can be created as described in the thread "Re: how to properly create a calendar using OMB*Plus, to schedule a workflow?", and for setting the calendar on a process flow see the thread "Problems setting the calendar on a process flow using OMB".
    Cheers
    David

  • Best Practice/Validation for deploying a Package to Azure

    Before deploying a package to Azure, what kind of best practice/validation can be done to know the package's compatibility with the Azure environment?

    What do you mean by the compatibility of the Azure package with the Azure environment? What do you want to validate? It would be great if you provided a bit of background for your question.
    As far as deployment best practice is concerned, the usual way is to upload your Azure cloud service deployment package and configuration files (*.cspkg and *.cscfg) to a blob container first, and then deploy to the cloud service by referring to the uploaded container. This not only gives you the flexibility to keep different versions of your deployments, which you can use to roll back the entire service, but the deployment will also be comparatively faster than deploying from VS or uploading manually from the file system.
    You can refer to this link - http://azure.microsoft.com/en-in/documentation/articles/cloud-services-how-to-create-deploy/#deploy
    Bhushan

  • ADF Deployment Granularity - Best Practices

    Hi People,
    If anyone can spare some time to discuss this, I would like some pointers about ADF application deployment best practices. For example, we have some customers that complain about having to re-deploy the entire application EAR just to add a field "rendered" condition on a single page, and also about having to re-deploy the ADF BC model JARs even though the application has only been changed in the view layer.
    What level of deployment granularity can we JDeveloper + ADF developers provide to our customers, without the risk of having inconsistency or dependency problems? So far, our strategy is to deploy the BC model layer as separate JARs and the view layer in a WAR file, packaging everything in an EAR. Is it feasible to allow the developers to change one single page and generate a deployment archive for just that single page? If not, which arguments can I provide in a discussion to support the single-deployment point of view?
    Thanks for your time, and regards!
    Thiago

    Hi Thiago
    Interesting question and one that comes up from time to time with JEE applications. I've been doing some research on this issue, and recently blogged about how OC4J and BEA WebLogic handle this scenario. Hopefully the post and the reference to the OTN post give you more information, though I'd be interested if your research reveals a different approach.
    I'm also hoping this issue comes up at the OOW ADF Methodology chat among the JDev experts, it would be good for the experts to share their different approaches to this common issue.
    I know this doesn't give you a direct answer but hopefully will be useful.
    Cheers,
    CM.

  • Best Practice Documents on Sourcefire

    Hello,
    May I know if there are any deployment best practice documents on IPS (covering preprocessor settings etc.), Network AMP and FireAMP?
    Thanks.
    Regards,
    Akhtar

    Please check these answered links:
    Contract best practices
    good practices in SAP Value Contract
    Best Practice while creating Contract, Purchase Requisition, Purchase Order
    Best Practice unit of measurement usage in CONTRACT.

  • Best practice for managing a Windows 7 deployment with both 32-bit and 64-bit?

    What is the best practice for creating and organizing deployment shares in MDT for a Windows 7 deployment that has mostly 32-bit computers, but a few 64-bit computers as well? Is it better to create a single deployment share for Windows 7 and include both versions, or is it better to create two separate deployment shares? And what about 32-bit and 64-bit versions of applications?
    I'm currently leaning towards creating two separate deployment shares, just so that I don't have to keep typing (x86) and (x64) for every application I import, as well as making it easier when choosing applications in the Lite Touch installation. But I know each deployment share has the option to create both an x86 and x64 boot image, so that's why I am confused.

    Supporting two task sequences is way easier than supporting two shares. Two shares means two boot media, or maintaining a method of directing the user to one or the other. Everything needs to be imported or configured twice. Not to mention doubling storage space. MDT is designed to have multiple task sequences, why wouldn't you use them?
    Supporting multiple task sequences can be a pain, but not bad once you get a system. Supporting app installs intelligently is a large part of that. We have one folder per app install, with a wrapper vbscript that handles OS detection. If there are separate binaries, they are placed in x86 and x64 subfolders. Everything runs from one folder via the same command, "cscript install.vbs". So, import once, assign once, and forget it. It's the same install package we use for Altiris, and we'll be using a PowerShell version of it when we fully migrate to SCCM.
    Others handle x86 and x64 apps separately, and use the MDT app details to select what platform the app is meant for. I've done that, but we have a template for the vbscript wrapper and it's a standard process; I believe it's easier. YMMV.
    Once you get your apps into MDT, create bundles. Core build bundle, core deploy bundle, laptop deploy bundle, etcetera. Now you don't have to assign twenty apps to both task sequences, just one bundle. When you replace one app in the bundle, all TS'es are updated automatically. It's kind of the same mentality as Active Directory. Users, groups and resources = apps, bundles and task sequences.
    If you have separate build and deploy shares in your lab, great. If not, separate your apps into build and deploy folders in your lab MDT share. Use a selection profile to upload only your deploy side to production. In fact I separate everything (except drivers) into build and deploy folders on my lab server. Don't mix build and deploy, and don't mix Lab/QA and production. I also keep a "Retired" folder. When I replace an app, TS, OS, etcetera, I move it to the retired folder and append "RETIRED - " to the front of it so I can instantly spot it if it happens to show up somewhere it shouldn't.
    To me, the biggest "weakness" of MDT is its flexibility. There's literally a dozen different ways to do everything, and there are no fences to keep you on the path. If you don't create some sort of organization for yourself, it's very easy to get lost as things get complicated. Tossing everything into one giant bucket will have you pulling your hair out.
