ADF Deployment Granularity - Best Practices

Hi People,
If anyone can spare some time to discuss this, I would like some pointers on ADF application deployment best practices. For example, some of our customers complain about having to re-deploy the entire application EAR just to change a "rendered" condition on a single page, and about having to re-deploy the ADF BC model JARs even though the application has only been changed in the view layer.
What level of deployment granularity can we JDeveloper + ADF developers offer our customers without risking inconsistency or dependency problems? So far, our strategy is to deploy the BC model layer to separate JARs and the view layer in a WAR file, packaging everything in an EAR. Is it feasible to let developers change one single page and generate a deployment archive for just that page? If not, what arguments can I make in a discussion to support the single-deployment point of view?
Thanks for your time, and regards!
Thiago
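For reference, the JAR/WAR/EAR strategy described above typically produces an archive laid out like this (the application and file names are illustrative, not from any particular project):

```
MyApp.ear
├── META-INF/application.xml      (lists the contained modules)
├── lib/
│   └── MyAppModel.jar            (ADF BC entities, view objects, application modules)
└── MyAppViewController.war       (pages, page definitions, faces-config.xml)
```

Because standard JEE deployment replaces whole modules, the smallest consistent unit you can swap is one of these archives; replacing a single page inside the WAR would generally require an exploded deployment, which most production setups avoid.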

Hi Thiago
Interesting question, and one that comes up from time to time with JEE applications. I've been doing some research on this issue and recently blogged about how OC4J and BEA WebLogic handle this scenario. Hopefully the post and the reference to the OTN thread give you more information, though I'd be interested if your research reveals a different approach.
I'm also hoping this issue comes up at the OOW ADF Methodology chat among the JDev experts; it would be good for them to share their different approaches to this common issue.
I know this doesn't give you a direct answer but hopefully will be useful.
Cheers,
CM.

Similar Messages

  • JDeveloper ADF development & CVS best practices?

    Greetings all,
    My team has chosen to use CVS for our ADF source control. Are there any best practices or advice on the source control lifecycle using CVS & JDev?

    Shay Shmeltzer wrote:
    We would recommend that if you are starting a new development project you'll use Subversion instead of CVS.
    I'll echo that - if you're familiar with CVS, you'll find most SVN commands are similar (if not identical!), and you'll find that branching/merging operations and atomic commits make the problem areas of CVS a little easier.
    Some good discussion here:
    http://stackoverflow.com/questions/245290/subversion-vs-cvs

  • SSIS 2012 Deployment Standards/Best Practices

    Hi guys! I've been tasked with creating a set of standards for SSIS 2012, more specifically surrounding deployment standards. Is there any guide or something I can read on best practices so I can put a document together? Appreciate your help!

    None that I am aware of, because it is largely constrained by the business rules, project requirements, etc.
    In my view: use the Project Deployment Model, automate from the PowerShell side, protect the sensitive data, use domain proxies to run packages, mark variables as sensitive, use baselining, and get periodic health checks done.
    Arthur My Blog

  • Deployment guide / best practices for CR 2008 on Vista

    Post Author: plokolp
    CA Forum: Deployment
    Hi there,
    Is there a deployment guide for application developers who need to deploy CR2008 as part of their application?  There seems to be scattered information around the website, however it would be good to have an official, consolidated document that simply outlines what's needed for this situation.
    At the moment I'm trying to find information about whether to use a merge module or the .msi. The .msi seems to deploy a whole lot of stuff that simply isn't needed just to display reports within an application, yet BO seem to say this is the best method for deployment? It turns a relatively small install package into a monster!
    If it covered Vista certification as well, that would be great.
    Thanks,
    Rod

    Post Author: plokolp
    CA Forum: Deployment
    Hello?
    Does anyone have an answer to this question? Does anyone from Business Objects look at these forums?
    Best regards, Rod

  • ADF BC 11 - best practice question

    Hi,
    I think that many developers/architects working with development in the SOA area are facing the same problem we are. The SOA architecture promises decoupled systems that are highly adaptable and flexible. However, most systems also have user interfaces, and those need
    to be fast, otherwise the system will not be successful. Domain logic should only be kept in one place, and it needs to be accessible from BPEL, user interfaces, and, for example, other services as well. The dilemma is that we need both the decoupling we get from web services and the
    speed we get from, for example, calling Java directly.
    But here is more about our actual challenge :
    We are building a system based on BPEL 10.1.3 and ADF 11g. The reason for the versions is historical: Oracle released SOA 11g in the middle of our service development, before we had really started with Faces.
    The system must be able to adapt data from various sources (with different structures), and we need a central place for "domain" logic that can be accessed from both BPEL and the UI (Faces). The ADF Faces forms should be independent of the data sources.
    Right now we are working with a model where a POJO implements a certain piece of domain logic. The POJO can be exposed as a data control and as a web service as well. The POJO calls ADF BC application modules when it needs to operate on data, or in some cases Enterprise Service Bus services.
    We have made several ADF Faces forms that operate purely on the POJO, either through calls from the backing bean or by binding to the data control. That works; however, I read the article from Steve Muench that says you should never instantiate
    the application module with Configuration.createRootApplicationModule() from the backing bean. Since Faces is not aware of the application module, though, it will not try to instantiate it. Will this approach still potentially cause us problems regarding conflicts, performance, ...?
    Regards,
    Jan
    Edited by: jsteenbe on Feb 18, 2010 11:45 AM

    Hi,
    the recommendation not to use Configuration.createRootApplicationModule() is because it creates a new database connection, which is expensive. Instead you could expose the AM as a data control and then use the ADF binding layer to access the data control, from where you access the business logic. This streamlines the approach.
    However, your use case sounds much more like a candidate for services than for POJO beans, so I think services are the way to go to be agile enough in the future.
    Frank
    Edited by: Frank Nimphius on Feb 18, 2010 1:38 PM

  • Test granularity - best practice

    We're struggling with a couple of issues. We want to write our LabVIEW tests in a generic fashion such that they simply perform a task and, unless it's a very simple task, don't make the pass/fail determination; we want to leave that up to TestStand. We don't want any of our LabVIEW tests to have hard-coded ranges in them. This brings up a number of questions. Since we (mostly) want TestStand to orchestrate the parameter ranges, test sequence, pass/fail, etc., is there a best/suggested method of capturing test scenarios? Obviously, at some point, you have to identify the parameters/ranges for the various tests. Is it best to hard-code them into the TestStand sequence file associated with a particular LRU, or is there a nice XML, .ini, or .txt format that TestStand can read to populate a series of local variables? Hopefully someone understands my babble and can provide a starting point.

    TestStand provides the property loader step type that you can use to modify limits for a particular board type. Whether you use that or hard code limits into a test sequence is really application specific. If you have an LRU (or what I would call a UUT or Unit Under Test) that has a large number of different varieties, it would often be simpler to use the property loader. Doing this, you would only have to write a single sequence file. For a new board, you would then just create a new .txt, .xls, etc. and distribute that. I don't happen to be in that sort of situation. The vast majority of boards that I have to test are unique and the limits are fixed in the sequence file. I do have several board types where stuffing options make certain whole tests optional and I find it more convenient to use pre-conditions in the sequence file for that situation. For example, perform this step if serial number prefix = 'abc'.
    You are correct in removing any pass/fail criteria from the LabVIEW code and letting TestStand do that work but the actual mechanics of how TestStand does it should be approached on a case-by-case basis. It helps to be in touch with the product development teams to see what other flavors of a particular product are planned. The extra overhead of loading limits from an external file would not be justified for something that is a one-off and you are trying to optimize the test time.
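    The approach discussed above, a generic measurement routine plus externally defined limits, can be sketched outside of TestStand too. This is a minimal illustration in Python; the .ini layout, section name, and key names are assumptions for the sketch, not TestStand's actual property-loader format:

```python
# Sketch: keep limits out of the measurement code and let the
# sequencer (here, a plain function) decide pass/fail.
import configparser

def load_limits(path):
    """Read per-test numeric limits from an INI file, one section per test."""
    cfg = configparser.ConfigParser()
    cfg.read(path)
    return {
        section: (cfg.getfloat(section, "low"), cfg.getfloat(section, "high"))
        for section in cfg.sections()
    }

def evaluate(measurement, low, high):
    """Generic pass/fail check: the measurement routine never sees the limits."""
    return low <= measurement <= high
```

    The point is only the separation: because the measurement code never contains a limit, a new board variant means a new limits file, not a new test.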

  • Oracle BPM Best Practices

    Hi all,
    Anybody has any information on the Oracle BPM Best Practices?
    Any guide?

    All,
    I was trying to find a developer's guide for using Oracle BPM Suite (11g). I found one at the following link; however, it looks like a pretty detailed one...
    http://download.oracle.com/docs/cd/B31017_01/integrate.1013/b28981/toc.htm
    Can someone help me find any other flavors of the developer's guide? I am looking for the following:
    1. Methods of work: best practices for the design and development of BPM process models.
    2. Naming conventions for process modeling: best practices.
    3. Coding standards for process modeling (JDeveloper).
    4. A guide with FAQs for connecting/publishing process models to the MDS database.
    5. Deployment standards: best practices.
    6. Infrastructure: recommendations for scale-out deployment on Linux vs. Windows OS.
    Regards,
    Dinesh Reddy

  • Best Practice of CRM 6.0/ RKT 2007

    Hi Experts,
    We have installed the RKT CRM 2007 server, that is, CRM 6.0.
    Now when we try to upload the CRM Best Practices for CRM 5.2 (the latest available on the market), it gives the error 'Current Software Component Vector does not match' and we are not able to upload the Best Practices.
    Can anyone help us deploy the Best Practices, or suggest an alternative?
    Regards
    Pulkit

    Hey, that's news to us. We are on our way to implementing CRM 5.2, but after learning of the release of CRM 2007, we are pushing SAP to give us the same.
    When did you guys implement 2007? It must be very recent. And were you using SAP CRM before it, or is this the first time?
    Would you have any advice for us, who are on the road to CRM 2007?
    Regards,
    Tariq

  • Bundling Best Practices

    Good morning All,
    I know that this has been asked before, and my apologies for doing so. However, I would like to provide my customer with something that describes best practices when it comes to building and deploying bundles and policies (which can be handled in another posting), especially in relation to the placement of these bundles: whether they should be at the device, user, or workstation level (if an enterprise-wide application or policy).
    There is a debate within the organization about whether policies and bundles should be deployed out to all geographic locations instead of just being deployed from the top level down. In this case they have Group Policies which go out to every PC in the organization, and instead of doing that from the workstation level, they deploy these policies from each geographic locale. This seems not to be according to best practices, or at least what I have always assumed to be best practices.
    I have seen this from the Documentation site, http://www.novell.com/documentation/zenworks11/ - System Planning, Deployment, and Best Practices Guide, and I have found some others looking to put together a packet for the customer.
    Thank you,
    -DS

    I know that the University of Uppsala has written some up, but they are in Swedish. Anyway, as for your question, I would say that it depends.
    Your main goal should be management by exception.
    Are policies, bundles etc different based on geography?
    Anders Gustafsson (NKP)
    The Aaland Islands (N60 E20)
    Have an idea for a product enhancement? Please visit:
    http://www.novell.com/rms

  • Best Practice for Deploying ADF application

    I am tasked with developing a best or preferred practice for deploying a large ADF application. Background: we are in the process of redeveloping a UI for a large system. We have broken the system down into subsystems, and each subsystem's UI will be an ADF application. This is a move from an MS .NET front end. The backend (batch processes etc.) is being developed in Java. So my question is: if I have several ADF projects, one per subsystem, plus common components that they all use, what is the best practice to compile, package, and deploy? The deployment will be to a WebLogic server or servers (cluster).
    We have a team of at least 40-50 developers worldwide, so we are looking for an automated build and deploy, and we would like to follow Oracle best practice. So far I have read Deploying ADF Applications (http://download.oracle.com/docs/cd/E15523_01/web.1111/e15470/deploy.htm#BGBJHGFH) and have followed the links. I have also looked at the ADF evangelist blogs; there is lots of chatter about ojdeploy. My concern about ojdeploy is that dependent files are also compiled at the same time. I expected that we would want shared dependent files compiled only once (is that a valid concern?).
    So when we build the source out of Subversion (ojdeploy? Ant?), what is then the best practice to deploy to a WebLogic server (WLST? admin console?)? Again, we want it to be automated.
    Thank you in advance for replies.
    RK

    Rule 1: Never use the "Automatically Expose UI Componentes in a New Managed Bean" option, create your bindings manually;
    Rule 2: Rule 1 is always right;
    Rule 3: In doubts, refer to rule 2.
    You may also want to check out :
    http://groups.google.com/group/adf-methodology
    And :
    http://www.oracle.com/technology/products/jdev/collateral/4gl/papers/Introduction_Best_Practices.pdf

  • Jdev101304 SU5 - ADF Faces - Web app deployment best practice|configuration

    Hi Everybody:
    1. We have several web applications that provide a service/product used for public administration purposes.
    2. The apps are using ADF Faces and ADF BC.
    3. All of the apps participate in JavaSSO.
    4. The web apps are deployed on OnDemand servers.
    5. We have noticed that, with the increase of users lately, the sessions created by the middle tier in the database stay inactive but are never destroyed or removed.
    6. Even when we only sign into the apps using JavaSSO and perform no transactions (like inserting or deleting something), when we query v$session in the database the number of inactive sessions is always increasing, until the server collapses.
    So, we want to know whether this is an issue with the configuration of the Application Module's properties, and whether there are some "best practices" you could provide us for configuring a web application to avoid this behavior.
    The only configuration we found recommended for web apps is to set jbo.locking.mode to optimistic, but this doesn't correct the "increasing inactive sessions" problem.
    Please help us find some documentation or other resources to configure our apps correctly.
    Thanks in advance.
    Edited by: alopez on Jan 8, 2009 12:27 PM

    hi alopez
    Maybe this can help, "Understanding Application Module Pooling Concepts and Configuration Parameters"
    see http://www.oracle.com/technology/products/jdev/tips/muench/ampooling/index.html
    success
    Jan Vervecken
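    For anyone landing here later: the article above centers on the application module pool parameters, which govern when inactive AM instances (and their database connections) are released. A sketch of the relevant settings in bc4j.xcfg (or as -D system properties); the values below are illustrative only, not a recommendation, and should be tuned against your own load tests:

```
jbo.ampool.initpoolsize=10
jbo.ampool.minavailablesize=5
jbo.ampool.maxavailablesize=25
jbo.ampool.maxinactiveage=600000
jbo.ampool.timetolive=3600000
jbo.ampool.monitorsleepinterval=600000
jbo.locking.mode=optimistic
```

    The age/interval values are in milliseconds; the pool monitor removing instances past their inactive age is what frees the connections that otherwise pile up as inactive v$session rows.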

  • Best practice for RDGW placement in RDS 2012 R2 deployment

    Hi,
    I have been setting up a RDS 2012 R2 farm deployment and the time has come for setting up the RDGW servers. I have a farm with 4 SH servers, 2 WA servers, 2 CB servers and 1 LS.
    Farm works great for LAN and VPN users.
    Now i want to add two domain joined RDGW servers.
    The question is: I've read a lot on TechNet and different sites about how to set the thing up, but no one mentions any best practices for where to place them.
    Should i:
    - set up WAP in my DMZ with ADFS in LAN, then place the RDGW in the LAN and reverse proxy in
    - place RDGW in the DMZ, opening all those required ports into the LAN
    - place the RDGW in the LAN, then port forward port 443 into it from internet
    Any help is greatly appreciated.

    Hi,
    The deployment totally depends on your company's requirements, as many things have to be taken care of, such as hardware, network, security, and other related matters. Personally, to set up the RD Gateway server, I would not recommend the 1st option. As per my research, for the best result you can use option 2 (place the RDG server in the DMZ and then allow the required ports), because that way the outside network can't connect directly to your internal servers, and it is harder for attackers to break into the network. A perimeter network (DMZ) is a small network that is set up separately from an organization's private network and the Internet. In a network, the hosts most vulnerable to attack are those that provide services to users outside of the LAN, such as e-mail, web, RD Gateway, RD Web Access, and DNS servers. Because of the increased potential of these hosts being compromised, they are placed into their own sub-network, called a perimeter network, in order to protect the rest of the network if an intruder were to succeed. You can refer to the article beneath for more information.
    RD Gateway deployment in a perimeter network & Firewall rules
    http://blogs.msdn.com/b/rds/archive/2009/07/31/rd-gateway-deployment-in-a-perimeter-network-firewall-rules.aspx
    Hope it helps!
    Thanks.
    Dharmesh Solanki

  • SCCM 2012 Update deployment best practices?

    I have recently upgraded our environment from SCCM 2007 to 2012. In switching over from WSUS to SCCM Updates, I am having to learn how the new deployments work. I've got the majority of it working just fine: Microsoft updates, Adobe updates (via SCUP), etc.
    A few users have complained that their systems seem to be taking up more processing power during the update scans, so I am wondering what the best practices are for this...
    I am deploying all Windows 7 updates (32 and 64 bit) to a collection with all Windows 7 computers (32 and 64 bit)
    I am deploying all Windows 8 updates (32 and 64 bit) to a collection with all Windows 8 computers (32 and 64 bit)
    I am deploying all office updates (2010, and 2013) to all computers
    I am deploying all Adobe updates to all computers... etc.
    I'm wondering if it is best to be more granular than that? For example: should I deploy Windows 7 32-bit patches to only Windows 7 32-bit machines? Should I deploy Office 2010 Updates only to computers with Office 2010?
    It's certainly easier to deploy most things to everyone and let the update scan take care of it... but I'm wondering if I'm being too general?

    I haven't considered cleaning it up yet because the server has only been active for a few months... and I've only connected the bulk of our domain computers to it a few weeks ago. (550 PCs)
    I checked several PCs, some that were complaining and some not. I'm not familiar with what the standard size of that file should be, but they seemed to range from 50 MB to 130 MB. My own is 130 MB, but mine is 64-bit and the others are not; not sure if that makes a difference.
    I briefly read over that website. I'm confused; it was my impression that WSUS is no longer used and only needs to be installed so SCCM can use some of its functions for its own purposes. I thought the PCs no longer even connected to it.
    I'm running the WSUS cleanup wizard now, but I'm not sure it'll clean anything because I've never approved a single update in it. I do everything in the Software Update Point in SCCM, and I've been removing expired and superseded updates fairly regularly.
    The wizard just finished, a few thousand updates deleted, disk space freed: 0 MB.
    I found a script here in technet that's supposed to clean out old updates..
    http://blogs.technet.com/b/configmgrteam/archive/2012/04/12/software-update-content-cleanup-in-system-center-2012-configuration-manager.aspx
    Haven't had the chance to run it yet.

  • Best practice for managing a Windows 7 deployment with both 32-bit and 64-bit?

    What is the best practice for creating and organizing deployment shares in MDT for a Windows 7 deployment that has mostly 32-bit computers but a few 64-bit computers as well? Is it better to create a single deployment share for Windows 7 and include both versions, or is it better to create two separate deployment shares? And what about 32-bit and 64-bit versions of applications?
    I'm currently leaning towards creating two separate deployment shares, just so that I don't have to keep typing (x86) and (x64) for every application I import, and to make it easier when choosing applications in the Lite Touch installation. But I know each deployment share has the option to create both an x86 and an x64 boot image, so that's why I am confused.

    Supporting two task sequences is way easier than supporting two shares. Two shares means two boot media, or maintaining a method of directing the user to one or the other. Everything needs to be imported or configured twice, not to mention doubling storage space. MDT is designed to have multiple task sequences; why wouldn't you use them?
    Supporting multiple task sequences can be a pain, but it's not bad once you get a system. Supporting app installs intelligently is a large part of that. We have one folder per app install, with a wrapper VBScript that handles OS detection. If there are separate binaries, they are placed in x86 and x64 subfolders. Everything runs from one folder via the same command, "cscript install.vbs". So: import once, assign once, and forget it. It's the same install package we use for Altiris, and we'll be using a PowerShell version of it when we fully migrate to SCCM.
    Others handle x86 and x64 apps separately and use the MDT app details to select which platform the app is meant for. I've done that, but since we have a template for the VBScript wrapper and it's a standard process, I believe ours is easier. YMMV.
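    The wrapper idea above (one folder per app, one fixed command, OS detection inside the wrapper) can be sketched in Python rather than VBScript, purely to show the shape; the folder layout and installer name are assumptions for illustration:

```python
import platform
import subprocess
from pathlib import Path

def installer_for(app_dir):
    """Pick the x86 or x64 subfolder based on the machine architecture."""
    arch = "x64" if platform.machine().endswith("64") else "x86"
    return Path(app_dir) / arch / "setup.exe"

def install(app_dir):
    # Every app is invoked the same way, so MDT (or Altiris, or SCCM)
    # only ever needs one command per app folder.
    subprocess.run([str(installer_for(app_dir))], check=True)
```

    On a 64-bit machine, install("Apps/SomeApp") would resolve to the x64 subfolder's setup.exe; the deployment tool never needs to know which.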
    Once you get your apps into MDT, create bundles: core build bundle, core deploy bundle, laptop deploy bundle, etcetera. Now you don't have to assign twenty apps to both task sequences, just one bundle. When you replace one app in the bundle, all TSes are updated automatically. It's kind of the same mentality as Active Directory: users, groups, and resources = apps, bundles, and task sequences.
    If you have separate build and deploy shares in your lab, great. If not, separate your apps into build and deploy folders in your lab MDT share, and use a selection profile to upload only your deploy side to production. In fact, I separate everything (except drivers) into build and deploy folders on my lab server. Don't mix build and deploy, and don't mix lab/QA and production. I also keep a "Retired" folder: when I replace an app, TS, OS, etcetera, I move it to the retired folder and append "RETIRED - " to the front of it so I can instantly spot it if it happens to show up somewhere it shouldn't.
    To me, the biggest "weakness" of MDT is its flexibility. There are literally a dozen different ways to do everything, and there are no fences to keep you on the path. If you don't create some sort of organization for yourself, it's very easy to get lost as things get complicated. Tossing everything into one giant bucket will have you pulling your hair out.

  • Best Practice for SRST deployment at a remote site

    What is the best practice for a SRST deployment at a remote site? Should a separate router such as a 3800 series be deployed for telephony in addition to another router to be deployed for Data? Is there a need for 2 different devices?

    Hi Brian,
    This is typically done all on one ISR router at the remote site :) There are two flavors of SRST. Here is the feature comparison:
    SRST Fallback
    This feature enables routers to provide call-handling support for Cisco Unified IP phones if they lose connection to remote primary, secondary, or tertiary Cisco Unified Communications Manager installations or if the WAN connection is down. When Cisco Unified SRST functionality is provided by Cisco Unified CME, provisioning of phones is automatic and most Cisco Unified CME features are available to the phones during periods of fallback, including hunt groups, call park, and access to Cisco Unity voice messaging services using the SCCP protocol. The benefit is that Cisco Unified Communications Manager users will gain access to more features during fallback, without any additional licensing costs.
    Comparison of Cisco Unified SRST and Cisco Unified CME in SRST Fallback Mode
    Cisco Unified CME in SRST Fallback Mode:
    • First supported with Cisco Unified CME 4.0: Cisco IOS Software 12.4(9)T
    • IP phones re-home to Cisco Unified CME if Cisco Unified Communications Manager fails. CME in SRST allows IP phones to access some advanced Cisco Unified CME telephony features not supported in traditional SRST
    • Support for up to 240 phones
    • No support for Cisco VG248 48-Port Analog Phone Gateway registration during fallback
    • Lack of support for alias command
    • Support for Cisco Unity® unified messaging at remote sites (Distributed Exchange or Domino)
    • Support for features such as Pickup Groups, Hunt Groups, Basic Automatic Call Distributor (BACD), Call Park, softkey templates, and paging
    • Support for Cisco IP Communicator 2.0 with Cisco Unified Video Advantage 2.0 on same computer
    • No support for secure voice in SRST mode
    • More complex configuration required
    • Support for digital signal processor (DSP)-based hardware conferencing
    • E-911 support with per-phone emergency response location (ERL) assignment for IP phones (Cisco Unified CME 4.1 only)
    Cisco Unified SRST:
    • Supported since Cisco Unified SRST 2.0 with Cisco IOS Software 12.2(8)T5
    • IP phones re-home to SRST router if Cisco Unified Communications Manager fails. SRST allows IP phones to have basic telephony features
    • Support for up to 720 phones
    • Support for Cisco VG248 registration during fallback
    • Support for alias command
    • Lack of support for features such as Pickup Groups, Hunt Groups, Call Park, and BACD
    • No support for Cisco IP Communicator 2.0 with Cisco Unified Video Advantage 2.0
    • Support for secure voice during SRST fallback
    • Simple, one-time configuration for SRST fallback service
    • No per-phone emergency response location (ERL) assignment for SCCP Phones (E911 is a new feature supported in SRST 4.1)
    http://www.cisco.com/en/US/prod/collateral/voicesw/ps6788/vcallcon/ps2169/prod_qas0900aecd8028d113.html
    These SRST hardware-based restrictions are very similar to the number of supported phones with CME. Here is the actual breakdown:
    • Cisco 880 SRST Series Integrated Services Router: up to 4 phones
    • Cisco 1861 Integrated Services Router: up to 8 phones
    • Cisco 2801 Integrated Services Router: up to 25 phones
    • Cisco 2811 Integrated Services Router: up to 35 phones
    • Cisco 2821 Integrated Services Router: up to 50 phones
    • Cisco 2851 Integrated Services Router: up to 100 phones
    • Cisco 3825 Integrated Services Router: up to 350 phones
    • Cisco Catalyst® 6500 Series Communications Media Module (CMM): up to 480 phones
    • Cisco 3845 Integrated Services Router: up to 730 phones
    *The number of phones supported by SRST has been changed to multiples of 5 starting with Cisco IOS Software Release 12.4(15)T3.
    From this excellent doc;
    http://www.cisco.com/en/US/prod/collateral/voicesw/ps6788/vcallcon/ps2169/data_sheet_c78-485221.html
    Hope this helps!
    Rob
