SOA/OSB deployment best practices in a production environment

Hi All
I just wanted to know the best practices followed in a production environment for deploying OSB and SOA code. As you are aware, both require libraries from their respective tooling (JDeveloper/SOA Suite for SOA, OEPE/OSB for OSB). Should one extract those libraries and package them with the ANT scripts (SOA would need its internal ANT scripts and quite a few libraries bundled, while OSB needs only a few OEPE and OSB libraries), or should we simply use one of the options below?
1) Use the production runtime (SOA server and OSB server) to build and deploy the code. OEPE would not be present here, so we would just deploy an already built sbconfig.jar (created in a local environment where OEPE and OSB are installed). The code is checked out from a repository and transferred to this Linux machine.
2) Use a Windows machine (which has access to the production environment) with JDeveloper, OEPE and OSB installed to build and deploy the code to the production server. The code is checked out from a repository.
Please share your personal experiences with deployments in PROD. Thanks a lot!

There are two common approaches for deploying OSB and SOA code.
1. Use a dedicated build-and-deployment machine that has access to all production environments (wherever deployment needs to be done). Install all the required software (OEPE, OSB, etc.) on it and deploy the code remotely.
2. Bundle all the build- and deployment-related libraries, ship them as a deployment package to the target server, and run the deployment there.
The most commonly followed approach is #1.
Regards
Vivek
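
To illustrate approach #1: neither JDeveloper nor OEPE has to be installed on the production servers themselves. From the build machine, an already built sbconfig.jar can be pushed into OSB with a WLST script. The sketch below is modelled on Oracle's sample OSB import scripts and is only a starting point; the admin URL, credentials, jar path and error handling are placeholders, and the MBean names should be verified against your OSB version.

# import_sbconfig.py - run with <OSB_HOME>/common/bin/wlst.sh import_sbconfig.py
from java.lang import String, System

ADMIN_URL  = 't3://prod-osb-admin:7001'   # placeholder
ADMIN_USER = 'weblogic'                   # placeholder
ADMIN_PWD  = 'welcome1'                   # placeholder
JAR_PATH   = '/deploy/sbconfig.jar'       # placeholder

def read_binary(path):
    f = open(path, 'rb')
    data = f.read()
    f.close()
    return data

connect(ADMIN_USER, ADMIN_PWD, ADMIN_URL)
domainRuntime()

# Every OSB configuration change happens inside a named session that is activated at the end.
sessionName = 'ImportSession_%s' % System.currentTimeMillis()
sessionMBean = findService('SessionManagement',
                           'com.bea.wli.sb.management.configuration.SessionManagementMBean')
sessionMBean.createSession(sessionName)

osbConfig = findService(String('ALSBConfiguration.').concat(sessionName),
                        'com.bea.wli.sb.management.configuration.ALSBConfigurationMBean')
osbConfig.uploadJarFile(read_binary(JAR_PATH))

importPlan = osbConfig.getImportJarInfo().getDefaultImportPlan()  # import everything in the jar
result = osbConfig.importUploaded(importPlan)

if result.getFailed().isEmpty():
    sessionMBean.activateSession(sessionName, 'Automated sbconfig.jar import')
    print 'Import activated'
else:
    print 'Import failed for:', result.getFailed()
    sessionMBean.discardSession(sessionName)

disconnect()

For the SOA side, composites are usually deployed from the same build machine with the ant-sca-deploy.xml script shipped with SOA Suite (passing the server URL, sar location and credentials as properties), so the production servers only ever receive finished artifacts.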

Similar Messages

  • Best Practice for Production environment

    Hello everyone,
    Can someone share the best practices for a production environment? Or is there an SAP standard best practice to follow in a production landscape?
    I understand there are best practices available for implementation, migration and upgrade, but I was unable to find one for a productive landscape.
    Thanks.

    Hi Siva,
    What best practice are you looking for? If you can be more specific, we can provide an appropriate response.
    From my Basis experience, some of the best practices are:
    1) The productive landscape should provide high availability to the business. For this you may set up DR, HA, or both.
    2) Backups should be configured, and restores from them should already have been tested.
    3) All monitoring should be set up, viz. application, OS and DB.
    4) The productive client should not be modifiable.
    5) Users in the production landscape should have appropriate authorizations based on SoD. There should not be any SoD conflicts.
    6) Transports to production should be tightly controlled and moved only with the appropriate change board approvals.
    7) Relevant database and OS security parameters should be tested and enabled before go-live.
    8) Pre-go-live and post-go-live checks should have been performed on the production system.
    9) EWA (EarlyWatch Alert) should be configured at least for the production system.
    10) Production system availability via DR should have been tested.
    Hope this helps.
    Regards,
    Deepak Kori

  • Grid Control deployment best practices

    Looking for a document on Grid Control deployment best practices; interested to know more about monitoring and managing more than 300 databases.

    Hi,
    have a search for the following document:
    MAA_WP_10gR2_EnterpriseManagerBestPractices.pdf
    regards
    Alan

  • Portal server deployment best practices

    Does anyone out there know the right way to deploy Portal Server into a production environment, instead of manually copying all the folders and running the necessary commands? Is there a better way to deploy Portal Server? Any best practices I should follow for deploying it?

    From the above, what I understood is that you would like to transfer your existing Portal Server configuration to the new one. I don't think there is an easy method to do it.
    One way you can do it is by taking an LDIF backup from the existing Portal Server.
    For that, first install Portal Server on the new box, then export the existing Portal Server's directory data using
    # /opt/netscape/directory4/slapd-<host>/db2ldif /tmp/profile.ldif
    Edit the /tmp/profile.ldif file and replace <hostname> and <domain name> with the new system's values.
    Copy this file to the new server and import it using
    # /opt/netscape/directory4/slapd-<host>/ldif2db -i /tmp/profile.ldif
    Also copy the file slapd.user_at.conf under /opt/netscape/directory4/slapd-<hostname>/config to the new system.
    Restarting the server lets you access the Portal Server with the configuration of the old one.

  • Best Practice for Production IDM setup

    Hi, what is the best practice for setting up production IDM:
    1. Connect IDM prod to ECC DEV, QA and Prod, or
    2. Connect IDM prod to ECC Prod only, and connect IDM dev to ECC Dev and QA?
    Please also specify pros and cons for both options if possible.
    Thanks in advance,
    Farhan

    We run our IDM installation as per your option 2 (prod and non-prod on separate instances).
    We use HCM as the source of truth in production and have a strict policy of not allowing non-HCM-based user accounts. HCM creates the SU01 record and the details are downloaded to IDM through the LDAP extract. Access is provisioned based on roles attached to the HCM position in IDM. In dev/test/UAT we create user logins in IDM and push the details out.
    Our thinking was that we definitely needed a testing environment for development and patch testing, and it needs to be separate from production. It was also ideal to use this second environment for dev/test/UAT since we are in the middle of a major SAP project rollout, creating hundreds of test and training users with various roles, and prefer to keep this out of a production instance.
    Lately we also created a sandpit environment, since I found that I could not do destructive testing or development in the dev/test/UAT instance because we were becoming reliant on that environment being available. It is almost a second production instance, since we also set the policy that all changes are made through IDM and no direct SU01 changes are permitted.
    Have a close look at your usage requirements before deciding which structure works best for you.

  • Jdev101304 SU5 - ADF Faces - Web app deployment best practice|configuration

    Hi Everybody:
    1. We have several web applications that provide a service/product used for public administration purposes.
    2. The apps use ADF Faces and ADF BC.
    3. All of the apps participate in JavaSSO.
    4. The web apps are deployed on on-demand servers.
    5. We have noticed that, with the increase in users on these dates, the sessions created by the middle tier in the database stay inactive but are never destroyed or removed.
    6. Even when we only sign into the apps using JavaSSO and perform no transactions (like inserting or deleting something), querying v$session in the database shows the number of inactive sessions constantly increasing, until the server collapses.
    So we want to know whether this is an issue with the configuration of the Application Module's properties, and whether there are some best practices you could share for configuring a web application to avoid this behavior.
    The only configuration we found recommended for web apps is setting jbo.locking.mode to optimistic, but this doesn't correct the increasing-inactive-sessions problem.
    Please help us find documentation or another resource to correctly configure our apps.
    Thanks in advance.
    Edited by: alopez on Jan 8, 2009 12:27 PM

    hi alopez
    Maybe this can help, "Understanding Application Module Pooling Concepts and Configuration Parameters"
    see http://www.oracle.com/technology/products/jdev/tips/muench/ampooling/index.html
    success
    Jan Vervecken
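    For quick reference, the knobs that article discusses are the ADF BC application module pool and connection pool parameters, set in bc4j.xcfg or as JVM system properties. The values below are only illustrative placeholders (check the article and the defaults for your release); the ones most relevant to ever-growing inactive v$session entries are the pool-monitor/inactive-age settings and jbo.doconnectionpooling:
    -Djbo.ampool.maxavailablesize=25          (upper bound on pooled AM instances, and hence on held DB sessions)
    -Djbo.ampool.minavailablesize=5
    -Djbo.ampool.maxinactiveage=600000        (ms an idle AM instance may stay pooled before the monitor removes it)
    -Djbo.ampool.monitorsleepinterval=600000  (ms between pool monitor runs)
    -Djbo.doconnectionpooling=true            (hand the JDBC connection back to the pool when the AM is released)
    -Djbo.txn.disconnect_level=1              (commonly set together with jbo.doconnectionpooling)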

  • Hotfix Management | Best Practices | WCS | J2EE environment

    Hi All,
    Trying to work out some best practices around hotfix management in a J2EE environment. After some struggle, we managed to handle the tracking of individual hotfixes using one of our home-grown tools. However, the issue remains of how to automate the build of these hotfixes, rather than doing it manually as we currently do.
    Suppose we need to hotfix a particular jar file in a production environment; I would need to understand how to build 'just' that particular jar. I understand we can label the related code (which in this case could be just a few Java files). Suppose this jar contains 10 files, of which 2 need to be hotfixed. The challenge is to come up with a build script which builds:
    - ONLY this jar
    - the jar with 8 old files and 2 new files.
    - the jar using whatever dependent jars are required.
    - the hotfix build script needs to be generic enough to handle the hotfix build of any jar in the system.
    Pointers, ideally in line with a WCS environment, would be very much appreciated!
    Regards,
    Mrinal Mukherjee

    Moderator Action:
    This post has been moved from the SysAdmin Build & Release Engineering forum
    To the Java EE SDK forum, hopefully for closer topic alignment.
    To the O.P.:
    I don't think device driver build/release engineering is what you were intending.
    Additionally, your partial post that was accidentally created as a duplicate to this one
    has been removed before it confuses anyone.
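
    For what it's worth, one way to meet the requirements in the original post is to recompile only the changed sources against the currently deployed jar and its dependencies, then update just those entries in a copy of the old jar. Below is a minimal sketch; the paths, jar names and source lists are hypothetical, and any build tool (including Ant) can wrap the same two javac/jar steps.

    # hotfix_build.py - hedged sketch of a generic single-jar hotfix build
    import os
    import shutil
    import subprocess
    import sys

    def build_hotfix(old_jar, changed_sources, dependency_dir, work_dir):
        classes_dir = os.path.join(work_dir, 'classes')
        if not os.path.isdir(classes_dir):
            os.makedirs(classes_dir)

        # Classpath = the jar being patched plus every dependency jar it needs.
        deps = [os.path.join(dependency_dir, j)
                for j in os.listdir(dependency_dir) if j.endswith('.jar')]
        classpath = os.pathsep.join([old_jar] + deps)

        # 1. Compile only the changed .java files (e.g. the 2 of 10 that were fixed).
        subprocess.check_call(['javac', '-cp', classpath, '-d', classes_dir] + list(changed_sources))

        # 2. Start from the deployed jar so the untouched classes stay byte-identical.
        hotfix_jar = os.path.join(work_dir, 'hotfix-' + os.path.basename(old_jar))
        shutil.copy(old_jar, hotfix_jar)

        # 3. 'jar uf' replaces only the entries that were just recompiled.
        subprocess.check_call(['jar', 'uf', hotfix_jar, '-C', classes_dir, '.'])
        return hotfix_jar

    if __name__ == '__main__':
        # Hypothetical usage:
        #   python hotfix_build.py lib/app.jar src/com/acme/Foo.java src/com/acme/Bar.java
        print(build_hotfix(sys.argv[1], sys.argv[2:], 'lib', 'build/hotfix'))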

  • SCCM 2012 Update deployment best practices?

    I have recently upgraded our environment from SCCM 2007 to 2012. In switching over from WSUS to SCCM updates, I am having to learn how the new deployments work. I've got the majority of it working just fine: Microsoft updates, Adobe updates (via SCUP), etc.
    A few users have complained that the systems seem to be taking up more processing power during the update scans... I am wondering what the best practices are for this...
    I am deploying all Windows 7 updates (32 and 64 bit) to a collection with all Windows 7 computers (32 and 64 bit)
    I am deploying all Windows 8 updates (32 and 64 bit) to a collection with all Windows 8 computers (32 and 64 bit)
    I am deploying all office updates (2010, and 2013) to all computers
    I am deploying all Adobe updates to all computers... etc.
    I'm wondering if it is best to be more granular than that? For example: should I deploy Windows 7 32-bit patches to only Windows 7 32-bit machines? Should I deploy Office 2010 Updates only to computers with Office 2010?
    It's certainly easier to deploy most things to everyone and let the update scan take care of it... but I'm wondering if I'm being too general?

    I haven't considered cleaning it up yet because the server has only been active for a few months... and I've only connected the bulk of our domain computers to it a few weeks ago. (550 PCs)
    I checked several PCs, some that were complaining and some not. I'm not familiar with what the standard size of that file should be, but they seemed to range from 50 MB to 130 MB. My own is 130 MB, but mine is 64-bit and the others are not; I'm not sure if that makes a difference.
    I briefly read over that website. I'm confused; it was my impression that WSUS was no longer used and only needed to be installed so SCCM could use some of its functions for its own purposes. I thought the PCs no longer even connected to it.
    I'm running the WSUS cleanup wizard now, but I'm not sure it'll clean anything because I've never approved a single update in it. I do everything in the Software Update Point in SCCM, and I've been removing expired and superseded updates fairly regularly.
    The wizard just finished, a few thousand updates deleted, disk space freed: 0 MB.
    I found a script here on TechNet that's supposed to clean out old updates:
    http://blogs.technet.com/b/configmgrteam/archive/2012/04/12/software-update-content-cleanup-in-system-center-2012-configuration-manager.aspx
    Haven't had the chance to run it yet.

  • Deployment from DEV to Production environment

    Hi,
    What are the best possible ways to deploy visual web parts and pages from a development environment to a production environment?
    As I am very new to SharePoint, I would like the safest methods for deploying into production. Please also suggest the prerequisites.
    I have a solution with a number of visual web parts in the development environment which is working fine. I need to move the visual web parts to production (an existing site), so please suggest the safest methods.
    Thanks and Regards,
    Satish
    Sathiish Reddy

    Hi Satish,
    From your description, my understanding is that you want to deploy your solution from the development environment to the production environment in the safest way.
    The links below about moving content from one environment to another could be helpful to you:
    SharePoint 2013 Dev/Test/Production environment - Moving content
    http://sharepoint.stackexchange.com/questions/78483/sharepoint-2013-dev-test-production-environment-moving-content
    How to deploy webpart on production (not debugging) server?
    http://sharepoint.stackexchange.com/questions/17779/how-to-deploy-webpart-on-production-not-debugging-server
    You could also have a look at this link about the SharePoint Content Deployment Wizard:
    How do you deploy your SharePoint solutions?
    http://stackoverflow.com/questions/9543/how-do-you-deploy-your-sharepoint-solutions
    Best Regards,
    Vincent Han
    TechNet Community Support

  • Export and Deployment - Best Practices for RAR and CUP

    Hi Experts,
    I wanted to know what, in your opinion, is the best practice for deploying GRC in a 3-system landscape.
    We have a development landscape which connects to all our environments: Dev, QA and Prod.
    Is it recommended to have the production client connected only to the production boxes and use Dev/QA for the other environments, or is it a good idea to keep Prod and QA in sync?
    In my opinion it looks like a good idea to have QA and PROD the same, as it would make exports easier. Maybe I am wrong.
    What, according to you all, is the recommended practice here?
    Thanks,
    Chinmaya

    Hi Chinmaya,
    It depends on how many clusters you have in your landscape.
    If it is something like 5 DEV boxes connecting to 5 QAS boxes, and so on,
    then the best practice will be to have separate DEV, QAS and PRD boxes for GRC, if money (hardware) is no constraint for the organization,
    rather than later asking SAP for deletion scripts for deleting sandbox or dev connectors.
    It is best to have separate boxes for each.
    Also, whenever you make rule changes in RAR or config changes in CUP in future, it is best to test in QAS first, as CUP will become very critical for your organization post go-live.
    The good part will be that management reports will reflect true data for PRD only.
    regards,
    Surpreet

  • Best Practice for MUD Environment

    Hi Guys,
    I initially thought of using Merge Repository as an alternative to a MUD environment.
    But I found that while merging repositories, you have to accept changes from either the modified or the current repository.
    What if I have two developers working in parallel in a single presentation folder?
    Then I thought a project-based MUD implementation would be the only option, but there each developer has the power to keep or remove the other developer's changes.
    Now I am confused about how I can have multiple users develop a single RPD.
    Please let me know what best practice is used.
    Thanks
    Saurabh

    Below are some explanations. Follow the links if you want more information. Personally, I prefer to set up a MUD environment.
    Software Configuration Management
    By default, the Oracle BI repository development environment is not set up for multiple users. However, online editing makes it possible for multiple developers to work simultaneously, though this may not be an efficient methodology, and can result in conflicts, because developers can potentially overwrite each other's work.
    To develop a repository in a concurrent versioning environment, you have several choices:
    * First, you can send the repository to the developer, keep a copy, retrieve it after modification and perform a [Merge Repository|http://gerardnico.com/wiki/dat/obiee/obiee_repository_merge].
    * Second, you can set up a [multiuser environment (MUD)|http://gerardnico.com/wiki/dat/analytic/obiee/multiuser_environment], which uses the notion of projects to split the work area. It permits developers to modify a repository simultaneously and then check in changes.
    The import option, which permits importing a subset of one repository into another, works but is deprecated.
    Success
    Nico

  • SOA composite application best practice

    Hi All,
    We are running SOA Suite 11g. One of my colleagues said that we should always have a Mediator in our composite applications instead of just exposing the BPEL process as a SOAP service. Is this a correct statement? If so, why is that good practice? I am still trying to grasp the concepts and best practices for SOA, so any information is greatly appreciated.
    Thanks,
    S

    If you place a Mediator in between, you can change the BPEL interface without having to change your composite's SOAP interface.
    That's one reason it could be considered a best practice.

  • SSL: Best Practice in HA environment

    Hi all,
    The customer wants to know whether there is any best-practice document available regarding SSL configuration in a WebLogic Server (10.3.6) HA environment with a hardware load balancer. Additional questions:
    1. Where should the certificate files be located? Which practice is better: shared storage, or a copy on each managed server?
    2. More information regarding the generation of certificates.
    Thanks in advance, Moh

    Hi Moh,
    Did you have a look at this?
    http://www.oracle.com/technetwork/database/availability/maa-fmwsharedstoragebestpractices-402094.pdf
    Search for certs.
    Regards Peter
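
    Regarding question 1: wherever you decide to keep the certificate files, each managed server's keystore locations are just attributes of the server and its SSL configuration, so scripting them keeps an HA setup consistent. A hedged WLST sketch, assuming WebLogic 10.3.6, JKS keystores and placeholder server names, paths and passwords (check the attribute names against your release):

    # ssl_keystores.py - point one managed server at custom identity/trust keystores
    connect('weblogic', 'welcome1', 't3://adminhost:7001')     # placeholders
    edit()
    startEdit()

    cd('/Servers/ManagedServer1')                              # placeholder server name
    cmo.setKeyStores('CustomIdentityAndCustomTrust')
    cmo.setCustomIdentityKeyStoreFileName('/u01/keystores/ms1_identity.jks')
    cmo.setCustomIdentityKeyStoreType('JKS')
    cmo.setCustomIdentityKeyStorePassPhrase('identity-store-pass')
    cmo.setCustomTrustKeyStoreFileName('/u01/keystores/trust.jks')   # trust store is often shared
    cmo.setCustomTrustKeyStoreType('JKS')
    cmo.setCustomTrustKeyStorePassPhrase('trust-store-pass')

    cd('/Servers/ManagedServer1/SSL/ManagedServer1')
    cmo.setEnabled(true)
    cmo.setListenPort(7002)
    cmo.setServerPrivateKeyAlias('ms1_cert')                   # alias used when the key was created
    cmo.setServerPrivateKeyPassPhrase('private-key-pass')

    save()
    activate(block='true')
    disconnect()

    For question 2, certificates are typically generated and imported with the JDK keytool; the MAA document linked above discusses the shared-storage versus per-server trade-offs.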

  • Best Practices: Clustered Author Environment

    Hello,
    We are setting up our CQ 5.5 infrastructure in 3 datacenters, ultimately with an authoring instance in each (a total of three). Our plan was to cluster the three machines using "share nothing" replication, and each would replicate to the publish instances in all data centers. To eliminate confusion within our organization, I'd like to create a single URL for our authors so they wouldn't have to remember to log into 3 separate machines.
    So instead of providing cqd1.acme.com, cqd2.acme.com, cqd3.acme.com, I would distribute something like “cq5.acme.com” which would resolve to one of the three author instances.  While that’s certainly possible by putting a web server/load balancer in front of the three, I’m not so sure that’s even a best practice for supporting internal users.
    I’m wondering what have other multi-datacenter companies done (or what does Adobe recommend) to solve this issue, did you:
    Only give one destination and let the other two serve as backups? (this appears to defeat the purpose of clustering)
    Place a web server/load balancer in front of each machine and distribute traffic that way?
    Do nothing, e.g., provide all 3 author URLs and let the end-user choose the one closest to them geographically?
    Something else???
    It would be nice if there was a master UI an author could use that communicated with the other author machines in a way that's transparent to the end user, so if Auth01 went down, the UI would continue to work with the remaining machines without the end user (author) even knowing the difference (e.g., not having to change machines).
    Any thoughts would be greatly appreciated.

    Day's documentation (for CRX 2.3) states in part, "whenever a write operation is received by a slave instance, it is redirected to the master instance ..."  So, all writes will always go to the master, regardless of which instance you hit.
    Day's documentation also states, "Perhaps surprisingly, clustering can also benefit the author environment because even in the author environment the vast majority of interactions with the repository are reads. In the usual case 97% of repository requests in an author environment are reads, while only 3% are writes."
    This being the case, it seems the latency of hitting a remote author would far outweigh other considerations. If I were you, New2CQ, I would probably have my users hit the instance that's nearest to them (in terms of network latency, etc.) regardless of whether it's a master or a slave.

  • Oracle Identity Manager - automated builds and deployment/Best practice

    Is there a best practice for the directory structure of the repository in a version control system?
    Do you recommend keeping the whole xellerate folder plus a separate structure for XML files and Java code (considering the fact that multiple upgrades can occur over time)?
    How is custom code merged into the main application?
    How does deployment to the WebLogic application server occur? (Do you create your own script, or is there an out-of-the-box script that can be reused?)
    I would appreciate any guidance regarding this matter.
    Thank you for your help.

    Hi,
    You can use any IDE (Eclipse, NetBeans) for development.
    To get started with the OIM APIs using Eclipse, please follow these steps:
    1. Creating the working folder structure
    2. Adding the jar/configuration files needed
    3. Creating a java project in Eclipse
    4. Writing a sample java class that will call the API's
    5. Debugging the code with Eclipse debugger
    6. API Reference
    1. Creating the working folder structure
    The following structure must be created in the home directory of your project (Separate project home for each project):
    <PROJECT_HOME>
    \ bin
    \ config
    \ ext
    \ lib
    \ log
    \ src
    The folders will store:
    src - source code of your project
    bin - compiled code of your project
    config - configuration files for the API and any of your custom configuration files
    ext - external libraries (third-party)
    lib - OIM API libraries
    log - local logging folder
    2. Adding the jar/configuration files needed
    The easiest way to perform this task is to copy all the files from the OIM Design Console folders into the corresponding <PROJECT_HOME> folders.
    That is:
    <XEL_DESIGN_CONSOLE_HOME>/config -> <PROJECT_HOME>/config
    <XEL_DESIGN_CONSOLE_HOME>/ext -> <PROJECT_HOME>/ext
    <XEL_DESIGN_CONSOLE_HOME>/lib -> <PROJECT_HOME>/lib
    3. Creating a java project in Eclipse
    + Start Eclipse platform
    + Select File->New->Project from the menu on top
    + Select Java Project and click Next
    + Type in a project name (For example OIM_API_TEST)
    + In the Contents panel select "Create project from existing source",
    click Browse and select your <PROJECT_HOME> folder
    + Click Finish to exit the wizard
    At this point the project is created and you should be able to browse through it in Package Explorer.
    Setting src in the build path:
    + In Package Explorer right click on project name and select Properties
    + Select Java Build Path in the left and Source tab in the right
    + Click Add Folder and select your src folder
    + Click OK
    4. Writing a sample Java class that will call the API's
    + In Package Explorer, right click on src and select New->Class.
    + Type the name of the class as FirstAPITest
    + Click Finish
    Put the following sample code in the class:
    import java.util.Hashtable;
    import com.thortech.xl.util.config.ConfigurationClient;
    import Thor.API.tcResultSet;
    import Thor.API.tcUtilityFactory;
    import Thor.API.Operations.tcUserOperationsIntf;

    public class FirstAPITest {
        public static void main(String[] args) {
            try {
                System.out.println("Startup...");
                System.out.println("Getting configuration...");
                ConfigurationClient.ComplexSetting config =
                    ConfigurationClient.getComplexSettingByPath("Discovery.CoreServer");
                System.out.println("Login...");
                Hashtable env = config.getAllSettings();
                tcUtilityFactory ioUtilityFactory =
                    new tcUtilityFactory(env, "xelsysadm", "welcome1");
                System.out.println("Getting utility interfaces...");
                tcUserOperationsIntf moUserUtility = (tcUserOperationsIntf)
                    ioUtilityFactory.getUtility("Thor.API.Operations.tcUserOperationsIntf");
                // Find all users whose first name is "System" and print their keys.
                Hashtable mhSearchCriteria = new Hashtable();
                mhSearchCriteria.put("Users.First Name", "System");
                tcResultSet moResultSet = moUserUtility.findUsers(mhSearchCriteria);
                for (int i = 0; i < moResultSet.getRowCount(); i++) {
                    moResultSet.goToRow(i);
                    System.out.println(moResultSet.getStringValue("Users.Key"));
                }
                System.out.println("Done");
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
    Replace the "welcome1" with your own password.
    + save the class
    To run the example class perform the following steps:
    + In the Run menu at the top, open the "Create, Manage, and run Configurations" wizard. (In the menu this can be either "Run..." or "Open Run Dialog...", depending on the version of Eclipse used.)
    + Right click on Java Application and select New
    + Click on arguments tab
    + Paste the following in VM arguments box:
    -Djava.security.manager -DXL.HomeDir=.
    -Djava.security.policy=config\xl.policy
    -Djava.security.auth.login.config=config\authwl.conf
    -DXL.ClientClassName=%CLIENT_CLASS%
    (Please change the URL in ./config/xlconfig.xml to point to your application server if it is not running on localhost or is not using the default port.)
    + Click Apply
    + Click Run
    At this point your class is executed. If everything is correct, you will see the following output in the Eclipse console:
    Startup...
    Getting configuration...
    Login...
    log4j:WARN No appenders could be found for logger (com.opensymphony.oscache.base.Config).
    log4j:WARN Please initialize the log4j system properly.
    Getting utility interfaces...
    1
    Done
    Regards,
    Sunny Ajmera
