Best Practice for deploying in Production Cluster

          I have the following:
          2 physical machines, each running 2 JVMs, so I have 4 JVMs in my cluster.
          We access the JVMs via an IIS plug-in.
          When it comes time to migrate a new .war file, do you need to stop the JVMs
          first?
          I have tried deploying with the JVMs live. It technically worked, but we then
          noticed several 404 errors during the day on a servlet that was already there
          (it was called successfully around the time of the 404s).
          Anyway, I'm just looking for recommendations on how others deploy to production.
          Tim
          

          Tim wrote:
          > I have the following:
          >
          > 2 physical machines, each running 2 JVMs, so I have 4 JVMs in my cluster.
          >
          > We access the JVMs via an IIS plug-in.
          >
          > When it comes time to migrate a new .war file, do you need to stop the JVMs
          > first?
          You should be able to redeploy a web application without any
          problems. When you get these 404 errors, do you see any stack trace
          in the server window?
          Kumar
          >
          >
          > I have tried deploying with the JVMs live. It technically worked, but we then
          > noticed several 404 errors during the day on a servlet that was already there
          > (it was called successfully around the time of the 404s).
          >
          > Anyway, I'm just looking for recommendations on how others deploy to production.
          >
          > Tim
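One approach that avoids the mid-deployment 404s is a rolling deployment: take one JVM at a time out of the IIS plug-in's pool, redeploy the WAR there, verify it, and put it back before moving on. Below is a minimal Python sketch of that loop; the host/JVM names, the plugin_ctl.sh and jvm_ctl.sh helper scripts, the deploy path, and the health-check URL are all placeholders for whatever your plug-in and application server actually provide.

```python
import subprocess
import time

# Hypothetical inventory: 2 physical machines x 2 JVMs each (placeholders).
JVMS = [
    ("machine1", "jvm1"), ("machine1", "jvm2"),
    ("machine2", "jvm1"), ("machine2", "jvm2"),
]

WAR = "/builds/myapp.war"  # path to the newly built .war (placeholder)

def run(cmd):
    """Run a command and raise if it returns a non-zero exit code."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

for host, jvm in JVMS:
    # 1. Take this JVM out of the IIS plug-in's pool (placeholder script).
    run(["./plugin_ctl.sh", "disable", host, jvm])
    time.sleep(30)  # give in-flight requests time to drain

    # 2. Stop the JVM, push the new WAR, and start it again (placeholder scripts).
    run(["./jvm_ctl.sh", "stop", host, jvm])
    run(["scp", WAR, host + ":/apps/" + jvm + "/deploy/myapp.war"])
    run(["./jvm_ctl.sh", "start", host, jvm])

    # 3. Smoke-test the instance before putting it back in rotation (placeholder URL).
    run(["curl", "-f", "http://" + host + ":8080/myapp/healthcheck"])
    run(["./plugin_ctl.sh", "enable", host, jvm])
```

Because only one JVM is out of the pool at any moment, the other three keep serving requests through the plug-in, and users never hit a half-deployed instance.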
          

Similar Messages

  • What is best practice for deploying agent(10204) on RAC 9i

    Hello,
    What would be the best practice for deploying the agent (10204) on RAC 9i? Should the agent be deployed on each node, or should it be deployed on the cluster file system? What are the advantages/disadvantages of deploying on individual nodes vs. on the cluster file system? Please advise. Thank you in advance.

    Please use the Agent Push application to deploy the agent on all the nodes in one shot.
    Please refer to the OBE:
    http://www.oracle.com/technology/obe/obe10gemgc_10203/agentpush/agentpush.htm

  • Best practices for deploying EMGrid Control

    Can I use one DB for both the OEM and RMAN repositories? I'm looking for best practices for deploying EM Grid Control in our environment. In my experience EM Grid Control was very slow; how can I make it fast? I'd like it to be as responsive as EM DB Control...

    DBA2008 wrote:
    Is it a good idea to put the RMAN recovery catalog & OID schema in the OEM repository DB? I am thinking of consolidating all these schemas in one db.
    Unless you are really starved for resources, I would not recommend storing the OID and OEM repositories in the same database. Both of these repositories support different products, and you risk creating unnecessary dependencies when patching or upgrading. As a completely fictitious example, what if your OID installation has a critical issue that requires a repository database upgrade to version 10.2.0.6, and the Grid Control repository database is only certified for version 10.2.0.5?
    Regards,
    John P.
    http://only4left.jpiwowar.com

  • Best practices for deployment from Dev /Staging /Production in SharePoint ?

    Hi All,
    What are the best practices for deploying a SharePoint portal across dev / staging / production?
    I have a custom solution deployed using a WSP file, but I have made some changes using SharePoint Designer,
    such as Designer workflows, master pages, etc.
    How can I deploy my document libraries and lists from dev to prod following best practices?
    Thanks
    Balaji More

    Hi,
    According to your post, my understanding is that you want to know the best practices for deploying a SharePoint portal across different SharePoint environments.
    If the site does not exist on the production server, we can save the site from the development server and then import it to the production server.
    But if the site already exists on the production server, we should follow these steps to add just the taxonomy and content types to production:
    1. Save the site from Dev as a template.
    2. Import the template as a solution in Visual Studio.
    3. Remove unnecessary items from the solution (pay close attention here: if a content type/list in the solution also exists in the production site, it will replace the existing object in production after deployment).
    4. Package the solution.
    5. Deploy the solution in production.
    For more details, please see:
    http://ahmedmadany.wordpress.com/2012/12/30/importing-sharepoint-solution-package-wsp-into-visual-studio-2010/
    There is a similar thread for your reference.
    http://social.technet.microsoft.com/Forums/en-US/7dcf61a8-1af2-4f83-a04c-ff6c439e8268/best-practices-guide-for-deploying-sharepoint-2010-from-dev-to-test-to-production?forum=sharepointgeneralprevious
    Thanks & Regards,
    Jason
    Jason Guo
    TechNet Community Support

  • Best practices for deploying forms in a 'cluster'?

    Anyone know of any public docs that discuss typical best practices for
    - forms deployment;
    - forms apps management and version control; and/or
    - deploying (and keeping) the .frm/frx in sync when using multiple forms servers in an HA or load-balancing environment?

    Hi adil,                      
    Based on your description, you want to know the best practices for the search service in a SharePoint farm.
    Different farms have different search topologies; for the best search performance, I recommend that you follow the guidance for small, medium, and large farms.
    The articles linked below provide that guidance for the different farm sizes.
    The search service can run alongside other services on the same server, but if conditions permit and you want better performance for the search service and for other services (including BI), you can deploy the search service on a dedicated server.
    If conditions permit, I also recommend combining a query component with a front-end web server, to avoid putting crawl components and query components on the same server.
    In your SharePoint farm, you can deploy the query components on a WFE server and the crawl components on an application server.
    The articles below describe the best practices for enterprise search.
    https://technet.microsoft.com/en-us/library/cc850696(v=office.14).aspx
    https://technet.microsoft.com/en-us/library/cc560988(v=office.14).aspx
    Best regards      
    Sara Fan
    TechNet Community Support

  • Best Practice for Deploying ADF application

    I am tasked with developing a best or preferred practice for deploying a large ADF application. Background: we are in the process of redeveloping a UI for a large system. We have broken the system down into subsystems, and each subsystem's UI will be an ADF application(?). This is a move from a MS .NET front end. The backend (batch processes etc.) is being developed in Java. So my question is: if I have several ADF projects, one for each subsystem, plus common components that they all use, what is the best practice to compile, package, and deploy? The deployment will be to a WebLogic server or servers (cluster).
    We have a team of at least 40-50 developers worldwide, so we are looking for an automated build and deploy and would like to follow Oracle best practice. So far I have read Deploying ADF Applications (http://download.oracle.com/docs/cd/E15523_01/web.1111/e15470/deploy.htm#BGBJHGFH) and have followed the links. I have also looked at the ADF evangelist blogs - lots of chatter about ojdeploy. My concern about ojdeploy is that dependent files are also being compiled at the same time; I expected that we would want shared dependent files compiled only once (is that a valid concern?).
    So when we build the source out of Subversion (ojdeploy? Ant?), what is the best practice for deploying to a WebLogic server (WLST? the admin console?) - again, we want it to be automated.
    Thank you in advance for replies.
    RK

    Rule 1: Never use the "Automatically Expose UI Components in a New Managed Bean" option; create your bindings manually.
    Rule 2: Rule 1 is always right.
    Rule 3: When in doubt, refer to rule 2.
    You may also want to check out:
    http://groups.google.com/group/adf-methodology
    And :
    http://www.oracle.com/technology/products/jdev/collateral/4gl/papers/Introduction_Best_Practices.pdf
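    Since the question mentions automating deployment to WebLogic (Ant/ojdeploy plus WLST), here is a minimal WLST sketch of the final deploy step that a build could invoke after producing the EAR. It is only an illustration: the admin URL, credentials, application name, EAR path, and cluster target below are placeholders, not values from this thread.

    ```python
    # deploy_app.py -- a WLST (Jython) script, run with: java weblogic.WLST deploy_app.py
    # All names below are placeholders; substitute your own domain details.

    ADMIN_URL = 't3://adminhost:7001'     # admin server (placeholder)
    APP_NAME  = 'SubsystemUI'             # deployment name (placeholder)
    EAR_PATH  = '/builds/SubsystemUI.ear' # artifact produced by the build (placeholder)
    TARGETS   = 'MyCluster'               # cluster or managed-server target (placeholder)

    # Connect to the admin server.
    connect('weblogic', 'welcome1', ADMIN_URL)

    # Deploy (or use redeploy() for an already-deployed application) and report the result.
    progress = deploy(APP_NAME, EAR_PATH, targets=TARGETS, upload='true')
    progress.printStatus()

    disconnect()
    exit()
    ```

    Wrapped in an Ant exec task or a CI job, a script like this gives the fully automated, repeatable deploy step the poster is asking about.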

  • Best practices for promotions to production

    My company's production environment is way too loose; I need to implement some controls. My analysts keep fouling up the production objects. Does anyone know of best practices for an organization rolling out production changes?
    thanks

    Yes you can. With SOA 11g, you can create deployment profiles to change properties during deployment. You can also build your own deployment mechanism, as I did.
    http://orasoa.blogspot.com/2009/04/new-oracle-soa-build-server-osbs.html
    Marc

  • Best practice for deploying the license server

    Was wondering if there is a best-practice guideline or a rule of thumb out there for deploying the license server. For instance, is it better to have one license server that all your products (dev, QA, prod) connect to, or is it better to have a license server for each deployment, i.e. one for dev, one for QA, etc.?


  • Best practice for test to production

    I actually only have one server for test and production, but the dev processes all point to development databases and the production processes will point to production databases.
    The only real change is to make the JMS queue point to prod vs. test. There doesn't seem to be an easy way to copy a complete process and change the name; that would work best for me.
    Any ideas?
    Edited by: ss396s on Nov 19, 2009 9:21 AM

    Yes you can. With SOA 11g, you can create deployment profiles to change properties during deployment. You can also build your own deployment mechanism, as I did.
    http://orasoa.blogspot.com/2009/04/new-oracle-soa-build-server-osbs.html
    Marc
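    As a purely illustrative sketch of the "build your own deployment mechanism" idea, the script below substitutes environment-specific values (such as the JMS queue endpoints mentioned above) into a descriptor template before deployment. The template file, property file names, and property keys are all hypothetical, not part of any SOA product API.

    ```python
    #!/usr/bin/env python3
    """Render an environment-specific descriptor from a template.

    Hypothetical layout:
        composite.template.xml   descriptor with ${PLACEHOLDER} tokens
        env/test.properties      key=value pairs for the test environment
        env/prod.properties      key=value pairs for production
    """
    import sys
    from string import Template

    def load_properties(path):
        """Read simple key=value lines, ignoring blanks and # comments."""
        props = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#"):
                    key, _, value = line.partition("=")
                    props[key.strip()] = value.strip()
        return props

    def render(template_path, props, out_path):
        """Substitute ${KEY} tokens in the template and write the result."""
        with open(template_path) as f:
            text = Template(f.read()).substitute(props)
        with open(out_path, "w") as f:
            f.write(text)

    if __name__ == "__main__":
        env = sys.argv[1] if len(sys.argv) > 1 else "test"  # e.g. test or prod
        props = load_properties("env/%s.properties" % env)   # e.g. JMS_QUEUE_JNDI=jms/prod/OrderQueue
        render("composite.template.xml", props, "composite.xml")
        print("Rendered composite.xml for environment:", env)
    ```

    The same process definition is then deployed everywhere; only the small per-environment property file differs, which is essentially what the SOA deployment profiles mentioned above automate for you.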

  • Best Practices for deployment of Oracle 10g database.

    Hello,
    Is anyone aware of a whitepaper/document that talks about best practices for deploying a database on Oracle 10g and configuring the database to utilize all the features available in 10g (e.g. ADDM, reports setup, etc.)?
    Thanking you in Advance.
    Cheers..rCube

    Appreciate the input Jaffer. Thanks.
    However, I was referring to a best-practices whitepaper like the one that exists for Data Guard & MAA, available at the following URL: http://www.oracle.com/technology/deploy/availability/htdocs/maa.htm
    Is there something available along the same lines ?
    Cheers..rCube

  • Best practices for a development/production scenario with ORACLE PORTAL 10G

    Hi all,
    we'd like to know what the best approach is for maintaining a dual development/production portal scenario. Especially important is the process of moving from dev to prod and what it implies in terms of portal availability in the production environment.
    I suppose the best policy to achieve this is to have two portal instances and move content via transport sets. Am I right? Is there any specific documentation about dev/prod scenarios? Can anybody help with some experiences? We are a little afraid regarding transport sets, as we have heard some horror stories about them...
    Thanks in advance and have a nice day.

    It would be OK for a pair of pages and a template.
    I meant that transport sets failed for moving an entire page group (about 100 pages, 1 GB of documents).
    But if your need only involves a few pages, I would therefore develop directly on the production system: make a copy of the page, work on it, then change links.
    Regards

  • Best practices for deploying common object services

    Hi,
    Our team has broken out around 10 services from our main application that are largely used to return objects from 10 common tables in the database. We think these services should be reusable among the 5 or so applications we are going to have in the near future. We're now trying to decide the best way to make these common services available to the applications, and after considering several ideas, these are the options we've come up with:
    1. Putting jars for all of the services in each application and adding entries to the sessions.xml for any TopLink project mappings that are in the jar files. We are also considering having just one jar containing all the services.
    2. Exposing the services through web services and only giving the client apps the client-side code to invoke the web service. We realize this may mean a performance hit, but it would mean less code on the client.
    3. Stateless session EJBs.
    4. The parent-application tag or some other way to make these jars available to all applications on the app server through classloading.
    5. Some sort of messaging service
    Would appreciate some input on this, as this seems like it would be a fairly common problem.
    Thanks,
    Mark


  • Best Practices for Deployment

    We have been developing apps and are now looking at how best to deploy. Because we are developing departmental apps, the thought is to create a workspace/schema per department. To access corporate data, we are looking at a departmental user with access views that hide which of the environments they are looking at.
    I'm sure this is a topic that has crossed or is crossing many people's minds, and I'd like to hear how other companies are approaching it.
    Success and/or Failure stories are greatly appreciated!

    Gerald,
    I believe your suggestion of a workspace + schema per department is a good one. By access views I assume you mean a public synonym?
    Anyone out there wish to share experiences?
    Sergio

  • Best practices for deploying an IPS ?

    Hi all
    I'm thinking of putting an IPS on my network. My question is what the approach to this should be; my thinking was to run it in monitor mode for a few weeks to get a baseline etc., then switch on inline mode.
    I hear there are different types of protection, signature-based, anomaly, etc.; can you change this on the device?
    What kind of protection do most people run? Would it be the default?
    cheers
    Carl

    How do I know which signatures are 100% malicious?
    Usually, by default, when you first install an IPS (Cisco or not), all signatures with a deny/drop kind of action are targeted at really malicious traffic which shouldn't appear on your network. I would say you can just plug the IPS into your network in inline mode and it won't block any legitimate traffic (from my own experience). Plus, in Cisco IPS you can manage behaviour globally by tuning Event Action Overrides and Event Action Filters depending on Risk Rating values. But you should be ready to disable or change the event action of a certain signature if it blocks something that it shouldn't.
    And when you say tune them, what do you mean?
    I mean that you should analyze logs and take certain actions, i.e. disabling or enabling certain signatures, changing the actions that certain signatures take, changing anomaly detection policies if you use them, etc. For example, you see that some signature triggers tons of logs every day, but you know that there's nothing special about it, it's all legitimate, so you just disable that signature. Or you see that some log indicates something that shouldn't appear on your network, but the IPS doesn't block it because it is not sure what to do with it. In that case you should change the action of that signature from log to some kind of deny/drop. And many other things.
    Also, should I enable anomaly detection?
    First you should understand how it works, and then you'll know whether you should.

  • Export and Deployment - Best Practices for RAR and CUP

    Hi Experts,
    I wanted to know what, in your opinion, is the best practice for deploying GRC in a 3-system landscape.
    We have a development landscape which connects to all our environments - Dev, QA, Prod.
    Is it recommended to have just the production client connected to the production boxes only and use Dev/QA for the other environments, or is it a good idea to have Prod and QA in sync?
    In my opinion it looks like a good idea to have QA and PROD the same, as it would make export easier... Maybe I am wrong...
    What, according to you all, is a good recommended practice here?
    Thanks,
    Chinmaya

    Hi Chinmaya,
    It depends on how many clusters you have in your landscape.
    If it is something like 5 DEV boxes connecting to 5 QAS boxes, and so on,
    then the best practice will be to have separate DEV - QAS - PRD boxes for GRC, if money (hardware) is no constraint for the organization.
    Rather than later asking SAP for deletion scripts to delete sandbox or dev connectors,
    it is best to have separate boxes for each.
    Also, in the future, whenever you make rule changes in RAR and config changes in CUP, it is best to test in QAS first, as CUP will become very critical for your organization post go-live.
    And the good part is that the management report will reflect true data for PRD only.
    regards,
    Surpreet
