Environment best practices

In my experience with other integration tools, it has been a best practice to deploy an instance of the product per environment. For example, I would expect to have Oracle SOA Suite deployed for a Development environment, an Int Test environment, a System Test environment, and Production. In each of those environments there also exists a version of the corporate applications in various states of readiness.
The question has been raised at this client: why do we need to do that? Could we not just stand up a single ESB instance for all non-prod environments and a single ESB instance for prod?
What is the experience of others on this forum around this topic - best practices, pros/cons, things to watch out for, etc.?

That's a bad idea. You should have at least three environments.
Developers need a space to deploy and test code, and you need a Test environment that is ideally a COPY of production to smoke out any weirdness in the code. The Test environment should resemble Prod in all respects, i.e. hardware, memory, software versions, etc.

Similar Messages

  • Create Test Environment - Best Practice

    Hi all,
    Does anyone know the best practice for creating a test environment from our company DB for testing purposes?
    OUR SCENARIO:
    SAP B1 9.1 PL 6
    MSSQL 2008 R2
    Integration service activated
    WINDOWS 2008 R2 Ent Edition for client machine and server machine
    Thanks in advance
    --LUCA

    Hi Manish,
    I would like to have a copy of our company DB for testing purposes.
    The Integration Service is not important.
    We would like to test the 9.1 PL 06 BoM new features without modifying the production environment.
    Thanks
    --LUCA
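
    For reference, a minimal sketch of the usual SQL Server approach: back up the production company DB and restore it under a test name. The database name, paths, and logical file names below are assumptions; verify the logical names with RESTORE FILELISTONLY first.

        -- Hypothetical names and paths: adjust to your company DB.
        BACKUP DATABASE [SBO_MyCompany]
          TO DISK = N'D:\Backup\SBO_MyCompany.bak';

        -- Inspect the logical file names contained in the backup.
        RESTORE FILELISTONLY FROM DISK = N'D:\Backup\SBO_MyCompany.bak';

        -- Restore as a separate test database alongside production.
        RESTORE DATABASE [SBO_MyCompany_TEST]
          FROM DISK = N'D:\Backup\SBO_MyCompany.bak'
          WITH MOVE N'SBO_MyCompany'     TO N'D:\Data\SBO_MyCompany_TEST.mdf',
               MOVE N'SBO_MyCompany_log' TO N'D:\Data\SBO_MyCompany_TEST_log.ldf';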

  • Network users in mixed desktop and portable environment: Best Practices

    Hello,
    When using Server 4.0 on a network that includes both Mac desktops and MacBooks, what is the recommended setup for network users and their Home directories?
    The environment I'm building would best be suited to a "Local Only" Home folder setup, due to the frequent use of large files, such as iPhoto libraries, by many users. However, how does that work with the portables? When on the network, the user has all files accessible locally, but once off site, the network user account is not available. I know that a user may be converted into a Mobile Account so that it is available both on and off the network. However, my understanding is that this is more suited to syncing the Home folder, i.e. when the Home folder is on a network share. Will a Mobile Account also work with a "Local Only" Home folder setup?
    Essentially, I'm using the server to manage network users, not to serve files for the Home directories.
    Thank you.

    Bump?

  • Best Practice Question on Heartbeat Network

    After running 3.0.3 for a few weeks in production, we are wondering if we set up our heartbeat/servers correctly.
    We have 2 servers in our Production Server pool. Our LAN, a 192.168.x.x network, has the Virtual IP of the Cluster (heartbeat), the 2 main IP addresses of the servers, and a NIC assigned to each guest. All of this has been configured on the same network. Over the weekend, I wanted to separate the Heartbeat onto a new network, but when trying to add to the pool I received:
    Cannot add server: ovsx.mydomain.com, to pool: mypool. Server Mgt IP address: 192.168.x.x, is not on same subnet as pool VIP: 192.168.y.y
    Currently, I only have one router, which translates our WAN to our LAN of 192.168.x.x. I thought the heartbeat would be strictly internal and would not need to be routed anywhere, just set up as a separate VLAN, and this is why I created 192.168.y.y. I know that the servers can have multiple IP addresses, and I have 3 networks added to my OVM servers: 192.168.x.x, 192.168.y.y and 192.168.z.z. The y and z networks are not pingable from anything but the servers themselves or one of the guests to which I have assigned that network. I cannot ping them directly from our office network, even through the VPN, which only gives us access to 192.168.x.x.
    I guess I could change my Server Mgt IP away from 192.168.x.x to 192.168.y.y, but can I do that without reinstalling the VM server? How have others structured their networks, especially relating to the heartbeat?
    Is there any documentation/guide that describes how to set up the networks properly relating to the heartbeat?
    Thanks for any help!!

    Hello user,
    In order to change your environment, go to the Hardware tab -> Network. There you can create new networks and also, via the Edit this Network pencil icon, change which networks carry which roles (i.e. Virtual Machine, Cluster Heartbeat, etc.).
    In my past experience, I've had issues changing the cluster heartbeat once it has been set. If you have trouble changing it via the OVM Manager, one thing you could do is change it manually in the /etc/ocfs2/cluster.conf file; that is where it is set. And if the OVM Manager does let you change it, verify the change in cluster.conf to ensure it actually took. Doing it manually can be tricky, though, because OVM has a tendency to revert its changes back to the original state, say after a reboot, and I'm not even sure manual edits of that file are supported.
    Ideally, when setting up an OVM environment, best practice is to separate your networks as much as possible, i.e. public network, private network, management network, cluster heartbeat network, and a live migration network if you do a lot of live migrating (otherwise you can probably place live migration with, say, the management network).
    Hope that helps,
    Roger
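
    For reference, the file Roger mentions uses a simple stanza format. A minimal sketch for a two-node pool with the heartbeat on the 192.168.y.y network (node names, IP addresses and the cluster name are all hypothetical):

        node:
                ip_port = 7777
                ip_address = 192.168.y.2
                number = 0
                name = ovs1.mydomain.com
                cluster = mypool

        node:
                ip_port = 7777
                ip_address = 192.168.y.3
                number = 1
                name = ovs2.mydomain.com
                cluster = mypool

        cluster:
                node_count = 2
                name = mypool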

  • Best Practice for FlexConnect Wireless roaming in MediaNet environment?

    Hello!
    Current Cisco best-practice recommendations for enterprise MediaNet design specify that VLANs be local to a switch/switch stack (i.e., to limit the scope of spanning tree).
    In the wireless world, this causes problems if you want roaming users to keep real-time applications up and running. Every time they connect to a new AP on a different VLAN, they need to get a new IP address, which interrupts real-time apps.
    So...best practice for LAN users causes real problems for wireless users.
    I thought I'd post here in case there's a best practice for implementing wireless roaming in a routed environment that we might have missed so far!
    We have a failover pair of FlexConnect 7510s, btw, configured for local switching for Internal users, and central switching with an anchor controller on the DMZ for Guest users.
    Thanks,
    Deb

    Thanks for your replies, Stephen and JSnyder.
    The situation here is that the original design engineer is no longer here, and the original design was not MediaNet-friendly, in that it had a few large /20 subnets bridged over entire large sites.
    These several large sites (with a few hundred wireless users per site) are connected to an HQ location (where the 7510s in failover mode are installed) via 1G Ethernet hand-offs (MPLS at the WAN provider). The 7510s are new, and are replacing older controllers at the HQ location.
    The internal employee wireless users use resources both local to their site, as well as centralized resources.  There are at least as many Guest wireless users per site as there are internal employee users, and the service to them consists of Internet traffic only.  (When moved to the 7510s, their traffic will continue to be centrally switched and carried to an anchor controller in the DMZ.) 
    (1) So, going local mode seems impractical due to the sheer number of users whose traffic bound for their local site would be traversing the WAN twice.  Too much bandwidth would be used.  So, that implies the need to use Flex / HREAP mode instead.
    (2) However, re-designing each site's IP environment for MediaNet would suggest going routed to the closet, which breaks seamless roaming for users....
    So, this conundrum is why I thought I'd post here, and see if there was some other cool / nifty solution I wasn't yet aware of. 
    The only other solution I'd thought of that might be friendly to both needs was to GRE-tunnel a subnet from each closet to the collapsed Core/Disti switch at each site. Unfortunately, GRE tunnels are not supported in the rev of IOS on the present equipment, so it isn't possible to try this idea.
    Another "blue sky" idea I had (not for this customer, but possibly elsewhere in the future), is to use LAN switches such as 3850s that have WLC functionality built-in.  I haven't yet worked with the WLC s/w available on those, but I was thinking it looks like they could be put into a mobility group, and L3 user roaming between them might then work.  Do you happen to know if this might be a workable solution to the overall big-picture problem? 
    Thanks again for taking the time and trouble to reply!
    Deb

  • Best Practice to configure tnsnames.ora on client of MAA environment in 10g

    Hi,
    I have an MAA environment: one 2-node RAC primary and one 2-node RAC standby. I want to configure tnsnames.ora on the clients (we have many clients on each PC).
    I have read some papers, but the information is not clear on how to configure tnsnames on the clients. I just want a single tnsnames entry that clients can use to connect to the RAC primary and/or standby, depending on which site is primary at that moment. I was thinking of something like this:
    SALES.WORLD =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = site1a)(PORT = 1521))
        (ADDRESS = (PROTOCOL = TCP)(HOST = site1b)(PORT = 1521))
        (ADDRESS = (PROTOCOL = TCP)(HOST = site2a)(PORT = 1521))
        (ADDRESS = (PROTOCOL = TCP)(HOST = site2b)(PORT = 1521))
        (LOAD_BALANCE = yes)
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = sales.world)
          (FAILOVER_MODE =
            (TYPE = SELECT)
            (METHOD = BASIC)
            (RETRIES = 180)
            (DELAY = 5)
          )
        )
      )
    Here site1 is the RAC primary now, and site2 is the standby. In case of switchover, I want clients to recognize automatically which site to connect to.
    I was thinking of stopping the listeners on whichever site is not primary at the moment.
    My question: is this the best practice for this scenario?
    Thanks for your advice.
    Luis

    Hi Luis,
    you need to set up a service with clusterware, on both sides: primary and standby.
    On the primary you start it. On the standby the service is also configured but stopped.
    In case of switchover or failover, Data Guard will notify clusterware to bring it up.
    You then use this service name in your clients' tnsnames.ora.
    Another issue is TCP timeouts; to protect against them, set
    SQLNET.OUTBOUND_CONNECT_TIMEOUT in your sqlnet.ora.
    Also have a look at
    http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10gR2_SwitchoverFailoverBestPractices.pdf
    HTH Mathias
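
    Putting that together, the client entry points at the clusterware-managed service rather than the database's default service. A sketch, assuming a hypothetical role-managed service named SALES_RW that clusterware runs only on whichever site is currently primary:

        SALES_RW.WORLD =
          (DESCRIPTION =
            (ADDRESS_LIST =
              (LOAD_BALANCE = yes)
              (ADDRESS = (PROTOCOL = TCP)(HOST = site1a)(PORT = 1521))
              (ADDRESS = (PROTOCOL = TCP)(HOST = site1b)(PORT = 1521))
              (ADDRESS = (PROTOCOL = TCP)(HOST = site2a)(PORT = 1521))
              (ADDRESS = (PROTOCOL = TCP)(HOST = site2b)(PORT = 1521))
            )
            (CONNECT_DATA =
              (SERVER = DEDICATED)
              (SERVICE_NAME = sales_rw.world)
              (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 180)(DELAY = 5))
            )
          )

    Because the standby never publishes the service, clients naturally land on the primary without anyone stopping listeners, and a small SQLNET.OUTBOUND_CONNECT_TIMEOUT (a few seconds) lets dead addresses be skipped quickly.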

  • Best Practice to generate UUIDs in a Cluster-Server Environment

    Hi all,
    I just need some input on the best practices for generating UUIDs in a typical internet-scale setup where multiple servers/JVMs are involved for load balancing, traffic distribution, etc. I know Java ships with a very efficient UUID generator API,
    but that alone doesn't obviously solve the issue in a multiple-server environment.
    For the sake of discussion, let's assume I need IDs that are unique across the whole setup, not just nearly unique.
    How do you guys approach it?
    Thanks you all in advance.

    codeNombre wrote:
    So adding to the theory of "distinguishing all possible servers" in addition to a UUID on each server would be the way to go.
    jverd replied:
    If you're unreasonably paranoid, sure.
    codeNombre wrote:
    I think it's a common problem, and there are a large number of folks who might still be bugged about the "relative uniqueness" of UUIDs in the long run.
    jverd replied:
    People who don't understand probability and scale, sure.
    codeNombre wrote:
    Coming back to my original problem in an "internet world": shouldn't a requirement like unique IDs between different servers be dealt with by generating the UUIDs at a layer before entering the multi-server setup? Where would that be? I don't have the answer.
    jverd replied:
    Again, that is the POINT of the UUID class: you can generate as many IDs as you want and still be confident that nobody anywhere in the world has ever generated any of those same IDs. However, if your requirements say UUID is not good enough, then you need to define what is, and that means having a lot of foresight as to how this system will evolve and how long it will live, AND having total control over some aspect of your servers, AND having a process that is so good that it's LESS LIKELY for a human to screw up and re-use a "unique" server ID than the probabilities I presented in my previous post.
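
    To make the thread's conclusion concrete, here is a minimal Java sketch of the standard approach: use java.util.UUID as-is, with an optional node prefix for policies that demand provable per-node uniqueness. The nodeId source is hypothetical (e.g. a per-server deployment property).

        import java.util.UUID;

        public class IdFactory {
            // Hypothetical per-server identifier, e.g. read from deployment config.
            private final String nodeId;

            public IdFactory(String nodeId) {
                this.nodeId = nodeId;
            }

            // UUID.randomUUID() (version 4) uses a cryptographically strong RNG
            // per JVM, so cross-server collisions are already negligible.
            public String nextId() {
                return UUID.randomUUID().toString();
            }

            // If policy demands provable uniqueness per node, qualify with the node id.
            public String nextNodeQualifiedId() {
                return nodeId + "-" + UUID.randomUUID();
            }
        }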

  • Best practice for a deployment (EAR containing WAR/EJB) in a productive environment

    Hi there,
    I'm looking for some hints regarding best-practice deployment in a productive
    environment (currently we are not using a WLS cluster).
    We are using ANT for building, packaging and (dynamic) deployment (via weblogic.Deployer)
    in the development environment, and this works fine (in the meantime).
    From my point of view, I would prefer this kind of deployment not only
    for development, but also for the productive system.
    But I found hints in some books whose authors prefer static deployment
    for the p-system.
    My questions:
    Could anybody provide me with links to whitepapers regarding best practices
    for deployment into a p-system?
    What is your experience with the new two-phase deployment coming with WLS 7.0?
    Is it really a good idea to use static deployment (and what is the advantage of
    that kind of deployment)?
    Thanks in advance,
    -Martin
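
    For reference, the dynamic-deployment route Martin describes is typically a weblogic.Deployer invocation along these lines (flags vary a little by WLS release; the admin URL, credentials and application names are hypothetical):

        java weblogic.Deployer -adminurl t3://prod-admin:7001 \
             -user weblogic -password mypassword \
             -deploy -name MyApp dist/MyApp.ear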

  • Best Practice for Managing a BPC Environment?

    My company is currently running a BPC 5.1 MS environment and will soon be upgrading to version 7.0 MS. I was wondering if there is a white paper or some guidance that anyone can give with regard to the best practice for managing a BPC environment. This brings several questions to mind:
    1. Which department(s) in a company should "own" the BPC application?
    2. If both, what's SAP's recommendation for segregation of duties?
    3. What roles should exist within our company to manage BPC?
    4. What type(s) of change control is SAP's "Best Practice"?
    We are currently evaluating the best way to manage the system across multiple departments; however, there is no real business ownership of the system, which seems to be counter to the reason for having BPC as a solution in the first place.
    Any guidance on this would be very much appreciated.

  • Best practice to replicate the IDM (OIM 11.1.1.5.0) environment

    Dear all,
    I need to replicate IDM production to build an IDM test environment. I have OIM 11.1.1.5.0 in production with lots of custom code.
    I have two approaches:
    Approach 1:
    Manually deploy the code through JDeveloper and export and import all the artifacts. The issue is that this will require a lot of time and resolving dependencies.
    Approach 2:
    Take an export of the OIM, MDS and SOA schemas and import them into the new IDM test DB environment.
    Could you please suggest which is the best practice, and share any pointers to achieve it?
    Appreciate your help.
    Thanks,
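
    For what it's worth, approach 2 usually means a schema-level Data Pump export/import along these lines (the connect string, RCU schema prefixes and directory object are hypothetical; note the reply below argues for rebuilding via approach 1 instead):

        expdp system@idmdb schemas=DEV_OIM,DEV_MDS,DEV_SOAINFRA \
              directory=DATA_PUMP_DIR dumpfile=idm_schemas.dmp logfile=idm_exp.log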

    Follow your build document for the same steps you used to build production.
    You should know where all your code is. You can use the deployment manager to export your configurations. Export customized files from MDS. Just follow the process again, and you will have a clean instance not containing production data.
    It only takes a lot of time if your client is lacking documentation or if you're not familiar with all the parts of the environment. What's 2-3 hours compared to all the issues you will run into if you copy databases or import/export schemas?
    -Kevin

  • Best practice for the test environment & DBA plan activities documents

    Dears,
    In our company, we did sizing for hardware.
    We have three environments (Test/Development, Training, Production).
    But the test environment has fewer and smaller servers than the production environment.
    My question is:
    What is the best practice for the test environment?
    (Are there any recommendations from Oracle related to this? Any PDF files would help me.)
    Also, can I have a detailed document regarding the DBA plan activities?
    I appreciate your help and advice.
    Thanks

  • Best Practice for MUD Environment

    Hi Guys,
    I initially thought of using Merge Repository as an alternative to a MUD environment,
    but I found that while merging repositories, you have to accept changes from either the Modified or the Current repository.
    What if I have two developers working in parallel in a single presentation folder?
    Then I thought a project-based MUD implementation would be the only option, but there each developer has the power to keep or remove the other developer's changes.
    Now I am confused about how I can have multiple users developing a single RPD.
    Please let me know what best practice is used.
    Thanks
    Saurabh

    Below are some explanations; follow the links if you want more information. Personally, I prefer to set up a MUD environment.
    Software Configuration Management
    By default, the Oracle BI repository development environment is not set up for multiple users. However, online editing makes it possible for multiple developers to work simultaneously, though this may not be an efficient methodology, and it can result in conflicts, because developers can potentially overwrite each other's work.
    To develop a repository in a concurrent-development environment, you have several choices:
    * First, you can send the repository to a developer, keep a copy, retrieve it after modification, and perform a Merge Repository (http://gerardnico.com/wiki/dat/obiee/obiee_repository_merge).
    * Second, you can set up a multiuser environment (MUD) (http://gerardnico.com/wiki/dat/analytic/obiee/multiuser_environment), which uses the notion of Projects to split the work area. It permits developers to modify a repository simultaneously and then check in changes.
    The import option, which permits importing a subset of one repository into another, works but is deprecated.
    Success
    Nico

  • Best Practice for Production environment

    Hello everyone,
    Can someone share the best practices for a production environment? Or is there an SAP standard best practice to follow in a production landscape?
    I understand there are best practices available for implementation, migration and upgrade, but I was unable to find one for a productive landscape.
    Thanks.

    Hi Siva,
    What best practice are you looking for? If you can be more specific in your question, we can provide an appropriate response.
    From my Basis experience, some of the best practices:
    1) The productive landscape should provide high availability to the business. For this you may set up DR, HA, or both.
    2) It should have backups configured, for which restore has already been tested.
    3) It should have all monitoring set up, viz. application, OS and DB.
    4) The productive client should not be modifiable.
    5) Users in the production landscape should have appropriate authorizations based on SoD; there should not be any SoD conflicts.
    6) Transports to production should be tightly controlled; any transport to production should be moved only with appropriate Change Board approvals.
    7) Relevant database and OS security parameters should be tested before go-live and then enabled.
    8) Pre-go-live and post-go-live checks should have been performed on the production system.
    9) EWA should be configured at least for the production system.
    10) Production system availability using DR should have been tested.
    Hope this helps.
    Regards,
    Deepak Kori

  • Best practice - Development Lifecycle in an SSRS, SharePoint and TFS environment

    Hi,
    We have installed our SharePoint/SSRS (in SharePoint integrated mode) development environment servers, and we're starting to think about the development process for moving reports to a test environment and then to a production environment. Since the RDL files contain URLs specific to the server on which they reside (the URL to the data source, URLs to images in an image library, URLs to subreports, etc.), I'd like some guidance on how to move RDLs to the other servers.
    Since we're planning to use TFS, one option is a build script that changes the URLs in the RDL file from pointing to the Dev server to the Test server when we promote a report to test, and then to the Prod server when we promote a report to production. The data source libraries and image libraries will use the same paths regardless of environment, so changing the server name in the URL should suffice. However, I don't really like the idea of changing source code (the RDL) during code promotion, but I can't think of an alternative. Is there any best-practice guidance, or would anyone like to share how their development process works?
    Thanks,
    -Mike-
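
    If it helps, the build-time rewrite Mike describes can be a simple token replace. A PowerShell sketch (the file path and server names are hypothetical):

        # Rewrite environment-specific URLs in an RDL during promotion (PowerShell 3+).
        $path = 'Reports\Sales.rdl'    # hypothetical report path
        $rdl  = Get-Content $path -Raw
        $rdl  = $rdl -replace 'http://dev-server/', 'http://test-server/'
        Set-Content $path -Value $rdl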

    TFS questions go to the TFS forums, where they also cover the SharePoint parts.
    You'll find it here: (extract from the sticky post at the top of SharePoint - General on which forum to post to)
    Team Foundation Server - there are several Team Foundation Server forums included in the "Visual Studio Team System" group here
    http://social.msdn.microsoft.com/Forums/en-US/category/vsts/
    Team Foundation Server - Admin is for instance here    http://social.msdn.microsoft.com/Forums/en-US/tfsadmin/threads/
    and Team Foundation Server - General is here  http://social.msdn.microsoft.com/Forums/en-US/tfsgeneral/threads/
    Moving to Off-topic.

  • What is the best practice to deploy the SharePoint site from test to production environment?

    We are beginning new SharePoint 2010 and 2013 development projects, soon developing new features, lists, workflows, customizations to the SharePoint site, and customizations to list forms, and we would like to put good practices (that will help in deployment)
    in place before going ahead with development.
    What is the best way to go about deploying my site from development to production?
    I am using Visual Studio 2012 and also have Designer 2013.
    I have already read that this can be done through PowerShell, through Visual Studio, and via Designer. But at this point I am confused as to which are the best practices specifically for lists, configurations, workflows, site customizations, Visual Studio
    development features, customizations to list forms, etc. You could also point me to links/ebooks covering this topic.
    Thanks in advance for any help.

    Hi Nachiket,
    You can follow the approach below, where the environments have been built in a similar fashion:
    http://thesharepointfarm.com/sharepoint-test-environments/
    If you have less data, you can use http://spdeploymentwizard.codeplex.com/
    http://social.technet.microsoft.com/Forums/sharepoint/en-US/b0bdb2ec-4005-441a-a233-7194e4fef7f7/best-way-to-replicate-production-sitecolletion-to-test-environment?forum=sharepointadminprevious
    For custom solutions like workflows, you can always build WSP packages and deploy them across the environments using PowerShell scripts (a sketch follows below).
    Hope this helps.
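
    As a sketch of that PowerShell route (the solution name, path and web application URL are hypothetical; run in the SharePoint Management Shell):

        # Add the solution to the farm, then deploy it to the target web application.
        Add-SPSolution -LiteralPath 'C:\deploy\MyWorkflows.wsp'
        Install-SPSolution -Identity 'MyWorkflows.wsp' `
                           -WebApplication 'http://test-sites' -GACDeployment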
    Hi, can you answer me specifically with regard to the following:
    lists
    configurations
    workflows
    site customizations like changes to css/masterpage
    Visual studio webparts
    customization to list forms
    Thanks.
