Best Practices for Patching RDS Environment Computers

Our manager has tasked us with creating a process for patching our RDS environment computers with no disruption to users if possible. This is our environment:
2 Brokers configured in HA Active/Active Broker mode
2 Web Access servers load balanced with a virtual IP
2 Gateway servers load balanced with a virtual IP
3 session collections, each with 2 hosts
Patching handled through Configuration Manager
Our biggest concern is the gateways/hosts. We do not want to terminate existing off-campus connections when patching. Is there any way to ensure users are not using a particular host or gateway when the patch is applied?
Any real world ideas or experience to share would be appreciated.
Thanks,
Bryan

Hi,
Thank you for posting in Windows Server Forum.
Since you have two servers for each role, you can script the patching so that only one server per role is updated at a time: drain or redirect traffic to the partner server, patch and restart the first server (a restart is generally required for the update to complete successfully), and then repeat the same steps on the second server.
Hope it helps!
Thanks.
Dharmesh Solanki
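To make that concrete for the session hosts: on Windows Server 2012/2012 R2 (which an Active/Active broker configuration suggests) you can put one host per collection into drain mode, wait for existing sessions to end, patch it, and then repeat on its partner. Users are never cut off, because draining only blocks new connections. A minimal PowerShell sketch, assuming the RemoteDesktop module and hypothetical server names (rdsh1.contoso.com, broker.contoso.com):

    # Drain mode: block new connections, let existing sessions continue
    Set-RDSessionHost -SessionHost "rdsh1.contoso.com" -NewConnectionAllowed No -ConnectionBroker "broker.contoso.com"

    # Wait until no sessions remain on this host
    while (Get-RDUserSession -ConnectionBroker "broker.contoso.com" |
           Where-Object { $_.HostServer -eq "rdsh1.contoso.com" }) {
        Start-Sleep -Seconds 300
    }

    # ...deploy patches via Configuration Manager and reboot...

    # Re-enable new connections once the host is patched
    Set-RDSessionHost -SessionHost "rdsh1.contoso.com" -NewConnectionAllowed Yes -ConnectionBroker "broker.contoso.com"

For the gateways, the analogous step is to remove one node from the load balancer's VIP pool, wait for its existing tunnels to drain, patch it, and re-add it before moving on to the second node.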

Similar Messages

  • Best practice for the test environment & DBA plan activities

    Dears,
    In our company, we did a hardware sizing exercise.
    We have three environments (Test/Development, Training, Production).
    However, the test environment has fewer servers than the production environment.
    My question is:
    What is the best practice for the test environment?
    (Are there any recommendations from Oracle related to this, or any PDF files that could help me?)
    Also, can I have a detailed document regarding the DBA plan activities?
    I appreciate your help and advice.
    Thanks
    Edited by: user4520487 on Mar 3, 2009 11:08 PM

    Follow your build document for the same steps you used to build production.
    You should know where all your code is. You can use the deployment manager to export your configurations. Export customized files from MDS. Just follow the process again, and you will have a clean instance not containing production data.
    It only takes a lot of time if your client is lacking documentation or if you're not familiar with all the parts of the environment. What's 2-3 hours compared to all the issues you will run into if you copy databases or import/export schemas?
    -Kevin

  • Microsoft best practices for patching a Cluster server

    Good morning! I was wondering if you had any web resources (webcasts) or whitepapers on Microsoft best practices for patching a cluster server? I will list what I have seen online; the third one was very good:
    Failover Cluster Step-by-Step Guide: Configuring a Two-Node File Server Failover Cluster
    http://technet.microsoft.com/en-us/library/cc731844(v=ws.10).aspx
    Failover Clusters in Windows Server 2008 R2
    http://technet.microsoft.com/en-us/library/ff182338(v=ws.10)
    Patching Windows Server Failover Clusters
    http://support.microsoft.com/kb/174799/i

    Hi Vincent!
    I assume this step-by-step guide can also be used if you have more than 2 nodes, as long as you make sure a majority of nodes is up (and the quorum disk is available).
    I just had a strange experience during maintenance of 2 nodes (node 7 and node 8) in an 8-node Hyper-V cluster (R2 SP1 with CSV). I used SCVMM 2012 to put the nodes in maintenance mode (live migrating all resources to other nodes). I then looked in Failover Cluster Manager to check that the nodes had been "Paused", and everything was just fine. I then ran Windows Update and restarted; no problem. But after the restart I wanted to run PSP (HP's update utility) to update some more drivers, software, etc. During this PSP update, node 02 suddenly failed. That node is not even an HP blade, so I'm not sure how, but I know network NIC drivers and software were updated by PSP. So my question is:
    Do changes in "Network Connections" on nodes in "Paused" mode affect other nodes in the cluster?
    The networks are listed as "Up" during Paused mode, so the only thing I can think of is that during PSP's driver/software update, the NICs on nodes 07 and 08 were going down and up, somehow making node 02 fail.
    So now, during maintenance (vendor driver/software/firmware updates, not MS patches), I first put the node in "Paused" mode and then stop the cluster service (and set it to disabled), making sure nothing can affect the cluster.
    Anders
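    For reference, the pause/patch/resume cycle described above can also be scripted with the FailoverClusters module (Windows Server 2008 R2 or later; on 2012+ you can add -Drain to Suspend-ClusterNode to move roles off automatically). A minimal sketch with a hypothetical node name:

        Import-Module FailoverClusters

        # Pause the node so it stops hosting cluster roles
        Suspend-ClusterNode -Name "node7"

        # ...apply Windows updates / vendor utilities, reboot...

        # Bring the node back into service
        Resume-ClusterNode -Name "node7"

        # Confirm every node is Up before patching the next one
        Get-ClusterNode | Format-Table Name, State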

  • Best Practices for patching Exchange 2010 servers.

    Hi Team,
    Looking for best practices on patching Exchange Server 2010:
    precautions, steps, and pre- and post-patching checks.
    Thanks.

    Are you referring to Exchange updates? If so:
    http://technet.microsoft.com/en-us/library/ff637981.aspx
    Install the Latest Update Rollup for Exchange 2010
    http://technet.microsoft.com/en-us/library/ee861125.aspx
    Installing Update Rollups on Database Availability Group Members
    Key points:
    Apply in role order
    CAS, HUB, UM, MBX
    If you have CAS roles in an array or load-balanced, they should all have the same SP/RU level, so coordinate the Exchange updates and add/remove nodes as needed so you do not run for an extended time with different Exchange levels in the same array.
    All the DAG nodes should be at the same rollup/SP level as well. See the above link on how to accomplish that.
    If you are referring to Windows Updates, then I typically follow the same install pattern:
    CAS, HUB, UM, MBX
    With Windows updates, however, I tend not to worry about suspending activation on the DAG members; rather, I simply move the active mailbox copies, apply the update, and reboot if necessary.
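    If you do want to script the DAG member step, Exchange 2010 SP1 ships maintenance scripts in the server's Scripts directory that move the active copies and suspend activation for you. A rough sketch from the Exchange Management Shell, with hypothetical server and DAG names:

        cd $exscripts

        # Move active databases off the node and block activation
        .\StartDagServerMaintenance.ps1 -serverName "MBX01"

        # ...install the rollup / Windows updates, reboot...

        # Take the node out of maintenance
        .\StopDagServerMaintenance.ps1 -serverName "MBX01"

        # Optionally rebalance active copies across the DAG
        .\RedistributeActiveDatabases.ps1 -DagName "DAG1" -BalanceDbsByActivationPreference -Confirm:$false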

  • Best Practices for patch/rollback on Windows?

    All,
    I have been working on BO XI with UNIX for some time now and while I am pretty comfortable with managing it on UNIX, I am not too sure about the "best practices" when it comes to Windows.
    I have a few specific questions:
    1) What is the best way to apply a patch or Service Pack to BO XI R2 in a Windows environment without risking system corruption?
    - It is relatively easier on UNIX because you don't have to worry about registry entries and you can even perform multiple installations on the same box as long as you use different locations and ports.
    2) What should be the ideal "rollback" strategy in case an upgrade/patch install fails and corrupts the system?
    I am sure I will have some follow-up questions, but if someone can get the discussion rolling with these for now, I would really appreciate it!
    Is there any documentation available around these topics on the boards someplace?
    Cheers,
    Sarang

    This is unofficial, but it usually applies if you run into a disabled system as a result of a patch and the removal/rollback does NOT work (in other words, you are still down).
    You should have made complete backups of your FRS, CMS DB, and any customizations in your environment.
    Remove the base product and any separate products that share registry keys (i.e. Crystal Reports)
    Remove the leftover directories (for XI R2 this is boinstall\Business Objects\*)
    Remove the primary registry keys (HKEY_LOCAL_MACHINE\SOFTWARE\Business Objects\* and HKEY_CURRENT_USER\SOFTWARE\Business Objects\*)
    Remove any legacy keys (i.e. Crystal*)
    Remove any patches from the registry (look in Control Panel and search for the full patch name)
    Then reinstall the product (and test)
    Add back any customizations
    Reinstall either the latest patch you had prior to the update, or the newest patch (if needed)
    And restore the FRS and CMS DB.
    There are a few possible modifications to these steps, and you should leave room to add more (if they improve your odds of success).
    Regards,
    Tim
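    One addition to Tim's steps: before deleting anything, export the registry branches you are about to remove so you can put them back if the reinstall fails. A minimal sketch using reg.exe from PowerShell (key names taken from the steps above; verify the exact paths in regedit on your own install):

        # Back up the BusinessObjects registry branches before removing them
        reg export "HKLM\SOFTWARE\Business Objects" C:\backup\bo_hklm.reg /y
        reg export "HKCU\SOFTWARE\Business Objects" C:\backup\bo_hkcu.reg /y

        # Restore later if needed:
        # reg import C:\backup\bo_hklm.reg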

  • Best practices for securely storing environment properties

    Hi All,
    We have a legacy security module that is included in many different applications. Historically, the settings (such as database/LDAP username and password) were stored directly in the files that use them. I'm trying to move towards a more centralized and secure method of storing this information, but need some help.
    First of all, I'm struggling a little bit with proper scoping of these variables. If another application does a cfinclude on one of the assets in this module, these environment settings must be visible to the asset, but preferably not visible to the 'calling' application.
    Second, I'm struggling with the proper way to initialize these settings. If other applications run a cfinclude on these assets, the application.cfm in the local directory of the included script does not get processed. I'm left with running an include statement in every file, which I would prefer to avoid if at all possible.
    There are a ton (>50) of applications using this code, so I can't really change the external interface. Should I create a component that returns the private settings and then set the 'public' settings with Server scope? Right now I'm using Application scope for everything because of a basic misunderstanding of how the application.cfm files are processed, and that's a mess.
    We're on ColdFusion 7.
    Thanks!

  • Best practice for patching (Feb 2014 CU) of Project Server 2010

    Hi,
    Please advise.
    Thanks
    srabon

    Hi,
    My current environment is:
    # VM-1 (Project Server 2010, Enterprise Edition)
    - Microsoft SharePoint and Project Server 2010 SP1
    - Configuration Database Version 14.0.6134.5000
    - Patch installed: KB2767794
    # VM-2 (SQL Server 2008 R2)
    Now my plan is:
    - Take a snapshot of VM-1
    - Should I also take a VM snapshot of VM-2?
    - Take a farm backup
    - Take a /pwa site collection backup
    For your information, in your article MS says that I may have SP1 or SP2 before running this patch, but you mentioned I have to have SP2 as well?
    FYI:
    Prerequisites
    To install this cumulative update, you must have one of the following products installed:
    Microsoft Project Server 2010 Service Pack 1 (SP1) or Service Pack 2 (SP2)
    Microsoft SharePoint Server 2010 Service Pack 1 (SP1) or Service Pack 2 (SP2)
    Microsoft SharePoint Foundation 2010 Service Pack 1 (SP1) or Service Pack 2 (SP2)
    Please advise if I have missed something in my plan.
    Thanks
    srabon
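    For the farm and site-collection backups in that plan, both can be scripted from the SharePoint 2010 Management Shell before the CU is applied. A minimal sketch with hypothetical paths and URLs:

        Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

        # Full farm backup prior to patching
        Backup-SPFarm -Directory "\\backupserver\spbackup" -BackupMethod Full

        # Separate backup of the /pwa site collection
        Backup-SPSite "http://projectserver/pwa" -Path "\\backupserver\spbackup\pwa.bak"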

  • Best practice for setting an environment variable used during NW AS startup

    We have installed some code which runs in both the ABAP and Java environments, and some functionality of this code is determined by operating system environment variables. We have therefore changed the .sapenv_<host>.csh and .sapenv_<host>.sh scripts found in the <sid>adm user's home directory. This works, but we are wondering what happens when SAP is upgraded, and whether these custom changes to the .sh and .csh scripts will be overwritten during such an upgrade. Is there a better way to set environment variables so they can be used by the SAP server software when it is started by the <sid>adm user?

    Hi,
    Thankyou. I was concerned that if I did that there might be a case where the .profile is not used, e.g. when a non-interactive process is started I was not sure if .profile is used.
    What do you mean with non-interactive?
    If you login to your machine as sidadm the profile is invoked using one of the files you meant. So when you start your Engine the Environment is property set. If another process is spawned or forked from a running process it inherits / uses the same Environment.
    Also, on one of my servers I have a .profile a .login and also a .cshrc file. Do I need to update all of these ?
    the .profile is used by bash and ksh
    The .cshrc is used by csh and it is included via source on every Shell Startup if not invoked with the -f Flag
    the .login is also used by csh and it is included via source from the .cshrc
    So if you want to support all shells you should update the .profile (bash and ksh) and one of .cshrc or .login for csh or tcsh
    In my /etc/passwd the <sid>adm user is configured with /bin/csh shell, so I think this means my .cshrc will be used and not the .profile ? Is this correct ?
    Yes correct, as described above!
    Hope this helps
    Cheers

  • Best practice for working on multiple computers

    How do I handle working on multiple devices without having to sync the local files with the remote/test server every time I change machines?
    I have 2 computers, a desktop and a laptop. Usually I code on my desktop, but from time to time I need to make a few edits on my laptop, e.g. when I'm not at home.
    In my early days (CS3) I used to edit the files directly on the remote server, which is no longer possible since - I think - CS5. Moreover, I'm quite happy to have local files I can browse and search through very quickly.
    However, every time I need to make a quick edit, I have to sync the whole site with my local files, which is very inconvenient. And sometimes I forget that I edited a file on my laptop and uploaded it to the server, and then I start working again on the desktop with the old local version of that file. Some projects are quite large, with thousands of files due to plugins (e.g. TinyMCE), for example a webshop. It is a real pain to wait for the sync when I just need to edit one word.
    So what is the default solution for this problem?

    Well, thank you for your answers.
    Using an online drive system like Dropbox seems to be a fine solution; however, I wish I didn't need third-party software to do so. I have two concerns about this solution:
    Syncing problems: when I hit CTRL+S, Dreamweaver automatically saves my local files and uploads them to the server. If there is an additional Dropbox sync, isn't the whole solution prone to errors? (Any experience with OneDrive? As it comes preinstalled and has 25 GB free, I might give it a try for syncing the local DW data.)
    Most important: password security. I store my MySQL connection information (DB name, passwords, hosts...) in a PHP file. As this connection information is in plain text, I'm not very happy that MS (or Dropbox, Google, ...) can see and scan this data.
    @Nancy O.: I will start using check-in/check-out; it seems to be a great feature. Just to pin down what it does and does not do: as long as I have checked out a file, I can't edit it on my other machine, which is nice. However, back to the new-file.html example: I won't see this file on my desktop unless I sync it (using DW sync, Dropbox, or anything else), correct?
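    If you would rather not rely on a third-party sync service at all, one low-tech alternative (a sketch only, assuming both machines can reach a common share; all paths are hypothetical) is to mirror the local site folder with robocopy before switching machines:

        # Mirror the desktop's local site folder up to a shared location
        robocopy "C:\Sites\webshop" "\\homeserver\sites\webshop" /MIR /Z /FFT /R:2 /W:5

        # On the laptop, mirror it back down before editing
        robocopy "\\homeserver\sites\webshop" "C:\Sites\webshop" /MIR /Z /FFT /R:2 /W:5

    Note that /MIR deletes files at the destination that no longer exist at the source, so always mirror in the right direction.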

  • Best Practices for patching Sun Clusters with HA-Zones using LiveUpgrade?

    We've been running Sun Cluster for about 7 years now, and I for one love it. About a year ago, we started consolidating our standalone web servers into a 3-node cluster using multiple HA-Zones. For the most part, everything about this configuration works great! One problem we're having is with patching. So far, the only documentation I've been able to find that talks about patching clusters with HA-Zones is the following:
    http://docs.sun.com/app/docs/doc/819-2971/6n57mi2g0
    Sun Cluster System Administration Guide for Solaris OS
    How to Apply Patches in Single-User Mode with Failover Zones
    This documentation works, but has two major drawbacks:
    1) The nodes/zones have to be patched in single-user mode, which translates to major downtime for patching.
    2) If there are any problems during the patching process, or after the cluster is up, there is no simple back-out process.
    We've been using a small test cluster to test LiveUpgrade with HA-Zones. We've worked out most of the bugs, but we are still patching our HA-Zoned clusters based on home-grown steps, not anything blessed by Oracle/Sun.
    How are others patching Sun Cluster nodes with HA-Zones? Has anyone found or been given Oracle/Sun documentation that lists the steps to patch Sun Clusters with HA-Zones using LiveUpgrade?
    Thanks!

    Hi Thomas,
    there is a blueprint that deals with this problem in much more detail. It is based on configurations that rely solely on ZFS, i.e. for root and the zone roots, but it should be applicable to other environments as well: "Maintaining Solaris with Live Upgrade and Update On Attach" (http://wikis.sun.com/display/BluePrints/Maintaining+Solaris+with+Live+Upgrade+and+Update+On+Attach)
    Unfortunately, due to some redirection work in the joint Sun and Oracle network, the blueprint is currently not accessible. If you send me an email with your contact data, I can send you a copy. (You'll find my address on the web.)
    Regards
    Hartmut

  • Best practices for VTP / VLAN environment

    Hi,
    We currently have one VTP domain where all our network devices are configured.
    We now want to join another VTP domain to this domain, and I wonder what the best approach would be:
    1. I can configure all the VLAN IDs on my own VTP server (there are no overlapping IDs) and configure the devices from the other domain as clients (but what happens to the VLAN configurations made on the old VTP server?), and then
    2. connect our two networks.
    Or is it better to change only the VTP domain name on the other VTP server (to the same as ours) and then connect the networks together? What will happen to the VTP/VLAN configuration of both servers: will they be merged, or will the server with the highest revision number just copy its database to the other server and delete the current VTP/VLAN configuration?
    Is it good to have two VTP servers?

    I would like to warn you!
    As soon as the VTP server you want to get rid of is moved into the other domain, its configuration revision number will reset to 0 (the change of domain name resets the revision number), and its VLAN database will be overwritten by the VTP server of the new domain. All the VLANs of the old VTP domain will be lost.
    I would proceed this way (VTP A is the domain you want to keep, VTP B is the domain you want to merge into VTP A):
    1) Configure all the VLANs of VTP B in VTP A.
    2) Reconfigure all of VTP B's switches as VTP A clients.
    That's it.
    Regards,
    Christophe

  • Best Practice for Production environment

    Hello everyone,
    Can someone share the best practices for a production environment? Or is there an SAP standard best practice to follow in a production landscape?
    I understand there are Best Practices available for Implementation, Migration and Upgrade, but I was unable to find one for a productive landscape.
    Thanks.

    Hi Siva,
    What best practice are you looking for? If you can be more specific with your question, we can provide a more appropriate response.
    From my Basis experience, some of the best practices:
    1) The productive landscape should be highly available to the business. For this you may set up DR or HA, or both.
    2) It should have backups configured, for which a restore has already been tested.
    3) It should have all monitoring set up, viz. application, OS and DB.
    4) The productive client should not be modifiable.
    5) Users in the production landscape should have appropriate authorization based on SoD. There should not be any SoD conflicts.
    6) Transports to production should be highly controlled. Any transport to production should be moved only with appropriate Change Board approvals.
    7) Relevant database and OS security parameters should be tested before go-live and enabled.
    8) Pre-go-live and post-go-live checks should have been performed on the production system.
    9) EWA should be configured at least for the production system.
    10) Production system availability using DR should have been tested
    Hope this helps.
    Regards,
    Deepak Kori

  • Best practice for licence server for RDS Farm & Certificate errors

    Hello,
    I am in the process of creating an RDS farm using Server 2008 R2.  I have three Session Hosts and a Connection Broker.
    I have a set of 10 user CALs available and also another 20 on our current RDS server which will need migrating once we go live with the farm.
    I understand the User CALs need to be installed on another Server 2008 R2 machine, and I am wondering what best practice is. We are running an entirely virtual environment, and it would be simple enough to create another server and install the CALs on there.
    The only issue with that is that I would need to create a replica of this new machine for DR purposes, and this would take up valuable space which may not be necessary.
    We are planning on creating replicas of one of the Session Hosts and the Broker for DR, so I am guessing I would need to install some CALs on the Session Host which is going to be replicated.
    There are a few options, and I am just wondering what the best way is to go about things.
    Also, as an aside, I am getting an annoying certificate error each time I log a test user onto the RDS farm - I think this is because I am using the DNS alias of the RDS farm to log on. Is there an easy way to get around this, other than 'Do not show this message again'? I have been doing some research, and the world of certificates is very confusing!
    Thanks,
    Caroline
    C.Rafferty

    Hi Caroline,
    Firstly, for your license-related issue: you can perform these steps on any VM, including the new VM you would create as a replica of the RDSH server. Just be sure to install the RD License server role on it, activate it, and then install the RDS CALs on it. If possible, though, do not install the RD License server on the same machine as the RD Connection Broker; keep it separate. You can also install the RD License server alongside AD, or make a replica of that server and install RD Licensing on it.
    Best practices for setting up Remote Desktop Licensing (Terminal Server Licensing) across Active Directory Domains/Forests or Workgroup
    http://support.microsoft.com/kb/2473823
    What is the specific certificate error you are receiving?
    If you're going to allow users to connect externally and they will not be part of your domain, you will need to deploy certificates from a public CA. In the meantime, you can refer to this blog for insight into the certificate question:
    Certificate Requirements for Windows 2008 R2 and Windows 2012 Remote Desktop Services
    http://blogs.technet.com/b/askperf/archive/2014/01/24/certificate-requirements-for-windows-2008-r2-and-windows-2012-remote-desktop-services.aspx
    Hope it helps!
    Thanks.
    Dharmesh Solanki
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]
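    As a footnote, the RD Licensing role service itself is a one-line install from PowerShell on Server 2008 R2 (activation and CAL installation then happen in RD Licensing Manager). A sketch, assuming the feature name RDS-Licensing:

        Import-Module ServerManager

        # Install the Remote Desktop Licensing role service
        Add-WindowsFeature RDS-Licensing

        # Verify it installed
        Get-WindowsFeature RDS-Licensing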

  • Best Practice for FlexConnect Wireless roaming in MediaNet environment?

    Hello!
    Current Cisco best practice recommendations for enterprise MediaNet design, specify that VLANs be local to a switch / switch stack (i.e., to limit the scope of spanning-tree). 
    In the wireless world, this causes problems if you want roaming users to keep real-time applications up and running. Every time they connect to a new AP on a different VLAN, they need to get a new IP address, which interrupts real-time apps.
    So... best practice for LAN users causes real problems for wireless users.
    I thought I'd post here in case there's a best practice for implementing wireless roaming in a routed environment that we might have missed so far!
    We have a failover pair of FlexConnect 7510s, btw, configured for local switching for Internal users, and central switching with an anchor controller on the DMZ for Guest users.
    Thanks,
    Deb

    Thanks for your replies, Stephen and JSnyder.
    The situation here is that the original design engineer is no longer here, and the original design was not MediaNet-friendly, in that it had a very few /20 subnets bridged over entire large sites.
    These several large sites (with a few hundred wireless users per site) are connected to an HQ location (where the 7510s in failover mode are installed) via 1G Ethernet hand-offs (MPLS at the WAN provider). The 7510s are new, and are replacing older controllers at the HQ location.
    The internal employee wireless users use resources both local to their site, as well as centralized resources.  There are at least as many Guest wireless users per site as there are internal employee users, and the service to them consists of Internet traffic only.  (When moved to the 7510s, their traffic will continue to be centrally switched and carried to an anchor controller in the DMZ.) 
    (1) So, going local mode seems impractical due to the sheer number of users whose traffic bound for their local site would be traversing the WAN twice.  Too much bandwidth would be used.  So, that implies the need to use Flex / HREAP mode instead.
    (2) However, re-designing each site's IP environment for MediaNet would suggest to go routed to the closet.  However, this breaks seamless roaming for users....
    So, this conundrum is why I thought I'd post here, and see if there was some other cool / nifty solution I wasn't yet aware of. 
    The only other (possibly friendly to both needs) solution I'd thought of was to GRE tunnel a subnet from each closet to the collapsed Core / Disti switch at each site.  Unfortunately, GRE tunnels are not supported in the rev of IOS on the present equipment, and so it isn't possible to try this idea.
    Another "blue sky" idea I had (not for this customer, but possibly elsewhere in the future), is to use LAN switches such as 3850s that have WLC functionality built-in.  I haven't yet worked with the WLC s/w available on those, but I was thinking it looks like they could be put into a mobility group, and L3 user roaming between them might then work.  Do you happen to know if this might be a workable solution to the overall big-picture problem? 
    Thanks again for taking the time and trouble to reply!
    Deb

  • Best practice for a deployment (EAR containing WAR/EJB) in a production environment

    Hi there,
    I'm looking for some hints regarding best-practice deployment in a production environment (currently we are not using a WLS cluster).
    We are using ANT for building, packaging and (dynamic) deployment (via weblogic.Deployer) in the development environment, and this works fine (in the meantime).
    From my point of view, I would prefer this kind of deployment not only for development but also for the production system. However, I found some hints in books whose authors prefer static deployment for the production system.
    My questions:
    Could anybody provide me with some links to whitepapers regarding best practices for deployment into a production system?
    What is your experience with the new two-phase deployment coming with WLS 7.0?
    Is it really a good idea to use static deployment (and what is the advantage of that kind of deployment)?
    Thanks in advance
    -Martin
