Best Practices for automated patching?

Howdy,
We've been having trouble getting patching to work on our servers for months, so I've decided it's time to start over. I've done some searching and have found tons of different people saying they have the best practices and recommended ways of handling patches, but they're all different.
I'm using SCCM 2012 R2 and want to set up ADRs so that every month we can have all our servers install their Windows updates. I have a few questions:
Is there an "official" tutorial, video, or walkthrough of how we should be doing this?
Is the best way to test this to set up a deployment that is available ASAP but whose deadline is far in the future? Should I then be able to see the patches in Software Center as available, but they just won't actually do anything, since the deadline would be way down the road?
Most of our servers are running 2012 or 2012 R2, and we're just concerned with the standard patches that come out every month.
Thanks for any links, input, advice, etc.
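One quick way to check the "available in Software Center" part on a test box is to ask the client itself what it knows about. A minimal sketch, assuming the third-party Python "wmi" package is installed and the script is run directly on the test server (CCM_SoftwareUpdate is the ConfigMgr client SDK WMI class):

    # Sketch: list updates the ConfigMgr client currently considers applicable.
    # Assumes the third-party "wmi" package (pip install wmi); run on the client.
    import wmi

    # root\ccm\ClientSDK is the ConfigMgr client SDK namespace.
    client = wmi.WMI(namespace=r"root\ccm\ClientSDK")

    # CCM_SoftwareUpdate holds the updates targeted at this machine.
    for update in client.CCM_SoftwareUpdate():
        # EvaluationState is a numeric state code; see Microsoft's documented
        # state table (e.g. pending install vs. pending reboot).
        print(update.ArticleID, update.Name, update.EvaluationState)

If the updates show up here as available but nothing installs before the deadline, the deployment is behaving as intended.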

> I never know if a tutorial I found is a good one or just some random person that recorded what they were doing.
That's just it... tutorials are meant to show you how the features work. They are not meant to be "follow these steps in your production environment and you're done" instructions. Gerry and Niall's guides walk you through building things out in a lab environment so you can learn them. You need to learn how the features work, look at the requirements of your company, and then make conscious decisions about how to implement the tool to meet those requirements.
If you don't feel like you'll be able to synthesize that from the knowledge learned in tutorials/guides/books, then I'd suggest bringing in a consultant with a deep understanding of the product to help you accomplish it.
I hope that helps,
Nash
Nash Pherson, Senior Systems Consultant, Now Micro

Similar Messages

  • Best Practice for Portal Patches and effort estimation

    Hi,
    One of our clients is applying the following patches:
    1. ECC 6.0 SP15 (currently SP14)
    2. ESS MSS SP15 (currently SP14, with some level of functional customization)
    3. EP 7 SP18 (currently SP14)
    We would like to know the best practice for applying portal patches and the effort estimate for redoing the portal development on the new patch.
    o What is the overall level of effort with applying portal patches?
    o How are all the changes to SAP objects handled? Do they have to be manually re-entered?
    o What is the impact of having a single NWDI instance across the portal landscape during the patch process?
    Regards,
    Revathi Raju.

    Hi Revathi,
    o What is the overall level of effort with applying portal patches?
    The overall effort to apply the patch is approximately half a day to one day for a NW7 system. This excludes downloading the patch files, since that depends on your download speed.
    o How are all the changes to SAP objects handled? Do they have to be manually re-entered?
    That depends on your customization. Normally the patch won't affect it, provided you created the customization as an application separate from the SAP standard application.
    o What is the impact of having a single NWDI instance across the portal landscape during the patch process?
    Anything that is related to NWDI may need to be redeployed from NWDI itself.
    Thanks
    Regards,
    AZLY

  • Best Practice for applying patches in WL 8.1

    Below is what I did to apply patches to WL 8.1 SP2. Is there a better way?
    Best Practice? Thanks in advance.
    1) Created directory.
    C:\bea\weblogic81\server\lib\patches
    2) Copied jar files to patches directory.
    CR127930_81sp2.jar
    CR124746_81sp2.jar
    3) Modified startWebLogic.cmd to include jar files first in the classpath.
    set CLASSPATH=%WL_HOME%\server\lib\patches\CR124746_81sp2.jar;%WL_HOME%\server\lib\patches\CR127930_81sp2.jar;%WEBLOGIC_CLASSPATH%;%POINTBASE_CLASSPATH%;%JAVA_HOME%\jre\lib\rt.jar;%WL_HOME%\server\lib\webservices.jar;%CLASSPATH%
    4) Restarted server and saw in console that the patches were applied.


  • Microsoft best practices for patching a Cluster server

    Good morning! I was wondering if you had any web resources (webcasts) or whitepapers on Microsoft best practices for patching a cluster server? I will list what I've seen online; the third one was very good:
    Failover Cluster Step-by-Step Guide: Configuring a Two-Node File Server Failover Cluster
    http://technet.microsoft.com/en-us/library/cc731844(v=ws.10).aspx
    Failover Clusters in Windows Server 2008 R2
    http://technet.microsoft.com/en-us/library/ff182338(v=ws.10)
    Patching Windows Server Failover Clusters
    http://support.microsoft.com/kb/174799/i

    Hi Vincent!
    I assume this step-by-step guide can also be used if you have more than two nodes, as long as you make sure a majority of the nodes are up (and the quorum disk is available).
    I just had a strange experience during maintenance of two nodes (node 7 and node 8) in an 8-node Hyper-V cluster (R2 SP1 with CSV). I used SCVMM 2012 to put the nodes in maintenance mode (live migrating all resources to other nodes). I then looked in Failover Cluster Manager to check that the nodes had been paused, and everything was fine. I then ran Windows Update and restarted with no problem. But after the restart I wanted to run PSP (HP's update utility) to update some more drivers, software, etc. During this PSP update, node 02 suddenly failed. That node is not even an HP blade, so I'm not sure how, but I know network NIC drivers and software were updated by PSP. So my question is:
    Do changes in "Network Connections" on nodes in paused mode affect other nodes in the cluster?
    The networks are listed as "Up" during paused mode, so the only thing I can think of is that during PSP's driver/software update, the NICs on nodes 07 and 08 were going down and up, somehow making node 02 fail.
    So now during maintenance (vendor driver/software/firmware updates, not MS patches), I first put the node in paused mode and then stop the cluster service (and set it to disabled), making sure nothing can affect the cluster.
    Anders

  • Best Practice for utility in Sol Man 4.0

    We have software component ST-ICO of release 150_700 with Patch level 5
    We want a template selection for the 'Utility' industry. I checked in the Service Marketplace and found that 'Baseline Package United Kingdom V1.50, Template: BP_BLKU150' is available in the above software component.
    But we are not getting any templates other than 'BP_UTUS147 - Best Practices for Water Utility' in the SOLAR_PROJECT_ADMIN transaction.
    Kindly suggest whether any patch needs to be applied or some configuration needs to be done.
    Regards
    Mani

    Hi Mani,
    Could you please give me the link to where you found the template BP_BLKU150?
    It would be helpful for me.
    Thanks
    Senthil

  • Best practice for distributing/releasing J2EE applications.

    Hi All,
    We are developing a J2EE application and would like some information on the best
    practices to be followed for distributing/releasing J2EE applications, in general.
    In particular, the dilemma we have is centered around the generation of stub, skeleton
    and additional classes for the application.
    Most app servers can generate the required classes while deploying the EJBs in the application, i.e., at install time. Some (BEA WebLogic and IBM WebSphere are two that we are aware of) also allow these classes to be generated before installation, in which case the .ear file containing the additional classes is the one that is uploaded.
    For instance, say we have assembled the application "myapp.ear". There are two ways in which the classes can be generated. The first is using 'ejbc' (assume we are using BEA WebLogic), which generates the stub, skeleton, and additional classes for the application and returns a file, say "Deployable_myapp.ear", containing all the necessary classes and files. This file is the one that is then installed. The other option is to install the file "myapp.ear" and let the WebLogic app server itself generate the required classes at installation time.
    If the first way, of 'pre-generating' the stubs, is followed, does it require us to separately generate the stubs for each version of the app server that we support? That is, if we generate a deployable file having the required classes using the 'ejbc' of WebLogic 5.1, can the same file be installed on WebLogic 6.1, or do we have to generate a separate file?
    If the second method, of 'install-time generation' of stubs, is used, what is the nature and magnitude of the risk we are taking in terms of the installation failing?
    Any links to useful resources, as well as comments and suggestions, will be appreciated.
    TIA
    Regards,
    Aasif
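    For what it's worth, the pre-generation step is a command-line tool run as part of the build, so it is easy to script. A minimal sketch in Python (this assumes weblogic.ejbc accepts an input archive and an output archive, as described above; the exact arguments changed between WebLogic releases, so check your version's documentation):

    # Sketch: wrap the WebLogic stub pre-generation step in a build script.
    # Assumes weblogic.jar is on the classpath and that weblogic.ejbc takes
    # an input and an output archive; verify against your WL version's docs.
    import subprocess

    result = subprocess.run(
        ["java", "-classpath", "weblogic.jar",
         "weblogic.ejbc", "myapp.ear", "Deployable_myapp.ear"],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        raise RuntimeError("ejbc failed: " + result.stderr)

    Since the generated container classes are specific to the server that generated them, the safe assumption is that each supported server version needs its own pre-generated archive.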


  • Best Practice for Distributing Databases to Customers

    I did a little searching and was surprised not to find a best-practice document on how to distribute Microsoft SQL databases. With other database formats, it's common to distribute them as scripts. That feature seems rather limited with the built-in tools Microsoft provides; there appear to be limits to the length of the script. We're looking to distribute a database several GBs in size. We could detach the database or provide a backup, but that has its own disadvantages, since it limits which versions of SQL Server will accept the database.
    What do you recommend and can you point me to some documentation that handles this practice?
    Thank you.

    It's much easier to distribute schema/data from an older version to a newer one than the other way around. Nearly all SQL Server deployment features support database version upgrade, including the Copy Database wizard, BACKUP/RESTORE, detach/attach, script generation, Microsoft Sync Framework, and a few others.
    Even if you just want to distribute schemas, you may want to distribute the entire database and then truncate the tables to purge the data.
    Backing up and restoring your database is by far the most reliable method of distributing it, but it may not be practical in some cases because you'll need to generate a new backup every time a schema change occurs; that is less of a problem if you already have an automated backup/maintenance routine in your environment.
    As an alternative, you can use the Copy Database functionality in SSMS, although it can be unstable in some situations, especially if you are distributing across multiple subnets and/or domains. It will also require you to purge data if/when applicable.
    Another option is to detach your database, copy its files, and then attach them on both the source and destination instances. This incurs downtime for the detached databases, so there are better methods for distribution available.
    And then there is the previously mentioned method of generating scripts for the schema and then using INSERT statements or the import data wizard available in SSMS (which is very practical and internally implements an SSIS package that can be saved for repeated executions). It works fine and, while not as practical as the other options, it is the best way to distribute databases when their version is being downgraded.
    With all this said, there is no "best practice" for this. There are multiple features, each offering their own advantages and drawbacks, which allows them to align to different business requirements.
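    To make the BACKUP/RESTORE route concrete, a minimal sketch in Python (pyodbc, the connection string, and the paths are assumptions to adapt; note that BACKUP DATABASE cannot run inside a transaction, hence autocommit):

    # Sketch: automate the BACKUP half of BACKUP/RESTORE distribution.
    # Server, database, and file paths are placeholders for your environment.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=myserver;DATABASE=master;Trusted_Connection=yes;",
        autocommit=True,  # BACKUP DATABASE cannot run inside a transaction
    )
    conn.execute(
        "BACKUP DATABASE MyAppDb "
        "TO DISK = N'C:\\Backups\\MyAppDb.bak' WITH INIT, COMPRESSION"
    )  # drop COMPRESSION on editions that do not support it
    conn.close()

    Restoring on the destination is the mirror image with RESTORE DATABASE, subject to the version constraint discussed above (restores only move forward across SQL Server versions).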

  • Best practice for data migration install v1.40 - Error 2732 Directory manag

    Hi
    I'm attempting to install SAP Best Practice for Data migration 1.40 on Win Server 2008 R2 (64 bit).
    Prerequisite error
    Installation program stops with missing file error
    The following file was not found
    ... \migration\InstallationWizard\BusinessObjects Data Services\setup.exe
    The file is necessary for successful installation. Please connect to internet or refer to Quick Guide (available on SAP note 1527151) for information regarding the above file.
    Windows installer log displays
    Error 2732 Directory Manager not initialized
    SAP note 1527151 does not exist or is internal.
    Any help on the root cause of the error is appreciated, as the file does not exist in that folder in the installation zip file.
    The other prerequisite of .NET 3.5.1 is already met.
    The patch has been available since 20.11.2011, so I presume it is a good installation set.
    Thanks,
    Alan

    Hi Alan,
    There are details on Data Migration v1.4 installations on the SAP website and Service Marketplace. The link below should guide you to the right place. It has a PowerPoint presentation and other useful links as well.
    http://help.sap.com/saap/sap_bp/DMS_V140/DMS_US/html/index.htm
    Arun

  • Best practice for partitioning 300 GB disk

    Hi,
    I would like to seek advice on how I should partition a 300 GB disk on Solaris 8.x, and what the optimal size for each partition would be.
    The system will be used internally for running web/application servers and database servers.
    Thanks in advance for your help.

    There is no "best practice" regardles of what others might say. I depends entirely on how you plan on using and maintaining the system. I have run into too many situations where fine-grained file system sizing bit the admins in the backside. For example, I've run into some that assumed that /var is only going to be for logging and printing, so they made it nice and small. What they didn't realize is that patch and package information is also stored in /var. So, when they attempted to install the R&S cluster, they couldn't because they couldn't put the patch info into /var.
    I've also run into other problems where a temp/export system that was mounted on a root-level directory. They made the assumption that "Oh, well, it's root. It can be tiny since /usr and /opt have their own partitions." The file system didn't mount properly, so any scratch files in that directory that were created went to the root file system and filled it up.
    You can never have a file system that's too big, but you can always have a file system that's too small.
    I will recommend the following, however:
    * /var is the most volatile directory and should be on its own several GB partition to account for patches, packages, and logs.
    * You should have another partition as big as your system RAM and assign that parition as a system/core dump for system crashes.
    * /usr or whatever file system it's on must be big enough to assume that it will be loaded with FOSS/Sunfreeware tools, even if at this point you have no plans on installing them. I try to make mine 5-6 GB or more.
    * If this is a single-disk system, do not use any kind of parallel access structure, like what Oracle prefers, as it will most likely degrade system performance. Single disks can only make single I/O calls, obviously.
    Again, there is no "best practice" for this. It's all based on what you plan on doing with it, what applications you plan on using, and how you plan on using it. There is nothing that anyone here can tell you that will be 100% applicable to your situation.
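    Whichever layout you choose, it pays to watch the volatile file systems afterwards. A minimal sketch in Python (the mount points and the 10% threshold are arbitrary examples):

    # Sketch: warn when volatile file systems run low on space.
    # Mount points and the 10% threshold are examples; tune for your layout.
    import os

    for mount in ("/var", "/usr", "/"):
        st = os.statvfs(mount)
        # f_bavail: blocks available to unprivileged users; f_blocks: total.
        free_fraction = st.f_bavail / st.f_blocks
        if free_fraction < 0.10:
            print(f"WARNING: {mount} is below 10% free ({free_fraction:.1%})")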

  • Best Practice for Production IDM setup

    Hi, what is the best practice for setting up production IDM:
    1. Connect IDM prod to ECC DEV,QA and Prod or
    2. Connect IDM prod to ECC prod only and Connect IDM dev to ECC Dev and QA.
    Please also specify pros and cons for both options if possible.
    Thanks in advance,
    Farhan

    We run our IDM installation as per your option 2 (Prod and non-prod on separate instances)
    We use HCM as the source of truth in production and have a strict policy of not allowing non-HCM-based user accounts. HCM creates the SU01 record, and the details are downloaded to IDM through the LDAP extract. Access is provisioned based on roles attached to the HCM position in IDM. In dev/test/uat we create user logins in IDM and push the details out.
    Our thinking was that we definitely needed a testing environment for development and patch testing, and it needs to be separate from production. It was also ideal to use this second environment for dev/test/uat, since we are in the middle of a major SAP project rollout and are creating hundreds of test and training users with various roles, and we prefer to keep this out of the production instance.
    Lately we also created a sandpit environment, since I found that I could not do destructive testing or development in the dev/test/uat instance because we had become reliant on that environment being available; it is almost a second production instance, since we also set the policy that all changes are made through IDM and no direct SU01 changes are permitted.
    Have a close look at your usage requirements before deciding which structure works best for you.

  • Best Practice for Retiring Superseded or Expired Updates

    If you want to clear out superseded or expired updates, do you delete them from your update package, the deployment package, or both? Or is there another best practice for this?
    Thanks,
    Bryan

    Hi Torsten, I'm reading this article because I'm actually planning to solve much the same problem.
    I create software update groups with the monthly patches and then deploy them to certain client collections.
    If a single patch in the software update group becomes "expired", what is the effect on the whole software update group, which still has an active deployment?
    Thanks for your suggestion.
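    If you want to see which members of an update group have gone expired or superseded before you touch anything, you can query the SMS Provider. A minimal sketch, assuming the third-party Python "wmi" package; the site server name and site code are placeholders (SMS_SoftwareUpdate is the provider class that carries the IsExpired/IsSuperseded flags):

    # Sketch: list expired or superseded updates known to the site.
    # "SITESERVER" and site code "P01" are placeholders for your environment.
    import wmi

    provider = wmi.WMI(computer="SITESERVER", namespace=r"root\SMS\site_P01")
    flagged = provider.query(
        "SELECT ArticleID, LocalizedDisplayName, IsExpired, IsSuperseded "
        "FROM SMS_SoftwareUpdate "
        "WHERE IsExpired = TRUE OR IsSuperseded = TRUE"
    )
    for upd in flagged:
        status = "expired" if upd.IsExpired else "superseded"
        print(upd.ArticleID, upd.LocalizedDisplayName, status)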

  • What is the best practice for AppleScript deployment on several machines?

    Hi,
    I am developing some AppleScripts for my colleagues at work and I don't want to visit each of them to deploy my AppleScript on their Macs.
    So, what is the best practice for AppleScript deployment on several machines?
    Is there an installer created by the Automator available?
    I would like to have something like an App to run which puts all my AppleScript relevant files into the right place onto a destination Mac.
    Thanks in advance.
    Regards,

    There's really no 'right place' to put AppleScripts. Folder action scripts need to go in ~/Library/Scripts/Folder Action Scripts (or /Library/Scripts/Folder Action Scripts), anything you want to appear in the script menu needs to go in ~/Library/Scripts (or /Library/Scripts), and script applications should probably go in the Applications folder, but otherwise scripts can be placed anywhere. Conventional places to put them are ~/Library/Scripts, or a subfolder of ~/Library/Application Support if they are run by an application. The more important issue is to make sure you generalize the scripts: use the 'path to' command to get local paths rather than hard-coding them, test that any applications or Unix executables you call are present on the machine, and use script bundles rather than plain scripts if your scripts have private resources.
    You can write a quick installer script if you want to make sure scripts go where you want them. A skeleton version looks like this:
    set scriptsFolder to path to scripts folder from user domain
    set scriptsToExport to path to resource "xxx.scpt" in directory "yyy"
    tell application "Finder"
      duplicate scriptsToExport to scriptsFolder with replacing
    end tell
    say "Scripts are installed"
    Save this as a script application, then open the application package, create a folder called "yyy" in the Resources folder, and copy your script "xxx.scpt" into it. Other people can then run the app to install the script.

  • Best practices for updating agents

    We're getting ready to do our first system-wide update of agents to fix a critical bug. Our summer vacation is just ending, and teachers and students will be coming back very soon and turning on our Windows 7 computers for the first time in many weeks, although they won't all be turned on the same day. When they are turned on they will be attempting to get various updates, in particular Windows updates, but also Flash Player and Adobe Reader. I need to update the agents as quickly as possible, but I'm concerned about the possibility of the agent update conflicting with another update, especially Windows updates. Isn't it possible that Windows Update could restart a computer while the agent update is happening (or the other way around), leaving the machine in an unstable or unusable state? What are the best practices for dealing with this? I considered the possibility of deploying the agent to a dynamic workstation group whose members all have a certain file or files that indicate that they have already received the latest Windows updates. However, I can't see how to create a dynamic group based on such criteria.
    So far I have only updated a few devices at a time using "Deploy System Updates to Selected Devices in the Management Zone". When those updates are done I cancel that deployment because that's the only option I can find that does anything. If you can offer general advice for a better strategy of updating agents I'd appreciate that. Specifically, how would you push an agent update to several hundred computers that will be turned on sometime over the next two weeks?
    Thanks very much.

    Originally Posted by jcw_av
    > We're getting ready to do our first system-wide update of agents to fix a critical bug. [...] Specifically, how would you push an agent update to several hundred computers that will be turned on sometime over the next two weeks?
    To be honest, you have to work around your other deploys, etc. The ZCM agent isn't "aware" of other deploys going on. For example, ZPM doesn't care that you're doing bundles at the same time (you'll get errors in the logs about the fact that only one MSI can run at a time, for example). ZPM usually recovers and picks up where it left off.
    Bundles, on the other hand, are not so forgiving with System Update, especially if you have agents prior to 11.2.4 MU1 (cache corruption errors).
    We usually:
    a) Halt all software rollouts/patching as best we can.
    b) Keep our software deploys (bundles) on the user-login event; typically the system update is on device refresh or a scheduled time, and is device-associated.
    If possible, I'd suggest that you use WOL plus the system update, and voila.
    Or, if no WOL is available, tell your users to leave their PCs turned on (they don't have to be logged in) on X night, and schedule your system updates for that night with auto-reboot enabled. That worked well for us.
    But otherwise, the three components of ZCM (Bundles, ZPM, System Update) don't know or care about each other, AFAIK.
    --Kevin
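    If you go the WOL route, the magic packet itself is trivial to send. A minimal sketch in Python (the MAC address is an example; the packet is six 0xFF bytes followed by the target MAC repeated 16 times, broadcast over UDP):

    # Sketch: send a Wake-on-LAN magic packet. The MAC and broadcast address
    # below are examples; port 9 is the conventional discard port.
    import socket

    def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        packet = b"\xff" * 6 + mac_bytes * 16
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(packet, (broadcast, port))

    wake("00:11:22:33:44:55")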

  • The SAP Best Practices for Discrete Manufacturing V1.603

    Hi All,
    We are implementing SAP A1 for a manufacturing company, so we selected SAP Best Practices for Discrete Manufacturing V1.603. My question is: after the ECC 6.0 installation, to what level do I need to update the patches? If I update all patches to the latest level, it causes problems while installing the building blocks.
    Thanks in Advance...
    Sushma Tatineni

    Dear,
    For detailed information, please refer to OSS notes 1040010 and 1223033 - SAP Best Practices for Discrete Manufacturing V1.603 CN.
    Ensure that exactly the support package levels mentioned in the Quick Guide are implemented (not lower and NOT HIGHER).
    Regards,
    R.rahmankar

