Piloting Software Updates Best Practices

Hello All
I am in the process of configuring our Software Updates infrastructure. My plan is to deploy Patch Tuesday updates first to a Pilot group (collection) and then to what I will call the Production group (collection). When it comes to the pilot phase of software
update testing, of course we are just trying to see whether any updates prove problematic. Are there any other testing methods or tips that anyone uses? Any helpful info would be appreciated.
Thanks,
Phillip
Phil Balderos

Generally, folks let the pilot users know that updates are coming and to do their normal "thing(s)". If there is something business critical, then having them test that would also be prudent. The pilot users should be representative of a broad
cross-section of users to try to catch any issues among all of the various tasks that go on. Getting informed pilot users involved is of course helpful also so that they can provide relevant and pertinent feedback. At the end of the day though, there are no
guarantees, so you should stress this to management.
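To make the pilot-then-production flow concrete, here is a rough sketch using the ConfigurationManager PowerShell module that ships with the console. The cmdlet and parameter names below are assumptions to verify against your ConfigMgr version, and the site code, collection, and update group names are placeholders, not anything from this thread:

# Sketch: deploy one software update group to a Pilot collection right away,
# then to Production a week later. All names below are placeholders.
Import-Module "$($ENV:SMS_ADMIN_UI_PATH)\..\ConfigurationManager.psd1"   # module installed with the console
Set-Location "ABC:"    # ABC = your site code

$sug = "Patch Tuesday 2015-03"    # software update group for this month

# Pilot collection gets the updates immediately with a short deadline
New-CMSoftwareUpdateDeployment -SoftwareUpdateGroupName $sug `
    -CollectionName "SU - Pilot" -DeploymentName "$sug - Pilot" `
    -DeploymentType Required `
    -AvailableDateTime (Get-Date) -DeadlineDateTime (Get-Date).AddDays(2)

# Production gets the same group a week later, leaving time for pilot feedback
New-CMSoftwareUpdateDeployment -SoftwareUpdateGroupName $sug `
    -CollectionName "SU - Production" -DeploymentName "$sug - Production" `
    -DeploymentType Required `
    -AvailableDateTime (Get-Date).AddDays(7) -DeadlineDateTime (Get-Date).AddDays(14)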
Jason | http://blog.configmgrftw.com | @jasonsandys

Similar Messages

  • Not a question, but a suggestion on updating software and best practice (Adobe, we need to create stickies for the forums)

    Lots of you are hitting the brick wall in updating, and the end result is a non-recoverable project.  In a production environment and with projects due, it's best that you never update while in the middle of projects.  Wait until you have a day or two of down time, then test.
    For best practice, get into the habit of saving off your projects to a new name by incremental versions.  i.e. "project_name_v001", v002, etc.
    Before you close a project, save it, then save it again to a new version. In this way you'll always have two copies and will not lose the entire project.  Most projects crash upon opening (at least in my experience).
    At the end of the day, copy off your current project to an external drive.  I have a 1TB USB3 drive for this purpose, but you can just as easily save off just the PPro, AE and PS files to a stick.  If the video corrupts, you can always re-ingest.
    Which leads us to the next tip: never clear off your cards or wipe the tapes until the project is archived.  Always cheaper to buy more memory than recouping lost hours of work, and your sanity.
    I've been doing this for over a decade and the number of projects I've lost?  Zero.  Have I crashed?  Oh, yeah.  But I just open the previous version, save a new one and resume the edit.


  • IOS Update Best Practices for Business Devices

    We're trying to figure out some best practices for doing iOS software updates to business devices.  Our devices are scattered across 24 hospitals and parts of two states. Going forward there might be hundreds of iOS devices at each facility.  Apple has tools for doing this in a smaller setting with a limited network, but to my knowledge, nothing (yet) for a larger implementation.  I know configurator can be used to do iOS updates.  I found this online:
    https://www.youtube.com/watch?v=6QPbZG3e-Uc
    I'm thinking the approach to take for the time being would be to have a mobile sync station setup with configurator for use at each facility.  The station would be moved throughout the facility to perform updates to the various devices.  Thought I'd see if anyone has tried this approach, or has any other ideas for dealing with device software updates.  Thanks in advance. 

    Hi Bonesaw1962,
    We've had our staff and students run iOS updates OTA via Settings -> Software Update. In the past, we put a DNS block on Apple's update servers to prevent users from updating iOS (like last fall when iOS 7 was first released). By blocking mesu.apple.com, the iPads weren't able to check for or install any iOS software updates. We waited until iOS 7.0.3 was released before we removed the block to mesu.apple.com, at which point we told users if they wanted to update to iOS 7 they could do so OTA. We used our MDM to run reports periodically to see how many people updated to iOS 7 and how many stayed on iOS 6. As time went on, just about everyone updated on their own.
    If you go this route (depending on the number of devices you have), you may want to take a look at Caching Server 2 to help with the network load https://www.apple.com/osx/server/features/#caching-server . From Apple's website, "When a user on your network downloads new software from Apple, a copy is automatically stored on your server. So the next time other users on your network update or download that same software, they actually access it from inside the network."
    I wish there was a way for MDMs to manage iOS updates, but unfortunately Apple hasn't made this feature available to MDM providers. I've given this feedback to our Apple SE, but haven't heard if it is being considered or not. Keeping fingers crossed.
    Hope this helps. Let us know what you decide on and keep us posted on the progress. Good luck!!
    ~Joe

  • Exchange SP1 to SP3 Update Best Practices?

    Can someone outline Patching Best Practices?
    I need to upgrade Exchange 2010 SP1 to SP3 for a potential customer. There are a few questions on my mind:
    - How do I proceed with a multi-site deployment? Can I patch all CAS, Mailbox, and Edge servers for site A and then move to site B?
    - What needs to be taken care of while performing the whole process?
    - I have a theoretical overview of how to proceed with the upgrade; I just want to make sure there is nothing else that needs to be taken care of before performing this in a production environment.
    Thanks in Advance.

    Hi,
    In addition to Will Martin's suggestion, I would like to verify if there is a DAG in your environment. If yes, please follow the steps below to upgrade it.
    1. Run the StartDagServerMaintenance.ps1 script to put the DAG member into maintenance mode and prepare it for the update rollup installation.
    2. Install the update rollup.
    3. Run the StopDagServerMaintenance.ps1 script to take the DAG member out of maintenance mode and put it back into production.
    4. Optionally rebalance the DAG by using the RedistributeActiveDatabases.ps1 script.
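    For reference, the sequence above looks roughly like this when run from the Exchange Management Shell on the DAG member (a sketch only: "EX01" and "DAG1" are placeholder names, and the script parameters should be verified against your Exchange 2010 build):

    # The maintenance scripts live in the Exchange scripts folder, exposed in EMS as $exscripts.
    cd $exscripts

    # 1. Put the DAG member into maintenance mode
    .\StartDagServerMaintenance.ps1 -serverName "EX01"

    # 2. Install the service pack / update rollup on EX01 (run setup or the rollup package)

    # 3. Take the DAG member out of maintenance mode
    .\StopDagServerMaintenance.ps1 -serverName "EX01"

    # 4. Optionally rebalance the active database copies across the DAG
    .\RedistributeActiveDatabases.ps1 -DagName "DAG1" -BalanceDbsByActivationPreference -Confirm:$false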
    Hope this can be helpful to you.
    Best regards,
    Please remember to mark the replies as answers if they help, and unmark the answers if they provide no help. If you have feedback for TechNet Support, contact [email protected]
    Amy Wang
    TechNet Community Support

  • Software Design Best Practices

    Background:
    I have an existing J2EE web application in place with an Oracle 11 db and a WebSphere app server. The database is about 400 GB, has about 500 tables, and is usually processing about 35 db transactions per second. The average “web transaction” (submit) is about .21 seconds. The users are happy.
    I have been the DBA on the project for many years and also largely architected much of the application framework and design components.
    I am having a debate with one of the developers about the best way to modify an existing page to meet some newly established business needs. I feel I strongly know the answer to my question, but nonetheless our discussion is at an impasse and we need your help to settle our debate.
    Scenario:
    There is an existing page in the application for client’s medical health profiles. This data is stored on MEDICAL_PROFILE. This table has 2 children: DOCTOR and MEDICATION. Of course a profile may have many doctors and many medications. The medical profile table has 30 columns on it.
    The change request is to add another tab on the page with about 40 more data fields and to expand the existing page by an additional 50 data fields.
    Option 1:
    For me, I simply wanted to add these new columns to the MEDICAL_PROFILE table, as these are logically attributes of the profile itself (a one-to-one relationship) and this would be consistent with the way we do things in our other application areas. This would put about 120 columns on the table.
    Option 2:
    The developer prefers the object-oriented approach. She is of the mindset that these new columns should go in separate, smaller tables. She agrees that they are attributes of the profile, but since they are presented on a different tab or in different boxes of the profile, they can be divided into smaller logical chunks.
    Further, she thinks that with new tables this will lend itself better to creating new and separate Java objects that can plug and play elsewhere in the application for code reuse, should these objects ever be used in other places.
    She further finds that 120 columns on the table is really too many for ease of development – you can't easily view or keep in mind a huge object with more than 250 getter/setter methods. She prefers small objects for better reusability and maintainability.
    The question:
    Who is right? What option is the best practice for an object oriented application on an RDBMS?
    Thanks.
    L and S

    LC wrote:
    The change request is to add another tab on the page with about 40 more data fields and to expand the existing page by an additional 50 data fields.
    Option 1:
    For me, I simply wanted to add these new columns to the MEDICAL_PROFILE table, as these are logically attributes of the profile itself (a one-to-one relationship) and this would be consistent with the way we do things in our other application areas. This would put about 120 columns on the table.
    Although you haven't provided evidence to defend the relationship, if you are correct, then I agree that they should all be associated with the MEDICAL_PROFILE table/entity. It's crucial that the data model be correct for many, many reasons.
    >
    Option 2:
    The developer prefers the object-oriented approach. She is of the mindset that these new columns should go in separate, smaller tables. She agrees that they are attributes of the profile, but since they are presented on a different tab or in different boxes of the profile, they can be divided into smaller logical chunks.
    Further, she thinks that with new tables this will lend itself better to creating new and separate Java objects that can plug and play elsewhere in the application for code reuse, should these objects ever be used in other places.
    She further finds that 120 columns on the table is really too many for ease of development – you can't easily view or keep in mind a huge object with more than 250 getter/setter methods. She prefers small objects for better reusability and maintainability.
    Why does there have to be a one-to-one Java object to database table mapping?
    What about using Object Relational Views to present different objects to the caller but keep the data model correct but allow for the developer's preferred approach?

  • EJB3 entity bean update, best practice

    I have an ejb3 entity bean that models a time that can be reserved in a booking system.
    I need a way to reserve the time for a specific user. Of course the reservation should not overwrite the time if it has already been reserved by another user.
    What is the best/cleanest way to provide this service?
    I have thought of the following ways.
    I could put a version field on the entity. When the user is set in the frontend, the entity bean will check that the time is not already reserved. If not, it will be sent back to the stateless session bean for persisting. If the time has been reserved by another user in the meantime, JPA will throw an exception since the version doesn't match any more. The frontend can then show the error to the user.
    I could make the frontend call a method in a stateless session bean to reserve the time. The method could take the time's primary ID and the user's primary ID and load them from persistence, then check whether the time is already reserved; if not, set the user and persist the time again. This should of course be within a transaction and possibly also use a version attribute on the entity.

    Only fields detected as persistent-dirty will be updated in the database record.
    Laurent

  • Update Hierarchy Acrobat 9.x/Update Best Practices?

    Because of the many problems with Acrobat that seem to be caused by updates, I find it is common that I have to reinstall Acrobat (we are using 9) on someone's machine.
    Our installer disc/files are Acrobat 9, missing of course all the updates.  I typically run the updates separately as I have them already downloaded, but I have been going through each step (9.12, 9.13, 9.14, etc.), which takes forever.  By my count, I have 14 updates that need to be installed one by one after I install Acrobat.  This is very time consuming.
    Is there any list of which updates I can skip because their fixes are included in the next release?  Or, better yet, is there a way to download a full installer of 9.4.4 (or whatever is current) that works with my 9.x activation keys?
    Wondering what other system admins do with the growing updates for this application.

    How would one create an installer as you mentioned?  I created a batch file to install each update silently one at a time, but it is still a pain because if you don't disable UAC, you sit there and press Continue every time; not ideal for widespread use.
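    For what it's worth, a rough PowerShell equivalent of that batch approach looks like this (a sketch only: the folder path is a placeholder, and running from an elevated prompt avoids the per-update UAC prompt; some patches may additionally want REINSTALL/REINSTALLMODE properties):

    # Apply all downloaded Acrobat 9.x .msp updates in order, silently.
    # C:\AcrobatUpdates is a placeholder folder containing the update .msp files.
    $patches = Get-ChildItem "C:\AcrobatUpdates\*.msp" | Sort-Object Name
    foreach ($msp in $patches) {
        Write-Host "Applying $($msp.Name)..."
        # msiexec /p applies a patch; /qn /norestart keeps it silent with no reboot
        Start-Process msiexec.exe -ArgumentList "/p `"$($msp.FullName)`" /qn /norestart" -Wait
    }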

  • Diary or Journal software "the best practice"

    Could anybody recommend good cataloging software, preferably with timeline functionality? I would like to record personal life and events.

    When you evaluate an app like this you need to bear in mind some important issues:
    How easy to get data in?
    How easy to get data out? (and this is a common complaint about Evernote)
    How good is the search?
    Some to consider:
    Journler
    MacJournal
    ViJournal
    Other places to search: MacUpdate or I Use This
    Regards
    TD

  • JDBC / SQL Update Best Practice

    My application updates a database table whenever a user modifies their profile.
    I have two questions on this...
    1. I've chosen to use PreparedStatements purely because it means I don't have to worry about special characters (e.g. '%"? in my data), and not because I re-use the statements. Is this a respected approach?
    2. Is it worth dynamically building the update SQL and adding parameters because in most cases only a subset of the possible fields will be modified? (e.g. avoid setting col1="test" if col1 already equals "test"). Is there an acknowledged pattern / algorithm / library that does this?
    Thanks,
    Steve

    My application updates a database table whenever a user modifies their profile.
    I have two questions on this...
    1. I've chosen to use PreparedStatements purely because it means I don't have to worry about special characters (e.g. '%"? in my data), and not because I re-use the statements. Is this a respected approach?
    Yes.
    2. Is it worth dynamically building the update SQL and adding parameters because in most cases only a subset of the possible fields will be modified? (e.g. avoid setting col1="test" if col1 already equals "test"). Is there an acknowledged pattern / algorithm / library that does this?
    Probably not. The only time this is going to matter is if there is a significantly sized field (like a blob) that often does not get updated. In that case you would probably want to exclude that. As for an acknowledged pattern or library: officially not, as far as I know.
    There are several patterns that I have used.
    1. A modified flag for each field. If the set method is called then the flag is set to true.
    2. A modified flag for each field. If the set method is called then the new value is compared to the old and the flag is set depending on the outcome.
    3. The database layer holds (or retrieves) the previous data. It compares the two, noting the fields that have changed.
    In the above note that primary keys must be dealt with. Usually the primary key is either set or not set. If not set then it is a new record. If set then it is an update.

  • Best Practice for Software Update Structure?

    Is there a best practice guide for Software Update Structure? I would like to keep this neat and organized. I would also like to have a test folder for updates with a test group. Thanks.

    Hi,
    Meanwhile, please refer to the following blog to get more inspiration.
    Managing Software Updates in Configuration Manager 2012
    http://blogs.technet.com/b/server-cloud/archive/2012/02/20/managing-software-updates-in-configuration-manager-2012.aspx
    We are trying to better understand customer views on social support experience, so your participation in this interview project would be greatly appreciated if you have time.
    Thanks for helping make community forums a great place.

  • Best practice in getting compliance rates of Software Update Deployments

    Hi,
    Would like to ask around on how others generate reports about software update deployment compliance. What do you use to get this report? Are there best practices for gathering software update compliance reports?

    There is not really a best practice on which reports to use for software update compliance. One of the reports I often use to check compliance is "Compliance 1 - Overall compliance", as it provides a good overview of a specific collection for an update group. For more details you can use "Compliance 3 - Update group (per update)".
    My Blog: http://www.petervanderwoude.nl/
    Follow me on twitter: pvanderwoude

  • Creating Software Update Packages - Best Practice?

    I am setting up our SCCM 2012 R2 environment to begin using it for Windows Updates, however I'm not 100% sure of the best method of setting it up.
    Currently my plan is to break out the deployment packages by OS, but I have read/been told that I should avoid creating too many dynamic deployment packages, as every time one changes all the computers will re-scan that package. So what I want to do is create various packages by OS and year, so I would have a package that contains all updates for Windows 7 older than January 31, 2013 (assuming the package doesn't have 1000+ updates) that are not superseded/expired. Then I would create packages for the 2014 monthly updates each month, then at the end of 2014 combine them all into one package and restart the process for 2015. Is this a sound plan or is there a better course of action?
    If this is the best-practice method, is there any way to automatically create these packages? I tried Automatic Deployment Rules, but I cannot set a year of release, only a time frame relative to the release (older than 9 months), unless I am missing something. The only way I can see of doing this is going into All Software Updates, filtering on my requirements, and then manually creating the package, but this would be less desirable, as after each year I would like to remove the superseded and expired updates without having to recreate the package.
    Mark.

    First, please learn what the different objects are -- not trying to be rude, just stating that if you don't do this, you will have fundamental issues. Packages are effectively meaningless when it comes to deploying updates. Packages are simply a way of grouping the binary files so they can be distributed to DPs and in turn made available to clients. The package an update is in is irrelevant. Also, you do not "deploy" update packages, and packages are not scanned by clients. The terminology is very important because there are implications that go along with it.
    What you are actually talking about above are software update groups. These are separate and distinct objects from update packages. Software update groups group updates (not the update binaries) into logical groups that can in turn be deployed or used for compliance reporting.
    Thus, you have two different containers that you need to be concerned about: update packages and update groups. As mentioned, the update package an update is in is pretty meaningless as long as the update is in a package that is also available to the clients that need it. Thus, the best way (IMO) to organize packages is by calendar period. Yearly or semi-annually usually works well. This is done more or less to avoid putting all the updates into a single package that could get corrupted or would be difficult to deploy to new DPs.
    As for update groups, IMO, the best way is to create a new group every month for each class of products. This typically equates to one for servers, one for workstations, and one for Office every month. Then at the end of every year (or some other timeframe), roll these monthly groups into a larger update group. Keep in mind that a single update group can have no more than 1,000 updates in it though. (There is no explicit limit on packages at all, except see my comments above about not wanting one huge package for all updates.)
    Initially populating packages (like 2009, 2010, 2011, etc) is a manual process as is populating the update groups. From then on, you can use an ADR (or really three: one for workstations, one for servers, and one for Office) that runs every month, scans
    for updates released in the past month, and creates a new update group.
    Depending upon your update process, you may have to go back and add additional deployments to each update group also, but that won't take too long. Also, always QC your update groups created by an ADR. You don't want IE11 slipping through if it will break
    your main LOB application.
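    If you would rather script the initial historical population than do it all in the console, something along these lines is possible with the ConfigurationManager module. This is a sketch only: the cmdlet and parameter names are assumptions to check against your ConfigMgr version, and the product filter, date, and group name are placeholders.

    # Assumes the ConfigurationManager module is imported and the current location is the site drive.
    # Sketch: gather non-expired, non-superseded Windows 7 updates released before 2014
    # and put them into one historical software update group.
    $updates = Get-CMSoftwareUpdate -Fast -DatePostedMax "2013-12-31" |
        Where-Object { $_.LocalizedDisplayName -like "*Windows 7*" -and
                       -not $_.IsExpired -and -not $_.IsSuperseded }

    New-CMSoftwareUpdateGroup -Name "Windows 7 - Historical (pre-2014)" `
        -SoftwareUpdateId ($updates | Select-Object -ExpandProperty CI_ID)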
    Jason | http://blog.configmgrftw.com

  • Best practice deploying additional updates

    Hello, what is the best practice concerning monthly Windows updates? We are currently adding additional Windows updates to the existing single package and updating the content on the DPs. However, this seems to produce inconsistent results.
    DPs are not finalizing content.
    Other places I have worked, we would create a separate package each month for additional updates and never had an issue. Any thoughts?
    SCCM Deployment Technician

    The documented best practices are all related to the maximum number of patches that are part of one deployment. That number should not pass 1,000.
    Remember this is a hard limit of 1,000 updates per software update group (not deployment package). It's quite legitimate to use a single deployment package.
    I usually create static historical software update groups at a point in time (e.g. November 2014). In this case it is not possible to have a single SUG for all products (Windows 7 has over 600 updates, for example); you have to split them. I deploy these updates (to pilot and production) and leave the deployments in place. Then I create an ADR which creates a new SUG each month and deploys it (to pilot and production).
    You can use a single deployment package for all the above.
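    A quick way to sanity-check existing groups against that 1,000-update limit is a short loop like the one below (again a sketch; verify the cmdlet and parameter names for your ConfigMgr version):

    # Assumes the ConfigurationManager module is imported and the current location is the site drive.
    # List every software update group with its update count, flagging groups near the limit.
    Get-CMSoftwareUpdateGroup | ForEach-Object {
        $count = (Get-CMSoftwareUpdate -Fast -UpdateGroupName $_.LocalizedDisplayName | Measure-Object).Count
        "{0}: {1} updates{2}" -f $_.LocalizedDisplayName, $count, $(if ($count -gt 900) { "  <-- approaching the 1,000 limit" })
    }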
    Gerry Hampson | Blog:
    www.gerryhampsoncm.blogspot.ie | LinkedIn:
    Gerry Hampson | Twitter:
    @gerryhampson

  • Best Practice for Expired updates cleanup in SCCM 2012 SP1 R2

    Hello,
    I am looking for assistance in finding a best practice method for dealing with expired updates in SCCM SP1 R2. I have read a blog post: http://blogs.technet.com/b/configmgrteam/archive/2012/04/12/software-update-content-cleanup-in-system-center-2012-configuration-manager.aspx
    I have been led to believe there may be a better method, or a more up-to-date best-practice process, for dealing with expired updates.
    On one hand I was hoping to keep the software update groups intact to have a history of what was deployed, but I also want to keep things clean and avoid the issues down the road that I used to have with expired updates in 2007.
    Any assistance would be greatly appreciated!
    Thanks,
    Sean

    The best idea is still to remove expired updates from software update groups. The process described in that post is still how it works. That also means that if you don't remove the expired updates from your software update groups, the expired updates will
    still show...
    To automatically remove the expired updates from a software update group, have a look at this script:
    http://www.scconfigmgr.com/2014/11/18/remove-expired-and-superseded-updates-from-a-software-update-group-with-powershell/
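    If you just want to see what such a script would remove before running it, a minimal read-only sketch is below (the group name is a placeholder, and the cmdlet names are assumptions to verify for your ConfigMgr version):

    # Assumes the ConfigurationManager module is imported and the current location is the site drive.
    # Report the expired and superseded updates still present in one update group.
    # The linked script performs the actual removal; this only lists the candidates.
    $group = "Workstations 2014"    # placeholder software update group name
    Get-CMSoftwareUpdate -Fast -UpdateGroupName $group |
        Where-Object { $_.IsExpired -or $_.IsSuperseded } |
        Select-Object ArticleID, LocalizedDisplayName, IsExpired, IsSuperseded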
    My Blog: http://www.petervanderwoude.nl/
    Follow me on twitter: pvanderwoude

  • Best practice to update to Snow Leopard

    I just placed my family pack order on Amazon.com for Snow Leopard. But this will be the first time for me doing an OS upgrade on a Mac (all 4 Macs in my house came with Leopard on them so we've only done the "software update" variety). I am a reformed PC guy so humor me!
    What is the best practice to upgrade from 10.5.8 to 10.6? On a PC, my inclination would be to back up my data, reformat the whole drive and install Windows fresh... then all my apps.. then the data. I hate that and it takes hours.
    What is the best practice way to upgrade the Mac OS?

    The best option is Erase and Install. The next best option is Archive and Install. Use the latter if you do not want to or can't erase your startup volume.
    How to Perform an Archive and Install
    An Archive and Install will NOT erase your hard drive, but you must have sufficient free space for a second OS X installation which could be from 3-9 GBs depending upon the version of OS X and selected installation options. The free space requirement is over and above normal free space requirements which should be at least 6-10 GBs. Read all the linked references carefully before proceeding.
    1. Be sure to use Disk Utility first to repair the disk before performing the Archive and Install.
    Repairing the Hard Drive and Permissions
    Boot from your OS X Installer disc. After the installer loads select your language and click on the Continue button. When the menu bar appears select Disk Utility from the Installer menu (Utilities menu for Tiger). After DU loads select your hard drive entry (mfgr.'s ID and drive size) from the left-side list. In the DU status area you will see an entry for the S.M.A.R.T. status of the hard drive. If it does not say "Verified" then the hard drive is failing or failed. (SMART status is not reported on external Firewire or USB drives.) If the drive is "Verified" then select your OS X volume from the list on the left (sub-entry below the drive entry), click on the First Aid tab, then click on the Repair Disk button. If DU reports any errors that have been fixed, then re-run Repair Disk until no errors are reported. If no errors are reported, then quit DU and return to the installer.
    2. Do not proceed with an Archive and Install if DU reports errors it cannot fix. In that case use Disk Warrior and/or TechTool Pro to repair the hard drive. If neither can repair the drive, then you will have to erase the drive and reinstall from scratch.
    3. Boot from your OS X Installer disc. After the installer loads select your language and click on the Continue button. When you reach the screen to select a destination drive click once on the destination drive then click on the Option button. Select the Archive and Install option. You have an option to preserve users and network preferences. Only select this option if you are sure you have no corrupted files in your user accounts. Otherwise leave this option unchecked. Click on the OK button and continue with the OS X Installation.
    4. Upon completion of the Archive and Install you will have a Previous System Folder in the root directory. You should retain the PSF until you are sure you do not need to manually transfer any items from the PSF to your newly installed system.
    5. After moving any items you want to keep from the PSF you should delete it. You can back it up if you prefer, but you must delete it from the hard drive.
    6. You can now download a Combo Updater directly from Apple's download site to update your new system to the desired version as well as install any security or other updates. You can also do this using Software Update.
