Expired Updates Removal - Best Practices

I was searching for best practices for removing expired updates from the environment and found this useful link:
http://blogs.technet.com/b/configmgrteam/archive/2012/04/12/software-update-content-cleanup-in-system-center-2012-configuration-manager.aspx
I have a few questions:
1) It says to remove all expired updates in one go without first removing them from the SUGs. When expired updates that are also part of an active SUG are removed, wouldn't this trigger a software update scan request for all clients in the collections those SUGs are targeted to, since the SUG membership changed?
2) How about deleting the deployments from the collections first, then removing the expired updates from only those SUGs, and proceeding one SUG at a time? Wouldn't this lower the processing load?
3) Expired updates not part of any SUG will be removed. Just to make sure: if an expired update is part of a SUG that is not targeted to any collection, will it still be removed?
4) Once an expired update is removed, what is the process for removing its content from the distribution points? What other automated tasks are triggered, such as updating the software update packages on the DPs after a change? I have been prestaging software update packages and extracting them on DPs. Since the prestaged package for any new DP still contains the older (now removed, expired) updates, will they get extracted on the new DP?
Are all the steps I mentioned above also valid for superseded updates instead of expired ones?

I am not clear on the below, Jacob:
"If you delete the deployment, all of the policy for those updates will be removed. But that removes every single update and not just the ones you removed. A bit more processing goes into removing everything."
My concern here is: suppose there are 10 SUGs, each deployed to 40 collections, and say there are 1,000 updates in total.
If I select all the expired updates and just edit their membership, and those updates happen to be spread across all 10 deployed SUGs, removing them will trigger the policy cycle for the clients in all the targeted collections.
What I was talking about is picking 1 SUG out of the 10 and removing its deployments from the 40 collections first. Once that is done, go ahead and remove the expired updates from this SUG.
This is what I need some clarification on.
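The phased approach above can be sketched with the ConfigMgr PowerShell cmdlets. This is only a sketch under assumptions: the SUG name and site code are placeholders, the cmdlet names are the ones shipped with later ConfigMgr consoles (a 2012-era console may only have Get-CMDeployment/Remove-CMDeployment), and the parameter sets should be verified with Get-Help against your version:

```powershell
# Sketch of the phased approach: remove one SUG's deployments first,
# then prune the expired updates from that SUG.
# Requires the ConfigurationManager module from a ConfigMgr console install.
Import-Module ConfigurationManager
Set-Location "ABC:"  # replace ABC with your site code

$sugName = "Workstation Updates 2013"  # hypothetical SUG name

# 1) Delete this SUG's deployments to all of its collections.
Get-CMSoftwareUpdateDeployment -Name $sugName |
    ForEach-Object { Remove-CMSoftwareUpdateDeployment -InputObject $_ -Force }

# 2) With the deployments gone, remove the expired updates from the SUG.
#    -Fast skips lazy properties and speeds up Get-CMSoftwareUpdate.
Get-CMSoftwareUpdate -UpdateGroupName $sugName -Fast |
    Where-Object { $_.IsExpired } |
    ForEach-Object {
        Remove-CMSoftwareUpdateFromGroup -SoftwareUpdateId $_.CI_ID `
            -SoftwareUpdateGroupName $sugName -Force
    }
```

Repeating this per SUG keeps each policy recalculation scoped to one SUG's collections at a time, which is the point of the phased approach.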

Similar Messages

  • Creating Software Update Packages - Best Practice?

    I am setting up our SCCM 2012 R2 environment to begin using it for Windows updates; however, I'm not 100% sure of the best method of setting it up.
    Currently my plan is to break out the deployment packages by OS, but I have read/been told that I should avoid creating too many dynamic deployment packages, as every time one changes, all the computers will re-scan that package. So what I want to do is create various packages by OS and year: I would have a package that contains all updates for Windows 7 older than January 31, 2013 (assuming the package doesn't have 1,000+ updates) that are not superseded/expired. Then I would create packages for the 2014 monthly updates each month, combine them all into one package at the end of 2014, and restart the process for 2015. Is this a sound plan, or is there a better course of action?
    If this is the best-practice method, is there any way to automatically create these packages? I tried the automatic deployment rules, but I cannot set a year of release, only a time frame of release (older than 9 months), unless I am missing something. The only way I can see doing this is going into All Software Updates, filtering on my requirements, and then manually creating the package, but this would be less desirable, as after each year I would like to remove the superseded and expired updates without having to recreate the package.
    Mark.

    First, please learn what the different objects are -- not trying to be rude, just stating that if you don't, you will have fundamental issues. Packages are effectively meaningless when it comes to deploying updates. Packages are simply a way of grouping the binary files so they can be distributed to DPs and in turn made available to clients. The package an update is in is irrelevant. Also, you do not "deploy" update packages, and packages are not scanned by clients. (The terminology is very important because there are implications that go along with it.)
    What you are actually talking about above are software update groups. These are separate and distinct objects from update packages. Software update groups group updates (not the update binaries) into logical groups that can in turn be deployed or used for compliance reporting.
    Thus, you have two different containers to be concerned about: update packages and update groups. As mentioned, the update package an update is in is pretty meaningless as long as the update is in a package that is also available to the clients that need it. Thus, the best way (IMO) to organize packages is by calendar period. Yearly or semi-annually usually works well. This is done more or less to avoid putting all the updates into a single package that could get corrupted or would be difficult to distribute to new DPs.
    As for update groups, IMO, the best way is to create a new group every month for each class of products. This typically equates to one for servers, one for workstations, and one for Office every month. Then, at the end of every year (or some other timeframe), roll these monthly groups into a larger update group. Keep in mind that a single update group can have no more than 1,000 updates in it, though. (There is no explicit limit on packages at all, except see my comments above about not wanting one huge package for all updates.)
    Initially populating packages (like 2009, 2010, 2011, etc.) is a manual process, as is populating the update groups. From then on, you can use an ADR (or really three: one for workstations, one for servers, and one for Office) that runs every month, scans for updates released in the past month, and creates a new update group.
    Depending upon your update process, you may have to go back and add additional deployments to each update group also, but that won't take too long. Also, always QC your update groups created by an ADR. You don't want IE11 slipping through if it will break your main LOB application.
    Jason | http://blog.configmgrftw.com
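A monthly ADR like the one described above can also be created from PowerShell. This is a sketch only: the rule, collection, and package names are made-up placeholders, and the parameter names shown (such as -DateReleasedOrRevised and -AddToExistingSoftwareUpdateGroup) are assumptions from the ConfigMgr cmdlet library that should be checked with Get-Help in your console version:

```powershell
# Sketch: a monthly ADR that scans for last month's updates and
# creates a new software update group on every run.
Import-Module ConfigurationManager
Set-Location "ABC:"  # replace ABC with your site code

New-CMSoftwareUpdateAutoDeploymentRule `
    -Name "ADR - Workstations - Monthly" `
    -CollectionName "All Workstations" `
    -DeploymentPackageName "Updates 2014" `
    -AddToExistingSoftwareUpdateGroup $false `
    -DateReleasedOrRevised Last1Month `
    -Superseded $false
```

Running one such rule per product class (workstations, servers, Office) matches the three-ADR pattern suggested in the reply.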

  • SCCM 2012 Update deployment best practices?

    I have recently upgraded our environment from SCCM 2007 to 2012. In switching over from WSUS to SCCM Updates, I am having to learn how the new deployments work.  I've got the majority of it working just fine.  Microsoft Updates, Adobe Updates (via
    SCUP)... etc.
    A few users have complained that the systems seem to be taking up more processing power during the update scans... I am wondering what the best practices are for this...
    I am deploying all Windows 7 updates (32 and 64 bit) to a collection with all Windows 7 computers (32 and 64 bit)
    I am deploying all Windows 8 updates (32 and 64 bit) to a collection with all Windows 8 computers (32 and 64 bit)
    I am deploying all office updates (2010, and 2013) to all computers
    I am deploying all Adobe updates to all computers... etc.
    I'm wondering if it is best to be more granular than that? For example: should I deploy Windows 7 32-bit patches to only Windows 7 32-bit machines? Should I deploy Office 2010 Updates only to computers with Office 2010?
    It's certainly easier to deploy most things to everyone and let the update scan take care of it... but I'm wondering if I'm being too general?

    I haven't considered cleaning it up yet because the server has only been active for a few months, and I only connected the bulk of our domain computers to it a few weeks ago (550 PCs).
    I checked several PCs, some that were complaining and some not. I'm not familiar with what the standard size of that file should be, but they seemed to range from 50 MB to 130 MB. My own is 130 MB, but mine is 64-bit and the others are not. Not sure if that makes a difference.
    I briefly read over that website. I'm confused: it was my impression that WSUS is no longer used and only needs to be installed so SCCM can use some of its functions for its own purposes. I thought the PCs no longer even connected to it.
    I'm running the WSUS cleanup wizard now, but I'm not sure it'll clean anything because I've never approved a single update in it. I do everything through the software update point in SCCM, and I've been removing expired and superseded updates fairly regularly.
    The wizard just finished: a few thousand updates deleted, disk space freed: 0 MB.
    I found a script here on TechNet that's supposed to clean out old updates:
    http://blogs.technet.com/b/configmgrteam/archive/2012/04/12/software-update-content-cleanup-in-system-center-2012-configuration-manager.aspx
    I haven't had the chance to run it yet.

  • Best Practice for Expired updates cleanup in SCCM 2012 SP1 R2

    Hello,
    I am looking for assistance in finding a best practice method for dealing with expired updates in SCCM SP1 R2. I have read a blog post: http://blogs.technet.com/b/configmgrteam/archive/2012/04/12/software-update-content-cleanup-in-system-center-2012-configuration-manager.aspx
    I have been led to believe there may be a better method, or a more up-to-date best-practice process, for dealing with expired updates.
    On one side I was hoping to keep a software update group intact to have a history of what was deployed, but I also want to keep things clean and avoid issues down the road with expired updates, as I used to in 2007.
    Any assistance would be greatly appreciated!
    Thanks,
    Sean

    The best idea is still to remove expired updates from software update groups. The process described in that post is still how it works. That also means that if you don't remove the expired updates from your software update groups, the expired updates will
    still show...
    To automatically remove the expired updates from a software update group, have a look at this script:
    http://www.scconfigmgr.com/2014/11/18/remove-expired-and-superseded-updates-from-a-software-update-group-with-powershell/
    My Blog: http://www.petervanderwoude.nl/
    Follow me on twitter: pvanderwoude
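The core of such a cleanup can be sketched as follows. This is an illustrative sketch, not the linked script itself: the site code is a placeholder, and the cmdlet parameter names are assumptions that may vary by ConfigMgr version:

```powershell
# Sketch: remove expired and superseded updates from every software update group.
Import-Module ConfigurationManager
Set-Location "ABC:"  # replace ABC with your site code

foreach ($sug in Get-CMSoftwareUpdateGroup) {
    # -Fast skips lazy properties and speeds up the query considerably.
    Get-CMSoftwareUpdate -UpdateGroupName $sug.LocalizedDisplayName -Fast |
        Where-Object { $_.IsExpired -or $_.IsSuperseded } |
        ForEach-Object {
            Remove-CMSoftwareUpdateFromGroup -SoftwareUpdateId $_.CI_ID `
                -SoftwareUpdateGroupName $sug.LocalizedDisplayName -Force
        }
}
```

Run on a schedule, this keeps the SUGs intact for deployment history while pruning only the dead updates, which is the compromise the question asks about.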

  • Best Practice for Retiring Superseded or Expired Updates

    If you want to clear out superseded or expired updates, do you delete them from your update package, the deployment package, or both? Or is there another best practice for this?
    Thanks,
    Bryan

    Hi Torsten, I'm reading this article because I'm actually planning to solve much the same problem.
    I create software update groups with the monthly patches and then deploy them to certain client collections.
    If a single patch in a software update group changes status to "expired", what is the effect on the whole software update group, which still has an active deployment?
    Thanks for your suggestion.

  • Best practice - updating figure numbers in a file, possibly to sub-sub-chapters

    Hi,
    I'm a newbie trying to unlearn my InDesign mindset to work in FrameMaker. What is best practice for producing figure numbers to accompany diagrams throughout a document? A quick Ctrl+F in the FrameMaker 12 Help book doesn't seem to point me in a particular direction. Do diagrams need to be inserted into a table, with one cell for the image and another cell for the figure details? I've read that I should use a letter and colon in the tag to keep it separate from other things that update, e.g. "F:" followed by the figure number descriptor. Is there anything else to be aware of, such as resetting counts for chapters, etc.?
    Some details:
    Framemaker12.
    There are currently 116 chapters (aviation subjects) to make.
    Each of these chapters will be its own book in pdf form, some of these chapters run to over 1000 pages.
    Figure numbers ideally take the form "Figure (chapter number, from the 1-116 chapters used) - figure number", e.g. "Figure 34 - 6" would be the 6th image in the book 'chapter 34'.
    The figure number has to cross reference to explaining text, possibly a few pages away.
    These figures are required to update as content is added or removed.
    The (aviation) chapter is an individual book.
    H1 is the equivalent of the sub-chapter.
    H2 is the equivalent of the sub-sub-chapter.
    H3 is used in the body copy styling, but is not a required detail of the figure number.
    I'm thinking of making sub-chapters in to individual files. These will be more manageable on their own. They will then be combined in the correct order to form the book for one of these (1 of 116) subject chapters.
    Am I on the right track?
    Many thanks.
    Gary

    Hi,
    Many thanks for the link you provided. I have implemented your recommendation into my file. I have also read somewhere about sizing anchored frames to an imported graphic using Esc + m + p.
    What confuses me, coming from InDesign, is being able to import these graphics at the size they were made (W x H in mm at 300 ppi) and keeping them anchored to a point in the text flow.
    I currently have 1- and 2-column master pages built. When I bring in a graphic, my process is:
    insert a single-cell table in the next space after the current text > drop the title below the cell > give the title a 'figure' format. When I import a graphic, it tries to fit it into the current 2-column layout, with only part of it showing in a box which is half the width of a single column!
    A current example: page 1 (2-column page) -- the text flows for 1.5 columns. At the end of the text I inserted a single-cell table, then imported an image into the cell.
    Page 2 (2-column page) has the last line of page 1's text in the top left column.
    Page 3 (2-column page) has the last 3 words of page 1 in its top left column. The right column has the table in it with part of the image showing. The image has also been distorted, as if it's trying to fit. These columns are 14 cm wide; the cell is 2 cm wide at this point. I have tried to give the cells for images 'wider' attributes using the object style designer, but with no luck.
    Ideally I'm trying to make 2 versions: 1) an anchored frame that fits a 1-column width on a 2-column page; 2) an anchored frame that fits the full width of my landscape pages (minus some border dimension) -- this full-width frame should be created on a new following page. I'd like to be able to drop in images to suit these different frames with as much automation as possible.
    I notice many tutorials tell you how to do a given area of the program, but I haven't been able to find one that discusses workflow order. Do you import all text first, then add empty graphic frames and/or tables throughout, and then import images? I'm importing text from Word, but the images are separate, having been vectored or cleaned up in Photoshop -- they won't be imported from the same Word file.
    many thanks
    many thanks

  • IOS Update Best Practices for Business Devices

    We're trying to figure out some best practices for doing iOS software updates to business devices.  Our devices are scattered across 24 hospitals and parts of two states. Going forward there might be hundreds of iOS devices at each facility.  Apple has tools for doing this in a smaller setting with a limited network, but to my knowledge, nothing (yet) for a larger implementation.  I know configurator can be used to do iOS updates.  I found this online:
    https://www.youtube.com/watch?v=6QPbZG3e-Uc
    I'm thinking the approach to take for the time being would be to have a mobile sync station setup with configurator for use at each facility.  The station would be moved throughout the facility to perform updates to the various devices.  Thought I'd see if anyone has tried this approach, or has any other ideas for dealing with device software updates.  Thanks in advance. 

    Hi Bonesaw1962,
    We've had our staff and students run iOS updates OTA via Settings -> Software Update. In the past, we put a DNS block on Apple's update servers to prevent users from updating iOS (like last fall when iOS 7 was first released). By blocking mesu.apple.com, the iPads weren't able to check for or install any iOS software updates. We waited until iOS 7.0.3 was released before we removed the block to mesu.apple.com, at which point we told users that if they wanted to update to iOS 7, they could do so OTA. We used our MDM to run reports periodically to see how many people updated to iOS 7 and how many stayed on iOS 6. As time went on, just about everyone updated on their own.
    If you go this route (depending on the number of devices you have), you may want to take a look at Caching Server 2 to help with the network load https://www.apple.com/osx/server/features/#caching-server . From Apple's website, "When a user on your network downloads new software from Apple, a copy is automatically stored on your server. So the next time other users on your network update or download that same software, they actually access it from inside the network."
    I wish there was a way for MDMs to manage iOS updates, but unfortunately Apple hasn't made this feature available to MDM providers. I've given this feedback to our Apple SE, but haven't heard if it is being considered or not. Keeping fingers crossed.
    Hope this helps. Let us know what you decide on and keep us posted on the progress. Good luck!!
    ~Joe

  • Best Practice for Software Update Structure?

    Is there a best practice guide for Software Update Structure?  Thanks.  I would like to keep this neat and organized.  I would also like to have a test folder for updates with test group.  Thanks.

    Hi,
    Meanwhile, please refer to the following blog for more inspiration:
    Managing Software Updates in Configuration Manager 2012
    http://blogs.technet.com/b/server-cloud/archive/2012/02/20/managing-software-updates-in-configuration-manager-2012.aspx

  • Best Practice for CQ Updates in complex installations (clustering, replication)?

    Hi everybody,
    we are planning a production setup of CQ 5.5 with an authoring cluster replicating to 4 publisher instances. We were wondering what the best update process looks like in a scenario like this. Let's say we need to install the latest CQ 5 update -- which we actually have to:
    Do we need to do this on every single instance, or can replication be utilized to distribute updates?
    If updating a cluster -- same question: one instance at a time? Just one, and the cluster does the rest?
    The question is really: can update packages (official or custom) be automatically distributed to multiple instances? If yes, is there a "best practice" way to do this?
    Thanks for any help on this!
    Henning

    Hi Henning,
    The CQ 5.5 service packs are distributed as CRX packages. You can replicate these packages, and on the publish instances they are unpacked and installed.
    In a cluster the situation is different: you have only 1 repository. So when you have installed the service pack on one node, the new versions of the bundles and other stuff are unpacked to the repository (most likely to /libs). Then the magic (essentially the JcrInstaller) takes care that the bundles are extracted and started.
    I would not recommend activating the service pack in a production environment, because then all publish instances will be updated at the same time. And as a restart is required, you might encounter downtime. Of course you can make it work if you play with the replication agents :-)
    cheers,
    Jörg

  • Best Practice for Updating children UIComponents in a Container?

    What is the best practice for updating children UIComponents in response to a Container being changed?  For instance, when a Canvas is resized, I would like to update all the children UIComponents height and width so the content scales properly.
    Right now I am trying to loop over the children calling InvalidateProperties, InvalidateSize, and InvalidateDisplayList on each.  I know some of the Containers such as VBox and HBox have layout managers, is there a way to leverage something like that?
    Thanks.

    You would only do that if it makes your job easier; generally speaking, it would not.
    When trying to sync sound and animation, I think most authors find it easiest to use graphic symbols, because you can see their animation when scrubbing the main timeline. With movie clips you only see their animation when testing.
    However, if you're going to use ActionScript to control some of your symbols, those symbols should be movie clips.

  • Best practice for updating ATWRT (Characteristic Value) in AUSP

    I've noticed that when we change the characteristic value of a classification, it does not update in the MM record. We have to go into MM02 for each material number that references the characteristic value and manually change it for the row in AUSP to get it updated.
    Should I just create a report to loop through and update table AUSP directly? Or is there a better way to do this via a function module or BAPI, etc.? I want to know what best practice is recommended.

    Hi Scott
    You can use a BAPI to do that.
    Check the following thread:
    BAPI to update characteristics in Material master?
    BR
    Caetano

  • Best practice for adding and removing eventListeners?

    Hi,
    What is the best practice in regards to CPU usage and performance for dealing with eventListeners when adding and removing movieclips to the stage?
    1. Add the eventListeners when the mc is instantiated and leave them be until exiting the app
    or
    2. Add and remove the eventListeners as the mc is added or removed from the stage (via an addedToStage and removedFromStage listener method)
    I would appreciate any thoughts you could share with me. Thanks!
    JP

    Thanks neh and Applauz78.
    As I understand it, the main concern with removing listeners is to conserve memory. However, I've tested memory use and this is not really an issue for my app, so I'm more concerned about whether there will be any effect on CPU (app response) if I'm constantly adding and removing a list of listeners every time a user loads an mc to the stage, compared to just leaving them active and "ready to go" when needed.
    Is there any way to measure CPU use for an AIR app on iOS?
    (It may help to know my app is small - I'm talking well under 100 active listeners total for all movieclips combined.)

  • Best practice for auto update flex web applications

    Hi all
    is there a best practice for auto update flex web applications, much in the same way AIR applications have an auto update mechanism?
    can you please point me to the right direction?
    cheers
    Yariv

    Hey drkstr
    I'm talking about a more complex mechanism that can handle updates to modules being loaded into the application, etc.
    I can always query the server for the version and prevent loading from cache when a module needs to be updated,
    but I was hoping for something easy like the AIR auto-update feature.

  • Not a question, but a suggestion on updating software and best practice (Adobe we need to create stickies for the forums)

    Lots of you are hitting a brick wall when updating, and the end result is a non-recoverable project. In a production environment with projects due, it's best that you never update while in the middle of projects. Wait until you have a day or two of downtime, then test.
    For best practice, get into the habit of saving off your projects to a new name in incremental versions, i.e. "project_name_v001", v002, etc.
    Before you close a project, save it, then save it again to a new version. This way you'll always have two copies and will not lose the entire project. Most projects crash upon opening (at least in my experience).
    At the end of the day, copy off your current project to an external drive. I have a 1 TB USB 3 drive for this purpose, but you can just as easily save off just the PPro, AE and PS files to a stick. If the video corrupts, you can always re-ingest.
    Which leads us to the next tip: never clear off your cards or wipe the tapes until the project is archived. It's always cheaper to buy more memory than to recoup lost hours of work, and your sanity.
    I've been doing this for over a decade, and the number of projects I've lost? Zero. Have I crashed? Oh, yeah. But I just open the previous version, save a new one and resume the edit.

    Ctrl + B to show the Top Menu
    View > Show Sidebar
    View > Show Status Bar
    Deactivate Search Entire Library to speed things up.
    This should make managing your iPhone the same as it was before.

  • Best Practice on Updating From a DB

    Hi Everyone,
    What are some best practices surrounding getting data from an oracle database into the cache layer when a data change event (insert, update, delete) happens? I've searched far and wide and the best answer I can find is to use Extractor/Replicator -> JMS -> Subscriber -> cache.
    Thank you for your help.

    You're right, DCN is an interesting idea, but it's again a case where the technology works on simple Hello World things but fails to deliver in the real world.
    To me DCN looks like an unfinished Oracle project: a lot of marketing but poor features. It's good mostly for student work or test labs, not for real-world complexity.
    Two reasons:
    1. DCN has severe limitations on the complexity of joins and queries if you plan to use the query change notification feature.
    2. It puts too much pressure on the database by creating a ton of events when you don't need and don't expect them, because it's too generic.
    Instead of DCN, create ordinary Oracle AQ queues, using a tiny SQL object type event as a payload, then create triggers and/or PL/SQL stored procedures which fill the event with all the primary keys you need and the unique ID of the object you need to extract.
    Triggers will filter out unnecessary updates, sending events only when you wish.
    If the conditions are too complex for triggers, you may create and enqueue events either by a call from the event-source application itself or on a scheduled basis; it's entirely up to you. Also, the technique of creating object views and using an INSTEAD OF trigger on the object view works pretty well.
    And finally, implement a listener on the Coherence side which reads the event, makes the necessary extracts, and assembles a Java object ready to be placed into the cache, based on the event ID and the set of the event's primary keys. After the Java object is assembled, you can place it into the cache.
    Don't use Hibernate, TopLink or any other relational-to-object framework; they're too slow and add excessive, unnecessary overhead to the process. Use standard Oracle database features; they're much faster and transaction-safe. Usage of these frameworks with a 10g or 11g database is obsolete and caused mainly by a lack of knowledge among Java developers about the database's features in this regard.
    In order to make the whole system fail-safe and scalable, you have to implement the listener in a fail-safe fashion, in the form of a work manager plus slave processes spawned on the other nodes. The work manager has to be auto fail-safe and auto scalable, so that if the node holding the work manager instance fails due to a cache cluster member departure, a reset, or something else, another work manager is automatically spawned on the first available node.
    Also, the work manager should spread and synchronize the work among the slave listener processes based on the current cache cluster members, automatically re-balancing and recovering work in case of a cache member join/departure.
    Out of the box, Coherence has an implementation of a work manager, but it's not fail-safe and does not provide the automatic scale-up/recovery features described above, so you have to implement your own.
    All the features I've described are implemented and happily used in a complex OLTP + workflow system backed by a big Oracle RAC cluster with a huge workload, processing millions of transactions per day.
