Best practice - Heartbeat discovery and Clear Install Flag settings (SCCM client) - SCCM 2012 SP1 CU5

Dear All,
Is there any best practice to avoid a situation like ours, where around 50 clients show a client version number in the console but do not show as installed? See the attached screenshot.
The SCCM version is 2012 SP1 CU5 (5.00.7804.1600); the server and admin console have been upgraded, and clients are being pushed to SP1 CU5.
The following settings are configured:
Heartbeat Discovery every 2nd day
Clear Install Flag maintenance task is enabled - Client Rediscovery period is set to 21 days
Client Installation settings
Software Update-based client installation is not enabled
Automatic site-wide client push installation is enabled.
Any advice is appreciated.

Hi,
I have seen a case similar to yours, where the clients were stuck in provisioning mode.
"we finally figured out that the clients were stuck in provisioning mode causing them to stop reporting. There are two registry entries we changed under [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\CCM\CcmExec]:
ProvisioningMode=false
SystemTaskExcludes=*blank*
When the clients were affected, ProvisioningMode was set to true and SystemTaskExcludes had several entries listed. After correcting those through a GPO and restarting the SMSAgentHost service the clients started reporting again."
https://social.technet.microsoft.com/Forums/en-US/6d20b5df-9f4a-47cd-bdc3-2082c1faff58/some-clients-have-suddenly-stopped-reporting?forum=configmanagerdeployment
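For convenience, here is a minimal sketch of that fix as a Python script, to be run elevated on an affected client. The registry path, value names, and service name are taken from the quote above; please test on one machine first:

    import subprocess
    import winreg

    KEY_PATH = r"SOFTWARE\Microsoft\CCM\CcmExec"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
        # Take the client out of provisioning mode and clear the task excludes.
        winreg.SetValueEx(key, "ProvisioningMode", 0, winreg.REG_SZ, "false")
        winreg.SetValueEx(key, "SystemTaskExcludes", 0, winreg.REG_SZ, "")

    # Restart the SMS Agent Host service so the change takes effect.
    subprocess.run(["net", "stop", "ccmexec"], check=False)
    subprocess.run(["net", "start", "ccmexec"], check=True)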
Best Regards,
Joyce
Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected]

Similar Messages

  • What criteria are used by the site maintenance task: Clear Install Flag?

    I want to understand what exact criteria, i.e. what fields in the database, are used by the "Clear Install Flag" Maintenance task.
    Is it just LastDDR that needs to be either NULL or older than the configured "rediscovery period"?
    Or are other fields also considered, and if so, which ones and what are the criteria?

    OK, sorry, my response is not meant to be rude, but to pull additional details out of you, since the blog post and TechNet article linked do fully address the question: the heartbeat time is the only criterion used, and IsClient is reset to 0. Also note that you did explicitly challenge the veracity of the blog post, and so that called for some clarification on the source and an understanding of why you were questioning it.
    Having said that, it's certainly possible that other fields are reset during the task as well, but why does this matter for your scenario? On that note, what is your scenario?
    The "why", motivation, and context of a question or request for information is almost always important, because it allows us to address your scenario and ensure that you have all of the information needed to properly handle it. Put another way, why does it matter if some other fields are blanked out? Knowing this may help us point you in a better direction, or even influence the product group if it's something that's never really been considered before.
    Jason | http://blog.configmgrftw.com
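    If it helps to see which records the task would touch, here is a read-only sketch against the site database. The view and column names are my assumptions based on the standard ConfigMgr views, and the server/database names are placeholders, so verify everything before relying on it:

        import pyodbc  # assumes the pyodbc package and read access to the site DB

        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=SITE-SQL;DATABASE=CM_XXX;"
            "Trusted_Connection=yes"
        )
        # Clients whose last Heartbeat Discovery DDR is older than the
        # 21-day rediscovery period -- candidates for IsClient being reset to 0.
        sql = """
            SELECT s.Name0, a.AgentTime
            FROM v_R_System s
            JOIN v_AgentDiscoveries a ON a.ResourceId = s.ResourceID
            WHERE s.Client0 = 1
              AND a.AgentName = 'Heartbeat Discovery'
              AND a.AgentTime < DATEADD(DAY, -21, GETDATE())
            ORDER BY a.AgentTime
        """
        for name, last_ddr in conn.execute(sql):
            print(name, last_ddr)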

  • Best Practice for Planning and BI

    What's the best practice for Planning and BI infrastructure: set up combined on one box, or on separate boxes? What are the factors to consider?
    Thanks in advance.

    There is no way that question could be answered with the information that has been provided.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Best Practice regarding using and implementing the pref.txt file

    Hi All,
    I would like to start a post regarding what is best practice in using and implementing the pref.txt file. We have reached a stage where we are about to go live with Discoverer Viewer, and I am interested to know what others have encountered or done with their pref.txt file and Viewer look and feel.
    If any of you have been able to add additional lines to the file, please share. ;-)
    Look forward to your replies.
    Lance

    Hi Lance
    Wow, what a question, and the simple answer is: it depends. It depends on whether you want to use the query predictor, whether you want to increase the timeouts for users and lists of values, whether you want to have the Plus Available Items and Selected Items panes displayed by default, and so on.
    Typically, most organizations go with the defaults with the exception that you might want to consider turning off the query predictor. That predictor is usually a pain in the neck and most companies turn it off, thus increasing query performance.
    Do you have a copy of my Discoverer 10g Handbook? If so, take a look at pages 785 to 799 where I discuss in detail all of the preferences and their impact.
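    For illustration only, the query predictor toggle is a one-line setting in pref.txt. The key name below (QPPEnable) is my recollection of the Discoverer query-prediction preference, so verify it against your own pref.txt, and remember to apply preferences afterwards so the change takes effect:

        # pref.txt excerpt -- 0 disables the query predictor
        QPPEnable = 0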
    I hope this helps
    Best wishes
    Michael Armstrong-Smith
    URL: http://learndiscoverer.com
    Blog: http://learndiscoverer.blogspot.com

  • How can I get best practices for SD and MM

    Please, can anybody tell me how I can get best practices for SD and MM from a functional approach?
    Thanks
    Utpal

    Hello Utpal,
    I am really surprised; in just 10 minutes you searched that site and found it not useful. Check out my previous reply: "you will not find screenshots in this, but you can add them yourself."
    You will not find a ready-made document; you need to adapt this to your requirements.
    By the way, the following link gives you some more links for people new to SAP; this should be helpful. Check out the HOW-TO basic transactions:
    New to Materials Management / Warehouse Management?
    Hope this helps.
    Regards
    Arif Mansuri

  • Best practice for Plan and actual data

    Hello, what is the best practice for plan and actual data? Should they both be in the same application or in different ones?
    Thanks.

    Hi Zack,
    It will be easier for you to maintain the data in a single application. Every application must have the Category dimension, so you can use this dimension to separate the actual and plan data.
    Hope this helps.

  • SAP Business One Best-Practice System Setup and Sizing

    SAP Business One Best-Practice System Setup and Sizing
    Get recommendations from SAP and hardware specialists on system setup and sizing
    SAP Business One is a single, affordable, and easy-to-implement solution that integrates the entire business across financials, sales, customers, and operations. With SAP Business One, small businesses can streamline their operations, get instant and complete information, and accelerate profitable growth. SAP Business One is designed for companies with fewer than 100 employees, less than $75 million in annual revenue, and between 1 and 30 system users, referred to as the SAP Business One sweet spot. The sweet spot covers various industries and micro-verticals which have different requirements when it comes to the use of SAP Business One.
    One of the initial steps during the installation and implementation of SAP Business One is the definition of the system landscape and architecture. Numerous factors affect the system landscape that needs to be created to efficiently run SAP Business One.
    The SAP Business One Best-Practice System Setup and Sizing Wiki (http://wiki.sdn.sap.com/wiki/display/B1/BestPractiseSystemSetupand+Sizing) provides recommendations on how to size and configure the system landscape and architecture for SAP Business One based on best practices.

    For such high volume licenses, you may contact the SAP Local Product Experts.
    You may get their contact info from this site
    https://websmp209.sap-ag.de/~sapidb/011000358700001455542004#India

  • DNS best practices for hub and spoke AD Architecture?

    I have an Active Directory Forest with a forest root such as joe.co and the root domain of the same name, and root DNS servers (Domain Controllers) dns1.joe.co and dns2.joe.co
    I have child domains with names in the form region1.joe.co, region2.joe.co and so on, with DNS servers dns1.region1.joe.co and so on.
    Each region has distributed offices that may have a DC in them, with servers named in the form dns1branch1.region1.joe.co.
    Overall, my DNS tests out okay, but I want to get the general guidelines for setting up new DCs correct.
    Configuration:
    The root DC/DNS server dns1.joe.co's adapter settings point DNS to itself, then to the two other root domain DNS/DCs, dns2.joe.co and dns3.joe.co.
    The other root domain DNS/DCs' adapter settings point to the root server dns1.joe.co, then to themselves (e.g. dns2.joe.co), and then to 127.0.0.1.
    The regional domains have a root DNS server dns1.region1.joe.co whose adapter settings point to the root server dns1.joe.co and then to itself.
    The additional regional domain DNS/DCs' adapter settings point to dns1.region1.joe.co, then to themselves, then to dns1.joe.co.
    What would you do to correct this topology (and settings) or improve it?
    Thanks in advance
    just david

    Hi,
    According to your description, my understanding is that you need suggestions about your DNS topology.
    In theory, there is no obvious problem. Besides namespace and server planning for DNS, zone design also needs consideration. If you place a DNS server in each domain and subdomain, confirm whether the DNS traffic will affect network performance.
    Besides, fault tolerance and security are also necessary.
    We usually recommend that:
    A DC with DNS should point to another DNS server as primary and to itself as secondary or tertiary. It should not point to itself as primary, due to various DNS islanding and performance issues that can occur. And when referencing a DNS server on itself, a DNS client should always use a loopback address and not a real IP address. For detailed information, you may reference:
    What is Microsoft's best practice for where and how many DNS servers exist? What about for configuring DNS client settings on DC’s and members?
    http://blogs.technet.com/b/askds/archive/2010/07/17/friday-mail-sack-saturday-edition.aspx#dnsbest
    How To Split and Migrate Child Domain DNS Records To a Dedicated DNS Zone
    http://blogs.technet.com/b/askpfeplat/archive/2013/12/02/how-to-split-and-migrate-child-domain-dns-records-to-a-dedicated-dns-zone.aspx
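    To make that concrete, here is a small Python sketch that applies the recommended ordering with netsh on a DC. The interface name "Ethernet" and the partner address 10.0.0.11 are placeholders, not values from your environment:

        import subprocess

        NIC = "Ethernet"        # placeholder interface name
        PARTNER = "10.0.0.11"   # another DC/DNS server, used as primary

        # Point primary DNS at a partner DC rather than at this server itself...
        subprocess.run(["netsh", "interface", "ipv4", "set", "dnsservers",
                        f"name={NIC}", "source=static",
                        f"address={PARTNER}", "register=primary"], check=True)
        # ...and add the loopback address (not the real IP) as the secondary.
        subprocess.run(["netsh", "interface", "ipv4", "add", "dnsservers",
                        f"name={NIC}", "address=127.0.0.1", "index=2"], check=True)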
    Best Regards,
    Eve Wang
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]

  • What are best practice for packaging and deploying j2EE apps to iAS?

    We've been running a set of J2EE applications on a pair of iAS SP1b for about a year and it has been quite stable.
    Recently, however, we have had a number of LDAP issues, particularly when registering and unregistering applications (registering ear files sometimes fails the first time but may work the second time). We've also noticed, very occasionally, that old versions of classes sometimes find their way onto our machines.
    What is considered to be best practice in terms of packaging and deployment, specifically:
    1) Packaging: using the deployTool that comes with iAS 6 SP1b to package is a big manual task, especially when you have 200+ JSP files. Are people out there using this, or are they scripting it with a build tool such as Ant?
    2) Deploying an existing application to multiple iAS instances: are you unregistering the old application and then re-registering the new application? Are you shutting down iAS while doing the deployment?
    3) Deploying ear files can take 5 to 10 minutes; is this normal?
    4) In a clustered scenario where HTTPSession is shared, what are the consequences of deployments for data stored in the session?
    Thanks in advance for your replies.
    Owen

    You may want to consider upgrading your application server environment to a newer service pack. There are numerous enhancements involving the deployment tool and the runtime layout of your application that make clear where your application is loading its files from.
    If you have a long-running application server environment, with lots of deployments under your belt, you might start to notice slowdowns in deployment and in KJS start time. Generally this is due to garbage accumulating in your iAS registry.
    You can do several things to resolve this. The most complete solution is to reinstall the application server. This will guarantee a clean LDAP registry. Of course, you've got to re-establish your configurations and redeploy your applications. When done, back up your application server install space with the application server and directory server off. You can use this backup to return to a known configuration at some future time.
    For the second method: BE CAREFUL - BACK UP FIRST
    There is a more exhaustive solution that involves examining your deployed components to determine the active GUIDs. You then search the NameTrans section of the registry for Applogic Servlet * and Bean * entries that represent previously deployed components but are not represented in the set of active GUIDs. Record these older GUIDs and remove them from ClassImp and ClassDef. Finally, remove the older entries from NameTrans.
    Best practices for deployment depend on your particular environmental needs. Many people utilize Ant as a build tool. In later versions of the application server, complete Ant scripts are included that address compiling, assembly, and deployment. Ant 1.4 includes iAS-specific targets as well as general J2EE targets, and there are iAS-specific targets that can be utilized with the 1.3 version. Specialized build targets are not required, however, to deploy to iAS.
    Newer versions of the deployment tool allow you to specify that JSPs are not to be registered automatically. This can be significant if deployment times lag. Registered JSPs, however, benefit more fully from the services that iAS offers.
    2) In general it is better to undeploy and then redeploy. However, if you know that you're not changing GUIDs, not recreating an existing application with new GUIDs, and not removing registered components, you may skip the undeploy phase.
    If you shut down the KJS processes during deployment, you can eliminate some additional workload on the LDAP server, which really gets pounded during deployment. This is because the KJS processes detect changes and reload the registry to repopulate their caches. This can happen many times during a deployment and provides no benefit.
    3) Deploying can be a lengthy process. There have been improvements in that performance from service pack to service pack, but unfortunately you won't see dramatic drops in deployment times.
    One thing you can do to reduce deployment times is to understand the type of deployment. If you have not manipulated your deployment descriptors in any way, then there is no need to deploy: simply drop your newer bits into the runtime space of the application server. In later service packs this means exploding the package (ear, war, or jar) into the appropriate subdirectory of the APPS directory.
    4) If you've changed the classes of objects that have been placed in HTTPSession, you may find that you can no longer use those objects. For that reason, it is suggested that objects placed in the session be kept as simple as possible in order to minimize this effect. In general, however, it is not a good idea to change a web application during the life span of a session.

  • Best practice: parameters, reports and control flow

    I am developing an application that has a number of different reports, each of which has a combination of similar parameter LOVs.
    I defined the LOVs on page 0, with a corresponding _DISPLAY hidden field for each one, and each LOV set to display conditionally when its _DISPLAY item is Y. I have a page process on each page, with a standard block setting the appropriate _DISPLAY items to Y or N depending on whether they are needed on that page or not.
    This is becoming difficult to maintain, and I would prefer to have a single block of code that is called when entering each page for the first time, where a CASE statement could switch the various LOVs on and off for each page by setting their corresponding _DISPLAY hidden items.
    I cannot find a clear answer to this in the forums, and I am not very clear on whether it is possible, or whether it is best practice.
    If anyone has any advice, please let me know!!
    Thanks
    Mark

    Hi Mark,
    One of the first points of best practice in Apex is that any non-trivial chunks of PL/SQL code should be centralised in the database as stored code.
    In your case, your generic code would check which page is being loaded and, through a CASE statement, selectively set values to display the required fields for that page. One problem with this is that you would still need to modify the procedure every time you add a new page.
    An alternative would be to do away with the _DISPLAY items and set each LOV item's Condition type to:
    Current Page is Contained Within Expression 1 (Comma delimited list of pages)
    You then only need to list the pages the item is available on, as a comma-separated list, in Expression 1.
    You could go even further by storing the display logic for each LOV item in tables in the database and make this completely dynamic, but this may be seen as overkill.
    Regards
    Andre

  • Kernel: PANIC! -- best practice for backup and recovery when modifying system?

    I installed NVIDIA drivers on my OL6.6 system at home, and something went bad with one of the libraries. On reboot, the kernel would panic and I couldn't get back into the system to fix anything. I ended up re-installing the OS to recover my system.
    What would be some best practices for backing up the system when making a change and then recovering if this happens again?
    Would LVM snapshots be a good option? Can I restore a snapshot from a rescue boot?
    EX: File system snapshots with LVM | Ars Technica -- scroll down to the section discussing LVM.
    Any pointers to documentation would be welcome as well. I'm just not sure what to do to revert the kernel or the system when an installation goes bad like this.
    Thanks for your attention.

    There is a common misconception here: a snapshot is not a backup. A snapshot and the original it was taken from initially share the same data blocks. An LVM snapshot is a general-purpose solution which can be used, for example, to quickly create a snapshot prior to a system upgrade; if you are satisfied with the result, you then delete the snapshot.
    The advantage of a snapshot is that it can be used on a live filesystem or volume while changes are written to the snapshot volume; hence it is called "copy on write" (COW), or copy on change if you want. This is necessary for system integrity: it provides a consistent view of all data at a certain point in time while still allowing changes to happen, for example in order to perform a filesystem backup. A snapshot is no substitute for disaster recovery in case you lose your storage media. A snapshot only takes seconds, and initially it does not copy or back up any data until data changes. It is therefore important to delete the snapshot once it is no longer required, in order to prevent duplication of data and to restore filesystem performance.
    LVM was never a great thing under Linux and can cause serious I/O performance bottlenecks. If snapshot or COW technology suits your purpose, I suggest you look into Btrfs, which is a modern filesystem built into the latest Oracle UEK kernel. Btrfs employs the idea of subvolumes and is much more efficient than LVM because it can operate on files or directories, while LVM operates on the whole logical volume.
    Keep in mind, however, that you cannot use LVM or Btrfs for the boot partition, because the GRUB boot loader, which loads the Linux kernel, cannot deal with LVM or Btrfs before the Linux kernel is loaded (a catch-22).
    I think the following is an interesting and fun-to-read introduction explaining the basic concepts:
    http://events.linuxfoundation.org/sites/events/files/slides/Btrfs_1.pdf
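    If you do go the LVM snapshot route despite the reservations above, a minimal sketch of the snapshot-before-change workflow might look like this (a Python wrapper around the standard LVM commands; the vg0/root names and the 5G snapshot size are assumptions for illustration):

        #!/usr/bin/env python3
        """Take an LVM snapshot before a risky change, then either drop it
        (success) or merge it back (rollback). Run as root."""
        import subprocess

        VG, LV, SNAP = "vg0", "root", "root_presnap"

        def run(cmd):
            print("+", " ".join(cmd))
            subprocess.run(cmd, check=True)

        # Create a copy-on-write snapshot; 5G of change tracking is a guess.
        run(["lvcreate", "--snapshot", "--size", "5G",
             "--name", SNAP, f"{VG}/{LV}"])

        # ... install the drivers / perform the upgrade here ...

        rollback = False  # set True if the change went bad
        if rollback:
            # Merge the snapshot back into the origin; for an in-use root
            # filesystem the merge is deferred until the next activation (reboot).
            run(["lvconvert", "--merge", f"{VG}/{SNAP}"])
        else:
            # Happy path: discard the snapshot to free the COW space.
            run(["lvremove", "-y", f"{VG}/{SNAP}"])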

  • PKGBUILD best practice for autotools and missing required files

    I am trying to update one of my packages in the AUR. Upstream uses the GNU automake/autoconf tools, and this has worked just fine for previous versions. This time around, the download from upstream is missing several of the mandatory files required by the autotools. I am trying to figure out the best way to deal with this.
    1. I can just create the files myself, distribute them with the tarball, and push them into the src directory prior to invoking autoconf,
    or
    2. I can use the --add-missing flag, but that requires running autoconf multiple times (unless I am confused).
    What is the best practice when files such as NEWS and README are missing?
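    For what it's worth, option 2 is usually handled inside the PKGBUILD itself. Here is a sketch of a prepare() that creates the placeholder files and lets autoreconf chain the multiple autoconf/automake runs for you (untested, and it assumes the conventional $srcdir layout):

        prepare() {
          cd "$srcdir/$pkgname-$pkgver"
          # automake's default GNU strictness insists these files exist;
          # empty placeholders are enough to satisfy it
          touch NEWS README AUTHORS ChangeLog
          # regenerate configure, Makefile.in, and the missing helper scripts
          autoreconf --force --install
        }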

    I highly recommend you review Brad Hedlund's videos regarding UCS networking here:
    http://bradhedlund.com/2010/06/22/cisco-ucs-networking-best-practices/
    You may want to focus on Part 10 in particular, as this talks about running UCS in end-host mode without vPC or VSS.
    Regards,
    Matt

  • New Best Practice for Titles and Lower Thirds?

    Hi everyone,
    In the days of overscanned CRT television broadcasts, the classic Title Safe restrictions and the use of larger, thicker fonts made a lot of sense. These practices are described in numerous references and forum posts.
    Nowadays, much video content will never be broadcast, CRTs are disappearing, and it's easy to post HD video on places like YouTube and Vimeo. As a result, we often see lower thirds and other text really close to the edge of the frame, as well as widespread use of thin (not bold) fonts. Even major broadcast networks are going in this direction.
    So my question is, what are the new standards? How would you define contemporary best practice?
    Thanks for your thoughtful replies!
    Les

    stuckfootage wrote:
    I wish I had a basket of green stars...
    Quoted for stonedposting.
    Bzzzz, crackle..."Discovery One, what is that object?
    Bzz bzz."Not sure, Houston, it looks like a basket...." bzzz
    Crackle...."A bas...zzz.. ket??"
    Bzzz. "My God, It's full of stars!" bzz...crackle.
    Peeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeep!

  • Best Practice for SUP and WSUS Installation on Same Server

    Hi Folks,
    I have a question. I am in the process of deploying SCCM 2012 R2, and I was in the process of deploying a Software Update Point on SCCM with an existing WSUS server installed on a separate server from SCCM.
    A debate has started with a colleague who says that using a remote WSUS server is recommended by Microsoft for scalability and security: WSUS downloads the updates from Microsoft, and SCCM works as a downstream server to fetch updates from the WSUS server.
    But as I understand it, it is recommended to install WSUS on the same server where SCCM is installed; actually, it is recommended to install WSUS on a site system, and you can use the same SCCM server to deploy WSUS.
    Please advise me on the best practices for deploying SCCM and WSUS. What does Microsoft say: should WSUS be installed on the same server as SCCM, or on a separate server?
    Awaiting your advice. :)
    Regards, Owais

    Hi Don,
    thanks for the information; another quick one...
    is the above-mentioned configuration I did correct in terms of planning and best practices?
    I agree with Jorgen, it's ok to have WSUS/SUP on the same server as your site server, or you can have WSUS/SUP on a dedicated server if you wish.
    The "best practice" is whatever suits your environment, and is a supported-by-MS way of doing it.
    One thing to note is that if WSUS ever becomes "corrupt", it can be difficult to repair, and sometimes it's simplest to rebuild the WSUS Windows OS. If this is on your site server, that's a big deal.
    Sometimes, WSUS goes wrong (not because of ConfigMgr)..
    Note that if you have a very large estate, or multiple primary site servers, you might have a CAS, and you would need a SUP on the CAS. (this is not a recommendation for a CAS, just to be aware)
    Don
    (Please take a moment to "Vote as Helpful" and/or "Mark as Answer", where applicable.
    This helps the community, keeps the forums tidy, and recognises useful contributions. Thanks!)

  • SAP BO Dashboards 4.1 best practice on layout and components

    Dear SCN,
    I have a requirement to create a BO 4.1 dashboard with data and visualization based on an Excel sheet which is currently in use as a management dashboard. The current Excel dashboard has more than 100 KPIs in one view, which is readable only if you put it on a slide and view it full screen by running a slideshow.
    Question 1:
    1. Since the suggested size of the Xcelsius canvas is not more than 1024 x 768, so that it is viewable without a scroll bar in BI launchpad, in any browser, or in PDF, I am trying to confirm in this forum that 1024 x 768 is the recommended maximum canvas size for a clear view in any browser/BI launchpad. Please confirm, as it will help me design the KPIs and their visualization.
    Question 2:
    1. I am using a BICS connection and accessing the source data from BW. Because the number of KPIs is large, spanning about 10 cubes and 40 queries across different modules, I would like to know the recommended number of query/cube connections for a dashboard using BICS connectivity that does not affect performance.
    2. For the same dashboard using a BICS connection, what is the ideal number of components (charts, scorecards, spreadsheet tables) recommended to ensure good performance?
    I appreciate your answers, which can help finalize the design for this dashboard, whose data and visualization requirements are very high compared to normal dashboards.
    Thanks and Regards
    Jana

    Hi Suman,
    Thanks for your answers. The answers and links you attached are helpful, and they answered my questions related to canvas size and connections.
    I am expecting some benchmark numbers, as per best practice, for the number of components to use to ensure that the dashboard loads well. As an increase in the number of components increases both the size of the dashboard and the time needed to load data into the components, I am looking for a number based on best practice, considering the points below.
    1. When I say the number of components, I am not counting components like labels, text boxes, combo boxes, or list boxes. I am counting the components used for visualization and interactive drill-down on top of the visualized charts (e.g. column charts, pie charts, gauges).
    2. I am not going to use many calculations/formulas in my dashboard, as the values and structure are almost the same as the BEx query.
    3. I have around 10 to 12 connections.
    4. The data sets are not more than 900 rows in total. For any control, we will be binding only 100 rows at most, as the data for the KPIs is summarized at the year/month level in the BW layer.
    Since there are many KPIs, there are many visualizations, and we can't reuse the visualization charts for most of the KPIs. Currently I am ending up with ~35 charts/gauges, along with other label and selection controls, which I will use to show 100 KPIs with unique visualization requirements, and I am going with a tab-wise layout, made more dynamic to accommodate and logically separate them.
    I hope these details give a clear picture of why I am looking for a benchmark on the number of components.
    I appreciate your help!
    Thanks and Regards
    Jana
