Best practices for mass reimaging? Unmanaged duplicate clients with mismatched Resource IDs (2012 R2)

We reimage about 5,000-8,000 clients each summer. Last summer we were still at RTM, but have since moved to R2. We build images (Win7 SP1) in vSphere and capture via a capture media task sequence. Last year's image didn't have the SCCM client uninstalled prior to capture, so we had random issues with computers going to unmanaged status and some showing up without a client cert. To avoid this, we stripped the client out prior to capture this year.

I believe this is also how we handled the reimage process last year, but I am not positive. We were also dealing with a lot of new laptops last summer, whereas this summer the machines obviously have existing records. Since SCCM replaces the wired MAC address with the wireless MAC on laptops, we can't just toss these into an OSD collection because they won't pick up the OSD advertisement / PXE. (Is there any workaround for this?) Because of that, we are blowing away each client's AD account and DDR in SCCM, then doing a mass import of hostname and wired MAC into SCCM and dumping them into the appropriate OSD collection. They image fine unless they happen to pick up last year's PXE deployment, which first has to be cleared, or unless they had a motherboard replaced and our MAC database didn't get updated.

We did the mass import a week ago, and the manual entries are listed with hostname, MAC, and an entry date of 7/9/2014. This week we started imaging. Almost immediately after reimaging (at which point the AD record is created when the machine rejoins the domain), we see a second account show up in SCCM from AD Discovery with dates of 7/14/2014 and 7/15/2014. Neither account is managed or shows that the SCCM client is installed, but each shows a site code.

The manual entry lists an agent name, agent site, wired MAC, name, NetBIOS name, a Resource ID of 167xxxxx, assigned site, and CCM GUID.

The AD Discovery record shows agent name, agent site, domain, IPv4 and IPv6 addresses, name, NetBIOS name, primary group ID, Resource ID of 20971xxxxx, resource name and type, SID, assigned sites, container name, UAC, etc.

Why won't these records merge and show up as properly managed? I am not yet sure whether they fix themselves after one record or the other is deleted. Obviously this process isn't working well, and it removes the clients from their direct-membership collections and AD groups. I'd think that all of this could be avoided if the wired MAC simply persisted in the DDR.
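
(For the mass-import step, a minimal scripted sketch using the ConfigMgr console's PowerShell module; the cmdlet and parameter names are assumed from the 2012 R2 module, and the CSV file, site code, and collection name below are hypothetical:)

# Load the ConfigMgr module from a machine with the admin console installed,
# then switch to the site's PSDrive (replace ABC with your site code)
Import-Module "$($env:SMS_ADMIN_UI_PATH)\..\ConfigurationManager.psd1"
Set-Location ABC:

# reimage.csv (hypothetical) holds two columns: Hostname, WiredMac
Import-Csv .\reimage.csv | ForEach-Object {
    Import-CMComputerInformation -ComputerName $_.Hostname `
        -MacAddress $_.WiredMac `
        -CollectionName "OSD - Summer Reimage"
}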

"Last year's image didn't have the SCCM client uninstalled prior to capture, so we had random issues with computers going to unmanaged status and some showing up without a client cert. To avoid this, we stripped the client out prior to capture this year."
There is no reason or need to do this. There is no correlation between the two as long as the client agent was properly prepared (which does happen with capture media, although you should strongly consider using a build and capture task sequence). Clients are perfectly capable of living within an image -- I do it all the time and it is a common practice.
"Since SCCM replaces the wired MAC address with the wireless MAC on laptops, we can't just toss these into an OSD collection because they won't pick up the OSD advertisement / PXE. (Is there any workaround for this?)"
This is not correct, so the workaround is unnecessary: ConfigMgr will use the MAC address *or* the SMBIOS GUID of the system to determine targeting during OSD. The SMBIOS GUID is an immutable unique ID set by the OEM, and it is also part of the resource record in ConfigMgr.
Jason | http://blog.configmgrftw.com
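
(If you want to key the import off the SMBIOS GUID rather than the wired MAC, it can be read from WMI on the machine beforehand; this is just a one-liner sketch, not tied to any particular import tool:)

# ConfigMgr's SMBIOS GUID is the UUID exposed by Win32_ComputerSystemProduct
Get-WmiObject -Class Win32_ComputerSystemProduct | Select-Object -ExpandProperty UUID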

Similar Messages

  • Best Practice for Mass Deleting Transactions

    Hi Gurus
    Can you please guide me on this - we need to mass delete leads from the system. I can use the CRM_ORDER_DELETE FM, but I want to know if there are any best practices to follow, or anything else that I should consider before deleting these transactions from the system.
    We have our archiving policy under discussion, which may take some time, but due to the large volume of redundant data we have some performance issues. For example, when searching for leads and using ACE, the system goes through all the lead data.
    That is the reason we are planning to delete those old records. My concern is whether using CRM_ORDER_DELETE would clear all the tables for those deleted transactions, and whether there are any best practices to follow.
    Thanks in Advance.
    Regards.
    -MP
    Edited by: Mohanpreet Singh on Apr 15, 2010 5:18 PM

    Hi,
    Please go through the AppModel application which is available at: http://developers.sun.com/prodtech/javatools/jscreator/reference/codesamples/sampleapps.html
    The OnePage Table Based example shows exactly how to delete multiple rows from a datatable...
    Hope this helps.
    Thanks,
    RK.

  • Best practice for auditing a SP 2010 BCS scenario with a SQL Server pooled connection.

    I'm using SP 2010 BCS to connect to a SQL Server db. For this, I've used the Secure Store Service (SSS) and am passing SQL credentials to take advantage of pooled connections. I'd like to pass the user context (user ID/user name) to the database so that I can do auditing, such as created by and last modified by. What is the best approach to make this work while still using a pooled connection?
    I've thought about modifying the external list forms so that I capture and pass down user context info; however, I'd rather not rely on this in case the external content type is consumed by another consumer such as an Office product, etc.

    Hi,
    There is no good way to do it.
    You can set the created by and modified by columns as input parameters for the create and update operations and use AJAX to set the values for both columns.
    http://troyscott.ca/2010/07/17/creating-an-update-operation-for-an-external-content-type/
    Regards,
    Seven

  • Need Best Practice for creating BE in ZFS boot environment with zones

    Good Afternoon -
    I have a SPARC system with a ZFS root file system and zones. I need to create a BE whenever we do patching or upgrades to the O/S. I have run into issues when testing booting off of the new BE where the zones did not show up. I tried to go back to the original BE by running luactivate on it and received errors.
    I did a fresh install of the O/S from CD-ROM on a ZFS filesystem, then ran the following commands to create the zones, create the BE, activate it, and boot off of it. Please tell me if any steps were left out or if the sequence was incorrect.
    # zfs create -o canmount=noauto rpool/ROOT/S10be/zones
    # zfs mount rpool/ROOT/S10be/zones
    # zfs create -o canmount=noauto rpool/ROOT/s10be/zones/z1
    # zfs create -o canmount=noauto rpool/ROOT/s10be/zones/z2
    # zfs mount rpool/ROOT/s10be/zones/z1
    # zfs mount rpool/ROOT/s10be/zones/z2
    # chmod 700 /zones/z1
    # chmod 700 /zones/z2
    # zonecfg -z z1
    z1: No such zone configured
    Use 'create' to begin configuring a new zone.
    zonecfg:z1> create
    zonecfg:z1> set zonepath=/zones/z1
    zonecfg:z1> verify
    zonecfg:z1> commit
    zonecfg:z1> exit
    # zonecfg -z z2
    z2: No such zone configured
    Use 'create' to begin configuring a new zone.
    zonecfg:z2> create
    zonecfg:z2> set zonepath=/zones/z2
    zonecfg:z2> verify
    zonecfg:z2> commit
    zonecfg:z2> exit
    # zoneadm -z z1 install
    # zoneadm -z z2 install
    # zlogin -C -e 9. z1
    # zlogin -C -e 9. z2
    Output from zoneadm list -v:
    # zoneadm list -v
    ID NAME STATUS PATH BRAND IP
    0 global running / native shared
    2 z1 running /zones/z1 native shared
    4 z2 running /zones/z2 native shared
    Now for the BE create:
    # lucreate -n newBE
    # zfs list
    rpool/ROOT/newBE 349K 56.7G 5.48G /.alt.tmp.b-vEe.mnt <--showed this same type mount for all f/s
    # zfs inherit -r mountpoint rpool/ROOT/newBE
    # zfs set mountpoint=/ rpool/ROOT/newBE
    # zfs inherit -r mountpoint rpool/ROOT/newBE/var
    # zfs set mountpoint=/var rpool/ROOT/newBE/var
    # zfs inherit -r mountpoint rpool/ROOT/newBE/zones
    # zfs set mountpoint=/zones rpool/ROOT/newBE/zones
    and did it for the zones too.
    When I ran luactivate newBE it came up with errors, so I changed the mountpoints again and rebooted.
    Once it came up I ran luactivate newBE again and it completed successfully. Ran lustatus and got:
    # lustatus
    Boot Environment Is Active Active Can Copy
    Name Complete Now On Reboot Delete Status
    s10s_u8wos_08a yes yes no no -
    newBE yes no yes no -
    Ran init 0, then at the ok prompt ran boot -L, picked item two (newBE), and booted.
    It came up, but df showed no zones, zfs list showed no zones, and when I cd into /zones there is nothing there.
    Please help!
    thanks julie

    The issue here is that lucreate adds entries to the vfstab in newBE for the zfs filesystems of the zones. You need to lumount newBE /mnt, then edit /mnt/etc/vfstab and remove the entries for any zfs filesystems. Then, if you luumount it, you can continue. It's my understanding that this has been reported to Sun, and the fix is in the next release of Solaris.
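    A condensed sketch of that sequence (the BE name and mount point are taken from the reply above):
    # lumount newBE /mnt
    (edit /mnt/etc/vfstab and remove or comment out the zfs lines that lucreate added for the zone filesystems)
    # luumount newBE
    # luactivate newBE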

  • Need best practice to deploy similar content to multiple clients with specific skins/images

    Hello, my name is Bernadette. At a previous employer I created WebHelp using RoboHelp 8 for several years, then for my last two years there I transitioned to MadCap Flare, again creating WebHelp. I have been with a new employer for two years now, using RoboHelp again; however, this company still has us creating HTML Help only.
    My manager has just assigned me to create and launch our first WebHelp project. Currently we have RoboHelp 8, with a P.O. in process to purchase version 9, hopefully very soon. However, I cannot wait and have just started the project using version 8. Since it's been a few years, I am a tad rusty in my use of RoboHelp/WebHelp.
    For this first project, I have been asked to customize the skin for EACH client that transitions to the new proprietary application. My question is: since the contents of the completed WebHelp project will likely be identical (with the exception of the customized customer-specific skins, logos, and window images), is there a creative, timesaving method for rolling out primarily the same content to each customer WITHOUT the need for me to maintain numerous individual WebHelp projects? I appreciate any input or suggestions. Thank you!

    Hello Peter and Jeff, there IS the potential for eventually reaching a HUGE number of customers, but I would expect that to take at least a good number of months or more to get to that stage. As for Peter's question about variables, I have used conditional build tags in the past, which I know are a kind of variable, but I am not certain what else might be included in the category.
    I was not aware that it is possible, within a single project, to create and maintain different versions of the same layout type, namely WebHelp, by creating duplicates for each customer. This could be challenging, but fun! So, as I get further into this, I assume you are both "on call" for additional questions? Thank you both so much for your rapid responses.

  • Suggest the Best Practice for Procurement of Commodities like crude, copper

    Dear Gurus,
    Please suggest the best practice for the following business process.
    My client has a procurement need for a commodity whose prices fluctuate, say crude oil. The prices change every day. The client would like to pay the vendor on the day of goods receipt, but it is possible that the price on the GR day is much higher. How can we control this kind of procurement? Presently I have activated the price variance through invoice posting, but this is not working.
    Can you suggest a best practice for the procurement of commodities?
    Thank you for your consistent support.
    Regards
    Vinod Kakade
    Edited by: vinodkakade on Jul 14, 2011 2:34 PM

    Hi Vinod,
    If you know the price by the time the GR is to happen, you can ask the vendor to send the confirmations just prior to that with the correct price.
    Please refer to this link:
    Change in PO Price after goods receipts and goods issue
    though the thread is marked unanswered.
    Regards
    Shailesh

  • Best Practices for ASA 5500 Device Monitoring

    I have looked high and low and am unable to find anything on this topic. I am hoping that somebody here may be able to share some insight into what are considered the best practices for monitoring ASAs, specifically the 5510 with the Sec+ license.
    My current monitoring application keeps reporting issues with outbound interface buffers being too high, but there are no performance issues and I believe the thresholds are just set absurdly low.
    Thank you in advance for any assistance.

    Hi James,
    You probably won't be able to find any all-encompassing documentation for these types of best practices that cover all scenarios. The better method would be to define exactly what items you'd like to monitor and we can provide some guidance on how to best get that working for you.
    -Mike

  • What is the best practice for inserting (unique) rows into a table with a key-column constraint when the source may contain duplicate (already existing) rows?

    My final data table has a unique key constraint on two key columns. I insert data into this table from a daily capture table (which also contains the two columns that make up the key in the final data table, but they are not constrained (not unique) in the daily capture table). I don't want to insert rows from daily capture which already exist in the final data table (based on the two key columns).
    Currently, what I do is select * into a #temp table from the join of the daily capture and final data tables on these two key columns. Then I delete the rows in the daily capture table which match the #temp table. Then I insert the remaining rows from daily capture into the final data table.
    Would it be possible to simplify this process by using an INSTEAD OF trigger on the final table and just insert directly from the daily capture table? How would that look?
    What is the best practice for inserting unique (new) rows and ignoring duplicate rows (rows that already exist in both the daily capture and final data tables) in my particular operation?
    Rich P

    Please follow basic netiquette and post the DDL we need to answer this. Follow industry and ANSI/ISO standards in your data. You should follow ISO-11179 rules for naming data elements. You should follow ISO-8601 rules for displaying temporal data. We need to know the data types, keys, and constraints on the table. Avoid dialect in favor of ANSI/ISO Standard SQL. And you need to read and download the PDF for:
    https://www.simple-talk.com/books/sql-books/119-sql-code-smells/
    >> My final data table has a unique key constraint on two key columns. [unh? one two-column key or two one-column keys? Sure wish you posted DDL] I insert data into this table from a daily capture table (which also contains the two columns that make up the key in the final data table, but they are not constrained (not unique) in the daily capture table). <<
    Then the "capture table" is not a table at all! Remember the first day of your RDBMS class? A table has to have a key. You need to fix this error. What ETL tool do you use?
    >> I don't want to insert rows from daily capture which already exist in the final data table (based on the two key columns). <<
    MERGE statement; Google it. And do not use temp tables.
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice / Data, Measurements and Standards in SQL / SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking in Sets / Trees and Hierarchies in SQL
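    For what it's worth, a minimal MERGE sketch for the insert-only-new-rows part (the table and column names are hypothetical stand-ins, since no DDL was posted):
    -- FinalData has the unique key on (KeyCol1, KeyCol2); DailyCapture is the unconstrained daily load
    MERGE INTO dbo.FinalData AS tgt
    USING (SELECT DISTINCT KeyCol1, KeyCol2, OtherCol FROM dbo.DailyCapture) AS src
        ON tgt.KeyCol1 = src.KeyCol1
       AND tgt.KeyCol2 = src.KeyCol2
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (KeyCol1, KeyCol2, OtherCol)
        VALUES (src.KeyCol1, src.KeyCol2, src.OtherCol);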

  • What are the Best Practices for Optimizing Images in InDesign Files

    Is there a best practice for handling images in InDesign to optimize the document before converting to a PDF? Specifically, what I'm asking is: will the PDF file compress better if the images are cropped prior to placing them in InDesign? I'd like to know the answer both for creating PDF files for printing with 300 dpi images and for creating PDF files for online delivery with 72 dpi images.
    I have an employee who insists images need to be cropped to their actual dimensions before being placed in the InDesign document. I've never done it that way and believe that her recommended process is far too time consuming and leaves you with no leeway to tweak your page design, since the images are tightly cropped.

    As for absolute cropping, I agree with your stance. Until the layout is fixed, preserving your ability to easily manipulate photo size and positioning is key.
    Some clever image management methods have been described in the discussion forums, and one that appealed most to me was the use of duplicate linked image folders. Having a high-res (CMYK) folder and a low-res (RGB) folder to switch between for different output enables you to use both to your advantage. Use the low-res images for layout, for internal proofing, and for EPUB/online PDF/HTML output. Then it's simply a quick switch to the high-res image folder for print purposes. You can easily prepare the alternate collection of images with a Photoshop batch convert script or with the Photoshop Image Processor. Save your presets!

  • Best practice for Catalog Views? :|

    Hello community,
    A best practice question:
    The situation: I have several product categories (110), several items in those categories (4000), and 300 end users. I would like to know the best practice for segmenting the catalog. I mean, some users should only see categories 10, 20 & 30; other users only category 80, etc. The problem is: how can I implement this?
    My first idea is:
    1. Create 110 Procurement Catalogs (1 for every prod.category).   Each catalog should contain only its product category.
    2. In my Org Model, assign at the user level all the "catalogs" that the user should access.
    Do you have any idea in order to improve this ?
    Saludos desde Mexico,
    Diego

    Hi,
    Your way of doing it will work, but you'll get maintenance issues (too many catalogs, and catalog links to maintain for each user).
    The other way is to build your views in CCM and assign these views to the users, either on the roles (PFCG) or on the user (SU01). The problem is that with CCM 1.0 this is limited, because you'll have to assign the items to each view one by one (no dynamic or mass processes); it has been enhanced in CCM 2.0.
    My advice:
    -Challenge your customer about views, and try to limit the number of views, for example to strategic and non-strategic
    -With CCM 1.0, stick to the procurement catalogs, or implement BAdIs to assign items to the views (I have experienced it, it works, but it is quite difficult), again with a limited number of views
    Good luck.
    Vadim

  • Best practices for ARM - please help!!!

    Hi all,
    Can you please help with any pointers / links to documents describing best practices for who should be creating the GRC request in the below ARM workflow in GRC 10.0?
    Create GRC request -> role approver -> risk manager -> security team
    The options are: end user / manager / functional super users / security team.
    End user and manager are not possible - we cannot train so many people. The functional team is refusing since it's a lot of work. Please help me with pointers to any best practices documents.
    Thanks!!!!

    In this case, I recommend proposing that the department managers create GRC Access Requests.  In order for the managers to comprehend the new process, you should create a separate "Role Catalog" that describes what abilities each role enables.  This Role Catalog needs to be taught to the department Managers, and they need to fully understand what tcodes and abilities are inside of each role.  From your workflow design, it looks like Role Owners should be brought into these workshops.
    You might consider a Role Catalog that the manager could filter on and make selections from.  For example, an AP manager could select "Accounts Payable" roles, and then choose from a smaller list of AP-related roles.  You could map business functions or tasks to specific technical roles.  The design flaw here, of course, is the way your technical roles have been designed.
    The point being, GRC AC 10 is not business-user friendly, so using an intuitive "Role Catalog" really helps the managers understand which technical roles they should be selecting in GRC ARs.  They can use this catalog to spit out a list of technical role names that they can then search for within the GRC Access Request.
    At all costs, avoid having end-users create ARs.  They usually select the wrong access, and the process then becomes very long and drawn out because the role owners or security stages need to mix and match the access after the fact.  You should choose a Requestor who has the highest chance of requesting the correct access.  This is usually the user's Manager, but you need to propose this solution in a way that won't scare off the manager - at the end of the day, they do NOT want to take on more work.
    If you are using SAP HR, then you can attempt HR Triggers for New User Access Requests, which automatically fill out and submit the GRC AR upon a specific HR action (New Hire, or Termination).  I do not recommend going down this path, however.  It is very confusing, time consuming, and difficult to integrate properly.
    Good luck!
    -Ken

  • Best practice for integrating a 3 point metro-e in to our network.

    Hello,
    We have just started to integrate a new 3-point metro-e WAN connection to our main school office. We are moving from point-to-point T-1s to 10 MB metro-e. At the main office we have 50 MB going out to 3 other sites at 10 MB each. For two of the remote sites we have purchased new routers, which should be straightforward configurations. We are having an issue connecting the main office with the 3rd site.
    At the main office we have a Catalyst 4006, and at the 3rd site we are trying to connect to a Catalyst 4503.
    I have attached configurations from both the main office and the 3rd remote site, as well as a basic diagram of how everything physically connects. These configurations are not working; we feel that it is a gateway-type problem, but have reached no great solutions. We have tried posting to a different forum, but so far have been unable to find a solution that helps.
    The problem I am having is on the remote side. I can reach the remote catalyst from the main site, but I cannot reach the devices on the other side of the remote catalyst; however, the remote catalyst can see devices on its side as well as devices at the main site.
    We have also tried trunking the ports on both sides and using encapsulation dot1q, but when we do this the 3rd site is able to pick up a DHCP address from the main office, and we do not feel that is correct. It works, but is this not creating a large broadcast domain?
    If you have any questions or need further configuration data please let me know.
    The previous connection was a T1 connection through a 2620, but this is not compatible with metro-e, so we are trying to connect directly through the catalysts.
    The other two connection points will connect through Cisco routers that are compatible with metro-e, so I don't think I'll have problems with those sites.
    Any and all help is greatly welcome, as this is our 1st metro-e project and we want to make sure we are following best practices for this type of integration.
    Thank you in advance for your help.
    Jeff

    Jeff, from your config it seems your main site and remote site are not EIGRP adjacent.
    Try adding a network statement for the 171.0 link to form a neighborship between the main and remote site so the L3 routing works.
    Once that is up you should be able to reach the remote-site hosts.
    HTH-Cheers,
    Swaroop
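    Roughly, that could look like the following on both catalysts (the EIGRP AS number, wildcard masks, and subnets are placeholders; use the values from your existing configs):
    router eigrp 100
     ! advertise the 171.x metro-e link so the two switches can form an EIGRP neighborship
     network 171.0.0.0 0.0.255.255
     ! also advertise the LAN subnets behind each switch
     network 10.0.0.0 0.255.255.255
     no auto-summary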

  • Best practices for creating Discoverer database connections - public vs. private

    I have enabled SSO for Discoverer. So when you browse to http://host:port/discoverer/viewer you get prompted for your SSO
    username/password. I have enabled users to create their own private
    connections. I log in as portal and created a private connection. I then from
    Oracle Portal create a portlet and add a discoverer worksheet using the private
    connection that I created as the portal user. This works fine... users access
    the portal and can see the worksheet. When they click the analyze link, the
    users are prompted to enter a password for the private connection. The
    following message is displayed:
    The item you are requesting requires you to enter a password. This could occur because this is a private connection or
    because the public connection password was invalid. Please enter the correct
    password now to continue.
    I originally created a public connection...and then follow the same steps from Oracle portal to create the portlet and display the
    worksheet. Worksheet is displayed properly from Portal, when users click the
    analyze link they are taken to Discoverer Viewer without having to enter a
    password. The problem with this is that when a user browses to
    http://host:port/discoverer/viewer they enter their SSO information and then
    any user with an SSO account can see the public connection...very insecure!
    When private connections are used, no connection information is displayed to
    SSO users when logging into Discoverer Viewer.
    For the very first step, when editing the Worksheet portlet from Portal, I enter the following for Database
    Connections:
    Publisher: I choose either the private or public connection that I created
    Users Logged In: Display same data to all users using connection (Publisher's Connection)
    Users Not Logged In: Do not display data
    My question is what are the best practices for creating Discoverer Database
    Connections.
    Is there a way to create a public connection, but not display it in at http://host:port/discoverer/viewer?
    Can I restrict access to http://host:port/discoverer/viewer to specific SSO users?
    So overall, I want roughly 40 users to have access to my Portal Page Group. I then want to
    display portlets with Discoverer worksheets. Certain worksheets I want to have
    the ability to display the analyze link. When the SSO user clicks on this they
    will be taken to Discoverer Viewer and prompted for no logon information. All
    SSO users will see the same data...there is no need to restrict access based on
    SSO username...1 database user will be set up in either the public or private
    connection.

    You can make it happen by creating a private connection for the 40 users via a capi script, and when creating the portlet select the 2nd option in the Users Logged In section. With this, the portlet uses their own private connection every time a user logs in, so it won't ask for a password.
    Another thing: there is an option in ASC, in the Discoverer section, to require entering the password or not, if your version is 10.1.2.2. Let me know if you need more information.
    Thanks,
    Kiran

  • Workflow not completed, is this best practice for PR?

    Hi SAP Workflow experts,
    I am new to workflow and am now responsible for supporting an existing PR release workflow.
    The workflow is quite simple and straightforward, but the issue is that the workflow seems like it will never be completed.
    When the user releases the PR, the next activity is "Requisition released", which uses task TS20000162.
    This sends a work item to the user's (PR creator's) SAP inbox, and when they double-click it the workflow is completed.
    The thing is, in our organization users do not access the SAP inbox, hence there are thousands of work items that have never been completed (our procurement system has been running since 2009).
    Our PR creators receive notification of the PR approval in their Outlook mail, handled by a program that is scheduled every 5 minutes.
    Since the documentation is not clear enough, I can't work out why the implementer used this approach.
    May I know whether this is the best practice for a PR workflow or not?
    My idea now is to modify the send-email program to complete the work item after the email has been sent to the user's Outlook mail.
    I'm not sure whether that is common in the workflow world, though.
    Any help is deeply appreciated.
    Thank you.

    Hello,
    "This sends a work item to the user's (PR creator's) SAP inbox, and when they double-click it the workflow is completed."
    It sounds like they are sending a work item where an email would be enough. By completing the work item they are simply acknowledging that they have received notification of the completion of the PR.
    "Our PR creators receive notification of the PR approval in their Outlook mail, handled by a program that is scheduled every 5 minutes."
    I hope (and assume) that they only receive the email once.
    I would change the workflow to send an email (SendMail step) to the initiator instead of the work item. That is normally what happens. Either that, or there is no email at all - some businesses only send an email if something goes wrong. Of course, the business has to agree to this change.
    Having that final work item adds nothing to the process. Replace it with an email.
    regards
    Rick Bakker
    hanabi technology

  • Best Practice for a Print Server

    What is the best practice for having a print server serving over 25 printers, 10 of which are colour lasers and the rest black and white lasers?
    Hardware
    At the moment we have one server, a 2 GHz dual G5 with 4 GB RAM and an Xserve RAID. The server is also our main Open Directory server, with about 400+ clients.
    I want to order a new server and want to know the best type of setup for the optimal print server.
    Thanks

    Since print servers need RAM and spool space, but not a lot of processing power, I'd go with a Mac mini packed with RAM and the biggest HD you can get into it. Then load a copy of Mac OS X Server (Tiger) on it and configure your print server there.
    Another option, if you don't mind used equipment, is to pick up an old G4 or G5 Xserve, load it up with RAM and disk space, and put Tiger on that.
    Good luck!
    -Gregg
