Discovery - Best Practice

Greetings,
I was wondering how most engineers approach Discovery, especially on larger networks.  I'm not interested in the methods so much as in how you "define" a valid discovery range.  On most networks, each subnet will have one or two IP addresses for network devices, and the rest of the range will be servers, workstations, etc.  Since you can't do both an include and an exclude on the filters, how do you make it so that your network devices are discovered but servers, firewalls, etc. aren't?  Do you manually "include" every device in the filter?  Use the sysObjectID somehow?  Is there a way to exclude devices that you don't want to repeatedly discover without using the exclude filter?
The ideal outcome would be to discover everything relevant and nothing that isn't, while retaining the ability to discover new devices that might be plugged into the network later.  How do most engineers handle this issue?  Does anyone know of a good document that covers this subject?  Thanks in advance!
George          

So far this has always meant quite a bit of searching, mapping, etc. to find out what is in place. And this is a manual exercise :-|
Selecting devices to manage is usually best done on sysObjectID. Most management systems have such a list of vendor OIDs; if you have many vendors, your list will be longer. Sometimes naming conventions indicate router or switch, but that is not 100% reliable.
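The sysObjectID allow-list approach can be sketched in a few lines of Python. The vendor OID prefixes below are illustrative (Cisco's enterprise subtree is 1.3.6.1.4.1.9, Juniper's is 1.3.6.1.4.1.2636); how you retrieve each device's sysObjectID is left to your SNMP tooling, and the IP-to-OID map here is made up for the example:

```python
# Sketch: keep only devices whose sysObjectID falls under a known
# network-vendor enterprise subtree. Extend the tuple per vendor.
NETWORK_VENDOR_OIDS = (
    "1.3.6.1.4.1.9.",      # Cisco enterprise subtree
    "1.3.6.1.4.1.2636.",   # Juniper enterprise subtree
)

def is_network_device(sys_object_id: str) -> bool:
    """True when the device's sysObjectID matches a known vendor prefix."""
    return sys_object_id.startswith(NETWORK_VENDOR_OIDS)

# Hypothetical discovery results: {ip: sysObjectID}
discovered = {
    "10.0.0.1": "1.3.6.1.4.1.9.1.122",     # a Cisco box
    "10.0.0.50": "1.3.6.1.4.1.311.1.1.3",  # a Windows server agent
}
managed = [ip for ip, oid in discovered.items() if is_network_device(oid)]
# managed now holds only the network devices, e.g. ["10.0.0.1"]
```

The same prefix test is what most NMS products do internally when you feed them a vendor OID list; maintaining the tuple is the manual part.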
Choosing the management address is a bit harder. I would never use a physical interface, though; always a virtual one. I first want to establish that a device is reachable, and only after that retrieve the state of its interfaces.
What I tend to do is try to find out which IP ranges are reserved for management. Next I try to map between the system-name or DNS-name of a device and the desired IP address.
With this info you can then 'discover' the network and, based on the lookup of system-name or DNS-name, retrieve the IP you need.
In an ideal world you would be able to get a list of IP addresses of all devices in your network, combined with the interface names, and select an address from there. After all, it is not uncommon for a specific VLAN to be used as a management VLAN. However, even though this info can be gathered simply from MIB-II, I'm not aware of any product out there that can do this.
LMS, for instance, can select the lowest lo0 loopback (if one is available) as the management interface.
Sometimes info is already present in IP address reservation systems, sometimes in access control systems, sometimes it is already for a large part in a DNS.
If 'nothing' is in place, I would suggest defining a loopback/VLAN interface on every device. Of course this will add to your routing tables, but you can then put the management VLAN in a separate QoS class, giving it the priority you feel it needs.
I tend to keep a subnet per site and have tried to allow for summarization of subnets on the WAN, but in MPLS backbones this is often not really a requirement anymore. If your network topology is a star, it may still be worthwhile.
Just a little brainstorm,...
Cheers,
Michel

Similar Messages

  • Best practice - Heartbeat discovery and Clear Install Flag settings (SCCM client) - SCCM 2012 SP1 CU5

    Dear All,
    Is there any best practice to avoid having around 50 clients where the client version number shows on the right side, but the client doesn't show as installed? See attached screenshot.
    SCCM version is 2012 SP1 CU5 (5.00.7804.1600); the server and admin console have been upgraded, and clients are being pushed to SP1 CU5.
    The following settings are set:
    Heartbeat Discovery every 2nd day
    Clear Install Flag maintenance task is enabled - Client Rediscovery period is set to 21 days
    Client Installation settings
    Software Update-based client installation is not enabled
    Automatic site-wide client push installation is enabled.
    Any advice is appreciated

    Hi,
    I saw a similar case to yours. The clients were stuck in provisioning mode.
    "we finally figured out that the clients were stuck in provisioning mode causing them to stop reporting. There are two registry entries we changed under [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\CCM\CcmExec]:
    ProvisioningMode=false
    SystemTaskExcludes=*blank*
    When the clients were affected, ProvisioningMode was set to true and SystemTaskExcludes had several entries listed. After correcting those through a GPO and restarting the SMSAgentHost service the clients started reporting again."
    https://social.technet.microsoft.com/Forums/en-US/6d20b5df-9f4a-47cd-bdc3-2082c1faff58/some-clients-have-suddenly-stopped-reporting?forum=configmanagerdeployment
    Best Regards,
    Joyce
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected]

  • New Best Practice for Titles and Lower Thirds?

    Hi everyone,
    In the days of overscanned CRT television broadcasts, the classic Title Safe restrictions and the use of larger, thicker fonts made a lot of sense. These practices are described in numerous references and forum posts.
    Nowadays, much video content will never be broadcast, CRTs are disappearing, and it's easy to post HD video on places like YouTube and Vimeo. As a result, we often see lower thirds and other text really close to the edge of the frame, as well as widespread use of thin (not bold) fonts. Even major broadcast networks are going in this direction.
    So my question is, what are the new standards? How would you define contemporary best practice?
    Thanks for your thoughtful replies!
    Les

    stuckfootage wrote:
    I wish I had a basket of green stars...
    Quoted for stonedposting.
    Bzzzz, crackle..."Discovery One, what is that object?
    Bzz bzz."Not sure, Houston, it looks like a basket...." bzzz
    Crackle...."A bas...zzz.. ket??"
    Bzzz. "My God, It's full of stars!" bzz...crackle.
    Peeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeep!

  • Best practice migration

    Hi
    I am searching for a link explaining best practice for migrating SCCM 2007 to SCCM 2012 R2. (E.g.: Is it necessary to configure a discovery method in SCCM 2012 before starting the migration? After migrating a computer, does the new client deploy automatically or not?)
    Thanks

    Hi,
    There is a CM 2012 migration guide below, with a lot of articles and blog posts available to help you with the migration process.
    http://anoopcnair.com/2012/07/06/sccm-configmgr-2007-to-2012-migration-reference-guide/
    Note: Microsoft provides third-party contact information
    to help you find technical support. This contact information may change without notice. Microsoft does not guarantee the accuracy of this third-party contact information.
    Best Regards,
    Joyce

  • Best practices when using OEM with Siebel

    Hello,
    I support numerous Oracle databases and have also taken on the task of supporting Enterprise Manager (Grid Control). Currently we have installed the agent (10.2.0.3) on our Oracle database servers, so most of our targets are hosts, databases, and listeners. Our company is also using Siebel 7.8, which is supported by the Siebel ops team. They are looking into purchasing the Siebel plugin for OEM. My question is whether there is a general guide or best practice for managing the Siebel plugin. I understand that agents will be installed on each of the servers that have Siebel components, but what I have not seen documented is who is responsible for installing them. Does the DBA team need an account on the Siebel servers to do the install, or can the Siebel ops team do the install and have permissions set on the agent so that it can communicate with Grid Control? Also, they will want access to Grid Control to see the performance of their components; how do we limit their access to only the Siebel targets, including what is available under the Siebel Services tab? Any help would be appreciated.
    Thanks.

    There is a Getting Started Guide, which explains about installation
    http://download.oracle.com/docs/cd/B16240_01/doc/em.102/b32394/toc.htm
    -- I presume there are two teams in your organization: the DBA team, which is responsible for installing the agent and owns Grid Control, and the Siebel ops team, which is responsible for monitoring the Siebel deployment.
    Following is my opinion based on the above assumption:
    -- DBA team installs the agent as a monitoring user
    -- Siebel ops team grants the above user execute permission on the server manager (srvrmgr.exe) utility and read permission on all files under the Siebel installation directory
    -- DBA team provisions a new admin account for the Siebel ops team and restricts that user's permissions
    -- Siebel ops team configures the Siebel pack in Grid Control (discovery/configuration etc.)
    -- With the above setup, the Siebel ops team can view only the Siebel-specific targets.
    Thanks

  • IPS Tech Tips: IPS Best Practices with Cisco Remote Management Services

    Hi Folks -
    Another IPS Tech Tip coming up and this time we will be hearing from some past and current Cisco Remote Services members on their best practice suggestions. As always these are about 30 minutes of content and then Q&A - a low cost high reward event.
    Hope to see you there.
    -Robert
    Cisco invites you to attend a 30-45 minute Web seminar on IPS Best Practices delivered via WebEx. This event requires registration.
    Topic: Cisco IPS Tech Tips - IPS Best Practices with Cisco Remote Management Services
    Host: Robert Albach
    Date and Time:
    Wednesday, October 10, 2012 10:00 am, Central Daylight Time (Chicago, GMT-05:00)
    To register for the online event
    1. Go to https://cisco.webex.com/ciscosales/onstage/g.php?d=203590900&t=a&EA=ralbach%40cisco.com&ET=28f4bc362d7a05aac60acf105143e2bb&ETR=fdb3148ab8c8762602ea8ded5f2e6300&RT=MiM3&p
    2. Click "Register".
    3. On the registration form, enter your information and then click "Submit".
    Once the host approves your registration, you will receive a confirmation email message with instructions on how to join the event.
    For assistance
    http://www.webex.com
    IMPORTANT NOTICE: This WebEx service includes a feature that allows audio and any documents and other materials exchanged or viewed during the session to be recorded. By joining this session, you automatically consent to such recordings. If you do not consent to the recording, discuss your concerns with the meeting host prior to the start of the recording or do not join the session. Please note that any such recordings may be subject to discovery in the event of litigation. If you wish to be excluded from these invitations then please let me know!

    Hi Marvin, thanks for the quick reply.
    It appears that we don't have Anyconnect Essentials.
    Licensed features for this platform:
    Maximum Physical Interfaces       : Unlimited      perpetual
    Maximum VLANs                     : 100            perpetual
    Inside Hosts                      : Unlimited      perpetual
    Failover                          : Active/Active  perpetual
    VPN-DES                           : Enabled        perpetual
    VPN-3DES-AES                      : Enabled        perpetual
    Security Contexts                 : 2              perpetual
    GTP/GPRS                          : Disabled       perpetual
    AnyConnect Premium Peers          : 2              perpetual
    AnyConnect Essentials             : Disabled       perpetual
    Other VPN Peers                   : 250            perpetual
    Total VPN Peers                   : 250            perpetual
    Shared License                    : Disabled       perpetual
    AnyConnect for Mobile             : Disabled       perpetual
    AnyConnect for Cisco VPN Phone    : Disabled       perpetual
    Advanced Endpoint Assessment      : Disabled       perpetual
    UC Phone Proxy Sessions           : 2              perpetual
    Total UC Proxy Sessions           : 2              perpetual
    Botnet Traffic Filter             : Disabled       perpetual
    Intercompany Media Engine         : Disabled       perpetual
    This platform has an ASA 5510 Security Plus license.
    So then what does this mean for us VPN-wise? Is there any way we can set up multiple VPNs with this license?

  • SAP Best Practices --- is it available for mySAPErp 2004 ECC5.0

    Is it possible to implement SAP Best Practices on mySAPErp 2004 ECC 5.0?
    Also, is best practices available with the Discovery Box?
    Thanks
    Devendra Koppikar

    Hi Devendra,
    Yes, that is possible. If you would like to find out the details, I recommend visiting the Service Marketplace. Just follow this link; it should take you directly to the information you are looking for:
    http://help.sap.com/content/bestpractices/baseline/bestp_baseline_baseline_V500.htm
    Best regards,
    Frauke

  • Naming convention best practice for PDB pluggable

    In OEM, the auto discovery for a PDB produces a name using the cluster as the prefix and the database name suffix, such as:
    odexad_d_alpcolddb_alpcolddb-scan_PDBODEXAD
    If that PDB is moved to another cluster, I imagine that name will not change, but the naming convention will have been violated.
    Am I wrong, and does anyone have a suggestion for a best practice for naming PDBs?

    If the PDB moves to another cluster, OEM would auto-discover it in the new cluster.  So it would "assign" it a new name. 
    As a separate question, would you be renaming the PDB (the physical name) when you move it to another cluster ?
    Hemant K Chitale

  • Oracle Identity Manager - automated builds and deployment/Best practice

    Is there a best practice for the directory structure of the repository in a version control system?
    Do you recommend keeping the whole xellerate folder plus a separate structure for XML files and Java code? (Considering that multiple upgrades can occur over time.)
    How is custom code merged into the main application?
    How does deployment to the WebLogic application server occur? (Do you create your own script, or is there an out-of-the-box script that can be reused?)
    I would appreciate any guidance regarding this matter.
    Thank you for your help.

    Hi,
    You can use any IDE (Eclipse, Netbeans) for development.
    To get started with the OIM APIs using Eclipse, please follow these steps:
    1. Creating the working folder structure
    2. Adding the jar/configuration files needed
    3. Creating a java project in Eclipse
    4. Writing a sample java class that will call the API's
    5. Debugging the code with Eclipse debugger
    6. API Reference
    1. Creating the working folder structure
    The following structure must be created in the home directory of your project (Separate project home for each project):
    <PROJECT_HOME>
    \ bin
    \ config
    \ ext
    \ lib
    \ log
    \ src
    The folders will store:
    src - source code of your project
    bin - compiled code of your project
    config - configuration files for the API and any of your custom configuration files
    ext - external libraries (3rd party)
    lib - OIM API libraries
    log - local logging folder
    2. Adding the jar/configuration files needed
    The easiest way to perform this task is to copy all the files from the OIM Design Console
    folders respectively in the <PROJECT_HOME> folders.
    That is:
    <XEL_DESIGN_CONSOLE_HOME>/config -> <PROJECT_HOME>/config
    <XEL_DESIGN_CONSOLE_HOME>/ext -> <PROJECT_HOME>/ext
    <XEL_DESIGN_CONSOLE_HOME>/lib -> <PROJECT_HOME>/lib
    3. Creating a java project in Eclipse
    + Start Eclipse platform
    + Select File->New->Project from the menu on top
    + Select Java Project and click Next
    + Type in a project name (For example OIM_API_TEST)
    + In the Contents panel select "Create project from existing source",
    click Browse and select your <PROJECT_HOME> folder
    + Click Finish to exit the wizard
    At this point the project is created and you should be able to browse
    through it in Package Explorer.
    Setting src in the build path:
    + In Package Explorer right click on project name and select Properties
    + Select Java Build Path in the left and Source tab in the right
    + Click Add Folder and select your src folder
    + Click OK
    4. Writing a sample Java class that will call the API's
    + In Package Explorer, right click on src and select New->Class.
    + Type the name of the class as FirstAPITest
    + Click Finish
    Put the following sample code in the class:
    import java.util.Hashtable;
    import com.thortech.xl.util.config.ConfigurationClient;
    import Thor.API.tcResultSet;
    import Thor.API.tcUtilityFactory;
    import Thor.API.Operations.tcUserOperationsIntf;

    public class FirstAPITest {
        public static void main(String[] args) {
            try {
                System.out.println("Startup...");
                System.out.println("Getting configuration...");
                ConfigurationClient.ComplexSetting config =
                    ConfigurationClient.getComplexSettingByPath("Discovery.CoreServer");
                System.out.println("Login...");
                Hashtable env = config.getAllSettings();
                tcUtilityFactory ioUtilityFactory = new tcUtilityFactory(env, "xelsysadm", "welcome1");
                System.out.println("Getting utility interfaces...");
                tcUserOperationsIntf moUserUtility =
                    (tcUserOperationsIntf) ioUtilityFactory.getUtility("Thor.API.Operations.tcUserOperationsIntf");
                Hashtable mhSearchCriteria = new Hashtable();
                mhSearchCriteria.put("Users.First Name", "System");
                tcResultSet moResultSet = moUserUtility.findUsers(mhSearchCriteria);
                // Iterate over the result set and print each user's key
                for (int i = 0; i < moResultSet.getRowCount(); i++) {
                    moResultSet.goToRow(i);
                    System.out.println(moResultSet.getStringValue("Users.Key"));
                }
                System.out.println("Done");
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
    Replace "welcome1" with your own password.
    + Save the class
    To run the example class perform the following steps:
    + Click in the menu on top Run, and run "Create, Manage, and run Configurations" wizard. (In the menu, this can be either "run..." or "Open Run Dialog...", depending on the version of Eclipse used).
    + Right click on Java Application and select New
    + Click on arguments tab
    + Paste the following in VM arguments box:
    -Djava.security.manager -DXL.HomeDir=.
    -Djava.security.policy=config\xl.policy
    -Djava.security.auth.login.config=config\authwl.conf
    -DXL.ClientClassName=%CLIENT_CLASS%
    (please replace the URL, in ./config/xlconfig.xml, to your application server if not running on localhost or not using the default port)
    + Click Apply
    + Click Run
    At this point your class is executed. If everything is correct, you will see the following output in the Eclipse console:
    Startup...
    Getting configuration...
    Login...
    log4j:WARN No appenders could be found for logger (com.opensymphony.oscache.base.Config).
    log4j:WARN Please initialize the log4j system properly.
    Getting utility interfaces...
    1
    Done
    Regards,
    Sunny Ajmera

  • What are Printing Security Best Practices for Advanced Features

    In Networking > Advanced "Enabled Features", what are the best-practice settings for security? I'm trying to find out what all of these are; I can't find them in the documentation. Particularly eCCL & eFCL?
    Enabled Features
    IPv4, IPv6, DHCP, DHCPv6, BOOTP, AUTOIP, LPD Printing, 9100 Printing, LPD Banner Page Printing, Bonjour, AirPrint, LLMNR, IPP Printing, IPPS Printing, FTP Printing, WS-Discovery, WS-Print, SLP, Telnet configuration, TFTP Configuration File, ARP-Ping, eCCL, eFCL, Enable DHCPv4 FQDN compliance with RFC 4702
    Thanks,
    John

    I do work with the LAST archived project file, which contains ALL necessary resources to edit the video.  But then if I add video clips to the project, these newly added clips are NOT in the archived project, so I archive it again.
    The more I think about it, the more I like this workflow.  One disadvantage as you said is duplicate videos and resource files.  But a couple of advantages I like are:
    1. You can revert to a previous version if there are any issues with a newer version, e.g., project corruption.
    2. You can open the archived project ANYWHERE, and all video and resource files are available.
    In terms of a larger project containing dozens of individual clips, like my upcoming 2013 highlights video of my 4-year-old, I'll delete older archived projects as I go and save maybe a couple of previous archived projects in case I want to revert to them.
    If you are familiar with the lack of project management in iMovie, then you will know why I am elated to be using Premiere Elements 12 and being able to manage projects at all!
    Thanks again for your help, I'm looking forward to starting my next video project.

  • Site Maintenance Task Best Practice

    As per our understanding, we need to enable either the "Clear Install Flag" task or the "Delete Inactive Client Discovery Data" task.
    Please let us know what the consequences will be if we enable both tasks, and what the best practices are.
    Prashant Patil

    The Clear Install Flag task is highly dependent on Heartbeat Discovery. If you install the client on a computer, heartbeat sends the information to the site, marking its Install Flag as active in the database. If you later uninstall the client, the Install Flag will remain active until the computer is next discovered; when the client is no longer discovered by Heartbeat Discovery, the Install Flag will be cleared.
    As a rule of thumb, when enabling this task, set the Client Rediscovery period to an interval longer than the Heartbeat Discovery schedule.
    More information about how Clear Install Flag works is given here  http://myitforum.com/cs2/blogs/jgilbert/archive/2008/10/18/client-is-installed-flag-explained.aspx
    Delete Inactive Client Discovery Data:
    I suggest you look at the TechNet document; it's clearly explained: http://technet.microsoft.com/en-us/library/bb693646.aspx
    Eswar Koneti | Configmgr blog:
    www.eskonr.com | Linkedin: Eswar Koneti
    | Twitter: Eskonr

  • ICMP Best Practices for Firewall

    Hello,
    Is there a such Cisco documentation for ICMP best practices for firewall?
    Thanks

    Hello Joe,
    I haven't looked for such a document, but what I can tell you is the following:
    ICMP is a protocol that lets us troubleshoot or test whether IP routing is good on our network, or whether a host is live, so from that perspective it is definitely something good (not to mention some of the protocol's other good uses, such as Path MTU Discovery, etc.).
    But you also have to be careful with this protocol: as we all know, it is also used to scan for or discover hosts on our network, and even to perform DoS attacks (Smurf attack, etc.).
    So what's the whole point of this post?
    Well, in my opinion at least, I would allow ICMP on my network, but I would definitely permit only the right ICMP message types, and I would protect my network against any known DoS vulnerabilities involving ICMP. That way I can still take advantage of a really useful protocol while protecting my environment.
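    For example, that advice is often implemented as an ACL that permits only the diagnostic ICMP messages and drops the rest. A hypothetical IOS-style fragment (the ACL number, type list, and placement are illustrative; tailor them to your environment):

    ```
    access-list 110 remark Allow only diagnostic ICMP; deny the rest
    access-list 110 permit icmp any any echo-reply
    access-list 110 permit icmp any any unreachable
    access-list 110 permit icmp any any time-exceeded
    access-list 110 deny icmp any any
    ```

    Note that blocking "unreachable" wholesale would break Path MTU Discovery, which relies on the "fragmentation needed" unreachable message; that is why it stays in the permit list here.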
    Hope that I could help
    For Networking Posts check my blog at http://laguiadelnetworking.com/
    Cheers,
    Julio Carvajal Segura

  • Logical level in Fact tables - best practice

    Hi all,
    I am currently working on a complex OBIEE project/solution where I am going straight to the production tables, so the fact (and dimension) tables are pretty complex, since I am using multiple sources in the logical tables to increase performance. What I often struggle with is the Logical Levels (in the Content tab), where the level of each dimension is set. In a star schema (one-to-many) this is pretty straightforward and easy to set up, but when the business model (and physical model) gets more complex I sometimes struggle with the aggregates, i.e. getting them to work/appear with different dimensions. (Using the menu "More" > "Get levels" does not always give the best solution, far from it.) I have some combinations of left and right outer joins as well, making it even more complicated for the BI server.
    For instance, I have about 10-12 different dimensions; should all of them always be connected to each fact table, either at Detail or Total level? I can see the use of logical levels when using aggregate fact tables (on quarter, month, etc.), but is it better just to skip the logical-level setup when no aggregate tables are used? Sometimes that seems like the easiest approach...
    Does anyone have a best practice concerning this issue? I have googled for this but haven't found anything good yet. Any ideas/articles are highly appreciated.

    Hi User,
    "For instance, I have about 10-12 different dimensions; should all of them always be connected to each fact table, either at Detail or Total level?" It is not necessary to connect all dimensions; it depends on the report you are creating. But as a best practice, you should maintain them all at Detail level when you specify join conditions in the physical layer.
    For example, for the sales table, if you want to report at the ProductDimension.Productname level, then you should use Detail level; otherwise use Total level (at product, employee level).
    Get Levels (available only for fact tables) changes aggregation content. If joins do not exist between fact table sources and dimension table sources (for example, if the same physical table is in both sources), the aggregation content determined by the Administration Tool will not include the aggregation content of this dimension.
    Source: Admin Guide ("Get Levels" definition)
    thanks,
    Saichand.v

  • Best practices for setting up users on a small office network?

    Hello,
    I am setting up a small office and am wondering what the best practices/steps are to set up and manage the admin, user logins, and sharing privileges for the setup below:
    Users: 5 users on new iMacs (x3) and upgraded G4s (x2)
    Video Editing Suite: Want to connect a new iMac and a Mac Pro, on an open login (multiple users)
    All machines are to be able to connect to the network, peripherals and external hard drive. Also, I would like to setup drop boxes as well to easily share files between the computers (I was thinking of using the external harddrive for this).
    Thank you,

    Hi,
    Thanks for your posting.
    When you install AD DS at the hub or staging site, disconnect the installed domain controller, and then ship the computer to the remote site, you are disconnecting a viable domain controller from the replication topology.
    For more detailed information, please refer to:
    Best Practices for Adding Domain Controllers in Remote Sites
    http://technet.microsoft.com/en-us/library/cc794962(v=ws.10).aspx
    Regards.
    Vivian Wang

  • Add fields in transformations in BI 7 (best practice)?

    Hi Experts,
    I have a question regarding transformation of data in BI 7.0.
    Task:
    Add new fields in a second level DSO, based on some manipulation of first level DSO data. In 3.5 we would have used a start routine to manipulate and append the new fields to the structure.
    Possible solutions:
    1) Add the new fields to first level DSO as well (empty)
    - Pro: Simple, easy to understand
    - Con: Disc space consuming, performance degrading when writing to first level DSO
    2) Use routines in the field mapping
    - Pro: Simple
    - Con: Hard to performance optimize (we could of course fill an internal table in the start routine and then read from this to get some performance optimization, but the solution would be more complex).
    3) Update the fields in the End routine
    - Pro: Simple, easy to understand, can be performance optimized
    - Con: We need to ensure that the data we need also exists (i.e. if we have one field in DSO 1 that we only use to calculate a field in DSO 2, this would also have to be mapped to DSO 2 in order to exist in the routine).
    Does anybody know what is best practice is? Or do you have any experience regarding what you see as the best solution?
    Thank you in advance,
    Mikael

    Hi Mikael.
    I like the 3rd option and have used it many, many times. In answer to your question:
    Update the fields in the end routine
    - Pro: Simple, easy to understand, can be performance optimized. Yes, I have read and tested that this works faster; there is an OSS consulting note out there indicating the speed of the end routine.
    - Con: We need to ensure that the data we need also exists (i.e. if we have one field in DSO 1 that we only use to calculate a field in DSO 2, this would also have to be mapped to DSO 2 in order to exist in the routine). Yes, but by using the result package, the manipulation can be done easily.
    Hope it helps.
    Thanks,
    Pom
