LMS 4.1 Periodic Polling

Does Periodic Polling also collect configurations, or does it only check for changes, so that a Periodic Collection must then be run to collect the changed configs?
I ask because I see that someone has configured both to run daily on our system.

If someone were to succeed in blocking SNMP, and thus prevent change detection, the weekly collection would still pick up any changes.
Polling is done purely over SNMP, whereas collection uses all the protocols that are allowed in RME (Telnet, SSH, RCP, SCP, TFTP, etc.).
The syslog automated actions feature is the part of RME that listens for syslog messages from the devices. Your devices must, of course, be configured to send syslog to the server. Certain messages indicate a (potential) change and trigger a collection job.
Cheers,
Michel

Similar Messages

  • LMS 4.0 Archive Poller starts failing after some working cycles

    Hi,
    We have an LMS 4.0 running in our network with around 650 nodes.
    We are having problems with the Archive Poller.
    The first few days it worked fine, but then it started to fail for some devices, and the next day it failed to poll all devices. I restarted the daemon manager after the failure and it came back working fine for another few days. Since then the Change Poller works for a few days, then polling degrades to a complete failure; a restart of the daemon manager gets it working again, but only for a few days.
    Here is some output of a failed node:
    *** Device Details for zagora4-sw *** 
    Protocol ==> Unknown / Not Applicable 
    Selected Protocols with order ==> Telnet,SSH 
    Execution Result:
    Unable to get results of job execution for device. Retry the job after increasing the job result wait time using the option:Admin > Collection Settings > Config > Config Job Timeout Settings
    It seems that the poller is not even trying to poll the nodes...?
    Do you think the DB might have gotten corrupted? I restarted the machine without shutting down the processes some weeks ago...
    Any hints or ideas would be appreciated!

    Hi all,
    I have the same problem as Ruslan. Does anybody have an idea how to solve this? After some successful runs of the Archive Poll it stops with the error mentioned above...
    *** Device Details for nunw-n30-05-005 ***
    Protocol ==> Unknown / Not Applicable
    Selected Protocols with order ==> Telnet,SSH,HTTPS
    Execution Result:
    Unable to get results of job execution for device. Retry the job  after increasing the job result wait time using the option:Admin > Collection  Settings > Config > Config Job Timeout Settings
    I've tried to increase the "wait time" but nothing happens ...
    Help please ...
    Thanks,
    Regards, Mario

  • LMS v.4 Periodic Reports

    Hello Dears
    I need your support as I have a problem with Periodic Reports: I need to stop them from being created, or change their schedule.
    Can anyone help me with this issue?
    Best Regards,
    Mohamed Atef

    We can suggest the best way to schedule the job you want in LMS, but you have to share more details on which job it is. Is it a system job or a user-created job? What is the name of the job - e.g. NetConfig, Sync Archive, etc.?
    Each job has a four-digit number, say 1234. When the job is scheduled, it runs with an instance number after a decimal point - 1234.1, 1234.2, 1234.3, ..., 1234.n - incrementing each time it runs.
    To edit it, you have to modify the first instance by searching for that job in the job browser under Admin.
    For a system job, such as the one for inventory or configuration archive, you can simply try disabling it once and re-enabling it with the new schedule.
    Please share more details on job type and LMS version.
    -Thanks
    Vinod

  • LMS 4.2 MIBs Polled

    Hi All,
    I want to understand poller management and how many MIBs are polled per interface when using the interface utilization template.
    I created a poller with the interface utilization template added, and 10 interfaces were added to the poller.
    From the documentation I understood that 3 MIBs are polled by the interface utilization template, but after creating the poller and running the report, the report shows 40 instances in total - i.e. 4 MIBs polled per interface:
    64txutilization
    64rxutilization
    ifHCoutoctets
    ifHCinOctets
    10 interfaces x 4 MIBs = 40 instances in the report.
    Can anyone clarify which MIBs are polled by the interface utilization template, and how many per instance?
    Regds,
    Channa

    Hi Elisabeth,
    Kindly check the thread below, answered by my friend Vinod. I hope it will answer your query.
    https://supportforums.cisco.com/thread/2222945
    Thanks-
    Afroz

  • LMS 4.2.2 - RME Config Change

    I am currently receiving emails from LMS 4.2.2 that alert me of a config change on my switches. The alerts are emailed after the periodic polling job that runs daily. My problem is that the emails contain only the IP address of the device. It would be nice to open the emails and see what change(s) were made on the device. Does anyone know if this can be done? TIA

    I have seen this when the SNMP timeouts needed to be increased.
    Please try increasing those values, not the job-completion values.
    I just saw in CP LMS 4.2 that the values are now assembled on a new page - all of them:
    Admin > Network > Timeout and Retry Settings > Inventory/Config Timeout and Retry Settings
    Also, in the navigator of that same window (the one your screenshot is from), try the device-specific timeouts:
    Admin > Collection Settings > Config > Edit the Inventory/Config Timeout and Retry Settings
    HTH

  • UP-link, cpu use reports and Polling times

    Hello,
    1. Do you have the polling and collection times for each view in LMS 3.1?
    RME=?
    CM=?
    DFM=?
    When I get a report showing some change - like an interface going administratively down, or a MAC address moving to a different port on a switch - the report doesn't show the change immediately.
    I don't think this is useful for troubleshooting. What is your recommendation? I need to track, in real time, a MAC address that jumps between different ports on a switch.
    2. Could you please help me get an up-link report? I need statistics on the behavior of up-links.
    3. I can't find how to obtain a CPU usage report for my routers and switches. Can you help me?
    DATA:
    LMS3.1
    Windows2003
    Thanks in advance!!!

    The operationally down events in DFM are discovered via periodic polling.  By default, DFM polls every four minutes.  This would explain the time discrepancy you are seeing.  If you need to track MAC address changes in real-time, consider enabling dynamic User Tracking in Campus Manager. See http://SERVER/help/CMcore/CmHelp/ut_UstndingDynamicUpdate.html#wp1434393 (the context sensitive online help) for more details on what dynamic UT is, and how to enable it on your switches.
    As for getting a report on CPU usage, you will need to install the Health and Utilization Monitor add-on to LMS.  It ships with LMS 3.1, but you need to purchase a separate license for it.  Out of the box, it comes with a 90-day evaluation license so you can see if it will meet your needs.

  • Ciscoworks LMS RME / ASA Firewall configuration pre-shared key savings

    Does anybody know how CiscoWorks LMS/RME saves pre-shared keys?
    Is there a way to get the unencrypted values out of CiscoWorks LMS/RME for an ASA firewall?
    ASA config. saved with RME
    pre-shared-key *
    ASA config. saved to TFTP from ASA
    pre-shared-key 1ZdmaKVwEkQ66nD37d9kA9fj9z75

    If you enable the "shadow directory" (RME - Admin - Config Mgmt - Archive Mgmt - Archive Settings), you can find the raw configs in locations such as /var/adm/CSCOpx/files/rme/dcma/shadow/Security_and_VPN/PRIMARY on Solaris, or its Windows equivalent, after one cycle of Periodic Polling and/or Periodic Collection has run. That's the same config you would get by saving to TFTP manually.
    However, I don't recall how to unscramble the "asterisks" in the RME GUI, if at all possible.

  • LMS 3.2 on Solaris 10 - New devices and RME archive job

    Hi All,
    I've created an RME job that archive-syncs our devices. When I add new devices to Common Services, which updates RME, will those new devices have their configurations archived by that same RME job, or do I need to create a new job?
    When the existing job was created the option "All Devices" was selected.
    Thanks,
    Jose Ribeiro

    Hi Jose,
    If you really want to track device configurations with the help of RME jobs (sometimes we need a dedicated job for auditing purposes), you can schedule a job, but use the Group Selector option (Select a Layer: Device Selector / Group Selector), which you'll see at the very top while creating the job. With this option, the membership of the group is dynamic: the job runs on all devices that exist in the selected group at the time the job runs, not at the time the job was created. For example, if you created the job on the Normal group of RME, the job will attempt to back up the configs of all devices that exist in the Normal group at run time. So if your RME is synchronized with DCR (Common Services) and you add a new device to Common Services, you do not need to add that device to the scheduled job again: DCR will send the device to RME, and once its inventory is collected it will automatically be in the Normal group of RME. Since we selected the Group Selector option, RME will collect the config for this device too.
    However, periodic polling and periodic collection are also designed to achieve the same purpose, and are a kind of administrative task for managing the devices in RME.
    Now the question arises: what is the difference between periodic polling and periodic collection? It's actually very simple.
    Periodic Polling: RME first polls the device and, if it finds any difference between the actual configuration on the device and the existing configuration in RME, it triggers a collection (backup) of the config for that device. If it finds no difference, it does not trigger a new sync archive, thus saving your network resources.
    Periodic Collection: RME does no comparison and always triggers a sync archive on the device. However, you will only see a new version of the device configuration when something actually changed on the device, since there is no point in keeping the same config many times in the RME database.
    Hope this answers your queries.
    Thanks & Regards
    Gaganjeet
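
    Gaganjeet's polling-versus-collection distinction can be sketched in code. The following is a minimal Python sketch, not RME's actual implementation: the hash comparison stands in for RME's SNMP-based change check, and all names are illustrative.

```python
import hashlib

def config_fingerprint(config_text):
    """Cheap change indicator, standing in for the SNMP poll."""
    return hashlib.sha256(config_text.encode()).hexdigest()

def periodic_polling(device_config, archived_config, collect):
    """Collect only if the device config differs from the archive."""
    if config_fingerprint(device_config) != config_fingerprint(archived_config):
        collect(device_config)   # change detected: trigger a sync archive
        return True
    return False                 # no change: archive left untouched

def periodic_collection(device_config, archive):
    """Always collect, but store a new version only when it differs."""
    if not archive or archive[-1] != device_config:
        archive.append(device_config)
```

    Polling skips the expensive fetch when nothing has changed; collection always fetches but deduplicates in the archive, which matches why RME shows a new config version only after an actual change.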

  • MBP Harddisk - periodic click-clack Sounds

    My hard disk makes a really strange click-clack sound every ~30 seconds. I just came back from my local Apple Service Provider. The service man told me that he also hears the sound, but he doesn't know whether it is a hardware defect.
    How can I scan for bad sectors in OSX?
    Is there a way to obtain detailed S.M.A.R.T Information?

    Hi weichsel,
    Your drive is probably fine. Be sure though to always perform backups. Here is a program that can check your drive and tell you all is well.....
    http://macupdate.com/info.php/id/14825
    SMARTReporter is an application that can warn you of ATA hard-drive failures before they actually happen! It does so by periodically polling the S.M.A.R.T.-status of your hard-drives. S.M.A.R.T. (Self-Monitoring Analysis and Reporting Technology) is a technology built into most modern hard-drives that acts as an "early warning system" for pending drive problems. Because SMARTReporter relies on the S.M.A.R.T. implementation of Mac OS X, it only supports ATA or S-ATA hard-drives, if you want S.M.A.R.T. support for your SCSI, USB or FireWire hard-drive, send feedback to Apple. SMARTReporter can notify you of impending drive failures by sending e-mails, displaying a warning dialog or executing an application. The current status of your drives is always displayed through the customizable menu item.

  • How to check process is running for an application pool on remote server

    Hi,
    I am creating a console application which will check whether a WCF service is up and running. My idea is to check whether a w3wp process is running for a particular application pool using some .NET API. We have different websites with different application pools.
    I know we can use metadata, or call an operation inside the WCF service to test it via svcutil or proxy methods, but I don't want that. Please help.
    Thanks,
    Dhanaji

    I assume your goal is simply to provide a heartbeat to periodically make sure that a service is running? There is no real guarantee that a service will be available until you actually try to call it.
    Also, please realize that you're going to impact the server's performance by doing this. IIS is set up to idle out apps that aren't being used. If you periodically poll the service then you will prevent IIS from doing this. The result is that you'll see a WCF service running even if nobody ever uses it. Is this really what you want? If you have a lot of WCF services being checked, the server wastes resources on services that might not be needed.
    IIS doesn't work like a normal application. It hosts your WCF service. You should let IIS handle the lifetime of your service rather than using a heartbeat. It would take a catastrophic failure for IIS to be unable to start a service when a request comes in. The app pool itself may be running even without the WCF app running: an app pool is a collection of apps, so if any app is running then the pool is running. Checking the pool tells you nothing about the apps in it.
    You mentioned that you don't want to use IIS metadata or a proxy, but those are the only two options you have. Personally I would just hit the endpoint with an HttpWebRequest and verify you get a response; the IIS metadata would be a close second.
    Michael Taylor
    http://blogs.msmvps.com/p3net
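
    The "just hit the endpoint" heartbeat suggested above can be sketched in Python, with urllib standing in for .NET's HttpWebRequest; the URL is whatever your service exposes, an assumption rather than a WCF-specific API.

```python
import urllib.error
import urllib.request

def service_is_up(url, timeout=5.0):
    """Probe the endpoint; any reachable 2xx/3xx response counts as 'alive'.

    Note: each probe wakes the app up in IIS, so don't poll aggressively.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, OSError):
        # DNS failure, refused connection, timeout, or an HTTP error status
        return False
```

    Remember the caveat above: a probe like this keeps the app pool warm, defeating IIS idle timeout, so run it sparingly if at all.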

  • Refresh an open web page on closing an applet launch from Java Web Start

    Hi
    I launch an application via Java Web Start from a link in a web page. I am looking for a solution to refresh my web page when I close my applet.
    Thanks in advance for the solution, or just for letting me know whether it's possible or not.

    Everything's possible, but this is not trivial. There is no direct connection between the HTML page that launched the app and the app itself. You can create an indirect connection by having the app notify a server process when it closes, and having the page periodically poll the server for the app's status.

  • How can I redirect to another JSP page automatically after some event

    I am developing a Tic-Tac-Toe game which can be played between two players who are on two different machines.
    When the first player comes to the Welcome page he will redirected to a Waiting Page when he clicks on the 'Start' button.
    When the second player comes to the Welcome page he will be redirected directly to the Game page, on clicking the 'Start' button.
    So how can I redirect the first player to the Game page once the second player is available for the game?
    And if I want to manage multiple instances of my game, how can I do that?
    I am using JSP, JavaScript and MySQL to develop my project. I am new to all these tools, but I would still like to carry on with them.

    This is a bit of a challenge because of the nature of the web. Generally the web is "pull only" meaning that the browser has to initiate any interactions - the server can't push data to the browser if it wasn't asked to.
    The easiest way to solve this is using AJAX via JavaScript to periodically poll the server for any status changes. There are other ways (the Comet protocol is one) but they start to get a bit difficult and are still a bit new and not completely supported in a standards way. And to be honest they are still basically polling though in a more efficient way.
    Are you using a JavaScript framework? Most of the JavaScript frameworks that I've used have built in support for polling in the background. You'd have to have the JSP/servlet side be able to handle these polling requests from the browser and, when another person joins the game, the server indicates that and sends that back to the browser.
    As far as multiple instances I would have the server automatically pair up users as needed. So when the first player arrives he has to wait for another player. When the second player arrives a new game is created for those two players. Now a third player arrives and waits until a fourth player shows up. When player 4 joins another separate game is created. Presumably the conversation between the browser and the server will need to include a "game number" or other unique number so that the server can keep track of the games.
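
    The pairing scheme described above can be sketched server-side. This is a minimal Python sketch of the matchmaking logic only; the class and method names are illustrative, and in a real JSP/servlet application this state would live in the servlet context and be consulted by the AJAX polling handler.

```python
import itertools

class Matchmaker:
    """Pair arriving players into games: player 1 waits, player 2 completes
    game 1; player 3 waits, player 4 completes game 2; and so on. The game
    number lets polling requests identify which game's status to report."""

    def __init__(self):
        self._game_ids = itertools.count(1)
        self._waiting = None   # (player, game_id) of the player waiting, or None
        self.games = {}        # game_id -> (player_a, player_b), once started

    def join(self, player):
        """Return (game_id, started); started is False while waiting."""
        if self._waiting is None:
            game_id = next(self._game_ids)
            self._waiting = (player, game_id)
            return game_id, False
        first, game_id = self._waiting
        self._waiting = None
        self.games[game_id] = (first, player)
        return game_id, True
```

    The waiting player's browser would poll with its game number until `started` flips to true, then redirect itself to the Game page.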

  • Is It Possible to Add a Fileserver to a DFS Replication Group Without Connectivity to FSMO Roles Holder DC But Connectivity to Site DC???

    I apologize in advance for the rambling novella, but I tried to include as many details ahead of time as I could.
    I guess like most issues, this one's been evolving for a while, it started out with us trying to add a new member 
    to a replication group that's on a subnet without connectivity to the FSMO roles holder. I'll try to describe the 
    layout as best as I can up front.
    The AD only has one domain & both the forest & domain are at 2008R2 function level. We've got two sites defined in 
    Sites & Services, Site A is an off-site datacenter with one associated subnet & Site B with 6 associated subnets, A-F. 
    The two sites are connected by a WAN link from a cable provider. Subnets E & F at Site B have no connectivity to Site A 
    across that WAN, only what's available through the front side of the datacenter through the public Internet. The network 
    engineering group involved refuses to route that WAN traffic to those two subnets & we've got no recourse against that 
    decision; so I'm trying to find a way to accomplish this without that if possible.
    The FSMO roles holder is located at Site A. I know that I can define a Site C, add Subnets E & F to that site, & then 
    configure an SMTP site link between Sites A & C, but that only handles AD replication, correct? That still wouldn't allow me, for example, 
    to enumerate DFS namespaces from subnets E & F, or to add a fileserver on either of those subnets as a member to an existing
    DFS replication group, right? Also, root scalability is enabled on all the namespace shares.
    Is there a way to accomplish both of these things without transferring the FSMO roles from the original DC at Site A to, say, 
    the bridgehead DC at Site B? 
    When the infrastructure was originally setup by a former analyst, the topology was much more simple & everything was left
    under the Default First Site & no sites/subnets were setup until fairly recently to resolve authentication issues on 
    Subnets E & F... I bring this up just to say, the FSMO roles holder has held them throughout the build out & addition of 
    all sorts of systems & I'm honestly not sure what, if anything, the transfer of those roles will break. 
    I definitely don't claim to be an expert in any of this, I'll be the first to say that I'm a work-in-progress on this AD design stuff, 
    I'm all for R'ing the FM, but frankly I'm dragging bottom at this point in finding the right FM. I've been digging around
    on Google, forums, & TechNet for the past week or so as this has evolved, but no resolution yet. 
    On VMs & machines on subnets E & F when I go to DFS Management -> Namespace -> Add Namespaces to Display..., none show up 
    automatically & when I click Show Namespaces, after a few seconds I get "The namespaces on DOMAIN cannot be enumerated. The 
    specified domain either does not exist or could not be contacted". If I run a dfsutil /pktinfo, nothing shows except \sysvol 
    but I can access the domain-based DFS shares through Windows Explorer with the UNC path \\DOMAIN-FQDN\Share-Name then when 
    I run a dfsutil /pktinfo it shows all the shares that I've accessed so far.
    So either I'm doing something wrong, or, for some random large, multinational company, every subnet & fileserver one wants 
    to add to a DFS namespace has to be able to contact the FSMO roles holder? Or are those ADs broken down with a child domain 
    for each site, with the FSMO roles holder for that child domain located in each site?

    Hi,
    A DC in Site B should help. I still haven't seen any article saying that a DFS client has to connect to the PDC every time it tries to access a domain-based DFS namespace.
    Please see the following article; I pasted a part of it below:
    http://technet.microsoft.com/en-us/library/cc782417(v=ws.10).aspx
    Domain controllers play numerous roles in DFS:
    Domain controllers store DFS metadata in Active Directory about domain-based namespaces. DFS metadata consists of information about the entire namespace, including the root, root targets, links, link targets, and settings. By default, root servers that host domain-based namespaces periodically poll the domain controller acting as the primary domain controller (PDC) emulator master to obtain an updated version of the DFS metadata and store this metadata in memory.
    So other DCs need to connect to the PDC for updated metadata.
    Whenever an administrator makes a change to a domain-based namespace, the change is made on the domain controller acting as the PDC emulator master and is then replicated (via Active Directory replication) to other domain controllers in the domain.
    Domain Name Referral Cache
    A domain name referral contains the NetBIOS and DNS names of the local domain, all trusted domains in the forest, and domains in trusted forests. A DFS client requests a domain name referral from a domain controller to determine the domains in which the clients can access domain-based namespaces.
    Domain Controller Referral Cache
    A domain controller referral contains the NetBIOS and DNS names of the domain controllers for the list of domains it has cached. A DFS client requests a domain controller referral from a domain controller (in the client’s domain) to determine which domain controllers can provide a referral for a domain-based namespace.
    Domain-based Root Referral Cache
    The domain-based root referrals in this memory cache do not store targets in any particular order. The targets are sorted according to the target selection method only when requested by the client. Also, these referrals are based on DFS metadata stored on the local domain controller, not the PDC emulator master.
    Thus it seems acceptable for Site B to be briefly disconnected, as long as the cache there is still working.

  • No display due to long running Actions

    Hi,
    I have a small issue in my application.
    Please help providing the solution for the following problematic scenario faced in my application.
    My project is generally a report generation project,
    arch used: our-own architecture(similar to struts)
    Server: deployed in tomcat server..
    On clicking a button from X.jsp, it triggers the action with the request = REQ1
    so in my configuration Xml file it checks for it and executes the corresponding Action classes and it should display the Presentation(Jsp)
    What happens is, the action class execution takes more than 10 minutes, because of which I get a blank screen.
    From the action to the JSP it executes without any error, which I can see in the log files.
    Actions taken:
    Improved the performance of the query, but couldn't reduce the time by much.
    The session timeout is 180 min, and the connection timeout is set to an hour - no change. It seems this doesn't have anything to do with it.
    My observation:
    I guess, as per the architecture, it executes all the way to the JSP, but it seems the HttpRequest times out because of the long-running action.
    Please help me find the cause and an approach to resolve this issue.
    Note: there's no problem with waiting up to 10 minutes, but I need to get the values (reports) on the screen.
    Thanks in advance.....

    The design is inappropriate.
    Long-running processes, and specifically reports, shouldn't rely on session connections.
    Instead, the following should be done:
    - On process initiation (user request), post a task request to a task request queue and return a unique identifier to the requestor.
    - A separate process works through the tasks in the queue.
    - The requestor (such as a GUI) periodically polls using the identifier to see if the task is complete.
    - When the task is complete, the requestor asks for the results (and the GUI displays them).
    The problem with extending timeouts is that there are valid error scenarios where long timeouts mean resources are not returned to the system for long periods when they should have been.
    Per your current solution, there is probably some other timeout somewhere - could be several. The JSP forum might be a better place to ask where all of the possible timeouts could be.
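
    The queue-and-poll design above can be sketched as follows. `ReportQueue` and its method names are illustrative, not an existing API; in a servlet application the queue would live in application scope, with the browser polling the ticket over AJAX.

```python
import threading
import uuid

class ReportQueue:
    """Run report tasks on worker threads; clients poll by ticket id."""

    def __init__(self):
        self._results = {}
        self._lock = threading.Lock()

    def submit(self, task, *args):
        """Start the task in the background; return a ticket to poll with."""
        ticket = str(uuid.uuid4())

        def run():
            result = task(*args)           # the long-running report job
            with self._lock:
                self._results[ticket] = result

        threading.Thread(target=run, daemon=True).start()
        return ticket                      # request returns immediately

    def poll(self, ticket):
        """Return (done, result); result is None until the task finishes."""
        with self._lock:
            if ticket in self._results:
                return True, self._results[ticket]
        return False, None
```

    The HTTP request that kicks off the report returns the ticket at once, so no browser connection is ever held open for ten minutes; the page re-requests `poll(ticket)` until it gets the result.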

  • Do I need to use a semaphore when reading/writing a functional global from reentrant VIs?

    I have a program that spawns multiple reentrant VIs running simultaneously.  These VIs write their status to a functional global.  The VI that monitors them periodically polls this global to check out how they're doing and takes action based on their collective status.  (BTW, I'll mention that this monitoring VI is NOT the parent that spawned the reentrants, just in case this might affect memory management as it pertains to my question.)
    Anyway, 90% of the time, stuff goes off without a hitch.  However, once in a while the whole thing hangs.  I'm wondering if there's any chance that I've overlooked something and that some kind of collision is occurring at the global.  If that's the case, then should I be setting a semaphore for the global read/writes?
    And, if this is a problem, then there is something deep about functional globals that I don't yet understand.  My notion of them is that they should negate the need for a semaphore, since there is only one global instance, which cannot be simultaneously called by the various reentrants.  Indeed, this is arguably THE WHOLE POINT about functional globals, is it not?  Or am I missing something?
    Thanks,
    Nick 
    "You keep using that word. I do not think it means what you think it means." - Inigo Montoya
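
    By way of analogy in Python (since LabVIEW block diagrams don't paste into a forum), the serialization property described above looks like the sketch below: if every read and write goes through a single non-reentrant accessor, callers cannot interleave inside it, which is the usual argument that a functional global needs no extra semaphore. The class name is illustrative, not LabVIEW terminology.

```python
import threading

class StatusStore:
    """Analogue of a functional global: one instance, serialized access."""

    def __init__(self):
        self._lock = threading.Lock()
        self._status = {}

    def set_status(self, worker, state):
        with self._lock:                  # writers are serialized, like calls
            self._status[worker] = state  # into a single non-reentrant VI

    def snapshot(self):
        with self._lock:                  # the monitor gets a consistent view
            return dict(self._status)
```
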

    Thanks Uwe,
    This is a good hunch.  However, functional globals typically run at "subroutine" priority.  With this priority, it is not possible to select a specific execution system; it is always "same as caller."
    I will try your suggestion by switching to "time-critical" priority.  However, I do not know if this could lead to a different set of issues (non-determinism?).  It will probably take a little while to hear back from my guys on whether this makes a difference or not, because the error is sporadic, and sometimes doesn't come along for quite a while.
    While probing all of this, I looked at the execution settings for my reentrant VI.  It has standard settings: "normal" priority, running in the "same as caller" execution system.  My impression has always been that LV creates the clones with unique names.  This allows the clones to be in the same execution system with no problem, and the fact that the execution dialog allows me to choose "same as caller" for a reentrant VI supports this assertion.  This is logical, since there could potentially be many more clones than available execution systems.  "Preallocate clone for each instance" is selected, which is what I want, I think, though I don't know if it matters in my application.
    In summary, I am trying out your suggestion, but with skepticism.  Any other suggestions from anyone out there?  Any misunderstandings on my part that need clarification?
    Thanks,
    Nick 
    "You keep using that word. I do not think it means what you think it means." - Inigo Montoya
