Best practice for monitoring MXE3500 v3.2.1

Hi all
We have recently upgraded our MXE3500 to v3.2.1. All has gone well, but I am looking for the best way to monitor the system health of the device for our support teams.
The Show and Share and DMM servers come with SNMP monitoring, but I can't see the equivalent for the MXE.
Due to the locked-down nature of the Windows VM in the 3.2.1 software, I do not want to install our standard OS monitoring software (BMC Patrol), as Cisco has advised not to install any additional software.
Has anyone got any ideas on this?
Adam
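
Not official Cisco guidance, but since the VM is locked down, one agentless option is to poll the MXE from an external monitoring host: check ICMP reachability and that the web UI port still accepts connections. A minimal sketch (the hostname and port are placeholders to adapt):

import java.io.IOException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;

// Minimal agentless health probe: ICMP-style reachability plus a TCP
// connect to the appliance's web UI port. Host and port are placeholders.
public class MxeProbe {
    public static void main(String[] args) throws IOException {
        String host = "mxe3500.example.com";  // hypothetical hostname
        int port = 443;                        // assumed HTTPS UI port

        InetAddress addr = InetAddress.getByName(host);
        boolean pingOk = addr.isReachable(3000);  // may fall back to a TCP probe without privileges

        boolean portOk;
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), 3000);
            portOk = true;
        } catch (IOException e) {
            portOk = false;
        }

        System.out.printf("ping=%s port %d=%s%n", pingOk, port, portOk ? "open" : "closed");
        if (!pingOk || !portOk) {
            System.exit(1);  // non-zero exit lets a scheduler raise an alert
        }
    }
}

Run it from cron or Task Scheduler and alert on a non-zero exit code.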

Hello,
Of course, a probe that is too sensitive might cause failover when only some packets get lost, but remember that the whole purpose of this is to keep your network's downtime as low as possible.
If you tune these parameters, failover will simply be triggered on a different time basis.
The following is taken from a Cisco document. Tuned as shown, 3 probe packets are sent every 10 seconds, and all 3 must fail for the SLA operation to go down and failover to occur. This Cisco configuration example looks good, but some network engineers would rather use a tighter timeline. I've added the scheduling and tracking commands the probe needs to take effect; the gateway address is only a placeholder.
sla monitor 123
type echo protocol ipIcmpEcho 10.0.0.1 interface outside
num-packets 3
frequency 10
! schedule the probe, track it, and tie the default route to the track (placeholder gateway)
sla monitor schedule 123 life forever start-time now
track 1 rtr 123 reachability
route outside 0.0.0.0 0.0.0.0 10.0.0.254 1 track 1
Regards,
Remember to rate all of the helpful posts. (If you need assistance with how to rate a post, just let me know.)

Similar Messages

  • Best Practice for monitoring database targets configured for Data Guard

    We are in the process of migrating our DB targets to 12c Cloud Control. 
    In our current 10g environment the Primary Targets are monitored and administered by OEM GC A, and the Standby Targets are monitored by OEM GC B.  Originally, I believe this was because of proximity and network speed, and over time it evolved to a Primary/Standby separation.  One of the greatest challenges in this configuration is keeping OEM jobs in sync on both sides (in case of switchover/failover).
    For our new OEM CC environment we are setting up CC A and CC B. However, I would like to determine whether it would be smarter to monitor all DB targets (Primary and Standby) from the same CC console; in other words, monitor and administer DB Primary and Standby from the same OEM CC console. I am trying to determine the best practice. I am not sure whether administering a switchover from Cloud Control from Primary to Standby requires that both targets be monitored in the same environment or not.
    I am interested in feedback.   I am also interested in finding good reference materials (I have been looking at Oracle documentation and other documents online).   Thanks for your input and thoughts.  I am deliberately trying to keep this as concise as possible.

    OMS is a tool; it is not required in order to monitor your primary and standby. That is what I meant by the comment.
    The reason you need the same OMS to monitor both the primary and the standby is that the Data Guard administration screen shows both targets. You also get the option of performing switchovers and failovers, as well as converting the primary or standby. One of the options is also to move all jobs scheduled against the primary over to the standby during a switchover or failover.
    There is no document that states you must have all targets on one OMS, but it is the best method, given the reason OMS exists: to keep all targets in a central repository. If you start having different OMS servers and repositories, you will need to log into each OMS separately to administer its targets.

  • Best Practice for Monitoring, on VPS 2012 Server Standard. Extended Events or Profiler?

    Hi,
    What tools do you use to determine if you should tweak SQL Server configuration and optimize code route, or simply bump up your virtual resources? Can someone share a bag of Extended Events to monitor at the VPS level?  
    I'm a reasonably decent SQL developer but never advanced far with DBA efforts, especially once the mainstream went virtual. It seemed to me that all of SQL Server's flexibility in managing disk and memory went out the window once everything became 'shared', NATed,
    and Plesk-managed. So I basically dropped out of the conversation and built stuff with SSIS and T-SQL.
    Now I'm charged with assessing a bottleneck on a VPS Windows 2012 Standard running SQL 2012 Express. I've read that Profiler and traces are deprecated, and I've looked a bit at the server's extended events in the hosted environment. I have not run anything yet.
    My question: does it make sense to think in terms of 'levels' in deciding what to monitor? I consider SQL Server as one level, then the Windows Server, and finally the virtual level. What I'm getting at is: sure, I can monitor SQL Server with a profiling tool,
    but it won't know SQL is on a VPS. So am I missing something?
    There used to be a day when we had a dedicated physical box for SQL Server; we ran traces using Profiler and got good clues on how to improve performance. In today's VPS world we can use sliders to increase virtual memory and disk space.
    Which Extended Events at the VPS level would tell me whether SQL Server is struggling with its limited 1 GB of virtual memory? I realize this is not a direct question, but hopefully someone will point this developer in the right direction.
    John

    Hi John,
    From the SQL point of view it doesn't really matter whether the box is physical or virtual. So if you feel there are performance issues with SQL and you are well versed in SQL Profiler troubleshooting, go ahead with that. If you feel performance is good, then
    you know where to look instead.
    I would first determine whether it is really a problem with SQL before troubleshooting the SQL side, and I use perfmon counters to do that.
    I would also look at the SQL error log to see if there are any obvious errors.
    I haven't used extended events much, so I'll leave that for others to comment on. :)
    Regards, Ashwin Menon My Blog - http://sqllearnings.com
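
    Since perfmon access can be limited on a hosted VPS, the same counters perfmon exposes can also be read from inside SQL Server through the sys.dm_os_performance_counters DMV. A minimal sketch, assuming the Microsoft JDBC driver is on the classpath; the connection details are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Reads a few memory-pressure counters from SQL Server's own DMV,
    // which works even when you have no perfmon access on a hosted VPS.
    public class SqlCounterProbe {
        public static void main(String[] args) throws Exception {
            // Placeholder connection string -- adjust server and credentials.
            String url = "jdbc:sqlserver://myvps.example.com;databaseName=master;user=monitor;password=secret";
            String sql =
                "SELECT object_name, counter_name, cntr_value " +
                "FROM sys.dm_os_performance_counters " +
                "WHERE counter_name IN ('Page life expectancy', " +
                "                       'Total Server Memory (KB)', " +
                "                       'Target Server Memory (KB)')";
            try (Connection con = DriverManager.getConnection(url);
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(sql)) {
                while (rs.next()) {
                    System.out.printf("%s | %s = %d%n",
                            rs.getString(1).trim(), rs.getString(2).trim(), rs.getLong(3));
                }
            }
        }
    }

    A persistently low Page life expectancy, with Total Server Memory pinned at Target, is a reasonable first sign that the instance's memory cap rather than your code is the bottleneck.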

  • Best Practice for monitoring RAC 11gR2

    Hi,
    I have RAC 11gR2+ASM on two nodes.
    I would like to get your advice on the most critical things I should monitor regarding the RAC components.
    Thanks

    Hi,
    Here is some basic monitoring:
    1) Is the interconnect switch (private network) working properly?
    2) Check which instances are running on which nodes.
    3) Check whether ASM, the listeners, nodeapps, GSD, VIP, etc. are running.
    4) Is each instance carrying its planned load (balanced)?
    5) Is shared storage access equal across nodes?
    6) Interconnect load and latency.
    7) High CPU usage - are the Oracle processes getting enough resources?
    A small script can automate the status checks; see the sketch below.
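    A minimal sketch of automating checks 2) and 3) with the standard 11gR2 clusterware CLIs. It assumes it runs on a cluster node as a user with srvctl and crsctl on the PATH; the database name is a placeholder:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    // Runs the standard 11gR2 clusterware status commands and echoes their
    // output so a scheduler can diff the result and alert on changes.
    public class RacStatusCheck {
        static void run(String... cmd) throws Exception {
            Process p = new ProcessBuilder(cmd).redirectErrorStream(true).start();
            try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = r.readLine()) != null) System.out.println(line);
            }
            p.waitFor();
        }

        public static void main(String[] args) throws Exception {
            run("srvctl", "status", "database", "-d", "ORCL"); // instances per node; ORCL is a placeholder
            run("crsctl", "stat", "res", "-t");                // ASM, listeners, VIPs, and other resources
        }
    }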
    Thanks

  • Best Practices for ASA 5500 Device Monitoring

    I have looked high and low and am unable to find anything on this topic. I am hoping that somebody here may be able to share some insight into what are considered the best practices for monitoring ASAs, specifically the 5510 with the Sec+ license.
    My current monitoring application keeps reporting that outbound interface buffer usage is too high, but there aren't any performance issues; I believe the thresholds are just set absurdly low.
    Thank you in advance for any assistance.

    Hi James,
    You probably won't be able to find any all-encompassing documentation for these types of best practices that covers every scenario. A better approach would be to define exactly which items you'd like to monitor, and we can provide some guidance on how best to get that working for you.
    -Mike
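
    As a concrete starting point, most ASA health metrics are reachable over SNMP once snmp-server is configured. A minimal polling sketch using the open-source SNMP4J library; the address, community string, and OID are assumptions to adapt (verify the OID against the CISCO-PROCESS-MIB for your platform):

    import org.snmp4j.CommunityTarget;
    import org.snmp4j.PDU;
    import org.snmp4j.Snmp;
    import org.snmp4j.event.ResponseEvent;
    import org.snmp4j.mp.SnmpConstants;
    import org.snmp4j.smi.GenericAddress;
    import org.snmp4j.smi.OID;
    import org.snmp4j.smi.OctetString;
    import org.snmp4j.smi.VariableBinding;
    import org.snmp4j.transport.DefaultUdpTransportMapping;

    // Polls a single OID (5-minute CPU average from CISCO-PROCESS-MIB)
    // from an ASA. Address, community, and OID are placeholders.
    public class AsaCpuPoll {
        public static void main(String[] args) throws Exception {
            Snmp snmp = new Snmp(new DefaultUdpTransportMapping());
            snmp.listen();

            CommunityTarget target = new CommunityTarget();
            target.setAddress(GenericAddress.parse("udp:192.0.2.1/161")); // ASA mgmt IP (placeholder)
            target.setCommunity(new OctetString("public"));               // read-only community
            target.setVersion(SnmpConstants.version2c);
            target.setRetries(2);
            target.setTimeout(1500);

            PDU pdu = new PDU();
            pdu.setType(PDU.GET);
            // cpmCPUTotal5minRev for the first CPU entry -- verify for your platform
            pdu.add(new VariableBinding(new OID("1.3.6.1.4.1.9.9.109.1.1.1.1.8.1")));

            ResponseEvent resp = snmp.get(pdu, target);
            if (resp.getResponse() != null) {
                System.out.println("CPU 5min avg: " + resp.getResponse().get(0).getVariable() + "%");
            } else {
                System.out.println("SNMP timeout -- raise an alert");
            }
            snmp.close();
        }
    }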

  • Basic Strategy / Best Practices for System Monitoring with Solution Manager

    I am very new to SAP and the Basis group at my company. I will be working on a project to identify the best practices of System and Service level monitoring using Solution Manager. I have read a good amount about SAP Solution Manager and the concept of monitoring but need to begin mapping out a monitoring strategy.
    We currently utilize the RZ20 transaction and basic CCMS monitors, such as watching for update errors, availability, short dumps, etc. What else should be monitored in order to proactively find possible issues? Are there any best practices you have found when implementing monitoring for new solutions added to the SAP landscape? What are common things we would want to monitor across, say, ERP, CRM, SRM, etc.?
    Thanks in advance for any comments or suggestions!

    Hi Mike,
    Did you try the following link?
    If not, it may be useful to some extent:
    http://service.sap.com/bestpractices
    ---> Cross-Industry Packages ---> Best Practices for Solution Management
    You have quite a few documents there - those on BPM may also cover Solution Monitoring aspects.
    Best regards,
    Srini
    Edited by: Srinivasan Radhakrishnan on Jul 7, 2008 7:02 PM

  • Best Practices for NCS/PI Server and Application Monitoring question

    Hello,
    I am deploying a virtual instance of Cisco Prime Infrastructure 1.2 (1.2.1.012) on an ESX infrastructure, in an enterprise environment. I have questions about the best practices for monitoring this appliance. I want to monitor application failures (services down, DB issues) and "hardware" (I understand this is a virtual machine, but statistics on the filesystem and CPU/memory are good).
    First, I have enabled the snmp-server via the CLI and set the SNMP trap host destination. I have also created a notification receiver for SNMP traps inside the NCS GUI and enabled the "System" alarm type, which includes alarms like NCS_DOWN and "PI database is down". I am trying to understand the difference between enabling SNMP-SERVER HOST via the CLI and setting the notification destination in the GUI. Also, how can I generate an NCS_DOWN alarm in my lab? Running "ncs stop" does not generate any alarms, and I have not been able to find much information on how to test this.
    Second, how and which processes should I be monitoring from the management station? I cannot easily identify the main NCS processes in the output of ps -ef when logged into the shell as root.
    Thanks guys!
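
    Whichever way the trap destination is configured, a quick way to test the delivery path from your lab is to run a throwaway trap listener on the management station and watch what arrives. A sketch using the open-source SNMP4J library (binding to udp/162 usually requires root; the bind address is a placeholder):

    import org.snmp4j.CommandResponder;
    import org.snmp4j.CommandResponderEvent;
    import org.snmp4j.Snmp;
    import org.snmp4j.smi.GenericAddress;
    import org.snmp4j.smi.UdpAddress;
    import org.snmp4j.transport.DefaultUdpTransportMapping;

    // Listens for SNMP traps on udp/162 and prints whatever arrives,
    // which confirms end-to-end trap delivery from the appliance.
    public class TrapListener {
        public static void main(String[] args) throws Exception {
            UdpAddress bind = (UdpAddress) GenericAddress.parse("udp:0.0.0.0/162");
            Snmp snmp = new Snmp(new DefaultUdpTransportMapping(bind));
            snmp.addCommandResponder(new CommandResponder() {
                @Override
                public void processPdu(CommandResponderEvent event) {
                    System.out.println("Trap from " + event.getPeerAddress()
                            + ": " + event.getPDU());
                }
            });
            snmp.listen();
            System.out.println("Waiting for traps on udp/162 ...");
            Thread.currentThread().join();  // keep the JVM alive
        }
    }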

    Amihan_Zerrudo wrote:
    1.) What is the cost of having the scope in a <jsp:useBean> tag set to 'session'? I am aware that there is a list of scopes like page, application, etc., and that if I use 'session' my variable will live for as long as that session is alive. (Did I get this right?)
    You should look to the functional requirements rather than the costs. If the bean needs to be session scoped (e.g., it maintains the logged-in user), then make it so. If it just needs to be request scoped (e.g., single-page form data), then keep it request scoped.
    2.) If the JSP page where I use that <useBean> is to be accessed hundreds of times a day, will my server resources cope? Right now I am using the Sun Glassfish Server.
    It will certainly eat resources. Just supply enough CPU speed and memory to the server. You cannot expect a webserver running on a Pentium 500 MHz with 256 MB of memory to flawlessly serve 100 simultaneous users in the same second. But you may expect it to serve 100 users per 24 hours.
    3.) Can you suggest best practice in memory management given the architecture I described above?
    Just write code so that it doesn't unnecessarily eat memory. Only allocate memory if your application needs to. You should let the hardware depend on the application requirements, not let the application depend on the hardware specs.
    4.) Also, I have implemented connection pooling in my architecture, but my application is to be used by thousands of clients every day. Can the Sun Glassfish Server take care of that, or will I have to purchase a powerful server?
    Glassfish is just application server software; it is not server hardware. Your concerns are rather hardware related.

  • Best Practices for MDS Monitoring

    We are deploying SolarWinds and would like some tips on what would be useful to monitor on MDS switches. Of course, the normal port utilization, CPU, and interface counters are already set up, but I thought I would check whether anyone has suggestions for any other OIDs I should look into.
    Thanks!

  • Best Practices for Using Photoshop (and Computing in General)

    I've been seeing some threads that lead me to realize that not everyone knows the best practices for doing Photoshop work on a computer, or for conscientious computing in general. I thought it might be a good idea for those of us with some experience to contribute and discuss best practices for making the Photoshop and computing experience more reliable and enjoyable.
    It'd be great if everyone would contribute their ideas, and especially their personal experience.
    Here are some of my thoughts on data integrity (this shouldn't be the only subject of this thread):
    Consider paying more for good hardware. Computers have almost become commodities, and price shopping abounds, but there are some areas where spending a few dollars more can be beneficial.  For example, the difference in price between a top-of-the-line high performance enterprise class hard drive and the cheapest model around with, say, a 1 TB capacity is less than a hundred bucks!  Disk drives do fail!  They're not all created equal.  What would it cost you in aggravation and time to lose your data?  Imagine it happening at the worst possible time, because that's exactly when failures occur.
    Use an Uninterruptable Power Supply (UPS).  Unexpected power outages are TERRIBLE for both computer software and hardware.  Lost files and burned out hardware are a possibility.  A UPS that will power the computer and monitor can be found at the local high tech store and doesn't cost much.  The modern ones will even communicate with the computer via USB to perform an orderly shutdown if the power failure goes on too long for the batteries to keep going.  Again, how much is it worth to you to have a computer outage and loss of data?
    Work locally, copy files elsewhere.  Photoshop likes to be run on files on the local hard drive(s).  If you are working in an environment where you have networking, rather than opening a file right off the network, then saving it back there, consider copying the file to your local hard drive then working on it there.  This way an unexpected network outage or error won't cause you to lose work.
    Never save over your original files.  You may have a library of original images you have captured with your camera or created.  Sometimes these are in formats that can be re-saved.  If you're going to work on one of those files (e.g., to prepare it for some use, such as printing), and it's a file type that can be overwritten (e.g., JPEG), as soon as you open the file save the document in another location, e.g., in Photoshop .psd format.
    Save your master files in several places.  While you are working in Photoshop, especially if you've done a lot of work on one document, remember to save your work regularly, and you may want to save it in several different places (or copy the file after you have saved it to a backup folder, or save it in a version management system).  Things can go wrong and it's nice to be able to go back to a prior saved version without losing too much work.
    Make backups.  Back up your computer files, including your Photoshop work, ideally to external media.  Windows now ships with a quite good backup system, and external USB drives with surprisingly high capacity (e.g., Western Digital MyBook) are very inexpensive.  The external drives aren't that fast, but a backup you've set up to run late at night can finish by morning, and it will be there if/when you have a failure or loss of data.  And if you're really concerned with backup integrity, you can unplug an external drive and take it to another location.
    This stuff is kind of "motherhood and apple pie" but it's worth getting the word out I think.
    Your ideas?
    -Noel

    APC Back-UPS XS 1300.  $169.99 at Best Buy.
    Our power outages here are usually only a few seconds; this should give my server about 20 or 25 minutes run-time.
    I'm setting up the PowerChute software now to shut down the computer when 5 minutes of power is left.  The load with the monitor sleeping is 171 watts.
    This has surge protection and other nice features as well.
    -Noel

  • Best practice on monitoring Endeca health / defining outage

    (This is a double post from the Endeca Experience Management forum)
    I am looking for best practices on how to define an Endeca service outage and monitor the health of the system. I understand this depends on your user requirements and may vary from customer to customer. Specifically, what criteria do you use to notify your engineers that there is a problem? We have our load balancers pinging the dgraphs on an interval; however, the ping operation is not sufficient in our use case. We are also experimenting with running a "low cost" query against the dgraphs on an interval and using query latency thresholds to determine an outage. I want to hear from people in the field running large commercial web sites about your best practices for monitoring and notifying on the health of the system.
    Thanks.
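
    A minimal sketch of the interval probe described above: time a cheap HTTP request against each dgraph and treat a slow or failed response as an outage signal. The /admin?op=ping URL is the usual dgraph admin ping endpoint, but verify it for your version; host, port, and threshold are placeholders:

    import java.net.HttpURLConnection;
    import java.net.URL;

    // Times a lightweight request against a dgraph and flags it when the
    // response is slow or failing. Meant to run on an interval from a
    // monitoring host or cron job.
    public class DgraphProbe {
        public static void main(String[] args) throws Exception {
            String url = "http://dgraph1.example.com:8000/admin?op=ping"; // placeholder host/port
            long thresholdMillis = 500;                                    // tune to your SLA

            long start = System.nanoTime();
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            conn.setConnectTimeout(2000);
            conn.setReadTimeout(2000);
            int code = conn.getResponseCode();
            long elapsed = (System.nanoTime() - start) / 1_000_000;

            System.out.printf("status=%d latency=%dms%n", code, elapsed);
            if (code != 200 || elapsed > thresholdMillis) {
                System.exit(1);  // non-zero exit signals the outage/latency condition
            }
        }
    }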

    The performance metrics should help you analyse the queries and fine-tune the system.
    Here are a few best practices:
    1. Reduce the number of components per page
    2. Avoid complex LQL queries
    3. Keep the LQL threshold small
    4. Display the minimum number of columns needed

  • Best practice for a deployment (EAR containing WAR/EJB) in a productive environment

    Hi there,
    I'm looking for some hints regarding best-practice deployment in a productive
    environment (currently we are not using a WLS cluster).
    We are using ANT for building, packaging, and (dynamic) deployment (via weblogic.Deployer)
    in the development environment, and this works fine (in the meantime).
    From my point of view, I would prefer this kind of deployment not only
    for development but also for the productive system.
    But I found some hints in books whose authors prefer static deployment
    for the p-system.
    My questions:
    Could anybody provide me with links to whitepapers regarding best practice
    for deployment into a p-system?
    What is your experience with the new two-phase deployment coming with WLS 7.0?
    Is it really a good idea to use static deployment, and what is the advantage of
    that kind of deployment?
    THX in advance
    -Martin
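
    For reference, the dynamic deployment step described above is typically a single weblogic.Deployer invocation that ANT wraps; a sketch with placeholder host, credentials, and paths (verify the flags against your WLS version's documentation):

    java weblogic.Deployer -adminurl t3://adminhost:7001 -username weblogic -password welcome1 -deploy -name myapp -source /builds/myapp.ear -targets myserver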

  • Best Practice for Production environment

    Hello everyone,
    Can someone share the best practice for a production environment? Or is there an SAP standard best practice to follow in a production landscape?
    I understand there are best practices available for implementation, migration, and upgrade, but I was unable to find one for the productive landscape.
    thanks.

    Hi Siva,
    What best practice are you looking for? If you can be specific in your question, we can provide an appropriate response.
    From my Basis experience, some of the best practices:
    1) The productive landscape should provide high availability to the business. For this you may set up DR, HA, or both.
    2) It should have backups configured, for which a restore has already been tested.
    3) It should have all monitoring set up, viz. application, OS, and DB.
    4) The productive client should not be modifiable.
    5) Users in the production landscape should have appropriate authorization based on SoD. There should not be any SoD conflicts.
    6) Transports to production should be highly controlled; any transport should be moved only with appropriate Change Board approvals.
    7) The relevant database and OS security parameters should be tested before go-live and then enabled.
    8) Pre-go-live and post-go-live checks should have been performed on the production system.
    9) EWA should be configured at least for the production system.
    10) Production system availability using DR should have been tested.
    Hope this helps.
    Regards,
    Deepak Kori

  • Best practice for backing bean population? (also, ActionListener RANT)

    Hello,
    I am about 3/4 of the way through development of a small-to-medium-size JSF application. Sometimes I really like JSF, but much of the time I am left puzzled or frustrated for hours trying to find workarounds for JSF's bugs/glitches and design flaws.
    For example, early on I was impressed with how easy it was to invoke a method from a page using an actionListener. Now that I'm actually building things with JSF, the actionListener functionality still seems cool, but incredibly half-baked. I find myself using request parameters LIKE CRAZY to work around the fact that JSF doesn't support passing parameters directly to backing bean methods. This feels awkward and wrong, considering that JSF is intended to abstract away the HTTP underpinnings. To add insult to injury, I often have to iterate through ALL of the request parameters looking for one whose id ends with my desired property name (since JSF prepends its own identifiers). I don't like doing things in a hacky way. This seems very hacky, and I feel dirty doing it.
    So, my first question is: what is the best practice for populating backing beans? How do others accomplish this? I can think of several other approaches, but none feel less hacky.
    Second, are there plans in the next spec (please say there are) to allow parameters to be passed to backing bean methods? If not, WHY THE HECK NOT?
    Even though JSF expert group people have been conspicuously absent from this forum of late, I'd really appreciate responses from you as well.
    Thank you for your thoughts.

    Hi BrownBear,
    I've been using JSF for about 6 months now and I'd be glad to help as much as I can.
    Concerning parameters, I'm not sure what your issue is, but I use the f:param tag to pass them. If you could post an example of what you are trying to do, I could see exactly what your issue is. Maybe f:param can't help you.
    As for best practice for populating backing beans, I personally try to let JSF do as much as possible. For example, if I have a backing bean with five properties, I make sure that they are all on the JSP page the bean serves. If one of the properties is just there as an ID, let's say a Person ID (DB row key), then I put it on my JSP page as a hidden input field. I do the same with properties that are only for display, if I want them back in my bean when the request comes back.
    Hope this helps somehow. Please feel free to ask specific questions about your specific problem; I'll monitor this post and transfer to you the little JSF experience I have.
    I'm pretty happy with JSF as it is, but it sure needs improvements. :) What the heck, it's version 1.01 after all, and the next release should be a great one with the integration of JSTL.
    Cheers
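
    To make the f:param suggestion concrete, here is a hypothetical sketch of the bean side; the markup that would pass the parameter is shown in the comment, and all names are made up for illustration:

    import javax.faces.context.FacesContext;

    // Backing bean that reads a parameter passed from the page with f:param.
    // The page side would look something like:
    //   <h:commandLink action="#{personBean.select}">
    //     <f:param name="personId" value="42"/>
    //   </h:commandLink>
    public class PersonBean {
        private String personId;

        // Action method invoked by the command link; pulls the f:param value
        // out of the request parameter map by its plain name (no JSF prefix).
        public String select() {
            personId = (String) FacesContext.getCurrentInstance()  // cast needed on JSF 1.1's untyped map
                    .getExternalContext()
                    .getRequestParameterMap()
                    .get("personId");
            return "personDetail";  // hypothetical navigation outcome
        }

        public String getPersonId() { return personId; }
    }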

  • Best practice to monitor 10gR3 OSB performance using JMX API?

    Hi guys,
    I need some advice on the best practice for monitoring 10gR3 OSB performance using the JMX API.
    Just to show that I have done my homework, I managed to get the JMX sample code from
    http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/jmx_monitoring/example.html#wp1109828
    working.
    The following is the list of options I am thinking about:
    * Setup: I have a cluster of 1 admin server with 2 managed servers, and each managed server runs an instance of OSB
    * What I try to achieve:
    - use the JMX API to collect OSB stats data periodically, as in the sample code above, then save the data as a record to a database table
    Options/ideas:
    1. Simplest approach: run a modified version of the JMX sample on the Admin Server to save stats data to the database
    regularly. I can't see problems with this one ...
    2. Use WLI to schedule the task of collecting stats data regularly. This may be overkill if option 1 above is good enough for production.
    3. Deploy a simple web app on the Admin Server, say a simple servlet that displays a page to start/stop data collection and configure
    the collection interval for the timer
    What approach would you experts recommend?
    BTW, the caveats of using JMX in http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/jmx_monitoring/concepts.html#wp1095673
    say:
         Oracle strongly discourages using this API in a concurrent manner with more than one thread or process. This is because a reset performed in
         one thread or process is not visible to another threads or processes. This caveat also applies to resets performed from the Monitoring Dashboard of
         the Oracle Service Bus Console, as such resets are not visible to this API.
    Under what scenario would I be breaking this rule? I am a little worried about its statement
         discourages using this API in a concurrent manner with more than one thread or process
    Thanks in advance,
    Sam

    Hi Manoj,
    Thanks for getting back. I am afraid configuring the aggregation interval from the Dashboard doesn't solve the problem, as I need to collect stats data per endpoint URI on an hourly or daily basis, then output it to CSV files so line graphs can be drawn for chosen applications.
    Just for those who may be interested: it is not possible to use SQL to query the database tables to extract OSB stats for a specified time period, say 9am - 5pm. I raised a support case already, and the response I got back was 'No'.
    That means using the JMX API will be the way to go :)
    Has anyone actually done this kind of OSB stats report and care to give some pointers?
    I am thinking of using 1 or 7 days as the aggregation interval set in the Dashboard of the OSB admin console, then collecting stats data via JMX (as described in the previous link) hourly, using the WebLogic Server JMX Timer Service as described in
    http://download.oracle.com/docs/cd/E12840_01/wls/docs103/jmxinst/timer.html instead of Java's Timer class.
    Not sure if this is the best practice.
    Thanks,
    Regards,
    Sam
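
    A minimal sketch of the periodic-collection idea, using only the standard javax.management remote API and a scheduled executor. The service URL, credentials, and the example MBean/attribute are placeholders; an OSB collector would instead look up the ServiceDomainMBean from the sample linked above:

    import java.io.FileWriter;
    import java.time.Instant;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    // Connects to a remote JMX endpoint, reads one attribute every hour,
    // and appends it to a CSV file. All connection details are placeholders.
    public class JmxStatsCollector {
        public static void main(String[] args) throws Exception {
            Map<String, Object> env = new HashMap<>();
            env.put(JMXConnector.CREDENTIALS, new String[] {"monitor", "secret"}); // placeholder creds
            JMXServiceURL url =
                new JMXServiceURL("service:jmx:rmi:///jndi/rmi://adminhost:9999/jmxrmi"); // placeholder
            JMXConnector connector = JMXConnectorFactory.connect(url, env);
            MBeanServerConnection mbs = connector.getMBeanServerConnection();

            // Example MBean/attribute; an OSB collector would query the
            // ServiceDomainMBean statistics here instead.
            ObjectName heap = new ObjectName("java.lang:type=Memory");

            ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
            timer.scheduleAtFixedRate(() -> {
                try (FileWriter out = new FileWriter("osb-stats.csv", true)) {
                    Object used = mbs.getAttribute(heap, "HeapMemoryUsage");
                    out.write(Instant.now() + "," + used + "\n");
                } catch (Exception e) {
                    e.printStackTrace();  // log and keep the schedule alive
                }
            }, 0, 1, TimeUnit.HOURS);
        }
    }

    A single-threaded executor also sidesteps the documentation's caveat quoted above, since all reads (and any resets) happen from one thread.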

  • What are the best practices for audit report for SharePoint 2013 farm ?

    Hello,
    I am looking for the best practices for audit reporting in a SharePoint 2013 farm. Can anyone please provide me with a checklist, tools, or guidelines on the same?
    your help will be much appreciated.
    Thanks and Regards,
    Dipti Chhatrapati

    This is quite an open-ended question. A SharePoint farm should be well maintained as per:
    1. Microsoft's recommendations on topology, hardware and software requirements, operational procedures, and most importantly capacity guidelines:
    http://technet.microsoft.com/en-us/library/ff758645(v=office.15).aspx
    http://technet.microsoft.com/en-us/library/cc262787(v=office.15).aspx
    2. The organisation's IT policies and procedures: farm configuration, workload, and monitoring
    http://technet.microsoft.com/en-us/library/ff758658(v=office.15).aspx
    http://technet.microsoft.com/en-us/library/ee748651(v=office.15).aspx
    3. Industry best practices
    I would suggest starting to think along these lines and creating a plan for your SharePoint farm.
    You can then create PowerShell scripts to run these reports at a certain frequency to find changes, any deviation from the standard, and the health of the entire farm.
    Hope this helps!!
    I LOVE MS..... Thanks and Regards, Kshitiz (Posting is provided "AS IS" with no warranties, and confers no rights.)
