SCOM Performance Collection

All,
We are using SCOM 2012 and trying to create a Windows performance collection rule for a process.
I know the name of the EXE file. During the rule creation wizard, when we enter the name of any Windows client computer to browse, we get an error.
Is it a firewall/ports issue or something else?
I can browse to servers that are on the same network as the clients, but I cannot browse the clients in the wizard.

Hi,
Please also try the following suggestions.
1. Verify that the management server can ping the computer on which you want to install the agent, using its fully qualified domain name (FQDN). If the ping is successful, click Start, click Run, and then type the FQDN. If the management server cannot browse to the target computer, check the firewall settings (a connectivity sketch follows below).
2. Perform a manual agent installation.
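If it helps, here is a minimal, hedged connectivity check you could run from the management server. The client FQDN is a placeholder, and the ports listed (135/445 for remote browsing and push install, 5723 for agent/management server traffic) are common defaults rather than anything confirmed for your environment:
# Minimal connectivity sketch (placeholder FQDN, assumed default SCOM ports).
$clientFqdn = 'client01.contoso.com'
# 1. Name resolution and ping by FQDN
Resolve-DnsName -Name $clientFqdn
Test-Connection -ComputerName $clientFqdn -Count 2
# 2. Ports commonly needed for remote browsing / push install and agent traffic
foreach ($port in 135, 445, 5723) {
    Test-NetConnection -ComputerName $clientFqdn -Port $port |
        Select-Object ComputerName, RemotePort, TcpTestSucceeded
}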
Niki Han
TechNet Community Support

Similar Messages

  • WMI Performance counters not shown in SCOM Performance view

    Hi,
    We're unable to see %CPU usage, avg cpu queue length and many other WMI counters on a Performance View for one of our servers.
    - The agent is displayed as healthy.
    - We are getting data from counters generated by script.
    - WMI counters are shown as enabled.
    - We're able to see the counters locally on server manager, on performance counters.
    - we have run lodctr /R, without success.
    - no alerts on the scom agent or server.
    - no overrides for these counters exist.
    So definitely it looks like the WMI infrastructure is working ok but somehow data is not being sent to the server. Any hints on debugging this issue?
    Thanks in advance

    Ops Manager, Ops DW, or both?  If performance collection is ENABLED for this server, and it's a W2K8 box, and you can see the perf counters in Performance Monitor on the server, then you should check whether other performance data is missing too.  If no performance counters are being collected at all, there are a few things you can do, but before that, do some spot checking.
    Is this rule enabled?  Did someone override it to disabled?  If it is enabled and there are no overrides, create a view (My Workspace or whatever that custom workspace is called) targeting the generic class the collection rule targets.  Do you see any computers listed?  How many computers are listed?  Jot that down.  Go to Discovered Inventory in the Monitoring pane, and change the scope to the class targeted by the collection rule.  Count what is returned; does the number of servers returned match?
    If you have the same number of green, healthy objects in both the custom view and the Discovered Inventory scope, then you are good; if a few are missing, you need to troubleshoot those agents.
    If you are missing all of them, you probably have a problem with your RMS/SQL, which means you have to start traces on SQL and/or at the very least look at the Operations Manager event log on the RMS.
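    If you prefer to do that count from the shell, here is a rough sketch assuming the SCOM 2012 OperationsManager module; the class display name is only an example and should be swapped for the collection rule's actual target:
    # Sketch: count instances of the rule's target class and how many are healthy.
    # 'Windows Server 2008 Operating System' is an illustrative target, not the real one.
    Import-Module OperationsManager
    $class = Get-SCOMClass -DisplayName 'Windows Server 2008 Operating System'
    $instances = Get-SCOMClassInstance -Class $class
    'Total instances: {0}' -f $instances.Count
    'Healthy        : {0}' -f @($instances | Where-Object { $_.HealthState -eq 'Success' }).Count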
    So, back to the "missing a few" scenario.  You could start logging, etc., but instead of that, reinstall the agent on one of the machines not showing up in both views.  Wait a few hours.  Check the views again, and keep an eye on the Operations Manager event log on that sick agent.  Increase the size of the log file if you have to, so you don't overwrite events.
    After a few hours, if it's still not showing up, you can either start down the path of enabling tracing on the agent (there are KBs on how to turn this on, and the logs are not hard to go through), or you can uninstall the agent, run cleanmom on the box, and reinstall the agent.
    I would perform the repair (cleanmom) before I troubleshoot the agent, but that's just me.  Keep in mind that any data that is collected and/or queued will be lost when you run cleanmom.  But since you have no data as it is, there is no real loss.
    While the agent is showing up in the console, and before you start down the previous steps, I would verify that events are being collected as well (another query), and stop a service on the box that is supposed to be monitored to see if it triggers an alert and a state change.  Also peek at the RMS Operations Manager event log; you might have some SQL problems.
    Good luck.
    Regards, Blake Email: mengotto<at>hotmail.com Blog: http://discussitnow.wordpress.com/ If my response was helpful, please mark it as so, if it answered your question, then please also mark it accordingly. Thank you.

  • After adding separate performance collections to a MAP 9.1 database, the database has become corrupted

    Hi,
    I have installed MAP 9.1 (9.1.265.0) on my notebook running Windows 7 Enterprise. The inventory of our environment was successful, and I have successfully added some performance collections to the database.
    First I ran a performance collection for one hour, then added a performance collection of one week. This was okay.
    Then I waited one week and added another collection of about five days. That collection would not stop: it was scheduled to run from 2014-07-28 12:49:31 until 2014-08-01 05:00:04, but it was still running at 2014-08-01 05:30:25. I waited a little longer, but the collection kept on running according to the status screen.
    So I cancelled out of the collection at 2014-08-01 05:45. The performance data said the collection ran from Jul 14 2014 08:04 AM until Aug 1 2014 4:57 AM, so that looked okay.
    But now, when I try to get the performance metrics data from MAP, it states that I have to do a "Refresh Assessment" because I probably cancelled out of a collection. This "Refresh Assessment" runs for about an hour and then completes with the message "Failed".
    I get these errors in the MapToolkit.log
    <2014-08-05 05:14:51.09 AssessInventoryWorker@StoredProcAssessment,I> RunAssessment() - [Perf] [[Perf_Assessment].[ClearPerfdata]] : 125 ms
    <2014-08-05 05:14:56.13 AssessInventoryWorker@StoredProcAssessment,I> RunAssessment() - [Perf] [[Perf_Assessment].[CreateTimeIntervals]] : 5039 ms
    <2014-08-05 05:22:47.76 AssessInventoryWorker@StoredProcAssessment,I> RunAssessment() - [Perf] [[Perf_Assessment].[CreateMetricsPerTimeInterval]] : 471591 ms
    <2014-08-05 05:52:54.66 AssessInventoryWorker@DataAccessCore,W> DoWorkInTransaction<T>() - Caught InvalidOperationException trying to roll back the transaction: This SqlTransaction has completed; it is no longer usable.
    <2014-08-05 05:52:54.79 AssessInventoryWorker@DataAccessCore,W> DoWorkInTransaction<T>() - Caught a SQL transaction timeout exception. Will retry 3 more time(s). Retrying in 5000 milliseconds.
    <2014-08-05 05:53:15.86 AssessInventoryWorker@DataAccessCore,W> OpenConnection() - Caught a SqlException trying to connect to the database. Will retry connection 3 more time(s). Retrying in 5000 milliseconds.
    <2014-08-05 05:53:20.99 AssessInventoryWorker@DataAccessCore,W> OpenConnection() - Caught a SqlException trying to connect to the database. Will retry connection 2 more time(s). Retrying in 10000 milliseconds.
    <2014-08-05 05:53:45.00 AssessInventoryWorker@DataAccessCore,W> OpenConnection() - Caught a SqlException trying to connect to the database. Will retry connection 1 more time(s). Retrying in 15000 milliseconds.
    <2014-08-05 06:24:03.69 AssessInventoryWorker@DataAccessCore,W> DoWorkInTransaction<T>() - Caught a SQL transaction timeout exception. Will retry 2 more time(s). Retrying in 10000 milliseconds.
    <2014-08-05 06:54:13.91 AssessInventoryWorker@DataAccessCore,W> DoWorkInTransaction<T>() - Caught a SQL transaction timeout exception. Will retry 1 more time(s). Retrying in 15000 milliseconds.
    <2014-08-05 07:24:28.99 AssessInventoryWorker@Analyzer,E> RunAssessments() - Assessment threw an exception:
    <2014-08-05 07:24:29.03 AssessInventoryWorker@AssessInventoryWorker,I> AssessmentCompletedEventHandler: Assessment completed event.
    <2014-08-05 07:24:29.09 AssessInventoryWorker@TaskProcessor,I> WorkerCompleted: Worker: 'AssessInventoryWorker'
    <2014-08-05 07:24:29.15 TID-16@TaskProcessor,I> Run: Completed. Status: Failed
    Is there maybe a restriction on the intervals at which performance collection data can be added, or is there something else I am doing wrong?
    (I made a backup of the database after the first week of performance data; that database is still usable, so I can try adding more performance collections to that version of the database.)
    I hope someone has an idea what is going on.
    Thanks!

    Time between collections isn't the problem. If you look in the log file, SQL is timing out. I think the problem is machine resources and time. After the performance data collection has run for the predefined amount of time, MAP has SQL execute various assessments on the data to aggregate the raw data into something MAP can use. The more raw data there is, the longer SQL will take and the more CPU and memory SQL will need.
    I would recommend at least 4 cores or vCPUs and 6-8 GB of memory dedicated to the machine on which MAP is running. I would also follow the directions in this Wiki article to increase the timeout in MAP, so that MAP gives SQL the time it needs to complete the job.
    http://social.technet.microsoft.com/wiki/contents/articles/10397.map-toolkit-increasing-the-sql-database-timeout-value.aspx
    Please remember to click "Mark as Answer" on the post that helps you, and to click
    "Unmark as Answer" if a marked post does not actually answer your question. Please
    VOTE as HELPFUL if the post helps you. This can be beneficial to other community members reading the thread.

  • Turning off 504 non-reporting performance collection rules in the Exchange 2010 MP

    Hello
    I read the following MS article about tuning the Exchange 2010 Management Pack:
    http://support.microsoft.com/kb/2592561
    It advises turning off 504 non-reporting performance collection rules, then turning back on only the ones of interest.
    However, it does not state how to identify or locate these 504 rules. I would prefer to find and disable them via PowerShell rather than manually.
    Can someone please advise how to locate these 504 rules and disable them en masse.
    Thanks
    AAnotherUser__

    You'll need a script that performs the following steps:
    1. Retrieve the management pack in which to store the overrides
    2. Retrieve the class that will be targeted by the override
    3. Retrieve the rules or monitors that will be disabled
    4. Disable the rule or monitor
    To disable rules in bulk, here is a sample that disables all rules matching the "*Events/sec*" filter:
    $MP = Get-SCOMManagementPack -DisplayName "Exchange 2010 Overrides" | Where-Object {$_.Sealed -eq $False}
    $Class = Get-SCOMClass -DisplayName "Exchange Performance rule"
    $Rule = Get-SCOMRule -DisplayName "*Events/sec*"
    Disable-SCOMRule -Class $Class -Rule $Rule -ManagementPack $MP -Enforce
    Also, you can refer to the link below:
    http://www.systemcentercentral.com/opsmgr-2012-disabling-rules-and-monitors-in-bulk-in-powershell/
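    If you would rather target the whole set of performance collection rules shipped in the sealed Exchange 2010 MPs (rather than one display-name pattern), here is a hedged sketch along the same lines. The MP display-name wildcard and the Category/Enabled filters are assumptions, so compare the count you get with the KB's list of 504 rules before disabling anything:
    # Sketch: enumerate enabled performance collection rules in the sealed Exchange 2010
    # MPs and disable them into the unsealed overrides MP. Names and filters are illustrative.
    $OverrideMP = Get-SCOMManagementPack -DisplayName "Exchange 2010 Overrides" | Where-Object { -not $_.Sealed }
    $ExchangeMPs = Get-SCOMManagementPack -DisplayName "*Exchange 2010*" | Where-Object { $_.Sealed }
    $PerfRules = Get-SCOMRule -ManagementPack $ExchangeMPs |
        Where-Object { $_.Category -eq 'PerformanceCollection' -and $_.Enabled -eq 'true' }
    'Found {0} enabled performance collection rules' -f $PerfRules.Count
    foreach ($Rule in $PerfRules) {
        $Class = Get-SCOMClass -Id $Rule.Target.Id
        Disable-SCOMRule -Class $Class -Rule $Rule -ManagementPack $OverrideMP -Enforce
    }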
    Please remember, if you see a post that helped you please click "Vote As Helpful" and if it answered your question, please click "Mark As Answer"

  • SCOM Performance report

    Hi All,
    I want to pull a report for Windows Server 2012 machines in SCOM 2012 Reporting. When I search for % Processor Time as the performance counter, it only shows the 2003 and 2008 OS performance counters; it does not show the 2012 one. Can anyone help me understand why the 2012 performance counter is not reflected in the report, and how to pull a % Processor Time report?

    Hi,
    It sounds like the operating system instance is not being discovered. Please ensure the agent has been installed on the Windows Server 2012 machine.
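    As a quick check, here is a hedged sketch that lists discovered Windows Server 2012 operating system instances; the class display name is an assumption based on the Windows Server MP, so adjust it if yours differs. If nothing is returned, the class has not been discovered, and its counters will not appear in Reporting:
    # Sketch: verify the Windows Server 2012 OS class has discovered instances.
    Import-Module OperationsManager
    $class = Get-SCOMClass -DisplayName 'Windows Server 2012 Full Operating System'
    Get-SCOMClassInstance -Class $class | Select-Object DisplayName, HealthState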
    Niki Han
    TechNet Community Support

  • SCOM Performance/Availability metrics on VMWare guests - query

    Hi,
    I'm monitoring our VMware server infrastructure with SCOM 2012.  I'm looking for a bit of clarification regarding the metrics SCOM produces versus those provided by vCenter.
    Basically this stems from seeing alerts such as the Available Megabytes of Memory monitor, which alerted because SCOM reported the agent has < 100 MB available, yet the vCenter memory reports show the guest well above the threshold and therefore it shouldn't have alerted.  When discussing this with our server team, who rely on the vCenter metrics, it is causing them to lose faith in any metrics/alerts SCOM gathers; the various CPU/memory performance graphs from SCOM are not consistent with the vCenter ones.
    Can anyone advise whether the SCOM metrics are (more) reliable than the native vCenter ones as indicators of health, and provide evidence for arguing the case either way?
    Any help much appreciated - and Merry Christmas :-)

    Hi,
    I think you need to check the memory usage on the guest manually and compare the results. In your production environment, please test, compare, and then decide which source is more reliable and suitable for your environment.
    Niki Han
    TechNet Community Support

  • Exporting SCOM Performance, Problem Alerts with PowerShell to CSV?

    Is there a way to export System Center Operations Manager performance and problem alerts to a CSV, with the number of counts and milliseconds for the application that caused the issue?
    When using System Center Operations Manager to gather performance events on applications, it would be nice to be able to export this data to Excel.
    Is there a PowerShell script available to export different event types? For example, a performance alert, a problem alert, or highest usage.
    Using System Center 2012 R2.
    Thank you!
    Jeff

    Hi Jeff,
    I don't have a PowerShell script, but I do have a SQL query that will pull the performance data from the OperationsManager DB.
    Here it is:
    select Path, ObjectName, CounterName, InstanceName, SampleValue, TimeSampled
    from PerformanceDataAllView pdv with (NOLOCK)
    inner join PerformanceCounterView pcv on pdv.performancesourceinternalid = pcv.performancesourceinternalid
    inner join BaseManagedEntity bme on pcv.ManagedEntityId = bme.BaseManagedEntityId
    where ObjectName = 'Memory' and CounterName = 'Available MBytes' and Path like '%Your Agent name%'
    order by countername, timesampled
    You will need to specify the object name and counter you are looking for, and in the Path clause specify the agent name.
    If you want the counter for all agents, the query should be altered as follows: Path like '%%'
    '%%' means that you want the perfmon data for the specified object and counter on all SCOM agents.
    ====================================================
    All perfmon data of all servers:
    select Path, ObjectName, CounterName, InstanceName, SampleValue, TimeSampled
    from PerformanceDataAllView pdv with (NOLOCK)
    inner join PerformanceCounterView pcv on pdv.performancesourceinternalid = pcv.performancesourceinternalid
    inner join BaseManagedEntity bme on pcv.ManagedEntityId = bme.BaseManagedEntityId
    where ObjectName LIKE '%%' and CounterName LIKE '%%' and Path like '%%'
    order by countername, timesampled
    ====================================================
    All perfmon data for only a specific agent:
    select Path, ObjectName, CounterName, InstanceName, SampleValue, TimeSampled
    from PerformanceDataAllView pdv with (NOLOCK)
    inner join PerformanceCounterView pcv on pdv.performancesourceinternalid = pcv.performancesourceinternalid
    inner join BaseManagedEntity bme on pcv.ManagedEntityId = bme.BaseManagedEntityId
    where ObjectName LIKE '%%' and CounterName LIKE '%%' and Path like '%Your Agent name%'
    order by countername, timesampled
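    Since the original question asked for PowerShell and a CSV, here is a hedged sketch that runs the query above and exports the result, plus an alert export via the SDK cmdlets. It assumes the SqlServer module's Invoke-Sqlcmd is available, and the server, database, path, and agent names are placeholders:
    # Sketch: export perf data (via the SQL above) and open alerts to CSV files.
    $query = "
    select Path, ObjectName, CounterName, InstanceName, SampleValue, TimeSampled
    from PerformanceDataAllView pdv with (NOLOCK)
    inner join PerformanceCounterView pcv on pdv.performancesourceinternalid = pcv.performancesourceinternalid
    inner join BaseManagedEntity bme on pcv.ManagedEntityId = bme.BaseManagedEntityId
    where ObjectName = 'Memory' and CounterName = 'Available MBytes' and Path like '%YourAgentName%'
    order by countername, timesampled
    "
    Invoke-Sqlcmd -ServerInstance 'SQLSERVER\INSTANCE' -Database 'OperationsManager' -Query $query |
        Export-Csv -Path 'C:\Temp\PerfData.csv' -NoTypeInformation
    # Open alerts (resolution state 0) can be exported directly from the SDK cmdlets.
    Get-SCOMAlert -ResolutionState 0 |
        Select-Object Name, MonitoringObjectDisplayName, Severity, TimeRaised |
        Export-Csv -Path 'C:\Temp\OpenAlerts.csv' -NoTypeInformation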
    Gautam.75801

  • How much historical data is available for SCOM performance views to show in graphs?

    I have created a performance rule based on a performance counter and created a performance view on the target of the rule. It shows me a graph of how the performance counter value changed over time.
    In the SCOM console, performance views have an option to choose a start and end date-time for the performance graph. It allows me to select a range of, say, a few years.
    However, going by the Microsoft documentation, it appears that performance data is stored in the OperationsManagerDW database only for a limited period.
    I have a feature requirement where, for compliance purposes, we need performance views, reports, or some other means to show older data as well.
    So my query is: exactly how much historical data (how far back) is actually available for performance views? Is there any specific information or Microsoft documentation where I can find this (the duration of historical data in a specific number of days or years)? Based on this I need to decide whether I can use performance views or whether I need to go for some kind of reports, such as SSRS reports, instead.

    Hi,
    Additionally, I would like to share the following article with you. Hope it helps.
    Understanding and modifying Data Warehouse retention and grooming
    http://blogs.technet.com/b/kevinholman/archive/2010/01/05/understanding-and-modifying-data-warehouse-retention-and-grooming.aspx
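    To see the retention currently configured for each dataset and aggregation in the Data Warehouse (which effectively bounds how far back performance reports can reach), here is a hedged, read-only sketch based on the tables described in that article; the server name is a placeholder:
    # Sketch: list DW dataset retention (MaxDataAgeDays) per aggregation type.
    $query = "
    SELECT ds.DatasetDefaultName, sda.AggregationTypeId, sda.MaxDataAgeDays
    FROM StandardDatasetAggregation sda
    JOIN Dataset ds ON ds.DatasetId = sda.DatasetId
    ORDER BY ds.DatasetDefaultName, sda.AggregationTypeId
    "
    Invoke-Sqlcmd -ServerInstance 'SQLSERVER\INSTANCE' -Database 'OperationsManagerDW' -Query $query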
    Niki Han
    TechNet Community Support

  • Performance collection counters

    Hi,
    A customer has asked for the following and I am stuck when it gets to territory like Directory Services.
    The customer wants ALL counters under each Object to be selected.
    I am stumped as to how to do this for all counters at once, like the few in the pic below. Perfmon allows ALL to be selected, but in SCOM I can only select one counter at a time (as far as I can see).
    Thx,
    John Bradshaw
    ============================================
    Can we setup the following performance counters for Active Directory? We need to get it for the case of physicals versus virtual for domain controllers.
    Performance Counters
    \Process\*
    \DirectoryServices(*)\*
    \PhysicalDisk\*
    \Processor(*)\*
    \Memory\*
    \System\*
    \Server\*
    \Network Interface(*)\*
    \UDPv4\*
    \TCPv4\*
    \IPv4\*
    I would like it to be 1 min intervals

    Roger,
    What would I put in at this point in the script from that link?
    JB
    -ComputerName '$Target/Property[Type="Windows!Microsoft.Windows.Computer"]/PrincipalName$'
    You just need to type
    -ComputerName '$Target/Property[Type="Windows!Microsoft.Windows.Computer"]/PrincipalName$'
    When you add the performance counter, the computer name is required, and -ComputerName is a parameter that is populated with the target machine's computer name.
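    For the original "all counters under each object" requirement, here is a rough, hedged sketch of what the script side could look like. It is not the script from the linked article; the object list mirrors the customer's request, and the one-minute interval would come from the rule schedule rather than the script itself:
    # Sketch: expand whole counter objects (the Perfmon "\Memory\*" style) and take one sample.
    # $ComputerName is expected to be filled by SCOM via the $Target/...PrincipalName$ variable.
    param([string]$ComputerName = $env:COMPUTERNAME)
    $objects = 'Memory', 'System', 'Processor', 'PhysicalDisk', 'Network Interface'
    foreach ($obj in $objects) {
        # -ListSet expands an object into all of its counter paths
        $paths = (Get-Counter -ListSet $obj).Paths
        Get-Counter -Counter $paths -ComputerName $ComputerName |
            Select-Object -ExpandProperty CounterSamples |
            ForEach-Object { '{0} = {1}' -f $_.Path, $_.CookedValue }
    }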
    Roger

  • SCOM - performance widgets empty

    Hi,
    does anybody know this problem and can help me?
    In the SCOM console we have different dashboards with grid layouts, and on these grid layouts we have performance widgets (for example, for monitoring network utilization). From time to time, and at the moment the whole time (for about 2 days), none of the performance widgets show any data (no graph, no data in the legend). State and alert widgets are working correctly, and so are performance views.
    I didn't find any useful hint in the event log.
    SCOM version is 2012 R2, but this also happened with 2012 SP1.
    Thanks,
    Bernd

    I apologize if this seems rude, but this question was not answered correctly. At the end of the original post Bernd says, "SCOM version is 2012 R2, but this also happened with 2012 SP1". This implies that the product that needs support is 2012 R2 and not 2012 SP1, which is what the provided links in the 'answer' lead to. I know this because I am having the same issue and have been researching an answer for weeks now, and I have come across many, many posts just like this one that give some 'magical' hotfix for a PREVIOUS VERSION. Please answer the question correctly.

  • SCOM 2012 collect Windows Audit logs and forward them to a Linux Syslog server

    Hello:
    1. We have a SCOM 2012 server.
    2. We have SNARE agents for PCI systems, but now we want to save money by gathering all events for all Windows servers using its native features.
    3. We also have a centralized Linux server running SYSLOG which aggregates the logs to our Dell LogVault retention appliance (for PCI purposes)
    Thus, my question:
    In an effort to remove the SNARE agents from the Windows servers, can we implement Audit Collection Services (ACS) in the Windows environment so that they collect/forward audit/event logs to the SCOM 2012 server, and then have SCOM forward the events to the centralized syslog Linux server? In that case they would be aggregated to the Dell appliance.
    We prefer to use the Linux syslog as the centralized log server but would like to know how to go about implementing the solution above.
    Many thanks,
    Robert Perez-Corona

    Hi,
    Here is a thread about how to make SCOM 2012 work as a syslog server, hope this can be helpful for you:
    https://social.technet.microsoft.com/Forums/en-US/524ea527-c069-40f9-96ef-026a4aa06fe9/make-scom-2012-a-syslog-server?forum=operationsmanagergeneral
    Regards,
    Yan Li

  • How Enable Performance collection in 10g forms

    Hi,
    I'm using 10g Forms to collect performance data. I have followed the steps described by Oracle in the document "perf_collect.pdf". Though the document talks about Oracle Forms 6i, I assume the same will work in Oracle 10g Forms.
    The code added to the HTML file is as follows:-
    <html> <head> ORACLE FORMS.</head>
    <body onload="document.pform.submit();" >
    <form name="pform" action="http://chn-U100S77.CVNS.corp.covansys.com:8889/forms90/f90servlet/temp.html" method="POST">
    <input type="hidden" name="form" value="D:\DevSuiteHome\forms90\server\myapps\MODULE2.fmx">
    <input type="hidden" name="userid" value="SCOTT/TIGER@ppaprod">
    <input type="hidden" name="obr" value="yes">
    <input type="hidden" name="buffer_records" value="YES">
    <input type="hidden" name="debug_messages" value="YES">
    <input type="hidden" name="array" value="YES">
    <PARAM NAME="serverArgs" value="module=module2.fmx userid=scott record=collect log=C:\perf_">
    </form> </body></html>
    Any early reply is highly appreciated.

    Hi,
    Thanks, it creates the XML file now.
    I have created the tables etc.; I was trying to write the same data to an Oracle table.
    =========================================
    D:\>java oracle.forms.diagnostics.Xlate datafile=d:\devsuitehome\forms90\trace\forms\forms_1524.trc URL=10.3.144.249:1521:ppaprod UserName=system Password=system CollectionId=2
    Invalid Parameter: URL
    Exception in thread "main" java.lang.NoClassDefFoundError: oracle/forms/registry/MessageManager
    at oracle.forms.diagnostics.Xlate.printUsage(Unknown Source)
    at oracle.forms.diagnostics.Xlate.readArgs(Unknown Source)
    at oracle.forms.diagnostics.Xlate.<init>(Unknown Source)
    at oracle.forms.diagnostics.Xlate.main(Unknown Source)
    =============================================
    I gave both the hostname and the IP address in that parameter; it reports the same error.

  • A question about Logical Disk performance collection rules and how the data is displayed in a report view

    Hello
    I am currently on SCOM 2007 R2 CU6 with Windows Server Operating System MP version 6.0.6989.0 (I cannot use the latest version of the MP as we still have some Windows 2000 Servers we need to support, yes I know :( )
    Anyway, the issue is that I have never found the logical disk performance counter data from SCOM very reliable.
    For example, I have a Windows 2008 R2 server, and I am looking at a local logical disk (which holds a SQL tempdb on a busy SQL Server) and at the performance counter.
    The SCOM collection rule is called "Collection Rule for Average Disk Seconds per Transfer".
    The actual Windows Perfmon counter is called "Avg. Disk Bytes/Transfer".
    If you look at the description of the above Perfmon counter, it is described as:
    "Avg. Disk Bytes/Transfer is the average number of bytes transferred to or from the disk during write or read operations."
    The problem I have is as follows:
    The resulting SCOM performance chart over several days (which has a scale of 1x) states the value never reaches 3 (e.g. the maximum was 2.7, say). I cannot believe that a drive holding the tempdb databases for a busy SQL Server does not transfer more than 2.7 "bytes" of data at a given time to its tempdb databases!
    Indeed, when I look at Perfmon on the server and watch this counter for, say, 20 minutes or so, the figure is often 10,000 or 30,000 bytes etc. It does fall back to 0 (zero) momentarily, but mostly it is in the 1,000s or 10,000s.
    Therefore, when my boss says show me the "Avg. Disk Bytes/Transfer" and SCOM says it has not exceeded 2.7 over the last business week (i.e. the chart never peaks above this value on the chart with scale 1x), he naturally does not believe it!!
    Any advice please regarding the above. Is it the fact that if the counter ever falls to zero it messes up the SCOM report charts?
    Thanks
    AAnotherUser__

    Create your own collection rule to mirror the sample times and so on.  Look at the data from your rule vs. the MP default rule.  It probably has to do with the chart scale, imho.
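    Before blaming the chart, one hedged way to see what was actually stored is to pull the raw samples for both disk transfer counters from the OperationsManager DB (the same view used elsewhere in this thread). It is also worth noting that the rule name above refers to seconds per transfer while the quoted Perfmon counter is bytes per transfer, and the raw rows will show which counter the 2.7-ish values belong to. Server, database, and agent names are placeholders:
    # Sketch: raw collected samples for the disk transfer counters of one agent.
    $query = "
    select Path, ObjectName, CounterName, InstanceName, SampleValue, TimeSampled
    from PerformanceDataAllView pdv with (NOLOCK)
    inner join PerformanceCounterView pcv on pdv.performancesourceinternalid = pcv.performancesourceinternalid
    where ObjectName = 'LogicalDisk'
      and CounterName in ('Avg. Disk sec/Transfer', 'Avg. Disk Bytes/Transfer')
      and Path like '%YourAgentName%'
    order by TimeSampled desc
    "
    Invoke-Sqlcmd -ServerInstance 'SQLSERVER\INSTANCE' -Database 'OperationsManager' -Query $query |
        Select-Object -First 50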
    Regards, Blake Email: mengotto<at>hotmail.com Blog: http://discussitnow.wordpress.com/ If my response was helpful, please mark it as so, if it answered your question, then please also mark it accordingly. Thank you.

  • Data Warehouse performance since changing retention settings?

    Hi,
    I don't know if it's a coincidence, but I have managed to get into a bit of a state with regard to our data warehouse.
    Firstly, when the server was specced I don't think anybody actually worked out what size the databases would end up being. I started troubleshooting initially because of a lack of disk space. The DW was set to the defaults of 400 days etc. and had grown to around 700 GB. Our operations DB is around 50 GB in size. The disk at this point had around 50 GB of space left.
    Anyway, I did some work on the retention in the DW to knock a lot of stuff down to, say, 9 months as needed. A week later the data is now 500 GB, although the physical size is still 700 GB.
    Now I don't know if it's a coincidence, but in the last couple of days I am getting performance alerts such as not being able to store data in the DW in a timely manner, failing to perform maintenance, and visual indications that things have slowed down on the DW. For example, an SLA report for the month for all servers now times out, when before it ran in a few minutes.
    So I am wondering if the "blank" space in the DW is now causing issues, as there is perhaps data at both ends of the database. I would like to get this blank space back, but I am no expert on SQL and wonder whether any other considerations need to be taken into account for SCOM "to get this back".
    I also understand that perhaps more disk space is required for the actual grooming, so maybe I need to get down to 6 months of data before this can happen.
    The performance part may not be tied to the issue, but I guess either way I would like to get the space back if possible.
    thanks
    thanks

    There are several possible causes:
    1) Check for any event or performance collections that generate a huge DB and DW size, using the SQL queries in
    http://blogs.technet.com/b/kevinholman/archive/2007/10/18/useful-operations-manager-2007-sql-queries.aspx
    2) You can also refer to the following post:
    http://deploymentportal.blogspot.ru/2012/08/operations-manager-data-warehouse.html
    3) Check the SQL logs on the data warehouse, especially for blocking problems.
    Also, check disk I/O on the data warehouse (the Windows MP collects these metrics). If it affects all management servers and the message does say "timeout", then the problem is likely to be at the SQL end. It may not be maxed out on CPU or memory, but there are likely to be other bottlenecks on the SQL box. What is the disk queue for the disks on which the Operations Manager data warehouse data and log files reside? These are on separate physical disks, aren't they??
    My guess is that it is a SQL issue: temporary timeouts suggest that SQL is busy doing something else. And I'd tend to concentrate my thoughts on disk I/O rather than memory or CPU.
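    To quantify how much of the 700 GB is actually free space inside the data warehouse files (the "blank" space mentioned above), here is a hedged, read-only sketch using the standard sp_spaceused procedure and sys.database_files; the server name is a placeholder:
    # Sketch: allocated vs. unallocated space inside OperationsManagerDW, plus per-file usage.
    Invoke-Sqlcmd -ServerInstance 'SQLSERVER\INSTANCE' -Database 'OperationsManagerDW' -Query "
    EXEC sp_spaceused;
    SELECT name, size/128 AS SizeMB, FILEPROPERTY(name, 'SpaceUsed')/128 AS UsedMB
    FROM sys.database_files;
    "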
    Roger

  • Performance View with data from Custom Rule

    Hello again everybody,
    so we created a VBScript which measures the time needed for a specific process inside a program.
    Now we made a rule to get this data into SCOM. Alerting and health monitoring work fine.
    For example, we made "if TimeNeeded > 5" a critical health state; works like a charm.
    But now we want to view the data (the script runs every 30 seconds) inside a performance view.
    - We checked that the rule works as intended. Check
    - Set the rule to "Performance Collection". Check
    - Set the right target group. Check
    - Override for the specific target. Check
    - Created a perf. monitor "collected by specific rules" (added our rule) with the right targeting. Check
    But the performance view stays unpopulated.
    What now?

    I exported the MP as an XML file, but the rule is not in there, only references.
    But that's not the point anyway; the rule is working.
    The only thing missing is to grab the data the script returns (via the oAPI property bag) and show it in a view. That's all.
    Maybe the question was not clear enough:
    Our script returns a value every 30 seconds, for example 6, then 4, then 8, then 10, and so on...
    All we want now is to show these values in a Performance-View.
    Graphical example:
    10|                            X
    08|                     X
    06|         X
    04|               X                              X
    02|                                    X
    00|
    I guess you now know what I mean.
    This article
    corelan.be/index.php/2008/04/16/using-customnon-performance-counter-data-to-build-graphs-in-operations-manager-2007
    is EXACTLY what I want and what I did.
    But my Performance-View refuses to show ANYTHING...
    What's the problem?
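    For comparison, here is a hedged sketch of the script side of such a collection rule, written in PowerShell rather than the poster's VBScript; the property name is illustrative. The usual catch is that returning a property bag alone is not enough: the rule also needs a performance mapper (e.g. System.Performance.DataGenericMapper) so the bag value is written as performance data, which is what a performance view reads.
    # Sketch: return the measured value as a property bag (what the oAPI call typically does).
    $api = New-Object -ComObject 'MOM.ScriptAPI'
    $bag = $api.CreatePropertyBag()
    $timeNeeded = 6                      # whatever the measurement produced this run
    $bag.AddValue('TimeNeeded', $timeNeeded)
    $api.Return($bag)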
