Logfile Monitoring - too many alerts!

Hello!
We are using the sapccmsr agent to monitor a logfile for an error pattern. The problem is that sometimes the monitored pattern is written to the logfile every minute, so we receive an alert mail every minute.
What can we do to avoid receiving 60 mails per hour?
Is there an option to reduce the alert processing?
Kind regards.
André
The .ini file:
LOGFILE_TEMPLATE
DIRECTORY=/BISAS/log
FILENAME=jboss-console.log
MONITOR_CONTEXT=jboss
MTE_CLASS=SEETESTPM1_jboss
MONITOR_NEWEST_FILES=1
MONITOR_LAST_FILE_MODIF=1
SHOWNEWLINES=0
RESCANFROMBEGIN=0
IGNORE_CASE=1
PATTERN_0="SpyJMSException: Connection Failed"
VALUE_0=RED
SEVERITY_0=50
MESSAGECLASS_0="SAP-T100"
MESSAGEID_0="Z_CCMS 567"
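The behaviour wanted here - at most one mail per time window, even when the pattern matches every minute - is alert suppression (throttling). Whether sapccmsr exposes a parameter for this must be checked in the SAP CCMS documentation; the sketch below is only a generic Python illustration of the suppression logic itself, with all names made up for the example:

```python
import time

class AlertThrottle:
    """Suppress repeat alerts for the same pattern within a cooldown window.

    Generic sketch of the desired behaviour (one mail per hour instead of
    sixty), NOT sapccmsr configuration: the agent's own .ini parameters
    for this must be taken from the SAP documentation.
    """

    def __init__(self, cooldown_seconds):
        self.cooldown = cooldown_seconds
        self._last_sent = {}  # pattern -> timestamp of last alert mail

    def should_alert(self, pattern, now=None):
        now = time.time() if now is None else now
        last = self._last_sent.get(pattern)
        if last is not None and now - last < self.cooldown:
            return False  # still inside the cooldown window: suppress
        self._last_sent[pattern] = now
        return True

throttle = AlertThrottle(cooldown_seconds=3600)
# The pattern fires every minute; only the first occurrence per hour alerts.
sent = [throttle.should_alert("SpyJMSException: Connection Failed", now=60 * i)
        for i in range(120)]
print(sent.count(True))  # 2 alerts over 2 hours instead of 120
```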

This behavior happens when the SQL or SCOM services, which normally run from startup, are stopped. Verify the following:
Check that the SQL database is running successfully.
Check the logs of the SQL Server and the SCOM server.

Similar Messages

  • BPM : too many alerts in a process step - how to remove older alerts ?

    Hello
    One question about Business Process Monitoring :
    We have too many alerts in a process step, so it takes 20 minutes to display them.
    How can we reduce the number of alerts in this process step?
    Thanks in advance.
    Regards
    Fred

    It worked !!
    Thanks John for your valuable inputs.

  • Too many alerts taking too much system resources in the XI system

    We have been running alerts in XI and it looks like there are too many now. Alert generation has now been stopped.
    I can't even run the report RSALERTPROC successfully to delete the existing alerts from the system.
    Does anyone know an alternative way to do this? Or what is the name of the table where the alerts are stored, so that we can delete some entries?
    Thanks a lot.

    Hi,
    a. If "Suppress Multiple Alerts of This Rule" is checked for the rule, we receive only one alert mail for a particular error category. We then have to confirm the alerts by clicking Complete in the Alert Inbox.
    If we don't check "Suppress Multiple Alerts of This Rule", we will get all the alerts without suppression.
    b. Run the report RSALERTPROC and delete all the alerts. Let me know what difficulty you had in running this report, as you mentioned...
    c. Run the Report RSALERTTEST to display the Alert status.
    Thanks,
    Tanuj

  • Too many alerts after reinstall agent

    I have an issue:
    About a month ago we turned off our Hyper-V server without removing it from AD or DNS. A few days ago we moved the HDD from that server to another one (the same model) and turned it on. The server works OK, but SCOM sent a billion alerts from the host. I tried uninstalling
    and reinstalling the agent (uninstall from the SCOM console, then uninstall using the MSI on the host) and added it back to the SCOM console, but it still generated a large number of alerts (mostly old). I don't know what to do.
    SCOM 2012 R2; host: W2k8R2 with the Hyper-V role

    Hi,
    If those alerts are coming from rules, then we can use the script below to delete them:
    SCOM 2012 script to close old alerts coming from Rules
    http://gallery.technet.microsoft.com/SCOM-2012-script-to-close-c4511481
    After deleting them, we can check whether new alerts come up.
    Regards,
    Yan Li

  • Mail Alert with Triangle Exclamation Mark with iMap - Too Many Connections?

    I have one Laptop (Macbook mid-07) on Leopard 10.5.6 and a Macbook Pro 09 on Leopard 10.5.8. I just switched over to iMap where I have two email accounts set up on both laptops. Both laptops have identical accounts set up on them.
    I'm currently getting the triangle with the exclamation mark alert on the inboxes of both accounts when both laptops have Apple Mail open. Is this an issue with Apple Mail, or is my service provider possibly complaining that I have too many connections? I don't get these alerts when only one laptop has the Mail app open. The alert reads:
    Alert
    There may be a problem with the mail server or the network. Check the settings for the account "IMAP" or try again.
    The server error encountered was: The attempt to read data from the server "mail.mydomain.com" failed.
    However, Mail seems to send and receive okay even with the alerts popping up, so I'm not sure if this alert is anything to be seriously concerned about. Or is there a way to fix this?

    I set Mail to fetch messages every 15 minutes and delayed one laptop's initial opening of Mail by 5 minutes. That way, both laptops are not fetching mail concurrently, and each has a gap of time to download mail before the other starts. This seems to have fixed the alert issue, and I'm no longer getting these messages.
    However, 15 minutes seems a bit long for auto-fetching, since my email can be under time constraints. Does anyone know how to set the check interval to every 10 minutes, which I could probably live with? Currently, the options are 1 minute, 5 minutes, 15 minutes, 30 minutes, 1 hour, or manually.

  • CS3 Camera Raw Too Many Files Open at the Same time alert!!!

    Please help me. I keep getting an error message in Camera Raw saying there are too many files open at the same time - but I only have 100 open :( Please help - I was getting this error with CS2 and thought upgrading to CS3 would fix it, but it didn't!!!

    > "10 or 100 - you can quickly go through a stack of images in ACR and make any desired changes you want. Whether making the same or similar adjustment to similar files, or making radically different adjustments to different images as appropriate".
    I've done this with far more than 100! I think my maximum is 425 raw files, invoking ACR from Bridge without Photoshop even loaded, and it worked well. (I've also done 115 JPEGs in order to crop them under extreme time constraints).
    It can be very slick. For example, if I use a ColorChecker a number of times in a shoot, it is easy to select just the set (perhaps 100 or so) that a particular ColorChecker shot applies to and set the WB for all of them.
    Furthermore, in case people don't know, you can set ratings on raw images while many of them are open in ACR. (Just click under the thumbnail). It isn't as powerful as Lightroom, but it is not to be dismissed.
    I suspect that it is possible to apply sensor-dust-healing to lots of images in the same way, and certainly it is easy to apply presets based on various selections.
    Perhaps with AMG (Adobe Media Gallery) it will be sensible to use the above capability to process 100s of raw files, then create a set of web pages for the best of them, in not much more time than it would have taken in Lightroom. I judge that Lightroom is the "proper" tool for the job (perhaps after 1.1!), but Bridge+ACR can go a long way.

  • Too many failed message tracking requests via task in Exchange 2010 SP3

    Hi,
    We are getting this alert every day and I can't find anything that solves it...
    Some days it's yellow (3%) and other days it's red (5%)... But we don't have 5% of messages failing.
    Is there a solution to this problem, or somewhere else to look?

    To fix your issue, try overriding the monitor rule "KHI: Too many failed message tracking requests via task – Red (>5%)" and also the rule "KHI: Too many failed message tracking requests via task – Yellow (>3%)".
    You can also refer to the link below:
    http://social.technet.microsoft.com/Forums/en-US/089b0c7c-77c5-47ef-9e48-7e44a8f1d2a1/khi-too-many-failed-message-tracking-requests-via-task-in-exchange-2010-sp1-management-pack?forum=operationsmanagermgmtpacks

  • Event monitoring / notification - AQ, Alerts, or what ?

    I need to know whether Oracle Advanced Queuing, or some other Oracle database facility is able to support the event notification requirements for our event driven user interface.
    I am working on moving an application to Oracle (Oracle database is my main expertise but I haven't ever used AQ). There are varying numbers of GUI clients logged in at a time, plus some background services. The application design depends on updating entities in the database and having an event notification sent to all interested GUI clients. I need to support this in the oracle environment.
    Key issues are:
    - Database triggers to initiate notification of updates
    - Different event topics for each notifiable table. Many rows in each table.
    - Every event notification must arrive. DBMS_ALERT only delivers the most recent alert.
    - Clients come and go fairly often, and the client set is not fixed. Clients are only interested in events that occur while the client is connected.
    - We don't want to be polling a table - we want to either wait for events or be notified of them.
    Is there a way of doing this using Oracle Advanced Queuing multiconsumer queues?
    Is there a better way of doing this with some other tool that comes with the 10G database?
    The descriptions of AQ I have read are oriented to having a limited, fixed set of message consumers, whereas our environment has a varying set of subscribers, so I am concerned about:
    - How heavyweight is the subscribe/unsubscribe step? I have not seen references to it being done on the fly like this.
    - Is there a way of making the subscription be dropped when the database connection is dropped, or are they always persistent?
    DBMS_ALERT won't do because if many alerts are raised while a listener is busy, only the latest alert is delivered.
    We are running the 10GR2 database but not the application server.
    Thanks for your advice,
         David Penington

    David,
    I have the same requirement as you.
    Basically what I have put in place is:
    - A package with a set of wrapper methods to ease queue-related processing (enqueue, dequeue, listen, ...)
    - The related object type, queue table, and different queues based on different events (update, insert, delete, ...)
    - Triggers on certain tables that relate to the events described above (they only enqueue messages)
    - Applications that subscribe to the different queues and then call a blocking listen in a separate thread (the listen call is unblocked with the queue name as the out parameter)
    - I activated the Queue Monitor to clean up the consumed and expired messages
    - Other information: Oracle version 9i; non-persistent queues were not yet tested - OCI + raw :(
    So anyway, everything is ready, and when I do an update on a given record, all the applications are notified. I launched several application instances and everything worked as I expected.
    But then I have another problem: it generates too many redo logs (which in turn become archive logs). I observed this during a performance test; it was something like updating 20 records 10,000 times with the delay interval set to 0.5 seconds. That was when I noticed the excessive generation - which is, after all, normal, because everything is done with transaction integrity.
    So far I haven't found any solution... I was thinking about NOLOGGING on a tablespace created specially for the queue table, but that was not the solution at all - well, I tried my luck and it didn't work :)
    Is there anyone out there who has the same requirement as me and David and has a solution for this? Maybe another way of implementing the mechanism...
    Looking forward to some feedback.
    Regards,
    Kiky Shannon
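The architecture described above (triggers enqueue; each connected client blocks on a listen and gets its own copy of every event) is exactly what an AQ multiconsumer queue models. As a conceptual sketch only - in-process Python threads and queues, not the Oracle DBMS_AQ API - the fan-out to a varying set of subscribers looks like this:

```python
import queue
import threading

# Toy model of a multiconsumer queue: every subscriber registered at
# enqueue time gets its own copy of the message, and consumers block on
# their private queue instead of polling. Conceptual sketch only, not
# the DBMS_AQ API; all names here are illustrative.

class MultiConsumerQueue:
    def __init__(self):
        self._lock = threading.Lock()
        self._subscribers = {}  # subscriber name -> its private queue

    def subscribe(self, name):
        with self._lock:
            q = queue.Queue()
            self._subscribers[name] = q
            return q

    def unsubscribe(self, name):
        with self._lock:
            self._subscribers.pop(name, None)

    def enqueue(self, message):
        with self._lock:            # fan out to current subscribers only
            for q in self._subscribers.values():
                q.put(message)

mcq = MultiConsumerQueue()
received = []

def gui_client(name, q):
    msg = q.get(timeout=5)          # blocking "listen" for one event
    received.append((name, msg))
    mcq.unsubscribe(name)           # client disconnects afterwards

# Subscribe before the event fires, like clients connecting at login.
subs = {f"client{i}": mcq.subscribe(f"client{i}") for i in range(3)}
threads = [threading.Thread(target=gui_client, args=(n, q))
           for n, q in subs.items()]
for t in threads:
    t.start()
mcq.enqueue(("UPDATE", "orders", 42))   # e.g. fired from a table trigger
for t in threads:
    t.join()
print(sorted(name for name, _ in received))  # all three clients notified
```

Clients that come and go map to subscribe/unsubscribe calls, and only subscribers present at enqueue time see the event, matching the "only interested in events while connected" requirement.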

  • Too many open socket connections causing ColdFusion to crash?

    I’m currently working on an e-commerce site which sends and receives information to/from the client’s order management system via XML over a TCP/IP socket.  It uses a very old java-based custom tag called CFX_JSOCKET (which appears to have been written in 2002) to open the socket, send the data, and get the response.  The code that calls the custom tag and sends/receives data from the OMS pre-dates my working on the site, but its always worked, so I haven’t paid it much attention.
    Back in the summer of 2009 we started experiencing issues with ColdFusion (v.7 on Window 2003 at the time) locking up on a more and more frequent basis, until it ultimately became a daily issue.  After extensive research we narrowed the issue down to the communication between the web server and our client’s order management server.  It seemed the issue with ColdFusion hanging was either related to there being too many connections open, or to these connections hanging and resulting in dead threads.  This an educated guess based on a blog post I’d seen online, not actual monitoring of either CF or the TCP/IP connections.  As soon as we dialed back the timeout on the CFX_JSOCKET tag from 20 seconds to 10, the issue disappeared, so we left it at that and moved on.
    Fast forward to this January. The site is hosted at a new location, on a 64-bit Windows 2008 box running ColdFusion 9. Over the years, traffic on the site has continued to grow. The nature of the client's business means that August and January are their busiest times of the year (back to school for college kids), and in January ColdFusion once again started locking up on an almost-daily basis.
    One significant difference is that the address cleansing software that previously ran on the box and was used to verify shipping addresses is not available for 64-bit, so when we moved to the new server last summer, that task was moved to the client’s order management software and handled via XML like all other interaction with that system. However, while most XML calls to that server (order input, inventory check, etc) take under a second to complete, the address cleansing call regularly takes over 5 seconds to return data, and frequently times out. 
    Once we eliminated the address cleansing call from the checkout process, ColdFusion once again stopped locking up regularly.  So it appears that once again it’s the communication between the web server and the order management server that’s causing problems. We currently have that address cleansing call disabled on the web site in order to keep ColdFusion from crashing, but that’s not a long term solution.
    We don’t have, nor can I find online, the source code for the CFX_JSOCKET custom tag, so I decided I’d write some CF code utilizing the java methods to open the socket, send the data, get the response, and close the connection.  My test code is working fine (under no load).  However, in trying to troubleshoot an issue I had with it, I started monitoring the TCP/IP connections using TCPView.  And I noticed that all the connections to the order management server, whether opened via the custom tag or my new code, remain open in either a TIME_WAIT or FIN_WAIT2 status for well over 2 minutes, even though I know for a fact that my new code is definitely closing the connection from the web server side. 
    They do all close eventually, but I’m wondering 1. Why they’re remaining open that long; 2. Is that normal; and 3. If all these connections remaining open could be what’s causing ColdFusion to choke. 
    Does this sound plausible?  If so, does anyone have any suggestions/recommendations about how to fix it?  My research seems to indicate this might be a matter of the order management system not closing the connection on its end, but I’m in way over my head, and before I go to client and tell them it’s their OMS causing the issue, I need to feel a little more confident that I’m on the right track. 
    Any help or advice would be very greatly appreciated.  And thanks for taking the time to read through my long-winded explanation of the problem.
    Set-up details:
    ColdFusion Version: 9,0,0,251028  Standard 
    Operating System: Windows Server 2008 
    Java Version: 1.6.0_14 
    Java VM Name: Java HotSpot(TM) 64-Bit Server VM 
    Java VM Version: 14.0-b16 
    Thanks,
    Laurie

    Hi Laurie,
    I'm not aware of a custom tag called CFX_JSOCKET. I guess the process you described very well is consuming some resource, and then you hit a problem. The trick is knowing which parameter to adjust. Perhaps you are running out of one of the threads under CF Admin > Server Settings > Request Tuning.
    I expect that if you enable CF Metrics logging, where you can log the threads and other resources, you can find out which parameter needs adjusting. Let me know if you want details on enabling CF Metrics. Perhaps others will have a much better idea than me and can help without the overhead of logging.
    The other interesting thing is that you are on CF 9.0.0. Do you have a reason for not being on updater 1, CF 9.0.1?
    HTH, Carl.
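On the TIME_WAIT question in the post above: the peer that closes the TCP connection first is required to hold the socket in TIME_WAIT for twice the maximum segment lifetime, commonly one to four minutes depending on the OS, so connections lingering for over two minutes after a clean close are normal TCP and not by themselves a leak; the concern is only the volume of them. A minimal loopback sketch of an orderly request/response exchange with an explicit half-close (illustrative Python, not the CFX_JSOCKET tag):

```python
import socket
import threading

# Minimal loopback request/response with an orderly close. The peer that
# sends the first FIN (here the client, via shutdown) is the one left
# holding the socket in TIME_WAIT for ~2*MSL afterwards; that lingering
# state is normal TCP behaviour, not a resource leak by itself.

def serve_once(server_sock, seen):
    conn, _ = server_sock.accept()
    with conn:
        chunks = []
        while True:                   # read until the client's half-close
            data = conn.recv(1024)
            if not data:
                break
            chunks.append(data)
        request = b"".join(chunks)
        seen.append(request)
        conn.sendall(request.upper()) # reply, then close the server side

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))         # ephemeral port
server.listen(1)
port = server.getsockname()[1]

seen = []
t = threading.Thread(target=serve_once, args=(server, seen))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"order status")
client.shutdown(socket.SHUT_WR)       # half-close: "no more data from me"
reply = b""
while True:                           # read the reply until server closes
    part = client.recv(1024)
    if not part:
        break
    reply += part
client.close()                        # this socket now sits in TIME_WAIT
t.join()
server.close()
print(reply)  # b'ORDER STATUS'
```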

  • TOO many OPEN CURSORS during loop of INSERT's

    Running ODP.NET Beta 2 (we can't move up yet, but will do so soon).
    I don't think this is related to ODP itself, but probably to how .NET works with cursors. We have a For/Next loop that executes INSERT INTO xxx VALUES (:a,:b,:c)
    statements. When monitoring v$sysstat (current open cursors), we see the count rising, with 1 INSERT = 1 cursor. If we subsequently try to perform another action, we get "max cursors exceeded". We have already set open_cursors = 1000, but the number of inserts can be very high. Is there a way to release these cursors? (We already call oDataAdaptor.Dispose and oCmd.Dispose, but this does not help.)
    Is it normal for each INSERT to have its own cursor? They all have the same hash value in v$open_cursor. They seem to be released after a while, especially when moving to another ASP.NET page, but it's not clear when that happens and whether it is possible to force the release of the (implicit?) cursors faster.
    Below is a snippet of the code. I unrolled a couple of function calls into the code, so this is just an example; I'm not sure it will run without errors as-is, but the idea should be clear (the code looks rather complex for what it does, but the unrolled functions make the code more generic, and we have a database-independent data layer):
    Try
        ' Set the base INSERT statement
        lBaseSql = _
            "INSERT INTO atable(col1, col2, col3) " & _
            "VALUES(:col1, :col2, :col3)"
        ' Initialize a transaction
        lTransaction = oConnection.BeginTransaction()
        For Each lDataRow In aList.Rows
            ' Execute the statement;
            ' if the execution fails because the row already exists,
            ' the insert is still considered successful.
            Try
                Dim aCommand As New OracleCommand()
                Dim retval As Integer
                ' Associate the connection with the command
                aCommand.Connection = oConnection
                ' Set the command text (SQL statement) and type
                aCommand.CommandText = lBaseSql
                aCommand.CommandType = CommandType.Text
                ' Bind one parameter per column of the row
                lOracleParameter = New OracleParameter("col1", OracleDbType.Varchar2)
                lOracleParameter.Value = CType(aCol1, Object)
                aCommand.Parameters.Add(lOracleParameter)
                lOracleParameter = New OracleParameter("col2", OracleDbType.Varchar2)
                lOracleParameter.Value = CType(lDataRow.Item("col2"), Object)
                aCommand.Parameters.Add(lOracleParameter)
                lOracleParameter = New OracleParameter("col3", OracleDbType.Int32)
                lOracleParameter.Value = CType(lDataRow.Item("col3"), Object)
                aCommand.Parameters.Add(lOracleParameter)
                ' Finally, execute the command
                retval = aCommand.ExecuteNonQuery()
                ' Detach the parameters from the command object
                ' so they can be used again
                aCommand.Parameters.Clear()
            Catch ex As Exception
                Dim lErrorMsg As String
                lErrorMsg = ex.ToString
                If Not lTransaction Is Nothing Then
                    lTransaction.Rollback()
                End If
            End Try
        Next
        lTransaction.Commit()
    Catch ex As Exception
        lTransaction.Rollback()
        Throw New DLDataException(oConnection, ex)
    End Try

    I have run into this problem as well. To my mind,
    Phillip's solution will work but seems completely unnecessary. This is work the provider itself should be managing.
    I've done extensive testing with both ODP and OracleClient. Here is one of the scenarios: in a tight loop of 10,000 records, each of which is either inserted or updated via a stored procedure call, the ODP provider throws the "too many cursors" error at around the 800th iteration, with over 300 cursors open. The exact same code with OracleClient as the provider never throws an error and opens 40+ cursors during execution.
    The application I have updates an Oracle8i database from a DB2 database. There are over 30 tables being updated in near real time. Reusing the command object is not an option, and adding all the code Phillip did for each call seems highly unnecessary. I say Oracle needs to fix this problem. As much as I hate to say it, the Microsoft provider seems superior at this point.
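For reference, the general pattern that avoids opening one server-side cursor per INSERT is to prepare a single statement outside the loop and rebind (or batch) the values on each iteration. A sketch of that shape using Python's stdlib sqlite3, since it is runnable anywhere; with ODP.NET the analogue would be a single OracleCommand created before the loop with its parameters reused. Table and column names are taken from the snippet above:

```python
import sqlite3

# The shape of the fix for "too many open cursors": create the statement
# ONCE outside the loop and rebind values each iteration (or batch them),
# instead of building a fresh command - and hence a fresh server cursor -
# per row. Shown with stdlib sqlite3 as a stand-in for the Oracle driver.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE atable (col1 TEXT, col2 TEXT, col3 INTEGER)")

rows = [("user1", "part-%d" % i, i) for i in range(1000)]

cur = conn.cursor()                 # one cursor for the whole loop
sql = "INSERT INTO atable (col1, col2, col3) VALUES (?, ?, ?)"
with conn:                          # one transaction around the batch
    cur.executemany(sql, rows)      # driver reuses the prepared statement

count = conn.execute("SELECT COUNT(*) FROM atable").fetchone()[0]
print(count)  # 1000
```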

  • Linux logfile monitoring does not work after using "privileged datasource"

    Hello!
    I have noticed strange behaviour on one of my Linux agents (let's call it server_a) regarding logfile monitoring with the "Microsoft.Unix.SCXLog.Datasource" and the "Microsoft.Unix.SCXLog.Privileged.Datasource".
    I successfully tested monitoring of /var/log/messages on server_a with the privileged datasource. That test ran on server_a, and the MP containing its rule was deleted from the management group before the following tests.
    I then wanted to test another logfile (let's call it logfile_a) using the normal datasource "Microsoft.Unix.SCXLog.Datasource" on server_a. So I created the usual logfile rule (rule_a) in XML (which I have done countless times before) for monitoring logfile_a. logfile_a was created by the Linux action account user, with read rights for everyone. After importing the management pack with the monitoring for logfile_a, I got the following warning alert in the SCOM console managing server_a:
      Fehler beim Überprüfen der Protokolldatei "/home/ActionAccountUser/logfile_a" auf dem Host "server_a" als Benutzer "<SCXUser><UserId>ActionAccountUser</UserId><Elev></Elev></SCXUser>";
    An internal error occurred. (The German text means: error while checking the log file on host server_a as the given user. The user ID has been changed to keep our action account anonymous.)
    To make sure I had not made any mistakes in the XML, I created a new logfile rule (rule_b) monitoring logfile_b on server_a using the "Logfile Template" under the Authoring tab. logfile_b was also created by the Linux action account user and had read rights for everyone. Unfortunately, this logfile rule produced the same error:
      Fehler beim Überprüfen der Protokolldatei "/home/ActionAccountUser/logfile_b" auf dem Host "server_a" als Benutzer "<SCXUser><UserId>ActionAccountUser</UserId><Elev></Elev></SCXUser>";
    An internal error occurred.
    Although both rules (rule_a and rule_b) used the "Microsoft.Unix.SCXLog.Datasource", which uses the action account for monitoring logfiles, the above error looks to me as if SCOM wants to use the privileged user, which in this case is not necessary, as the action account can read logfile_a and logfile_b without any problems.
    After a few unsuccessful tries to get both rules to raise an alert, I used the "Microsoft.Unix.SCXLog.Privileged.Datasource" for rule_a as a last resort. Then, suddenly, after importing the updated management pack, I finally received the alert I had desperately been waiting for this whole time.
    Finally, after all that text, here are my questions:
    Could it be that the initial test with the privileged log datasource somehow broke the agent on server_a so that it can no longer monitor logfiles with the standard log datasource? Or does anyone have an idea what went wrong here?
    Like I said, both logfiles could be accessed and changed by the normal action account without any problems, so privileged rights are not needed. I even restarted the SCOM agent in case something was hung.
    I hope I have made the problem clear. If not, don't hesitate to ask questions.
    Thank you and kind regards,
    Patrick

    Hello!
    After all that text, I forgot the most essential information:
    We are currently using OpsMgr 2012 SP1 UR4, and the monitored server (server_a) has agent version 1.4.1-292 installed.
    Thanks for the explanation of how the log provider works. I tried executing the logfilereader just to see if there were any errors, and everything looks fine to me:
    ActionAccount @server_a:/opt/microsoft/scx/bin> ./scxlogfilereader -v
    Version: 1.4.1-292 (Labeled_Build - 20130923L)
    Here are the latest entry in the scx.log file:
    * Microsoft System Center Cross Platform Extensions (SCX)
    * Build number: 1.4.1-292 Labeled_Build
    * Process id: 23186
    * Process started: 2014-03-31T08:29:09,136Z
    * Log format: <date> <severity>     [<code module>:<process id>:<thread id>] <message>
    2014-03-31T08:29:09,138Z Warning    [scx.logfilereader.ReadLogFile:23186:140522274359072] scxlogfilereader - Unexpected exception: Could not find persisted data: Failed to access filesystem item /var/opt/microsoft/scx/lib/state/ActionAccount/LogFileProvider__ActionAccount_shome_sActionAccount_slogfilewithoutsudo.txtEDST02
    2014-03-31T08:29:09,138Z Warning    [scx.core.providers.logfileprovider:5209:140101980321536] LogFileProvider InvokeLogFileReader - Exception: Internal Error: Unexpected return code running '/opt/microsoft/scx/bin/scxlogfilereader -p': 4
    2014-03-31T08:29:09,138Z Warning    [scx.core.providers.logfileprovider:5209:140101980321536] BaseProvider::InvokeMethod() - Internal Error: Unexpected return code running '/opt/microsoft/scx/bin/scxlogfilereader -p': 4 - [/home/serviceb/ScxCore_URSP1_SUSE_110_x64/source/code/providers/logfile_provider/logfileprovider.cpp:442]
    * Microsoft System Center Cross Platform Extensions (SCX)
    * Build number: 1.4.1-292 Labeled_Build
    * Process id: 23284
    * Process started: 2014-03-31T08:30:06,139Z
    * Log format: <date> <severity>     [<code module>:<process id>:<thread id>] <message>
    2014-03-31T08:30:06,140Z Warning    [scx.logfilereader.ReadLogFile:23284:140016517941024] scxlogfilereader - Unexpected exception: Could not find persisted data: Failed to access filesystem item /var/opt/microsoft/scx/lib/state/ActionAccount/LogFileProvider__ActionAccount_shome_sActionAccount_stest.txtEDST02
    2014-03-31T08:30:06,142Z Warning    [scx.core.providers.logfileprovider:5209:140101980321536] LogFileProvider InvokeLogFileReader - Exception: Internal Error: Unexpected return code running '/opt/microsoft/scx/bin/scxlogfilereader -p': 4
    2014-03-31T08:30:06,143Z Warning    [scx.core.providers.logfileprovider:5209:140101980321536] BaseProvider::InvokeMethod() - Internal Error: Unexpected return code running '/opt/microsoft/scx/bin/scxlogfilereader -p': 4 - [/home/serviceb/ScxCore_URSP1_SUSE_110_x64/source/code/providers/logfile_provider/logfileprovider.cpp:442]
    Strangely, I could not access the action account user's directory under /var/opt/microsoft/scx/log as the "ActionAccount" user. Is it OK for the directory to have the following rights: drwx------ 2 1001 users? Instead of "1001" it should say "ActionAccount",
    right?
    This could be a bit far-fetched, but perhaps the logfile provider can't access logfiles as the "ActionAccount" on this server because it needs to write to the scx.log file. But since the "ActionAccount" can't access that file, the logfile provider throws
    an error. And as the privileged account the rule works flawlessly, since the logfile provider running in root context can access everything.
    I don't know if that makes sense, but right now it sounds logical to me.
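The suspicion above - a per-user state directory owned by a stale numeric UID with mode drwx------ that the action account can no longer enter - is easy to check mechanically. A small Unix-only Python sketch of that kind of ownership/permission diagnosis; the real paths are the /var/opt/microsoft/scx ones from the log above, and a temp directory stands in for them here:

```python
import os
import stat
import tempfile

# Mechanical check for the suspected problem: a state directory owned by
# a stale numeric UID and/or lacking owner rwx access, which the action
# account can no longer enter. Unix-only sketch; a temp directory stands
# in for the real agent paths.

def diagnose_dir(path, expected_uid):
    st = os.stat(path)
    mode = stat.S_IMODE(st.st_mode)
    problems = []
    if st.st_uid != expected_uid:
        problems.append(f"owned by uid {st.st_uid}, expected uid {expected_uid}")
    if mode & stat.S_IRWXU != stat.S_IRWXU:
        problems.append(f"owner lacks rwx access (mode {oct(mode)})")
    return problems

d = tempfile.mkdtemp()
os.chmod(d, 0o700)
print(diagnose_dir(d, os.getuid()))      # [] - directory looks healthy
print(diagnose_dir(d, os.getuid() + 1))  # reports the ownership mismatch
```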

  • Need to change the targeting group of a rule or monitor after an alert is created

    Hi All,
    I have created many alerts and they are working fine. Due to business requirements, we have now installed the Windows Server 2012 operating system in our production environment, but we have targeted the
    "Windows Server 2008 R2 Full Operating System" group, as per the screenshot below, and we now have to import the management pack for Windows Server 2012 as well.
    What we plan is to change the targeting group from "Windows Server 2008 R2 Full Operating System"
    to the "Windows Server Operating System Group", so that the alert/monitor/rule targets all Windows servers discovered in SCOM rather than only the servers running Windows Server 2008 R2.
    I was also not able to set overrides for this, as that server did not come under "Windows Server 2008 R2 Full Operating System", it being a Windows Server 2012 agent.
    I could also go ahead and create new alerts, but I have created about 1,000 custom alerts and cannot recreate them all.
    Is there any way to change them? If yes, can I do a bulk change via PowerShell?
    Below is a screenshot of what I really want. Can anyone please help?
    Gautam.75801

    You can't really change the target class of a monitor in a sealed vendor pack. If this is your own custom pack, then you can change the target class no problem, but this needs to be done in the unsealed XML (using VSAE or some other authoring tool).
    Then you can seal the pack and re-import it (this should be upgrade-compatible, since you are just changing the target).
    I'm not familiar with the particular monitor in your screenshot, but it looks like it should probably target Exchange. If so, then I would recommend targeting the closest typed class that the monitor should run against - in this case, some type of Exchange class that is already in the Exchange management pack.
    Otherwise, you can also create your own custom class for targeting, which I describe in detail on my blog.
    Here are all my sample VSAE fragments.
    Here is an example of using the Application Component base for your new class.
    Here is an example of using the Local Application base for your new class.
    Jonathan Almquist | SCOMskills, LLC (http://scomskills.com)

  • Too Many Indexes in a Table

    Dear Gurus,
    I've got some performance problems with a specific table called CC_FICHA_FINANCEIRA; the structure of the table is described below:
    NUMBER OF RECORDS: ABOUT 1,600,000
    NAME NULL TYPE
    CD_FUNDACAO NOT NULL VARCHAR2(2)
    NUM_INSCRICAO NOT NULL VARCHAR2(9)
    CD_PLANO NOT NULL VARCHAR2(4)
    CD_TIPO_CONTRIBUICAO NOT NULL VARCHAR2(2)
    ANO_REF NOT NULL VARCHAR2(4)
    MES_REF NOT NULL VARCHAR2(2)
    SEQ_CONTRIBUICAO NOT NULL NUMBER(5)
    CD_OPERACAO NOT NULL VARCHAR2(1)
    SRC NUMBER(15,2)
    REMUNERACAO NUMBER(15,2)
    CONTRIB_PARTICIPANTE NUMBER(15,2)
    CONTRIB_EMPRESA NUMBER(15,2)
    DIF_CONTRIB_PARTICIPANTE NUMBER(15,2)
    DIF_CONTRIB_EMPRESA NUMBER(15,2)
    TAXA_ADM_PARTICIPANTE NUMBER(15,2)
    TAXA_ADM_EMPRESA NUMBER(15,2)
    QTD_COTA_RP_PARTICIPANTE NUMBER(15,6)
    QTD_COTA_FD_PARTICIPANTE NUMBER(15,6)
    QTD_COTA_RP_EMPRESA NUMBER(15,6)
    QTD_COTA_FD_EMPRESA NUMBER(15,6)
    ANO_COMP NOT NULL VARCHAR2(4)
    MES_COMP NOT NULL VARCHAR2(2)
    CD_ORIGEM VARCHAR2(2)
    EXPORTADO VARCHAR2(1)
    SEQ_PP_PR_PAR NUMBER(10)
    ANO_PP_PR_PAR NUMBER(5)
    SEQ_PP_PR_EMP NUMBER(10)
    ANO_PP_PR_EMP NUMBER(5)
    SEQ_PP_PR_TX_PAR NUMBER(10)
    ANO_PP_PR_TX_PAR NUMBER(5)
    SEQ_PP_PR_TX_EMP NUMBER(10)
    ANO_PP_PR_TX_EMP NUMBER(5)
    I think the indexes on this table may be the problem; there are too many of them. They are described below:
    INDEX COLUMNS
    CC_FICHA_FINANCEIRA_PK CD_FUNDACAO
    NUM_INSCRICAO
    CD_PLANO
    CD_TIPO_CONTRIBUICAO
    ANO_REF
    MES_REF
    SEQ_CONTRIBUICAO
    CD_OPERACAO
    ANO_COMP
    MES_COMP
    CC_FICHA_FINANCEIRA_IDX_002 CD_FUNDACAO
    NUM_INSCRICAO
    CD_PLANO
    CD_TIPO_CONTRIBUICAO
    ANO_COMP
    ANO_REF
    MES_COMP
    MES_REF
    SRC
    CC_FICHA_FINANCEIRA_IDX_006 CD_ORIGEM
    CC_FICHA_FINANCEIRA_IDX_007 CD_TIPO_CONTRIBUICAO
    CC_FICHA_FINANCEIRA_IDX2 CD_FUNDACAO
    ANO_REF
    MES_REF
    NUM_INSCRICAO
    CD_PLANO
    CD_TIPO_CONTRIBUICAO
    CONTRIB_EMPRESA
    CC_FICHA_FINANCEIRA_IDX3 CD_FUNDACAO
    ANO_REF
    MES_REF
    CD_PLANO
    CD_TIPO_CONTRIBUICAO
    SEQ_CONTRIBUICAO
    Some columns appear in four indexes. Is that right? What is the best way to analyze those indexes?
    Regards...

    Hi,
    You can monitor index usage to find out whether an index is actually used by the application.
    See MetaLink note 136642.1, "Identifying Unused Indexes with the ALTER INDEX MONITORING USAGE Command".
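    A minimal sketch of that technique, using one of the index names from the post above (run it as the index owner; note that on older releases V$OBJECT_USAGE only shows indexes owned by the connected schema):

    ```sql
    -- Turn on usage monitoring for a candidate index
    ALTER INDEX cc_ficha_financeira_idx_006 MONITORING USAGE;

    -- ...let a representative workload run, then check the flag...
    SELECT index_name, monitoring, used, start_monitoring, end_monitoring
      FROM v$object_usage
     WHERE index_name = 'CC_FICHA_FINANCEIRA_IDX_006';

    -- Switch monitoring off again when done
    ALTER INDEX cc_ficha_financeira_idx_006 NOMONITORING USAGE;
    ```

    Keep in mind that USED = 'YES' only tells you the optimizer picked the index at least once during the monitoring window, not how often; monitor through a full business cycle before dropping anything.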
    Nicolas.

  • HT1212 My iPad is disabled because of too many incorrect passcode attempts. I tried connecting to iTunes, but it still said Enter Passcode on iPad, and it offered only Try Again, Cancel, and Find an answer to this problem, or something similar to that. What do

    My iPad is disabled because of too many attempts, and I tried connecting to iTunes, but nothing happened the way the Apple Support page said it would. What do I do???

    The answer is in the article. Finish reading the second part:
    If you see one of following alerts, you need to erase the device:
    "iTunes could not connect to the [device] because it is locked with a passcode. You must enter your passcode on the [device] before it can be used with iTunes."
    "You haven't chosen to have [device] trust this computer"
    If you have Find My iPhone enabled, you can use Remote Wipe to erase the contents of your device. If you have been using iCloud to back up, you may be able to restore the most recent backup to reset the passcode after the device has been erased.
    Alternatively, place the device in recovery mode and restore it to erase the device:
    Disconnect the USB cable from the device, but leave the other end of the cable connected to your computer's USB port.
    Turn off the device: Press and hold the Sleep/Wake button for a few seconds until the red slider appears, then slide the slider. Wait for the device to shut down.
    While pressing and holding the Home button, reconnect the USB cable to the device. The device should turn on.
    Continue holding the Home button until you see the Connect to iTunes screen.
    iTunes will alert you that it has detected a device in recovery mode. Click OK, and then restore the device.

  • Too many BPM data collection jobs on backend system

    Hi all,
    We find about 40,000 data collection jobs running on our ECC6 system, far too many.
    We run about 12 solutions, all linked to the same backend ECC6 system. Most probably this is part of the problem. We plan to scale down to 1 solution rather than the country-based approach.
    But here we are now, and I have these questions.
    1. How can I relate a BPM_DATA_COLLECTION job on ECC6 back to a particular solution? The job log gives me a monitor ID, but I can't relate that back to a solution.
    2. If I deactivate a solution in the solution overview, does that immediately cancel the data collection for that solution ?
    3. In the monitoring schedule on a business process step, we sometimes have intervals defined as 5 minutes, sometimes 60. Strangely, the drop-down for that field does not always offer the same list of values. Even within a solution, one step offers a long list of intervals, while the next step in the same business process only lets me choose between blank and 5 minutes.
    How is this defined ?
    Thanks in advance,
    Rad.

    Hi,
    How did you manage to get rid of this issue? I am facing the same.
    Thanks,
    Manan

Maybe you are looking for

  • The parameter 'token' cannot be a null or empty string sharepoint

    Hello, I am trying to create a SharePoint 2013 web app for Office 365 using Visual Studio 2012. When I debug and run the SharePoint custom web app, it shows the error ("the parameter 'token' cannot be a null or empty string") in TokenHelper.cs

  • Firefox doesn't display my HTML properly (the way it looks in my editor). Explorer displays the page the way my HTML editor does.

    Hello, I develop my web site with an HTML editor. When I use preview, everything is fine. If I use Explorer, everything is fine as well, but if I use Firefox to display the page, the images and text are not where they should be.

  • Supplier terms & payment date

    hi all, I need a report that provides the following: 1. Is there any standard report to select suppliers and their terms of payment? 2. A report that includes PO number, PO date, invoice date, baseline date, posting date and due date. Thanks

  • Dynamic table column creation

    Hi All, I am trying to create a table where the number of columns is equal to the number of entries in an output table in my context. How do I go about creating columns dynamically, dependent on the number of entries in a table? Kind regards, Seb

  • Incomplete BD Burning

    Further to my previous problem with my TV player not playing a PE9-made BD disc, I examined the disc with ImgBurn, and it reports that the disc status is 'incomplete'. The M2TS file plays through to the last fadeout on my computer and in a Panasonic T