Alert monitor for long running background jobs
Hello,
I have to configure an alert monitor for long-running background jobs (those running more than 20000 secs) using a rule-based monitor. I have created a rule-based MTE and assigned the rule CCMS_GET_MTE_BY_CLASS to a virtual node, but I don't find a node where I can specify the time.
Could anyone guide me on how to do this?
Thanks,
Kasi
Hi *,
I think the missing bit is where to set the maximum runtime. The runtime is set in the collection method and not the MTE class.
Process: RZ20 --> SAP CCMS Technical Expert Monitors --> All Contexts on Local Application Server --> Background --> Long-Running Jobs. Click on 'Jobs over Runtime Limits', then Properties; click the Methods tab, then double-click 'CCMS_LONGRUNNING_JOB_COLLECT'. In the Parameters tab you can then set the maximum runtime.
If you need to monitor specific jobs, follow the process (http://help.sap.com/saphelp_nw70/helpdata/en/1d/ab3207b610e3408fff44d6b1de15e6/content.htm) to create the rule based monitor, then follow this process to set the runtime.
Hope this helps.
Regards,
Riyaan.
Edited by: Riyaan Mahri on Oct 22, 2009 5:07 PM
Similar Messages
-
Hi,
We are facing a problem with an MM job: sometimes it finishes in a few minutes, and sometimes it runs for so long that we have to kill the job manually.
Can you please tell me what the possible reasons for this are, why a job that normally finishes on schedule runs for a long time, and how to find the root cause.
Your reply is very much appreciated.
Regards
Balaji Vedagiri
Hi,
Please confirm you have enough disk space.
Please do a consistency check of the BTC Processing System as follows:
1. Run Transaction SM65
2. Select Goto ... Additional tests
3. select these options: Perform TemSe check
Consistency check DB tables
List
Check profile parameters
Check host names
Determine no. of jobs in queue
All job servers
and then click execute.
4. Once you get the results check to see if you have any inconsistencies
in any of your tables.
5. If there are any inconsistencies reported then run the "Background
Processing Analysis" (SM65 ... Goto ... Additional Tests) again.
This time check the "Consistency check DB tables" and the
"Remove Inconsistencies" options.
6. Run this a couple of times until all inconsistencies are removed from
the tables.
Make sure you run this SM65 check when the system is quiet and no other
batch jobs are running as this would put a lock on the TBTCO table till
it finishes. This table may be needed by any other batch job that is
running or scheduled to run at the time SM65 checks are running.
Please confirm you are running the following reports daily as per note #48400:
RSPO0041 (or RSPO1041), RSBTCDEL: To delete old TemSe objects
RSPO1043 and RSTS0020 for the consistency check.
Regards,
Snow -
Trigger Alert for Long Running Jobs in CPS
Hello,
I am currently trying to make a trigger so that I can monitor the long running jobs in the underlying ERP. Can you help me on the APIs to use?
I'm trying to modify my previous alert - checking of failed jobs:
// only check jobs that ended in error
if (jcsPostRunningContext.getNewStatus().equals(JobStatus.Error)) {
  String alertList = "email address";
  String[] group = alertList.split(",");
  for (int i = 0; i < group.length; i++) {
    JobDefinition jobDefinition = jcsSession.getJobDefinitionByName("System_Mail_Send");
    Job aJob = jobDefinition.prepare();
    aJob.getJobParameterByName("To").setInValueString(group[i]);
    aJob.getJobParameterByName("Subject").setInValueString("Job " + jcsJob.getJobId() + " failed");
    aJob.getJobParameterByName("Text").setInValueString(jcsJob.getDescription());
  }
}
I'm trying to find the API so I can subtract the job's start time from the system time and compare it to 8 hours.
if (jcsPostRunningContext.getNewStatus().equals(JobStatus.Error)) { <-- Can I have it as ( ( System Time - Start Run Time ) > 8 Hours )
Or is there an easier way? Can somebody advise me on how to go about this one?
Hi,
You can do it using the API:
if ((jcsJob.getRunEnd().getUTCMilliSecs() - jcsJob.getRunStart().getUTCMilliSecs()) > (8L*60*60*1000))
(Note: 8 hours is 8*60*60*1000 milliseconds.)
This has the drawback that you will only be notified when the job finally ends (maybe after much more than 8 hours!).
An easier and more integrated method is to use the Runtime Limits tab on your Job Definition or Job Chain. This can raise an event when the runtime limit is reached, and the event can trigger your notification method.
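For clarity, the millisecond arithmetic behind the 8-hour check can be sketched in plain Java, outside the CPS API; `RuntimeCheck`, `overLimit` and `LIMIT_MS` are illustrative names of my own, not part of RedwoodScript:

```java
public class RuntimeCheck {
    // 8 hours expressed in milliseconds: 8 * 60 * 60 * 1000 = 28,800,000
    static final long LIMIT_MS = 8L * 60 * 60 * 1000;

    // true if the elapsed time between start and now exceeds the limit
    static boolean overLimit(long startMillis, long nowMillis) {
        return (nowMillis - startMillis) > LIMIT_MS;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        long nineHoursAgo = now - 9L * 60 * 60 * 1000;
        System.out.println(overLimit(nineHoursAgo, now)); // prints true
    }
}
```

Against a jcsJob you would presumably pass getRunStart().getUTCMilliSecs() as startMillis and the current UTC milliseconds as nowMillis.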
Regards Gerben -
RZ20 - Is there an alert for long running transactions?
In RZ20 is there an alert for long running transactions?
http://help.sap.com/saphelp_nw04s/helpdata/en/9c/f78b3ba19dfe47e10000000a11402f/content.htm
This document clearly explains your problem.
"Reward points if useful" -
Can a long running batch job causing deadlock bring server performance down
Hi
I have a customer with a long-running batch job (approx 6 hrs); recently we experienced a performance issue where the job now takes >12 hrs and the database server is crawling. The alert.log shows some deadlocks.
The batch job is in fact many parallel child jobs running at the same time, which would explain the deadlocks.
Thus, I'm wondering whether deadlocks could cause the whole server to crawl: even connecting to the database using Toad is slow, as is doing ls -lrt.
Thanks
Rgds
UngKok Aik wrote:
According to the documentation, a complex deadlock can make the job appear hung and affect throughput, but it didn't mention how it would make the whole server slow down. My initial thought would be that the rolling back and reconstruction of CR copies would use up the CPU.
I think your ideas on rolling back, CR construction etc. are good guesses. If you have deadlocks, then you have multiple processes working in the same place in the database at the same time, so there may be other "near-deadlocks" that cause all sorts of interference problems.
Obviously you could have processes queueing for the same resource for some time without getting into a deadlock.
You can have a long-running update hit a row which was changed by another user after the update started, which would cause the long-running update to roll back and start again (Tom Kyte refers to this as 'write consistency' if you want to search his website for a discussion of the topic).
Once concurrent processes start sliding out of their correct sequences because of a few delays, reports that used to run when nothing else was going on may suddenly find themselves running while updates are going on, and doing many more reads (physical I/O) of the undo tablespace to take blocks a long way back into the past.
And so on...
Anyway, according to the customer, the problem seems to be related to the lgpr_size, as the problem disappeared after they reverted it to its original default value, 0. I couldn't figure out what the lgpr_size is - can you explain?
Thanks
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
"Science is more than a body of knowledge; it is a way of thinking" Carl Sagan -
ALV shows in report, but no data in spool when run as a background job
My program has a problem: when I run it online, the ALV result shows in the report, but when I view the spool (after running it as a background job) there is no data (other programs can see their results in the spool).
Please help
here is some example of my program
********************************declare internal table*****************************
* internal table output for BDC
data : begin of t_output occurs 0,
bukrs type anla-bukrs,
anln1 type anla-anln1,
anln2 type anla-anln2,
zugdt type anla-zugdt,
result(70) type c,
end of t_output.
*****get data from loop********************************
loop at t_anla.
CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
EXPORTING
INPUT = t_anla-anln1
IMPORTING
OUTPUT = t_anla-anln1.
CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
EXPORTING
INPUT = t_anla-anln2
IMPORTING
OUTPUT = t_anla-anln2.
* check record is correct or not
select single bukrs anln1 anln2 zugdt
into w_output
from anla
where bukrs = t_anla-bukrs and
anln1 = t_anla-anln1 and
anln2 = t_anla-anln2 and
zugdt = '00000000'.
* if record is correct
if sy-subrc = 0 and w_output-zugdt = '00000000'.
w_output-bukrs = t_anla-bukrs.
w_output-anln1 = t_anla-anln1.
w_output-anln2 = t_anla-anln2.
w_output-result = 'Yes : this asset can be deleted'.
append w_output to t_output.
* if record is not correct
elseif sy-subrc = 0 and w_output-zugdt <> '00000000'.
* error: this asset already has a value
v_have_error = 'X'.
w_output-bukrs = t_anla-bukrs.
w_output-anln1 = t_anla-anln1.
w_output-anln2 = t_anla-anln2.
w_output-result = 'Error : this asset already has a value'.
append w_output to t_output.
else.
* error: this asset does not exist in table anla
v_have_error = 'X'.
w_output-bukrs = t_anla-bukrs.
w_output-anln1 = t_anla-anln1.
w_output-anln2 = t_anla-anln2.
w_output-result = 'Error : this asset does not exist'.
append w_output to t_output.
endif.
*end of check record is correct or not
clear w_output.
endloop.
******************************show data in ALV***************************************************
* show data in ALV
perform display_report_ALV.
*& Form display_report_ALV
form display_report_ALV.
DATA: LT_FIELD_CAT TYPE SLIS_T_FIELDCAT_ALV,
LT_EVENTS TYPE SLIS_T_EVENT,
LV_REPID LIKE SY-REPID.
PERFORM ALV_DEFINE_FIELD_CAT USING LT_FIELD_CAT.
PERFORM ALV_HEADER_BUILD USING T_LIST_TOP_OF_PAGE[].
PERFORM ALV_EVENTTAB_BUILD USING LT_EVENTS[].
LV_REPID = SY-REPID.
CALL FUNCTION 'REUSE_ALV_GRID_DISPLAY'
EXPORTING
I_CALLBACK_PROGRAM = LV_REPID
IT_FIELDCAT = LT_FIELD_CAT
I_SAVE = 'A'
IT_EVENTS = LT_EVENTS[]
TABLES
T_OUTTAB = t_output
EXCEPTIONS
PROGRAM_ERROR = 1
OTHERS = 2.
IF SY-SUBRC NE 0.
WRITE: / 'Return Code : ', SY-SUBRC,
'from FUNCTION REUSE_ALV_GRID_DISPLAY'.
ENDIF.
endform.
*& Form alv_define_field_cat
*      -->P_LT_FIELD_CAT  text
FORM ALV_DEFINE_FIELD_CAT USING TB_FCAT TYPE SLIS_T_FIELDCAT_ALV.
DATA: WA_FIELDCAT LIKE LINE OF TB_FCAT,
LV_COL_POS TYPE I.
DEFINE FIELD_CAT.
CLEAR WA_FIELDCAT.
ADD 1 TO LV_COL_POS.
WA_FIELDCAT-FIELDNAME = &1.
WA_FIELDCAT-REF_TABNAME = &2.
WA_FIELDCAT-COL_POS = LV_COL_POS.
WA_FIELDCAT-KEY = &3.
WA_FIELDCAT-NO_OUT = &4.
WA_FIELDCAT-REF_FIELDNAME = &5.
WA_FIELDCAT-DDICTXT = 'M'.
IF NOT &6 IS INITIAL.
WA_FIELDCAT-SELTEXT_L = &6.
WA_FIELDCAT-SELTEXT_M = &6.
WA_FIELDCAT-SELTEXT_S = &6.
ENDIF.
WA_FIELDCAT-DO_SUM = &7.
WA_FIELDCAT-OUTPUTLEN = &8.
APPEND WA_FIELDCAT TO TB_FCAT.
END-OF-DEFINITION.
FIELD_CAT 'BUKRS' 'ANLA' 'X' '' 'BUKRS' 'Company Code' '' ''.
FIELD_CAT 'ANLN1' 'ANLA' 'X' '' 'ANLN1' 'Asset Number' '' ''.
FIELD_CAT 'ANLN2' 'ANLA' 'X' '' 'ANLN2' 'Asset Sub Number' '' ''.
FIELD_CAT 'ATEXT' 'T5EAE' 'X' '' 'ATEXT' 'Result' '' ''.
FIELD_CAT 'RESULT' '' 'X' '' 'RESULT' 'RESULT' '' ''.
ENDFORM. " alv_define_field_cat
Hi,
Check this code..
FORM display_report_alv.
DATA: lt_field_cat TYPE slis_t_fieldcat_alv,
lt_events TYPE slis_t_event,
lv_repid LIKE sy-repid.
PERFORM alv_define_field_cat USING lt_field_cat.
PERFORM alv_header_build USING t_list_top_of_page[].
PERFORM alv_eventtab_build USING lt_events[].
lv_repid = sy-repid.
IF sy-batch EQ 'X'. " sy-batch is set when running in background; use list display there
CALL FUNCTION 'REUSE_ALV_LIST_DISPLAY'
EXPORTING
i_callback_program = lv_repid
it_fieldcat = lt_field_cat
i_save = 'A'
it_events = lt_events[]
TABLES
t_outtab = t_output
EXCEPTIONS
program_error = 1
OTHERS = 2.
ELSE.
CALL FUNCTION 'REUSE_ALV_GRID_DISPLAY'
EXPORTING
i_callback_program = lv_repid
it_fieldcat = lt_field_cat
i_save = 'A'
it_events = lt_events[]
TABLES
t_outtab = t_output
EXCEPTIONS
program_error = 1
OTHERS = 2.
ENDIF.
IF sy-subrc NE 0.
WRITE: / 'Return Code : ', sy-subrc,
'from the ALV display function'.
ENDIF.
ENDFORM. "display_report_ALV -
How To Run Background Job on Specific Date of Every Month
Hi,
I am looking for an option where we can run a background job on a specific date!
Example: job name Zprg1 > run on the 18th of each month, then I want to repeat it after 3 days (on the 22nd), then after 5 days (on the 28th).
Please suggest.
Hi swapZ,
this is very easy:
1. Schedule the job Zprg1 on the 18th of this month and enter a monthly period.
2. Copy this job to the new name Zprg1_plus3 and repeat the action of point 1 with the date 22.04.2015.
3. Copy this job to the new name Zprg1_plus5 and repeat the action of point 1 with the date 28.04.2015.
You will get three jobs running every month, on the 18th, 22nd and 28th.
Best regards
Willi Eimler -
BAPI for Scheduling a background job
Hi all,
Is there any BAPI for scheduling a background job?
I mean, can we schedule a background job using a BAPI?
If not, how can we create a BAPI for scheduling a background job?
Thanks & Regards,
Azhar
Hi,
Use following BAPIs for scheduling a job in background
BAPI_XBP_JOB_OPEN - This BAPI solves the first step in scheduling a job in the R/3 background processing system. You can create the job with this method.
Using additional BAPI calls BAPI_XBP_JOB_ADD_ABAP_STEP and BAPI_XBP_JOB_ADD_EXT_STEP, you can add job steps to the job.
With the BAPI call BAPI_XBP_JOB_CLOSE, you can finish defining the job and transfer it to the background processing system with the status Scheduled without start date.
You can then execute the job with the BAPI calls BAPI_XBP_JOB_START_IMMEDIATELY or BAPI_XBP_JOB_START_ASAP.
I hope it would help you.
Regards,
Venkatram -
Alert monitor for Forecast Exceptions?
Does Flexible Planning offer any type of Alert monitor for flagging Forecast Exceptions? We're creating a statistical forecast in MC94 and would like to set up some automatic triggers or alerts. I know APO has this functionality, but I have yet to be able to find any type of exception reporting.
Any help would be greatly appreciated.
Thanks.
Hi,
Do you still have the issue? Please let us know if you need further help.
BR
Claire -
Profiler execution plan ONLY for long running queries
The Duration column only applies to specific Profiler events; however, I'd like to capture the execution plan ONLY for queries running over 10 minutes.
Is there a way to do this using Xevents?
Anyone knows?
Thanks!
Paula
I've wanted that too, but could not find a way to get it from Profiler.
It may be possible with XEvents (or even without them) to watch for long-running queries and then get the plan from the cache, where it will probably stick for some time, using DMVs.
Josh -
Is it possible to delete a running background job programmatically?
Hi Gurus,
is it possible to delete a running background job programmatically? If yes, how can we do that?
Thanks in advance
Or, as said by Sandeep, you can use that FM.
Before calling it, select the required data from table TBTCO and pass it to the FM.
Considerations for long running publication extensions
We are considering implementing a post processing publication extension which may take several minutes to execute. One of our concerns with this strategy is that the publication extension may bog down the Adaptive Processing Server.
Are there any general considerations / recommendations for long running post processing publication extensions?
Thanks!
Generally, creating a new thread is an expensive process. Well, everything is relative: my laptop can create, run and stop 7,000+ threads per second (test program below, YMMV). If you are dealing with thousands of thread creations per second, pooling may be sensible; if not, premature optimization is the root of all evil, etc.
public class ThreadSpeed {
    public static void main(String args[]) throws Exception {
        System.out.println("Ignore the first few timings.");
        System.out.println("They may include Hotspot compilation time.");
        System.out.println("I hope you are running me with \"java -server\"!");
        for (int n = 0; n < 5; n++)
            doit();
        System.out.println("Did you run me with \"java -server\"? You should!");
    }

    public static void doit() throws Exception {
        long start = System.currentTimeMillis();
        for (int n = 0; n < 10000; n++) {
            Thread thread = new Thread(new MyRunnable());
            thread.start();
            thread.join();
        }
        long end = System.currentTimeMillis();
        System.out.println("thread time " + (end - start) + " ms");
    }

    static class MyRunnable implements Runnable {
        public void run() {
        }
    }
}
Edited by: sjasja on Jan 14, 2010 2:20 AM
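As a counterpoint to creating one thread per task as above, a pooled variant using java.util.concurrent reuses a fixed set of worker threads; `PoolSpeed` and `runTasks` are illustrative names of my own, and the pool size of 4 is an arbitrary assumption:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolSpeed {
    // submit 'count' trivial tasks to a fixed pool and return how many ran
    static int runTasks(int count) {
        AtomicInteger done = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(4); // 4 reused workers
        for (int n = 0; n < count; n++) {
            pool.submit(done::incrementAndGet);
        }
        pool.shutdown();
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return done.get();
    }

    public static void main(String[] args) {
        System.out.println(runTasks(10000)); // prints 10000
    }
}
```

The pool amortizes thread creation across all tasks, which is the saving that matters only when you really do create thousands of threads per second.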
Tracking completion status for long running DML operations
Does anybody know:
Is there any possibility to track the completion status of long-running DML operations (for example, how many rows have been inserted)?
For example, if I execute an INSERT statement which runs for several hours, it is very important to be able to estimate the total time for the operation.
Thanks in advance.
I'm working with Oracle8 at present, and unfortunately this solution (V$SESSION_LONGOPS) cannot help me.
On Oracle8 it works, but with some restrictions:
- You must be using the cost-based optimizer
- Set the TIMED_STATISTICS or SQL_TRACE parameter to TRUE
- Gather statistics for your objects with the ANALYZE statement or the DBMS_STATS package. -
Is there any time out defined for long running transaction?
hi,
I have to write one big data-transfer script. Though a transaction is not required here, I was planning to use one.
Please tell me, is there any timeout for long-running transactions? I have to run the script from the database itself.
Yours sincerely
Can you show us an example of your script? You can divide the transaction into small chunks to reduce time and locking/blocking as well.
Best Regards, Uri Dimant, SQL Server MVP
http://sqlblog.com/blogs/uri_dimant/
Alert Monitoring for tcodes!
Hi Experts,
I need to monitor ST22 (short dumps) and SM13 (update requests) through CCMS. Please let us know how we can set up alert monitoring for particular tcodes.
Hi,
For this you first need to install CCMS agents. If you have Solution Manager in the landscape (you should, really), you can monitor the same from SolMan.
[Here is the link |http://help.sap.com/saphelp_nw70/helpdata/en/e2/eff640fa4b8631e10000000a1550b0/content.htm]
Then you can trigger mail notifications from Solution Manager for update errors and dumps as well.
Regards,
Gagan Deep Kaushal