Time limit between runs of a job?

A little background - I'm trying to create a trigger that logs all logins to a table. While testing, I noticed that if I deleted the table the trigger was trying to write to, nobody could log on, because the trigger would fail. This forced me to start SQL Server in single-user mode with the -f parameter, connect with SQLCMD, and delete the trigger.
I wanted to fix the trigger so that this would not happen. Here's what I did...
First, I created a table, msdb.MySchema.LogonHistory, sketched below.
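The table is roughly this (just a sketch; the column names come from the trigger's INSERT below, but the data types are my assumptions):
USE msdb;
GO
CREATE SCHEMA MySchema;
GO
CREATE TABLE MySchema.LogonHistory
(
    DBUser     sysname       NOT NULL,  -- USER
    AppName    nvarchar(128) NULL,      -- APP_NAME()
    LogonTime  datetime      NOT NULL,  -- GETDATE()
    SystemUser sysname       NOT NULL   -- ORIGINAL_LOGIN()
);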
Then I created this trigger...
CREATE TRIGGER [Tr_ServerLogon]
ON ALL SERVER WITH EXECUTE AS 'sa'
FOR LOGON
AS
BEGIN
    -- Only log when the table exists; otherwise do nothing so the login succeeds
    IF OBJECT_ID('msdb.MySchema.LogonHistory', 'U') IS NOT NULL
    BEGIN
        INSERT INTO msdb.MySchema.LogonHistory
            (DBUser, AppName, LogonTime, SystemUser)
        VALUES (USER, APP_NAME(), GETDATE(), ORIGINAL_LOGIN())
    END
END
This worked perfectly. If I deleted the table people could still log in.
I then wanted to expand it with an ELSE branch that sends an e-mail if the table doesn't exist. The problem is, I received an e-mail for EVERY connection. Simply logging in to Management Studio adds 5-6 rows to my LogonHistory table. Here's the ELSE statement I added:
ELSE
    BEGIN
        EXEC msdb.dbo.sp_send_dbmail
            @profile_name = 'DBMail',
            @recipients = '[email protected]',
            @subject = 'Your table doesnt exist',
            @body = 'Error'
    END
I was thinking that instead of executing sp_send_dbmail directly, I could start a job that sends the mail. The question is: can I limit the job to run only once every hour? Or is there something else I could do within the trigger? I was thinking I could check send_request_date in sysmail_mailitems where subject = 'Your table doesnt exist', compare the send_request_date to the current GETDATE(), and only e-mail if the difference is greater than 1 hour (sketched below).
Just want to make sure there's not something easier/obvious that I'm missing.
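Something like this is what I have in mind for the ELSE branch (just a sketch; it assumes the trigger's 'sa' context is allowed to read msdb.dbo.sysmail_mailitems):
ELSE
BEGIN
    -- Throttle: only send if no mail with this subject was requested in the last hour
    IF NOT EXISTS (SELECT 1
                   FROM msdb.dbo.sysmail_mailitems
                   WHERE subject = 'Your table doesnt exist'
                     AND send_request_date > DATEADD(HOUR, -1, GETDATE()))
    BEGIN
        EXEC msdb.dbo.sp_send_dbmail
            @profile_name = 'DBMail',
            @recipients = '[email protected]',
            @subject = 'Your table doesnt exist',
            @body = 'Error'
    END
END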

Yep, you can configure the job to be executed once every hour. For that, just go to job properties -> Schedules tab and create a new schedule as below.
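Scripted, that hourly schedule would look roughly like this (a sketch; 'LogonHistoryAlert' is a placeholder for whatever your job is called):
USE msdb;
GO
EXEC dbo.sp_add_jobschedule
    @job_name = N'LogonHistoryAlert',  -- placeholder: your job's name
    @name = N'Hourly',
    @freq_type = 4,                    -- daily
    @freq_interval = 1,                -- every 1 day
    @freq_subday_type = 8,             -- repeat in units of hours
    @freq_subday_interval = 1,         -- every 1 hour
    @active_start_time = 0;            -- starting at midnight
With the hourly schedule doing the check-and-mail, the trigger itself would not need to send anything at all.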

Similar Messages

  • Setting time limit on running report

    Does anyone know if you can set a time limit on a report? I want a report to be canceled if it runs for more than 30 minutes. I see that you have the option in the Queue Manager when creating a job, but I want it to happen for my web-based reports...

You can count the number of times something occurs and use that as a condition to stop things. How you go about stopping things depends on how your animation is set up. If it is a straight timeline animation, you would just use a stop(); command.

Exceeded time limit running CALL FUNCTION 'BAPI_COSTCENTERGROUP_CREATE'

    Hello all,
May I please ask for your help in creating a cost center group?
I used BAPI_COSTCENTERGROUP_CREATE; everything is OK except that when I run the program it dumps because the processing time exceeds the time limit. It's my first time working with a BAPI.
This is urgent.
Please help and advise me what to do.
    I'll give the highest points.
    THANK YOU SO MUCH. God bless.

Hi Chenara,
I didn't use this BAPI before, but this is what I would do:
1. Put a breakpoint just before entering the BAPI and try to see what the problem is (look at what you send to the BAPI, step inside with F5, and try to see where it gets stuck); I know it is hard to debug SAP BAPIs, but sometimes it helps.
2. Go to the Basis team; they can help you monitor what takes the most time.
I hope this helps.
    Lilach Menashe.

  • Time limit between purchase of voucher & exam date for SCJP

    Hello folks,
I plan to take my SCJP 1.5 exam next week.
(But I did not buy the voucher yet because I wasn't sure if I would be in the US till Sept.)
If I buy the voucher today, what are the chances that I get to schedule my test for next Saturday, i.e. 8/09/2007?
Can somebody please let me know?

You can write it next week for sure, if you buy today.

  • Time Limit exceeded while running in RSA3

    Hi BW Experts,
I am trying to pull 1 lakh (100,000) records from CRM to the BI system.
Before scheduling, I am trying to execute the extraction in RSA3, and I am getting the error message "Time Limit Exceeded".
Please suggest why this is happening.
    Thanks in advance.
    Thanks,
    Ram.

    Hi,
Because of the huge data volume, the extraction cannot show all the records within the stipulated time, so it is better to use a selection option (by document type, for example, or some other criterion) and then add up all the documents. In any case, on the BW side we run this job in the background, so there is no problem there. If you want to see all the records at once, you can ask your Basis people to extend the time limit.
    Thanks & Regards
    sathish

  • Error while running query "time limit exceeding"

While running a query I am getting the error "time limit exceeding". Please help.

Hi Devi,
Use the following links:
queries taking long time to run
Query taking too long
With hopes,
Raja Singh

I have 50 InfoObjects as part of my aggregates, and 10 of them have received master data changes, so in my process chain the attribute change run has been running for a long time. Can I kill the job and repeat it?


    Hi,
I believe this is your production system, so don't just cancel it; look at the job log first. If it is still processing, don't kill it; wait for the change run to complete. But if you can see that nothing is happening and it has been stuck for a long time, then you can go ahead and cancel it.
But please be sure, as these kinds of jobs can create problems if you cancel them midway.
    Regards,
    Arminder Singh

Urgent: the job running program SBIE0001 takes a long time (more than two hours)

Hi experts,
We run a job containing the program SBIE0001 and it takes a long time. We use the job to extract the data from our system to another satellite system. Checking the job, the step below is the one that takes most of the time:
88 LUWs confirmed and 88 LUWs to be deleted with function module RSC2_QOUT_CONFIRM_DATA
Any update will be appreciated. Thanks in advance.

If you have transaction ME30 active in your system, run the program in it.
It will give more information about where the time is lost.

APO jobs cancelled due to error - Time limit exceeded

    Dear All,
Three jobs are scheduled daily to transfer data from R/3 to APO.
These are cancelled due to the error "Time limit exceeded".

    Hi Pankaj,
There is a specific time allocated for each queue to clear and get transferred across the system. The probable reasons are:
1. The queue itself is taking more than the allocated time to clear due to system load. Each queue requires specific free memory on the server; in case of overload the system is unable to allocate the desired memory, hence the error might appear.
2. If the very first entry in a queue is stuck for some reason, the queues following it automatically go into "timeout" and then "system fail".
Proposed solution:
1. Analyze the first entry functionally. If it is OK, then ask the Basis team to clear that particular entry.
2. After the timeout the queue will go to "SYSFAIL". Double-clicking on this will reveal the reason. Based on the technical reason, I can suggest the relevant OSS notes to be applied.
For detailed technical analysis, check t-code CFG1.
    Regards,
    Ankur

  • Will it be possible to run several jobs in background at the same time?

    Hi!
The new release looks promising. I look forward to hearing more in Birmingham.
Just now we have a problem. It has to do with functionality in Toad compared to SQL Developer.
Will it be possible to run several jobs in the background at the same time? Toad allows that.
If yes: how can we make that happen?

    "Jobs" are always background.
But I take it you mean queries. Yes, since v1.5.x you can open an Unshared SQL Worksheet (Ctrl-Shift-N or the toolbar button).
    Have fun,
    K.

  • Time Limit exceeded error in R & R queue

    Hi,
We are getting a "time limit exceeded" error in the R & R queue when we try to extract the data for a site.
The error occurs with the message SALESDOCGEN_O_W. Whenever the time limit error is encountered, the usual solution is to run the job in the background. But in this case, is there any way to run the particular subscription for sales documents in the background?
    Any pointers on this would be of great help.
    Thanks in advance,
    Regards,
    Rasmi.

Hi Rasmi,
I suppose the usual answer would be to increase the timeout for the R&R queue.
We have increased the timeout on ours to 60 mins and that takes care of just about everything.
The other thing to check would be the volume of data that is going to each site for SALESDOCGEN_O_W. These are pretty big BDocs, and the sales force will not thank you for huge ConnTrans times.
If you have a subscription for sales documents by business partner, then it is worth seeing if the business partner subscription could be made more intelligent to fit your needs.
Regards,
James

  • Program terminated: Time limit exceeded, ABAP performance, max_wprun_time

    Hi,
    I am running an ABAP program, and I get the following short dump:
Time limit exceeded. The program has exceeded the maximum permitted runtime and has therefore been terminated. After a certain time, the program terminates to free the work process for other users who are waiting. This is to stop work processes being blocked for too long by:
- endless loops (DO, WHILE, ...),
- database accesses with large result sets,
- database accesses without an appropriate index (full table scan),
- database accesses producing an excessively large result set.
The maximum runtime of a program is set by the profile parameter "rdisp/max_wprun_time". The current setting is 10000 seconds. After this, the system gives the program a second chance. During the first half (>= 10000 seconds), a call that blocks the work process (such as a long-running SQL statement) can occur. While the statement is being processed, the database layer will not allow it to be interrupted. However, to stop the program terminating immediately after the statement has been successfully processed, the system gives it another 10000 seconds. Hence the maximum runtime of a program is at least twice the value of the system profile parameter "rdisp/max_wprun_time".
    Last error logged in SAP kernel
    Component............ "NI (network interface)"
    Place................ "SAP-Dispatcher ok1a11cs_P06_00 on host ok1a11e0"
    Version.............. 34
    Error code........... "-6"
    Error text........... "connection to partner broken"
    Description.......... "NiPRead"
    System call.......... "recv"
    Module............... "niuxi.c"
    Line................. 1186
    Long-running programs should be started as background jobs. If this is not possible, you can increase the value of the system profile parameter "rdisp/max_wprun_time".
The program cannot be started as a background job. We have identified two options to solve the problem:
    - Increase the value of the system profile parameter "rdisp/max_wprun_time"
    - Improve the performance of the following SELECT statement in the program:
    SELECT ps_psp_pnr ebeln ebelp zekkn sakto FROM ekkn
    INTO CORRESPONDING FIELDS OF TABLE i_ekkn
    FOR ALL ENTRIES IN p_lt_proj
    WHERE ps_psp_pnr = p_lt_proj-pspnr
    AND ps_psp_pnr > 0.
    In EKKN we have 200 000 entries.
    Is there any other options we could try?
    Regards,
    Jarmo

    Thanks for your help, this problem seems to be quite challenging...
    In EKKN we have 200 000 entries. 199 999 entries have value of 00000000 in column ps_psp_pnr, and only one has a value which identifies a WBS element.
I believe the problem is that there isn't any WBS element in PRPS which has the value 00000000. I guess that is the reason why EKKN is read sequentially.
    I also tried this one, but it doesn't help at all. Before the SELECT statement is executed, there are 594 entries in internal table p_lt_proj_sel:
      DATA p_lt_proj_sel LIKE p_lt_proj OCCURS 0 WITH HEADER LINE.
      p_lt_proj_sel[] = p_lt_proj[].
      DELETE p_lt_proj_sel WHERE pspnr = 0.
      SORT p_lt_proj_sel by pspnr.
      SELECT ps_psp_pnr ebeln ebelp zekkn sakto FROM ekkn
      INTO CORRESPONDING FIELDS OF TABLE i_ekkn
      FOR ALL ENTRIES IN p_lt_proj_sel
      WHERE ps_psp_pnr = p_lt_proj_sel-pspnr.
    I also checked that the index P in EKKN is active.
    Can I somehow force the optimizer to use the index?
    Regards,
    Jarmo

  • TRFC error "time limit exceeded"

Hi Prashant,
No reply to my thread below...
Hi Prashant,
We are facing this issue quite often, as I stated in my previous threads.
I have already followed all the steps you mentioned, and I furnished the job log and tRFC details for reference long back.
I posted this issue one month back with full details, including what we temporarily do to execute this element successfully.
A number of times I have stated that I need to know the root cause and a permanent solution, as the log clearly states that it is due to stuck LUWs (source system).
Even after executing the LUWs manually the status is the same (request still running and the status in yellow).
I have no idea why this is happening to this element in particular, as we have sufficient background jobs.
Do we need to change some settings, like increasing or decreasing the data package size or something else, to resolve the issue permanently?
For you I am giving the details once again:
Data flow: Standard DS --> PSA --> Data Target (DSO)
In the process monitor screen the request is in yellow, with no clear error message; under "update" it shows 0 records updated and a missing message in yellow, but apart from this the status against each log is green.
Job log: the job finishes with a TRFCSSTATE=SYSFAIL message.
tRFCs: time limit exceeded.
Workaround we follow: make the request green and manually update from PSA to the data target, and the job then completes successfully.
Can you please tell me how to proceed in this scenario to resolve the issue? I have been waiting on this for a long time now.
And till now I haven't got any clue; whatever I have investigated, I get replies up to that point and no further update beyond it.
With regards,
Musai

    Hi,
You have mentioned that you have already checked for LUWs, so the problem is not there now.
In the source system, go to WE02 and check for IDocs of type RSRQST and RSINFO. If any of them are in yellow status, take them to BD87 and process them. If the processed IDoc is of type RSRQST, it will create the job in the source system for carrying out the data load. If it is of type RSINFO, it will finish the data load on the SAP BI side as well.
If any are in red, then check the reason.

  • Long running QueryDocumentProperties job crash

    Hi,
I have a very simple piece of code using the Plumtree.Remote.PRC namespace, looping through all document properties and checking for the presence of a specific value. Each time, the code crashes around the 700th doc I visit. The whole process takes a little less than 1 sec/doc.
    Am I hitting some timeout problem here ?
    Stack dump below.
    Thanks for any ideas.
    [email protected]
    Plumtree.Remote.PRC.PortalException: Exception of type Plumtree.Remote.PRC.PortalException was thrown. ---> System.Web.Services.Protocols.SoapException: Server was unable to process request. --> Invalid pointer
    at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall)
    at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
    at com.plumtree.remote.prc.soap.DirectoryAPIService.queryDocumentProperties(String sLoginToken, Int32 nCardID)
    at com.plumtree.remote.prc.soap.DirectoryProcedures.QueryProperties(String sLoginToken, Int32 nCardID)
    --- End of inner exception stack trace ---
    at Plumtree.Remote.PRC.DocumentManagerWrapper.QueryDocumentProperties(Int32 documentID)
    at Mercator.Portal.Scheduler.Jobs.CBrocomAddPropJob.GetDocumentProperty(Int32 docId, String id)
    at Mercator.Portal.Scheduler.Jobs.CBrocomAddPropJob.Mercator.Portal.Framework.Interfaces.IJob.Run(String jobProgId, String parameter)

    Hi Dean,
    We use version 5.0.4.
I have 2 jobs in fact. One processes the cards in the order of their objectid; another picks the objectids from a hashtable (hence a different processing order). Both fail at around the same index (700), but not always exactly the same, within a range of +/- 10. If I limit the job to a previously failed card, it does work, so I guess it cannot be the data. The number of cards is 786. I ran the job again after completely removing the cards + their deletion history, same result. I'll run the job again with PTSpy on, but I think there was nothing special in it except:
    209 05-20 08:45:19 Warn Plumtree.dll 14176 10924 PTInternalSession.cpp(3184) *** COM exception writing Log Message: IDispatch error #16389 (0x80044205): [Failed to open the log file for writing. log file: C:\Program Files\plumtree\ptportal\5.0\settings\logs\PTMachine.log]2 05-20 08:43:51 Warn Common Library 14176 10924 PTCommon.cpp(977) ***SetError *** (0x80044205): Error while writing message to log file.
    which I do not understand either, the file is there and is perfectly writable.
    I guess I'll try splitting the job in 2 and open a support incident.
    Thanks.
    Michel.

Time Limit Exceeded

    Hi,
We are processing a file from the FTP server and creating IDocs in the SAP system. Sometimes a few IDocs go to status 51 while the job is running, and the message displayed is "Time limit exceeded". How can we fix this problem?
    Thanks,
    Sudheer.

    Hi,
Check this blog on timeout errors:
    /people/michal.krawczyk2/blog/2006/06/08/xi-timeouts-timeouts-timeouts
    Regards
    Seshagiri
