Force immediate alerts timer job to run on a particular server

In Central Admin -> Manage Content Databases, "Preferred server for timer jobs" says "Not selected". We have two WFE servers, server1 and server2. server1 currently cannot connect to the SMTP server, so we would like the immediate alerts timer job to execute on server2, which does have SMTP access. I know I could set the preferred server for timer jobs to server2, but that would mean all timer jobs execute on that server.
I have a few questions:
1. Is there a way to force just the immediate alerts timer job to run on server2?
2. How can I find out which server is currently processing the immediate alerts timer job (or which server processed the last run)?
3. If I force all timer jobs to execute on server2, how much of an effect does that have on the server in terms of RAM, CPU, and other resources?
thanks,

1. The immediate alerts timer job has the ContentDatabase lock type, so if you select a preferred server in Central Administration, all timer jobs with the ContentDatabase lock type will run on that server. Note that content database locks are handled at the server and database level, not at the individual job level, so you cannot pin just this one job that way. The article Trevor mentioned explains this well.
2. You can check this from Central Administration > Monitoring > Job History, and from there select the job definition whose history you want to see; each history entry records the server that ran it.
3. Forcing all timer jobs onto one server increases the overhead on that server, so performance may suffer; it depends on how much user load the server already carries. Another risk is contention between jobs: if one job takes a long time, other jobs may wait on it and end up being skipped.
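For question 2, each entry in the timer job history records which server executed it, so you can read it from the SharePoint Management Shell as well. A sketch (http://yourwebapp is a placeholder for your web application URL):

```powershell
# Find the immediate alerts job for the web application (internal name "job-immediate-alerts")
$job = Get-SPTimerJob -WebApplication http://yourwebapp |
    Where-Object { $_.Name -eq "job-immediate-alerts" }

# Show which servers executed the most recent runs
$job.HistoryEntries |
    Sort-Object EndTime -Descending |
    Select-Object -First 5 ServerName, Status, StartTime, EndTime
```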
Please remember to mark your question as answered and vote helpful if this solves your problem. Thanks - WS MCITP (SharePoint 2010, 2013) Blog: http://wscheema.com/blog

Similar Messages

  • Setting a different rfc destination if code is run on a particular server ?

    Hi,
    I need to fetch blocked data from the GTS server. I am doing this using the function module '/SAPSLL/BLCK_DOCS_SD_R3', which is called in transaction '/SAPSLL/BL_DOC_SD_R3'.
    As per my requirement, I need to fetch data from a different RFC destination if the code is run on a particular server, i.e. I need to set the RFC destination in that particular case.
    I do not want to copy the whole function group and change the place where the RFC destination is determined.
    Is there any other way to achieve this?
    Thanks.

    Hello
    The RFC destination to the GTS server is determined within fm /SAPSLL/BLCK_DOCS_LS_FETCH_R3:
    *------- Determine the RFC destination for Legal Services
      CALL FUNCTION '/SAPSLL/CD_ALE_RECEIVER_GET_R3'
           IMPORTING
                EV_RFC_DEST                   = LV_RFC_DEST
           EXCEPTIONS
                NO_RFC_DESTINATION_MAINTAINED = 2
                CUSTOMS_SERVER_NOT_UNIQUE     = 4
                OTHERS                        = 8.
      IF NOT ( SY-SUBRC IS INITIAL ) .
        MESSAGE ID     SY-MSGID TYPE SY-MSGTY
                NUMBER SY-MSGNO
                WITH   SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4
                RAISING ERROR_ALE_SETUP.
        EXIT.
      ENDIF.
    *------- Prepare data for the RFC call
      PERFORM BLOCKED_DOCS_REF_PREPARE USING    IV_APP_LEV
                                       CHANGING CT_DOCUMENT_NUMBER
                                                LT_OBJECT_DOCUMENT .
    *------- Progress indicator
      IF NOT ( IS_OUTPUT-SHWMSG IS INITIAL ) .
        CASE IV_APP_LEV.
          WHEN GC_APPLICATION_LEVEL-SALES_ORDER.
            PERFORM PROGRESS_INDICATOR USING IS_OUTPUT-SHWMSG
                                             TEXT-M05.
          WHEN GC_APPLICATION_LEVEL-OUTBOUND_DELIVERY.
            PERFORM PROGRESS_INDICATOR USING IS_OUTPUT-SHWMSG
                                             TEXT-M06.
          WHEN GC_APPLICATION_LEVEL-INVOICE.
            PERFORM PROGRESS_INDICATOR USING IS_OUTPUT-SHWMSG
                                             TEXT-M07.
          WHEN OTHERS.
    *------- Nada
        ENDCASE.
      ENDIF.
    *------- FM: RFC call into Legal Services
      CALL FUNCTION '/SAPSLL/LC_BLCK_DOC_SELECT_RFC'
        DESTINATION LV_RFC_DEST
        EXPORTING
    Calling fm /SAPSLL/CD_ALE_RECEIVER_GET_R3 the system tries to find the RFC destination from a distribution model (transaction BD64) for
    - OBJECT = 'BUS6801'
    - METHOD = 'SYNCHRONIZEIFR3'
    If we are already on the GTS server I assume that the fetched RFC destination should be 'NONE'.
    If you are on another system (<> GTS server) then you should create the appropriate distribution model which allows the system to determine the RFC destination to the GTS server.
    Regards
      Uwe

  • Why my timer job is not running?

    I have a server farm configuration with 1 web server, 2 app servers and 1 database server.
    What I am trying to do from the timer job is loop through all web applications in the server farm and perform some action against some of them.
    I have a timer job provisioned with a web application (the Central Admin web app) and a lock type of ContentDatabase passed as parameters, and I have scheduled it to run every 2 minutes. The timer job is provisioned by a feature with farm scope.
    My timer job never runs.
    If I stop the SharePoint Timer Service on the app servers, my timer job runs. From a blog post I learned that "for a particular server to be eligible to run the job, it must be provisioned with the service or web app associated with the job"; and my app servers do not host the web applications.
    My questions is.
    1. Why is my timer job trying to run on the app servers?
    2. Is it correct to pass a web server while provisioning the timer job? I think the timer job then becomes tied to a particular server, and removing or taking that server down may prevent the timer job from running.
    Thanks,
    Mallikarjun
    mallikarjun

     Try deploying the timer job from a web front-end server rather than an application server, and activate the feature from there.
    Thanks and Regards
    Er.Pradipta Nayak
    Visit my Blog
    Xchanging

  • An unexpected error occurred while the job was running. (ID: 104)

    I'm getting this error in the event logs when trying to run a consistency check / sync with one of our file servers.
    This server was working fine, but we needed to migrate the data to a new GPT partition to allow it to expand past the 2 TB limit of MBR partitions. I added a new 3 TB disk, migrated the data, and changed the drive letter to what the old partition used previously. Since then we get this error.
    I have installed the update rollup on both the server and the agent side. I have also removed the server from the protection group and re-added it, and removed and reinstalled the agent on the file server.
    Any help is appreciated!  Here's the full error from the DPM server:
    The replica of E:\ on server is inconsistent with the protected data source. All protection activities for data source will fail until the replica is synchronized with consistency check. (ID: 3106)
    An unexpected error occurred while the job was running. (ID: 104)

    The server version is Windows 2012 R2 Standard. DPM is 4.2.1235.0 (DPM 2012 R2). I expanded the production file server; all that server does is host file shares.
    I suspected that creating a new volume and changing the drive letter back to the original was the root cause of the issue. What I ended up doing is blowing away the backups from this server on disk and re-adding it to the protection group.
    It now runs much longer, but still times out at random with the error mentioned above.
    Vijay: the error was copied from Event Viewer. Not sure what else you require? The event ID is 3106 from DPM-EM.
    The DPM logs say:
    Affected area: E:\
    Occurred since: 2015-01-05 2:04:38 AM
    Description: The replica of Volume E:\ on servername is inconsistent with the protected data source. All protection activities for data source will fail until the replica is synchronized with consistency check. You can recover data from existing recovery
    points, but new recovery points cannot be created until the replica is consistent.
    For SharePoint farm, recovery points will continue getting created with the databases that are consistent. To backup inconsistent databases, run a consistency check on the farm. (ID 3106)
     An unexpected error occurred while the job was running. (ID 104 Details: The semaphore timeout period has expired (0x80070079))
    Date inactivated: 2015-01-05 7:03:04 AM
    Recommended action: No action is required because this alert is inactive.

  • Reg CCMs Alerts--Time in UTC

    Hi,
    I have configured CCMS alerts and I am getting mails,
    but I see that the time is in UTC.
    I have checked the time zone setting in CEN and the monitored system, and it is EST.
    Please let me know how to correct this.
    ALERT for SAP \ server_SAP_90 \ Background \ AbortedJobs at 20090407 200044 ( Time in UTC )
    RED CCMS alert for monitored object AbortedJobs
    Alert Text:Job DEPUY RUN (ID number 16003301) terminated [User 100:SAPSYS]
    System:SAP

    Hi Balaji,
    Have you checked your OS settings for the time zone? Also, are your system time and time zone different as seen in the system settings?
    Also check the time zone settings of the user under which the mail/message dispatch job runs (the one scheduled while configuring email auto-reactions), not the user SAPSYS.
    Bhudev

  • 90% of the system resource is consumed when timer service is running

    Hi,
        I have a development environment with 8 GB RAM running SharePoint 2013 and SQL Server 2014. SharePoint runs slow when the Timer Service is running; turning off the Timer Service speeds up the environment.
    Is it a know issue with SharePoint 2013? Is there any update/hotfix available pertaining to the issue.
    Thanks,
    Ajeet

    Hi  Ajeet,
    According to your description, my understanding is that the SharePoint Timer Service (OWSTIMER.EXE) consumes 90% of the server resources on your SharePoint 2013 single server.
    For your issue, could you run Microsoft Network Monitor to see the contents of the packets being sent/received by the owstimer.exe process? Also please make sure your single server meets the hardware requirements:
    http://technet.microsoft.com/en-us/library/cc262485(v=office.15).aspx#hwforwebserver
    Here is a  blog for troubleshooting timer service issue:
    http://soerennielsen.wordpress.com/2009/01/14/fixing-the-timer-service-when-everything-breaks-down/
    http://www.mysharepointadventures.com/2012/09/sharepoint-timer-job-service-consuming-all-memory-on-server/
    Thanks,
    Eric
    Forum Support
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support,
    contact [email protected]
    Eric Tao
    TechNet Community Support

  • All timer jobs don't start (paused in check job status)

    Hello,
    Some bad things have happened to our SharePoint Server 2010: all timer jobs suddenly stopped running. The last time they ran successfully was 09/26/11. Since then all jobs have been scheduled according to their schedules but don't actually run.
    In the Check Job Status view there is a list of jobs scheduled to run on 09/26/11 which are currently paused for some reason; other jobs don't run at all.
    Is it possible to unpause these jobs and let the other jobs run? Any ideas would be very much appreciated. Thank you in advance.

    Hi,
    Thanks for your post.
    Pausable Timer Jobs
    You can now create pausable timer jobs. This is done by inheriting from the
    SPPausableJobDefinition and overriding Execute(SPJobState) instead of
    Execute(Guid). You can then use the current job state (SPJobState) to store values which are retrieved when the job is resumed.
    Running jobs on all content databases
    Another new timer job derivative is the SPContentDatabaseJobDefinition. This is a timer job specifically made to perform actions on content databases. The timer job is distributed across the WFE servers, and each content database is only processed by one job. Override Execute(SPContentDatabase, SPJobState) to add your own processing. The job supports pausing.
    Running jobs on all Site Collections
    A really interesting new timer job type is the SPAllSitesJobDefinition. This one is derived from the SPContentDatabaseJobDefinition and has a method called
    ProcessSite(SPSite, SPJobState) which you override to process the SPSite object. This could be very useful to build site inventories/directories.
    Running job on a specific server
    The SPServerJobDefinition is a pausable timer job that is designed to be targeted to a specific server (SPServer).
    Running jobs for specific services
    The SPServiceJobDefinition is another pausable timer job that runs a job on all servers where a specific
    SPService has been provisioned. A very similar job is the
    SPFirstAvailableServiceJobDefinition, which runs the job on the first available server that has the specific SPService installed. If you would like to run a job on all servers, you can use SPServiceJobDefinition with the timer job service (which is installed on all servers except dedicated database servers): pass
    SPFarm.Local.TimerService as the SPService parameter in the constructor.
    All of the new specific timer jobs are essentially smart derivatives of the SPJobDefinition but using these new abstract classes will certainly save you some time when you need to target your timer jobs.
    I hope that helps.

  • Huge SharePoint_Config Database/Timer Job History Overflow

    Hello All,
    I know there are quite a few sites about this issue, but none of them seem to work for me so I was hoping for some help.
    Just for reference, this is in my dev environment on a VM running:
    SP2010
    SQL Server 2008 R2
    Windows Server 2008 R2
    It is the same old story of SharePoint's "Delete Job History" timer job not running and therefore the "TimerJobHistory" table growing massive. For the last couple of months my drive keeps running out of space and putting a halt to my work. So I went looking for a solution. I found a few that looked promising, but they either caused more problems or simply did not work. Now that I have my environment back up and running,
    I was hoping for some help in solving this.
    Does anyone know how to purge the “TimerJobHistory” table so my SharePoint_Config database will not keep filling up my space? From there is there anyway to prevent this from happening again?
    The “Delete Job History” will not run currently even though I have adjusted the days to keep history and increased the duration that the job should run.
    So far here is what I tried and the results:
    http://sharepoint.it-professional.co.uk/?p=228 – Runs for a while and seems promising, but keeps growing the log and database
    file until out of space, then crashes. I have to spend a lot of time freeing up space, detaching, and reattaching the config database. At this point I have removed everything from that drive that I can so no more space can be freed.
    Shrinking files – This is only a temporary fix and I heard that it is a bad idea to do this more than once every now and again anyway. I am sure my database is already fragmented terribly
    (which also if anyone can point me to a working solution for defragging in SQL Server 2008 R2 I would greatly appreciate it.)
    There are a number of other sites that make recommendations that seem to be… well… less than what would be considered a “good practice” so I avoided those.
    For the time being I have done what I had hoped not to and turned the SharePoint Timer Service off to prevent the database from filling up again. If I turn it back on, the log file will be full in a matter of seconds, which puts me back at no space and in a bad position.
    This seems to be happening to all the devs here, but I got a bit too brave and tried to find a solution… I learn a lot from my mistakes though!
    If any other information is needed, please let me know.
    Any help would be GREATLY appreciated.
    Thanks,
    J.

    Hi,
    We need to reduce the amount of data the timer job deletes during each run, so modify the daystokeephistory value of the timer job.
    The following PowerShell script for your reference:
    $history = Get-SPTimerJob | Where-Object {$_.Name -eq "job-delete-job-history"}
    $history.DaysToKeepHistory = 25
    $history.Update()
    $history.RunNow()
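    If the history table is very large, a single run of the job can time out before it finishes. One common workaround (a sketch with arbitrary step values, not official guidance) is to lower DaysToKeepHistory in stages so each run deletes a manageable number of rows:

```powershell
$history = Get-SPTimerJob | Where-Object { $_.Name -eq "job-delete-job-history" }
# Step the retention down toward the target value in stages
foreach ($days in 60, 45, 35, 30, 25) {
    $history.DaysToKeepHistory = $days
    $history.Update()
    $history.RunNow()
    Start-Sleep -Seconds 300   # give each run time to finish before the next step
}
```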
    More information:
    http://convergingpoint.blogspot.com/2012/05/sharepoint-configuration-database-too.html
    http://www.techgrowingpains.com/2012/01/out-of-control-sharepoint_config-database/
    Best Regards
    Dennis Guo
    TechNet Community Support

  • Create Upgrade Evaluation Site Collections Timer Job does not send notification Email when the Site is created

    Hello Everyone,
    My problem is:
    The Create Upgrade Evaluation Site Collections job does not send a notification email when the evaluation site is created. I only get a notification email saying that an upgrade evaluation site collection was requested, and then after 27 days another saying the evaluation site will be deleted in three days.
    My Enviroment:
    SharePoint Foundation 2013 Sp1 on Windows Server 2012
    Exchange 2010 SP3
    I hope someone can help.
    best regards
    domschi

    Hi domschi,
    As I understand it, you didn't receive the email generated by the Create Upgrade Evaluation Site Collections timer job, while you did receive the emails generated by the Delete Upgrade Evaluation Site Collections job.
    When you request an evaluation site collection, the request is added to a Timer job
     which runs once a day. You will receive an e-mail message when the upgrade evaluation site is available. This might take up to 24 hours. The message includes a link to the evaluation site. Upgrade evaluation site collections are set to automatically
    expire (after 30 days by default).
    Please go to Central Administration > Monitoring > Review Job Definitions, locate the timer job in question and click Run Now. Then go to Job History and check whether the job failed to run.
    Also, please check whether the emails are received by the Exchange Hub server.
    Regards,
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact
    [email protected] .
    Rebecca Tu
    TechNet Community Support

  • What is Thread Safety in timer jobs on SharePoint?

    Hi All,
    What is Thread Safety in timer jobs on SharePoint?
    Thanks in advance!

    hi
    Thread safety in timer jobs means the same as in other code: it should be possible to run multiple instances of the same job simultaneously, and if a job uses a shared resource, access to that resource must be synchronized. The tricky part is that it is not enough to use the standard .NET thread-synchronization mechanisms (e.g. lock), because in most cases SharePoint runs on a server farm and the same job may be executed on different servers, while the standard mechanisms only work within a single process's memory space. (You can guarantee that jobs run on the same server by assigning a preferred server for timer jobs in Central Administration > Content databases, but often it is left to SharePoint to decide which server jobs execute on.) For timer jobs you therefore need to store a "job started" flag in some shared storage, e.g. in SPWebApplication.Properties:
    lock (obj)
    {
        if (web.AllProperties.ContainsKey("jobstarted"))
            return;
        web.SetProperty("jobstarted", true);
        web.Update();
    }
    try
    {
        // ... do the actual work of the job ...
    }
    finally
    {
        web.AllProperties.Remove("jobstarted");
        web.Update();
    }
    This is just the idea, and there is still a small window in which two job instances may run at the same time (if the second instance sets the jobstarted flag after the first has checked AllProperties but before the first has set it), but it will solve most of the problems in practice. To make the code more robust you can store a timestamp instead of a flag, so a crashed job's stale lock can be detected and taken over.
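    The flag-with-timestamp idea is independent of SharePoint. Here is a minimal Python sketch of it, where a plain dict stands in for the shared property bag and the 600-second staleness window is an arbitrary choice:

```python
import time

STALE_AFTER = 600  # seconds; treat a lock older than this as left over from a crashed job


def try_acquire(props, key="jobstarted", now=None):
    """Try to mark the job as started in the shared property bag.

    Returns True if this instance acquired the lock, False if another
    instance holds a fresh lock. A stale timestamp is overwritten, which
    covers the case where a previous run died without cleaning up.
    """
    now = time.time() if now is None else now
    started = props.get(key)
    if started is not None and now - started < STALE_AFTER:
        return False          # another instance is (recently) running
    props[key] = now          # acquire the lock, or steal a stale one
    return True


def release(props, key="jobstarted"):
    """Remove the flag; safe to call even if it was never set."""
    props.pop(key, None)
```

    The same race the answer describes still exists between the get and the set, but the timestamp at least prevents a crashed run from blocking all future runs.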
    Blog - http://sadomovalex.blogspot.com
    Dynamic CAML queries via C# - http://camlex.codeplex.com

  • Debugging timer job on a server

    This is something that has frustrated me the whole day. I am trying to debug a custom timer job on a development server which has Visual Studio installed locally. It worked only one time; after I updated the assembly, Visual Studio says it cannot load the symbols, and I cannot figure out how to get them loaded.
    1. How and where do I need to place the updated assemblies?
    2. When do I need to restart the timer service to pick up the updated assemblies?
    3. When and do I need an IIS reset?
    Thanks!

    Hi,
    1. Check that the latest WSP is deployed in Central Administration. If not, retract and redeploy it.
    2. Check that the latest assembly is installed in the GAC; otherwise uninstall and reinstall the assembly using GACUTIL.
    3. Do an IISRESET.
    4. Restart the SharePoint Timer Service in services.msc (the timer process caches assemblies, so it must be restarted to pick up the update).
    5. In Visual Studio, attach the debugger to OWSTIMER.EXE.
    6. In Central Administration > Job Definitions (refresh the page), go to your timer job and click Run Now.
    If you still can't debug, check the logs in the 14-hive folder and the Event Viewer logs.
    Thanks, Sures | MCTS SharePoint

  • How to select server in which background job should run

    Hi,
    I want to run my program as a background job, and I want the user to select, on the selection screen of my program, the server on which the background job should run. When it is scheduled in the background, the job should run on the selected server.
    How can I do this?
    Regards,
    Sriram

    Hi,
    You can hand the selected server to function module JOB_CLOSE via its TARGETSERVER parameter. For example (P_SERVER is a selection-screen parameter holding the server name):
    DATA : D_JOBNAME LIKE TBTCJOB-JOBNAME,
           D_JOBNO   LIKE TBTCJOB-JOBCOUNT,
           D_REL     LIKE BTCH0000-CHAR1.
      D_JOBNAME = SY-REPID.
      CALL FUNCTION 'JOB_OPEN'
           EXPORTING
                JOBNAME          = D_JOBNAME
           IMPORTING
                JOBCOUNT         = D_JOBNO
           EXCEPTIONS
                CANT_CREATE_JOB  = 1
                INVALID_JOB_DATA = 2
                JOBNAME_MISSING  = 3
                OTHERS           = 4.
      SUBMIT <program name>
             USER SY-UNAME VIA JOB D_JOBNAME NUMBER D_JOBNO
             USING SELECTION-SET 'VAR1'    " give the variant name
             AND RETURN.
      CALL FUNCTION 'JOB_CLOSE'
           EXPORTING
                JOBCOUNT             = D_JOBNO
                JOBNAME              = D_JOBNAME
                STRTIMMED            = 'X'
                TARGETSERVER         = P_SERVER  " the server chosen by the user
           IMPORTING
                JOB_WAS_RELEASED     = D_REL
           EXCEPTIONS
                CANT_START_IMMEDIATE = 1
                INVALID_STARTDATE    = 2
                JOBNAME_MISSING      = 3
                JOB_CLOSE_FAILED     = 4
                JOB_NOSTEPS          = 5
                JOB_NOTEX            = 6
                LOCK_FAILED          = 7
                OTHERS               = 8.
    Hope this helps
    Regards
    Kiran

  • Project Server 2013: Synchronization of AD with security groups - missing from list of timer jobs

    I have the same problem as described in:
    http://social.msdn.microsoft.com/Forums/en-US/2b916bb9-2277-4c53-8b97-271a912414ba/ps2013-timer-job-missing-quotproject-server-synchronization-of-ad-with-security-groups-forquot
    "I cannot find the timer job "Project Server: Synchronization of AD with security groups for <PWAIntanceName>" in SharePoint Central Administration to schedule the synchronization. Enterprise Resource Pool synchronization is working fine, and the timer job
    "Project Web App: Synchronization of AD with the Enterprise Resource Pool job for <PWA site name>" exists on the server."
    That thread does not offer a solution for scheduling the "Project Server: Synchronization of AD with security groups" timer job. Does anyone have one?

    The Project Server timer job "Synchronization of AD with security groups" does not exist, so I created a job in the Task Scheduler of the Project Server machine that runs this PowerShell script every day:
    if ((Get-PSSnapin | Where-Object {$_.Name -eq "Microsoft.SharePoint.PowerShell"}) -eq $null) {
        Add-PSSnapin Microsoft.SharePoint.PowerShell
    }
    Invoke-SPProjectActiveDirectoryGroupSync -Url http://project/pwa
    The security groups of Project Server now synchronize automatically with the groups from AD!
    http://technet.microsoft.com/en-us/library/jj219472.aspx

  • Timer job status "Initialized", stuck at 0%; no alerts sending

    I have read every article on the msdn site for this issue.  I have followed the instructions from many sites in the order dictated.
    I have run the various commands to check that the alerts are enabled, etc; all are enabled.
    (http://dzeee.net/sharepoint/post/2010/01/17/Alerts-not-working.aspx)
    http://blogs.technet.com/b/saantil/archive/2009/11/25/working-with-alerts.aspx?CommentPosted=true#commentmessage
    stsadm -o getproperty -pn alerts-enabled -url http://SiteURL
    stsadm -o getproperty -pn job-immediate-alerts -url http://SiteURL
    stsadm -o getproperty -pn job-daily-alerts -url http://SiteURL
    stsadm -o getproperty -pn job-weekly-alerts -url http://SiteURL
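    For reference, the counterpart setproperty commands to re-enable alerts and re-register the immediate alerts schedule look like this (the URL is a placeholder, and the schedule string is only an example):

```
stsadm -o setproperty -pn alerts-enabled -pv true -url http://SiteURL
stsadm -o setproperty -pn job-immediate-alerts -pv "every 5 minutes between 0 and 59" -url http://SiteURL
```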
    I have checked the EventCache table in the content database, and it has over 1000 entries with non-null values.
    I am now at a loss as to why this is not changing at all, and the timer job is still stuck at 0%.
    Appreciate any other ideas that I've overlooked.  Thanks.
    Rose

    Okay, the culprit was Sophos Web Intelligence on the server. We have a SharePoint 2007 environment. Simply disabling it does not work; it has to be removed. All versions of Sophos 10 were preventing the alerts and emails from being sent out.
    Now the alerts are playing catch-up: I'm getting alerts which are over 2 weeks old.
    http://community.sophos.com/t5/Sophos-EndUser-Protection/sophos-10-and-sharepoint-2007/td-p/20119
    The work around solution given by Microsoft was not enough for us.  Maybe it will work for others.  Here is the link:
    http://support.microsoft.com/kb/2000689
    Thanks again for the assistance.
    Rose

  • Event based scheduler job - 2 events at the same time only 1 run

    Hi,
    I converted our DBMS_JOB jobs to the newer DBMS_SCHEDULER package.
    It is a 10.2.0.4 patch 23 (8609347) database on a Windows Server 2003 R2 Enterprise x64 Edition SP2 machine.
    The jobs (about 130) are nothing special: only some statistics, materialized-view refreshes and so on.
    For notification of failed jobs and jobs that run over max_run_duration, I downloaded and installed the job notification package.
    The jobs are assigned to different departments and the corresponding developer teams in our company.
    I created a notification job for each department, and if a job fails, we (the database administrators) and the corresponding developers are informed.
    Now I have noticed that only one email is sent if two jobs of the same department fail at the same time.
    The emailer jobs are auto-generated by the job notification package. I only modified them to watch all jobs of a given department instead of a single job (--> event_condition ... object_name LIKE 'XXX%').
    Example for the DBA jobs (copy of the script output from TOAD):
    SYS.DBMS_SCHEDULER.CREATE_JOB (
            job_name        => 'DBA_JOBS_EMAILER'
           ,start_date      => NULL
           ,event_condition => 'tab.user_data.object_name LIKE ''DBA%'' AND tab.user_data.event_type IN (''JOB_FAILED'',''JOB_OVER_MAX_DUR'')'
           ,queue_spec      => 'SYS.SCHEDULER$_EVENT_QUEUE, JOB_FAILED_AGENT'
           ,end_date        => NULL
           ,program_name    => 'SYS.EMAIL_NOTIFICATION_PROGRAM'
           ,comments        => 'Auto-generated job to send email alerts for jobs "DBA%"'
        );
    I thought that a queue is used to manage all events from the scheduler jobs, so I made a test with two DBA jobs and simulated a failure at the same time, but I received only one mail.
    So what happened with the second event? I looked for the events in the queue table (SCHEDULER$_EVENT_QTAB) which belongs to the event queue (SYS.SCHEDULER$_EVENT_QUEUE), and no event was missing.
    So i think the emailer job has to run 2 times.
    Is anyone able to explain or to find my mistake?
    I know that the easiest way is to create one emailer job for each normal job but i think this is a little bit costly because all the arguments are the same for one department.
    Thanks & Regards

    Thanks for your fast answer.
    You are right about the "enabled => TRUE" part; I only forgot to post it.
    So the job is enabled (otherwise it would not send any mail). Because it is sending one mail, I think it is also not necessary to hand over a job_type.
    Additionally, the job starts a program, so it is right to set job_type => 'STORED_PROCEDURE', isn't it?
    And also right: I already added the agent as a subscriber to the queue.
    Anyway, I think the whole thing does what it is supposed to do, so in my opinion there are no big mistakes in creating the job or in adding the subscriber.
    There is also no problem in raising the events themselves and enqueuing them in the scheduler event queue.
    There is only a problem when two jobs fail (or run over max duration) at exactly the same time.
    If I understand it right:
    The agent/subscriber finds the "JOB_FAILED" event for the first job in the queue and starts the emailer job.
    The agent also finds the "JOB_FAILED" event for the second job and wants to start the emailer job again.
    I don't know if this is really the problem, but perhaps the emailer job cannot be started for the second event because it is already running.
    I also don't know whether this is a shortcoming of the agent or of the emailer job itself.
    I only want it to run twice (one run per event). In my case it also doesn't matter which email is sent first.
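    That matches how the scheduler behaves: a job will not start a second instance while one is already running, so simultaneous events coalesce into a single run. From Oracle 11g onward, an event-based job can set the parallel_instances attribute to TRUE so that each event starts its own instance; on 10.2 the workaround is one emailer job per monitored job, as you noted. A sketch, assuming an 11g database:

```
BEGIN
  DBMS_SCHEDULER.SET_ATTRIBUTE(
    name      => 'DBA_JOBS_EMAILER',
    attribute => 'parallel_instances',
    value     => TRUE);
END;
/
```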
