Job Scheduling using job_close

Hi gurus,
I'm trying to schedule several jobs using the function module JOB_CLOSE, but the jobs are running in parallel.
The first job is scheduled with a start date or to start immediately, and the remaining ones are scheduled to start only after their predecessor finishes. But once I run the program, all 4 jobs start running at the same time.
Can anyone help me with this?
My code is below:
  DO njobs TIMES.
** Job name
    CLEAR: jobname, jobcount, job_release.
    CLEAR: job_imediate, str_job.
    ADD 1 TO ind_job.
    WRITE ind_job TO str_job.
    CONCATENATE 'EXECORC' sy-uname sy-uzeit str_job
                 INTO jobname SEPARATED BY '-'.
    CALL FUNCTION 'JOB_OPEN'
         EXPORTING
              jobname          = jobname
         IMPORTING
              jobcount         = jobcount
         EXCEPTIONS
              cant_create_job  = 1
              invalid_job_data = 2
              jobname_missing  = 3
              OTHERS           = 4.
    IF sy-subrc <> 0.
      MESSAGE i003(zmapas).
      EXIT.
    ENDIF.
    IF gv_global EQ 'X'.
**Submit job
      SUBMIT z_mapa_execucao_orcamental
             VIA JOB jobname NUMBER jobcount
             WITH ano EQ ano
             WITH so_perio IN so_perio
             WITH so_date IN so_date
             WITH so_org EQ so_org
             WITH so_num IN so_num
             AND RETURN.
    ELSE.
*** Limits
      CLEAR: upper_bound, lower_bound.
      upper_bound = njobs * ind_job.
      lower_bound = upper_bound - njobs + 1.
      CLEAR so_num.
      REFRESH so_num.
      LOOP AT tab_prog FROM lower_bound TO upper_bound.
        so_num-sign = 'I'.
        so_num-option = 'EQ'.
        so_num-low = tab_prog-zlinha.
        APPEND so_num.
      ENDLOOP.
      SUBMIT z_mapa_execucao_orcamental
             VIA JOB jobname NUMBER jobcount
             WITH ano EQ ano
             WITH so_perio IN so_perio
             WITH so_date IN so_date
             WITH so_org EQ so_org
             WITH so_num IN so_num
             AND RETURN.
    ENDIF.
    IF ind_job EQ 1.
      IF stdt_output-startdttyp EQ 'I'.
        job_imediate = 'X'.
      ENDIF.
      CALL FUNCTION 'JOB_CLOSE'
           EXPORTING
                jobcount             = jobcount
                jobname              = jobname
                sdlstrtdt            = stdt_output-sdlstrtdt
                sdlstrttm            = stdt_output-sdlstrttm
                strtimmed            = job_imediate
           IMPORTING
                job_was_released     = job_release
           EXCEPTIONS
                cant_start_immediate = 1
                invalid_startdate    = 2
                jobname_missing      = 3
                job_close_failed     = 4
                job_nosteps          = 5
                job_notex            = 6
                lock_failed          = 7
                OTHERS               = 8.
      IF sy-subrc <> 0.
        MESSAGE i003(zmapas).
        EXIT.
      ELSE.
        CLEAR: predjob, predjobcount, stdt_output.
        predjob = jobname.
        predjobcount = jobcount.
        MESSAGE s004(zmapas) WITH jobname.
      ENDIF.
    ELSE.
      CALL FUNCTION 'JOB_CLOSE'
           EXPORTING
                jobcount             = jobcount
                jobname              = jobname
*                predjob_checkstat    = 'X'
                pred_jobcount        = predjobcount
                pred_jobname         = predjob
*                strtimmed            = 'X'
           IMPORTING
                job_was_released     = job_release
           EXCEPTIONS
                cant_start_immediate = 1
                invalid_startdate    = 2
                jobname_missing      = 3
                job_close_failed     = 4
                job_nosteps          = 5
                job_notex            = 6
                lock_failed          = 7
                OTHERS               = 8.
      IF sy-subrc <> 0.
        MESSAGE i003(zmapas).
        EXIT.
      ELSE.
        CLEAR: predjob, predjobcount, stdt_output.
        predjob = jobname.
        predjobcount = jobcount.
        MESSAGE s004(zmapas) WITH jobname.
      ENDIF.
    ENDIF.
  ENDDO.
Thanks in Advance,
Best Regards,
João Martins

Hello.
First of all, the parameter predjob_checkstat makes the second job start only if the previous one ends without error. This probably solves your problem already: the second job waits for the end of the first one to see whether it ended with an error or not.
I was analysing your problem. The parameter strtimmed can only be set in the first JOB_CLOSE. All the others must not have this parameter set to 'X' if you want them to wait for the end of the previous ones.
So, try predjob_checkstat = 'X' and strtimmed = space.
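Applied to your code, the second and following JOB_CLOSE calls would then look roughly like this (a sketch using your own variables; predjob and predjobcount must already hold the previous job's name and count):
      CALL FUNCTION 'JOB_CLOSE'
           EXPORTING
                jobcount             = jobcount
                jobname              = jobname
                predjob_checkstat    = 'X'   "start only if the predecessor ended without error
                pred_jobcount        = predjobcount
                pred_jobname         = predjob
           IMPORTING
                job_was_released     = job_release
           EXCEPTIONS
                cant_start_immediate = 1
                invalid_startdate    = 2
                jobname_missing      = 3
                job_close_failed     = 4
                job_nosteps          = 5
                job_notex            = 6
                lock_failed          = 7
                OTHERS               = 8.
      IF sy-subrc <> 0.
        "handle the error as in the first call
      ENDIF.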
Also, I have one example that is working:
*** Schedule one JOB for each file found.
    LOOP AT t_processar.
      CLEAR: w_jobcount, w_jobname, l_liberado.
      ADD 1 TO l_conta.
      MOVE l_conta TO l_conta2.
      CONDENSE l_conta2.
      CONCATENATE t_jobs-jobname l_conta2 INTO w_jobname.
      CALL FUNCTION 'JOB_OPEN'
           EXPORTING
                jobname          = w_jobname
           IMPORTING
                jobcount         = w_jobcount
           EXCEPTIONS
                cant_create_job  = 1
                invalid_job_data = 2
                jobname_missing  = 3
                OTHERS           = 4.
***   JOB was created successfully
      IF sy-subrc = 0.
        CLEAR seltab_wa.
***     Build the parameters
        MOVE: t_jobs-param TO seltab_wa-selname,
              t_processar-line+34 TO seltab_wa-low.
        APPEND seltab_wa TO seltab.
        seltab_wa-selname = 'P_LOJA'.
        seltab_wa-low = t_processar-ficheiro+7(4).
        APPEND seltab_wa TO seltab.
***     Submit the program to the JOB
        SUBMIT (t_jobs-repid)
               WITH  SELECTION-TABLE seltab
               USER sy-uname
               VIA JOB w_jobname NUMBER w_jobcount
               AND RETURN.
***     Close the JOB
        IF l_conta EQ 1.
          l_hora = sy-uzeit.
          ADD 120 TO l_hora.
          CALL FUNCTION 'JOB_CLOSE'
               EXPORTING
                    jobcount             = w_jobcount
                    jobname              = w_jobname
                    sdlstrtdt            = sy-datum
                    sdlstrttm            = l_hora
                    targetserver         = w_servidor
               IMPORTING
                    job_was_released     = l_liberado
               EXCEPTIONS
                    cant_start_immediate = 1
                    invalid_startdate    = 2
                    jobname_missing      = 3
                    job_close_failed     = 4
                    job_nosteps          = 5
                    job_notex            = 6
                    lock_failed          = 7
                    OTHERS               = 8.
        ELSE.
          CALL FUNCTION 'JOB_CLOSE'
               EXPORTING
                    jobcount             = w_jobcount
                    jobname              = w_jobname
                    predjob_checkstat    = 'X'
                    pred_jobcount        = w_jobcount2
                    pred_jobname         = w_jobname2
                    targetserver         = w_servidor
               IMPORTING
                    job_was_released     = l_liberado
               EXCEPTIONS
                    cant_start_immediate = 1
                    invalid_startdate    = 2
                    jobname_missing      = 3
                    job_close_failed     = 4
                    job_nosteps          = 5
                    job_notex            = 6
                    lock_failed          = 7
                    OTHERS               = 8.
        ENDIF.  "l_conta eq ...
      ENDIF. "sy-subrc = 0 do JOB-OPEN
      w_jobname2  = w_jobname.
      w_jobcount2 = w_jobcount.
      PERFORM f_limpa_param.
    ENDLOOP. "at t_processar
Regards.
Valter Oliveira.

Similar Messages

  • Job scheduling using dbms_scheduler.create_job

    Hi all experts,
    I am really grateful to you for your responses. Now have a look at this: I have created a job with "dbms_scheduler.create_job" and below is the output from the view "dba_scheduler_jobs". Why is it showing run_count as '0'? I have gone through all the provided links, like:
    Answers to "Why are my jobs not running?"
    and applied and checked all parameters. I am using Oracle 10.2.0.1.0 on Windows Server 2003 32-bit.
    SQL> BEGIN
           DBMS_SCHEDULER.CREATE_JOB (
               job_name        => 'clouser'
              ,job_type        => 'PLSQL_BLOCK'
              ,job_action      => 'begin package.procedure("po_closure"); end;'
              ,start_date      => to_date('17-04-2009 22:00:00', 'dd-mm-yyyy hh24:mi:ss')
              ,repeat_interval => 'FREQ=DAILY;byminute=10'
              ,enabled         => TRUE
              ,comments        => 'op closure everyday at 11PM');
         END;
         /
    PL/SQL procedure successfully completed.
    SQL> select owner,job_name name,run_count run,start_date
      2  from dba_scheduler_jobs
      3  where owner='FINANCEDEV';
    OWNER                          NAME                                  RUN
    START_DATE
    FINANCEDEV                     CLOUSER                                 0
    17-APR-09 02.45.10.000000 PM +05:30
    FINANCEDEV                     CLOUSER_MAIN                            0
    18-APR-09 10.25.00.000000 AM +05:30
    FINANCEDEV                     CLOUSER_MAIN1                           0
    18-APR-09 10.45.00.000000 AM +05:30
    OWNER                          NAME                                  RUN
    START_DATE
    FINANCEDEV                     CLOUSER_MAIN4                           0
    18-APR-09 10.45.00.000000 AM +05:30
    FINANCEDEV                     CLOUSER_MAIN5                           0
    18-APR-09 10.43.00.000000 AM +05:30
    The time I scheduled has already passed, and I am checking it again and again, but no luck. Can you please explain why this is so?
    Thanks in advance.
    Thanks and regards,
    VD
    Edited by: vikrant dixit on Apr 17, 2009 10:33 PM

    Hello,
    I should have asked only for next_run_date, state and failure_count. Do you see any value in the failure_count column? It seems you have a failure in the running of this job. Also, wherever you are, what is the local time there?
    You can also run this job manually and see whether run_count goes up or it fails, and whether it schedules the job for tomorrow as well:
    exec DBMS_SCHEDULER.RUN_JOB('CLOUSER',TRUE);
    Based on your information, your next_run_date should be set to 19 Apr @ 12:10 AM.
    Here is an example based on your information:
    BEGIN
       sys.DBMS_SCHEDULER.create_job (
          job_name          => '"SCHEMA_NAME"."CLOUSER"',
          job_type          => 'PLSQL_BLOCK',
          job_action        => 'DECLARE
                                   po_closure VARCHAR2(..) := ''some_value'';
                                BEGIN
                                   schema_name.package_name.procedure(po_closure);
                                   COMMIT;
                                END;',
          repeat_interval   => 'FREQ=DAILY;BYMINUTE=10;BYSECOND=0',
          start_date        => SYSTIMESTAMP AT TIME ZONE 'US/Eastern',
          job_class         => 'DEFAULT_JOB_CLASS',
          auto_drop         => FALSE,
          enabled           => TRUE);
       sys.DBMS_SCHEDULER.set_attribute (name        => '"SCHEMA_NAME"."CLOUSER"',
                                         attribute   => 'restartable',
                                         value       => TRUE);
       sys.DBMS_SCHEDULER.enable ('"SCHEMA_NAME"."CLOUSER"');
    END;
    /
    Regards

  • Job Scheduling using server pool

    Is there any way to use a pool of servers for job scheduling?
    I know you can tell it the target server you want, but I want it to find an available server from a list.
    The reason is, the job runs software on the app server.  Not all app servers have this software installed.  But it is installed on more than one.  I want it to find an available server, but only one that has the software installed.  It would be great to somehow define a pool of servers where the job can run.
    I don't think I can get creative with job class.  This looks at all work processes on all servers. 
    If I go this way, I'd have to set all work processes as A on the servers I don't want to use, and set my job as B. That is definitely not an option.
    Any ideas?
    Thanks

    Hello dskdell,
    I guess you want to use some kind of load balancing with the background servers.
    When you define a job, don't give the server name there; the system will then look for an available server and schedule the job there.
    "Although a job can specify to use a particular background server (an application server that has at least one background work process), it is best to allow the background processing system to use load balancing to distribute the workload among the available servers."
    Refer to http://help.sap.com/saphelp_nw70/helpdata/en/4a/2d513897110872e10000009b38f889/frameset.htm
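    For illustration only, a minimal sketch of what that means in code (the job name Z_MY_JOB and report ZSOME_REPORT are placeholders): simply leave out the TARGETSERVER parameter of JOB_CLOSE and the background processing system picks a server itself. Note this load-balances across all background servers; it does not restrict the job to a subset of servers.
    DATA: lv_jobname  TYPE tbtcjob-jobname VALUE 'Z_MY_JOB',
          lv_jobcount TYPE tbtcjob-jobcount.

    CALL FUNCTION 'JOB_OPEN'
         EXPORTING
              jobname          = lv_jobname
         IMPORTING
              jobcount         = lv_jobcount
         EXCEPTIONS
              cant_create_job  = 1
              invalid_job_data = 2
              jobname_missing  = 3
              OTHERS           = 4.

    SUBMIT zsome_report VIA JOB lv_jobname NUMBER lv_jobcount AND RETURN.

    "No TARGETSERVER parameter: the background system distributes the job itself
    CALL FUNCTION 'JOB_CLOSE'
         EXPORTING
              jobcount             = lv_jobcount
              jobname              = lv_jobname
              strtimmed            = 'X'
         EXCEPTIONS
              cant_start_immediate = 1
              invalid_startdate    = 2
              jobname_missing      = 3
              job_close_failed     = 4
              job_nosteps          = 5
              job_notex            = 6
              lock_failed          = 7
              OTHERS               = 8.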
    Hope it helps
    Rohit

  • Trigger mail for cancelled background jobs scheduled using SM37

    Dear Experts,
    My requirement is to trigger email whenever a job gets cancelled in background.
    For this I have already tried creating a workflow using BO BPJOB for event ABORTED.
    But for some reason the event is never getting triggered.
    I tested executing the workflow from tcode SWDD, it was running successfully, which means that there is no issue with my workflow.
    Now I want to resolve the above issue   OR
    I want to raise the event manually from the program through some BADI or Exit.
    But I didn't find any BADI or Exit for tcode SM37. Can anybody let me know if one exists?

    Hi,
    Please refer the below links.
    Workflow- Background job fail
    Re: Send mail when job fails
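    If you end up raising the event yourself from a program, a rough sketch could look like the one below. It is heavily hedged: the way the BPJOB object key is built from job name plus job count, and the exact event name, are assumptions you should verify in SWO1 before using it.
    DATA: lv_jobname  TYPE tbtcjob-jobname,   "name of the cancelled job
          lv_jobcount TYPE tbtcjob-jobcount,  "count of the cancelled job
          lv_objkey   TYPE swo_typeid.

    "Key of business object BPJOB: job name followed by job count (assumption - check SWO1)
    lv_objkey    = lv_jobname.
    lv_objkey+32 = lv_jobcount.

    CALL FUNCTION 'SWE_EVENT_CREATE'
         EXPORTING
              objtype           = 'BPJOB'
              objkey            = lv_objkey
              event             = 'ABORTED'
         EXCEPTIONS
              objtype_not_found = 1
              OTHERS            = 2.
    IF sy-subrc = 0.
      COMMIT WORK.   "the event is only delivered after the commit
    ENDIF.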
    Regards
    GK.
    Moderator message: please do not post just links without any further explanations.
    Edited by: Thomas Zloch on Sep 20, 2010 11:53 AM

  • Webview Job Scheduler Questions

    If you schedule a report to run in Webview - does the user have to be logged in to Webview for the report to run?
    Also - where do the reports actually go?
    We are taking on a new business that wants some Webview reports scheduled to run and dump to a location so they can grab them and import them into their own reporting database/dashboards.
    I know we have another customer that has something similar setup - but it was done by a 3rd party contractor before I was on the team.
    Thanks in advance.

    Hi Ronnie,
    Couple of things to note:
    Webview Job Scheduler uses Windows Task Scheduler to schedule reports
    As such, PC needs to remain on and the user who scheduled the job must be logged into WebView at the time the job is scheduled to be run (also needs to remain logged in if you are exporting to file and the drive you are exporting to is a mapped drive)
    User who is scheduling the reports needs to be an administrator of the machine they are scheduling from in order to create the Scheduled Tasks
    When you output locally to a drive letter, it automatically goes into a Drive:\Job_Scheduler\ directory
    Hope that helps. The requirement for local admin rights is a real pain as usually in most environments it's end business users who are trying to do this and IT departments don't like giving them local admin rights to their PCs...
    Cheers,
    Nathan

  • Job schedule question

    Hi:
    If I make a change to a job, will the scheduled job need to be re-activate for the new change to take effect? Or will BOBJ recognize the change without having to re-activate the scheduled job?
    Thanks in advance.

    You can change all the objects (like dataflows, workflows, ...) in the job; it will not affect the job schedule (so no need to re-activate).
    The job schedule uses the GUID to identify the job in the repository; this will not change when you modify objects in the job. So the next time the scheduled job is executed, it will pick up the new job definition automatically.
    - Ben.

  • Implementation of Batch Job Schedule in ADF

    Hi,
    How do I implement the job scheduler using Oracle ADF? What approach should I take to implement this. This includes printing of reports.
    Regards,
    Gareth

    You cannot set two different times on a single day.
    This option is not available. If you want you can schedule the same publication two times with two different times.
    Seems to be a good option but not available
    Regards
    Gowtham

  • Drop/Create sequence using Oracle Job Scheduler

    IDE for Oracle SQL Development: TOAD 9.0
    Question: I am trying to do the following:
    1. Check if a certain sequence exists in the user_sequences table
    2. Drop the sequence if it exists
    3. Re-create the same sequence afterward
    All in a job that is scheduled to run daily at 12:00 AM.
    What I would like to know is if this is even possible in the first place with Oracle jobs. I tried the following:
    1. Create the actual "BEGIN...END" anonymous block in the job.
    2. Create a procedure that uses a dynamic SQL string using the same "BEGIN...END" block that drops and recreates the sequence using the EXECUTE IMMEDIATE commands
    But I have failed on all accounts. It always produces some sort of authorization error which leads me to believe that DDL statements cannot be executed using jobs, only DML statements.
    BTW, by oracle jobs, I mean the SYS.DBMS_JOBS.SUBMIT object, not the job scheduler.
    Please do not ask me why I need to drop and recreate the sequence. It's just a business requirement that my clients gave me. I just want to know if it can be done using jobs. If not, I would like to know if there are any work-arounds possible.
    Thank you.

    Please do not ask me why I need to drop and recreate the sequence. It's just a business requirement that my clients gave me. I just want to know if it can be done using jobs. If not, I would like to know if there are any work-arounds possible.
    Well, I won't ask you then, but can you ask your clients why on earth they would want that?
    Do they know that doing DDL 'on the fly' will invalidate the dependent objects?
    The best shot you can give at it is to reset the sequence. And you could do it in a job, yes, as long as its interval falls in some maintenance window (no active users).
    Regarding resetting a sequence, you, (and your clients) should read this followup:
    http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:1119633817597
    (you can find lots more info on sequences and jobs by doing a search from the homepage http://asktom.oracle.com)
    Regarding the authorization errors: your DBA should be able to provide you the necessary privileges.
    But in the end, this is something I'd rather not see implemented on a production system...

  • Parallel processing in background using Job scheduling...

    (Note: Please understand my question completely before redirecting me to parallel processing links in sdn. I hve gone through most of them.)
    Hi ABAP Gurus,
    I have read a bit till now about parallel processing. But I have a doubt.
    I am working on a data transfer of around 5 million accounting records from legacy to R/3 using batch input recording.
    If all these records reside in one flat file and I then process that flat file in my batch input program, I guess it will take days. So my boss suggested
    using parallel processing in SAP.
    From the SDN threads, it seems that we have to create a remote-enabled function module for it and so on...
    But I have a different idea: divide these 5 million records into 10 flat files instead of just one, and then run the custom BDC program as 10 instances that process the 10 flat files in the background using job scheduling.
    Can this be also called parallel processing ?
    Please let me know if this sounds wise to you guys...
    Regards,
    Tushar.

    Thanks for your reply...
    So what do you suggest? How can I use parallel processing for transferring the 5 million records present in one flat file using a custom BDC?
    I am posting my custom BDC code for million record transfer as follows (This code is for creation of material master using BDC.)
    report ZMMI_MATERIAL_MASTER_TEST
          no standard page heading line-size 255.
    include bdcrecx1.
    parameters: dataset(132) lower case default
                                 '/tmp/testmatfile.txt'.
    ***  DO NOT CHANGE - the generated data section - DO NOT CHANGE    ***
    *
    *   If it is nessesary to change the data section use the rules:
    *   1.) Each definition of a field exists of two lines
    *   2.) The first line shows exactly the comment
    *       '* data element: ' followed with the data element
    *       which describes the field.
    *       If you don't have a data element use the
    *       comment without a data element name
    *   3.) The second line shows the fieldname of the
    *       structure, the fieldname must consist of
    *       a fieldname and optional the character '_' and
    *       three numbers and the field length in brackets
    *   4.) Each field must be type C.
    *
    ***  Generated data section with specific formatting - DO NOT CHANGE  ***
    data: begin of record,
    * data element: MATNR
            MATNR_001(018),
    * data element: MBRSH
            MBRSH_002(001),
    * data element: MTART
            MTART_003(004),
    * data element: XFELD
            KZSEL_01_004(001),
    * data element: MAKTX
            MAKTX_005(040),
    * data element: MEINS
            MEINS_006(003),
    * data element: MATKL
            MATKL_007(009),
    * data element: BISMT
            BISMT_008(018),
    * data element: EXTWG
            EXTWG_009(018),
    * data element: SPART
            SPART_010(002),
    * data element: PRODH_D
            PRDHA_011(018),
    * data element: MTPOS_MARA
            MTPOS_MARA_012(004),
          end of record.
    data: lw_record(200).
    ***  End generated data section ***
    data: begin of t_data occurs 0,
          matnr(18),
          mbrsh(1),
          mtart(4),
          maktx(40),
          meins(3),
          matkl(9),
          bismt(18),
          extwg(18),
          spart(2),
          prdha(18),
          MTPOS_MARA(4),
        end of t_data.
    start-of-selection.
    perform open_dataset using dataset.
    perform open_group.
    do.
    *read dataset dataset into record.
    read dataset dataset into lw_record.
    if sy-subrc eq 0.
    clear t_data.
    split lw_record
       at ','
    into t_data-matnr
          t_data-mbrsh
          t_data-mtart
          t_data-maktx
          t_data-meins
          t_data-matkl
          t_data-bismt
          t_data-extwg
          t_data-spart
          t_data-prdha
          t_data-MTPOS_MARA.
    append t_data.
    else.
    exit.
    endif.
    enddo.
    loop at t_data.
    *if sy-subrc <> 0. exit. endif.
    perform bdc_dynpro      using 'SAPLMGMM' '0060'.
    perform bdc_field       using 'BDC_CURSOR'
                                 'RMMG1-MATNR'.
    perform bdc_field       using 'BDC_OKCODE'
                                 '=AUSW'.
    perform bdc_field       using 'RMMG1-MATNR'
                                 t_data-MATNR.
    perform bdc_field       using 'RMMG1-MBRSH'
                                 t_data-MBRSH.
    perform bdc_field       using 'RMMG1-MTART'
                                 t_data-MTART.
    perform bdc_dynpro      using 'SAPLMGMM' '0070'.
    perform bdc_field       using 'BDC_CURSOR'
                                 'MSICHTAUSW-DYTXT(01)'.
    perform bdc_field       using 'BDC_OKCODE'
                                 '=ENTR'.
    perform bdc_field       using 'MSICHTAUSW-KZSEL(01)'
                                 'X'.
    perform bdc_dynpro      using 'SAPLMGMM' '4004'.
    perform bdc_field       using 'BDC_OKCODE'
                                 '/00'.
    perform bdc_field       using 'MAKT-MAKTX'
                                 t_data-MAKTX.
    perform bdc_field       using 'BDC_CURSOR'
                                 'MARA-PRDHA'.
    perform bdc_field       using 'MARA-MEINS'
                                 t_data-MEINS.
    perform bdc_field       using 'MARA-MATKL'
                                 t_data-MATKL.
    perform bdc_field       using 'MARA-BISMT'
                                 t_data-BISMT.
    perform bdc_field       using 'MARA-EXTWG'
                                 t_data-EXTWG.
    perform bdc_field       using 'MARA-SPART'
                                 t_data-SPART.
    perform bdc_field       using 'MARA-PRDHA'
                                 t_data-PRDHA.
    perform bdc_field       using 'MARA-MTPOS_MARA'
                                 t_data-MTPOS_MARA.
    perform bdc_dynpro      using 'SAPLSPO1' '0300'.
    perform bdc_field       using 'BDC_OKCODE'
                                 '=YES'.
    perform bdc_transaction using 'MM01'.
    endloop.
    *enddo.
    perform close_group.
    perform close_dataset using dataset.
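    To actually run this in parallel along the lines you suggested (one background job per flat file), you could wrap the program above in a small scheduler. A rough sketch follows; the job name prefix ZBDC_MAT and the way the file list is filled are placeholders you would replace with your own values:
    DATA: lv_file(132) TYPE c,
          lt_files     LIKE TABLE OF lv_file,
          lv_jobname   TYPE tbtcjob-jobname,
          lv_jobcount  TYPE tbtcjob-jobcount,
          lv_index(2)  TYPE n.

    "lt_files holds the split flat files, e.g. /tmp/matfile01.txt ... /tmp/matfile10.txt
    LOOP AT lt_files INTO lv_file.
      lv_index = sy-tabix.
      CONCATENATE 'ZBDC_MAT' lv_index INTO lv_jobname SEPARATED BY '-'.

      CALL FUNCTION 'JOB_OPEN'
           EXPORTING
                jobname          = lv_jobname
           IMPORTING
                jobcount         = lv_jobcount
           EXCEPTIONS
                cant_create_job  = 1
                invalid_job_data = 2
                jobname_missing  = 3
                OTHERS           = 4.
      CHECK sy-subrc = 0.

      "One job per file; DATASET is the file name parameter of the report above
      SUBMIT zmmi_material_master_test
             WITH dataset = lv_file
             VIA JOB lv_jobname NUMBER lv_jobcount
             AND RETURN.

      "Start immediately - the jobs run in parallel on the available background servers
      CALL FUNCTION 'JOB_CLOSE'
           EXPORTING
                jobcount             = lv_jobcount
                jobname              = lv_jobname
                strtimmed            = 'X'
           EXCEPTIONS
                cant_start_immediate = 1
                invalid_startdate    = 2
                jobname_missing      = 3
                job_close_failed     = 4
                job_nosteps          = 5
                job_notex            = 6
                lock_failed          = 7
                OTHERS               = 8.
    ENDLOOP.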

  • When & how to use default settings option in job scheduling with BO 4.1

    Hi,
    With BO 4.1 we got new features, and one of them is the "Default Settings" option in job scheduling. While scheduling a report in CMC we get the screen attached below. I want to know when and how to use this option while scheduling jobs on queries.
    Please guide me; thanks in advance.
    Regards,
    Mithun Pati.

    Hi Mithun,
    The thread below may give you a clearer idea.
    The purpose of the default settings is to have customized default settings for the enterprise. These settings apply to any user who has access to InfoView and wants to schedule a report. For example, you can set up default values for the "From" section of the email so any scheduled reports use those settings. Similarly, another setting we have defaulted is the number of retries and the seconds between each retry. Business Objects will not send out a failure email unless the last retry has failed. The default settings should only be modified by an admin.
    Purpose of Default Settings when scheduling a WebI report?

  • Tidal Enterprise Scheduler - Get Job status using c#

    Is it possible to get the list of jobs scheduled and their status in Tidal Enterprise scheduler using a programming language like C# ?

    The API depends on your version, and you need to have the client installed on the server/desktop initiating the API.
    The PDF document, called the Command Line Program Guide, should have been included on your CD2.
    I will use 5.3.1 as an example:
    open a command prompt
    Browse to C:\Program Files (x86)\TIDAL\Scheduler\client
    type in set alias=DEV > example=DEV otherwise it defaults to Admiral
    type in SACMD
    type HOST - I always type in HOST so I can verify which master I am running against (no need if there is only one connection)
    type in jobmon with your options I included below from doc
    [-d date] +display_options [filtering_options] [-b]
    It may be of limited use in 5.3.1 because to isolate a job you need to know the Job ID or Alias or some other filter criterion... The Job ID is in the database (jobmst.dbo.jobmst_id), which is the unique identifier and can be different from the alias. Depending on your requirements (dashboard?) and how you have organized your jobs, a SQL query may be the better option.
    DISPLAY
    r   Job rule ID
    i   Job run ID
    p   Parent job group ID
    j   Job type (job or job group)
    c   Occurrence number
    o   Job or job group owner
    u   Runtime user
    h   Agent (the Agent the job runs on)
    z   Scheduled vs. unscheduled job
    t   Job start time
    s   Job status
    v   Job duration
    n   Job name
    a   Job alias
    q   Queue
    x   Exit code
    FILTER
    -r rule_id     Job rule ID
    -p group_id    Parent job group ID
    -j type        Job type. You can choose: job group (group, or 1), job (Job, or 2)
    -o owner       Job or job group owner
    -u run_user    Runtime user
    -h agent       The Agent name the job runs on
    -s             Job status (Completed Abnormally=103, Normal=101)
    -a alias       Job alias
    -x exit_code   Exit code
    -b             Suppresses the header information

  • Reset SIP trunk using job scheduler

    We manage multiple CUCM clusters using 8.6/9.1 for clients and occasionally we will find the need to reset the main SIP trunks for PSTN access, which we have to do after hours due to these clusters being in production.  We have heard tell that it is possible to schedule a trunk reset using the job scheduler but we are unable to find how this is done.  Does anyone have any experience with this?

    Part of our confusion is that when you save on the trunk configuration, we see the pop up below that mentions the Job Scheduler.

  • Job Schedule Sun thru Thurs only via TVARV using variant

    Can someone tell me how to use TVARV to setup my job to run only on Sunday thru Thursday using my variant?

    Hi,
    Please check the below links.
    http://help.sap.com/saphelp_nw04/helpdata/en/c0/9803aae58611d194cc00a0c94260a5/content.htm
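    One simple pattern (just a sketch; the TVARVC variable name Z_RUN_DAYS is a made-up example you would create yourself in STVARV) is to schedule the job to run daily and let the report itself skip the days you do not want, here Friday and Saturday:
    DATA: lv_day    TYPE scal-indicator,          "1 = Monday ... 7 = Sunday
          ls_tvarvc TYPE tvarvc,
          lr_days   TYPE RANGE OF scal-indicator,
          ls_days   LIKE LINE OF lr_days.

    "Allowed weekdays maintained centrally in TVARVC (STVARV), e.g. '7' and '1' to '4'
    SELECT * FROM tvarvc INTO ls_tvarvc
           WHERE name = 'Z_RUN_DAYS' AND type = 'S'.
      ls_days-sign   = ls_tvarvc-sign.
      ls_days-option = ls_tvarvc-opti.
      ls_days-low    = ls_tvarvc-low.
      ls_days-high   = ls_tvarvc-high.
      APPEND ls_days TO lr_days.
    ENDSELECT.

    CALL FUNCTION 'DATE_COMPUTE_DAY'
         EXPORTING
              date = sy-datum
         IMPORTING
              day  = lv_day.

    IF NOT lv_day IN lr_days.
      MESSAGE s398(00) WITH 'Not a scheduled run day - exiting'.
      LEAVE PROGRAM.
    ENDIF.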
    Some sample code from the searchsap site:
    Have you ever wondered where your TVARV variables are being used in your batch jobs? How do you know if they change, and the impact it may have on your interfaces? The program below will provide a listing for you.
    The program uses Job Count (job id) as the key versus Job Name as job names can be redundant. The code was written in R/3 v.4.6C, but should work in 3.x versions as well.
    Code
    * PROGRAM: ZCAUS_EXTRACT_REPORT_VARIABLES
    * AUTHOR:  DYLAN HACK
    * CREATE DATE: 10/09/2002
    * PURPOSE: This program was developed to extract all jobs based on the
    *          "Job Status" selection criteria and find the relevant TVARV
    *          variable entries. Then the report can be downloaded and
    *          sorted on selection variables. Ultimate purpose is to know
    *          what jobs are affected by the individual TVARV variables.
    *          If a job exists but does not have a corresponding entry/use
    *          of a TVARV variable, it will not get listed.
    * JOB STATUS LEGEND:
    * R: job step running.
    * Y: job step ready (eligible to run, waiting for a work process).
    * P: job step scheduled.
    * S: job step released (eligible to run when the start condition of the
    *    job is fulfilled).
    * A: job step aborted.
    * F: job step successfully finished.
    * Z: system upgrade in progress, only upgrade-related jobs are allowed
    *    to run. Jobs and job steps with this status are ignored by the
    *    scheduler.
    * X: unknown status detected.
    REPORT ZCAUS_EXTRACT_REPORT_VARIABLES .
    tables: tbtcp, tbtco.
    data: var like RSVARVAR occurs 0 with header line,
          begin of report_tmp occurs 0,
            jobname   like tbtcp-jobname,
            jobcount  like tbtcp-jobcount,
            stepcount type tbtcp-stepcount,
            progname  type syrepid,
            variant   type syslset,
          end of report_tmp,
          report like report_tmp occurs 0 with header line,
          begin of listing occurs 0,
            jobname  type tbtcp-jobname,
            jobcount  like tbtcp-jobcount,
            stepcount type tbtcp-stepcount,
            progname  type syrepid,
            variant   type syslset,
            variable type rsvarvar-variable,
          end of listing.
    selection-screen begin of block a1 with frame title text-001.
    selection-screen begin of block b1 with frame.
      select-options: s_status for tbtco-status.
    selection-screen end of block b1.
    selection-screen begin of block b2 with frame.
      selection-screen comment /5(75) com1.
      selection-screen comment /5(75) com2.
      selection-screen comment /5(75) com3.
      selection-screen comment /5(75) com4.
      selection-screen comment /5(75) com5.
      selection-screen comment /5(75) com6.
      selection-screen comment /5(75) com7.
      selection-screen comment /5(75) com8.
    selection-screen end of block b2.
    selection-screen end of block a1.
    initialization.
    move:
    'R: Job Running' to com1,
    'Y: Job Ready (Eligible to run, waiting for process)' to com2,
    'P: Job Scheduled' to com3,
    'S: Job Released (Eligible to run when start condition is trigerred)'
    to
    com4,
    'A: Job Aborted' to com5,
    'F: Job Successfully Finished' to com6,
    'Z: Upgrade in process, job being ignored' to com7,
    'X: Unknown Status Detected' to com8.
    start-of-selection.
    * get all jobs in released status
      select jobcount from tbtco into report_tmp-jobcount
        where status in s_status.
        append report_tmp.
      endselect.
    * get all related variants for those jobs that meet the where clause.
      loop at report_tmp.
        select jobname jobcount stepcount progname variant
        into  corresponding fields of report
        from tbtcp where jobcount = report_tmp-jobcount and
                         variant not like '&%' and
                         variant <> space.
        append report.
        endselect.
      endloop.
    * sort and delete duplicates (if any)
    sort report by jobname jobcount stepcount progname variant.
    delete adjacent duplicates from report comparing all fields.
    * get TVARV selection variables
    loop at report.
    clear var. refresh var.
      call FUNCTION 'RS_VARIANT_VARIABLES'
        EXPORTING
           PROGRAM = report-progname
           VARIANT = report-variant
        TABLES
         VAR = var
        EXCEPTIONS
             VARINT_NOT_EXISTENT = 1.
    * move the data to a listing table
      if var-variable <> space.
       loop at var.
        move: report-jobname   to listing-jobname,
              report-jobcount  to listing-jobcount,
              report-stepcount to listing-stepcount,
              report-progname  to listing-progname,
              report-variant   to listing-variant,
              var-variable     to listing-variable.
        append listing.
       endloop.
      endif.
    endloop.
    * sort and delete dups from listing
    sort listing by jobname jobcount stepcount progname variant variable.
    delete adjacent duplicates from listing comparing all fields.
    data: count type i. loop at listing. count = count + 1. endloop.
    * write out the listing
    write: 'Count', count. skip.
    write: 2   'Job Name',
           36  'Job #',
           48  'Step',
           54  'Program name',
           95  'Variant',
           110 'Variable'.
    loop at listing.
        write: / listing-jobname,
                 listing-jobcount,
                 listing-stepcount,
                 listing-progname,
                 listing-variant,
                 listing-variable.
    endloop.
    Thanks,
    Ramakrishna

  • Job Queues using Scheduler

    Hi,
    Can you please let us know whether we could create any job queues using Oracle Scheduler so that one job queue can be run with one license and another using a different license?
    Thanks,
    Sandeep.

    Hi Sandy,
    I am not sure what you mean, but when you are on 11g you can schedule jobs to run on remote systems/databases. So I can imagine that the remote database can be any edition, as long as you can connect to it. Maybe you should check the license information docs <http://download.oracle.com/docs/cd/E11882_01/license.112/e10594/toc.htm>
    I hope this helps,
    Ronald
    http://ronr.blogspot.com

  • DI Job Schedules not starting (running) scheduled using DI Web Admin

    I have DI jobs with schedules that I created through DI Web Admin, and the schedules are active. Some jobs start to run at their scheduled time, while others have active schedules but do not start at their scheduled time.
    Edited by: Juan Jacome on Oct 26, 2010 8:56 PM

    I'm not sure, but a lot of the time corruption happens, and hence it's better to redo your scheduling.
    Regards,
    Den
