PO processing and Cronacle Job Scheduling

We are using PO to process data out of an external database, and we are using Redwood Cronacle to start and stop the PO channel. Currently we rely on PO updating a column in the external database to show that a record has been processed. What I was wondering is: is there a way we could have Cronacle monitor the PO processing so we can stop the channel once PO has processed the data?

Hi Jason,
I've recently written a blog for this requirement. See if it helps with your current version; otherwise a small tweak should get you there.
Integration of Third Party Scheduler (e.g. Control-M) with SAP PI/PO - Handling PI/PO intermediate message status in rea…
Regards,
Abhishek
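
For the stop itself, one option worth checking is the channel-control servlet that PI/PO exposes for external administration of communication channels, which a Cronacle job step (or a small ABAP report scheduled by Cronacle) can call over HTTP once the "processed" column is set. The sketch below is only an illustration: the host, port, user, business system and channel names are placeholders, and the ChannelAdminServlet URL pattern and required authorizations should be verified against the external channel control documentation for your PI/PO release.
REPORT zstop_po_channel.
* Sketch only - all names (host, port, user, business system, channel)
* are placeholders; verify the servlet URL for your PI/PO release.
DATA: lo_client TYPE REF TO if_http_client,
      lv_url    TYPE string,
      lv_reply  TYPE string.
lv_url = 'http://pi-host:50000/AdapterFramework/ChannelAdminServlet'
      && '?party=*&service=BS_EXTERNAL_DB&channel=CC_JDBC_Sender&action=stop'.
cl_http_client=>create_by_url(
  EXPORTING  url    = lv_url
  IMPORTING  client = lo_client
  EXCEPTIONS OTHERS = 1 ).
IF sy-subrc <> 0 OR lo_client IS NOT BOUND.
  WRITE: / 'Could not create HTTP client'. RETURN.
ENDIF.
* A service user with the channel-administration role is assumed here
lo_client->authenticate( username = 'PI_CTRL_USER' password = 'secret' ).
lo_client->send( EXCEPTIONS OTHERS = 1 ).
IF sy-subrc = 0.
  lo_client->receive( EXCEPTIONS OTHERS = 1 ).
ENDIF.
lv_reply = lo_client->response->get_cdata( ).
lo_client->close( ).
WRITE: / 'Channel control reply:', lv_reply.
Scheduled as the final Cronacle step (after the step that checks the processed flag in the external database), this keeps the start/stop decision inside the scheduler rather than inside PO.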

Similar Messages

  • Mail configuration and background job scheduling for ST22 Data

    Experts,
    I want to know how I can schedule a background job in SAP to extract data from ST22 and send an automated mail with it.
    Any help will be appreciated, thanks!
    Regards,
    Vinit Pagaria

    You don't need to do that; simply configure CCMS (RZ20) to send you an alert via mail when a dump occurs.
    Read,
    http://help.sap.com/saphelp_nw04/helpdata/EN/90/4e313f8815d036e10000000a114084/frameset.htm
    Regards
    Juan

  • Question about statistics refresh and the job scheduler

    Hi Everyone:
    So this is the stats job I see in dba_scheduler_jobs. Reading this, it runs monthly.
    SQL> select JOB_NAME, STATE, LAST_START_DATE, NEXT_RUN_DATE from dba_scheduler_jobs;
    JOB_NAME                       STATE           LAST_START_DATE                       NEXT_RUN_DATE
    MGMT_STATS_CONFIG_JOB          SCHEDULED       01-JAN-15 01.01.01.133919 AM -05:00   01-FEB-15 01.01.01.100000 AM -05:00
    This query shows the stats task is currently running in my database:
    SQL> select to_char(sysdate,'DD-MON-YYYY HH24:MI:SS') "RUNTIME", db_unique_name,
                status, client_name, window_group, consumer_group
         from dba_autotask_client, v$database;
    RUNTIME                       DB_UNIQUE_NAME  STATUS   CLIENT_NAME                         WINDOW_GROUP         CONSUMER_GROUP
    21-JAN-2015 18:14:56          DELTAC          ENABLED  auto optimizer stats collection     ORA$AT_WGRP_OS       ORA$AUTOTASK_STATS_GROUP
    First, is there any relationship between MGMT_STATS_CONFIG_JOB in the first query and "auto optimizer stats collection" in the second query? If so, how do I find the connection?
    Second, this "auto optimizer stats collection" is shown as running, but I don't see it listed in dba_scheduler_jobs.
    Now when I check what jobs are running:
    SQL> SELECT * FROM ALL_SCHEDULER_RUNNING_JOBS;
    no rows selected
    It shows nothing is running. Please help explain this huge disconnect.
    thanks
    Ed

    Hi, this is a regular database in a Linux environment. I tried associating the question with the general RDBMS community but it was greyed out.

  • How to deactivate Filter and Job in Job scheduling?

    Hi,
    My scenario is:
    XI will poll a file from a 3rd-party system and will send it to R/3 via the IDoc adapter.
    Every Sunday, R/3 will be down for 3-4 hours for maintenance.
    I need to set up job scheduling in such a way that XI collects all the messages arriving during that Sunday window,
    and once the end time has been reached it processes those messages and sends them to the R/3 system.
    I have created the sender/receiver ID and also created a filter and a job ID.
    While creating the job ID I specified the following:
    1: Start Date   = 18.08.2009
    2: Start Time   = 11.00.00
    3: End Date     = 18.08.2009
    4: End Time     = 15.00.00
    5: Period       = 7
    6: Period Unit  = D (Days)
    The job should be scheduled at 11.00.00 and end at 15.00.00; both the filter and the job should deactivate, and after 15.00.00 the collected messages should be processed.
    But the problem is that it does not deactivate the filter.
    I tried using SXMS_JOB_DEACTIVATE and SXMS_START_JOB_AT_ONCE.
    Both deactivate the job but not the filter.
    Can anybody tell me how to go about it?
    Also, if I have to write a report, what is required?
    regards,
    Mayank

    Hi Volker,
    The downtime for R/3 is quite long, so I am not sure whether XI will keep retrying for that whole period.
    And if XI is not able to deliver, then all the incoming messages in XI during that period will end up with a red flag.
    Can we write a report in XI so that all the queued messages are processed through job scheduling?

  • Job schedule - Timezone

    Hi,
    Here in Brazil we have daylight saving time.
    During those months the clock is advanced by one hour, and some scheduled jobs must run on Brazil local time.
    My question is: do scheduled jobs run on the SAP time zone or on the server time?
    Thanks and regards,
    Dany Anderson

    Hi Dany,
    Jobs will run according to server time which will be the same as shown in SAP.
    Regards.
    Ruchit.

  • Process Chains and 3rd party scheduling tools in 04s

    All,
    What 3rd-party scheduling tools (Autosys, CTL-M, etc.), if any, deliver certified connectivity with 04s BW? I've had some experience with 3.x in prior lives, having to support a custom ABAP program to submit and monitor process chains via SM37. Has the integration with 3rd-party tools changed? If we need to connect a 3rd-party tool, is a custom 'wrapper' program still required?
    Thanks,

    Hi Lonnie,
    SAP NetWeaver 2004s comes with the external scheduling tool Redwood Cronacle included. In the next release it will be an integral part of the NetWeaver system.
    Please check the SAP Service Marketplace for further information: http://service.sap.com/job-scheduling
    Cheers
    SAP NetWeaver BI Organisation

  • Parallel processing in background using Job scheduling...

    (Note: Please understand my question completely before redirecting me to parallel-processing links on SDN. I have gone through most of them.)
    Hi ABAP Gurus,
    I have read a bit about parallel processing so far, but I have a doubt.
    I am working on a data transfer of around 5 million accounting records from legacy to R/3 using batch input recording.
    If all of these records reside in one flat file and I then process that flat file in my batch input program, I guess it will take days. So my boss suggested
    using parallel processing in SAP.
    Now, from the SDN threads, it seems that we have to create a remote-enabled function module for it.
    But I have a different idea: divide these 5 million records into 10 flat files instead of just one, and then run the custom BDC program as 10 instances which process the 10 flat files in the background using job scheduling.
    Can this also be called parallel processing?
    Please let me know if this sounds wise to you.
    Regards,
    Tushar.

    Thanks for your reply.
    So what do you suggest: how can I use parallel processing to transfer the 5 million records that sit in one flat file, using a custom BDC?
    I am posting my custom BDC code for the transfer below (this code is for creation of material masters using BDC).
    report ZMMI_MATERIAL_MASTER_TEST
          no standard page heading line-size 255.
    include bdcrecx1.
    parameters: dataset(132) lower case default
                                 '/tmp/testmatfile.txt'.
     *** DO NOT CHANGE - the generated data section - DO NOT CHANGE    ***
     *   If it is necessary to change the data section use the rules:
     *   1.) Each definition of a field exists of two lines
     *   2.) The first line shows exactly the comment
     *       '* data element: ' followed with the data element
     *       which describes the field.
     *       If you don't have a data element use the
     *       comment without a data element name
     *   3.) The second line shows the fieldname of the
     *       structure, the fieldname must consist of
     *       a fieldname and optional the character '_' and
     *       three numbers and the field length in brackets
     *   4.) Each field must be type C.
     *** Generated data section with specific formatting - DO NOT CHANGE  ***
     data: begin of record,
     * data element: MATNR
             MATNR_001(018),
     * data element: MBRSH
             MBRSH_002(001),
     * data element: MTART
             MTART_003(004),
     * data element: XFELD
             KZSEL_01_004(001),
     * data element: MAKTX
             MAKTX_005(040),
     * data element: MEINS
             MEINS_006(003),
     * data element: MATKL
             MATKL_007(009),
     * data element: BISMT
             BISMT_008(018),
     * data element: EXTWG
             EXTWG_009(018),
     * data element: SPART
             SPART_010(002),
     * data element: PRODH_D
             PRDHA_011(018),
     * data element: MTPOS_MARA
             MTPOS_MARA_012(004),
           end of record.
     data: lw_record(200).
     *** End generated data section ***
    data: begin of t_data occurs 0,
          matnr(18),
          mbrsh(1),
          mtart(4),
          maktx(40),
          meins(3),
          matkl(9),
          bismt(18),
          extwg(18),
          spart(2),
          prdha(18),
          MTPOS_MARA(4),
        end of t_data.
    start-of-selection.
    perform open_dataset using dataset.
    perform open_group.
    do.
    *read dataset dataset into record.
    read dataset dataset into lw_record.
    if sy-subrc eq 0.
    clear t_data.
    split lw_record
       at ','
    into t_data-matnr
          t_data-mbrsh
          t_data-mtart
          t_data-maktx
          t_data-meins
          t_data-matkl
          t_data-bismt
          t_data-extwg
          t_data-spart
          t_data-prdha
          t_data-MTPOS_MARA.
    append t_data.
    else.
    exit.
    endif.
    enddo.
    loop at t_data.
    *if sy-subrc <> 0. exit. endif.
    perform bdc_dynpro      using 'SAPLMGMM' '0060'.
    perform bdc_field       using 'BDC_CURSOR'
                                 'RMMG1-MATNR'.
    perform bdc_field       using 'BDC_OKCODE'
                                 '=AUSW'.
    perform bdc_field       using 'RMMG1-MATNR'
                                 t_data-MATNR.
    perform bdc_field       using 'RMMG1-MBRSH'
                                 t_data-MBRSH.
    perform bdc_field       using 'RMMG1-MTART'
                                 t_data-MTART.
    perform bdc_dynpro      using 'SAPLMGMM' '0070'.
    perform bdc_field       using 'BDC_CURSOR'
                                 'MSICHTAUSW-DYTXT(01)'.
    perform bdc_field       using 'BDC_OKCODE'
                                 '=ENTR'.
    perform bdc_field       using 'MSICHTAUSW-KZSEL(01)'
                                 'X'.
    perform bdc_dynpro      using 'SAPLMGMM' '4004'.
    perform bdc_field       using 'BDC_OKCODE'
                                 '/00'.
    perform bdc_field       using 'MAKT-MAKTX'
                                 t_data-MAKTX.
    perform bdc_field       using 'BDC_CURSOR'
                                 'MARA-PRDHA'.
    perform bdc_field       using 'MARA-MEINS'
                                 t_data-MEINS.
    perform bdc_field       using 'MARA-MATKL'
                                 t_data-MATKL.
    perform bdc_field       using 'MARA-BISMT'
                                 t_data-BISMT.
    perform bdc_field       using 'MARA-EXTWG'
                                 t_data-EXTWG.
    perform bdc_field       using 'MARA-SPART'
                                 t_data-SPART.
    perform bdc_field       using 'MARA-PRDHA'
                                 t_data-PRDHA.
    perform bdc_field       using 'MARA-MTPOS_MARA'
                                 t_data-MTPOS_MARA.
    perform bdc_dynpro      using 'SAPLSPO1' '0300'.
    perform bdc_field       using 'BDC_OKCODE'
                                 '=YES'.
    perform bdc_transaction using 'MM01'.
    endloop.
    *enddo.
    perform close_group.
    perform close_dataset using dataset.
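    If the records really have to come from one file, the other classic approach (the one the SDN threads describe) is asynchronous RFC with a server group. The minimal sketch below is only an illustration: Z_POST_MATERIALS is a hypothetical RFC-enabled function module that would do the actual posting of one packet (for example via BAPI_MATERIAL_SAVEDATA inside), and the server group 'parallel_generators' and the packet handling are assumptions to adapt.
     * Sketch only: Z_POST_MATERIALS and the RFC server group are assumptions.
     DATA: gv_running TYPE i,
           gv_taskno  TYPE i.
     FORM dispatch_packet TABLES pt_packet STRUCTURE t_data.
       DATA lv_task(32) TYPE c.
       gv_taskno = gv_taskno + 1.
       lv_task = gv_taskno.
       CONDENSE lv_task.
       CONCATENATE 'MATPACK_' lv_task INTO lv_task.
       CALL FUNCTION 'Z_POST_MATERIALS'
         STARTING NEW TASK lv_task
         DESTINATION IN GROUP 'parallel_generators'
         PERFORMING packet_done ON END OF TASK
         TABLES
           it_materials = pt_packet
         EXCEPTIONS
           system_failure        = 1
           communication_failure = 2
           resource_failure      = 3.
       IF sy-subrc = 0.
         gv_running = gv_running + 1.
       ELSE.
         " no free work process or RFC problem: a real program would wait
         " (WAIT UNTIL gv_running < n) and then retry the call
       ENDIF.
     ENDFORM.
     * Callback that runs when a packet has been processed remotely
     FORM packet_done USING p_taskname.
       RECEIVE RESULTS FROM FUNCTION 'Z_POST_MATERIALS'
         EXCEPTIONS OTHERS = 1.
       gv_running = gv_running - 1.
     ENDFORM.
    The main program would fill pt_packet with, say, a few thousand records at a time and call dispatch_packet for each packet. Splitting the file into ten files and running ten background jobs, as you describe, is a coarser but perfectly valid form of parallel processing as well.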

  • Scheduling Process Chains through Jobs

    Hi All,
    I have a question regarding scheduling of process chains using Jobs created in SM36.
    In our project we are not using Meta Chains to run the individual Process Chains.
    Instead jobs have been created for each chain and the jobs are scheduled one after another and dependencies are maintained (e.g. Job A depends on Job B)
    However, on scheduling, we find that even before Job B completes Job A gets triggered and both the Process Chains start running in parallel instead of waiting for the completion of the first Chain.
    Any solution for this ??
    Thanks
    Sam

    Hi,
    I can suggest one method using events.
    Chain A:
    Start
      |
    Load Cube A
      |
    After success, raise an event (this is the event-raising program you need to insert as the last step).
    After the event, the dependent process chain is triggered:
    Start
      |
    Load Cube B
      |
    After success, raise an event (if you need to chain further).
    Create the events in SM64 and use them in a program like the one below; see the sample code.
    REPORT  ZE_SIT_RP.
    DATA: EVENTID LIKE TBTCJOB-EVENTID.
    DATA: EVENTPARM LIKE TBTCJOB-EVENTPARM.
    EVENTID = 'ZE_SIT'.
    EVENTPARM = 'ZEP_SIT'.
    CALL FUNCTION 'RSSM_EVENT_RAISE'
      EXPORTING
        I_EVENTID                    = EVENTID
        I_EVENTPARM                  = EVENTPARM
    * EXCEPTIONS
    *   BAD_EVENTID                  = 1
    *   EVENTID_DOES_NOT_EXIST       = 2
    *   EVENTID_MISSING              = 3
    *   RAISE_FAILED                 = 4
     *   OTHERS                       = 5
               .
    IF SY-SUBRC <> 0.
    * MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
    *         WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
    ENDIF.
    Thanks
    Reddy

  • Background job schedule and mail triggering

    Hi Experts,
    I scheduled a background job to run a custom program for project closure. The job runs successfully, but the mail I get from this job run is the same every time (it shows the same project closure again and again even though I am running the job for different projects). Is this a bug in our custom program, or are there any parameters that need to be checked in the job scheduling?
    Kindly suggest.
    Thanks & Regards
    Saurabh

    Yes, that is the point I am missing. Just one date is checked, the project is taken into account by the custom program, and after its successful run the mail is sent to the users.
    And when I assign the same program in SM36, it actually runs accurately for the project(s) but sends the same mail that it sent for the very first project earlier.
    Can you please guide me on how to create these variants?
    "You will need to save different variants for different projects and then assign the variants to your job."
    Will it be necessary to create a variant again and again and assign different projects individually? We are not sure which projects will be created in the future, so I need guidance on how these variants can help me sort out the e-mail issue.
    Regards
    Saurabh
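
    In case a concrete example helps: instead of one saved variant, the project can be handed to the job at submit time, so each run (and therefore each mail) is built for the right project. This is only a sketch; ZPROJ_CLOSURE and the parameter P_PSPID are assumptions standing in for your custom closure report and its project parameter.
     DATA: lv_jobname  TYPE tbtcjob-jobname VALUE 'PROJ_CLOSURE',
           lv_jobcount TYPE tbtcjob-jobcount,
           lv_project  TYPE proj-pspid.
     lv_project = 'P-000123'.              " determined per run, not hardcoded
     CALL FUNCTION 'JOB_OPEN'
       EXPORTING
         jobname  = lv_jobname
       IMPORTING
         jobcount = lv_jobcount.
     * The WITH clause passes the current project to the report, so the mail
     * is built for this project instead of whatever an old variant contains.
     SUBMIT zproj_closure
       USER sy-uname
       VIA JOB lv_jobname NUMBER lv_jobcount
       WITH p_pspid = lv_project
       AND RETURN.
     CALL FUNCTION 'JOB_CLOSE'
       EXPORTING
         jobname   = lv_jobname
         jobcount  = lv_jobcount
         strtimmed = 'X'.                  " release immediately
    If you stay with SM36 and variants instead, you would indeed need one variant per project, which is why passing the project programmatically is usually easier when the projects are not known in advance.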

  • Backing up Jobs, Chains and Programs in Oracle Job Scheduler

    What is the best way to back up Jobs, Chains and Programs created in the Oracle Job Scheduler via Enterprise Manager - and also the best way to get them from one database to another. I am creating quite a long chain which executes many programs in our test database and wish to back everything up along the way. I will also then need to migrate to the production database.
    Thanks for any advice,
    Susan

    Hi Susan,
    Unfortunately there are not too many options.
    To back up a job you can use dbms_scheduler.copy_job. I believe EM has a "Create Like" button for jobs and programs, which can also be used to create backups, but I am not sure it covers chains.
    A more general-purpose solution, which also covers chains, is to do a schema-level export using expdp, i.e. a dump of an entire schema.
    e.g.
    SQL> create directory dumpdir as '/tmp';
    SQL> grant all on directory dumpdir to public;
    # expdp scott/tiger DUMPFILE=scott_backup.dmp directory=dumpdir
    You can then extract the DDL into a SQL text file, e.g.
    # impdp scott/tiger DIRECTORY=dumpdir DUMPFILE=scott_backup SQLFILE=scott_backup.out
    or import into another database (and even another schema) e.g.
    # impdp scott/tiger DIRECTORY=dumpdir DUMPFILE=scott_backup
    Hope this helps,
    Ravi.

  • Our organization uses an Oracle database hosted on a Unix platform and one of our data processing outputs is a "stuffer" document that has a barcode, and Unix jobs automatically send the document to a printer. Is there a way, or does Adobe have a product, to create a PDF version first?

    Our organization uses an Oracle database hosted on a Unix platform and one of our data processing outputs is a “stuffer” document that has a barcode, and Unix jobs automatically send the document to a printer.
    Is there a way, or does Adobe have a product or solution, to create a PDF version of the document including the barcode, before it’s sent to a printer?

    What format is the document that is printed? Or what technology is used to format it for the printer? There isn't a standard way of doing things in Unix.

  • Scheduled Task - Stop/Start processes and .exe

    I'm running some short batch files to stop a process and an .exe just past midnight,
    and then corresponding files to start the process and the .exe at 6am.
    I've set up Task Scheduler to run the stops at 00:05 and 00:10,
    and then the starts at 06:30 and 06:40.
    However, when I run tests, the scheduled tasks show a status of "Running" long after the "stop" batch files have run.
    How can I set them up so they run once and then stop running, so there is no danger of a conflict when the "start" tasks run in the morning?
    I hope this is clear.
    Thanks
    Pete

    Hi,
    Please check in Task Manager (with "Show processes from all users") to see whether it is indeed still running. You could refer to the thread below to troubleshoot the issue:
    TASK SCHEDULER: scheduler status is being “RUNNING” always
    http://social.technet.microsoft.com/Forums/en-US/2f6dc29c-3b8b-45f5-a2a7-53e076acc062/task-scheduler-scheduler-status-is-being-running-always
    If the issue still exists, the workaround is adding this line at the very end of the batch file:
    TASKKILL /F /IM cmd.exe
    Regards,
    Mandy

  • Job Scheduling error

    Hi Experts,
    I have a critical requirement. I need to kick off 3 interfaces from 1 report, each of which will update the Oracle database.
    I've used the JOB_OPEN and JOB_CLOSE function modules and I am using SUBMIT <report> VIA JOB <jobname> NUMBER <jobnumber> AND RETURN. My problem is that these 3 reports are not getting executed, meaning they are not updating the Oracle database when they are kicked off from the new program.
    But if I run each of them individually with the same test data, it runs perfectly and updates the Oracle database. When I tried to debug, it returns before reaching the START-OF-SELECTION event.
    Any help will be appreciated.
    Thanks
    Prateek

    The VIA JOB addition also loads the accessed program in a separate internal mode when the SUBMIT statement is executed, and the system performs all the steps that come before START-OF-SELECTION. This means the events LOAD-OF-PROGRAM and INITIALIZATION are triggered and selection screen processing is performed. If the selection screen is not processed in the background when VIA SELECTION-SCREEN is specified, the user of the calling program can edit it and schedule the accessed program in the background request using the function Place in Job. If the user cancels selection screen processing, the program is not scheduled in the background job. In either case, execution of the accessed program is completed after selection screen processing and the system returns to the calling program because of the AND RETURN addition.
    Hope you got it.
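
    For reference, the usual pattern looks like the sketch below. The report name ZIF_ORACLE_UPDATE and the variant 'IF1_PROD' are placeholders; the two points that matter are that the selection screen is supplied (via a variant or WITH values) so nothing waits for input, and that JOB_CLOSE is called with a start condition, because a job that is never released never runs.
     DATA: lv_jobname  TYPE tbtcjob-jobname VALUE 'ORACLE_IF1',
           lv_jobcount TYPE tbtcjob-jobcount.
     CALL FUNCTION 'JOB_OPEN'
       EXPORTING
         jobname  = lv_jobname
       IMPORTING
         jobcount = lv_jobcount
       EXCEPTIONS
         OTHERS   = 1.
     CHECK sy-subrc = 0.
     * Selection screen values come from the variant, so the submitted report
     * reaches START-OF-SELECTION in the background without any dialog.
     SUBMIT zif_oracle_update
       USER sy-uname
       VIA JOB lv_jobname NUMBER lv_jobcount
       USING SELECTION-SET 'IF1_PROD'
       AND RETURN.
     * Without JOB_CLOSE and a start condition the job stays in status
     * "Scheduled" and never updates anything.
     CALL FUNCTION 'JOB_CLOSE'
       EXPORTING
         jobname   = lv_jobname
         jobcount  = lv_jobcount
         strtimmed = 'X'
       EXCEPTIONS
         OTHERS    = 1.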

  • What is the Difference between Adaptive and Webi Job Server

    Hello,
    What is the difference between the Adaptive Job Server and the Webi Job Server?
    On my BO server we don't have a Webi Job Server, but we are able to schedule Webi reports successfully.
    Any Idea on this.
    Thanks,
    Nimesh.

    I believe the Adaptive Job Server is a generic shell server that behaves as a Webi Job Server when it processes Webi reports, as a Program Job Server when it processes program objects, and so on.

  • Background Job Scheduling

    Hi,
    I am scheduling a report to run in the background.
    This report automatically creates background jobs for different company codes.
    It submits the 1st background job and waits until it finishes.
    Then the 2nd job starts in the background, and it continues with the other jobs.
    At the end all the jobs are finished and it closes.
    Now my problem is:
    1. Is it possible to submit all the jobs at one time and execute them at the same time, i.e. the 1st and 2nd jobs start together?
    2. If so, how can we do that?
    What I have written is (see the sketch after this post):
    loop at companycode.
      Create job name.
      call function 'JOB_OPEN'.
      submit xxxx user sy-uname via job job_name number job_count
        to sap-spool
        spool parameters l_spool_parameter
        without spool dynpro
        with companycode
        with ......
        and return.
    endloop.
    Please help.
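
    A minimal sketch of how the loop could release each job immediately so that all company-code jobs are eligible to run in parallel. ZCOCODE_REPORT, the parameter P_BUKRS and the field companycode-bukrs are placeholders for your real report and internal table; spool handling is omitted, and true parallelism still depends on free background work processes.
     DATA: lv_jobname  TYPE tbtcjob-jobname,
           lv_jobcount TYPE tbtcjob-jobcount.
     LOOP AT companycode.
       CONCATENATE 'CC_JOB_' companycode-bukrs INTO lv_jobname.
       CALL FUNCTION 'JOB_OPEN'
         EXPORTING
           jobname  = lv_jobname
         IMPORTING
           jobcount = lv_jobcount.
       SUBMIT zcocode_report
         USER sy-uname
         VIA JOB lv_jobname NUMBER lv_jobcount
         WITH p_bukrs = companycode-bukrs
         AND RETURN.
     * strtimmed = 'X' releases the job right away instead of waiting for the
     * previous one, so the jobs no longer run one after another.
       CALL FUNCTION 'JOB_CLOSE'
         EXPORTING
           jobname   = lv_jobname
           jobcount  = lv_jobcount
           strtimmed = 'X'.
     ENDLOOP.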

    hi praveen,
    Job Scheduling Explained
    Definition
    Before any background processing can actually begin, background jobs must be defined and scheduled. The scheduled time for when a job runs is one part of the job’s definition. There are several ways to schedule jobs:
    From Transaction SM36 (Define Background Job)
    With the "start program in the background" option of either Transaction SA38 (ABAP: Execute Program) or Transaction SE38 (the ABAP editor)
    Through the background processing system’s own programming interface. (Many SAP applications use the internal programming interface to schedule long-running reports for background processing.)
    Through an external interface.
    Scheduling Background Jobs   
    Use
    You can define and schedule background jobs in two ways from the Job Overview:
    ·         Directly from Transaction SM36. This is best for users already familiar with background job scheduling.
    ·         The Job Scheduling Wizard. This is best for users unfamiliar with SAP background job scheduling. To use the Job Wizard, start from Transaction SM36, and either select Goto → Wizard version or simply use the Job Wizard button.
    Procedure
           1.      Call Transaction SM36 or choose CCMS → Jobs → Definition.
           2.      Assign a job name. Decide on a name for the job you are defining and enter it in the Job Name field.
           3.      Set the job’s priority, or “Job Class”:
    ·         High priority:      Class A
    ·         Medium priority: Class B
    ·         Low priority: Class C
           4.      In the Target server field, indicate whether to use system load balancing.
    ·         For the system to use system load balancing to automatically select the most efficient application server to use at the moment, leave this field empty.
    ·         To use a particular application server to run the job, enter a specific target server.
           5.      If spool requests generated by this job are to be sent to someone as email, choose the Spool list recipient button and specify the email address.
           6.      Define when the job is to start by choosing Start Condition and completing the appropriate selections. If the job is to repeat, or be periodic, check the box at the bottom of this screen.
           7.      Define the job’s steps by choosing Step, then specify the ABAP program, external command, or external program to be used for each step.
           8.      Save the fully defined job to submit it to the background processing system.
           9.      When you need to modify, reschedule, or otherwise manipulate a job after you've scheduled it the first time, you'll manage jobs from the Job Overview.
    Note: Release the job so that it can run. No job, even those scheduled for immediate processing, can run without first being released.
    Specifying Job Start Conditions
    Use
    When scheduling a background job (either from Transaction SM36, Define Background Job, or CCMS → Jobs → Definition), you must specify conditions that will trigger the job to start.
    Procedure
    Choose the Start condition button at the top of the Define Background Job screen.
    Choose the button at the top of the Start Time screen for the type of start condition you want to use (Immediate, Date/Time, After job, After event, or At operation mode) and complete the start time definition in the screen that appears.
    For the job to repeat, check the Periodic job box at the bottom of the Start Time screen and choose the Period values button below it to define the frequency of repetition (hourly, daily, weekly, monthly, or another specific time-related period). Then choose the Save button in the Period values screen to accept the periodicity and return to the Start Time screen.
    Once you’ve completed specifying the job start conditions, choose the Save button at the bottom of the Start Time screen to return to the Define Background Job screen.
    No job can be started until it is released, including jobs scheduled to start immediately. Since releasing jobs can be done only by a system administrator from the job management screen (Transaction SM37) or by other users who have been granted the appropriate Authorizations for Background Processing, no unauthorized user can start a job without explicit permission.
    Managing Jobs from the Job Overview
    Use
    The Job Overview, or Job Maintenance, screen is the single, central area for completing a wide range of tasks related to monitoring and managing jobs, including defining jobs; scheduling, rescheduling, and copying existing jobs; rescheduling and editing jobs and job steps; repeating a job; debugging an active job; reviewing information about a job; canceling a job's release status; canceling and deleting jobs; comparing the specifications of several jobs; checking the status of jobs; reviewing job logs; and releasing a job so it can run.
    Procedures
    To display the Job Overview screen, choose CCMS → Jobs → Maintenance or call Transaction SM37. Before entering the Job Overview screen, the system first displays the Select Background Jobs screen. You'll need to complete this Job Selection screen to define the criteria for the jobs you want to manage. Once you've selected jobs to manage, you can choose from a wide range of management tasks:
    To copy a single existing job, choose Job → Copy.
    To reschedule or edit job steps or attributes of a single job, choose Job → Change. A job step is an independent unit of work within a background job. Each job step can execute an ABAP or external program. Other variants or authorizations may be used for each job step. The system allows you to display ABAP programs and variants. You can scan a program for syntax errors. You can also display the authorizations for an authorized user of an ABAP job step.
    To repeat a single job, choose Job → Repeat scheduling.
    To debug an active job, choose Job → Capture: active job. Only a single selection is allowed. If an active job seems to be running incorrectly (e.g., running for an excessively long time), you can interrupt and analyze it in debugging mode in a background process, and then either release it again or stop it altogether.
    You will be able to capture a background job only if you are logged on to the SAP server on which the job is running. To find server information in the Job Overview, select and mark the job, then choose Job → Job details.
    To review information about a job, choose Job → Job details. Details displayed can include:
    current job status
    periodicity, or the repetition interval
    other jobs linked to the current job, either as previous or subsequent jobs
    defined job steps
    spool requests generated by the current job
    To cancel a job's "Released" status, select the job or jobs from the Job Overview list and choose Job → Release → Scheduled.
    To cancel a job from running but keep the job definition available, select the job or jobs from the Job Overview list and choose Job → Cancel active job.
    To delete a job entirely, select the job or jobs from the Job Overview list and choose Job → Delete. Jobs with the status of Ready or Running cannot be deleted.
    To compare the specifications of more than one job, select the jobs from the Job Overview list and choose Job → Compare jobs.
    To check the status of jobs, select the job or jobs from the Job Overview list and choose Job → Check status. This allows you to either change the job status back to Planned or cancel the job altogether. This is especially useful when a job has malfunctioned.
    To review job logs, select a job or jobs with the status Completed or Canceled from the Job Overview list and choose Job → Job log.
    regards
    karthik
    reward me points if helpful
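
    If a program (rather than someone in SM37) needs to check whether such a job has finished, one simple option is to read the job status from table TBTCO, as in the sketch below. The job name and count are placeholders (they would come from JOB_OPEN), and the status codes used ('F' finished, 'A' cancelled) are the standard TBTCO values; verify them on your release.
     DATA: lv_status   TYPE tbtco-status,
           lv_jobname  TYPE tbtco-jobname  VALUE 'MY_JOB',
           lv_jobcount TYPE tbtco-jobcount VALUE '12345678'.
     SELECT SINGLE status
       INTO lv_status
       FROM tbtco
       WHERE jobname  = lv_jobname
         AND jobcount = lv_jobcount.
     CASE lv_status.
       WHEN 'F'.
         WRITE: / 'Job finished'.
       WHEN 'A'.
         WRITE: / 'Job was cancelled'.
       WHEN OTHERS.
         WRITE: / 'Job is still planned, released, ready or active:', lv_status.
     ENDCASE.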
