Dependent jobs in PI

Hi All,
I have a JDBC -> File interface that selects some records and writes them to a CSV file.
I need to send an HTTP trigger to the target system right after the file is written, so that the target system does not have to poll for the file.
Is there a way to schedule dependent jobs in PI?
Thanks,
Harsh
PS: We recently moved to PI 7.1

Hi Volker,
Thanks for the prompt response.
BPM is not an advisable option as the load is high. Regarding the proxy approach, if I understood you correctly, you are suggesting that I add an additional receiver in my receiver determination.
Our concern with that approach is that we'd like to send the trigger only after we have confirmation that the file has been successfully written. An additional receiver (proxy or otherwise) would send the trigger even if the file channel errors out for some reason.
Thanks,
Harsh
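
For reference, the PI 7.x File receiver channel offers a "Run Operating System Command After Message Processing" option, which is executed only after the adapter has processed (i.e. written) the file. A trigger script invoked that way might be sketched as follows; the script name and endpoint URL are hypothetical, and the adapter substitutes %F with the written file's name:

```shell
#!/bin/sh
# notify_target.sh (hypothetical): invoked by the file receiver channel's
# "Run Operating System Command After Message Processing" with %F as $1.
FILE="${1:-out.csv}"
URL="http://target.example.com/trigger?file=${FILE}"  # placeholder endpoint
echo "$URL"
# The real script would fire the trigger instead of echoing, e.g.:
#   curl -fsS "$URL"
```

Because the command only runs after message processing, it avoids the additional-receiver problem of triggering even when the file channel errors out.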

Similar Messages

  • Creating dependent jobs

    Hi,
    I have to create a job that is dependent on the completion of another job. I checked transaction SM36 -> Start Condition -> After Job, where there is a check box labelled "Start Status dependent". Please let me know the purpose of this check box and the impact of leaving it unchecked.
    Regards,
    Senthil G.

    Hi Senthil,
    If "Start Status dependent" is ticked, the second (dependent) job starts only if the first job completes successfully. If it is left unchecked, the second job starts as soon as the first job ends, regardless of whether it succeeded or was cancelled.
    regards,
    amit m.

  • Restarting a systemd service whose dependency job failed

    Hello.
    Let's say I have foo.service containing that:
    [Unit]
    Description=Is executed only if bar.service success
    Requires=bar.service
    After=bar.service
    [Service]
    Type=simple
    TimeoutSec=6
    Restart=on-failure
    ExecStart=/usr/bin/touch /tmp/test
    And I have bar.service containing this:
    [Unit]
    Description=Test in foo.tld is reachable
    [Service]
    Type=oneshot
    ExecStart=/usr/bin/ping -c1 foo.tld
    [Install]
    WantedBy=multi-user.target
    If the ping fails in bar.service, then when I try to start foo.service I get: « A dependency job for foo.service failed. See 'journalctl -xn' for details. »
    How can I make foo.service check every n seconds whether its dependency jobs are fulfilled/successful, instead of just failing and stopping as it currently does?
    I'm looking for a "Restart" option that would apply when the dependency check fails. I'm not sure whether this is possible.
    Thank you in advance for your answers.

    Maybe you're looking for RequiresOverridable? From the man page of systemd.unit:
    RequiresOverridable=
    Similar to Requires=. Dependencies listed in RequiresOverridable= which cannot be fulfilled or fail to start are ignored if the startup was explicitly requested by the user. If the start-up was
    pulled in indirectly by some dependency or automatic start-up of units that is not requested by the user, this dependency must be fulfilled and otherwise the transaction fails. Hence, this option
    may be used to configure dependencies that are normally honored unless the user explicitly starts up the unit, in which case whether they failed or not is irrelevant.
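
    An alternative worth noting (a sketch, not a drop-in fix): fold the reachability check into foo.service itself as an ExecStartPre= step, so that Restart=on-failure also covers a failed check. Per systemd.service, Restart= is triggered by failures of ExecStartPre= processes as well as the main process:

```ini
# Hypothetical variant of foo.service with the dependency check inlined.
[Unit]
Description=Touch /tmp/test once foo.tld is reachable

[Service]
Type=simple
# The check replaces Requires=bar.service; if the ping fails,
# the unit fails and Restart= schedules a retry.
ExecStartPre=/usr/bin/ping -c1 foo.tld
ExecStart=/usr/bin/touch /tmp/test
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
```

    The trade-off is that the dependency is no longer a separate unit, so other units can no longer order themselves against bar.service.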

  • How to schedule dependent job in oracle

    I would like to schedule a job that should run only after a previous job completes. How is this achieved in Oracle 10g?

    Refer to the examples for DBMS_SCHEDULER.CREATE_CHAIN to set up the job dependency.
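
    As a concrete starting point, a minimal two-step chain might look like the following sketch (chain, step, and program names are illustrative; the referenced scheduler programs must already exist and be enabled):

```sql
BEGIN
  DBMS_SCHEDULER.CREATE_CHAIN(chain_name => 'my_chain');
  -- one step per program; prog_a/prog_b are placeholder program names
  DBMS_SCHEDULER.DEFINE_CHAIN_STEP('my_chain', 'step_a', 'prog_a');
  DBMS_SCHEDULER.DEFINE_CHAIN_STEP('my_chain', 'step_b', 'prog_b');
  -- start step_a as soon as the chain starts
  DBMS_SCHEDULER.DEFINE_CHAIN_RULE('my_chain', 'TRUE', 'START step_a');
  -- run step_b only after step_a succeeds
  DBMS_SCHEDULER.DEFINE_CHAIN_RULE('my_chain', 'step_a SUCCEEDED', 'START step_b');
  DBMS_SCHEDULER.DEFINE_CHAIN_RULE('my_chain', 'step_b COMPLETED', 'END');
  DBMS_SCHEDULER.ENABLE('my_chain');
  -- a job that runs the chain
  DBMS_SCHEDULER.CREATE_JOB(job_name => 'chain_job',
    job_type => 'CHAIN', job_action => 'my_chain', enabled => TRUE);
END;
/
```

    The rule conditions ('step_a SUCCEEDED', etc.) are what express the dependency: if step_a fails, no rule fires to start step_b.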

  • Scheduling Dependent Jobs

    Hi,
    I am trying to schedule 2 jobs that are not logically linked, but whose execution is dependent.
    Ex: Job A: gather the DB stats.
    Job B: run the DB export.
    For technical reasons, I want Job B to run during its allocated schedule ONLY IF Job A ran and completed successfully before Job B's time slot.
    So my questions:
    1- Is there a way to do this in Enterprise manager?
    2- Is there a way to do this in the script, other than calling Job B explicitly from Job A, and without enabling/disabling or altering the job in any way?
    Thanks

    Thanks Lakmal. I am using Oracle 11g, and I had a look at the link you provided. Quite informative and useful.
    So I have the assumptions below; please correct me if I am wrong:
    1- This is basically like a subroutine of jobs, whereby they run according to the chain.
    2- Each job runs when the previous one has completed.
    3- If one of the jobs in the chain fails, the whole chain fails, and no retries are executed at the chain level.
    Right?
    Thanks

  • Running 2 dependent jobs on different schedules

    Dear all,
    I have the following problem:
    Job1 (a chain running external jobs) runs once per day at 08:00.
    Job2 (a chain running external jobs) is dependent upon Job1 completing and runs cyclically every 10 minutes.
    I have attempted to use scheduler events, but this only runs the first instance of Job2. The next instance sits there in a not-started state waiting for the next event trigger (which will not happen until the next day).
    I don't think I can combine the jobs and schedules, and creating new procedures to accommodate this is an unwanted overhead.
    I have created a workable solution using conditional SQL in the start-rule condition of the Job2 chain, using data held in the all_scheduler_jobs view, but I don't really think the logs were intended for such a use. The conditional SQL used in the rule of step 1 of Job2 is:
    (SELECT COUNT(*) FROM all_scheduler_jobs WHERE owner = ''<job_owner>'' AND job_name = ''<job_name>'' AND state = ''COMPLETED'' AND TRUNC((SELECT MAX(last_start_date) FROM all_scheduler_jobs WHERE owner = ''<job_owner>'' AND job_name = ''<job_name>'' AND state = ''COMPLETED'')) = TRUNC(systimestamp)) = 1
    Does anyone have alternative solutions for such a schedule? Any help would be appreciated.
    Thanks

    Hi,
    If I am understanding you correctly, Job2 should run every 10 minutes and additionally run when Job1 completes.
    I would recommend splitting this into 2 jobs, job2a and job2b. Have job2a run every 10 minutes and job2b run when Job1 completes.
    Here is sample code you can use to have job2b run when job1 completes (replace the job action and type with your chain)
    Hope this helps,
    Ravi.
    -- create a table for output
    create table job_output (
      log_date timestamp with time zone,
      output   varchar2(4000));

    -- add an event queue subscriber for this user's messages
    exec dbms_scheduler.add_event_queue_subscriber('myagent')

    -- create the first job and have it raise an event whenever it completes
    -- (succeeds, fails or stops)
    begin
      dbms_scheduler.create_job('first_job',
        job_action      => 'insert into job_output values(systimestamp, ''first job runs'');',
        job_type        => 'plsql_block',
        enabled         => false,
        repeat_interval => 'freq=secondly;interval=30');
      dbms_scheduler.set_attribute('first_job', 'max_runs', 2);
      dbms_scheduler.set_attribute('first_job', 'raise_events',
                                   dbms_scheduler.job_run_completed);
    end;
    /

    -- create a simple second job that runs after the first has completed
    begin
      dbms_scheduler.create_job('second_job',
        job_type        => 'plsql_block',
        job_action      => 'insert into job_output values(systimestamp, ''second job runs'');',
        event_condition => 'tab.user_data.object_name = ''FIRST_JOB''',
        queue_spec      => 'sys.scheduler$_event_queue,myagent',
        enabled         => true);
    end;
    /

    -- enable the first job so it starts running
    exec dbms_scheduler.enable('first_job')

    -- wait until the first job has run twice
    exec dbms_lock.sleep(60)

    select * from job_output;

  • Dependent Jobs in scheduler

    Hi Experts,
    Below are the version details where I am working.
    Oracle Database 11g Express Edition Release 11.2.0.2.0 - Production
    PL/SQL Release 11.2.0.2.0 - Production
    "CORE 11.2.0.2.0 Production"
    TNS for 32-bit Windows: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production
    I have a requirement like the one below:
    There are 2 COBOL programs scheduled using the Oracle scheduler.
    Before starting execution of the second program, the first job's status must be checked. Only if the first job succeeded should the second program continue; otherwise it has to stop.
    Do we have any option other than event-based jobs and chains?
    Thanks,
    Nagaraju A.

    Hi,
    If you don't want to use either event-based jobs or chains, then you'll need to check the first job's status in your second job's logic.
    In the beginning of the second job, check the status of the first job from dba_scheduler_job_run_details for the latest run (or a given date). If status is not SUCCEEDED, then exit the procedure of your second job.
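
    A sketch of such a check (the job name is a placeholder; within the job's own schema, user_scheduler_job_run_details works as well):

```sql
-- Status of the most recent run of FIRST_JOB; the second job's
-- procedure can exit early unless this returns 'SUCCEEDED'.
SELECT status
  FROM dba_scheduler_job_run_details d
 WHERE job_name = 'FIRST_JOB'
   AND log_date = (SELECT MAX(log_date)
                     FROM dba_scheduler_job_run_details
                    WHERE job_name = 'FIRST_JOB');
```

    The subquery on MAX(log_date) avoids 12c-only FETCH FIRST syntax, which matters on the 11.2 Express Edition mentioned above.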

  • Redwood CPS - how to define dependent jobs?

    Hi
    I am trying to schedule a job to run only after another job has run successfully using Redwood CPS. I understand that the best way to do this is as follows:
    1) Define a new event, e.g. JOB_COMPLETION.
    2) Define a new Job Chain, and assign a Step and the first job to run to it. Set the 'Raise Events' tab to JOB_COMPLETION and the drop-down box 'Status To Raise On' to 'Completed'.
    3) Define another new Job Chain with a Step and the next job you want to run after completion of the first job. Set the 'Wait Events' tab to JOB_COMPLETION, set Auto Submit to 'Always', and tick the 'clears event' tick box.
    4) Right-click on the first Job Chain and Submit; set a time for the job to run.
    What happens: the first job chain/step runs okay. Looking at the event JOB_COMPLETION, it says both the Raise Event and the Wait Event were executed, and cleared afterwards, yet the second Job Chain never ran. In the job status it says 'Never'... any idea how I can get this to run on completion of the first job?
    Thanks
    Ross

    Answered in the CPS forum:
    Hi Ross,
    I think the best way to do it is to just use the job chain object. You can do as follows:
    - Create a jobchain with 2 steps
    - First script in the first step, second script in the second step
    By default, script 2 does not run if script 1 failed.
    Regards,
    Yi Jiang

  • Scheduling dependent jobs

    Hi,
    We have a requirement where a job should be executed in one system only if another job succeeds in a different system.
    Both the systems are connected to each other.
    Can this be done by triggering events..using BP_EVENT_RAISE??
    Or if there is some other way, please let me know.
    Thanks,
    Saba.

    Yes, you can use 'BP_EVENT_RAISE' to raise an event, which will then trigger your background job in the system.
    However, since this FM only raises the event in the system in which it is called, you would have to encapsulate it in an RFC.
    Create RFC in System1 that calls BP_EVENT_RAISE.
    In System2 you have your job, which is the precondition for the job in System1.
    Add an additional step to the job in System2 and call a report which calls the RFC in System1.
    This will do the following:
    System2 - Jobx - Step1 (your precondition report)
    System2 - Jobx - Step2 (report calling RFC in System1)
    System1 - RFC - raises event
    System1 - Joby - starts because Jobx in System2 is done
    Hope that helps,
    Michael
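
    As a sketch, the RFC-enabled wrapper in System1 could look like this (the function name and the event Z_TRIGGER_JOBY are hypothetical; the event must be defined in transaction SM62 and used as Joby's start condition):

```abap
FUNCTION z_raise_joby_event.
* RFC-enabled function module in System1, called from the report
* running as step 2 of Jobx in System2.
  CALL FUNCTION 'BP_EVENT_RAISE'
    EXPORTING
      eventid                = 'Z_TRIGGER_JOBY'  " hypothetical SM62 event
    EXCEPTIONS
      bad_eventid            = 1
      eventid_does_not_exist = 2
      eventid_missing        = 3
      raise_failed           = 4
      OTHERS                 = 5.
  IF sy-subrc <> 0.
    MESSAGE ID sy-msgid TYPE 'E' NUMBER sy-msgno
            WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
  ENDIF.
ENDFUNCTION.
```

    Joby in System1 would then be scheduled with start condition "After event" = Z_TRIGGER_JOBY.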

  • FM JOB_CLOSE doesn't wait for pred job even when pred_jobname/jobcount set

    Dear experts,
    I need to call two batch jobs in sequential order when a special condition applies. This is not always the case: when there is no successor task, the job should be executed immediately. But when there is another job waiting in the queue, I set the parameters pred_jobcount/pred_jobname so that the next job is executed when the previous one has finished.
    I tried several parameter combinations, read lots of help, and checked example programs, but the problem I've got is this: either both jobs are launched at the same time (so it doesn't really wait for the predecessor to complete successfully, even if predjob_checkstat is set to 'X'), or the second job is never launched at all. Here is the call of JOB_CLOSE:
    AS_PRED contains the correct information about the predecessor job (I verified against SM37 and there is no problem there). For the first job launched I have no issue at all; however, the successor is never launched.
    (Just to play around and test another parameter combination, I called the FM with strtimmed always set to 'X', but then both jobs are launched at the same time, which is not convenient for me.)
    IF as_pred IS INITIAL.
        lv_strt_immed = 'X'.
      ELSE.
    lv_strt_immed = space.
        lv_checkstat = 'X'.
      ENDIF.
    * Plan job with direct start after predecessor job (if available)
      CALL FUNCTION 'JOB_CLOSE'
        EXPORTING
          jobcount             = me->job_count
          jobname              = me->job_name
          pred_jobcount        = as_pred-predjobcnt
          pred_jobname         = as_pred-predjob
          predjob_checkstat    = lv_checkstat
          strtimmed            = lv_strt_immed
        IMPORTING
          job_was_released     = lp_job_released
        EXCEPTIONS
          cant_start_immediate = 1
          invalid_startdate    = 2
          jobname_missing      = 3
          job_close_failed     = 4
          job_nosteps          = 5
          job_notex            = 6
          lock_failed          = 7
          invalid_target       = 8
          OTHERS               = 9.
    Do you have any hint what might be wrong with the function call above?
    Many thanks for your help, best regards
    Ebru

    BP_JOB_CREATE can be used to create dependent jobs. The below sample program creates a chain of jobs. Prerequisite: the report program YTEST should exist.
    REPORT  yjob_chain.
    DATA cntr(3) TYPE n.
    PARAMETERS:
      job_str(10) TYPE c DEFAULT 'ABC', "default jobname template
      p_njobs TYPE i. "number of nodes in chain
    DATA jobname LIKE tbtcjob-jobname.
    DATA jobcount LIKE tbtcjob-jobcount.
    DATA pred_jobname LIKE tbtcjob-jobname.
    DATA pred_jobcount LIKE tbtcjob-jobcount.
    DATA flg_1stjob TYPE c.
    START-OF-SELECTION.
      CLEAR flg_1stjob.
      DO p_njobs TIMES.
        PERFORM create_job.
      ENDDO.
    *&      Form  create_job
    *       text
    FORM create_job.
      DATA global_job01 TYPE tbtcjob.
      DATA global_job02 TYPE tbtcjob.
      DATA steplist TYPE STANDARD TABLE OF tbtcstep.
      DATA stepline TYPE tbtcstep.
      stepline-program = 'YTEST'.
      stepline-typ = 'A'.
      stepline-authcknam = sy-uname.
      APPEND stepline TO steplist.
      cntr = cntr + 1.
      CONCATENATE job_str '-' cntr INTO jobname.
      global_job01-jobname = jobname.
      global_job01-jobclass = 'C'.
      IF flg_1stjob IS NOT INITIAL.
        global_job01-eventid = 'SAP_END_OF_JOB'.
        global_job01-eventparm = pred_jobname.
        global_job01-eventcount = pred_jobcount.
      ELSE.
        global_job01-reldate = sy-datum.
        global_job01-reltime = sy-timlo.
        global_job01-strtdate = sy-datum + 1.
        global_job01-strttime = '010000'.
      ENDIF.
      CALL FUNCTION 'BP_JOB_CREATE'
        EXPORTING
          job_cr_dialog       = 'N'
          job_cr_head_inp     = global_job01
        IMPORTING
          job_cr_head_out     = global_job02
        TABLES
          job_cr_steplist     = steplist
        EXCEPTIONS
          cant_create_job     = 1
          invalid_dialog_type = 2
          invalid_job_data    = 3
          job_create_canceled = 4
          OTHERS              = 5.
      IF sy-subrc = 0.
        pred_jobname = jobname.
        pred_jobcount = global_job02-jobcount.
        flg_1stjob = 'X'.
        WRITE:/ 'Created Job- ', 'Name: ', global_job02-jobname, 'Number: ', global_job02-jobcount.
      ELSE.
        WRITE:/ 'Error creating job'.
      ENDIF.
    ENDFORM.                    "create_job

  • Extra job getting created in dynamic handling of jobs

    I have the below code and I notice an extra job that is being created in SM35. Any reasons/clues please
    bdcjob will have A/P_ACCOUNTS_BDC
    adrjob will have A/P_ACCOUNTS_ADDRESS
    Name of the batch input session is A/P
    The 3rd extra job that is coming up is 'A/P' and I did not open any job by that name.
    Thanks for your help.
    Kiran
    DATA: bdcjob TYPE tbtcjob-jobname,
          bdcnum TYPE tbtcjob-jobcount,
          adrjob TYPE tbtcjob-jobname,
          adrnum TYPE tbtcjob-jobcount,
          params LIKE pri_params,
          l_valid   TYPE c.
    CHECK fileonly IS INITIAL.
    MOVE: jobname TO bdcjob.
    adrjob = 'A/P_ACCOUNTS_ADDRESS'.
    IF NOT logtable[] IS INITIAL.
      IF NOT testrun IS INITIAL.
    *   If its a test run, Non Batch-Input-Session
        SUBMIT rfbikr00 AND RETURN
                        USER sy-uname
                        WITH ds_name EQ file_o
                        WITH fl_check EQ 'X'.         "X=No Batch-Input
      ELSE.
    *   Create a session which will be processed by a job
        SUBMIT rfbikr00 AND RETURN
                        USER sy-uname
                        WITH ds_name EQ file_o
                        WITH fl_check EQ ' '.
    *   Open BDC Job
        CALL FUNCTION 'JOB_OPEN'
          EXPORTING
            jobname          = bdcjob
          IMPORTING
            jobcount         = bdcnum
          EXCEPTIONS
            cant_create_job  = 1
            invalid_job_data = 2
            jobname_missing  = 3
            OTHERS           = 4.
        IF sy-subrc <> 0.
          MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
          WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
        ELSE.
    *     Submit RSBDCSUB to trigger the session in background mode
          SUBMIT rsbdcsub
            VIA  JOB    bdcjob
                 NUMBER bdcnum
            WITH von     = sy-datum
            WITH bis     = sy-datum
            WITH z_verab = 'X'
            WITH logall  = 'X'
            AND RETURN.
          IF sy-subrc EQ 0.
    *       Export data to a memory id. This data will be used by the program
    *       that updates the address & email id
            EXPORT t_zzupdate TO SHARED BUFFER indx(st) ID 'MEM1'.
    *       Get Print Parameters
            CALL FUNCTION 'GET_PRINT_PARAMETERS'
              EXPORTING
                no_dialog      = 'X'
              IMPORTING
                valid          = l_valid
                out_parameters = params.
    *       Open a second job to trigger a program which updates addresses & email ids
            CALL FUNCTION 'JOB_OPEN'
              EXPORTING
                jobname          = adrjob
              IMPORTING
                jobcount         = adrnum
              EXCEPTIONS
                cant_create_job  = 1
                invalid_job_data = 2
                jobname_missing  = 3
                OTHERS           = 4.
            IF sy-subrc <> 0.
              MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
              WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
            ELSE.
    *         submit the program to update email id & long addresses
              SUBMIT zfpa_praa_address_update
                     VIA JOB adrjob
                     NUMBER  adrnum
                     TO SAP-SPOOL WITHOUT SPOOL DYNPRO
                         SPOOL PARAMETERS params
                            AND RETURN.
              IF sy-subrc EQ 0.
    *           First close the dependent job(address update job). Dependency
    *           is shown by using pred_jobcount & pred_jobname parameters
                CALL FUNCTION 'JOB_CLOSE'
                  EXPORTING
                    jobcount             = adrnum
                    jobname              = adrjob
                    pred_jobcount        = bdcnum
                    pred_jobname         = bdcjob
                  EXCEPTIONS
                    cant_start_immediate = 1
                    invalid_startdate    = 2
                    jobname_missing      = 3
                    job_close_failed     = 4
                    job_nosteps          = 5
                    job_notex            = 6
                    lock_failed          = 7
                    invalid_target       = 8
                    OTHERS               = 9.
                IF sy-subrc <> 0.
                  MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
                          WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
                ENDIF.
              ENDIF.
            ENDIF.
          ENDIF.
    *     Close the main job(BDC Job)
          CALL FUNCTION 'JOB_CLOSE'
            EXPORTING
              jobcount             = bdcnum
              jobname              = bdcjob
              strtimmed            = 'X'
            EXCEPTIONS
              cant_start_immediate = 1
              invalid_startdate    = 2
              jobname_missing      = 3
              job_close_failed     = 4
              job_nosteps          = 5
              job_notex            = 6
              lock_failed          = 7
              invalid_target       = 8
              OTHERS               = 9.
          IF sy-subrc <> 0.
            MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
                    WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
          ENDIF.
        ENDIF.
      ENDIF.
    Edited by: kiran dasari on Jul 9, 2010 12:58 AM

    I tried changing the tags, but that did not help.
    Since there are only two JOB_OPEN calls, I expect to see ONLY two jobs in SM37, yet I am seeing 3 jobs as mentioned. That is the problem, and I wish to know how and from where the 3rd job is getting created.
    The full code is the same as the listing above, except that it ends with an additional branch after the outer IF:
    ELSEIF NOT t_zzupdate[] IS INITIAL.
    * some other process
    ENDIF.
    Thanks,
    Kiran

  • OIM 11.1.1.5: Error while importing scheduled jobs

    Hi All
    I exported my custom scheduled task + job from an OIM 11g environment and am trying to import it into another 11g environment (both 11.1.1.5) using the Deployment Manager. While importing the XML, I get the following error:
    MDS-00044: Metadata for MetadataObject with name /db/task.xml already exists in the configured store
    Any idea how I can export/import all the custom jobs from one OIM 11g environment to another?
    Please help.

    While taking the export using Deployment Manager, select the scheduled task and then select all dependent jobs; don't export jobs directly. Then import the same in the other environment.
    Make sure you upload the scheduled-task JAR using the uploadjars.sh utility, or put it in the Scheduled Task folder, before importing the above.
    This is what I applied for migration, and it is working fine.
    If the error still persists, try to remove the metadata using the WeblogicDeleteMetadata.sh utility.
    Finally, if nothing works, put the custom MDS (scheduledtask.xml) on your machine at /Temp/db/scheduledtask.xml, update the from-location in weblogic.profile, and import using the WeblogicImportMetaData.sh utility.
    But always put your JAR in place before importing it.

  • Error in Background Jobs After Client Copy

    Hi,
    We performed a system refresh from a production client (client no. 300) to a quality server (client no. 200). The post-processing steps were performed OK. Later, client 300 on the quality server was dropped. However, the scheduled user jobs in the background on client 200 still refer to the original client 300 and abort.
    Is there any way of modifying these jobs (there are lots of them) so that the client reference (300 in this case) is changed to the existing client 200?
    Points will be rewarded for good answer.
    Thanks and Regards,
    Subodh

    Hi
    The reason for the jobs getting cancelled could be that they are client-dependent jobs. In order to resolve the issue, please reschedule the jobs in client 200 and check whether the job fails.
    Please let me know if all the jobs are failing or if the issue is only with the client-dependent jobs.
    Hope this information helps. Please get back to me if further
    assistance is required. I would be glad to assist you.
    venkat

  • How to see DBMS_OUTPUT error when pl/sql proc runs in the EM job scheduler?

    I have a pl/sql package that uses DBMS_OUTPUT to handle exceptions. When I run the package in sql*plus:
    set serveroutput on;
    begin
    mypkg.myproc;
    end;
    and there is an exception, I see the output from my DBMS_OUTPUT.PUT_LINE.
    However, when I use the job scheduler in Enterprise Manager... if there is an error, I do not see the output from my exception handling code (i.e., the output from DBMS_OUTPUT.PUT_LINE).

    Using DBMS_OUTPUT to handle exceptions is generally not considered good form. At a minimum, you would want to log the exception to a table (or a file). If you are catching an exception that you cannot handle, you would really want to let that propagate back up to the caller so that Oracle knows that the job failed and can take appropriate action (i.e. retrying, not running dependent jobs, notifying a DBA, etc).
    Justin
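
    A minimal sketch of that logging pattern (table and procedure names are illustrative): the handler records the error and then re-raises, so the scheduler still sees the failure.

```sql
-- Illustrative error-log table for scheduled procedures.
CREATE TABLE error_log (
  logged_at  TIMESTAMP DEFAULT SYSTIMESTAMP,
  proc_name  VARCHAR2(128),
  error_text VARCHAR2(4000)
);

CREATE OR REPLACE PROCEDURE myproc AS
BEGIN
  -- ... the real work goes here ...
  NULL;
EXCEPTION
  WHEN OTHERS THEN
    -- log instead of DBMS_OUTPUT, so the message survives background runs
    INSERT INTO error_log (proc_name, error_text)
    VALUES ('MYPROC', SQLERRM);
    COMMIT;
    RAISE;  -- propagate so the job run is marked FAILED
END;
/
```

    Unlike DBMS_OUTPUT, the logged rows are queryable after the fact, and the re-raise lets the scheduler retry or skip dependent jobs as Justin describes.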

  • How to setup a job to change the backup ID

    Hi,
    We know that SAP HANA's backup will use the same backup file names once the configuration is done for backup.
    To avoid overwriting the previous backup files, we have the following options (note that we are using "file", not "backint"):
    1) rename the generated backup files after the backup is done;
    or
    2) move the generated backup files to another location.
    We want to achieve above goals by
    1) OS level cron jobs
    or
    2) an xsjob in xsengine.
    The challenge is that cron can only be started at a predefined time, while the backup job can take a variable amount of time to finish.
    Therefore we wonder if xsengine can provide any event-dependent job scheduling to help with this.
    Questions:
    1) Is there any way to detect the backup status automatically at OS level?
    2) Is there any 3rd-party scheduler for HANA?
    Thanks!

    Actually, you can assign unique names to backups.
    Suppose we have $DAY = 'Friday':
    hdbsql -n localhost -i [instance] -u [db user] -p [passwd] "BACKUP DATA USING FILE ('$DAY')"
    This will generate files like Friday_databackup_0_1, Friday_databackup_1_1, etc.
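
Building on that, a per-run timestamp prefix avoids weekday collisions as well. Only the prefix construction is shown runnable here; the hdbsql connection options remain placeholders:

```shell
#!/bin/sh
# Build a unique backup prefix such as 2024-05-31_1430.
PREFIX=$(date +%F_%H%M)
echo "$PREFIX"
# The actual backup call would then be (connection details are placeholders):
#   hdbsql -n localhost -i 00 -u SYSTEM -p '***' \
#     "BACKUP DATA USING FILE ('$PREFIX')"
```

With a unique prefix per run, no rename or move step is needed after the backup finishes, which sidesteps the cron-timing problem described above.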
