Parallel processing in background using Job scheduling...

(Note: Please read my question completely before redirecting me to the parallel processing links on SDN. I have gone through most of them.)
Hi ABAP Gurus,
I have read a bit about parallel processing so far, but I have a doubt.
I am working on a data transfer of around 5 million accounting records from legacy to R/3 using batch input recording.
If all these records reside in one flat file and I then process that flat file in my batch input program, I guess it will take days to finish. So my boss suggested
using parallel processing in SAP.
Now, from the SDN threads, it seems that we have to create a remote-enabled function module for it and so on.
But I have a different idea. I thought of dividing these 5 million records into 10 flat files instead of just one, and then running the custom BDC program as 10 instances which process the 10 flat files in background using job scheduling.
Can this also be called parallel processing?
Please let me know if this sounds wise to you.
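To make the idea concrete, roughly what I have in mind is a small driver program like the sketch below. This is only a sketch: the job names and file names are placeholders, I assume my BDC program takes the file via its DATASET parameter (as in the code I post further down), and the other bdcrecx1 selection parameters would simply keep their defaults.
report zbdc_job_dispatcher.

data: lv_jobname   type tbtcjob-jobname,
      lv_jobcount  type tbtcjob-jobcount,
      lv_file(132) type c,
      lv_index(2)  type n.

do 10 times.
  lv_index = sy-index.
  concatenate 'ZBDC_MAT_' lv_index into lv_jobname.
  concatenate '/tmp/matfile_' lv_index '.txt' into lv_file.

* open one background job per flat file
  call function 'JOB_OPEN'
    exporting
      jobname          = lv_jobname
    importing
      jobcount         = lv_jobcount
    exceptions
      cant_create_job  = 1
      invalid_job_data = 2
      jobname_missing  = 3
      others           = 4.
  check sy-subrc = 0.

* hand one flat file to one instance of the custom BDC program
  submit zmmi_material_master_test
         with dataset = lv_file
         via job lv_jobname number lv_jobcount
         and return.

* release the job for immediate execution
  call function 'JOB_CLOSE'
    exporting
      jobname              = lv_jobname
      jobcount             = lv_jobcount
      strtimmed            = 'X'
    exceptions
      cant_start_immediate = 1
      others               = 2.
enddo.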
Regards,
Tushar.

Thanks for your reply...
So what do you suggest: how can I use parallel processing to transfer the 5 million records, which are present in one flat file, using a custom BDC program?
I am posting my custom BDC code for the transfer below (this code creates material masters using BDC):
report ZMMI_MATERIAL_MASTER_TEST
      no standard page heading line-size 255.
include bdcrecx1.
parameters: dataset(132) lower case default
                             '/tmp/testmatfile.txt'.
***   DO NOT CHANGE - the generated data section - DO NOT CHANGE    ***
*
*   If it is necessary to change the data section use the rules:
*   1.) Each definition of a field consists of two lines
*   2.) The first line shows exactly the comment
*       '* data element: ' followed with the data element
*       which describes the field.
*       If you don't have a data element use the
*       comment without a data element name
*   3.) The second line shows the fieldname of the
*       structure, the fieldname must consist of
*       a fieldname and optional the character '_' and
*       three numbers and the field length in brackets
*   4.) Each field must be type C.
***   Generated data section with specific formatting - DO NOT CHANGE  ***
data: begin of record,
* data element: MATNR
        MATNR_001(018),
* data element: MBRSH
        MBRSH_002(001),
* data element: MTART
        MTART_003(004),
* data element: XFELD
        KZSEL_01_004(001),
* data element: MAKTX
        MAKTX_005(040),
* data element: MEINS
        MEINS_006(003),
* data element: MATKL
        MATKL_007(009),
* data element: BISMT
        BISMT_008(018),
* data element: EXTWG
        EXTWG_009(018),
* data element: SPART
        SPART_010(002),
* data element: PRODH_D
        PRDHA_011(018),
* data element: MTPOS_MARA
        MTPOS_MARA_012(004),
      end of record.
data: lw_record(200).
***   End generated data section ***
data: begin of t_data occurs 0,
      matnr(18),
      mbrsh(1),
      mtart(4),
      maktx(40),
      meins(3),
      matkl(9),
      bismt(18),
      extwg(18),
      spart(2),
      prdha(18),
      MTPOS_MARA(4),
    end of t_data.
start-of-selection.
perform open_dataset using dataset.
perform open_group.
do.
*read dataset dataset into record.
read dataset dataset into lw_record.
if sy-subrc eq 0.
clear t_data.
split lw_record
   at ','
into t_data-matnr
      t_data-mbrsh
      t_data-mtart
      t_data-maktx
      t_data-meins
      t_data-matkl
      t_data-bismt
      t_data-extwg
      t_data-spart
      t_data-prdha
      t_data-MTPOS_MARA.
append t_data.
else.
exit.
endif.
enddo.
loop at t_data.
*if sy-subrc <> 0. exit. endif.
perform bdc_dynpro      using 'SAPLMGMM' '0060'.
perform bdc_field       using 'BDC_CURSOR'
                             'RMMG1-MATNR'.
perform bdc_field       using 'BDC_OKCODE'
                             '=AUSW'.
perform bdc_field       using 'RMMG1-MATNR'
                             t_data-MATNR.
perform bdc_field       using 'RMMG1-MBRSH'
                             t_data-MBRSH.
perform bdc_field       using 'RMMG1-MTART'
                             t_data-MTART.
perform bdc_dynpro      using 'SAPLMGMM' '0070'.
perform bdc_field       using 'BDC_CURSOR'
                             'MSICHTAUSW-DYTXT(01)'.
perform bdc_field       using 'BDC_OKCODE'
                             '=ENTR'.
perform bdc_field       using 'MSICHTAUSW-KZSEL(01)'
                             'X'.
perform bdc_dynpro      using 'SAPLMGMM' '4004'.
perform bdc_field       using 'BDC_OKCODE'
                             '/00'.
perform bdc_field       using 'MAKT-MAKTX'
                             t_data-MAKTX.
perform bdc_field       using 'BDC_CURSOR'
                             'MARA-PRDHA'.
perform bdc_field       using 'MARA-MEINS'
                             t_data-MEINS.
perform bdc_field       using 'MARA-MATKL'
                             t_data-MATKL.
perform bdc_field       using 'MARA-BISMT'
                             t_data-BISMT.
perform bdc_field       using 'MARA-EXTWG'
                             t_data-EXTWG.
perform bdc_field       using 'MARA-SPART'
                             t_data-SPART.
perform bdc_field       using 'MARA-PRDHA'
                             t_data-PRDHA.
perform bdc_field       using 'MARA-MTPOS_MARA'
                             t_data-MTPOS_MARA.
perform bdc_dynpro      using 'SAPLSPO1' '0300'.
perform bdc_field       using 'BDC_OKCODE'
                             '=YES'.
perform bdc_transaction using 'MM01'.
endloop.
*enddo.
perform close_group.
perform close_dataset using dataset.
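
For comparison, the RFC-based approach mentioned in the SDN threads would move the posting logic into a remote-enabled function module and dispatch packets of records to it with asynchronous RFC. The following is only a rough sketch that continues from the t_data table above: the function module name ZBDC_POST_MATERIALS, the RFC server group BDC_GROUP and the packet size are made up, and the FM itself would contain the bdc_dynpro/bdc_field/bdc_transaction logic shown above. Error handling (retry on RESOURCE_FAILURE) is omitted here; the full sample further down this thread shows it.
data: lt_packet  like t_data occurs 0 with header line,
      lv_lines   type i,
      lv_total   type i,
      lv_row     type i,
      lv_task(8) type c,
      gv_snd     type i,
      gv_rcv     type i.

describe table t_data lines lv_total.

loop at t_data.
  lv_row = sy-tabix.
  append t_data to lt_packet.
  describe table lt_packet lines lv_lines.
  if lv_lines >= 5000 or lv_row = lv_total.
    gv_snd = gv_snd + 1.
    lv_task = gv_snd.
    call function 'ZBDC_POST_MATERIALS'     "made-up RFC-enabled FM
      starting new task lv_task
      destination in group 'BDC_GROUP'      "made-up RFC server group (see RZ12)
      performing packet_done on end of task
      tables
        it_data               = lt_packet
      exceptions
        communication_failure = 1
        system_failure        = 2
        resource_failure      = 3.
    if sy-subrc = 0.
      clear lt_packet. refresh lt_packet.
    endif.
  endif.
endloop.

* wait until every dispatched packet has reported back
wait until gv_rcv >= gv_snd.

form packet_done using p_task.
  receive results from function 'ZBDC_POST_MATERIALS'
    exceptions others = 1.
  gv_rcv = gv_rcv + 1.
endform.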

Similar Messages

  • Need help with parallel process in background; not able to call FM in bgnd

    Hello,
    I have been trying for 2 days to solve the issue of parallel processing in background without using FPP.
    I want to call a function module or class method in a new task, but have it processed by a background work process and not a dialog one.
    I have searched many websites, but everyone has suggested 'call function in background task'. The fact is that the processing of the function happens in a dialog process even in this case.
    I want to loop at a table and call the FM or class method inside each loop pass.
    Kindly suggest how I can call a function or class method in a new task on every call and have it processed in the background.
    thanks
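
    For what it's worth, a function module can never run directly in a background work process; only a job step (i.e. a report) can. One workaround is therefore to open one small job per loop pass and SUBMIT a wrapper report that itself calls the FM or method. The sketch below assumes a made-up wrapper report ZCALL_MY_FM with a selection parameter P_KEY; the key table and field names are placeholders too.
    types: begin of ty_key,
             id(10) type c,
           end of ty_key.

    data: lt_keys     type standard table of ty_key,
          ls_key      type ty_key,
          lv_jobname  type tbtcjob-jobname,
          lv_jobcount type tbtcjob-jobcount.

    loop at lt_keys into ls_key.
      concatenate 'ZTASK_' ls_key-id into lv_jobname.

      call function 'JOB_OPEN'
        exporting
          jobname  = lv_jobname
        importing
          jobcount = lv_jobcount
        exceptions
          others   = 1.
      check sy-subrc = 0.

    * ZCALL_MY_FM is a tiny wrapper report that calls the FM / method
    * for the key passed on its selection screen
      submit zcall_my_fm
             with p_key = ls_key-id
             via job lv_jobname number lv_jobcount
             and return.

      call function 'JOB_CLOSE'
        exporting
          jobname   = lv_jobname
          jobcount  = lv_jobcount
          strtimmed = 'X'
        exceptions
          others    = 1.
    endloop.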

    Balaji,
    Is the name of the button between single or double quotes?
    Regards,
    Dan
    Blog: http://DanielMcGhan.us/
    Work: http://SkillBuilders.com/

  • Error in handling Print Params In Parallel Processing of background jobs

    Hi Friends,
    My requirement is to optimize the performance of the standard program RELEABL1, which takes a long time to complete when scheduled in background. For that I have created a Z program which splits the input data and runs the jobs in parallel. I am using the SUBMIT statement and the JOB_OPEN and JOB_CLOSE function modules to schedule the standard program RELEABL1 in background with the input from my Z program. The problem is that there is a push button "Print Parameters" next to the execute button on the selection screen of the standard program RELEABL1, in which the printer details have to be maintained. Whenever I schedule the job in background it throws an error stating "Define the Print Parameter First". I have tried all possible combinations but am not able to handle this through my Z program; otherwise my program works fine. Can someone please guide me on how to handle these print parameters, either through SUBMIT or whatever other way is possible?
    Thanks & Regards,
    Balaji.K
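
    One pattern that usually works for this (just a sketch; the output device, variant name and job name below are placeholders) is to build the print parameters with GET_PRINT_PARAMETERS in no-dialog mode and hand them to SUBMIT ... TO SAP-SPOOL, so the background step never has to ask for them:
    data: ls_print    type pri_params,
          lv_valid    type c,
          lv_jobname  type tbtcjob-jobname value 'RELEABL1_SPLIT',
          lv_jobcount type tbtcjob-jobcount.

    * build print parameters without showing the print dialog
    call function 'GET_PRINT_PARAMETERS'
      exporting
        destination    = 'LP01'            "placeholder output device
        no_dialog      = 'X'
      importing
        out_parameters = ls_print
        valid          = lv_valid.
    check lv_valid = 'X'.

    call function 'JOB_OPEN'
      exporting
        jobname  = lv_jobname
      importing
        jobcount = lv_jobcount
      exceptions
        others   = 1.
    check sy-subrc = 0.

    * pass the print parameters explicitly so the job step has them
    submit releabl1
           using selection-set 'MY_VARIANT'   "placeholder variant per data split
           to sap-spool spool parameters ls_print
           without spool dynpro
           via job lv_jobname number lv_jobcount
           and return.

    call function 'JOB_CLOSE'
      exporting
        jobname   = lv_jobname
        jobcount  = lv_jobcount
        strtimmed = 'X'
      exceptions
        others    = 1.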

    Hi Balaji,
    We have the same performance problem: 8 processes in parallel, and it still suffers from bad performance. How many subscribers are there on the system, and how many processes do you use?
    Best Regards,
    Ugur Uygan

  • Parallel process define for batch job

    Hi,
    I would like to run a batch job with a few processes running in parallel. May I know where I can define this? Is there a T-code?
    Regards
    Lauran

    Hi Lauran,
    First of all, there is no transaction code for this as such.
    The report that needs to be run in background must itself support parallel processing; the code has to be written accordingly.
    Check this link:
    http://help.sap.com/saphelp_nw2004s/helpdata/en/fa/096e92543b11d1898e0000e8322d00/content.htm
    It gives details of the function modules needed for this purpose.
    After this you need to create a variant for the report and schedule it to run in background, either directly from SE38 or by explicitly creating a job in SM36.
    A standard report that has the parallel processing feature available is RBDAPP01.
    Also check transactions like BD18; they also make use of parallel processing.
    Regards.
    Ruchit.

  • Parallel Processing - In conjunction with TWS scheduler.

    We have a .Bat file that uses upshell.exe to execute a custom script we've created.
    This custom script looks for files in the inbox and moves them to the relevant OpenBatches/OpenBatchesML folder. Once the files have been moved, it then runs a parallel process up-to-load for all data. However, what we're finding is that because we're running this in parallel, the .bat script completes even though FDM is still processing in the background. This is evident, as the TBATCH table shows the batch as not 100% complete. This unfortunately causes us a problem, as the scheduler (TWS) thinks all processing has finished and subsequent downstream processing kicks off. As I understand it, this wouldn't be an issue if we'd used serial processing rather than parallel.
    We've chosen parallel because of the volumes and the limitation of our batch window. Am I correct in assuming that, in principle, parallel will process data faster than serial? I am currently running specific performance tests on my data to prove this.
    The real issue is that we somehow need to ensure TWS doesn't kick off downstream processes and I need to somehow have FDM create a log file for batches that complete successfully. I'm assuming it should be done within an Event script but I'm not sure which as this is quite new to me.
    Has anyone come across this issue themselves? If so, I'm looking for some guidance/examples how you've managed to get round the problem.
    Thanks in advance.

    Hi,
    Take a look at the following 2 articles. Using the concepts outlined in them you should be able to achieve what you are trying to do.
    http://odiexperts.com/interface-parallel-execution-a-new-solution
    http://odiexperts.com/processing-multiple-interface-through-single-package

  • Parallel processing in background

    Hi All,
    I am processing 1 million records in background, which takes approximately 10 hours. I want to reduce the time to less than 1 hour and tried using parallel processing. But the tasks run in dialog work processes and give ABAP short dumps due to time-outs.
    Are there any other solutions with which I can reduce the total processing time?
    Please note that I cannot split the data. I am getting 1 million records from a select query, and after processing all those records in SAP I send them to XI, and XI posts them to the legacy system.
    Please note that all other performance tuning has been done.
    Thanks,
    Rajesh.

    Hi Rajesh,
    Refer to the sample code for parallel processing below.
    By doing this your processing time will be highly optimized.
    Go through the description given in the code at each level.
    This code checks the available work processes and assigns data in packets for processing. This way you save a lot of time, especially when the data runs into millions.
    Hope it helps.
    REPORT PARAJOB.
    * Data declarations
    DATA: GROUP LIKE RZLLITAB-CLASSNAME VALUE ' ',
    "Parallel processing group.
    "SPACE = group default (all
    "servers)
    WP_AVAILABLE TYPE I, "Number of dialog work processes
    "available for parallel processing
    "(free work processes)
    WP_TOTAL TYPE I, "Total number of dialog work
    "processes in the group
    MSG(80) VALUE SPACE, "Container for error message in
    "case of remote RFC exception.
    INFO LIKE RFCSI, C, "Message text
    JOBS TYPE I VALUE 10, "Number of parallel jobs
    SND_JOBS TYPE I VALUE 1, "Work packets sent for processing
    RCV_JOBS TYPE I VALUE 1, "Work packet replies received
    EXCP_FLAG(1) TYPE C, "Number of RESOURCE_FAILUREs
    TASKNAME(4) TYPE N VALUE '0001', "Task name (name of
    "parallel processing work unit)
    BEGIN OF TASKLIST OCCURS 10, "Task administration
    TASKNAME(4) TYPE C,
    RFCDEST LIKE RFCSI-RFCDEST,
    RFCHOST LIKE RFCSI-RFCHOST,
    END OF TASKLIST.
    * Optional call to SPBT_INITIALIZE to check the
    * group in which parallel processing is to take place.
    * Could be used to optimize sizing of work packets
    * (work / WP_AVAILABLE).
    CALL FUNCTION 'SPBT_INITIALIZE'
    EXPORTING
    GROUP_NAME = GROUP
    "Name of group to check
    IMPORTING
    MAX_PBT_WPS = WP_TOTAL
    "Total number of dialog work
    "processes available in group
    "for parallel processing
    FREE_PBT_WPS = WP_AVAILABLE
    "Number of work processes
    "available in group for
    "parallel processing at this
    "moment
    EXCEPTIONS
    INVALID_GROUP_NAME = 1
    "Incorrect group name; RFC
    "group not defined. See
    "transaction RZ12
    INTERNAL_ERROR = 2
    "R/3 System error; see the
    "system log (transaction
    "SM21) for diagnostic info
    PBT_ENV_ALREADY_INITIALIZED = 3
    "Function module may be
    "called only once; is called
    "automatically by R/3 if you
    "do not call before starting
    "parallel processing
    CURRENTLY_NO_RESOURCES_AVAIL = 4
    "No dialog work processes
    "in the group are available;
    "they are busy or server load
    "is too high
    NO_PBT_RESOURCES_FOUND = 5
    "No servers in the group
    "met the criteria of >
    "two work processes
    "defined.
    CANT_INIT_DIFFERENT_PBT_GROUPS = 6
    "You have already initialized
    "one group and have now tried
    "initialize a different group.
    OTHERS = 7.
    CASE SY-SUBRC.
    WHEN 0.
    "Everything’s ok. Optionally set up for optimizing size of
    "work packets.
    WHEN 1.
    "Non-existent group name. Stop report.
    MESSAGE E836. "Group not defined.
    WHEN 2.
    "System error. Stop and check system log for error
    "analysis.
    WHEN 3.
    "Programming error. Stop and correct program.
    MESSAGE E833. "PBT environment was already initialized.
    WHEN 4.
    "No resources: this may be a temporary problem. You
    "may wish to pause briefly and repeat the call. Otherwise
    "check your RFC group administration: Group defined
    "in accordance with your requirements?
    MESSAGE E837. "All servers currently busy.
    WHEN 5.
    "Check your servers, network, operation modes.
    WHEN 6.
    "You have tried to initialize a second, different group.
    ENDCASE.
    * Do parallel processing. Use CALL FUNCTION STARTING NEW TASK
    * DESTINATION IN GROUP to call the function module that does the
    * work. Make a call for each record that is to be processed, or
    * divide the records into work packets. In each case, provide the
    * set of records as an internal table in the CALL FUNCTION
    * keyword (EXPORT, TABLES arguments).
    DO.
    CALL FUNCTION 'RFC_SYSTEM_INFO' "Function module to perform
    "in parallel
    STARTING NEW TASK TASKNAME "Name for identifying this
    "RFC call
    DESTINATION IN GROUP group "Name of group of servers to
    "use for parallel processing.
    "Enter group name exactly
    "as it appears in transaction
    "RZ12 (all caps). You may
    "use only one group name in a
    "particular ABAP program.
    PERFORMING RETURN_INFO ON END OF TASK
    "This form is called when the
    "RFC call completes. It can
    "collect IMPORT and TABLES
    "parameters from the called
    "function with RECEIVE.
    EXCEPTIONS
    COMMUNICATION_FAILURE = 1 MESSAGE msg
    "Destination server not
    "reached or communication
    "interrupted. MESSAGE msg
    "captures any message
    "returned with this
    "exception (E or A messages
    "from the called FM, for
    "example. After exception
    "1 or 2, instead of aborting
    "your program, you could use
    "SPBT_GET_PP_DESTINATION and
    "SPBT_DO_NOT_USE_SERVER to
    "exclude this server from
    "further parallel processing.
    "You could then re-try this
    "call using a different
    "server.
    SYSTEM_FAILURE = 2 MESSAGE msg
    "Program or other internal
    "R/3 error. MESSAGE msg
    "captures any message
    "returned with this
    "exception.
    RESOURCE_FAILURE = 3. "No work processes are
    "currently available. Your
    "program MUST handle this
    "exception.
    * YOUR_EXCEPTIONS = X. "Placeholder: add exceptions generated by
    "the called function module
    "here. Exceptions are
    "returned to you and you can
    "respond to them here.
    CASE SY-SUBRC.
    WHEN 0.
    "Administration of asynchronous RFC tasks
    "Save name of task...
    TASKLIST-TASKNAME = TASKNAME.
    "... and get server that is performing RFC call.
    CALL FUNCTION 'SPBT_GET_PP_DESTINATION'
    EXPORTING
    RFCDEST = TASKLIST-RFCDEST
    EXCEPTIONS
    OTHERS = 1.
    APPEND TASKLIST.
    WRITE: / 'Started task: ', TASKLIST-TASKNAME COLOR 2.
    TASKNAME = TASKNAME + 1.
    SND_JOBS = SND_JOBS + 1.
    "Mechanism for determining when to leave the loop. Here, a
    "simple counter of the number of parallel processing tasks.
    "In production use, you would end the loop when you have
    "finished dispatching the data that is to be processed.
    JOBS = JOBS - 1. "Number of existing jobs
    IF JOBS = 0.
    EXIT. "Job processing finished
    ENDIF.
    WHEN 1 OR 2.
    "Handle communication and system failure. Your program must
    "catch these exceptions and arrange for a recoverable
    "termination of the background processing job.
    "Recommendation: Log the data that has been processed when
    "an RFC task is started and when it returns, so that the
    "job can be restarted with unprocessed data.
    WRITE msg.
    "Remove server from further consideration for
    "parallel processing tasks in this program.
    "Get name of server just called...
    CALL FUNCTION 'SPBT_GET_PP_DESTINATION'
    EXPORTING
    RFCDEST = TASKLIST-RFCDEST
    EXCEPTIONS
    OTHERS = 1.
    "Then remove from list of available servers.
    CALL FUNCTION 'SPBT_DO_NOT_USE_SERVER'
    IMPORTING
    SERVERNAME = TASKLIST-RFCDEST
    EXCEPTIONS
    INVALID_SERVER_NAME = 1
    NO_MORE_RESOURCES_LEFT = 2
    "No servers left in group.
    PBT_ENV_NOT_INITIALIZED_YET = 3
    OTHERS = 4.
    WHEN 3.
    "No resources (dialog work processes) available at
    "present. You need to handle this exception, waiting
    "and repeating the CALL FUNCTION until processing
    "can continue or it is apparent that there is a
    "problem that prevents continuation.
    MESSAGE I837. "All servers currently busy.
    "Wait for replies to asynchronous RFC calls. Each
    "reply should make a dialog work process available again.
    IF EXCP_FLAG = SPACE.
    EXCP_FLAG = 'X'.
    "First attempt at RESOURCE_FAILURE handling. Wait
    "until all RFC calls have returned or up to 1 second.
    "Then repeat CALL FUNCTION.
    WAIT UNTIL RCV_JOBS >= SND_JOBS UP TO '1' SECONDS.
    ELSE.
    "Second attempt at RESOURCE_FAILURE handling
    WAIT UNTIL RCV_JOBS >= SND_JOBS UP TO '5' SECONDS.
    "SY-SUBRC 0 from WAIT shows that replies have returned.
    "The resource problem was therefore probably temporary
    "and due to the workload. A non-zero RC suggests that
    "no RFC calls have been completed, and there may be
    "problems.
    IF SY-SUBRC = 0.
    CLEAR EXCP_FLAG.
    ELSE. "No replies
    "Endless loop handling
    ENDIF.
    ENDIF.
    ENDCASE.
    ENDDO.
    * Wait for end of job: replies from all RFC tasks.
    * Receive remaining asynchronous replies
    WAIT UNTIL RCV_JOBS >= SND_JOBS.
    LOOP AT TASKLIST.
    WRITE:/ 'Received task:', TASKLIST-TASKNAME COLOR 1,
    30 'Destination: ', TASKLIST-RFCDEST COLOR 1.
    ENDLOOP.
    * This routine is triggered when an RFC call completes and
    * returns. The routine uses RECEIVE to collect IMPORT and TABLE
    * data from the RFC function module.
    * Note that the WRITE keyword is not supported in asynchronous
    * RFC. If you need to generate a list, then your RFC function
    * module should return the list data in an internal table. You
    * can then collect this data and output the list at the conclusion
    * of processing.
    FORM RETURN_INFO USING TASKNAME.
    DATA: INFO_RFCDEST LIKE TASKLIST-RFCDEST.
    RECEIVE RESULTS FROM FUNCTION 'RFC_SYSTEM_INFO'
    IMPORTING RFCSI_EXPORT = INFO
    EXCEPTIONS
    COMMUNICATION_FAILURE = 1
    SYSTEM_FAILURE = 2.
    RCV_JOBS = RCV_JOBS + 1. "Receiving data
    IF SY-SUBRC NE 0.
    * Handle communication and system failure
    ELSE.
    READ TABLE TASKLIST WITH KEY TASKNAME = TASKNAME.
    IF SY-SUBRC = 0. "Register data
    TASKLIST-RFCHOST = INFO-RFCHOST.
    MODIFY TASKLIST INDEX SY-TABIX.
    ENDIF.
    ENDIF.
    ENDFORM.
    Reward points if that helps.
    Manish
    Message was edited by:
            Manish Kumar

  • Reset SIP trunk using job scheduler

    We manage multiple CUCM clusters using 8.6/9.1 for clients and occasionally we will find the need to reset the main SIP trunks for PSTN access, which we have to do after hours due to these clusters being in production.  We have heard tell that it is possible to schedule a trunk reset using the job scheduler but we are unable to find how this is done.  Does anyone have any experience with this?

    Part of our confusion is that when you save on the trunk configuration, we see the pop up below that mentions the Job Scheduler.

  • Having trouble getting a job to run using job scheduler

    I am using Oracle 11g. I run a sql script that schedules a job:
    exec sys.dbms_scheduler.create_job( job_name => '"SYSTEM"."UCR"', job_type => 'EXECUTABLE', job_action => 'D:\UserCountReport\bin\ucr.bat', repeat_interval => 'FREQ=HOURLY;BYHOUR=3;BYMINUTE=0;BYSECOND=0',start_date => to_timestamp_tz('2011-07-21 US/Eastern', 'YYYY-MM-DD TZR'), job_class => 'DEFAULT_JOB_CLASS', auto_drop => FALSE,enabled => FALSE);
    exec sys.dbms_scheduler.set_attribute( name => '"SYSTEM"."UCR"',  attribute => 'job_weight',  value => 1);
    exec sys.dbms_scheduler.enable( '"SYSTEM"."UCR"' );
    commit;
    exit;
    When I am connected to the database, I run these commands:
    SQL> SELECT JOB_NAME, STATE, job_action, repeat_interval FROM DBA_SCHEDULER_JOBS
    where job_name='UCR';
    JOB_NAME
    STATE
    JOB_ACTION
    REPEAT_INTERVAL
    UCR
    SCHEDULED
    C:\UserCountReport\bin\ucr.bat
    FREQ=HOURLY;BYHOUR=9;BYMINUTE=0;BYSECOND=0
    SQL> SELECT JOB_NAME, status FROM DBA_SCHEDULER_JOB_LOG where job_name = 'UCR';
    JOB_NAME
    STATUS
    UCR
    SUCCEEDED
    UCR
    SUCCEEDED
    UCR
    SUCCEEDED
    However, I don't get any signs that the job has run even though Oracle says all the runs have succeeded. The batch file that the job is supposed to execute creates a file and writes stuff to it. However, the file is not written. When I run the batch file from the command prompt, it executes correctly.
    I have tried following instructions here:
    Answers to "Why are my jobs not running?"
    However, all three of these commands return errors:
    SQL> select value from v$parameter where name='job_queue_processes';
    select value from v$parameter where name='job_queue_processes'
    ERROR at line 1:
    ORA-00942: table or view does not exist
    SQL> select count(*) from dba_scheduler_running_jobs;
    select count(*) from dba_scheduler_running_jobs
    ERROR at line 1:
    ORA-00942: table or view does not exist
    SQL> select count(*) from dba_jobs_running;
    select count(*) from dba_jobs_running
    ERROR at line 1:
    ORA-00942: table or view does not exist
    I've tried various other commands from that list and I get errors as well.
    I'm not quite sure where to go from here to troubleshoot this problem. Any help would be appreciated. Thanks.
    Edited by: 874375 on Jul 22, 2011 9:14 AM
    Edited by: 874375 on Jul 22, 2011 9:18 AM

    So I changed my script to this:
    exec sys.dbms_scheduler.create_job( job_name => '"SYSTEM"."UCR"', job_type => 'EXECUTABLE', job_action => '%windir%\system32\cmd.exe /C D:\UserCountReport\bin\ucr.bat', ....
    I now have this:
    SQL> SELECT JOB_NAME, status FROM DBA_SCHEDULER_JOB_LOG where job_name='BOXTONE_
    MONTHLY_UCR';
    JOB_NAME
    STATUS
    BOXTONE_MONTHLY_UCR
    FAILED
    BOXTONE_MONTHLY_UCR
    SUCCEEDED
    BOXTONE_MONTHLY_UCR
    SUCCEEDED
    JOB_NAME
    STATUS
    BOXTONE_MONTHLY_UCR
    SUCCEEDED
    Is there a place with log files that will tell me why the job failed? If so, where are the log files located?

  • Which process chain type used when scheduling from 3rd party

    Do we use the Local Process Chain or the Meta Chain type when triggering a PC from third-party scheduling tools like Redwood?
    Any inputs...
    Venkat.

    Hi Venkat,
    Select "Start Using Meta Chain or API" for the start process when running a process chain from a third party scheduling software or when calling a process chain from within another process chain. 
    I also believe some third-party scheduling software will work if you set the start process to "Direct Scheduling" and set the start time to immediate, but I would recommend just making it a Meta Chain for tracking purposes.
    Thanks,
    Damon Fahey
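
    If the third-party scheduler can only start ABAP reports, a common trick is to hand it a tiny wrapper report that fires the chain through the process chain API. The sketch below uses a placeholder chain ID and the function module RSPC_API_CHAIN_START; please verify the exact interface in SE37 on your release before relying on it.
    REPORT zstart_chain_from_scheduler.

    PARAMETERS: p_chain TYPE rspc_chain DEFAULT 'ZDP_CHAIN_01'.  "placeholder chain ID

    DATA: lv_logid TYPE rspc_logid.

    * start the chain the same way the "Start Using Meta Chain or API" option expects
    CALL FUNCTION 'RSPC_API_CHAIN_START'
      EXPORTING
        i_chain = p_chain
      IMPORTING
        e_logid = lv_logid
      EXCEPTIONS
        OTHERS  = 1.

    IF sy-subrc = 0.
      WRITE: / 'Chain started, log ID:', lv_logid.
    ELSE.
      WRITE: / 'Chain could not be started, check the job log.'.
    ENDIF.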

  • Dynamic parallel processing using a multiline container element

    Hi All ,
      I just wanted to know how things work when we use "Dynamic parallel processing" for a decision step. I came across a situation wherein a rule gets the approving user(s) and the work item should be sent to all those users. After getting an approval from all the users, the workflow should proceed; otherwise it should terminate.
       I was just wondering whether "Dynamic parallel processing" will do this job or not. I had also thought of using forks, but as the number of approvers is decided at runtime, I don't think that is possible.
       Any inputs ?
      Edit : We are working on CRM 5.0
    Thanks ,
    Shounak M.
    Message was edited by: Shounak  M

    Hi Shounak,
    Just do as Mike says:
    use the multiline element for a subflow.
    The subflow consists of your user decision; if someone rejects it, remember it (this could be done by updating a small table from a method, raising an event, or, as Mike suggested, appending to a table).
    In the top flow, after the multiline element step, determine whether someone rejected it (wait for the event, or read the table).
    Kind regards, Rob Dielemans
    Message was edited by: Rob Dielemans

  • Troubleshooting the lockwaits for parallel processing jobs

    Hi Experts,
    I am facing difficulty tracing the job which is interfering with a business critical job.
    The job in discussion is using parallel processing and the other jobs running at the time are also using the same.
    If I see a lockwait for some process, which may be a dialog or update process spawned by these jobs, I have difficulty knowing which one is holding the lock and which one is waiting.
    So, is there any way we could identify the dialog or update processes which are used for parallel processing by a particular background job?
    Please help me as this is business critical and we have a high visibility in this area.
    Any suggestions will be appreciated.......
    Regards
    Raj

    Hi Raj,
    First of all, please indicate if you are using SAP Business One.  If yes, then you need to check those locks under SQL Management Studio.
    Thanks,
    Gordon

  • Scheduling BPEL Process in 11g using native SOA Suite functionatliy

    I have spent a significant amount of time reviewing documentation and the forums and have been unable to find a definitive answer on the ability to schedule a BPEL process in SOA Suite 11g.
    A similar question was asked here with no response:
    BPEL(SOA11G) and Quartz
    In SOA Suite 10.1.3.x it is discussed frequently:
    http://www.oracle.com/technology/tech/soa/soa-suite-best-practices/soa_best_practices_1013x_drop1.pdf
    Re: BPEL Adapters & Scheduling
    It appeared to be an actual feature in one of the technology previews:
    http://biemond.blogspot.com/2008/01/scheduling-processes-in-soa-suite-11g.html
    Can anyone provide a definitive answer on whether a BPEL process can be scheduled using functionality native to SOA Suite 11g. I understand there are external workarounds such as calling the BPEL process from DBMS_JOB, using an external scheduling framework etc. I am interested in solutions which are native to the SOA Suite itself, if there is such a solution.
    If there is no definitive answer I will open a support case and post the results here for the benefit of the group.

    Hi-
    You can refer to the below link.
    http://darwin-it.blogspot.com/2008/01/how-to-create-bpel-job-scheduler.html
    I have not personally tried but I think it should work.
    Please let us know how it goes.
    Thanks,
    Dibya

  • Job Schedule Sun thru Thurs only via TVARV using variant

    Can someone tell me how to use TVARV to set up my job to run only on Sunday through Thursday using my variant?

    Hi,
    Please check the below links.
    http://help.sap.com/saphelp_nw04/helpdata/en/c0/9803aae58611d194cc00a0c94260a5/content.htm
    Some sample code from the SearchSAP site:
    Have you ever wondered where your TVARV variables are being used in your batch jobs? How do you know if they change, and what impact that may have on your interfaces? The program below will provide a listing for you.
    The program uses Job Count (job id) as the key versus Job Name as job names can be redundant. The code was written in R/3 v.4.6C, but should work in 3.x versions as well.
    Code
    * PROGRAM: ZCAUS_EXTRACT_REPORT_VARIABLES
    * AUTHOR:  DYLAN HACK
    * CREATE DATE: 10/09/2002
    * PURPOSE: This program was developed to extract all jobs based on the
    *          "Job Status" selection criteria and find the relevant TVARV
    *          variable entries. Then the report can be downloaded and
    *          sorted on selection variables. Ultimate purpose is to know
    *          what jobs are affected by the individual TVARV variables.
    *          If a job exists but does not have a corresponding entry/use
    *          of a TVARV variable, it will not get listed.
    * JOB STATUS LEGEND:
    * R: job step running.
    * Y: job step ready (eligible to run, waiting for a work process).
    * P: job step scheduled.
    * S: job step released (eligible to run when the start condition of the
    *    job is fulfilled).
    * A: job step aborted.
    * F: job step successfully finished.
    * Z: system upgrade in progress, only upgrade-related jobs are allowed
    *    to run. Jobs and job steps with this status are ignored by the
    *    scheduler.
    * X: unknown status detected.
    REPORT ZCAUS_EXTRACT_REPORT_VARIABLES .
    tables: tbtcp, tbtco.
    data: var like RSVARVAR occurs 0 with header line,
          begin of report_tmp occurs 0,
            jobname   like tbtcp-jobname,
            jobcount  like tbtcp-jobcount,
            stepcount type tbtcp-stepcount,
            progname  type syrepid,
            variant   type syslset,
          end of report_tmp,
          report like report_tmp occurs 0 with header line,
          begin of listing occurs 0,
            jobname  type tbtcp-jobname,
            jobcount  like tbtcp-jobcount,
            stepcount type tbtcp-stepcount,
            progname  type syrepid,
            variant   type syslset,
            variable type rsvarvar-variable,
          end of listing.
    selection-screen begin of block a1 with frame title text-001.
    selection-screen begin of block b1 with frame.
      select-options: s_status for tbtco-status.
    selection-screen end of block b1.
    selection-screen begin of block b2 with frame.
      selection-screen comment /5(75) com1.
      selection-screen comment /5(75) com2.
      selection-screen comment /5(75) com3.
      selection-screen comment /5(75) com4.
      selection-screen comment /5(75) com5.
      selection-screen comment /5(75) com6.
      selection-screen comment /5(75) com7.
      selection-screen comment /5(75) com8.
    selection-screen end of block b2.
    selection-screen end of block a1.
    initialization.
    move:
    'R: Job Running' to com1,
    'Y: Job Ready (Eligible to run, waiting for process)' to com2,
    'P: Job Scheduled' to com3,
    'S: Job Released (Eligible to run when start condition is triggered)'
    to
    com4,
    'A: Job Aborted' to com5,
    'F: Job Successfully Finished' to com6,
    'Z: Upgrade in process, job being ignored' to com7,
    'X: Unknown Status Detected' to com8.
    start-of-selection.
    * get all jobs in released status
      select jobcount from tbtco into report_tmp-jobcount
        where status in s_status.
        append report_tmp.
      endselect.
    * get all related variants for those jobs that meet the where clause.
      loop at report_tmp.
        select jobname jobcount stepcount progname variant
        into  corresponding fields of report
        from tbtcp where jobcount = report_tmp-jobcount and
                         variant not like '&%' and
                         variant <> space.
        append report.
        endselect.
      endloop.
    * sort and delete duplicates (if any)
    sort report by jobname jobcount stepcount progname variant.
    delete adjacent duplicates from report comparing all fields.
    * get TVARV selection variables
    loop at report.
    clear var. refresh var.
      call FUNCTION 'RS_VARIANT_VARIABLES'
        EXPORTING
           PROGRAM = report-progname
           VARIANT = report-variant
        TABLES
         VAR = var
        EXCEPTIONS
             VARIANT_NOT_EXISTENT = 1.
    * move the data to a listing table
      if var-variable <> space.
       loop at var.
        move: report-jobname   to listing-jobname,
              report-jobcount  to listing-jobcount,
              report-stepcount to listing-stepcount,
              report-progname  to listing-progname,
              report-variant   to listing-variant,
              var-variable     to listing-variable.
        append listing.
       endloop.
      endif.
    endloop.
    * sort and delete dups from listing
    sort listing by jobname jobcount stepcount progname variant variable.
    delete adjacent duplicates from listing comparing all fields.
    data: count type i. loop at listing. count = count + 1. endloop.
    * write out the listing
    write: 'Count', count. skip.
    write: 2   'Job Name',
           36  'Job #',
           48  'Step',
           54  'Program name',
           95  'Variant',
           110 'Variable'.
    loop at listing.
        write: / listing-jobname,
                 listing-jobcount,
                 listing-stepcount,
                 listing-progname,
                 listing-variant,
                 listing-variable.
    endloop.
    Thanks,
    Ramakrishna
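
    If the requirement is literally only Sunday through Thursday, another low-tech option is to schedule the job daily and let the program leave immediately on Friday and Saturday, with the allowed weekdays kept in a TVARV parameter that the program reads. The sketch below is only an illustration: the variable name Z_RUN_DAYS is made up, and on newer releases the table is TVARVC rather than TVARV.
    TABLES: tvarv.

    DATA: lv_day TYPE scal-indicator.   "1 = Monday ... 7 = Sunday

    START-OF-SELECTION.

    * determine today's weekday
      CALL FUNCTION 'DATE_COMPUTE_DAY'
        EXPORTING
          date = sy-datum
        IMPORTING
          day  = lv_day.

    * allowed weekdays maintained as one parameter value, e.g. '12347'
      SELECT SINGLE * FROM tvarv
             WHERE name = 'Z_RUN_DAYS'
               AND type = 'P'.

      IF sy-subrc = 0 AND tvarv-low NA lv_day.
        "today is not in the allowed list - leave without processing
        WRITE: / 'Not an allowed run day, nothing done.'.
        EXIT.
      ENDIF.

    * ... actual job logic follows here ...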

  • Error while using RSDRI_INFOPROV_READ : parallel processing error

    Hi
    I am also facing a parallel processing error while using the function module RSDRI_INFOPROV_READ in a transformation.
    When only one data package is there, the load happens without any issue. But when multiple data packages are involved, the load fails with the error "Exception in parallel processing".

    Hi Lijo,
    I got the following information from the function module documentation of the FM RSDRI_INFOPROV_READ.
    If neither I_SAVE_IN_FILE nor I_SAVE_IN_TABLE are set, then the return takes place in the form of packages (that is an internal table), of value I_PACKAGESIZE. A negative value means that the return should be in one package.
    Prathish.
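
    For reference, the packaged interface is normally consumed in a loop like the rough sketch below: each call returns the next package until the end-of-data flag is set. The InfoProvider name, target structure and package size are placeholders, and the parameter names are quoted from memory of the FM interface, so please double-check them in SE37 before relying on this.
    DATA: lt_sfc   TYPE rsdri_th_sfc,        "requested characteristics
          lt_sfk   TYPE rsdri_th_sfk,        "requested key figures
          lt_range TYPE rsdri_t_range,       "restrictions
          lt_data  TYPE STANDARD TABLE OF zcube01_flat,  "placeholder result structure
          lv_end   TYPE c,
          lv_first TYPE c VALUE 'X'.

    * fill lt_sfc / lt_sfk / lt_range as required (omitted here)

    DO.
      CALL FUNCTION 'RSDRI_INFOPROV_READ'
        EXPORTING
          i_infoprov    = 'ZCUBE01'          "placeholder InfoProvider
          i_th_sfc      = lt_sfc
          i_th_sfk      = lt_sfk
          i_t_range     = lt_range
          i_packagesize = 50000
        IMPORTING
          e_t_data      = lt_data
          e_end_of_data = lv_end
        CHANGING
          c_first_call  = lv_first
        EXCEPTIONS
          OTHERS        = 1.
      IF sy-subrc <> 0.
        EXIT.
      ENDIF.

    * process this package here (e.g. hand it to one parallel task)

      IF lv_end = 'X'.
        EXIT.
      ENDIF.
    ENDDO.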

  • Pointers for optimizing system performance (run time) while running DP process chain with parallel processing

    Hi Experts,
    We are running the APO DP process chain with parallel processing in our company. We are experiencing some issues regarding the run time of the process chain and need your help on the points below:
    - What are the ways we can optimize process chain run time.
    - Special points we need to take care of in case of parallel processing profiles used in process chain.
    - Any specific sequence to be followed for different processes in process chain - if there is some best practice followed.
    - Any notes suggesting ways to improve system performance for APO version 7 with different enhancement packs 1 and 2.
    Any help will be really appreciated.
    Regards

    Hi Neelesh,
    There are many ways to optimize the performance of process chains (background jobs) in an APO system.
    Firstly, I would recommend identifying the pain areas (steps) that are completing with the longest runtimes. Each of those steps then has its own approaches to reduce the runtime.
    For example, you may end up with steps like InfoPackage executions, DTPs, DP mass processing jobs etc. which are running with long runtimes. Target each of them differently and find ways to optimize. At the same time, the approach you follow should be feasible from a Basis perspective (system load and utilization) as well.
    Coming to parallel processing, you can use it for several kinds of jobs and explore it further, e.g. loading an InfoCube, mass processing, InfoPackage execution, DTP, TSCOPY etc.
    Check the below link for more info
    Performance problems in DP mass processing
    Let me know if you require further info.
    Regards,
    Raj
