Parallel processing end condition problem

Hi,
I use a block with dynamic parallel processing for each row (ParForEach). In every branch I create a work item where a user has to enter something; the input is stored in my field STATUS.
Now I want all branches to be closed as soon as the status is 'X', just as it would work in a fork.
I entered the end condition &STATUS& = X.
But the end condition of the parallel processing block does not take effect. When I test the condition on its own it evaluates correctly, but in the running workflow the branches are not ended.
Does someone have a solution, or can you explain the problem to me?
Thx

Hi,
Dynamic parallelism using ParForEach is not the same as a fork with multiple branches, where you can specify the number of branches required to end the fork. With dynamic parallelism, one sub-workflow (or one instance of the task carrying the dynamic parallelism) is started in parallel for each index of the multiline container element, each branch is then independent with no relation to the others by default, and the workflow will not continue to the next step until all of these branches are completed.
However, to solve your problem I suggest the following:
1. How are you handling the process in each branch: is it through a sub-workflow? If yes, create a fork in your sub-workflow parallel to your normal process. In that fork, create a 'Wait for Event' step that waits for a new custom event (you have to define this custom event on your BO). Set the necessary branches of the fork to 1 and join the fork to the end of the sub-workflow.
2. Whenever your requirement to end all the branches is fulfilled (in your case, status = X), raise this new custom event using a Create Event step. It will be caught by the 'Wait for Event' step in the fork of each sub-workflow and will end that sub-workflow, meaning your branch is ended. Make sure that you pass the BO object instance to your sub-workflow through binding from your main workflow. A sketch of raising the event from code follows below.
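For illustration, if you raise the terminating event from ABAP rather than from a Create Event step, a minimal sketch could look like this (object type ZMYBO, event ZCANCELLED and the object key are assumed names; SAP_WAPI_CREATE_EVENT is the standard workflow API):

* Sketch only: raise the custom terminating event once STATUS = 'X'.
DATA: lv_objkey TYPE swo_typeid,
      lv_rc     TYPE sy-subrc.

lv_objkey = '0000012345'.            "key of the BO instance

CALL FUNCTION 'SAP_WAPI_CREATE_EVENT'
  EXPORTING
    object_type = 'ZMYBO'            "assumed BO type
    object_key  = lv_objkey
    event       = 'ZCANCELLED'       "assumed custom event
    commit_work = 'X'
  IMPORTING
    return_code = lv_rc.

IF lv_rc <> 0.
  MESSAGE 'Could not raise workflow event' TYPE 'I'.
ENDIF.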
Hope this helps you !!
Regards
Krishna Mohan

Similar Messages

  • Parallel processing of condition records in SAP

    Hi,
    I have a particular scenario, wherein XI sends 30000 idocs for pricing condition records of message type COND_A to SAP, and SAP has to process all the idocs within 15 minutes. Is it possible, and what kind of parallel processing techniques can be used to achieve this?
    Regards,
    Vijay
    Edited by: Vijay Iyengar on Feb 21, 2008 2:05 PM

    Hi
We had a similar performance issue loading conditions for sales deals.
We did not use IDocs.
Initially we did a BDC, which loaded 19 records per second; later we developed a direct input program, which loaded close to 900 records per second.
What we did was write a direct input program that called the function module
CALL FUNCTION 'RV_KONDITION_SICHERN_V13A' IN UPDATE TASK
But please note: we took approval from SAP before using it. A sketch of the batching pattern follows below.
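For illustration only, the batched update-task pattern looks roughly like the sketch below (Z_SAVE_COND_RECORD, zcond_rec and lt_cond_records are hypothetical names; the real interface of RV_KONDITION_SICHERN_V13A is not shown in this thread):

* Sketch: bundle V1 updates and commit in packages.
CONSTANTS lc_package TYPE i VALUE 500.
DATA: lt_cond_records TYPE STANDARD TABLE OF zcond_rec, "your data
      ls_rec          TYPE zcond_rec,
      lv_count        TYPE i.

LOOP AT lt_cond_records INTO ls_rec.
  CALL FUNCTION 'Z_SAVE_COND_RECORD' IN UPDATE TASK
    EXPORTING
      is_record = ls_rec.
  lv_count = lv_count + 1.
  IF lv_count >= lc_package.
    COMMIT WORK.                     "run the collected V1 updates
    CLEAR lv_count.
  ENDIF.
ENDLOOP.
IF lv_count > 0.
  COMMIT WORK.                       "flush the last partial package
ENDIF.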
    Regards
    Madhan
    Edited by: Madhan Doraikannan on Oct 20, 2008 11:40 AM

  • Fork End condition problem in BPM

    Hi All,
I am collecting three different types of IDocs (call them A, B, and C). For collecting the IDocs I am using BPM. The BPM steps are:
    1. Block1
       a. Fork - 3 branches
          i. Loop (3 loops): Receive - Container Operation (Assign) - Container Operation (Append)
    2. Block2
       a. Fork - 3 branches
          i. Transformation
          ii. Send
In one of the fields in IDoc C I get the total number of IDocs sent from the source. If the collected IDoc count equals that field value in IDoc C, the fork condition is true; otherwise the fork keeps waiting until the counts match.
This works fine whenever I send the IDocs in the order A, B, C or B, A, C. If I send them as A, C, B or B, C, A, the fork end condition is not triggered even though the condition is true. Can you please let me know how we can resolve this?

I put the fork end condition in the loop as well, and then it worked.

  • Problem in Dynamic Parallel processing using Blocks

    Hi All,
My requirement is to have parallel approvals, so I am trying to use dynamic parallel processing through blocks. I can't use a fork, since the agents are determined only at runtime.
I am using the ParForEach block and passing &AGENTS& as a multiline container element on the 'Parallel Processing' tab. Inside the block I have a user decision. Right now I am hardcoding 2 agents in the AGENTS multiline element. It works, but I get 2 instances of the block. I understand that this is because I am sending &AGENTS& on the parallel processing tab, but I need just one instance of the block going to all the users in the AGENTS multiline element.
Please let me know how to achieve this. I have already searched the forum but couldn't find anything that suits my requirement.
    Pls help!
    Regards,
    Soumya

Yes, that's true: whenever you use a ParForEach block, a separate work item ID, i.e. a separate instance, is created for each value entry in the table; that kind of single-instance parallel processing is not possible with ParForEach.
Instead, you can create a fork with 3 branches and define an end condition so that the fork only ends when all 3 branches are executed.
Before the fork step, determine all the agents and store them in an internal table. You can then access a single entry by its index (for example, bind &AGENTS[1]& to the first branch, &AGENTS[2]& to the second, and so on); see this [wiki|https://www.sdn.sap.com/irj/scn/wiki?path=/display/abap/accessingSingleEntryfromMulti-line+Element] on accessing a single entry from a multiline element.
Assign one agent to each task in the fork.
Since the condition says that the fork is only left when all three branches are executed, it will wait until all the steps are completed.

  • Parallel Block (ParForEach) end condition doesnt work

    hello
I have a parallel block (multiline) in my workflow.
In its definition I have also configured an end condition
(&Approved& = 'X', for example).
My intention was that the workflow continues after the block when:
1) all parallel blocks are processed, or
2) the end condition is met.
But the second scenario (end condition met) doesn't work.
When Approved = X in one block, the other parallel blocks aren't logically deleted.
Is my assumption wrong? Has anyone ever experienced this behaviour?
    thanx
    Hayo

    Hello,
In a fork you normally define how many branches are required to complete the job.
Let's say you have 3 approvers for one work item: A, B, and C.
If one of them approves, you move on to the next step; in this case there are 3 branches but only 1 necessary branch.
I believe you are already setting the flag to 'X' on approval.
So just play with the necessary branches and you will be able to do it.
    Regards,
    Nabheet Madan

  • Parallel Processing problem

    Hi All,
I have implemented parallel processing based on the number of jobs given on the selection screen. If the table holds 1000 records and the number of jobs is 10, then 100 records are passed to the submitted program. The problem is that the records are accounts, and each account can be associated with more than one contract. So if an account has 4 contracts, 2 might go in the first batch while the other 2 go in the next batch, because the data is divided purely by the number of jobs given on the screen.
I would like to ensure that all the contracts associated with an account go to the submitted program in a single batch. Can anyone give me an idea how this can be achieved?
gv_job_no is the number of records to be sent in one batch, and lt_ever is the table passed to the submitted program.
      LOOP AT gt_ever
        INTO gwa_ever.
        IF gv_lines EQ gv_job_no.
          CLEAR gv_lines.
          EXIT.
        ELSE.
          APPEND gwa_ever TO lt_ever.
          gv_lines = gv_lines + 1.
          DELETE gt_ever
            INDEX sy-tabix.
        ENDIF.
      ENDLOOP.
    Thanks,
    Shreeraj
    Edited by: shreeraj pawar on Jan 18, 2011 4:29 PM

Hi Shreeraj,
do it like this, with a sorted internal table whose first two fields are account and contract (symbolic code; declarations added):
data: lv_start         type sy-tabix,
      lv_lines_per_job type i value 100. "packet size - adjust
field-symbols: <any> type any.

loop at gt_account_contract assigning <any>.
  at new account.
    lv_start = sy-tabix.
  endat.
  at end of account.
    append lines of gt_account_contract
      from lv_start to sy-tabix
      to lt_account_contract.
    if lines( lt_account_contract ) >= lv_lines_per_job.
*     <submit lt_account_contract for processing>
      clear:
        lt_account_contract,
        lv_start.
    endif.
  endat.
endloop.
You may have accounts with many contracts, which can make a packet exceed the lv_lines_per_job limit, but you will still get a much smoother distribution, and the contracts of one account always stay together.
    Regards,
    Clemens

  • Problem in Parallel Processing

    Hi All,
I have performed all the steps involved in parallel processing, like creating the multiline container elements
and creating the task.
When I run my workflow, all the work items appear in the workflow log in COMPLETED status,
but in SBWP I can't find the work items in the inbox.
Please let me know why this is happening.
    Thanks
    Ravi Aswani

    Hi Ravi,
As per your inputs, when you check the workflow log all the work items are in COMPLETED status.
1. Have you created the tasks as background tasks?
If it is a background task, then you won't get any work items in the inbox.
2. If you have created a foreground task and it is still not sending anything, then check in the workflow log whether the flow reaches the activity step that is defined to send the work item.
3. Since you said the status is COMPLETED, I don't think there is an agent problem right now.
Please revert with more inputs if you need more clarification, so that we can help you better.
    Regards,
    Gautham Paspala

  • Parallel Processing Problems!

    Hello!
I have a WF with parallel processing that sends a work item to each user I choose in a Z program; that part is OK.
What I need is to start the next step ONLY if all the approvers approve it; if one of them rejects it, I need a different action.
Right now, as soon as just one of the approvers approves the step, my WF goes on to the next one. How can I wait for all approvals before starting the next step?
Am I clear?
    Tks
    Marcos Munhos

I think the parallel approval step normally waits for the remaining approvers of the approval step; check whether the parallel approval is properly defined on the Miscellaneous or Others tab.
If the parallel approval step is a subworkflow, then it is difficult to evaluate whether all approvals are done; you may have to make use of a container operation (for example, to count the approvals).
    Thanks
    Arghadip

  • How to define "leading" random number in Infoset fpr parallel processing

    Hello,
in Bank Analyzer we use an InfoSet which consists of a selection across 4 ODS tables to gather data.
No matter which PACKNO fields we check or uncheck in the InfoSet definition screen (transaction RSISET), the parallel framework always selects the same PACKNO field from one ODS table.
Unfortunately, the table that is selected by the framework is not suitable, because our 'leading' ODS table, which holds most of our selection criteria, is a different one.
How can we 'convince' the parallel framework to select our leading table for the specification of the PACKNO instead? (This would be about 20 times faster due to better select options.)
We even tried to assign 'alternate characteristics' to the PACKNOs we do not like to use, but it seems that note 999101 only fixes this for non-system fields;
for the random number a different form routine is used in /BA1/LF3_OBJ_INDEX_READF01: fill_range_random instead of fill_range.
Has anyone managed to assign the PACKNO of his choice to the InfoSet selection?
How?
    Thanks in advance
    Volker

Well, it is a bit more complicated.
ODS one, which the parallel framework selects to deliver the PACKNO, is about equal in size (~120 GB each) to ODS two, which has two very selective fields that cut down the amount of data to be retrieved.
Currently we execute the generated SQL in the best possible manner (by faking some stats).
The problem is that I'd like to have a statement that has the PACKNO in that very same table.
PACKNO is a generated random number intended specifically for parallel processing.
The job starts about 100 slaves.
Each slave gets a packet to process from the framework, internally represented by a BETWEEN clause on this PACKNO. This is joined against ODS2, and only then can the selective fields be compared, with the result that 90% of the already fetched rows are discarded.
Basically it goes like this:
    select ...
    from
      ods1 T_00,
      ods2 T_01,
      ods3 T_02,
      ods4 T_03
    where
    ... some key equivalence join-conditions ...
    AND  T_00.PACKNO BETWEEN '000000' and '000050' -- very selective on T_00
    AND  T_01.TYPE = '202'  -- selective Value 10% on second table
I am trying to change this to
    AND  T_01.PACKNO BETWEEN '000000' and '000050'
    AND  T_01.TYPE = '202'  -- selective Value 10%
so I can use a combined index on T_01 (TYPE, PACKNO).
This would be about 10 times more selective on the driving table and, since T_00 would then be joined only for the rows I need, a calculated 20-30 times faster overall.
It really boosts performance when I do this in sqlplus.
Hope this clarifies it a bit.
The problem is that I cannot change the code, neither the part that builds the packets nor the part that executes the application.
I need to change the InfoSet so that the framework decides to build proper SQL with T_01.PACKNO instead of T_00.PACKNO.
    Thanks a lot
    Volker

  • ABAP OO and parallel processing

    Hello ABAP community,
I am trying to implement an ABAP OO scenario where I have to take into account parallel processing and processing logic in the sense of update function modules (type V1).
The scenario is defined as follows:
Frame class X creates an instance of class Y and an instance of class Z.
Classes Y and Z should be processed in parallel, so class X calls classes Y and Z.
Classes Y and Z call BAPIs and make different database changes.
When class Y or Z has finished, the processing status is written into a status table by caller class X.
The processing logic within class Y and class Z should each form one SAP LUW in the sense of an update function module (type V1).
Can I use events?
(How) should I use CALL FUNCTION ... IN UPDATE TASK?
(How) should I use CALL FUNCTION ... STARTING NEW TASK?
What is the best method to realise this behaviour?
    Many thanks for your suggestions.

Hallo Christian,
I will describe in detail how I solved this problem. There may be a newer way by now, but it works.
STEPS:
I assume you have split your data into packages.
1.) Create an RFC-enabled FM Z_WAIT.
It returns OK or NOT OK, and does the following:
DO.
  Call function TH_WPINFO until the work process list has more than a certain number of free entries (==> free tasks).
ENDDO.
If the result is OK ==> free tasks are available. (A sketch of this FM follows below.)
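For illustration, the Z_WAIT idea might be sketched as below. Assumptions: TH_WPINFO returns one line per work process in structure WPINFO, and WP_ISTATUS = 2 means 'waiting'; verify the field names and values against what SM50 shows in your release.

FUNCTION z_wait.
*"  EXPORTING VALUE(EV_OK) TYPE FLAG
  DATA: lt_wplist TYPE STANDARD TABLE OF wpinfo,
        ls_wp     TYPE wpinfo,
        lv_free   TYPE i.

  DO 60 TIMES.                     "give up after about a minute
    CLEAR lv_free.
    CALL FUNCTION 'TH_WPINFO'
      TABLES
        wplist = lt_wplist.
    LOOP AT lt_wplist INTO ls_wp WHERE wp_typ = 'DIA'.
      IF ls_wp-wp_istatus = 2.     "assumed: 2 = waiting (free)
        lv_free = lv_free + 1.
      ENDIF.
    ENDLOOP.
    IF lv_free > 2.                "keep a reserve of free dialog WPs
      ev_ok = 'X'.                 "OK: free tasks available
      RETURN.
    ENDIF.
    WAIT UP TO 1 SECONDS.          "let running calls finish
  ENDDO.
  CLEAR ev_ok.                     "NOT OK: no free tasks found
ENDFUNCTION.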
Then call your FM (RFC-enabled) like this:
CALL FUNCTION <FM>
  STARTING NEW TASK ls_tasknam "Unique identifier!
  DESTINATION IN GROUP p_group
  PERFORMING return_info ON END OF TASK
  EXPORTING
    ...
  TABLES
    ...
  EXCEPTIONS
*:--- Take care of the order of the exceptions!
    COMMUNICATION_FAILURE = 3
    SYSTEM_FAILURE        = 2
    UNFORCED_ERROR        = 4
    RESOURCE_FAILURE      = 5
    OTHERS                = 1.
*:--- Then you must check the difference between the started
*:--- calls and the received calls. If the difference exceeds a
*:--- certain value limit_tasks, wait:
WAIT UNTIL called_task < limit_tasks UP TO '600' SECONDS.
The value should not be greater than 20!
Data description:
PARAMETERS: p_group LIKE bdfields-rfcgr DEFAULT 'Server_alle'. "For example; use the F4 help
if you have defined the report parameter as above.
ls_tasknam ==> simply the increasing number of RFC calls, as a character value.
RETURN_INFO is a form routine in which you can check the results. Within this form you must call:
RECEIVE RESULTS FROM FUNCTION <FM>
  TABLES: ... "The tables of your <FM>, in exactly the same order!
  EXCEPTIONS
    COMMUNICATION_FAILURE     = 3
    SYSTEM_FAILURE            = 2
    UNFORCED_ERROR            = 4
    NO_ACTIVATE_INFOSTRUCTURE = 1.
Here you must count the received calls!
And you can save them into an internal table for checking.
I hope I could help you a little bit.
Good luck
Michael

  • Parallel process with a queue and file?

Hello, first of all sorry for my bad English^^:
I have been working for days on my project, in which I have to demonstrate parallel processes while transferring information in different ways, together with their problems (like timing and so on).
I chose to transmit the information to a parallel process by (1) a queue and (2) a file (.txt). (Other ways are welcome - do you have 1-2 other ideas?)
To solve this I made three while loops. The first one is the original, where the original information (a signal) is created and sent via queue and file to the other two while loops, where this information is evaluated to recreate the same signal.
So in the end you can compare all the signals to see whether they are the same, and thus answer the question about the parallelism of the processes.
But in my VI file I have some problems:
The version with the queue works pretty well - it's almost parallel.
But the version with the file doesn't run in parallel, and I have no idea how to solve that.
I'm a newbie^^
Can someone correct my file so that both versions (file and queue) run parallel with the original one, or tell me what I can or must do?
    Attachments:
Queue_Data_Parallel_FORUM.vi 23 KB

LapLapLap wrote:
Can someone correct my file so that both versions (file and queue) run parallel with the original one, or tell me what I can or must do?
A queue is technically never parallel, though you can have several if you really need parallelism. Other methods for transferring information between processes include events, action engines and notifiers (and why not web services).
Due to limitations in the disk system you can only read/write one file at a time from one process, so I wouldn't recommend files. If you use a RAM disk it might work.
/Y
    LabVIEW 8.2 - 2014
    "Only dead fish swim downstream" - "My life for Kudos!" - "Dumb people repeat old mistakes - smart ones create new ones."
    G# - Free award winning reference based OOP for LV

  • Parallel Processing

    Hi,
I am trying to implement parallel processing and am facing a problem: the function module contains the statement
    submit (iv_repid) to sap-spool
                        with selection-table it_rspar_tmp
                        spool parameters iv_print_parameters
                        without spool dynpro
                        via job iv_name
                        number iv_number
                        and return.
    I call the Function Module as such :
    CALL FUNCTION 'YFM_OTC_ZSD_ORDER'
        STARTING NEW TASK v_task_name
        DESTINATION 'NONE'
        PERFORMING receive_results ON END OF TASK
        EXPORTING
          iv_repid              = lv_repid
          iv_print_parameters   = v_print_parameters
          iv_name               = v_name
          iv_number             = v_number
        TABLES
          it_rspar_tmp          = t_rspar_tmp[]
        EXCEPTIONS
          communication_failure = 1
          OTHERS                = 2.
    But I keep getting the error Output Device "" unknown.
    Kindly advise.
    Thanks.

I need the output of a report to be generated in the spool; I then retrieve it later from the spool and display it along with another ALV in my current program.
I have called the JOB_OPEN and JOB_CLOSE function modules. Between these two FM calls I have written the parallel-processing call (the same CALL FUNCTION shown above); a sketch of the framing follows below.
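For illustration, the JOB_OPEN / JOB_CLOSE framing described here might look like the sketch below (variable names and the GET_PRINT_PARAMETERS call are assumptions). Note that the print parameters handed to the RFC must contain a valid output device; an empty device is presumably why SUBMIT raises Output device "" unknown.

DATA: lv_jobname  TYPE tbtcjob-jobname VALUE 'ZSD_ORDER',
      lv_jobcount TYPE tbtcjob-jobcount,
      ls_params   TYPE pri_params,
      lv_valid(1) TYPE c.

* Build print parameters with a real output device, e.g. 'LP01'.
CALL FUNCTION 'GET_PRINT_PARAMETERS'
  EXPORTING
    no_dialog      = 'X'
    destination    = 'LP01'          "assumed output device
  IMPORTING
    out_parameters = ls_params
    valid          = lv_valid.

CALL FUNCTION 'JOB_OPEN'
  EXPORTING
    jobname  = lv_jobname
  IMPORTING
    jobcount = lv_jobcount
  EXCEPTIONS
    OTHERS   = 1.

* ... the CALL FUNCTION 'YFM_OTC_ZSD_ORDER' STARTING NEW TASK shown
* above goes here, passing ls_params, lv_jobname and lv_jobcount ...

CALL FUNCTION 'JOB_CLOSE'
  EXPORTING
    jobcount  = lv_jobcount
    jobname   = lv_jobname
    strtimmed = 'X'                  "start the job immediately
  EXCEPTIONS
    OTHERS    = 1.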
After this, I retrieve the data with function module RSPO_RETURN_SPOOLJOB.
All the above steps work while I am in debugging mode: at the RFC call a new session opens, I execute that session completely, return to the main program execution, execute to the end, and get the desired output.
But if, in debug mode, I reach the RFC and do not execute the FM in the new session, and instead go back to the main program and execute it directly, I can replicate the error: Output device "" unknown.
So I guess it has something to do with the SUBMIT statement in the RFC.
Any assistance would be great!
Thanks!!

  • Parallel processing in background

    Hi All,
I am processing 1 million records in the background, which takes approximately 10 hours. I want to reduce the time to less than 1 hour and tried using parallel processing, but the tasks run in dialog work processes and produce ABAP short dumps due to timeouts.
Is there any other solution with which I can reduce the total processing time?
Please note that I cannot split the work: I get 1 million records from a select query, and after processing all those records in SAP I send them to XI, and XI posts them to the legacy system.
Please note that all other performance tuning has been done.
    Thanks,
    Rajesh.

Hi Rajesh,
Refer to the sample code below for parallel processing; with this your processing time will be highly optimized.
Go through the description given in the code at each step.
The code checks the available work processes and assigns data in packets for processing. This saves a lot of time, especially when the data runs into millions of records.
Hope it helps.
    REPORT PARAJOB.
* Data declarations
    DATA: GROUP LIKE RZLLITAB-CLASSNAME VALUE ' ',
    "Parallel processing group.
    "SPACE = group default (all
    "servers)
    WP_AVAILABLE TYPE I, "Number of dialog work processes
    "available for parallel processing
    "(free work processes)
    WP_TOTAL TYPE I, "Total number of dialog work
    "processes in the group
    MSG(80) VALUE SPACE, "Container for error message in
    "case of remote RFC exception.
INFO LIKE RFCSI, "RFC system information returned by the call
    JOBS TYPE I VALUE 10, "Number of parallel jobs
    SND_JOBS TYPE I VALUE 1, "Work packets sent for processing
    RCV_JOBS TYPE I VALUE 1, "Work packet replies received
EXCP_FLAG(1) TYPE C, "Flag for RESOURCE_FAILURE handling
    TASKNAME(4) TYPE N VALUE '0001', "Task name (name of
    "parallel processing work unit)
    BEGIN OF TASKLIST OCCURS 10, "Task administration
    TASKNAME(4) TYPE C,
    RFCDEST LIKE RFCSI-RFCDEST,
    RFCHOST LIKE RFCSI-RFCHOST,
    END OF TASKLIST.
* Optional call to SPBT_INITIALIZE to check the
* group in which parallel processing is to take place.
* Could be used to optimize sizing of work packets
* (total work / WP_AVAILABLE).
CALL FUNCTION 'SPBT_INITIALIZE'
    EXPORTING
    GROUP_NAME = GROUP
    "Name of group to check
    IMPORTING
    MAX_PBT_WPS = WP_TOTAL
    "Total number of dialog work
    "processes available in group
    "for parallel processing
FREE_PBT_WPS = WP_AVAILABLE
    "Number of work processes
    "available in group for
    "parallel processing at this
    "moment
    EXCEPTIONS
    INVALID_GROUP_NAME = 1
    "Incorrect group name; RFC
    "group not defined. See
    "transaction RZ12
    INTERNAL_ERROR = 2
    "R/3 System error; see the
    "system log (transaction
    "SM21) for diagnostic info
    PBT_ENV_ALREADY_INITIALIZED = 3
    "Function module may be
    "called only once; is called
    "automatically by R/3 if you
    "do not call before starting
    "parallel processing
    CURRENTLY_NO_RESOURCES_AVAIL = 4
    "No dialog work processes
    "in the group are available;
    "they are busy or server load
    "is too high
    NO_PBT_RESOURCES_FOUND = 5
    "No servers in the group
    "met the criteria of >
    "two work processes
    "defined.
    CANT_INIT_DIFFERENT_PBT_GROUPS = 6
    "You have already initialized
    "one group and have now tried
    "initialize a different group.
OTHERS = 7.
    CASE SY-SUBRC.
    WHEN 0.
    "Everything’s ok. Optionally set up for optimizing size of
    "work packets.
    WHEN 1.
    "Non-existent group name. Stop report.
    MESSAGE E836. "Group not defined.
    WHEN 2.
    "System error. Stop and check system log for error
    "analysis.
    WHEN 3.
    "Programming error. Stop and correct program.
    MESSAGE E833. "PBT environment was already initialized.
    WHEN 4.
    "No resources: this may be a temporary problem. You
    "may wish to pause briefly and repeat the call. Otherwise
    "check your RFC group administration: Group defined
    "in accordance with your requirements?
    MESSAGE E837. "All servers currently busy.
WHEN 5.
"Check your servers, network, operation modes.
WHEN 6.
"Programming error: you tried to initialize a different
"group after one was already initialized. Stop and correct.
ENDCASE.
* Do parallel processing. Use CALL FUNCTION STARTING NEW TASK
* DESTINATION IN GROUP to call the function module that does the
* work. Make a call for each record that is to be processed, or
* divide the records into work packets. In each case, provide the
* set of records as an internal table in the CALL FUNCTION
* keywords (EXPORTING, TABLES arguments).
    DO.
    CALL FUNCTION 'RFC_SYSTEM_INFO' "Function module to perform
    "in parallel
    STARTING NEW TASK TASKNAME "Name for identifying this
    "RFC call
    DESTINATION IN GROUP group "Name of group of servers to
    "use for parallel processing.
    "Enter group name exactly
    "as it appears in transaction
    "RZ12 (all caps). You may
    "use only one group name in a
    "particular ABAP program.
    PERFORMING RETURN_INFO ON END OF TASK
    "This form is called when the
    "RFC call completes. It can
    "collect IMPORT and TABLES
    "parameters from the called
    "function with RECEIVE.
    EXCEPTIONS
    COMMUNICATION_FAILURE = 1 MESSAGE msg
    "Destination server not
    "reached or communication
    "interrupted. MESSAGE msg
    "captures any message
    "returned with this
    "exception (E or A messages
    "from the called FM, for
    "example. After exception
    "1 or 2, instead of aborting
    "your program, you could use
    "SPBT_GET_PP_DESTINATION and
    "SPBT_DO_NOT_USE_SERVER to
    "exclude this server from
    "further parallel processing.
    "You could then re-try this
    "call using a different
    "server.
    SYSTEM_FAILURE = 2 MESSAGE msg
    "Program or other internal
    "R/3 error. MESSAGE msg
    "captures any message
    "returned with this
    "exception.
    RESOURCE_FAILURE = 3. "No work processes are
    "currently available. Your
    "program MUST handle this
    "exception.
*   YOUR_EXCEPTIONS = X.  "Placeholder: add exceptions generated
*                         "by the called function module here.
*                         "Exceptions are returned to you and you
*                         "can respond to them here.
    CASE SY-SUBRC.
    WHEN 0.
    "Administration of asynchronous RFC tasks
    "Save name of task...
    TASKLIST-TASKNAME = TASKNAME.
    "... and get server that is performing RFC call.
    CALL FUNCTION 'SPBT_GET_PP_DESTINATION'
    EXPORTING
    RFCDEST = TASKLIST-RFCDEST
    EXCEPTIONS
    OTHERS = 1.
    APPEND TASKLIST.
    WRITE: / 'Started task: ', TASKLIST-TASKNAME COLOR 2.
    TASKNAME = TASKNAME + 1.
    SND_JOBS = SND_JOBS + 1.
    "Mechanism for determining when to leave the loop. Here, a
    "simple counter of the number of parallel processing tasks.
    "In production use, you would end the loop when you have
    "finished dispatching the data that is to be processed.
    JOBS = JOBS - 1. "Number of existing jobs
    IF JOBS = 0.
    EXIT. "Job processing finished
    ENDIF.
    WHEN 1 OR 2.
    "Handle communication and system failure. Your program must
    "catch these exceptions and arrange for a recoverable
    "termination of the background processing job.
    "Recommendation: Log the data that has been processed when
    "an RFC task is started and when it returns, so that the
    "job can be restarted with unprocessed data.
    WRITE msg.
    "Remove server from further consideration for
    "parallel processing tasks in this program.
    "Get name of server just called...
    CALL FUNCTION 'SPBT_GET_PP_DESTINATION'
    EXPORTING
    RFCDEST = TASKLIST-RFCDEST
    EXCEPTIONS
    OTHERS = 1.
    "Then remove from list of available servers.
    CALL FUNCTION 'SPBT_DO_NOT_USE_SERVER'
    IMPORTING
    SERVERNAME = TASKLIST-RFCDEST
    EXCEPTIONS
    INVALID_SERVER_NAME = 1
    NO_MORE_RESOURCES_LEFT = 2
    "No servers left in group.
    PBT_ENV_NOT_INITIALIZED_YET = 3
    OTHERS = 4.
    WHEN 3.
    "No resources (dialog work processes) available at
    "present. You need to handle this exception, waiting
    "and repeating the CALL FUNCTION until processing
    "can continue or it is apparent that there is a
    "problem that prevents continuation.
    MESSAGE I837. "All servers currently busy.
    "Wait for replies to asynchronous RFC calls. Each
    "reply should make a dialog work process available again.
    IF EXCP_FLAG = SPACE.
    EXCP_FLAG = 'X'.
    "First attempt at RESOURCE_FAILURE handling. Wait
    "until all RFC calls have returned or up to 1 second.
    "Then repeat CALL FUNCTION.
    WAIT UNTIL RCV_JOBS >= SND_JOBS UP TO '1' SECONDS.
    ELSE.
    "Second attempt at RESOURCE_FAILURE handling
    WAIT UNTIL RCV_JOBS >= SND_JOBS UP TO '5' SECONDS.
    "SY-SUBRC 0 from WAIT shows that replies have returned.
    "The resource problem was therefore probably temporary
    "and due to the workload. A non-zero RC suggests that
    "no RFC calls have been completed, and there may be
    "problems.
    IF SY-SUBRC = 0.
    CLEAR EXCP_FLAG.
    ELSE. "No replies
    "Endless loop handling
    ENDIF.
    ENDIF.
    ENDCASE.
    ENDDO.
* Wait for end of job: replies from all RFC tasks.
* Receive remaining asynchronous replies.
    WAIT UNTIL RCV_JOBS >= SND_JOBS.
    LOOP AT TASKLIST.
    WRITE:/ 'Received task:', TASKLIST-TASKNAME COLOR 1,
    30 'Destination: ', TASKLIST-RFCDEST COLOR 1.
    ENDLOOP.
* This routine is triggered when an RFC call completes and
* returns. The routine uses RECEIVE to collect IMPORT and TABLE
* data from the RFC function module.
* Note that the WRITE keyword is not supported in asynchronous
* RFC. If you need to generate a list, then your RFC function
* module should return the list data in an internal table. You
* can then collect this data and output the list at the conclusion
* of processing.
    FORM RETURN_INFO USING TASKNAME.
    DATA: INFO_RFCDEST LIKE TASKLIST-RFCDEST.
    RECEIVE RESULTS FROM FUNCTION 'RFC_SYSTEM_INFO'
    IMPORTING RFCSI_EXPORT = INFO
    EXCEPTIONS
    COMMUNICATION_FAILURE = 1
    SYSTEM_FAILURE = 2.
    RCV_JOBS = RCV_JOBS + 1. "Receiving data
IF SY-SUBRC NE 0.
"Handle communication and system failure
ELSE.
READ TABLE TASKLIST WITH KEY TASKNAME = TASKNAME.
IF SY-SUBRC = 0. "Register data
TASKLIST-RFCHOST = INFO-RFCHOST. "RFCHOST from the RFCSI structure
MODIFY TASKLIST INDEX SY-TABIX.
ENDIF.
ENDIF.
ENDFORM.
    Reward points if that helps.
    Manish
    Message was edited by:
            Manish Kumar

  • Pointers for optimizing system performance (run time) while running DP process chain with parallel processing

    Hi Experts,
We are running the APO DP process chain with parallel processing in our company and are experiencing some issues with its run time. We need your help on the points below:
- What are the ways to optimize the process chain run time?
- Which special points do we need to take care of with parallel processing profiles used in a process chain?
- Is there a specific sequence to follow for the different processes in the process chain - some best practice?
- Are there any notes suggesting ways to improve system performance for APO version 7 with enhancement packs 1 and 2?
    Any help will be really appreciated.
    Regards

Hi Neelesh,
There are many ways to optimize the performance of process chains (background jobs) in an APO system.
First I would recommend identifying the pain areas: the steps that complete with the longest runtimes. Each type of step has its own approaches for decreasing the runtime.
You may end up with steps like InfoPackage executions, DTPs, or DP mass processing jobs running with long runtimes. Target each of them separately and find ways to optimize it. At the same time, the approach you follow should be feasible from a Basis perspective (system load and utilization) as well.
As for parallel processing, you can use it for the different kinds of jobs and explore it further from there: loading an InfoCube, mass processing, InfoPackage execution, DTP, TSCOPY etc.
Check the link below for more info:
Performance problems in DP mass processing
Let me know if you require further info.
Regards,
Raj

  • Dynamic parallel processing of the same object using asynchronous method

    Hi,
    Please can anyone help me?
I have to send the same DMS document to several agents for parallel processing. The number of agents is not known until runtime. Each of them should process the document and at least change its status; in the next step I check whether they changed it.
I use dynamic parallel processing of subworkflows. The key task of this subworkflow uses the standard method DOCUMENT.EDIT of object DRAW (standard transaction CV02N), which is asynchronous. The task is completed by the event DOCUMENT.CHANGED.
During parallel processing the appropriate number of work items is generated. However, when the first agent completes his work item, the event DOCUMENT.CHANGED is raised and all parallel work items are completed, even those of the other agents who have not processed the document yet.
    Any help would be appreciated.
    Thanks.
    Eva Vahalova

    Hi all,
The process is used to approve incoming invoices. Each scanned invoice is attached to a DMS document and then sent to one or several agents in parallel. People from several departments can approve the same invoice, for instance for energy or mobile phone costs. We have no HR module fully implemented. Each agent may write some remarks and has to set the document status to either approved or rejected. This status is temporary, so the others still see the original status for approving.
The process for incoming invoices was implemented by SAP consultants in 2003 on 4.6B and now runs on our 4.7 system. Now a new company has been established, running on a new SAP system (ECC 6.0), and our accounting department and some agents will deal with invoices in both systems. Therefore the process should look the same, or at least very similar. The majority of the old process was realized through programming, while I would like to use the workflow features that are available now and reduce the programming part.
As I see it, I will have to choose one of the solutions that Arghadip suggested.
I wonder if there is a possibility to use an asynchronous method and control the end of each work item by means of the 'Complete Work Item' or 'Complete Execution' conditions. I have never used them, and I do not know how they work or what condition to use. Maybe a program exit might be used as well. For controlling the agents I think I will have to do some programming anyway, because the work item can be finished by a substitute too.
    Thanks for your help.
    Eva
