Parallel Processing - Rechecking number of available resources.

Hi SAP Gurus,
Does anyone have an idea how to determine the number of available resources when using a parallel processing / multithreading approach to optimize a program? I was able to determine the number of free resources by calling FM SPBT_INITIALIZE, but I wasn't able to perform a similar call to this FM again (the exception PBT_ENV_ALREADY_INITIALIZED is triggered) for the purpose of rechecking the currently available resources from time to time. Any ideas?
Thanks,
Allex

Hi,
insert this after the call to SPBT_INITIALIZE:
case sy-subrc.
  when 0. "ok
  when 3. "pbt_env_already_initialized -> just re-read the current resources
    call function 'SPBT_GET_CURR_RESOURCE_INFO'
      importing
*       max_pbt_wps                 =
        free_pbt_wps                = gv_maxno_pbt_available
      exceptions
        internal_error              = 1
        pbt_env_not_initialized_yet = 2
        others                      = 3.
  when others.
endcase.
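For context, a sketch of the SPBT_INITIALIZE call that the CASE above assumes (variable names are illustrative; it is the exception numbering below that makes sy-subrc = 3 correspond to PBT_ENV_ALREADY_INITIALIZED):
call function 'SPBT_INITIALIZE'
  exporting
    group_name                     = ' '   "space = default RFC server group
  importing
    max_pbt_wps                    = gv_max_pbt_wps
    free_pbt_wps                   = gv_maxno_pbt_available
  exceptions
    invalid_group_name             = 1
    internal_error                 = 2
    pbt_env_already_initialized    = 3
    currently_no_resources_avail   = 4
    no_pbt_resources_found         = 5
    cant_init_different_pbt_groups = 6
    others                         = 7.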
kind regards,
hp

Similar Messages

  • Parallel processing using ABAP objects

    Hello friends,
I had posted in the performance tuning forum regarding a performance issue; I am reposting it here as it involves OO concepts.
The link to the previous posting:
Link: [Independent processing of elements inside internal table]
Here is the scenario:
I have an internal table with 10 (independent) records, and I need to process them. The processing of one record does not have any influence on another. When we use a loop, the performance issue is that the 10th record has to wait until the first 9 records are processed, even though there is no dependency on the output.
Could someone suggest a way to improve the performance?
If I am not clear with the question, I will explain it further...
An internal table has 5 numbers, say (1, 3, 4, 6, 7).
We are trying to find the square of each number.
In a loop, finding the square of 7 has to wait until 6 is completed, which is a waste of time.
This is related to parallel processing; I have referred to the parallel processing documents, but I want to do this conceptually.
I am not using the conventional procedural paradigm but object orientation. I have a method which performs this action. What am I supposed to do in that regard?
Comradely,
    K.Sibi

    Hi,
As exemplified by Edward, there is no RFC/asynchronous support for methods of ABAP Objects as such. You would indeed need to "wrap" your method or ABAP Object in a Function Module, which you can then call with the addition "STARTING NEW TASK". Optionally, you can define a method that will process the results of the Function Module that is executed asynchronously, as demonstrated as well in Edward's program.
    You do need some additional code to avoid the situation where your program takes all the available resources on the Application Server. Theoretically, you cannot bring the server or system down, as there is a system profile parameter that determines the maximum number of asynchronous tasks that the system will allow. However, in a productive environment, it would be a good idea to limit the number of asynchronous tasks started from your program so that other programs can use some as well.
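As an illustration only (the function module Z_SQUARE_NUMBER, the form RECEIVE_SQUARE and the variable names below are hypothetical, not taken from Edward's program), wrapping the work in an RFC-enabled Function Module and dispatching one task per record could look roughly like this; in a real report the FORM would sit at the end of the program:
  DATA: lt_numbers TYPE STANDARD TABLE OF i,
        lv_task(4) TYPE n VALUE '0001',
        gv_open    TYPE i.
  FIELD-SYMBOLS: <lv_number> TYPE i.

  LOOP AT lt_numbers ASSIGNING <lv_number>.
    CALL FUNCTION 'Z_SQUARE_NUMBER'        "hypothetical RFC-enabled wrapper FM
      STARTING NEW TASK lv_task
      DESTINATION IN GROUP DEFAULT
      PERFORMING receive_square ON END OF TASK
      EXPORTING
        iv_number             = <lv_number>
      EXCEPTIONS
        communication_failure = 1
        system_failure        = 2
        resource_failure      = 3
        OTHERS                = 4.
    IF sy-subrc = 0.
      gv_open = gv_open + 1.
    ELSE.
*     Error handling (e.g. a RESOURCE_FAILURE retry) is omitted in this sketch.
    ENDIF.
    lv_task = lv_task + 1.
  ENDLOOP.

* Wait until every successfully dispatched task has called back.
  WAIT UNTIL gv_open = 0.

FORM receive_square USING p_task.
  DATA lv_square TYPE i.
* Collect the result of the hypothetical wrapper FM and count the reply.
  RECEIVE RESULTS FROM FUNCTION 'Z_SQUARE_NUMBER'
    IMPORTING
      ev_square = lv_square
    EXCEPTIONS
      OTHERS    = 1.
  gv_open = gv_open - 1.
ENDFORM.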
    Function Group SPBT contains a set of Function Modules to manage parallel processing. In particular, FM SPBT_INITIALIZE will "initialize" a Server Group and return the maximum number of Parallel Tasks, as well as the number of free ones at the time of the initialization. The other FM of interest is SPBT_GET_CURR_RESOURCE_INFO, that can be called after the Server Group has been initialized, whenever you want to "fork" a new asynchronous task. This FM will give you the number of free tasks available for Parallel Processing at the time of calling the Function Module.
    Below is a code snippet showing how these Function Modules could be used, so that your program always leaves a minimum of 2 tasks for Parallel Processing, that will be available for other programs in the system.
          IF md_parallel IS NOT INITIAL.
            IF md_parallel_init IS INITIAL.
    *----- Server Group not initialized yet => Initialize it, and get the number of tasks available
          CALL FUNCTION 'SPBT_INITIALIZE'
            EXPORTING
              group_name                           = ' '
            IMPORTING
                  max_pbt_wps                          = ld_max_tasks
                  free_pbt_wps                         = ld_free_tasks
                EXCEPTIONS
                  invalid_group_name                   = 1
                  internal_error                       = 2
                  pbt_env_already_initialized          = 3
                  currently_no_resources_avail         = 4
                  no_pbt_resources_found               = 5
                  cant_init_different_pbt_groups       = 6
                  OTHERS                               = 7.
              md_parallel_init = 'X'.
            ELSE.
*----- Server Group initialized => check how many free tasks are
*----- available in the Server Group for parallel processing
              CALL FUNCTION 'SPBT_GET_CURR_RESOURCE_INFO'
                IMPORTING
                  max_pbt_wps                 = ld_max_tasks
                  free_pbt_wps                = ld_free_tasks
                EXCEPTIONS
                  internal_error              = 1
                  pbt_env_not_initialized_yet = 2
                  OTHERS                      = 3.
            ENDIF.
            IF ld_free_tasks GE 2.
*----- We have at least 2 remaining available tasks => reserve one
              ld_taskid = ld_taskid + 1.
            ENDIF.
        ENDIF.
    You may also need to program a WAIT statement, to wait until all asynchronous tasks "forked" from your program have completed their processing. Otherwise, you might find yourself in the situation where your main program has finished its processing, but some of the asynchronous tasks that it started are still running. If you do not need to report on the results of these asynchronous tasks, then that is not an issue. But, if you need to report on the success/failure of the processing performed by the asynchronous tasks, you would most likely report incomplete results in your program.
In the example where you have 10 entries to process asynchronously in an internal table, if you do not WAIT until all asynchronous tasks have completed, your program might report success/failure for only 8 of the 10 entries, because your program completed before the asynchronous tasks for entries 9 and 10 of your internal table returned.
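A minimal sketch of that final synchronization (gv_snd_jobs and gv_rcv_jobs are illustrative counters maintained by the dispatching code and by the ON END OF TASK routine; the 600-second cap is an arbitrary assumption):
* Block until every reply has arrived, but never longer than 10 minutes.
  WAIT UNTIL gv_rcv_jobs >= gv_snd_jobs UP TO 600 SECONDS.
  IF gv_rcv_jobs < gv_snd_jobs.
*   Some tasks have not returned yet => any results reported now would be incomplete.
  ENDIF.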
Given the complexity of Parallel Processing, you would only consider it in a customer program for situations where you have many (i.e., thousands, if not tens of thousands of) records to process, where the processing of each record tends to take a long time (like creating a Sales Order or a Material via BAPI calls), and where you have a limited time window to process all of these records.
    Well, whatever your decision is, good luck.

  • Defining index in Dynamic parallel processing of workflow

    Hi all,
I am using the dynamic parallel processing feature in workflow for a particular multiline element, but I am not able to define the index in that particular task.
Consider that I have a multiline element "Material". For this element, I need to loop the task for n number of records, so I am using dynamic parallel processing. Now for each work item generated, I need to show that particular material in the work item. I remember that we need to use an index, but I couldn't recollect how it is defined.
    Could anyone help me in this regard?
    Thanks in advance

    Nikhil,
When you use dynamic parallel processing, the index is available in _Wf_ParForEach_Index. A reference to the line of the multiline element is automatically generated for each work item created. You can see this in the Binding Editor for the step. In your case this will be "Material()". When you drag this element to the WF-to-Step binding window, it will be resolved as &Material[&_Wf_ParForEach_Index&]&. Therefore you can get the material for each work item by defining "Material" in your task container (not as multiline) and doing the appropriate binding. If you in fact need the index in your method, you can define a container element in your task with reference to type SWC_INDEX and bind it to _Wf_ParForEach_Index.
    Cheers,
    Ramki Maley.
    Please reward points if the answer is helpful.
    For info on awarding points click on this link: https://www.sdn.sap.com/sdn/index.sdn?page=crp_help.htm

  • Parallel Processing : How to Handle Resource failure?

    Hi,
I have implemented parallel processing / asynchronous RFC calls in my system because we have to process millions of records and performance is important. My program works fine in Development and Quality for a small number of records, but during SVT I am encountering the RESOURCE_FAILURE exception. So far I have tried waiting for some time and then processing again, and on failure I have also tried to process sequentially, but the second approach (executing a normal FM call on RESOURCE_FAILURE) did not work either; it results in the parallel processing being terminated.
    Any Pointer on how to handle it is appreciated.
    Regards,
    Deepak Bhalla

    <b>Handling the RESOURCE_FAILURE exception:</b> As each parallel processing task is dispatched, the SAP system counts down the number of resources (dialog work processes) available for processing additional tasks. This count goes up again as each parallel processing task is completed and returns to your program.
    Should your parallel processing tasks take a long time to complete, then the parallel processing resources may temporarily run out. In this case, CALL FUNCTION returns the exception RESOURCE_FAILURE. This means simply that all dialog work processes in the RFC group that your program is using are in use.
Your program must now wait until resources become available and then re-issue the CALL FUNCTION that failed. In the sample program, we use a simple, reasonably failsafe wait mechanism. The program waits for parallel processing tasks to return, freeing up resources. The WAIT also specifies an initial timeout of 1 second. If the CALL FUNCTION fails again, the WAIT is repeated with a longer timeout. You can increase the timeouts if you expect that your parallel tasks will take longer to complete. You should also add code to exit from the retry loop after a suitable number of iterations.
Use the WAIT statement, for example as sketched below.
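For illustration only, a rough shape of that retry loop; the function module Z_PROCESS_PACKET, the form RECEIVE_PACKET and the counters are placeholders rather than parts of your program, and the limit of 10 attempts is an arbitrary assumption:
  DATA: lv_attempts TYPE i,
        lv_task(4)  TYPE n VALUE '0001',
        gv_snd_jobs TYPE i,            "incremented for every task dispatched
        gv_rcv_jobs TYPE i.            "incremented in the ON END OF TASK routine

  DO.
    CALL FUNCTION 'Z_PROCESS_PACKET'   "placeholder for your parallel FM
      STARTING NEW TASK lv_task
      DESTINATION IN GROUP DEFAULT
      PERFORMING receive_packet ON END OF TASK
      EXCEPTIONS
        communication_failure = 1
        system_failure        = 2
        resource_failure      = 3
        OTHERS                = 4.
    IF sy-subrc <> 3.
      EXIT.          "dispatched (or a hard error) => leave the retry loop
    ENDIF.
    lv_attempts = lv_attempts + 1.
    IF lv_attempts > 10.
      EXIT.          "give up after a suitable number of iterations
    ENDIF.
*   Wait for running tasks to return and free dialog work processes;
*   the timeout grows with every failed attempt.
    WAIT UNTIL gv_rcv_jobs >= gv_snd_jobs UP TO lv_attempts SECONDS.
  ENDDO.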
Hope this resolves your issue.
    - Raj

• Limit number of parallel processes when archiving

    Hello,
I am about to archive object SD_VBAK. This will take a long time to run, so I want to limit the number of parallel batch processes it runs in. When I did this in our development system, the archiving took all available batch processes. Is there a way to set the number of parallel processes? I don't want to affect normal business.
    We run on ECC 6.0.
       Best Regards
       Ann-Sofie Svensson

Hi Ann,
Check this link:
http://help.sap.com/saphelp_erp2004/helpdata/en/d2/36e2791560ed4a96e97a3175694886/content.htm
You can set the number of parallel work processes, or you can run archiving only at night; for example, in Cross-Object Customizing you can set the parameter Max. Duration Hrs, then schedule the archiving job so that it finishes automatically after XX hours.
    Regards,
    William Neira

  • How to define "leading" random number in Infoset fpr parallel processing

    Hello,
    in Bankanalyzer we use an Infoset which consists of a selection across 4 ODS tables to gather data.
No matter which PACKNO fields we check or uncheck in the Infoset definition screen (transaction RSISET), the parallel framework always selects the same PACKNO field from one ODS table.
Unfortunately, the table that is selected by the framework is not suitable, because our
"leading" ODS table, which holds most of our selection criteria, is another one.
How can we "convince" the parallel framework to select our leading table for the specification
of the PACKNO in addition (this would be about 20 times faster due to better select options)?
We even tried to assign "alternate characteristics" to the PACKNOs we do not like to use,
but it seems that Note 999101 only fixes this for non-system fields.
For the random number, however, a different form routine is used in /BA1/LF3_OBJ_INDEX_READF01:
fill_range_random instead of fill_range.
    Has anyone managed to assign the PACKNO of his choice to the infoset selection?
    How?
    Thanks in advance
    Volker

Well, it is a bit more complicated.
ODS one, which the parallel framework selects to deliver the PACKNO,
is about equal in size (~120 GB each) to ODS two, which has two significant fields that cut down the
amount of data to be retrieved.
Currently we execute the generated SQL in the best possible manner (by faking some stats).
The problem is that I'd like to have a statement that has the PACKNO in the very same table.
PACKNO is a generated random number, created especially to be used for parallel processing.
The job starts about 100 slaves.
Each slave gets a packet to be processed from the framework, which is internally represented
by a BETWEEN clause on this PACKNO. This is joined against ODS2, and then the selective fields
can be compared, resulting in 90% of the already fetched rows being discarded.
Basically it goes like:
    select ...
    from
      ods1 T_00,
      ods2 T_01,
      ods3 T_02,
      ods4 T_03
    where
    ... some key equivalence join-conditions ...
    AND  T_00.PACKNO BETWEEN '000000' and '000050' -- very selective on T_00
    AND  T_01.TYPE = '202'  -- selective Value 10% on second table
I'm trying to change this to
AND  T_01.PACKNO BETWEEN '000000' and '000050'
AND  T_01.TYPE = '202'  -- selective Value 10%
so I can use a combined index on T_01 (TYPE, PACKNO).
This would be 10 times more selective on the driving table and, because T_00 would then be joined
for just the rows I need, about 20-30 times faster by calculation.
It really boosts performance when I do this in sqlplus.
Hope this clarifies things a bit.
The problem is that I cannot change the code, neither the part that builds
the packets nor the part that executes the application.
I need to change the Infoset so that the framework decides to build
proper SQL with T_01.PACKNO instead of T_00.PACKNO.
    Thanks a lot
    Volker

  • SAP job not using all dialog processes that are available for parallel processing

Hi Experts,
The customer is running a job which is not using all the dialog processes that are available for parallel processing. It appears to use all of the parallel processes (60) for the first 4-5 minutes of the job and then uses at most about 3-5 processes for the remainder of the job.
    How do I analyze the job to find out the issue from a Basis perspective?
    Thanks,
    Zahra

    Hi Daniel,
    Thanks for replying!
    I don't believe its a standard job.
    I was thinking of starting a trace using ST05 before the job. What do you think?
    Thanks,
    Zahra

  • Number of parallel process definition during data load from R/3 to BI

    Dear Friends,
We are using BI 7.00. We have a requirement in which I should increase the number of parallel processes during data load from R/3 to BI. I want to modify this for a particular DataSource and check. Can experts provide helpful answers to the following questions?
1) When a load is taking place or has taken place, where can we see how many parallel processes that particular load has used?
2) Where should I change the setting for the number of parallel processes for the data load (from R/3 to BI) and not within BI?
3) How does the system work, and what will be the net result of increasing or decreasing the number of parallel processes?
    Expecting Experts help.
    Regards,
    M.M

    Dear Des Gallagher,
Thank you very much for the useful information provided. The following was my observation.
From the posts in this forum, I was given to understand that the setting for a specific DataSource can be done at the InfoPackage and DTP level. I carried out the same and found that there is no change in the load, i.e., the system by default takes only one parallel process even though I maintained 6.
Can you kindly explain the above mentioned point, i.e.:
1) Even though the value is maintained at the InfoPackage level, will the system consider it or not? If not, from which transaction does the system derive the single parallel process?
Actually we wanted to increase the package size, but we failed because I could not understand what values have to be maintained. Can you explain this in detail?
Can you clarify my doubt and provide a solution?
    Regards,
    M.M

  • How to limit number of parallel processes for a query???

    Hi,
I have set table parallelism to a degree of 4, and when I run a query on that table I see in v$session that the query is using 8 parallel processes.
Why is my query using all these processes? Can I limit this number? I think this will cause poor performance if all my parallel processes stay BUSY in v$pq_slave.
    1     1     P000     BUSY     22     0     8     0     2051     10     2     15     0     79179     76884
    2     1     P001     BUSY     22     0     8     0     2054     10     2     15     0     81905     77443
    3     1     P004     BUSY     2     0     1592     0     0     0     0     1592     0     1039     3
    4     1     P005     BUSY     2     0     1592     0     0     0     0     1592     0     1038     4
    5     1     PZ99     BUSY     533     0     0     0     1     3     0     0     0     1071     1107
    6     2     P000     BUSY     14     0     8     0     2053     10     3     15     1     53014     73297
    7     2     P001     BUSY     14     0     8     0     2048     10     2     15     1     51266     73318
    8     2     P002     BUSY     14     0     8     0     2052     10     2     15     2     51043     73271
    9     2     P003     BUSY     14     0     8     0     2053     9     2     15     2     49417     73327
    10     2     P004     BUSY     13     0     8     0     2055     9     2     15     2     68428     12468
    11     2     P005     BUSY     13     0     8     0     2059     10     2     15     1     69968     12473
    12     2     PZ99     BUSY     461     0     0     0     1     3     0     0     0     921     936
    Tks,
    Paulo.

    select /*+ PARALLEL(a,4) */ ...... from owner.table a;
    Or
    ALTER SESSION FORCE PARALLEL DML PARALLEL <degree>
    (But I am not sure whether this will affect the degree of select also ?)

  • Parallel processing not possible (Message number: RSRD186)

    During broadcasting with several parallel processes our system generates the error message:
    Parallel processing not possible (Message number: RSRD186)
On SDN I found Note 1265745, which describes exactly the same problem, but when I try to implement this note in our system SAP gives the message:
The requested SAP Note is either in reworking or is released internally only
Can anyone explain what this means? I really need this correction!

    Hi,
I am also getting the same problem after upgrading from SP16 to SP18 (Java) and Support Package SAPKW70020 (ABAP):
Parallel processing not possible: no processing of 27 package(s)
Message no. RSRD186
Every week at least one broadcasting job fails with the above error in ABAP.
The log entries I found in application.log are:
    ABEND BRAIN (635): Query ZSALESREPORTACTUALCOST could not be opened.
      MSGV1: ZSALESREPORTACTUALCOST#
    #1.#00306E0C1AE7007400002EA40000748800047F1738882EA8#1265637627502#/Applications/BI#sap.com/com.sap.prt.application.rfcframework#com.sap.ip.bi.base.application.message.impl.MessageBase#MTHEIN#79174##n/a##4b6d489014ba11dfb68500306e0c1ae7#SAPEngine_Application_Thread[impl:3]_3##0#0#Fatal#1#com.sap.ip.bi.base.application.message.impl.MessageBase#Plain###A message was generated:
    ABEND RSBOLAP (000): Program error in class SAPMSSY1 method : UNCAUGHT_EXCEPTION
      MSGV1: SAPMSSY1
      MSGV3: UNCAUGHT_EXCEPTION#
    #1.#00306E0C1AE70079000034730000748800047F17388E8DE4#1265637627919#/Applications/BI#sap.com/com.sap.prt.application.rfcframework#com.sap.ip.bi.base.application.message.impl.MessageBase#PMACH#79178##n/a##4c5fbee014ba11dfc2e200306e0c1ae7#SAPEngine_Application_Thread[impl:3]_11##0#0#Error#1#com.sap.ip.bi.base.application.message.impl.MessageBase#Plain###A message was generated:
    ERROR MC (601): Object requested is currently locked by user xxxxxxx
    MSGV1: xxxxxxxx
      MSGV2: E_RSRREPDIR#
    #1.#00306E0C1AE70079000034750000748800047F17388E93FC#1265637627921#/Applications/BI#sap.com/com.sap.prt.application.rfcframework#com.sap.ip.bi.base.application.message.impl.MessageBase#PMACH#79178##n/a##4c5fbee014ba11dfc2e200306e0c1ae7#SAPEngine_Application_Thread[impl:3]_11##0#0#Fatal#1#com.sap.ip.bi.base.application.message.impl.MessageBase#Plain###A message was generated:
Does anybody know the solution?
Thanks & Regards,
    Arun

  • Parallel Processing: Error message 00-250: "No CUA area available"

    Dear colleagues,
    I have implemented a parallel processing with asynchronous RFC for a large data analysis in CO-PC. I think I was able to implement everything properly as described in the sap help: http://help.sap.com/saphelp_nw04/helpdata/en/22/0425c6488911d189490000e829fbbd/content.htm
    But now I face one problem: My jobs are often cancelled with the error message 250 of class 00 "No CUA area available".
    I don't have a clue what this error message means. I couldn't find any help in the SDN, OSS or in the internet. Has anybody an idea how to handle this problem?
    Thx very much for your help!
    Marius

In your program, while doing the CALL FUNCTION ... STARTING NEW TASK, did you check how many work processes are available?
Suppose you have 40K records in your internal table, and in one step you are passing 4K records. Then 10 work processes are required. But if in your group setting you have defined only 9, the last one will fail. So before any 'submit job in new task' or any 'CALL FUNCTION ... STARTING NEW TASK' statement you have to check whether any work processes are available: if so, submit, otherwise wait; see the sketch below.
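For illustration only (the variable names, the gv_snd_jobs/gv_rcv_jobs counters maintained by the dispatching code and the callback routine, and the threshold of 2 free work processes are assumptions):
  DATA: lv_free_wps TYPE i.

  CALL FUNCTION 'SPBT_GET_CURR_RESOURCE_INFO'
    IMPORTING
      free_pbt_wps                = lv_free_wps
    EXCEPTIONS
      internal_error              = 1
      pbt_env_not_initialized_yet = 2
      OTHERS                      = 3.
  IF sy-subrc = 0 AND lv_free_wps > 2.
*   Enough free dialog work processes => start the next task.
  ELSE.
*   Wait until some of the already started tasks have returned, then check again.
    WAIT UNTIL gv_rcv_jobs >= gv_snd_jobs UP TO '1' SECONDS.
  ENDIF.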
    Thanks
    Subhankar

  • Parallel processing in background

    Hi All,
I am processing 1 million records in the background, which takes approximately 10 hours. I want to reduce the time to less than 1 hour and tried using parallel processing. But the tasks run in dialog work processes and give ABAP short dumps due to timeouts.
Are there any other solutions with which I can reduce the total processing time?
Please note that I cannot split the load. I am getting 1 million records from a select query, and after processing all those records in SAP, I am sending them to XI, and XI will post them to the legacy system.
Please note that all other performance tuning has been done.
    Thanks,
    Rajesh.

    Hi Rajesh,
Refer to the sample code for Parallel Processing below.
By doing this your processing time will be highly optimized.
Go through the description given in the code at each level.
This code checks the available work processes and assigns data in packets for processing. This way you save a lot of time, especially when the data runs into millions of records.
    Hope it helps.
    REPORT PARAJOB.
* Data declarations
    DATA: GROUP LIKE RZLLITAB-CLASSNAME VALUE ' ',
    "Parallel processing group.
    "SPACE = group default (all
    "servers)
    WP_AVAILABLE TYPE I, "Number of dialog work processes
    "available for parallel processing
    "(free work processes)
    WP_TOTAL TYPE I, "Total number of dialog work
    "processes in the group
    MSG(80) VALUE SPACE, "Container for error message in
    "case of remote RFC exception.
    INFO LIKE RFCSI, C, "Message text
    JOBS TYPE I VALUE 10, "Number of parallel jobs
    SND_JOBS TYPE I VALUE 1, "Work packets sent for processing
    RCV_JOBS TYPE I VALUE 1, "Work packet replies received
    EXCP_FLAG(1) TYPE C, "Number of RESOURCE_FAILUREs
    TASKNAME(4) TYPE N VALUE '0001', "Task name (name of
    "parallel processing work unit)
    BEGIN OF TASKLIST OCCURS 10, "Task administration
    TASKNAME(4) TYPE C,
    RFCDEST LIKE RFCSI-RFCDEST,
    RFCHOST LIKE RFCSI-RFCHOST,
    END OF TASKLIST.
* Optional call to SPBT_INITIALIZE to check the
* group in which parallel processing is to take place.
* Could be used to optimize sizing of work packets
* (work / WP_AVAILABLE).
CALL FUNCTION 'SPBT_INITIALIZE'
    EXPORTING
    GROUP_NAME = GROUP
    "Name of group to check
    IMPORTING
    MAX_PBT_WPS = WP_TOTAL
    "Total number of dialog work
    "processes available in group
    "for parallel processing
FREE_PBT_WPS = WP_AVAILABLE
    "Number of work processes
    "available in group for
    "parallel processing at this
    "moment
    EXCEPTIONS
    INVALID_GROUP_NAME = 1
    "Incorrect group name; RFC
    "group not defined. See
    "transaction RZ12
    INTERNAL_ERROR = 2
    "R/3 System error; see the
    "system log (transaction
    "SM21) for diagnostic info
    PBT_ENV_ALREADY_INITIALIZED = 3
    "Function module may be
    "called only once; is called
    "automatically by R/3 if you
    "do not call before starting
    "parallel processing
    CURRENTLY_NO_RESOURCES_AVAIL = 4
    "No dialog work processes
    "in the group are available;
    "they are busy or server load
    "is too high
    NO_PBT_RESOURCES_FOUND = 5
    "No servers in the group
    "met the criteria of >
    "two work processes
    "defined.
    CANT_INIT_DIFFERENT_PBT_GROUPS = 6
    "You have already initialized
    "one group and have now tried
    "initialize a different group.
OTHERS = 7.
    CASE SY-SUBRC.
    WHEN 0.
    "Everything’s ok. Optionally set up for optimizing size of
    "work packets.
    WHEN 1.
    "Non-existent group name. Stop report.
    MESSAGE E836. "Group not defined.
    WHEN 2.
    "System error. Stop and check system log for error
    "analysis.
    WHEN 3.
    "Programming error. Stop and correct program.
    MESSAGE E833. "PBT environment was already initialized.
    WHEN 4.
    "No resources: this may be a temporary problem. You
    "may wish to pause briefly and repeat the call. Otherwise
    "check your RFC group administration: Group defined
    "in accordance with your requirements?
    MESSAGE E837. "All servers currently busy.
    WHEN 5.
    "Check your servers, network, operation modes.
WHEN 6.
"Programming error: a different PBT group was already initialized.
ENDCASE.
* Do parallel processing. Use CALL FUNCTION STARTING NEW TASK
* DESTINATION IN GROUP to call the function module that does the
* work. Make a call for each record that is to be processed, or
* divide the records into work packets. In each case, provide the
* set of records as an internal table in the CALL FUNCTION
* keyword (EXPORT, TABLES arguments).
    DO.
    CALL FUNCTION 'RFC_SYSTEM_INFO' "Function module to perform
    "in parallel
    STARTING NEW TASK TASKNAME "Name for identifying this
    "RFC call
    DESTINATION IN GROUP group "Name of group of servers to
    "use for parallel processing.
    "Enter group name exactly
    "as it appears in transaction
    "RZ12 (all caps). You may
    "use only one group name in a
    "particular ABAP program.
    PERFORMING RETURN_INFO ON END OF TASK
    "This form is called when the
    "RFC call completes. It can
    "collect IMPORT and TABLES
    "parameters from the called
    "function with RECEIVE.
    EXCEPTIONS
    COMMUNICATION_FAILURE = 1 MESSAGE msg
    "Destination server not
    "reached or communication
    "interrupted. MESSAGE msg
    "captures any message
    "returned with this
    "exception (E or A messages
    "from the called FM, for
    "example. After exception
    "1 or 2, instead of aborting
    "your program, you could use
    "SPBT_GET_PP_DESTINATION and
    "SPBT_DO_NOT_USE_SERVER to
    "exclude this server from
    "further parallel processing.
    "You could then re-try this
    "call using a different
    "server.
    SYSTEM_FAILURE = 2 MESSAGE msg
    "Program or other internal
    "R/3 error. MESSAGE msg
    "captures any message
    "returned with this
    "exception.
    RESOURCE_FAILURE = 3. "No work processes are
    "currently available. Your
    "program MUST handle this
    "exception.
*   YOUR_EXCEPTIONS = X.  "Add exceptions generated by
    "the called function module
    "here. Exceptions are
    "returned to you and you can
    "respond to them here.
    CASE SY-SUBRC.
    WHEN 0.
    "Administration of asynchronous RFC tasks
    "Save name of task...
    TASKLIST-TASKNAME = TASKNAME.
    "... and get server that is performing RFC call.
    CALL FUNCTION 'SPBT_GET_PP_DESTINATION'
    EXPORTING
    RFCDEST = TASKLIST-RFCDEST
    EXCEPTIONS
    OTHERS = 1.
    APPEND TASKLIST.
    WRITE: / 'Started task: ', TASKLIST-TASKNAME COLOR 2.
    TASKNAME = TASKNAME + 1.
    SND_JOBS = SND_JOBS + 1.
    "Mechanism for determining when to leave the loop. Here, a
    "simple counter of the number of parallel processing tasks.
    "In production use, you would end the loop when you have
    "finished dispatching the data that is to be processed.
    JOBS = JOBS - 1. "Number of existing jobs
    IF JOBS = 0.
    EXIT. "Job processing finished
    ENDIF.
    WHEN 1 OR 2.
    "Handle communication and system failure. Your program must
    "catch these exceptions and arrange for a recoverable
    "termination of the background processing job.
    "Recommendation: Log the data that has been processed when
    "an RFC task is started and when it returns, so that the
    "job can be restarted with unprocessed data.
    WRITE msg.
    "Remove server from further consideration for
    "parallel processing tasks in this program.
    "Get name of server just called...
    CALL FUNCTION 'SPBT_GET_PP_DESTINATION'
    EXPORTING
    RFCDEST = TASKLIST-RFCDEST
    EXCEPTIONS
    OTHERS = 1.
    "Then remove from list of available servers.
    CALL FUNCTION 'SPBT_DO_NOT_USE_SERVER'
    IMPORTING
    SERVERNAME = TASKLIST-RFCDEST
    EXCEPTIONS
    INVALID_SERVER_NAME = 1
    NO_MORE_RESOURCES_LEFT = 2
    "No servers left in group.
    PBT_ENV_NOT_INITIALIZED_YET = 3
    OTHERS = 4.
    WHEN 3.
    "No resources (dialog work processes) available at
    "present. You need to handle this exception, waiting
    "and repeating the CALL FUNCTION until processing
    "can continue or it is apparent that there is a
    "problem that prevents continuation.
    MESSAGE I837. "All servers currently busy.
    "Wait for replies to asynchronous RFC calls. Each
    "reply should make a dialog work process available again.
    IF EXCP_FLAG = SPACE.
    EXCP_FLAG = 'X'.
    "First attempt at RESOURCE_FAILURE handling. Wait
    "until all RFC calls have returned or up to 1 second.
    "Then repeat CALL FUNCTION.
    WAIT UNTIL RCV_JOBS >= SND_JOBS UP TO '1' SECONDS.
    ELSE.
    "Second attempt at RESOURCE_FAILURE handling
    WAIT UNTIL RCV_JOBS >= SND_JOBS UP TO '5' SECONDS.
    "SY-SUBRC 0 from WAIT shows that replies have returned.
    "The resource problem was therefore probably temporary
    "and due to the workload. A non-zero RC suggests that
    "no RFC calls have been completed, and there may be
    "problems.
    IF SY-SUBRC = 0.
    CLEAR EXCP_FLAG.
    ELSE. "No replies
    "Endless loop handling
    ENDIF.
    ENDIF.
    ENDCASE.
    ENDDO.
* Wait for end of job: replies from all RFC tasks.
* Receive remaining asynchronous replies.
    WAIT UNTIL RCV_JOBS >= SND_JOBS.
    LOOP AT TASKLIST.
    WRITE:/ 'Received task:', TASKLIST-TASKNAME COLOR 1,
    30 'Destination: ', TASKLIST-RFCDEST COLOR 1.
    ENDLOOP.
* This routine is triggered when an RFC call completes and
* returns. The routine uses RECEIVE to collect IMPORT and TABLE
* data from the RFC function module.
* Note that the WRITE keyword is not supported in asynchronous
* RFC. If you need to generate a list, then your RFC function
* module should return the list data in an internal table. You
* can then collect this data and output the list at the conclusion
* of processing.
    FORM RETURN_INFO USING TASKNAME.
    DATA: INFO_RFCDEST LIKE TASKLIST-RFCDEST.
    RECEIVE RESULTS FROM FUNCTION 'RFC_SYSTEM_INFO'
    IMPORTING RFCSI_EXPORT = INFO
    EXCEPTIONS
    COMMUNICATION_FAILURE = 1
    SYSTEM_FAILURE = 2.
    RCV_JOBS = RCV_JOBS + 1. "Receiving data
    IF SY-SUBRC NE 0.
* Handle communication and system failure
ELSE.
READ TABLE TASKLIST WITH KEY TASKNAME = TASKNAME.
IF SY-SUBRC = 0. "Register data
TASKLIST-RFCHOST = INFO-RFCHOST.
MODIFY TASKLIST INDEX SY-TABIX.
ENDIF.
ENDIF.
ENDFORM.
    Reward points if that helps.
    Manish
    Message was edited by:
            Manish Kumar

  • Strange behaviour in parallel processing (aRFC)

    Hi,
I have programmed a data extraction program in SAP IS-U. Due to the sheer size of the tables (900 million+) I had to use parallel processing to keep the runtime acceptable.
When running in the QA environment I see a funny behaviour: there are about 49 dialog processes available, but the program never uses more than around 15. In transaction SARFC the settings allow the application server to take a load of 100% of its processes.
Furthermore, I see that in the first few minutes a lot of jobs are being created. Then for a few minutes almost nothing happens (maybe 2 or 3 are running), and then a few minutes later it's back to normal. This cycle repeats over and over and takes around 20 minutes. The Basis people say the system load is not very high, and neither is the DB load.
My questions are:
1) Why does the job count never exceed 15 jobs, even though there are plenty of processes available (65 in total, 49 on my app server)?
    2) Why is the performance so wobbly? I would expect that the slots in SM51 should always be filled with fresh jobs.
    With kind regards,
    Crispian Stones
    P.S.
    The mechanism I use is similar to the following:
    loop at tb_todo into wa_todo.
      call function 'SPBT_GET_CURR_RESOURCE_INFO'
       importing FREE_PBT_WPS = available.
      check available gt 1.
      call function 'Z_MY_EXTRACTOR'
        starting new task my_taskname
        in destination my_destination
        performing my_callback on end of task
        exporting
          i_data = wa_todo.
      if sy-subrc eq 0.
        add 1 to created.
      endif.
    endloop.
    wait until returned ge created.
    form my_callback.
      add 1 to returned.
      receive results from function 'Z_MY_EXTRACTOR'.
    endform.

    Hello,
I am facing a similar issue in one of my parallel processing programs as well. The program, when executed with a data set of 10,000 records, takes 65 minutes to complete. One would expect it to take 650 minutes (or even less) to process a data set of approx. 100,000 records.
However, when I run the program for a file with approx. 100,000 records, the program runs OK initially (i.e., I can see multiple dialog processes getting invoked in SM50), but after a while it starts running on ONLY ONE dialog process. I am not quite sure where, when and why this PARALLEL to SEQUENTIAL switch is happening. Due to this, the program drags on and on and on. I would highly appreciate your suggestions/tips to put this bug to sleep.
    Here is a summary of the logic used...
      w_group = 'BATCH_PARALLEL'.
      w_task  = w_task + 1.
      CALL FUNCTION 'SPBT_INITIALIZE'
       EXPORTING
         group_name                           = w_group
       IMPORTING
         max_pbt_wps                          = w_pr_total   "Total processes
         free_pbt_wps                         = w_pr_avl     "Avail processes
       EXCEPTIONS
         invalid_group_name                   = 1
         internal_error                       = 2
         pbt_env_already_initialized          = 3
         currently_no_resources_avail         = 4
         no_pbt_resources_found               = 5
         cant_init_different_pbt_groups       = 6
         OTHERS                               = 7.
      IF sy-subrc <> 0.
*     Raise error message and quit
        w_wait = c_x.
*   If everything went well, continue processing
      ELSE.
        CLEAR: w_wait.
*   The subroutine that receives results from the parallel FMs will reduce
*   this counter and set the flag W_WAIT once the value is equal to ZERO
    w_count = LINES( data ).
*   Refresh the temporary table that will be populated for every partner
    REFRESH: t_data.
    LOOP AT data.
*     Keep appending data to the temporary table
          APPEND data TO t_data.
          AT END OF partner.
            CLEAR: w_subrc.
            CALL FUNCTION 'Z_PARALLEL_FUNCTION'
              STARTING NEW TASK w_task
              DESTINATION IN GROUP w_group
              PERFORMING process_return ON END OF TASK
              TABLES
                data                  = t_data
              EXCEPTIONS
                communication_failure = 1      "Mandatory for || processing
                system_failure        = 2      "Mandatory for || processing
                RESOURCE_FAILURE      = 3      "Mandatory for || processing
                OTHERS                = 4.
            w_subrc = sy-subrc.
*     Check if everything went well...
            CLEAR: w_rfcdest.
            CASE w_subrc.
              WHEN 0.
*       This variable keeps track of the number of threads initiated. In case
*       all the processes are busy, we should compare this with the variable
*       w_recd (set later in the subroutine 'PROCESS_RETURN'), and wait till
*       w_sent >= w_recd.
        w_sent = w_sent + 1.
*       Track all the tasks initiated.
                CLEAR: wa_tasklist.
                wa_tasklist-taskname = w_task.
                APPEND wa_tasklist TO t_tasklist.
              WHEN 1 OR 2.
*       Populate the error log table and continue to process the rest.
        WHEN OTHERS.
*         There might be a lack of resources. Wait till some processes
*         are freed again. Populate the records back to the main table
                CLEAR: wa_data.
                LOOP AT t_data INTO wa_data.
                  APPEND wa_data TO data.
                ENDLOOP.
                WAIT UNTIL w_recd >= w_sent. "IS THIS THE CULPRIT?
            ENDCASE.
*     Increment the task number
      w_task = w_task + 1.
*     Refresh the temporary table
            REFRESH t_data.
          ENDAT.
        ENDLOOP.
      ENDIF.
* Wait till all the records are returned.
      WAIT UNTIL w_wait = c_x UP TO '120' SECONDS.
    FORM process_return USING p_taskname.                       "#EC CALLED
      REFRESH: t_data_tmp.
      CLEAR  : w_subrc.
* Check the task for which this subroutine is processed!!!
  CLEAR: wa_tasklist.
  READ TABLE t_tasklist INTO wa_tasklist WITH KEY taskname = p_taskname.
* If the task wasn't already processed...
  IF sy-subrc eq 0.
*   Delete the task from the table T_TASKLIST
    DELETE TABLE t_tasklist FROM wa_tasklist.
*   Receive the results back from the function module
        RECEIVE RESULTS FROM FUNCTION 'Z_PARALLEL_FUNCTION'
          TABLES
            address_data          = t_data_tmp
          EXCEPTIONS
            communication_failure = 1      "Mandatory for || processing
            system_failure        = 2      "Mandatory for || processing
            RESOURCE_FAILURE      = 3      "Mandatory for || processing
            OTHERS                = 4.
*   Store sy-subrc in a temporary variable.
    w_subrc = sy-subrc.
*   Update the counter (number of tasks/jobs/threads received)
    w_recd = w_recd + 1.
*   Check the returned values
    IF w_subrc EQ 0.
*     Do necessary processing!!!
    ENDIF.
*   Subtract the number of records that were returned back from the
*   total number of records to be processed
    w_count = w_count - LINES( t_data_tmp ).
*   If the counter is ZERO, set W_WAIT.
        IF w_count = 0.
          w_wait = c_x.
        ENDIF.
      ENDIF.
    ENDFORM.                    " process_return
    Thanks,
    Muthu

  • Parallel processing question

    I have written a module to read data from an external source, convert input to IDOCS, and process these IDOCS.
    Due to performance of IDOC processing, I decided to use parallel processing to spread the IDOC creation load among several processes using server groups.
My main program sets up the parallel processing environment with a call to function module SPBT_INITIALIZE, then makes repeated calls to a function module that processes the IDOCS, using the STARTING NEW TASK and DESTINATION IN GROUP xxx keywords.
    What I'm seeing is the main process running as a background task and one dialog process running the parallel function module, even though there are still 8 or 9 other dialog processes available.
    What I expected to see was several dialog process running my child function modules, not just the one.
    Does anyone know why the other dialog processes are not being used?
    Thanks for any input,
    Dorian.

    Thomas:
    I'm logging any errors that occur; I'm not seeing any resource failures - in fact no errors at all, other than expected application data errors. It seems that the RFC calls are all being made in a single child process that queues up the parallel jobs and uses just one dialog process to run them all. I expected to see as many dialog tasks being used as were available.
    As far as the RFC parameters go - are you referring to the RZ10 values? I looked at all of the parameters containing "rfc" as part of their name, and nothing looked as though it was restricting the parallel task behaviour. Do you have any advice as to suitable settings?
I'm also wondering if what I am seeing is just the way SAP is supposed to work? Although I expected to see lots of child processes running in multiple dialog processes if they are available, maybe by design only one remote process per server is allowed? I checked the documentation I could find on the "STARTING NEW TASK" keyword, and nowhere does it say that multiple processes will be started on each server in the server group; only that a child process will NOT be started if the number of unused processes falls below a defined threshold.

  • Parallel processing: Help!

    Hi Experts!
I need to use parallel processing for BAPI_INQUIRY_CREATEFROMDATA2, since it processes huge amounts of data.
How could I actually start parallel processing using this BAPI?
    Thanks!

    Hi,
Check this example. This code is in the ABAP help: press F1 on the CALL FUNCTION keyword, then choose the link
"CALL FUNCTION func STARTING NEW TASK taskname".
There you will find the code that I mention here.
    DATA: MSG_TEXT(80) TYPE C. "Message text
* Asynchronous call to Transaction SM59 -->
* create a new session
    CALL FUNCTION 'ABAP4_CALL_TRANSACTION' STARTING NEW TASK 'TEST'
      DESTINATION 'NONE'
      EXPORTING
          TCODE = 'SM59'
      EXCEPTIONS
        COMMUNICATION_FAILURE = 1 MESSAGE MSG_TEXT
        SYSTEM_FAILURE        = 2 MESSAGE MSG_TEXT.
      IF SY-SUBRC NE 0.
        WRITE: MSG_TEXT.
      ELSE.
        WRITE: 'O.K.'.
      ENDIF.
* Using RFC groups to parallelize function module calls (RFC parallel processing)
    TYPES: BEGIN OF TASKLIST_TYPE,
           TASKNAME(4) TYPE C, "Task administration
       RFCDEST LIKE RFCSI-RFCDEST,
           END OF TASKLIST_TYPE.
    DATA: INFO LIKE RFCSI, C,  "Message text
          JOBS TYPE I VALUE 10,  "Number of parallel jobs
          SND_JOBS TYPE I VALUE 1,  "Sent jobs
          RCV_JOBS TYPE I VALUE 1,  "Received replies
          EXCP_FLAG(1) TYPE C,  "Number of RESOURCE_FAILUREs
          TASKNAME(4) TYPE N VALUE '0001',  "Task name administration
          TASKLIST TYPE TABLE OF TASKLIST_TYPE,
          WA_TASKLIST TYPE TASKLIST_TYPE.
    DO.
      CALL FUNCTION 'RFC_SYSTEM_INFO'
           STARTING NEW TASK TASKNAME DESTINATION IN GROUP DEFAULT
           PERFORMING RETURN_INFO ON END OF TASK
           EXCEPTIONS
             COMMUNICATION_FAILURE = 1
             SYSTEM_FAILURE        = 2
             RESOURCE_FAILURE      = 3.
      CASE SY-SUBRC.
        WHEN 0.
*     Administration of asynchronous tasks
          WA_TASKLIST-TASKNAME = TASKNAME.
          CLEAR WA_TASKLIST-RFCDEST.
          APPEND WA_TASKLIST TO TASKLIST.
          WRITE: /  'Started task: ', WA_TASKLIST-TASKNAME COLOR 2.
          TASKNAME = TASKNAME + 1.
          SND_JOBS = SND_JOBS + 1.
          JOBS     = JOBS - 1.  "Number of existing jobs
          IF JOBS = 0.
            EXIT.  "Job processing finished
          ENDIF.
        WHEN 1 OR 2.
*     Handling of communication and system failure
      WHEN 3.  "No resources available at present
*       Receive reply to asynchronous RFC calls
        IF EXCP_FLAG = SPACE.
          EXCP_FLAG = 'X'.
*         First attempt for RESOURCE_FAILURE handling
          WAIT UNTIL RCV_JOBS >= SND_JOBS UP TO '0.01' SECONDS.
        ELSE.
*         Second attempt for RESOURCE_FAILURE handling
             WAIT UNTIL RCV_JOBS >= SND_JOBS UP TO '0.1' SECONDS.
          ENDIF.
          IF SY-SUBRC = 0.
            CLEAR EXCP_FLAG.  "Reset flag
          ELSE.  "No replies
            "Endless loop handling
          ENDIF.
        ENDCASE.
    ENDDO.
* Receive remaining asynchronous replies
WAIT UNTIL RCV_JOBS >= SND_JOBS.
LOOP AT TASKLIST INTO WA_TASKLIST.
  WRITE:/   'Received task:', WA_TASKLIST-TASKNAME COLOR 1,
        30  'Destination: ', WA_TASKLIST-RFCDEST COLOR 1.
ENDLOOP.
    FORM RETURN_INFO USING TASKNAME.
      RECEIVE RESULTS FROM FUNCTION 'RFC_SYSTEM_INFO'
        IMPORTING RFCSI_EXPORT = INFO
        EXCEPTIONS
          COMMUNICATION_FAILURE = 1
          SYSTEM_FAILURE        = 2.
      RCV_JOBS = RCV_JOBS + 1.  "Receiving data
        IF SY-SUBRC NE 0.
*     Handling of communication and system failure
      ELSE.
        READ TABLE TASKLIST WITH KEY TASKNAME = TASKNAME
                            INTO WA_TASKLIST.
        IF SY-SUBRC = 0.  "Register data
          WA_TASKLIST-RFCDEST = INFO-RFCDEST.
          MODIFY TASKLIST INDEX SY-TABIX FROM WA_TASKLIST.
        ENDIF.
      ENDIF.
    ENDFORM.
    Thanks,
    Naren
