Build parallel processing

Hi,
I want to build a simple workflow (WF) with parallel processing, so that the mail is sent to two users simultaneously.
I need step-by-step help.
Regards

Hi,
Here, my three users can see the material in their work items. But when one user executes his work item, the other users' work items are automatically deleted. That should not happen.
My requirement is this: when an end user (A1) creates a material, a workflow should be triggered. After creation, the material should be viewable by three higher-level persons from different departments, i.e. FI, SD and PP:
A1 (end user, creator) -
> A2, A3, A4 (these persons can view the material as a work item in their respective inboxes)
My problem is that the work item goes to A2, A3 and A4, but as soon as any one of them executes his work item, the other two work items are automatically deleted, and this should not happen.
I want every person to be able to view the material in display mode at any time and in any sequence. Please help me.
I have done a lot of R&D but did not find a solution.
Regards,
Sandeep Jadhav

Similar Messages

  • Parallel processing of mass data : sy-subrc value is not changed

    Hi,
    I have used parallel processing of mass data via CALL FUNCTION ... STARTING NEW TASK. In my function module I handle the exceptions and finally raise an application-specific classic exception, to be handled in my main report program. Somehow sy-subrc is not changed and always returns 0, even if the exception is raised.
    Can anyone help me with this?
    Thanks & Regards,
    Nitin

    Hi Silky,
    I've built a block of code to explain this.
      DATA: ls_edgar TYPE zedgar,
            l_task(40).
      DELETE FROM zedgar.
      COMMIT WORK.
      l_task = 'task1'.
      ls_edgar-matnr = '123'.
      ls_edgar-text = 'qwe'.
      CALL FUNCTION 'Z_EDGAR_COMMIT_ROLLBACK' STARTING NEW TASK l_task PERFORMING f_go ON END OF TASK
        EXPORTING
          line = ls_edgar.
      l_task = 'task2'.
      ls_edgar-matnr = 'abc'.
      ls_edgar-text = 'def'.
      CALL FUNCTION 'Z_EDGAR_COMMIT_ROLLBACK' STARTING NEW TASK l_task PERFORMING f_go ON END OF TASK
        EXPORTING
          line = ls_edgar.
      l_task = 'task3'.
      ls_edgar-matnr = '456'.
      ls_edgar-text = 'xyz'.
      CALL FUNCTION 'Z_EDGAR_COMMIT_ROLLBACK' STARTING NEW TASK l_task PERFORMING f_go ON END OF TASK
        EXPORTING
          line = ls_edgar.
    *&      Form  f_go
    FORM f_go USING p_c TYPE ctype.
      RECEIVE RESULTS FROM FUNCTION 'Z_EDGAR_COMMIT_ROLLBACK' EXCEPTIONS err = 2.
      IF sy-subrc = 2.
    *this won't affect the LUW of the received function
        ROLLBACK WORK.
      ELSE.
    *this won't affect the LUW of the received function
        COMMIT WORK.
      ENDIF.
    ENDFORM.                    "f_go
    and the function is:
    FUNCTION z_edgar_commit_rollback.
    *"*"Interface local:
    *"  IMPORTING
    *"     VALUE(LINE) TYPE  ZEDGAR
    *"  EXCEPTIONS
    *"      ERR
      MODIFY zedgar FROM line.
      IF line-matnr CP 'a*'.
    *comment raise or rollback/commit to test
    *    RAISE err.
        ROLLBACK WORK.
      ELSE.
        COMMIT WORK.
      ENDIF.
    ENDFUNCTION.
    OK.
    In your main program you have a Logical Unit of Work (LUW), which consists of an application transaction and is associated with a database transaction. Once you start a new task, you are creating an independent LUW with its own database transaction.
    So if you do a COMMIT or ROLLBACK in your function, the effect is only on the records you are processing in that function.
    There is a way to capture the event when this LUW concludes in the main LUW: the PERFORMING ... ON END OF TASK addition. In that form routine you can get the result of the function, but you cannot commit or roll back the function's LUW, since that has already implicitly happened when the function concluded. You can test this by commenting the code I've supplied accordingly.
    So, if you want to roll back the LUW of the function, you had better do it inside the function itself.
    I don't think this matches your question exactly, but maybe it leads you onto the right track. Give me more details if it doesn't.
    Hope it helps,
    Edgar
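    Edgar's point generalizes to asynchronous calls in any environment: an exception raised inside the parallel unit never changes the caller's status flag directly; it only becomes visible when the result is received in the completion handler. A language-neutral sketch in Python (the ABAP mechanics differ; this only mirrors the control flow):

```python
from concurrent.futures import ThreadPoolExecutor

def worker(matnr):
    # analogous to RAISE err inside the remote function module
    if matnr.startswith("a"):
        raise ValueError("err")
    return matnr.upper()

status = 0  # stands in for sy-subrc in the calling program

with ThreadPoolExecutor() as pool:
    future = pool.submit(worker, "abc")
    # the caller's status is untouched by the worker's exception ...
    assert status == 0
    # ... it only surfaces when the result is received,
    # analogous to RECEIVE RESULTS ... EXCEPTIONS err = 2
    try:
        future.result()
    except ValueError:
        status = 2

print(status)
```

    As in the ABAP case, checking the status anywhere other than the receive step reads the stale value 0.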

  • How to define "leading" random number in Infoset for parallel processing

    Hello,
    in Bankanalyzer we use an Infoset which consists of a selection across 4 ODS tables to gather data.
    No matter which PACKNO fields we check or uncheck in the infoset definition screen (TA RSISET), the parallel frameworks always selects the same PACKNO field from one ODS table.
    Unfortunately, the table that is selected by the framework is not suitable, because our
    "leading" ODS table which holds most of our selection criteria is another one.
    How to "convince" the parallel framework to select our leading table for the specification
    of the PACKNO in addition (this would be times 20 faster due to better select options).
    We even tried to assign "alternate characteristics" to the packnos we do not liek to use,
    but it seems that note 999101 just fixes this for non-system-fields.
    But for the random number a diffrent form routine is used in /BA1/LF3_OBJ_INDEX_READF01
    fill_range_random instead of fill_range.
    Has anyone managed to assign the PACKNO of his choice to the infoset selection?
    How?
    Thanks in advance
    Volker

    Well, it is a bit more complicated.
    ODS one, which the parallel framework selects as the one to deliver the PACKNO, is about equal in size (~120 GB each) to ODS two, which has two significant fields that cut down the amount of data to be retrieved.
    Currently we execute the generated SQL in the best possible manner (by faking some statistics). The problem is that I'd like a statement that has the PACKNO in that very same table.
    PACKNO is a random number generated specifically for parallel processing. The job starts about 100 slaves, and each slave gets a packet to be processed from the framework, internally represented by a BETWEEN clause on this PACKNO. This is joined against ODS two, and then the selective fields can be compared, with the result that 90% of the already fetched rows can be discarded.
    Basically it goes like this:
    select ...
    from
      ods1 T_00,
      ods2 T_01,
      ods3 T_02,
      ods4 T_03
    where
    ... some key equivalence join-conditions ...
    AND  T_00.PACKNO BETWEEN '000000' and '000050' -- very selective on T_00
    AND  T_01.TYPE = '202'  -- selective Value 10% on second table
    I'm trying to change this to
    AND  T_01.PACKNO BETWEEN '000000' and '000050'
    AND  T_01.TYPE = '202'  -- selective Value 10%
    so that I can use a combined index on T_01 (TYPE, PACKNO).
    This would be about 10 times more selective on the driving table, and because T_00 would be joined for just the rows I need, an estimated 20-30 times faster overall. It really boosts performance when I do this in SQL*Plus.
    I hope this clarifies things a bit.
    The problem is that I cannot change the code, neither the part that builds the packets nor the part that executes the application. I need to change the Infoset so that the framework decides to build proper SQL with T_01.PACKNO instead of T_00.PACKNO.
    Thanks a lot
    Volker
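    The packet mechanism Volker describes (a value range sliced into contiguous BETWEEN intervals, one per slave) can be sketched generically. The helper below is hypothetical and only illustrates the slicing, not the actual framework routine:

```python
def build_packets(lo, hi, n_slaves):
    """Split the PACKNO value range [lo, hi] into contiguous
    BETWEEN intervals, one packet per slave (illustrative sketch
    of what the parallel framework does internally)."""
    total = hi - lo + 1
    size, rest = divmod(total, n_slaves)
    packets, start = [], lo
    for i in range(n_slaves):
        # the first `rest` packets take one extra value each
        end = start + size - 1 + (1 if i < rest else 0)
        packets.append((start, end))
        start = end + 1
    return packets

# e.g. PACKNO 0..999 spread over 4 slaves
print(build_packets(0, 999, 4))  # [(0, 249), (250, 499), (500, 749), (750, 999)]
```

    Each tuple then becomes one slave's `T_xx.PACKNO BETWEEN low AND high` predicate, which is exactly why the table carrying the PACKNO determines which table drives the join.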

  • Java Proxy Generation not working - Support for Parallel Processing

    Hi Everyone,
    As per SAP Note 1230721 (Java Proxy Generation - Support for Parallel Processing), when we generate a Java proxy from an interface we are supposed to get two archives: one for serial processing and another, suffixed with "PARALLEL", for parallel processing of Java proxies in the JPR.
    https://websmp230.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=1230721
    We are on the correct patch level as per the Note. However, when we generate the Java proxy from the IR for an outbound interface, it generates only one ZIP archive (whose name we provide ourselves in the "create new archive" section). This does not enable parallel processing of the messages in the JPR.
    Could you please help me with this issue and explain how the archives can be generated for parallel processing?
    Thanks & Regards,
    Rosie Sasidharan.

    Hi,
    Thanks a lot for your reply, Prateek.
    I have already checked SAP Note 1142580 ("Java Proxy is not processing messages in parallel"), where they ask you to modify ejb-jar.xml. However, after making the change in ejb-jar.xml, I get the following error while building the EAR:
    Error! The state of the source cache is INCONSISTENT for at least one of the request DCs. The build might produce incorrect results.
    Then, on going through SAP Note 1142580 again, I realised that SAP Note 1230721 should also be applied, since it is needed for generating the Java proxy from message interfaces in the IR for parallel processing.
    Kindly help me if any of you have worked on such a scenario.
    Thanks in advance,
    Regards,
    Rosie Sasidharan.

  • Parallel process in LabVIEW

    I have an image that I split in two, left and right, and both halves go through the same process of plotting a line. In order to see the process for both images, I have to do parallel processing to see the lines on both the left and the right graph.
    I put a while loop on both the left and right image processes, but I still get the same output. Can someone help me?

    As a rule of thumb, copy-pasting code is usually not a good thing. As stated above, there is not much processing here, so there is no need to do it in parallel. I would build the two arrays you have into a 2-D array, then use an auto-indexing For Loop with all the identical processing in it. From there you can build the x-input and y-input arrays for your graphs and feed them to your XY graphs. Hopefully this is clear. If you post your code in version 8.5, I can draw it up for you.
    CLA, LabVIEW Versions 2010-2013
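    The advice above (avoid copy-pasted branches; run both halves through one indexed loop with the identical processing inside) translates to any language. A Python sketch, with a trivial least-squares line fit standing in for the plotting step:

```python
def fit_line(points):
    """Least-squares slope/intercept; stands in for the identical
    line-plotting step applied to each image half."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

# the "2-D array" of both halves, processed by one auto-indexing loop
left = [(0, 1), (1, 3), (2, 5)]    # y = 2x + 1
right = [(0, 0), (1, 2), (2, 4)]   # y = 2x
results = [fit_line(half) for half in (left, right)]
print(results)  # [(2.0, 1.0), (2.0, 0.0)]
```

    One loop body, one place to fix bugs; the per-half results come out as an indexed array just as an auto-indexing For Loop would deliver them.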

  • Parallel process in Application Engine

    Could anyone explain to me what a parallel process in Application Engine is, and where a temporary table is used?
    Could you give me an example?

    Parallel processing is used when considerable amounts of data must be updated or processed within a limited amount of time, or batch window. In most cases, parallel processing is more efficient in environments containing partitioned data.
    To use parallel processing, partition the data between multiple concurrent runs of a program, each with its own dedicated version of a temporary table (for example, PS_MYAPPLTMP). If you have a payroll batch process, you could divide the employee data by last name. For example, employees with last names beginning with A through M get inserted into PS_MYAPPLTMP1; employees with last names beginning with N-Z get inserted into PS_MYAPPLTMP2.
    To use two instances of the temporary table, you would define your program (say, MYAPPL) to access one of two dedicated temporary tables. One execution would use A-M and the other N-Z.
    The Application Engine program invokes logic to pick one of the available instances. After each program instance gets matched with an available temporary table instance, the %Table meta-SQL construct uses the corresponding temporary table instance. Run control parameters passed to each instance of the MYAPPL program enable it to identify which input rows belong to it, and each program instance inserts the rows from the source table into its assigned temporary table instance using %Table.
    (Diagram: multiple program instances running against multiple temporary table instances.)
    There is no simple switch or check box that enables you to turn parallel processing on and off. To implement parallel processing, you must complete the following set of tasks. With each task, you must consider details regarding your specific implementation.
    1. Define and save temporary table records in PeopleSoft Application Designer. (You don't need to run the SQL Build process at this point.)
    2. In PeopleSoft Application Engine, assign temporary tables to Application Engine programs, and set the instance counts dedicated to each program.
    3. Employ the %Table meta-SQL construct so that PeopleSoft Application Engine can resolve table references to the assigned temporary table instance dynamically at runtime.
    4. Set the number of total and online temporary table instances on the PeopleTools Options page.
    5. Build the temporary table records in PeopleSoft Application Designer by running the SQL Build process.
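    The A-M / N-Z split described above can be sketched generically: each program instance receives run-control bounds and writes only its own rows into its own dedicated "temporary table". A Python stand-in (the employee rows and table names are illustrative only, not PeopleSoft objects):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical employee rows; names and values are illustrative only.
employees = [("Adams", 100), ("Miller", 200), ("Novak", 300), ("Young", 400)]

# Each program instance gets its own dedicated "temporary table"
# (stand-ins for PS_MYAPPLTMP1 / PS_MYAPPLTMP2).
temp_tables = {1: [], 2: []}

def run_instance(instance, lo, hi):
    # Run-control parameters (lo, hi) tell this instance which
    # input rows belong to it, like the A-M / N-Z split above.
    for name, salary in employees:
        if lo <= name[0].upper() <= hi:
            temp_tables[instance].append((name, salary))

# two concurrent runs of the same program, partitioned by last name
with ThreadPoolExecutor() as pool:
    pool.submit(run_instance, 1, "A", "M")
    pool.submit(run_instance, 2, "N", "Z")

print(sorted(temp_tables[1]), sorted(temp_tables[2]))
```

    Because the partitions are disjoint and each instance touches only its own table, the runs never contend with each other, which is the whole point of dedicated temporary table instances.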

  • Too many parallel processes

    Hi
    I have to build a process chain for cube 0SD_C03, and the datasources are:
    2LIS_12_VCITM
    2LIS_12_VCHDR
    2LIS_12_V_ITM
    2LIS_12_VDHDR
    2LIS_12_VAITM
    2LIS_12_VDITM
    2LIS_12_VADHDR
    Now, after linking the "Delete index" process to the individual loading processes (InfoPackages), the message I get in the check view is "Too many parallel processes for chosen server", and the procedure suggested by the system is "Reduce the number of parallel processes in the chain or include sub-chains".
    How can I reduce the number of processes? Is there an alternative way of building this flow that avoids the warning messages?
    Even though these are only warnings, what is the correct way of building this process chain without getting any warning messages?

    Hi,
    Based on the dependencies, you should run at most 3 parallel processes at a time, which is what we do in our project.
    Check the scheduled time of each process chain that fetches data from the source system (InfoPackage) and reschedule them so that they do not all execute at the same time (a maximum of 3), then try again.
    Regards
    BVR
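    BVR's cap of three concurrent loads can be sketched generically: a worker pool of size three guarantees that no more than three InfoPackage loads run at once, no matter how many are queued. A Python stand-in (the datasource names are just labels; the actual load is a placeholder):

```python
from concurrent.futures import ThreadPoolExecutor
import threading

datasources = ["2LIS_12_VCITM", "2LIS_12_VCHDR", "2LIS_12_V_ITM",
               "2LIS_12_VDHDR", "2LIS_12_VAITM", "2LIS_12_VDITM",
               "2LIS_12_VADHDR"]

running = 0
peak = 0
lock = threading.Lock()

def load(ds):
    """Placeholder for one InfoPackage load; tracks concurrency."""
    global running, peak
    with lock:
        running += 1
        peak = max(peak, running)
    # ... the actual data load would happen here ...
    with lock:
        running -= 1
    return ds

# max_workers=3 enforces "at most 3 parallel processes"
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(load, datasources))

print(f"loaded {len(results)} datasources, peak parallelism {peak}")
```

    In a process chain the same effect is achieved structurally, by chaining loads after one another or grouping them into sub-chains instead of linking all seven in parallel.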

  • Parallel Processing (RSEINB00)

    Hi,
    I am trying to achieve parallel processing for the scenario below:
    I am trying to create a child job for processing each file.
    So if I have 5 files, I will have one main job and 5 child jobs. I would appreciate it if anyone who has come across such a scenario could help.
    Each file can contain two types of data: one set for transferring to the application server and another set for posting IDocs.
    LOOP AT t_files_to_proc INTO wl_files_to_proc.
    *-- This perform builds two sets of data:
    *--   Data Set 1 for transferring the file onto the application server
    *--   Data Set 2 for posting IDocs using RSEINB00
      PERFORM build_table_data.
    *-- Data Set 1: transfer the file onto the application server
      PERFORM transfer_data_to_appserver.
    *-- Data Set 2: post IDocs using RSEINB00
      PERFORM submit_rseinb00.
    ENDLOOP.

    Hi Rao,
    here is a sample; adapt it to your needs:
    [Easily implement parallel processing in online and batch processing |https://wiki.sdn.sap.com/wiki/display/Snippets/Easilyimplementparallelprocessinginonlineandbatchprocessing]
    Regards,
    Clemens
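    The main-job/child-job split the question describes can also be sketched generically: the main job spawns one worker per file and waits for all of them. A Python stand-in (the per-file steps are reduced to placeholders for the transfer and RSEINB00 submit):

```python
from concurrent.futures import ThreadPoolExecutor

files = ["file1.dat", "file2.dat", "file3.dat", "file4.dat", "file5.dat"]

def child_job(filename):
    # stands in for build_table_data, transfer_data_to_appserver
    # and submit_rseinb00 for one file
    transferred = f"{filename} -> app server"
    posted = f"{filename} -> RSEINB00"
    return transferred, posted

# the "main job" spawns one child per file and collects all results
with ThreadPoolExecutor(max_workers=len(files)) as pool:
    results = list(pool.map(child_job, files))

print(len(results))  # one result pair per file
```

    The key design point mirrors the ABAP case: the loop over files moves out of the sequential body and into the dispatch, so five files yield five independent children under one coordinating parent.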

  • Parallel processing - some questions

    Hi,
    I'm about to develop a class for task management.
    It should have methods like GET_TASK, returning a task ID; TASK_RETURN, importing the ID of a task whose results were received; and possibly TASK_FAIL, importing the ID of a task for which CALL FUNCTION ... STARTING NEW TASK ended with a bad sy-subrc.
    The idea behind it: a new task is started by the system in a dialog process, so the dialog time-limit restriction applies. I therefore have to split the whole workload into more small packets than there are processes available, which means I have to reuse a task after it has returned successfully.
    My assumptions are that I cannot use more tasks than there are processes initially available, and that I can reuse a task after it has successfully returned its results.
    I would like ideas and information on this beyond the SAP documentation.
    TIA.
    Regards,
    Clemens

    ... and here comes the program (no formatting is available once the thread has been marked "solved").
    The program retrieves billing documents with function BAPI_BILLINGDOC_GETLIST; this was the only BAPI I quickly found that has ranges as import parameters, which allows handing packages to the tasks.
    *& Report  ZZZPARTEST                                                  *
    REPORT  zzzpartest.
    PARAMETERS:
      p_dbcnt                                 TYPE sydbcnt DEFAULT 1010,
      p_pacsz                                 TYPE sydbcnt DEFAULT 95.
    CONSTANTS:
      gc_function                             TYPE tfdir-funcname
        VALUE 'BAPI_BILLINGDOC_GETLIST'.
    DATA:
      gt_bapivbrksuccess                      TYPE TABLE OF
        bapivbrksuccess,
      gv_activ                                TYPE i,
      gv_rcv                                  TYPE i,
      gv_snd                                  TYPE i,
      BEGIN OF ls_intval,
        task                                  TYPE numc4,
        idxfr                                 TYPE i,
        idxto                                 TYPE i,
        activ                                 TYPE flag,
        fails                                 TYPE i,
      END OF ls_intval,
      gt_intval                               LIKE TABLE OF ls_intval.
    START-OF-SELECTION.
      PERFORM paralleltest.
    *       CLASS task DEFINITION
    CLASS task DEFINITION.
      PUBLIC SECTION.
        CLASS-METHODS:
          provide
            RETURNING
              value(name)                     TYPE numc4,
          return
            IMPORTING
              name                            TYPE numc4,
          initialize
            RETURNING
              value(group)                    TYPE rzllitab-classname.
      PRIVATE SECTION.
        CLASS-DATA:
          gv_group                            TYPE rzllitab-classname,
          BEGIN OF ls_task,
          name                                TYPE numc4,
          used                                TYPE flag,
          END OF ls_task,
          gt_task                             LIKE TABLE OF ls_task.
    ENDCLASS.                    "itab DEFINITION
    ***       CLASS itab IMPLEMENTATION ***
    CLASS task IMPLEMENTATION.
      METHOD initialize.
        DATA:
          lv_max                              TYPE i,
          lv_inc                              TYPE numc7,
          lv_free                             TYPE i.
        CHECK gt_task IS INITIAL.
        SELECT classname
          INTO gv_group
          FROM rzllitab UP TO 1 ROWS
          WHERE grouptype                     = 'S'.
        ENDSELECT.
        CALL FUNCTION 'SPBT_INITIALIZE'
             EXPORTING
                  group_name                     = gv_group
             IMPORTING
                  max_pbt_wps                    = lv_max
                  free_pbt_wps                   = lv_free
             EXCEPTIONS
                  invalid_group_name             = 1
                  internal_error                 = 2
                  pbt_env_already_initialized    = 3
                  currently_no_resources_avail   = 4
                  no_pbt_resources_found         = 5
                  cant_init_different_pbt_groups = 6
                  OTHERS                         = 7.
        IF sy-subrc                           <> 0.
        MESSAGE ID sy-msgid                   TYPE sy-msgty NUMBER sy-msgno
                                   WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
        ENDIF.
        SUBTRACT 2 FROM lv_free.
        IF lv_free >= 1.
          DO lv_free TIMES.
            ls_task-name                        = sy-index.
            APPEND ls_task TO gt_task.
          ENDDO.
          group                                 = gv_group.
          MESSAGE s000(r1)
            WITH
            'Parallelverarbeitung benutzt'
            lv_free
            'Prozesse in Gruppe'
            gv_group.
        ELSE.
          MESSAGE e000(r1)
            WITH
            'Parallelverarbeitung abgebrochen,'
            lv_free
            'Prozesse in Gruppe'
            gv_group.
        ENDIF.
      ENDMETHOD.                    "initialize
      METHOD provide.
        FIELD-SYMBOLS:
          <task>                              LIKE ls_task.
        IF  gv_group IS INITIAL.
          MESSAGE e000(r1)
            WITH 'Task group not initialized'.
        ENDIF.
        LOOP AT gt_task ASSIGNING <task>
          WHERE used IS initial.
          EXIT.
        ENDLOOP.
        CHECK sy-subrc                        = 0.
        <task>-used                           = 'X'.
        name                                  = <task>-name.
      ENDMETHOD.
      METHOD return.
        LOOP AT gt_task INTO ls_task
          WHERE
          name                                = name
          AND used                            = 'X'.
          DELETE gt_task.
        ENDLOOP.
        IF sy-subrc                           = 0.
          CLEAR ls_task-used.
          APPEND ls_task TO gt_task.
        ELSE.
    * fatal error
        ENDIF.
      ENDMETHOD.
    ENDCLASS.                    "itab IMPLEMENTATION
    *&      Form  paralleltest
    FORM paralleltest.
      DATA:
      ls_bapi_ref_doc_range                   TYPE bapi_ref_doc_range,
      lv_done                                 TYPE flag,
      lv_group                                TYPE rzllitab-classname,
      lv_task                                 TYPE numc4,
      lv_msg                                  TYPE text255,
      lv_grid_title                           TYPE lvc_title,
      lv_tfill                                TYPE sytfill,
      lv_vbelv                                TYPE vbelv,
      lv_npacs                                TYPE i,
      lt_vbelv                                TYPE SORTED TABLE OF vbelv
        WITH UNIQUE KEY table_line,
      lv_mod                                  TYPE i.
      FIELD-SYMBOLS:
        <intval>                              LIKE LINE OF gt_intval.
    * build intervals
      SELECT vbelv  INTO lv_vbelv
        FROM vbfa.
        INSERT lv_vbelv INTO TABLE lt_vbelv.
        CHECK sy-subrc = 0.
        ADD 1 TO lv_tfill.
        CHECK:
          p_dbcnt                             > 0,
          lv_tfill                            >= p_dbcnt.
        EXIT.
      ENDSELECT.
      DESCRIBE TABLE lt_vbelv LINES lv_tfill.
      IF (
           p_pacsz                            < p_dbcnt OR
           p_dbcnt                            = 0
          ) AND
           p_pacsz                            > 0.
    *        p_dbcnt                              > 0 ).
        lv_npacs                              = lv_tfill DIV p_pacsz.
        lv_mod                                = lv_tfill MOD p_pacsz.
        IF lv_mod                             <> 0.
          ADD 1 TO lv_npacs.
        ENDIF.
        DO lv_npacs TIMES.
          ls_intval-idxfr                     = ls_intval-idxto + 1.
          ls_intval-idxto                     = ls_intval-idxfr - 1
                                              + p_pacsz.
          IF ls_intval-idxto                  > lv_tfill.
            ls_intval-idxto                   = lv_tfill.
          ENDIF.
          APPEND ls_intval TO gt_intval.
        ENDDO.
      ELSE.
        ls_intval-idxfr                       = 1.
        ls_intval-idxto                       = lv_tfill.
        APPEND ls_intval TO gt_intval.
      ENDIF.
      WHILE lv_done IS INITIAL.
    * find an interval to be processed
        LOOP AT gt_intval ASSIGNING <intval>
          WHERE activ                         = space
            AND fails BETWEEN 0 AND  5.
          EXIT.
        ENDLOOP.
        IF sy-subrc                           <> 0.
    * no inactive unprocessed interval. All complete, or must we wait?
    * check for intervals with unsuccessful tries
          LOOP AT gt_intval ASSIGNING <intval>
            WHERE fails BETWEEN 0 AND  5.
            EXIT.
          ENDLOOP.
          IF sy-subrc                         = 0.
    * wait until all started processes have been received.
    * Note: No receive is executed without WAIT
            WAIT UNTIL gv_activ IS INITIAL UP TO 600 SECONDS.
          ELSE.
    * all done
            lv_done                           = 'X'.
          ENDIF.
          UNASSIGN <intval>.
        ENDIF.
    * process interval if provided
        IF <intval> IS ASSIGNED.
          WHILE lv_task IS INITIAL.
            IF lv_group IS INITIAL.
    * init parallel processing
              lv_group = task=>initialize( ).
            ENDIF.
    * get unused task
            lv_task                           = task=>provide( ).
            CHECK lv_task IS INITIAL.
    * no unused task? Wait until all started tasks have been received
            WAIT UNTIL gv_activ IS INITIAL UP TO 600 SECONDS.
          ENDWHILE.
    * call if task assigned
          CHECK NOT lv_task IS INITIAL.
    * prepare function parameters
          ls_bapi_ref_doc_range               = 'IBT'.
          READ TABLE lt_vbelv INTO ls_bapi_ref_doc_range-ref_doc_low
            INDEX  <intval>-idxfr.
          READ TABLE lt_vbelv INTO ls_bapi_ref_doc_range-ref_doc_high
            INDEX  <intval>-idxto.
    * mark interval as failed
          ADD 1 TO <intval>-fails.
          ADD 1 TO gv_snd.
          CALL FUNCTION gc_function
             STARTING NEW TASK lv_task
             DESTINATION                      IN GROUP lv_group
             PERFORMING bapi_receive ON END OF TASK
             EXPORTING
                refdocrange                   = ls_bapi_ref_doc_range
             EXCEPTIONS
               communication_failure          = 1 MESSAGE lv_msg
               system_failure                 = 2 MESSAGE lv_msg
               RESOURCE_FAILURE               = 3.
          IF sy-subrc                         = 0.
            <intval>-activ                    = 'X'.
            <intval>-task                     = lv_task.
            ADD 1 TO gv_activ.
          ELSE.
            CALL METHOD task=>return EXPORTING name = lv_task.
          ENDIF.
          CLEAR lv_task.
        ENDIF.
      ENDWHILE.
    * wait for pending processes
      MESSAGE s000(r1) WITH 'Wait for pending processes'.
      WAIT UNTIL gv_activ IS INITIAL.
    * report unfinished intervals
      LOOP AT gt_intval ASSIGNING <intval>
        WHERE fails >= 0.
        READ TABLE lt_vbelv INTO ls_bapi_ref_doc_range-ref_doc_low
          INDEX  <intval>-idxfr.
        READ TABLE lt_vbelv INTO ls_bapi_ref_doc_range-ref_doc_high
          INDEX  <intval>-idxto.
        MESSAGE i000(r1)
        WITH
        'Unverarbeitetes Intervall von'
        ls_bapi_ref_doc_range-ref_doc_low
        'bis'
        ls_bapi_ref_doc_range-ref_doc_high.
      ENDLOOP.
      MESSAGE s000(r1) WITH 'start ALV'.
    * transfer results to standard table
      WRITE gv_rcv TO lv_grid_title LEFT-JUSTIFIED.
      lv_grid_title+40(1) = '+'.
      WRITE gv_snd TO lv_grid_title+50 LEFT-JUSTIFIED.
      REPLACE '+' WITH 'RCV/SND' INTO lv_grid_title.
      CONDENSE lv_grid_title.
      CALL FUNCTION 'REUSE_ALV_GRID_DISPLAY'
           EXPORTING
                i_structure_name = 'BAPIVBRKSUCCESS'
                i_grid_title     = lv_grid_title
           TABLES
                t_outtab         = gt_bapivbrksuccess.
    ENDFORM.                    " paralleltest
    *&      Form  bapi_receive
    FORM bapi_receive USING pv_task TYPE any.
      DATA:
        lv_task                               TYPE numc4,
        lt_bapivbrksuccess                    TYPE TABLE OF bapivbrksuccess,
        lv_msg                                TYPE text80,
        lv_subrc                              TYPE sy-subrc.
      FIELD-SYMBOLS:
        <intval>                              LIKE LINE OF gt_intval.
      CLEAR lt_bapivbrksuccess.
      RECEIVE RESULTS FROM FUNCTION gc_function
          TABLES
            success                           = lt_bapivbrksuccess
          EXCEPTIONS
            communication_failure             = 1 MESSAGE lv_msg
            system_failure                    = 2 MESSAGE lv_msg .
      lv_subrc                                = sy-subrc.
      lv_task                                 = pv_task.
      CALL METHOD task=>return EXPORTING name = lv_task.
      LOOP AT gt_intval ASSIGNING <intval>
        WHERE task = lv_task
        AND fails <> -1.
        EXIT.
      ENDLOOP.
      IF sy-subrc                             <> 0.
    * fatal error
        MESSAGE e000(r1)
          WITH 'returned task' lv_task 'not in task table'.
      ENDIF.
      CLEAR  <intval>-activ.
      CASE lv_subrc.
        WHEN 0.
          <intval>-fails                      = -1.
          APPEND LINES OF lt_bapivbrksuccess TO gt_bapivbrksuccess.
          ADD 1 TO gv_rcv.
        WHEN 1.
          ADD 1 TO <intval>-fails.
          WRITE: 'communication_failure for task', lv_task, lv_msg.
        WHEN 2.
          WRITE: 'system_failure', lv_task, lv_msg.
          ADD 1 TO <intval>-fails.
      ENDCASE.
      SUBTRACT 1 FROM gv_activ.
    ENDFORM.                    " bapi_receive
    Regards,
    Clemens
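    The PROVIDE/RETURN bookkeeping in the TASK class above boils down to one invariant: a fixed set of task names, each handed out at most once at a time and reusable after it is returned. A minimal Python sketch of that invariant (names and class are illustrative, not the ABAP API):

```python
class TaskPool:
    """Fixed pool of named task slots; a slot can be reused after
    it has been returned (mirrors task=>provide / task=>return)."""
    def __init__(self, n_slots):
        # slot names "0001".."000n", like the NUMC4 task names above
        self.free = [f"{i:04d}" for i in range(1, n_slots + 1)]
        self.used = set()

    def provide(self):
        if not self.free:
            return None           # caller must WAIT and retry
        name = self.free.pop(0)
        self.used.add(name)
        return name

    def give_back(self, name):
        if name in self.used:
            self.used.remove(name)
            self.free.append(name)

pool = TaskPool(2)                # e.g. two free work processes
t1 = pool.provide()
t2 = pool.provide()
assert pool.provide() is None     # no more tasks than processes
pool.give_back(t1)
t3 = pool.provide()               # the returned slot is reused
assert t3 == t1
```

    This confirms Clemens' two assumptions: the pool never hands out more tasks than processes, and a returned task name can safely be used to start the next packet.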

  • Allowing parallel processing of cube partitions using OWB mapping

    Hi All,
    I am using an OWB mapping to load a MOLAP cube partitioned on the TIME dimension. I configured the OWB mapping by checking the 'Allow parallel processing' option with the number of parallel jobs set to 2, and then deployed the mapping. The data loaded via the mapping is spread across multiple partitions.
    The server has 4 CPUs and 6 GB RAM.
    But when I kick off the mapping, I can see only one partition being processed at a time in XML_LOAD_LOG.
    If I process the same cube in AWM using parallel processing, I can see that multiple partitions are processed.
    Could you please suggest whether I missed any setting on the OWB side?
    Thanks
    Chakri

    Hi,
    I assigned the OLAP_DBA role to the user under which the OWB map runs, and the job started off.
    But it soon failed with the error below:
    ***Error Occured in __XML_MAIN_LOADER: Failed to Build(Refresh) XPRO_OLAP_NON_AGG.OLAP_NON_AGG Analytic Workspace. In __XML_VAL_MEASMAPS: In __XML_VAL_MEASMAPS_VAR: Error Validating Measure Mappings. In __XML_FND_PRT_TO_LOAD: In __XML_SET_LOAD_STATUS: In ___XML_LOAD_TEMPPRG:
    Here is the log :
    Load ID     Record ID     AW     Date     Actual Time     Message Time     Message
    3973     13     SYS.AWXML     12/1/2008 8:26     8:12:51     8:26:51     ***Error Occured in __XML_MAIN_LOADER: Failed to Build(Refresh) XPRO_OLAP_NON_AGG.OLAP_NON_AGG Analytic Workspace. In __XML_VAL_MEASMAPS: In __XML_VAL_MEASMAPS_VAR: Error Validating Measure Mappings. In __XML_FND_PRT_TO_LOAD: In __XML_SET_LOAD_STATUS: In ___XML_LOAD_TEMPPRG:
    3973     12     XPRO_OLAP_NON_AGG.OLAP_NON_AGG     12/1/2008 8:19     8:12:57     8:19:57     Attached AW XPRO_OLAP_NON_AGG.OLAP_NON_AGG in RW Mode.
    3973     11     SYS.AWXML     12/1/2008 8:19     8:12:56     8:19:56     Started Build(Refresh) of XPRO_OLAP_NON_AGG.OLAP_NON_AGG Analytic Workspace.
    3973     1     XPRO_OLAP_NON_AGG.OLAP_NON_AGG     12/1/2008 8:19     8:12:55     8:19:55     Job# AWXML$_3973 to Build(Refresh) Analytic Workspace XPRO_OLAP_NON_AGG.OLAP_NON_AGG Submitted to the Queue.
    I am using AWM (10.2.0.3A with OLAP Patch A) and OWB (10.2.0.3).
    Can anyone suggest why the job failed this time?
    Regards
    Chakri

  • Which Parallel processing is faster?

    There are various ways in which parallel processing can be implemented, e.g.:
    1. CALL FUNCTION ... STARTING NEW TASK, which uses a dialog work process.
    2. Background RFC (tRFC): CALL FUNCTION ... IN BACKGROUND TASK AS SEPARATE UNIT.
    3. Submitting background jobs (SUBMIT ... VIA JOB).
    I want to know which technique is fastest, and why.

    soadyp wrote:
    The throughput of the various work process types is NOT identical.
    > I was a little surprised to discover this. It seems that DISP+WORK behaves differently under different work process types.
    > Clearly a background process doesn't need, and doesn't support, dialog processing and the classic PBO/PAI cycle.
    >
    > All I can say is: TEST it on your kernel.
    >
    > We have done testing on this, since every millisecond is important to us.
    >
    > Dialog processes are sometimes twice as slow as background tasks, depending on what is happening.
    > I'm talking about pure ABAP execution time here, not DB, RFC or external call times.
    >
    > DIALOG was simply slower than background processing.
    >
    > Try it: build a report and SUBMIT it in the background.
    > Run the same ABAP in dialog.
    > Include GET RUN TIME statements to measure the execution time in microseconds.
    > Fill it with PERFORM x USING y, CALL FUNCTION b, or class method calls.
    > Set the clock resolution high and measure the total execution time.
    >
    > When running huge interfaces, processing tens of millions of steps every day, this is a genuine consideration.
    >
    > ALSO NOTE:
    > The cost of opening a job, submitting via job and closing it should also be measured.
    > If your packets are too small, the background speed gain is lost to the overhead of submission.
    >
    > The new background RFC functions should also be tested with the SAME CODE.
    >
    > Happy testing
    > Phil
    Dialog might be slower only due to the GUI communication or differences in the ABAP source code. In some standard SAP applications there is different processing depending on the SY-BATCH flag.
    Technically, as already mentioned several times above, both work processes (dialog and batch) are pretty much identical (with slight differences in memory allocation).
    So please don't confuse the community.
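
    The measurement Phil suggests can be sketched in a few lines; run the same report once in dialog and once in background and compare the printed runtimes. The loop body is a placeholder for the pure-ABAP work you want to measure:

    ```abap
    DATA: lv_t1 TYPE i,
          lv_t2 TYPE i,
          lv_n  TYPE i.

    * High clock resolution, as suggested in the thread
    SET RUN TIME CLOCK RESOLUTION HIGH.
    GET RUN TIME FIELD lv_t1.

    DO 1000000 TIMES.
      lv_n = lv_n + 1.        " placeholder: pure ABAP work to measure
    ENDDO.

    GET RUN TIME FIELD lv_t2.
    lv_t2 = lv_t2 - lv_t1.    " elapsed time in microseconds
    WRITE: / 'Runtime (microseconds):', lv_t2, 'sy-batch =', sy-batch.
    ```

    SY-BATCH in the output tells you which work process type the run actually used.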

  • Parallel Processing: Unable to capture return results using RECEIVE

    Hi,
    I am using parallel processing in one of my programs and it is working fine, but I am not able to collect the return results using the RECEIVE statement.
    I am using
      CALL FUNCTION <FUNCTION MODULE NAME>
             STARTING NEW TASK TASKNAME DESTINATION IN GROUP DEFAULT_GROUP
             PERFORMING RETURN_INFO ON END OF TASK
    and then in subroutine RETURN_INFO I am using the RECEIVE statement.
    My RFC is calling another BAPI and doing an explicit commit as well.
    Any pointer will be of great help.
    Regards,
    Deepak Bhalla
    Message was edited by: Deepak Bhalla
    I used the WAIT command after the RFC call and it worked. Additionally, I used the MESSAGE addition in the RECEIVE statement, because the RECEIVE statement was returning sy-subrc 2.
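
    A sketch of the fix Deepak describes: WAIT so the program doesn't end before the callback fires, and the MESSAGE addition to capture the failure text. Z_MY_RFC and the flag/message variables are placeholders:

    ```abap
    DATA: gv_received TYPE abap_bool,
          gv_msg      TYPE c LENGTH 255.

    CALL FUNCTION 'Z_MY_RFC'             " placeholder RFC-enabled FM
      STARTING NEW TASK 'TASK1'
      DESTINATION IN GROUP DEFAULT_GROUP
      PERFORMING return_info ON END OF TASK.

    * Without this WAIT the report may finish before RETURN_INFO runs,
    * and the results are never received
    WAIT UNTIL gv_received = abap_true.

    FORM return_info USING p_task TYPE clike.
      RECEIVE RESULTS FROM FUNCTION 'Z_MY_RFC'
    *   IMPORTING / TABLES parameters of Z_MY_RFC go here
        EXCEPTIONS
          communication_failure = 1 MESSAGE gv_msg
          system_failure        = 2 MESSAGE gv_msg
          OTHERS                = 3.
      gv_received = abap_true.
    ENDFORM.
    ```

    With the MESSAGE addition, gv_msg holds the error text when sy-subrc is 1 or 2, which makes the failure much easier to diagnose.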


  • Parallel Processing and Capacity Utilization

    Dear Gurus,
    We have the following requirement:
    Work center A capacity is 1000. (Operations are similar)
    Work center B capacity is 1500. (Operations are similar)
    Work center C capacity is 2000. (Operations are similar)
    1) For product A the production order quantity is 4500. Can we use all work centers for parallel processing through the routing?
    2) For product B the production order quantity is 2500. Can we use only work centers A and B for parallel processing through the routing?
    If yes, please explain how.
    Regards,
    Rashid Masood

    Maybe you can create a virtual work center VWCA = A+B+C (connected with a hierarchy via transaction CR22) and another VWCB = A+B, and route your products to the appropriate virtual work center.

  • Parallel processing open items (FPO4P)

    Hello,
    I have a question about transaction FPO4P (parallel processing of open items).
    When saving the parameters, the following message always appears: "Report cannot be evaluated in parallel". The detailed information says that when you use a specific parallel processing object, you also need to use that field to sort on.
    In my case I use the object GPART for parallel processing (see the Technical Settings tab). In the Output Control tab I selected a line layout which is sorted by business partner (GPART). Furthermore, no selection options are used.
    Does anyone know why the transaction cannot save the parameters and shows the error message above? I really don't know what is going wrong.
    Thank you in advance.
    Regards, Ramon.

    Ramon
    Apply note 1115456.
    Maybe that note can help you
    Regards
    Arcturus

  • How to do parallel processing with dynamic internal table

    Hi All,
    I need to implement parallel processing that involves dynamically created internal tables. I tried doing so with RFC function modules (using STARTING NEW TASK and similar methods) but didn't succeed: this approach requires RFC-enabled function modules, and RFC-enabled function modules do not allow the generic data type (STANDARD TABLE) that is needed for passing dynamic internal tables. My exact requirement is as follows:
    1. I have a large chunk of data in two internal tables; one of them is built dynamically, so its structure is not known at the time of coding.
    2. This data has to be processed together to generate another internal table, whose structure is predefined. This processing is taking a very long time, as the number of records is close to a million.
    3. I need to split the dynamic internal table into packets of (say) 1000 records each and pass each packet to a function module running in a separate task, with many such tasks executing in parallel.
    4. The function module running in parallel can insert the processed data into a database table, and the main program can access it from there.
    Unfortunately, due to the limitation that generic data types are not allowed in RFC, I'm unable to do this. Does anyone have any idea how to implement parallel processing with dynamic internal tables under these conditions?
    Any help will be highly appreciated.
    Thanks and regards,
    Ashin

    try the below code...
      DATA: w_subrc TYPE sy-subrc.
      DATA: w_infty(5) TYPE c.
      FIELD-SYMBOLS: <f1> TYPE table.
      FIELD-SYMBOLS: <f1_wa> TYPE any.
      DATA: ref_tab TYPE REF TO data.
    * Build the infotype structure name dynamically, e.g. 'P0002'
      CONCATENATE 'P' infty INTO w_infty.
    * Create a dynamic internal table typed on that structure
      CREATE DATA ref_tab TYPE STANDARD TABLE OF (w_infty).
      ASSIGN ref_tab->* TO <f1>.
    * Create a dynamic work area
      CREATE DATA ref_tab TYPE (w_infty).
      ASSIGN ref_tab->* TO <f1_wa>.
    * Default the validity period if not supplied
      IF begda IS INITIAL.
        begda = '18000101'.
      ENDIF.
      IF endda IS INITIAL.
        endda = '99991231'.
      ENDIF.
    * Read the infotype records into the dynamically typed table
      CALL FUNCTION 'HR_READ_INFOTYPE'
        EXPORTING
          pernr           = pernr
          infty           = infty
          begda           = begda   " use the defaulted dates from above
          endda           = endda
        IMPORTING
          subrc           = w_subrc
        TABLES
          infty_tab       = <f1>
        EXCEPTIONS
          infty_not_found = 1
          OTHERS          = 2.
      IF sy-subrc <> 0.
        subrc = w_subrc.
      ENDIF.
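
    Since RFC signatures cannot use a generic STANDARD TABLE type, one workaround is to serialize the dynamic table into an XSTRING on the caller side and rebuild it inside the RFC module. A hedged sketch building on the <f1> and w_infty objects above; Z_PROCESS_PACKET and its parameter names are assumptions:

    ```abap
    DATA lv_buffer TYPE xstring.

    * Caller: serialize the dynamically typed table into a byte buffer
    EXPORT tab = <f1> TO DATA BUFFER lv_buffer.

    CALL FUNCTION 'Z_PROCESS_PACKET'     " placeholder RFC-enabled FM
      STARTING NEW TASK 'PKT_001'
      DESTINATION IN GROUP DEFAULT
      EXPORTING
        iv_data = lv_buffer              " XSTRING is RFC-safe
        iv_type = w_infty.               " structure name, e.g. 'P0002'

    * Inside Z_PROCESS_PACKET, rebuild a table of the same type:
    *   CREATE DATA lr_tab TYPE STANDARD TABLE OF (iv_type).
    *   ASSIGN lr_tab->* TO <lt_data>.
    *   IMPORT tab = <lt_data> FROM DATA BUFFER iv_data.
    ```

    Each parallel task then deserializes its own packet and can insert the processed rows into the database table, as described in the question.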

Maybe you are looking for

  • TV Video Out is Dark?

    Anyone experiencing an issue where the iPhone TV out video come out very dark on the TV? Before update 1.1.3 I was able to connect my iPhone to my TV and watch video with no problems. Since update 1.1.3, the video out from my iPhone to my TV comes ou

  • Reply to message sends me to position below received message, not above. how can this be changed?

    my question says it all

  • From 3GS to 4S

    I have recently got the new iphone 4S and restored it from my 3GS backup. But when I open apps, they quit and return back to the home screen (no matter what and how many apps there were). After I restored it, the problem is solved. What happened?

  • How could this be achieved

    Hey everyone, I have information stored in side a file, it is first stored in arrays like this: String[] cars = {"ford", "ford", "mini"} int[] car_id = {101, 102, 103} It is then written to a file. I would like to make a record or my own type of some

  • Monitoring or the Task to the Project Team members in the Project Managemen

    Hi Guys , I came across the new requirement of my project . My client is looking for the below functionalities. Please let me know the possibilities. 1.How do you assign a specific task to the Project Team member and how do you know its progress - 2.