Initial Load R3AM1 Parallel Processing (load objects)

Dear all,
After optimizing the parallel processing for the initial load between ISU -> CRM (Initial Load ISU / CRM Issues), I would like to know the following:
On the CRM system, transaction R3AM1 lets you monitor the load objects. Is it possible to process the load objects in parallel?
At this time we can see each block being processed one at a time. We can, however, see that there are multiple queues available.
Any info would be most welcome.
Kind regards
Lucas

There you have it!
Increase MAX_PARALLEL_PROCESSES (by changing CRM table SMOFPARSFA) from 5 to 10 and you will see more queues being processed simultaneously.
More info:
The maximum number of loads and requests that can be processed simultaneously is defined in table SMOFPARSFA under the parameter MAX_PARALLEL_PROCESSES. The hierarchy in order of preference is initial load, request load, and then SDIMA.
If the total number of running loads and running requests is equal to or higher than this number, the remaining loads have to wait until a process is free again. Consequently, this parameter can be modified if the processing of loads/requests is delayed due to the non-availability of free processes. The default value of this parameter is 5 (as it is in your CRM as well).
There is no need to change anything in table CRMPAROLTP on the CRM system.
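
If you want to verify the current value before maintaining it, a small report along these lines can read it. This is only a minimal sketch: the field names PARNAME and PARVAL are assumptions about the structure of SMOFPARSFA, so verify them in SE11 first.

REPORT zcheck_max_parallel.
* Minimal sketch: read the CRM middleware parameter that caps the number
* of parallel loads/requests.
* ASSUMPTION: the field names PARNAME and PARVAL are guesses - verify
* the structure of SMOFPARSFA in SE11 before using this.
DATA lv_value TYPE c LENGTH 30.
SELECT SINGLE parval FROM smofparsfa INTO lv_value
  WHERE parname = 'MAX_PARALLEL_PROCESSES'.
IF sy-subrc = 0.
  WRITE: / 'MAX_PARALLEL_PROCESSES =', lv_value.
ELSE.
  WRITE: / 'Parameter not maintained; the default of 5 applies.'.
ENDIF.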

Similar Messages

  • Golden Gate - Initial Load using parallel process group

    Dear all,
    I am new to GG and I was wondering if GG can support an initial load with parallel process groups. I have managed to do an initial load using "Direct Bulk Load" and "File to Replicat", but I have several big tables and the Replicat is not catching up. I am aware that GG is not ideal for an initial load, but it is complicated to explain why I am using it.
    Is it possible to use the @RANGE function while performing the initial load, regardless of which method is used (file to replicat, direct bulk, ...)?
    Thanks in advance

    You may use Oracle Data Pump for the initial load of large tables.

  • Parallel processing - some questions

    Hi,
    I'm about to develop a class for task management.
    It should have methods like GET_TASK (returning a task ID), TASK_RETURN (importing the task ID received), and possibly TASK_FAIL (importing the task ID sent, for the case where calling the function with STARTING NEW TASK ends with a bad sy-subrc).
    The idea behind it: a new task is started by the system in a dialog process, so the dialog time limit applies. Thus I have to split the whole job into more small packets than there are processes available, which means I have to use the same task again after it has returned successfully.
    My assumption is that I can't use more tasks than the processes initially available, and that I can use the same task again after it has successfully returned its results.
    I want ideas and information on this beyond the SAP documentation.
    TIA.
    Regards,
    Clemens

    ... and here comes the program (no formatting is available once the answer is marked "solved").
    The program retrieves billing documents with function BAPI_BILLINGDOC_GETLIST - this was the only BAPI I found quickly that has ranges as import parameters, which allows handing packages to the tasks.
    *& Report  ZZZPARTEST                                                  *
    REPORT  zzzpartest.
    PARAMETERS:
      p_dbcnt                                 TYPE sydbcnt DEFAULT 1010,
      p_pacsz                                 TYPE sydbcnt DEFAULT 95.
    CONSTANTS:
      gc_function                             TYPE tfdir-funcname
        VALUE 'BAPI_BILLINGDOC_GETLIST'.
    DATA:
      gt_bapivbrksuccess                      TYPE TABLE OF
        bapivbrksuccess,
      gv_activ                                TYPE i,
      gv_rcv                                  TYPE i,
      gv_snd                                  TYPE i,
      BEGIN OF ls_intval,
        task                                  TYPE numc4,
        idxfr                                 TYPE i,
        idxto                                 TYPE i,
        activ                                 TYPE flag,
        fails                                 TYPE i,
      END OF ls_intval,
      gt_intval                               LIKE TABLE OF ls_intval.
    START-OF-SELECTION.
      PERFORM paralleltest.
    *       CLASS task DEFINITION
    CLASS task DEFINITION.
      PUBLIC SECTION.
        CLASS-METHODS:
          provide
            RETURNING
              value(name)                     TYPE numc4,
          return
            IMPORTING
              name                            TYPE numc4,
          initialize
            RETURNING
              value(group)                    TYPE rzllitab-classname.
      PRIVATE SECTION.
        CLASS-DATA:
          gv_group                            TYPE rzllitab-classname,
          BEGIN OF ls_task,
          name                                TYPE numc4,
          used                                TYPE flag,
          END OF ls_task,
          gt_task                             LIKE TABLE OF ls_task.
    ENDCLASS.                    "task DEFINITION
    ***       CLASS task IMPLEMENTATION ***
    CLASS task IMPLEMENTATION.
      METHOD initialize.
        DATA:
          lv_max                              TYPE i,
          lv_inc                              TYPE numc7,
          lv_free                             TYPE i.
        CHECK gt_task IS INITIAL.
        SELECT classname
          INTO gv_group
          FROM rzllitab UP TO 1 ROWS
          WHERE grouptype                     = 'S'.
        ENDSELECT.
        CALL FUNCTION 'SPBT_INITIALIZE'
             EXPORTING
                  group_name                     = gv_group
             IMPORTING
                  max_pbt_wps                    = lv_max
                  free_pbt_wps                   = lv_free
             EXCEPTIONS
                  invalid_group_name             = 1
                  internal_error                 = 2
                  pbt_env_already_initialized    = 3
                  currently_no_resources_avail   = 4
                  no_pbt_resources_found         = 5
                  cant_init_different_pbt_groups = 6
                  OTHERS                         = 7.
        IF sy-subrc                           <> 0.
          MESSAGE ID sy-msgid                 TYPE sy-msgty NUMBER sy-msgno
            WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
        ENDIF.
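    * keep two work processes free for other users and jobs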
        SUBTRACT 2 FROM lv_free.
        IF lv_free >= 1.
          DO lv_free TIMES.
            ls_task-name                        = sy-index.
            APPEND ls_task TO gt_task.
          ENDDO.
          group                                 = gv_group.
          MESSAGE s000(r1)
            WITH
            'Parallel processing uses'
            lv_free
            'processes in group'
            gv_group.
        ELSE.
          MESSAGE e000(r1)
            WITH
            'Parallel processing cancelled,'
            lv_free
            'processes in group'
            gv_group.
        ENDIF.
      ENDMETHOD.                    "initialize
      METHOD provide.
        FIELD-SYMBOLS:
          <task>                              LIKE ls_task.
        IF  gv_group IS INITIAL.
          MESSAGE e000(r1)
            WITH 'Task group not initialized'.
        ENDIF.
        LOOP AT gt_task ASSIGNING <task>
          WHERE used IS initial.
          EXIT.
        ENDLOOP.
        CHECK sy-subrc                        = 0.
        <task>-used                           = 'X'.
        name                                  = <task>-name.
      ENDMETHOD.
      METHOD return.
        LOOP AT gt_task INTO ls_task
          WHERE
          name                                = name
          AND used                            = 'X'.
          DELETE gt_task.
        ENDLOOP.
        IF sy-subrc                           = 0.
          CLEAR ls_task-used.
          APPEND ls_task TO gt_task.
        ELSE.
    * fatal error
        ENDIF.
      ENDMETHOD.
    ENDCLASS.                    "task IMPLEMENTATION
    *&      Form  paralleltest
    FORM paralleltest.
      DATA:
      ls_bapi_ref_doc_range                   TYPE bapi_ref_doc_range,
      lv_done                                 TYPE flag,
      lv_group                                TYPE rzllitab-classname,
      lv_task                                 TYPE numc4,
      lv_msg                                  TYPE text255,
      lv_grid_title                           TYPE lvc_title,
      lv_tfill                                TYPE sytfill,
      lv_vbelv                                TYPE vbelv,
      lv_npacs                                TYPE i,
      lt_vbelv                                TYPE SORTED TABLE OF vbelv
        WITH UNIQUE KEY table_line,
      lv_mod                                  TYPE i.
      FIELD-SYMBOLS:
        <intval>                              LIKE LINE OF gt_intval.
    * build intervals
      SELECT vbelv  INTO lv_vbelv
        FROM vbfa.
        INSERT lv_vbelv INTO TABLE lt_vbelv.
        CHECK sy-subrc = 0.
        ADD 1 TO lv_tfill.
        CHECK:
          p_dbcnt                             > 0,
          lv_tfill                            >= p_dbcnt.
        EXIT.
      ENDSELECT.
      DESCRIBE TABLE lt_vbelv LINES lv_tfill.
      IF (
           p_pacsz                            < p_dbcnt OR
           p_dbcnt                            = 0
          ) AND
           p_pacsz                            > 0.
    *        p_dbcnt                              > 0 ).
        lv_npacs                              = lv_tfill DIV p_pacsz.
        lv_mod                                = lv_tfill MOD p_pacsz.
        IF lv_mod                             <> 0.
          ADD 1 TO lv_npacs.
        ENDIF.
        DO lv_npacs TIMES.
          ls_intval-idxfr                     = ls_intval-idxto + 1.
          ls_intval-idxto                     = ls_intval-idxfr - 1
                                              + p_pacsz.
          IF ls_intval-idxto                  > lv_tfill.
            ls_intval-idxto                   = lv_tfill.
          ENDIF.
          APPEND ls_intval TO gt_intval.
        ENDDO.
      ELSE.
        ls_intval-idxfr                       = 1.
        ls_intval-idxto                       = lv_tfill.
        APPEND ls_intval TO gt_intval.
      ENDIF.
      WHILE lv_done IS INITIAL.
    * find an interval to be processed
        LOOP AT gt_intval ASSIGNING <intval>
          WHERE activ                         = space
            AND fails BETWEEN 0 AND  5.
          EXIT.
        ENDLOOP.
        IF sy-subrc                           <> 0.
    * no inactive unprocessed interval. All complete or must wait?
    * check for intervals with unsuccessful tries
          LOOP AT gt_intval ASSIGNING <intval>
            WHERE fails BETWEEN 0 AND  5.
            EXIT.
          ENDLOOP.
          IF sy-subrc                         = 0.
    * wait until all started processes have been received.
    * Note: No receive is executed without WAIT
            WAIT UNTIL gv_activ IS INITIAL UP TO 600 SECONDS.
          ELSE.
    * all done
            lv_done                           = 'X'.
          ENDIF.
          UNASSIGN <intval>.
        ENDIF.
    * process interval if provided
        IF <intval> IS ASSIGNED.
          WHILE lv_task IS INITIAL.
            IF lv_group IS INITIAL.
    * init parallel processing
              lv_group = task=>initialize( ).
            ENDIF.
    * get unused task
            lv_task                           = task=>provide( ).
            CHECK lv_task IS INITIAL.
    * no unused task? wait until all started tasks have been received
            WAIT UNTIL gv_activ IS INITIAL UP TO 600 SECONDS.
          ENDWHILE.
    * call if task assigned
          CHECK NOT lv_task IS INITIAL.
    * prepare function parameters
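    * 'IBT' fills SIGN = 'I' and OPTION = 'BT' of the range structure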
          ls_bapi_ref_doc_range               = 'IBT'.
          READ TABLE lt_vbelv INTO ls_bapi_ref_doc_range-ref_doc_low
            INDEX  <intval>-idxfr.
          READ TABLE lt_vbelv INTO ls_bapi_ref_doc_range-ref_doc_high
            INDEX  <intval>-idxto.
    * count this attempt as failed until the receive confirms success
          ADD 1 TO <intval>-fails.
          ADD 1 TO gv_snd.
          CALL FUNCTION gc_function
             STARTING NEW TASK lv_task
             DESTINATION                      IN GROUP lv_group
             PERFORMING bapi_receive ON END OF TASK
             EXPORTING
                refdocrange                   = ls_bapi_ref_doc_range
             EXCEPTIONS
               communication_failure          = 1 MESSAGE lv_msg
               system_failure                 = 2 MESSAGE lv_msg
               RESOURCE_FAILURE               = 3.
          IF sy-subrc                         = 0.
            <intval>-activ                    = 'X'.
            <intval>-task                     = lv_task.
            ADD 1 TO gv_activ.
          ELSE.
            CALL METHOD task=>return EXPORTING name = lv_task.
          ENDIF.
          CLEAR lv_task.
        ENDIF.
      ENDWHILE.
    * wait for pending processes
      MESSAGE s000(r1) WITH 'Wait for pending processes'.
      WAIT UNTIL gv_activ IS INITIAL.
    * report unfinished intervals
      LOOP AT gt_intval ASSIGNING <intval>
        WHERE fails >= 0.
        READ TABLE lt_vbelv INTO ls_bapi_ref_doc_range-ref_doc_low
          INDEX  <intval>-idxfr.
        READ TABLE lt_vbelv INTO ls_bapi_ref_doc_range-ref_doc_high
          INDEX  <intval>-idxto.
        MESSAGE i000(r1)
        WITH
        'Unprocessed interval from'
        ls_bapi_ref_doc_range-ref_doc_low
        'to'
        ls_bapi_ref_doc_range-ref_doc_high.
      ENDLOOP.
      MESSAGE s000(r1) WITH 'start ALV'.
    * transfer results to standard table
      WRITE gv_rcv TO lv_grid_title LEFT-JUSTIFIED.
      lv_grid_title+40(1) = '+'.
      WRITE gv_snd TO lv_grid_title+50 LEFT-JUSTIFIED.
      REPLACE '+' WITH 'RCV/SND' INTO lv_grid_title.
      CONDENSE lv_grid_title.
      CALL FUNCTION 'REUSE_ALV_GRID_DISPLAY'
           EXPORTING
                i_structure_name = 'BAPIVBRKSUCCESS'
                i_grid_title     = lv_grid_title
           TABLES
                t_outtab         = gt_bapivbrksuccess.
    ENDFORM.                    " paralleltest
    *&      Form  bapi_receive
    FORM bapi_receive USING pv_task TYPE any.
      DATA:
        lv_task                               TYPE numc4,
        lt_bapivbrksuccess                    TYPE TABLE OF bapivbrksuccess,
        lv_msg                                TYPE text80,
        lv_subrc                              TYPE sy-subrc.
      FIELD-SYMBOLS:
        <intval>                              LIKE LINE OF gt_intval.
      CLEAR lt_bapivbrksuccess.
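    * collect the results of the asynchronous call; this is only possible
    * inside the form registered with PERFORMING ... ON END OF TASK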
      RECEIVE RESULTS FROM FUNCTION gc_function
          TABLES
            success                           = lt_bapivbrksuccess
          EXCEPTIONS
            communication_failure             = 1 MESSAGE lv_msg
            system_failure                    = 2 MESSAGE lv_msg .
      lv_subrc                                = sy-subrc.
      lv_task                                 = pv_task.
      CALL METHOD task=>return EXPORTING name = lv_task.
      LOOP AT gt_intval ASSIGNING <intval>
        WHERE task = lv_task
        AND fails <> -1.
        EXIT.
      ENDLOOP.
      IF sy-subrc                             <> 0.
    * fatal error
        MESSAGE e000(r1)
          WITH 'returned task' lv_task 'not in task table'.
      ENDIF.
      CLEAR  <intval>-activ.
      CASE lv_subrc.
        WHEN 0.
          <intval>-fails                      = -1.
          APPEND LINES OF lt_bapivbrksuccess TO gt_bapivbrksuccess.
          ADD 1 TO gv_rcv.
        WHEN 1.
          ADD 1 TO <intval>-fails.
          WRITE: 'communication_failure for task', lv_task, lv_msg.
        WHEN 2.
          WRITE: 'system_failure', lv_task, lv_msg.
          ADD 1 TO <intval>-fails.
      ENDCASE.
      SUBTRACT 1 FROM gv_activ.
    ENDFORM.                    " bapi_receive
    Regards,
    Clemens

  • Parallel Processing for BI Load

    Hi All,
    I have a datasource which I migrated from BW 3.x to BI 7.
    I am loading the data from the datasource to an ODS.
    In the DTP -> Execute tab I can see only 'Serial extraction
    and processing of source package'. I think because of this
    I am not able to do parallel processing. I mean, when I try to load data from PSA to ODS by DTP, the data is loaded package by package (it does not trigger parallel jobs while loading).
    Could you please advise me why I am not able to see
    'Serial extraction, immediate parallel processing' in the
    Execute tab of my DTP.
    Is there anything I need to configure at the datasource level?
    Please help me.
    Regards
    Santosh
    Edited by: santosh on Jun 3, 2008 2:37 AM

    Hi, check your Extraction tab on the DataSource. I am pretty sure this has something to do with it. This is what the help says about the DTP processing mode:
    Processing Mode
        The processing mode describes the order in which processing steps such
        as extraction, transformation and transfer to target are processed at
        runtime of a DTP request. The processing mode also determines when
        parallel processes are to be separated.
        The processing mode of a request is based on whether the request is
        processed asynchronously, synchronously or in real-time mode, and on the
        type of the source object:
        o   Serial extraction, immediate parallel processing (asynchronous
            processing)
            A request is processed asynchronously in a background process when a
            DTP is started in a process chain or a request for real-time data
            acquisition is updated. The processing mode is based on the source
            type.

  • Number of parallel process definition during data load from R/3 to BI

    Dear Friends,
    We are using BI 7.00. We have a requirement for which I should increase the number of parallel processes during data load from R/3 to BI. I want to modify this for a particular data source and check. Can the experts provide helpful answers to the following questions?
    1) When a load is taking place or has taken place, where can we see how many parallel processes that particular load has used?
    2) Where should I change the setting for the number of parallel processes for the data load (from R/3 to BI), and not within BI?
    3) How does the system work, and what will be the net result of increasing or decreasing the number of parallel processes?
    Expecting Experts help.
    Regards,
    M.M

    Dear Des Gallagher,
    Thank you very much for the useful information provided. The following was my observation.
    From the posts in this forum, I was given to understand that the setting for a specific data source can be made at the infopackage and DTP level. I did so and found that there is no change in the load, i.e., the system by default takes only one parallel process even though I maintained 6.
    Can you kindly explain the above point? I.e.:
    1) Even though the value is maintained at the infopackage level, will the system consider it or not? If not, from which transaction does the system derive the one parallel process?
    Actually we wanted to increase the package size, but we failed because I could not understand what values have to be maintained. Can you explain in detail?
    Can you clarify my doubt and provide a solution?
    Regards,
    M.M

  • Parallel Data Loading in Direct Mode

    Product: ORACLE SERVER
    Date written: 1999-08-10
    Parallel data loading in direct mode
    =======================================
    SQL*Loader supports parallel data loads in direct mode against the same
    table. Several sessions can load data simultaneously in direct mode, which
    improves the load speed for large data volumes. Placing the data files on
    physically separate disks increases the benefit even further.
    1. Restrictions
    - Only tables without indexes can be loaded.
    - Only APPEND mode is supported (replace, truncate and insert modes are not).
    - The Parallel Query option must be installed.
    2. Usage
    Create a control file for each data file to be loaded, then run the loads one after another:
    $sqlldr scott/tiger control=load1.ctl direct=true parallel=true&
    $sqlldr scott/tiger control=load2.ctl direct=true parallel=true&
    $sqlldr scott/tiger control=load3.ctl direct=true parallel=true
    3. Constraints
    - With the ENABLE parameter, constraints are re-enabled automatically once
    all data loads have finished. However, re-enabling occasionally fails, so
    you must always check the constraint status afterwards.
    - If a primary key or unique key constraint exists, re-enabling it after the
    load can take a long time because the index has to be built. For performance
    it is therefore preferable to load only the data in parallel direct mode and
    then create the indexes separately in parallel.
    4. Storage allocation and caveats
    When loading data in direct mode, the following steps are performed:
    - A temporary segment is created based on the STORAGE clause of the target table.
    - After the last data load has finished, the empty, i.e. unused, part of the
    last allocated extent is trimmed.
    - The header information of the extents belonging to the temporary segments
    is changed and the HWM (high-water mark) is adjusted so that the extents
    become part of the target table.
    This extent allocation method causes the following problems:
    - A parallel data load does not use the INITIAL extent that was allocated
    when the table was created.
    - The normal extent allocation rules are not followed: each process allocates
    an extent of the size defined by NEXT to start its load, and when a new
    extent is required it is allocated based on the PCTINCREASE value,
    calculated independently per process.
    - Severe fragmentation can occur.
    To reduce fragmentation and allocate storage efficiently:
    - Create the table with a small INITIAL extent of about 2-5 blocks.
    - In version 7.2 and later, specify the storage parameters in the OPTIONS
    clause, preferably with INITIAL and NEXT of the same size:
    OPTIONS (STORAGE=(MINEXTENTS n
    MAXEXTENTS n
    INITIAL n K
    NEXT n K
    PCTINCREASE n))
    - If the OPTIONS clause is written in the control file, it must appear after
    the INSERT INTO TABLES clause.

    First, thanks for the hints. In the meantime I found some other documentation regarding my issue.
    As far as I understand, if I want to load in parallel, I have to create multiple InfoPackages and split up the records in the selection criteria, e.g.:
    - InfoPackage 1, students 1 - 10.000
    - InfoPackage 2, students 10.001 - 20.000
    ...and so on.
    Following that, I need to create a process chain that starts loading all packages at the same point in time.
    Now, when the extractor is called, there are two parts that it runs through:
    - initialization of the extractor
    - fetching of records
    (via the flag i_initflag in the extractor).
    In the initialization I want to run the pre-fetch module. I have already worked out everything regarding that. Only when the pre-fetch is finished will the actual data loading start.
    What I am not sure about is: is this flag (the i_initflag mentioned above) passed for each InfoPackage that is started?
    Jeroen

  • Parallel processing using ABAP objects

    Hello friends,
    I had posted in the performance tuning forum regarding a performance issue; I am reposting it here as it involves the OO concept.
    The link to the previous posting:
    Link: [Independent processing of elements inside internal table;
    Here is the scenario:
    I have an internal table with 10 (independent) records, and I need to process them. The processing of one record doesn't have any influence on another. When we use a loop, the performance issue is that the 10th record has to wait until the first 9 records are processed, even though there is no dependency on the output.
    Could someone tell me a way to improve the performance?
    If I am not clear with the question, I will explain it more clearly...
    An internal table has 5 numbers, say (1, 3, 4, 6, 7),
    and we are trying to find the square of each number.
    With a loop, finding the square of 7 has to wait until 6 is completed, and that is a waste of time...
    This is related to parallel processing. I have referred to the parallel processing documents, but I want to do this conceptually...
    I am not using the conventional procedural paradigm but object orientation... I have a method which performs this action. What am I supposed to do in that regard?
    Comradely,
    K.Sibi

    Hi,
    As exemplified by Edward, there is no RFC/asynchronous support for methods of ABAP Objects as such. You would indeed need to "wrap" your method or ABAP Object in a function module, which you can then call with the addition STARTING NEW TASK. Optionally, you can define a method that processes the results of the function module that is executed asynchronously, as demonstrated in Edward's program as well.
    You do need some additional code to avoid the situation where your program takes all the available resources on the Application Server. Theoretically, you cannot bring the server or system down, as there is a system profile parameter that determines the maximum number of asynchronous tasks that the system will allow. However, in a productive environment, it would be a good idea to limit the number of asynchronous tasks started from your program so that other programs can use some as well.
    Function Group SPBT contains a set of Function Modules to manage parallel processing. In particular, FM SPBT_INITIALIZE will "initialize" a Server Group and return the maximum number of Parallel Tasks, as well as the number of free ones at the time of the initialization. The other FM of interest is SPBT_GET_CURR_RESOURCE_INFO, that can be called after the Server Group has been initialized, whenever you want to "fork" a new asynchronous task. This FM will give you the number of free tasks available for Parallel Processing at the time of calling the Function Module.
    Below is a code snippet showing how these Function Modules could be used, so that your program always leaves a minimum of 2 tasks for Parallel Processing, that will be available for other programs in the system.
          IF md_parallel IS NOT INITIAL.
            IF md_parallel_init IS INITIAL.
    *----- Server Group not initialized yet => initialize it and get the number of tasks available
              CALL FUNCTION 'SPBT_INITIALIZE'
                EXPORTING
                  group_name                     = ' '
                IMPORTING
                  max_pbt_wps                    = ld_max_tasks
                  free_pbt_wps                   = ld_free_tasks
                EXCEPTIONS
                  invalid_group_name             = 1
                  internal_error                 = 2
                  pbt_env_already_initialized    = 3
                  currently_no_resources_avail   = 4
                  no_pbt_resources_found         = 5
                  cant_init_different_pbt_groups = 6
                  OTHERS                         = 7.
              md_parallel_init = 'X'.
            ELSE.
    *----- Server Group initialized => check how many free tasks are
    *----- available in the Server Group for parallel processing
              CALL FUNCTION 'SPBT_GET_CURR_RESOURCE_INFO'
                IMPORTING
                  max_pbt_wps                 = ld_max_tasks
                  free_pbt_wps                = ld_free_tasks
                EXCEPTIONS
                  internal_error              = 1
                  pbt_env_not_initialized_yet = 2
                  OTHERS                      = 3.
            ENDIF.
            IF ld_free_tasks GE 2.
    *----- We have at least 2 remaining available tasks => reserve one
              ld_taskid = ld_taskid + 1.
            ENDIF.
          ENDIF.
    You may also need to program a WAIT statement, to wait until all asynchronous tasks "forked" from your program have completed their processing. Otherwise, you might find yourself in the situation where your main program has finished its processing, but some of the asynchronous tasks that it started are still running. If you do not need to report on the results of these asynchronous tasks, then that is not an issue. But, if you need to report on the success/failure of the processing performed by the asynchronous tasks, you would most likely report incomplete results in your program.
    In the example where you have 10 entries to process asynchronously in an internal table, if you do not WAIT until all asynchronous tasks have completed, your program might report success/failure for only 8 of the 10 entries, because your program has completed before the asynchronous tasks for entries 9 and 10 in your internal table.
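    To make the shape of the solution concrete, here is a minimal skeleton of the "wrap the method in a function module" pattern described above. The remote-enabled function module Z_SQUARE_RFC and all variable names are hypothetical; you would create the function module yourself (remote-enabled, IMPORTING iv_number TYPE i, EXPORTING ev_square TYPE i):
    REPORT zpartest_square.
    * Minimal sketch of asynchronous RFC calls from a report.
    * ASSUMPTION: Z_SQUARE_RFC is a hypothetical remote-enabled function
    * module (IMPORTING iv_number TYPE i, EXPORTING ev_square TYPE i).
    DATA: gt_numbers TYPE TABLE OF i,
          gv_number  TYPE i,
          gv_task    TYPE c LENGTH 8,
          gv_sent    TYPE i,
          gv_rcvd    TYPE i.
    START-OF-SELECTION.
      APPEND: 1 TO gt_numbers, 3 TO gt_numbers, 4 TO gt_numbers,
              6 TO gt_numbers, 7 TO gt_numbers.
      LOOP AT gt_numbers INTO gv_number.
    *   derive a unique task name per call
        gv_task = sy-tabix.
        CALL FUNCTION 'Z_SQUARE_RFC'
          STARTING NEW TASK gv_task
          PERFORMING receive_result ON END OF TASK
          EXPORTING
            iv_number             = gv_number
          EXCEPTIONS
            communication_failure = 1
            system_failure        = 2
            resource_failure      = 3.
        IF sy-subrc = 0.
          ADD 1 TO gv_sent.
        ENDIF.
      ENDLOOP.
    * without this WAIT the report can end before all results arrive
      WAIT UNTIL gv_rcvd >= gv_sent.
    FORM receive_result USING pv_task TYPE clike.
      DATA lv_square TYPE i.
      RECEIVE RESULTS FROM FUNCTION 'Z_SQUARE_RFC'
        IMPORTING
          ev_square             = lv_square
        EXCEPTIONS
          communication_failure = 1
          system_failure        = 2.
      ADD 1 TO gv_rcvd.
      IF sy-subrc = 0.
        WRITE: / 'Task', pv_task, 'returned square', lv_square.
      ENDIF.
    ENDFORM.
    In a real program you would combine this with the SPBT_* resource checks above, so that the loop only forks a new task while free dialog processes remain.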
    Given the complexity of Parallel Processing, you would only consider it in a customer program for situations where you have many (ie, thousands, if not tens of thousands) records to process, that the processing for each record tends to take a long time (like creating a Sales Order or Material via BAPI calls), and that you have a limited time window to process all of these records.
    Well, whatever your decision is, good luck.

  • Loading through Process Chains 2 Delta Loads and 1 Full Load (ODS to Cube).

    Dear All,
    I am loading through process chains with 2 delta loads and 1 full load from ODS to cube in 3.5. I am in the development process.
    My loading process is:
    Start - 2 delta loads - 1 full load - ODS activation - delete index - further update - delete overlapping requests from infocube - create index.
    My question is:
    When I load for the first time I get some data; for the next load I should get zero records, as there is no new data, but I am getting the same number of records for the next load. Maybe it is taking data from the full upload, I guess. Please guide me.
    Krishna.

    Hi,
    The reason you are getting the same number of records is, as you said, the full load: after running the deltas you got all the changed records, but after those two deltas you again have a full load step, which will pick up the whole of the data all over again.
    The reason you are getting the same number of records is:
    1> You are running the chain for the first time.
    2> You ran these delta IPs for the first time; while initializing the deltas you might have chosen "Initialization without data transfer", so when you ran the deltas for the first time they picked up the whole of the data. Running a full load after that will also pick up the same number of records.
    If the two deltas you are talking about run one after another, then I'd say you got the data because of some changes. Since you are loading from a single ODS to a cube, both your delta and your full load will pick up the same data "for the first time" during data marting, as they have the same data source (the ODS).
    Hopefully this will serve your purpose.
    Thanks & Regards
    Vaibhave Sharma
    Edited by: Vaibhave Sharma on Sep 3, 2008 10:28 PM

  • ODS delete, load and Activation process

    We recently ran into temp space issues during serial ODS activation of 26 million records. We load the data from the source using 8 separate requests. This is a weekly job: a complete delete and reload of an ODS with BEx reporting; no delta is possible.
    So this is what I am trying next in the process chain:
    1. Load package 1
    2 .Load package 2
    3. Selective deletion of existing ODS content
    4. Activate packages 1 & 2
    4. Load package 3
    5. Load package 4
    6. Activate packages 3 & 4
    6. Load package 5
    7. Load package 6
    8. Activate packages 5 & 6
    8. Load package 7
    9. Load package 8
    10. Activate packages 7 & 8
    Note the duplicate process numbers: those processes happen in parallel, each triggered by 2 successful actions from the previous process number, and the flow converges again at each activation process.
    My question is: would there be an issue with overlapping more than one activation job in the same ODS? If so, is there a better way to do this?
    Thanks loads,
    Al B.

    Before I try the 4-ODS scenario, I will revise my process chain:
    1. Load packages 1 & 2.
    2. Selective deletion of active data.
    Split the flow of successful actions to run simultaneously:
    3. Activate packages 1 & 2 and load packages 3 & 4.
    4. AND process; when 3 is complete, simultaneously:
    5. Activate packages 3 & 4 and load packages 5 & 6.
    6. AND process; when 5 is complete, simultaneously:
    Activate packages 5 & 6 and load packages 7 & 8.
    7. AND process; when 6 is complete:
    8. Activate packages 7 & 8.
    Sound complicated? It's not really. I will let you know after this weekend if it worked and how long it took.

  • "URGENT"Infopackage load error in process chian after upgrade

    Hi Gurus,
    We have a strange issue in one of the process chains after the upgrade from 3.5 to NW 2004s. In the process chain we have 5 infopackages: 3 in parallel, as they load data into 3 different cubes, and 2 in series after successful completion of the infopackages on top.
    Now the process chain shows a red infopackage with the error message "Last delta upload not yet completed. Cancel".
    When we check RSMO, we find that all 3 loads on top finish successfully, while the process chain log displays an error at the infopackage with the above message and the chain never advances. The loads actually finish successfully, but the process chain shows an error at the infopackage.
    When we investigated further, the request ID in the failed infopackage in the process chain shows a different ID which was never loaded into the cube, while the data load takes place successfully under a different request ID (checked in RSMO and the requests in the cube).
    I suppose the process chain triggers the infopackage load twice simultaneously; one load is successful and the second load request fails in the log because the first one is running... this is my guess...
    Please help me as this is an urgent issue.
    Regards,
    Anil

    Hi All,
    We found the root cause of the error message. When we trigger the process chain, the infopackage loads get triggered twice, about 2 minutes apart.
    If zero records are loaded, then the second request (after 2 minutes) also loads, with zero records; but if the first request is loading some records, the second request finds the data load still running and throws the error that the last delta has not yet finished.
    Have you seen this scenario before?
    Regards,
    Anil

  • Loading simultaneously several process chains

    Is it recommended to load fiscal year 1, fiscal year 2, fiscal year 3, fiscal year 4, etc. at the same time using process chains under ALEREMOTE?
    What consequences would it have for the system if several process chains load simultaneously into 1 InfoCube?
    There are millions of records coming from the DSO for each fiscal year.

    Hi Yosemite,
    It is a good option to load data in parallel into the same target when the selection criteria are different. But please ensure that the number of parallel processes in each infopackage/DTP is restricted, so that the system can sustain the parallel loads.
    For a DTP this setting can be maintained via Goto -> Settings for Batch Manager.
    Regards,
    Vikram

  • Need to generate multiple error files with rule file names during parallel data load

    Hi,
    Is there a way that MAXL could generate multiple error files during parallel data load?
    import database AsoSamp.Sample data
      connect as TBC identified by 'password'
      using multiple rules_file 'rule1' , 'rule2'
      to load_buffer_block starting with buffer_id 100
      on error write to "error.txt";
    I want to get error files like this - rule1.err, rule2.err (error files with the rule file name included). Is this possible in MaxL?
    I even faced a situation where, if I hard-code the error file name as above, it gives me the error file names error1.err and error2.err. Is there any solution for this?
    Thanks,
    DS

    Are you saying that if you specify the error file as "error.txt" Essbase actually produces multiple error files and appends a number?
    Tim.
    Yes, it appends them the way I said.
    Out of interest, though - why do you want to do this? The load rules must be set up to select different 'chunks' of input data; is it impossible to tell which rule an error record came from if they are all in the same file?
    I have about 6-7 rule files with which the data is pulled from SQL and loaded into Essbase. I don't say it's impossible to track the error record.
    Regardless, the only way I can think of to have total control of the error file name is to use the 'manual' parallel load approach. Set up a script to call multiple instances of MaxL, each performing a single load to a different buffer. Then commit them all together. This gives you most of the parallel load benefit, albeit with more complex scripting.
    I had the same thought of calling multiple instances of MaxL from a shell script. Could you please elaborate on this process? What sort of complexity is involved in this approach? Has anyone tried it before?
    Thanks,
    DS

  • Load Failure for Master Data object

    Dear Sdns,
    I am doing daily master data loading for the VENDOR_ATTR master data object... I am loading it through a DELTA update... I got an error in the Status tab.
    Error message from the source system
    Diagnosis
    An error occurred in the source system.
    System response
    Caller 09 contains an error message.
    Further analysis:
    The error occurred in Service API .
    Refer to the error message.
    Procedure
    How you remove the error depends on the error message.
    Note
    If the source system is a Client Workstation, then it is possible that the file that you wanted to load was being edited at the time of the data request. Make sure that the file is in the specified directory, that it is not being processed at the moment, and restart the request.
    The error message is: "Update mode R is not supported by the extraction API. The application program for the extraction of the data was called using
    update mode R. However, this is not supported by the InfoSource."
    Kindly help me if any of the SDN people have faced this kind of problem...
    Answers appreciated,
    Warm Regards,
    Aluri

    Hi Aluri,
    The file might be in use or being edited by some other person while you are doing the delta.
    Check the source system connection in AWB and activate it.
    Follow these steps:
    1. Activate the transfer rules for the data source in the source system.
    2. Activate the ODS.
    3. Activate the update rules (ODS to ODS and also ODS to cube).
    4. Right-click the ODS and choose 'Generate Export DataSource'.
    5. Replicate the data source.
    6. SE38 -> run RS_TRANSTRU_ACTIVATE_ALL for the source system.
    Then start the data loading again.
    Hope this helps.
    Assign points if useful.
    Thanks & Regards,
    Bindu

  • Process LOADING has no predecessor in Process chain

    Hi Experts,
    I have copied a process chain with all its processes, and I have included new steps such as change log deletion and delete & create index (the process chain has data mart functionality, which means it first loads to a DSO and then from the DSO to a cube). After the start variant I included a change log deletion step, after the activation of the DSO I included a delete index step, and after the IP load to the cube I included a create index step. The PC therefore has the form:
    1) Start Process
    2) Change log deletion
    3) Load IP (Target to only PSA)
    4) Read PSA and Update to data target (DSO)
    5) Activate the DSO
    6) Delete Index
    7) Load IP (Data mart from DSO to Cube)
    8) Create Index
    9) Delete request from PSA.
    All processes are connected, but when I activated the chain I got the error message that process LOADING has no predecessor,
    so please advise me what went wrong or what I should do.
    Note
    1) the load type is full load and I am on BI 7
    2) there is no DTP involved

    Hi,
    during creation or modification of the chain you must have deleted a few links to the loading step.
    Go to Settings -> Default Chains and see if autosuggest is on; if it is, this might have caused the problem.
    To resolve it, switch to the edit mode of the chain and then choose View -> Detail View On.
    Now you will be able to see the unlinked loading step.
    Delete the unlinked and unnecessary steps from there and reactivate the chain.
    Hope this works
    Regards
    Sudeep

  • MacBook Pro w/10.6.8 requires me to sign in twice at initial log-in screen before loading

    My MacBook Pro with 10.6.8 requires me to sign in twice at the initial log-in screen before loading. In addition, after Safari 5.1.7 has been open for a few hours the browser stops loading pages and then the entire computer freezes and I have to do a forced reboot with the power button. The first push of the power button brings up the four-options screen ("sleep, restart, turn off, logout"), but the options don't work; I have to force a shutdown.
    I don't have either of these problems from a safe boot. However a boot from a clean "Test" administrator account still requires me to login twice before the desktop loads.
    If I log in once as either Test or my user name, then choose the "back" button and switch to the other login option before pressing the return key, I only have to log in once at the new login screen.
    This problem began after I partitioned my 500GB HD with a 10.6.8 partition--the problem partition-- and a Lion 10.7.4 partition (to avoid iCloud issues with Snow Leopard). The Lion partition does not require double login. I have not done anything yet with the Lion partition except software updates.
    I have also been having LCD problems  (according to recent Genius Bar visit, not related to the NVIDIA 8300 problem). Sometimes (every 2-3 times I start the MBP) the MBP will reboot but the screen will be so dark I can't see the cursor and can barely see the desktop. The brightness is set at max and pressing the buttons up/down don't affect the dark screen. Usually I restart and the screen appears normal. This problem occurs with Safe, Test and User Name boots.
    Thanks for any suggestions

    I have the same issue and eventually reverted to logging in to my emails via the web, and have found that it's not so bad - but the moment an app wants to access Mail it slows down and I have had issues again.
    I know that Mac products, when accessing emails, have a tendency to semi-download images, and a friend told me this can be the issue (not sure how reliable this is), but I have noticed a correlation between the two factors.
    I know there are issues with Hotmail on certain devices (I got the warning from Hotmail's site), but I have yet to be shown how to remove my email from Mail - if you have any ideas I might try it.
    Please, if you have any advice on how to remove the main email account, that would be great, and it may be an option for you.
