Mechanism of AUTO parallel processing

Hi,
I have an Oracle RDBMS 11gR2.
I have a query that runs with parallel_degree_limit = CPU, parallel_degree_policy = AUTO, and parallel_max_servers = 8.
Cloud Control shows a varying degree of parallelism (which conforms to the above settings), but it also shows "parallel server execution requested = 48".
Does anyone know where that value of 48 comes from?
Thanks in advance.
Kind Regards

This may mean two things.
1. If you load data to the same target from the same source, you can do this in parallel; that is, you can split the data into multiple InfoPackages and load them.
2. The second case is when you load multiple targets from multiple sources.
To what extent you can run jobs in parallel depends on factors such as available memory, the number of jobs, the frequency of loads, and, on top of it all, the business needs.
Ravi Thothadri

Similar Messages

  • Parallel process in labview

    I have an image that I split in two, left and right, and both halves go through the same process of plotting a line. In order to see the process for both images, I have to do parallel processing to see the lines for both the left and right graphs.
    I put a while loop on both the left and right image processes, but I still have the same output. Can someone help me?

    As a rule of thumb, copy-paste code is not usually a good thing. As stated above, there is not much processing here, so there is no need to do it in parallel. I would build the two arrays you have into a 2-D array, then use an auto-indexing For Loop with all the identical processing in it. From there you can build the x and y input arrays for your graph and feed them to your XY graphs. Hopefully this is clear. If you post your code in version 8.5 I can draw it up for you.
    CLA, LabVIEW Versions 2010-2013

  • Parallel processing in background

    Hi All,
    I am processing 1 million records in background, which takes approximately 10 hours. I want to reduce the time to less than 1 hour and tried using parallel processing, but the tasks run in dialog work processes and produce ABAP short dumps due to time-outs.
    Are there any other solutions with which I can reduce the total processing time?
    Please note that I cannot split the load: I get 1 million records from a single select query, and after processing all those records in SAP, I send them to XI, and XI posts them in the legacy system.
    Please also note that all other performance tuning has already been done.
    Thanks,
    Rajesh.

    Hi Rajesh,
    Refer to this sample code for Parallel Processing:
    By doing this your processing time will be highly optimized.
    Go through the description given in the code at each level.
    This code checks the available work processes and assigns data in packets for processing. This way you save a lot of time, especially when the data runs into millions of records.
    Hope it helps.
    REPORT PARAJOB.
    * Data declarations
    DATA: GROUP LIKE RZLLITAB-CLASSNAME VALUE ' ',
    "Parallel processing group.
    "SPACE = group default (all
    "servers)
    WP_AVAILABLE TYPE I, "Number of dialog work processes
    "available for parallel processing
    "(free work processes)
    WP_TOTAL TYPE I, "Total number of dialog work
    "processes in the group
    MSG(80) VALUE SPACE, "Container for error message in
    "case of remote RFC exception.
    INFO LIKE RFCSI, C, "Message text
    JOBS TYPE I VALUE 10, "Number of parallel jobs
    SND_JOBS TYPE I VALUE 1, "Work packets sent for processing
    RCV_JOBS TYPE I VALUE 1, "Work packet replies received
    EXCP_FLAG(1) TYPE C, "Number of RESOURCE_FAILUREs
    TASKNAME(4) TYPE N VALUE '0001', "Task name (name of
    "parallel processing work unit)
    BEGIN OF TASKLIST OCCURS 10, "Task administration
    TASKNAME(4) TYPE C,
    RFCDEST LIKE RFCSI-RFCDEST,
    RFCHOST LIKE RFCSI-RFCHOST,
    END OF TASKLIST.
    * Optional call to SPBT_INITIALIZE to check the
    * group in which parallel processing is to take place.
    * Could be used to optimize the sizing of work packets
    * (e.g. records to be processed / WP_AVAILABLE).
    CALL FUNCTION 'SPBT_INITIALIZE'
    EXPORTING
    GROUP_NAME = GROUP
    "Name of group to check
    IMPORTING
    MAX_PBT_WPS = WP_TOTAL
    "Total number of dialog work
    "processes available in group
    "for parallel processing
    FREE_PBT_WPS = WP_AVAILABLE
    "Number of work processes
    "available in group for
    "parallel processing at this
    "moment
    EXCEPTIONS
    INVALID_GROUP_NAME = 1
    "Incorrect group name; RFC
    "group not defined. See
    "transaction RZ12
    INTERNAL_ERROR = 2
    "R/3 System error; see the
    "system log (transaction
    "SM21) for diagnostic info
    PBT_ENV_ALREADY_INITIALIZED = 3
    "Function module may be
    "called only once; is called
    "automatically by R/3 if you
    "do not call before starting
    "parallel processing
    CURRENTLY_NO_RESOURCES_AVAIL = 4
    "No dialog work processes
    "in the group are available;
    "they are busy or server load
    "is too high
    NO_PBT_RESOURCES_FOUND = 5
    "No servers in the group
    "met the criteria of >
    "two work processes
    "defined.
    CANT_INIT_DIFFERENT_PBT_GROUPS = 6
    "You have already initialized
    "one group and have now tried
    "initialize a different group.
    OTHERS = 7.
    CASE SY-SUBRC.
    WHEN 0.
    "Everything’s ok. Optionally set up for optimizing size of
    "work packets.
    WHEN 1.
    "Non-existent group name. Stop report.
    MESSAGE E836. "Group not defined.
    WHEN 2.
    "System error. Stop and check system log for error
    "analysis.
    WHEN 3.
    "Programming error. Stop and correct program.
    MESSAGE E833. "PBT environment was already initialized.
    WHEN 4.
    "No resources: this may be a temporary problem. You
    "may wish to pause briefly and repeat the call. Otherwise
    "check your RFC group administration: Group defined
    "in accordance with your requirements?
    MESSAGE E837. "All servers currently busy.
    WHEN 5.
    "Check your servers, network, operation modes.
    WHEN 6.
    "Programming error: you have already initialized a different
    "parallel processing group. Stop and correct the program.
    ENDCASE.
    * Do parallel processing. Use CALL FUNCTION STARTING NEW TASK
    * DESTINATION IN GROUP to call the function module that does the
    * work. Make a call for each record that is to be processed, or
    * divide the records into work packets. In each case, provide the
    * set of records as an internal table in the CALL FUNCTION
    * keyword (EXPORTING, TABLES arguments).
    DO.
    CALL FUNCTION 'RFC_SYSTEM_INFO' "Function module to perform
    "in parallel
    STARTING NEW TASK TASKNAME "Name for identifying this
    "RFC call
    DESTINATION IN GROUP group "Name of group of servers to
    "use for parallel processing.
    "Enter group name exactly
    "as it appears in transaction
    "RZ12 (all caps). You may
    "use only one group name in a
    "particular ABAP program.
    PERFORMING RETURN_INFO ON END OF TASK
    "This form is called when the
    "RFC call completes. It can
    "collect IMPORT and TABLES
    "parameters from the called
    "function with RECEIVE.
    EXCEPTIONS
    COMMUNICATION_FAILURE = 1 MESSAGE msg
    "Destination server not
    "reached or communication
    "interrupted. MESSAGE msg
    "captures any message
    "returned with this
    "exception (E or A messages
    "from the called FM, for
    "example. After exception
    "1 or 2, instead of aborting
    "your program, you could use
    "SPBT_GET_PP_DESTINATION and
    "SPBT_DO_NOT_USE_SERVER to
    "exclude this server from
    "further parallel processing.
    "You could then re-try this
    "call using a different
    "server.
    SYSTEM_FAILURE = 2 MESSAGE msg
    "Program or other internal
    "R/3 error. MESSAGE msg
    "captures any message
    "returned with this
    "exception.
    RESOURCE_FAILURE = 3. "No work processes are
    "currently available. Your
    "program MUST handle this
    "exception.
    * Add any exceptions generated by the called function module to the
    * EXCEPTIONS list above. They are returned to you and you can
    * respond to them here.
    CASE SY-SUBRC.
    WHEN 0.
    "Administration of asynchronous RFC tasks
    "Save name of task...
    TASKLIST-TASKNAME = TASKNAME.
    "... and get server that is performing RFC call.
    CALL FUNCTION 'SPBT_GET_PP_DESTINATION'
    EXPORTING
    RFCDEST = TASKLIST-RFCDEST
    EXCEPTIONS
    OTHERS = 1.
    APPEND TASKLIST.
    WRITE: / 'Started task: ', TASKLIST-TASKNAME COLOR 2.
    TASKNAME = TASKNAME + 1.
    SND_JOBS = SND_JOBS + 1.
    "Mechanism for determining when to leave the loop. Here, a
    "simple counter of the number of parallel processing tasks.
    "In production use, you would end the loop when you have
    "finished dispatching the data that is to be processed.
    JOBS = JOBS - 1. "Number of existing jobs
    IF JOBS = 0.
    EXIT. "Job processing finished
    ENDIF.
    WHEN 1 OR 2.
    "Handle communication and system failure. Your program must
    "catch these exceptions and arrange for a recoverable
    "termination of the background processing job.
    "Recommendation: Log the data that has been processed when
    "an RFC task is started and when it returns, so that the
    "job can be restarted with unprocessed data.
    WRITE msg.
    "Remove server from further consideration for
    "parallel processing tasks in this program.
    "Get name of server just called...
    CALL FUNCTION 'SPBT_GET_PP_DESTINATION'
    EXPORTING
    RFCDEST = TASKLIST-RFCDEST
    EXCEPTIONS
    OTHERS = 1.
    "Then remove from list of available servers.
    CALL FUNCTION 'SPBT_DO_NOT_USE_SERVER'
    IMPORTING
    SERVERNAME = TASKLIST-RFCDEST
    EXCEPTIONS
    INVALID_SERVER_NAME = 1
    NO_MORE_RESOURCES_LEFT = 2
    "No servers left in group.
    PBT_ENV_NOT_INITIALIZED_YET = 3
    OTHERS = 4.
    WHEN 3.
    "No resources (dialog work processes) available at
    "present. You need to handle this exception, waiting
    "and repeating the CALL FUNCTION until processing
    "can continue or it is apparent that there is a
    "problem that prevents continuation.
    MESSAGE I837. "All servers currently busy.
    "Wait for replies to asynchronous RFC calls. Each
    "reply should make a dialog work process available again.
    IF EXCP_FLAG = SPACE.
    EXCP_FLAG = 'X'.
    "First attempt at RESOURCE_FAILURE handling. Wait
    "until all RFC calls have returned or up to 1 second.
    "Then repeat CALL FUNCTION.
    WAIT UNTIL RCV_JOBS >= SND_JOBS UP TO '1' SECONDS.
    ELSE.
    "Second attempt at RESOURCE_FAILURE handling
    WAIT UNTIL RCV_JOBS >= SND_JOBS UP TO '5' SECONDS.
    "SY-SUBRC 0 from WAIT shows that replies have returned.
    "The resource problem was therefore probably temporary
    "and due to the workload. A non-zero RC suggests that
    "no RFC calls have been completed, and there may be
    "problems.
    IF SY-SUBRC = 0.
    CLEAR EXCP_FLAG.
    ELSE. "No replies
    "Endless loop handling
    ENDIF.
    ENDIF.
    ENDCASE.
    ENDDO.
    * Wait for end of job: replies from all RFC tasks.
    * Receive remaining asynchronous replies.
    WAIT UNTIL RCV_JOBS >= SND_JOBS.
    LOOP AT TASKLIST.
    WRITE:/ 'Received task:', TASKLIST-TASKNAME COLOR 1,
    30 'Destination: ', TASKLIST-RFCDEST COLOR 1.
    ENDLOOP.
    * This routine is triggered when an RFC call completes and
    * returns. The routine uses RECEIVE to collect IMPORT and TABLE
    * data from the RFC function module.
    * Note that the WRITE keyword is not supported in asynchronous
    * RFC. If you need to generate a list, then your RFC function
    * module should return the list data in an internal table. You
    * can then collect this data and output the list at the conclusion
    * of processing.
    FORM RETURN_INFO USING TASKNAME.
    DATA: INFO_RFCDEST LIKE TASKLIST-RFCDEST.
    RECEIVE RESULTS FROM FUNCTION 'RFC_SYSTEM_INFO'
    IMPORTING RFCSI_EXPORT = INFO
    EXCEPTIONS
    COMMUNICATION_FAILURE = 1
    SYSTEM_FAILURE = 2.
    RCV_JOBS = RCV_JOBS + 1. "Receiving data
    IF SY-SUBRC NE 0.
    * Handle communication and system failure here.
    ELSE.
    READ TABLE TASKLIST WITH KEY TASKNAME = TASKNAME.
    IF SY-SUBRC = 0. "Register data
    TASKLIST-RFCHOST = INFO-RFCHOST.
    MODIFY TASKLIST INDEX SY-TABIX.
    ENDIF.
    ENDIF.
    ENDFORM.
    Reward points if that helps.
    Manish
    Message was edited by:
            Manish Kumar

  • Strange behaviour in parallel processing (aRFC)

    Hi,
    I have programmed a data extraction program in SAP IS-U. Due to the sheer size of the tables (900 million+ rows) I had to use parallel processing to keep the runtime acceptable.
    When running in the QA environment I see odd behaviour. There are about 49 dialog processes available, but the program never uses more than around 15. In transaction SARFC the settings allow the application server to take a load of 100% of its processes.
    Furthermore, I see that in the first few minutes a lot of jobs are created. Then for a few minutes almost nothing happens (maybe 2 or 3 are running), and then a few minutes later it is back to normal. This cycle repeats over and over and takes around 20 minutes. The Basis people say neither the system load nor the DB load is very high.
    My questions are:
    1) Why does the job counter never exceed the 15 jobs, even though there are plenty available (65 in total, 49 on my app server)?
    2) Why is the performance so wobbly? I would expect that the slots in SM51 should always be filled with fresh jobs.
    With kind regards,
    Crispian Stones
    P.S.
    The mechanism I use is similar to the following:
    loop at tb_todo into wa_todo.
      call function 'SPBT_GET_CURR_RESOURCE_INFO'
       importing FREE_PBT_WPS = available.
      check available gt 1.
      call function 'Z_MY_EXTRACTOR'
        starting new task my_taskname
        destination my_destination
        performing my_callback on end of task
        exporting
          i_data = wa_todo.
      if sy-subrc eq 0.
        add 1 to created.
      endif.
    endloop.
    wait until returned ge created.
    form my_callback using p_taskname.
      add 1 to returned.
      receive results from function 'Z_MY_EXTRACTOR'.
    endform.

    Hello,
    I am facing a similar issue in one of my parallel processing programs as well. The program, when executed with a data set of 10,000 records, takes 65 minutes to complete. One would expect it to take 650 minutes (or even less) to process a data set of approximately 100,000 records.
    However, when I run the program for a file with approximately 100,000 records, the program runs fine initially (i.e. I can see multiple dialog processes being invoked in SM50), but after a while it starts running on ONLY ONE dialog process. I am not quite sure where, when and why this PARALLEL to SEQUENTIAL switch is happening. Because of this, the program drags on and on. I would highly appreciate your suggestions/tips to put this bug to sleep.
    Here is a summary of the logic used...
      w_group = 'BATCH_PARALLEL'.
      w_task  = w_task + 1.
      CALL FUNCTION 'SPBT_INITIALIZE'
       EXPORTING
         group_name                           = w_group
       IMPORTING
         max_pbt_wps                          = w_pr_total   "Total processes
         free_pbt_wps                         = w_pr_avl     "Avail processes
       EXCEPTIONS
         invalid_group_name                   = 1
         internal_error                       = 2
         pbt_env_already_initialized          = 3
         currently_no_resources_avail         = 4
         no_pbt_resources_found               = 5
         cant_init_different_pbt_groups       = 6
         OTHERS                               = 7.
      IF sy-subrc <> 0.
    * Raise error message and quit
        w_wait = c_x.
    * If everything went well, continue processing
      ELSE.
        CLEAR: w_wait.
    * The subroutine that receives results from the parallel FMs will reduce
    * this counter and set the flag W_WAIT once the value is equal to ZERO
        w_count = LINES( data ).
    * Refresh the temporary table that will be populated for every partner
        REFRESH: t_data.
        LOOP AT data.
    * Keep appending data to the temporary table
          APPEND data TO t_data.
          AT END OF partner.
            CLEAR: w_subrc.
            CALL FUNCTION 'Z_PARALLEL_FUNCTION'
              STARTING NEW TASK w_task
              DESTINATION IN GROUP w_group
              PERFORMING process_return ON END OF TASK
              TABLES
                data                  = t_data
              EXCEPTIONS
                communication_failure = 1      "Mandatory for || processing
                system_failure        = 2      "Mandatory for || processing
                RESOURCE_FAILURE      = 3      "Mandatory for || processing
                OTHERS                = 4.
            w_subrc = sy-subrc.
    * Check if everything went well...
            CLEAR: w_rfcdest.
            CASE w_subrc.
              WHEN 0.
    * This variable keeps track of the number of threads initiated. In case
    * all the processes are busy, we should compare this with the variable
    * w_recd (set later in the subroutine 'PROCESS_RETURN'), and wait till
    * w_recd >= w_sent.
                w_sent = w_sent + 1.
    * Track all the tasks initiated.
                CLEAR: wa_tasklist.
                wa_tasklist-taskname = w_task.
                APPEND wa_tasklist TO t_tasklist.
              WHEN 1 OR 2.
    * Populate the error log table and continue to process the rest.
              WHEN OTHERS.
    * There might be a lack of resources. Wait till some processes
    * are freed again. Populate the records back to the main table.
                CLEAR: wa_data.
                LOOP AT t_data INTO wa_data.
                  APPEND wa_data TO data.
                ENDLOOP.
                WAIT UNTIL w_recd >= w_sent. "IS THIS THE CULPRIT?
            ENDCASE.
    * Increment the task number
            w_task = w_task + 1.
    * Refresh the temporary table
            REFRESH t_data.
          ENDAT.
        ENDLOOP.
      ENDIF.
    * Wait till all the records are returned.
      WAIT UNTIL w_wait = c_x UP TO '120' SECONDS.
    FORM process_return USING p_taskname.                       "#EC CALLED
      REFRESH: t_data_tmp.
      CLEAR  : w_subrc.
    * Check the task for which this subroutine is processed!!!
      CLEAR: wa_tasklist.
      READ TABLE t_tasklist INTO wa_tasklist WITH KEY taskname = p_taskname.
    * If the task wasn't already processed...
      IF sy-subrc eq 0.
    * Delete the task from the table T_TASKLIST
        DELETE TABLE t_tasklist FROM wa_tasklist.
    * Receive the results back from the function module
        RECEIVE RESULTS FROM FUNCTION 'Z_PARALLEL_FUNCTION'
          TABLES
            address_data          = t_data_tmp
          EXCEPTIONS
            communication_failure = 1      "Mandatory for || processing
            system_failure        = 2      "Mandatory for || processing
            RESOURCE_FAILURE      = 3      "Mandatory for || processing
            OTHERS                = 4.
    * Store sy-subrc in a temporary variable.
        w_subrc = sy-subrc.
    * Update the counter (number of tasks/jobs/threads received)
        w_recd = w_recd + 1.
    * Check the returned values
        IF w_subrc EQ 0.
    * Do necessary processing!!!
        ENDIF.
    * Subtract the number of records that were returned back from the
    * total number of records to be processed
        w_count = w_count - LINES( t_data_tmp ).
    * If the counter is ZERO, set W_WAIT.
        IF w_count = 0.
          w_wait = c_x.
        ENDIF.
      ENDIF.
    ENDFORM.                    " process_return
    Thanks,
    Muthu
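
    For what it is worth, here is a minimal sketch (not from the thread; c_max_in_flight and l_limit are hypothetical names, w_sent and w_recd are the counters from the code above) of one way to keep a burst of busy work processes from silently degrading the run to one task at a time: throttle before each dispatch instead of waiting for all outstanding replies inside the RESOURCE_FAILURE branch.
    * Before each CALL FUNCTION ... STARTING NEW TASK, cap the number of
    * outstanding tasks; each returning task frees a dialog work process
    * and bumps w_recd in the callback, which re-evaluates the WAIT condition.
      CONSTANTS c_max_in_flight TYPE i VALUE 8.
      DATA l_limit TYPE i.
      l_limit = w_sent - c_max_in_flight.
      WAIT UNTIL w_recd >= l_limit UP TO '10' SECONDS.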

  • Parallel processing for one large message

    I am having some trouble from a messaging performance perspective.
    Sender:ABAP Proxy
    Receiver:File Adapter
    I'd like to use parallel processing for one large message.
    And the receiver file needs to be a single file.
    Could you let me know how to set this up?
    Best regards,
    Koji Nagai

    Hi
    Can you elaborate on your requirement?
    How are you trying to achieve parallel processing in XI?
    Since you mentioned that the source is a proxy, there should be some trigger mechanism, say a selection screen; you can restrict the values there, use an append strategy on the file, and execute the same.
    REgards
    Krish

  • Parallel Processing : How to Handle Resource failure?

    Hi,
    I have implemented parallel processing / asynchronous RFC calls in my system because we have to process millions of records and processing time matters. My program works fine in development and quality for a small number of records, but during SVT I am encountering the RESOURCE_FAILURE exception. So far I have tried waiting for more time and then processing again, and on failure I have also tried to process sequentially, but the second approach did not work: executing a normal FM call on RESOURCE_FAILURE ends up terminating the parallel processing.
    Any pointer on how to handle this is appreciated.
    Regards,
    Deepak Bhalla

    Handling the RESOURCE_FAILURE exception: As each parallel processing task is dispatched, the SAP system counts down the number of resources (dialog work processes) available for processing additional tasks. This count goes back up as each parallel processing task completes and returns to your program.
    Should your parallel processing tasks take a long time to complete, the parallel processing resources may temporarily run out. In this case, CALL FUNCTION returns the exception RESOURCE_FAILURE. This simply means that all dialog work processes in the RFC group that your program is using are busy.
    Your program must now wait until resources become available and then re-issue the CALL FUNCTION that failed. In the sample program, we use a simple, reasonably failsafe wait mechanism: the program waits for parallel processing tasks to return, which frees up resources. The WAIT also specifies an initial time-out of 1 second. If the CALL FUNCTION fails again, the WAIT is repeated with a longer time-out. You can increase the time-outs if you expect your parallel tasks to take longer to complete. You should also add code to exit from the retry loop after a suitable number of iterations.
    Use the WAIT statement; a compact sketch of this retry pattern follows below.
    Hope this resolves your issue.
    - Raj
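
    For reference, a minimal sketch of the retry loop described above. It is not from the thread: the function module Z_WORK_PACKET, the task/group variables and the counters g_sent/g_recd are placeholder names, and the callback form that increments g_recd is assumed to exist as in the sample program further up.
    * Dispatch one work packet; on RESOURCE_FAILURE wait for running tasks
    * to return (each reply frees a dialog work process) and then retry.
      DATA l_tries TYPE i.
      DO.
        CALL FUNCTION 'Z_WORK_PACKET'
          STARTING NEW TASK l_taskname
          DESTINATION IN GROUP l_group
          PERFORMING collect_result ON END OF TASK
          TABLES
            it_packet             = lt_packet
          EXCEPTIONS
            communication_failure = 1
            system_failure        = 2
            resource_failure      = 3.
        IF sy-subrc <> 3.
          EXIT. "dispatched (0) or hard error (1/2): handle outside this loop
        ENDIF.
        l_tries = l_tries + 1.
        IF l_tries > 10.
          EXIT. "give up after a bounded number of retries
        ENDIF.
    * The time-out grows with each attempt, as suggested above
        WAIT UNTIL g_recd >= g_sent UP TO l_tries SECONDS.
      ENDDO.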

  • Parallel Processing : Unable to capture return results using RECIEVE

    Hi,
    I am using parallel processing in one of my programs and it is working fine, but I am not able to collect the return results using the RECEIVE statement.
    I am using
      CALL FUNCTION <FUNCTION MODULE NAME>
             STARTING NEW TASK TASKNAME DESTINATION IN GROUP DEFAULT_GROUP
             PERFORMING RETURN_INFO ON END OF TASK
    and then in subroutine RETURN_INFO I am using RECEIVE statement.
    My RFC is calling another BAPI and doing explicit commit as well.
    Any pointer will be of great help.
    Regards,
    Deepak Bhalla
    Message was edited by: Deepak Bhalla
    I used the WAIT command after the RFC call and it worked. Additionally, I used the MESSAGE addition in the RECEIVE statement, because the RECEIVE statement was returning sy-subrc 2. A small sketch of that combination follows below.
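    For reference, a small sketch of that combination (Z_MY_RFC, the g_* counters and gt_errors are placeholder names, not from the post): in the main program, WAIT for the replies after dispatching, and in the callback use the MESSAGE addition so you can see why RECEIVE returned sy-subrc 2.
    * Main program, after all CALL FUNCTION ... STARTING NEW TASK calls:
      WAIT UNTIL g_received >= g_sent UP TO '60' SECONDS.
    * Callback form named in PERFORMING ... ON END OF TASK:
    FORM return_info USING p_taskname.
      DATA l_msg(80) TYPE c.
      RECEIVE RESULTS FROM FUNCTION 'Z_MY_RFC'
        IMPORTING
          es_result             = g_result
        EXCEPTIONS
          communication_failure = 1 MESSAGE l_msg
          system_failure        = 2 MESSAGE l_msg.
      g_received = g_received + 1.
      IF sy-subrc <> 0.
        APPEND l_msg TO gt_errors. "collect here, output after the final WAIT
      ENDIF.
    ENDFORM.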

    Not sure what's going on here. Possibly a corrupt drive? Or the target drive is full?
    Try running the imagex command manually from a F8 cmd window (in WinPE)
    "\\OCS-MDT\CCBShare$\Tools\X64\imagex.exe" /capture /compress maximum C: "\\OCS-MDT\CCBShare$\Captures\CCB01-8_15_14.wim" "CCB01CDrive" /flags ENTERPRISE
    Keith Garner - Principal Consultant [owner] -
    http://DeploymentLive.com

  • Parallel processing of mass data : sy-subrc value is not changed

    Hi,
    I have used parallel processing of mass data with "STARTING NEW TASK". In my function module I handle the exceptions and finally raise an application-specific classic exception to be handled in my main report program. Somehow sy-subrc is not changed and always returns 0, even if the exception is raised.
    Can anyone help me with this?
    Thanks & Regards,
    Nitin

    Hi Silky,
    I've built a block of code to explain this.
      DATA: ls_edgar TYPE zedgar,
            l_task(40).
      DELETE FROM zedgar.
      COMMIT WORK.
      l_task = 'task1'.
      ls_edgar-matnr = '123'.
      ls_edgar-text = 'qwe'.
      CALL FUNCTION 'Z_EDGAR_COMMIT_ROLLBACK' STARTING NEW TASK l_task PERFORMING f_go ON END OF TASK
        EXPORTING
          line = ls_edgar.
      l_task = 'task2'.
      ls_edgar-matnr = 'abc'.
      ls_edgar-text = 'def'.
      CALL FUNCTION 'Z_EDGAR_COMMIT_ROLLBACK' STARTING NEW TASK l_task PERFORMING f_go ON END OF TASK
        EXPORTING
          line = ls_edgar.
      l_task = 'task3'.
      ls_edgar-matnr = '456'.
      ls_edgar-text = 'xyz'.
      CALL FUNCTION 'Z_EDGAR_COMMIT_ROLLBACK' STARTING NEW TASK l_task PERFORMING f_go ON END OF TASK
        EXPORTING
          line = ls_edgar.
    *&      Form  f_go
    FORM f_go USING p_c TYPE ctype.
      RECEIVE RESULTS FROM FUNCTION 'Z_EDGAR_COMMIT_ROLLBACK' EXCEPTIONS err = 2.
      IF sy-subrc = 2.
    *this won't affect the LUW of the received function
        ROLLBACK WORK.
      ELSE.
    *this won't affect the LUW of the received function
        COMMIT WORK.
      ENDIF.
    ENDFORM.                    "f_go
    and the function is:
    FUNCTION z_edgar_commit_rollback.
    *"*"Interface local:
    *"  IMPORTING
    *"     VALUE(LINE) TYPE  ZEDGAR
    *"  EXCEPTIONS
    *"      ERR
      MODIFY zedgar FROM line.
      IF line-matnr CP 'a*'.
    *comment raise or rollback/commit to test
    *    RAISE err.
        ROLLBACK WORK.
      ELSE.
        COMMIT WORK.
      ENDIF.
    ENDFUNCTION.
    ok.
    In your main program you have a Logical Unit of Work (LUW), which consists of an application transaction and is associated with a database transaction. Once you start a new task, you are creating an independent LUW, with its own database transaction.
    So if you do a commit or rollback in your function, the effect is only on the records you are processing in that function.
    There is a way to capture the event when this LUW concludes in the main LUW: that is the PERFORMING whatever ON END OF TASK. In there you can get the result of the function, but you cannot commit or roll back the LUW of the function, since that has already implicitly happened at the conclusion of the function. You can test this by commenting the code I've supplied accordingly.
    So, if you want to roll back the LUW of the function, you had better do it inside the function.
    I don't think this matches your question exactly, but maybe it will lead you onto the right track. Give me more details if it doesn't.
    Hope it helps,
    Edgar

  • Parallel Processing and Capacity Utilization

    Dear Guru's,
    We have following requirement.
    Workcenter A Capacity is 1000.   (Operations are similar)
    Workcenter B Capacity is 1500.   (Operations are similar)
    Workcenter C Capacity is 2000.   (Operations are similar)
    1) For Product A: the production order quantity is 4500. Can we use all work centers in parallel processing through the routing?
    2) For Product B: the production order quantity is 2500. Can we use only work centers A and B in parallel processing through the routing?
    If yes, please explain how.
    Regards,
    Rashid Masood

    Maybe you can create a virtual work center VWCA = A+B+C (connected via a hierarchy with transaction CR22) and another VWCB = A+B, and route your products to each virtual work center.

  • Parallel processing open items (FPO4P)

    Hello,
    I have a question about transaction FPO4p (parallel processing of open items).
    When saving the parameters, the following message always appears: "Report cannot be evaluated in parallel". The information details tell you that when you use a specific parallel processing object, you also need to use that field to sort on.
    In my case I use the object GPART for parallel processing (see tab Technical Settings). In the tab Output Control I selected a line layout which is sorted by business partner (GPART). Furthermore, no selection options are used.
    Does anyone know why the transaction cannot save the parameters and shows the error message specified above? I really don't know what goes wrong.
    Thank you in advance.
    Regards, Ramon.

    Ramon
    Apply note 1115456.
    Maybe that note can help you
    Regards
    Arcturus

  • How to do parallel processing with dynamic internal table

    Hi All,
    I need to implement parallel processing that involves dynamically created internal tables. I tried doing so using RFC function modules (using STARTING NEW TASK and similar methods) but didn't have any success: this requires RFC-enabled function modules, and RFC-enabled function modules do not allow generic data types (STANDARD TABLE), which are needed for passing dynamic internal tables. My exact requirement is as follows:
    1. I have a large chunk of data in two internal tables; one of them is formed dynamically and hence its structure is not known at the time of coding.
    2. This data has to be processed together to generate another internal table, whose structure is pre-defined. But this data processing is taking a very long time, as the number of records is close to a million.
    3. I need to divide the dynamic internal table into packets of (say) 1000 records each, pass each packet to a function module, and submit it to run in another task. Many such tasks will be executed in parallel.
    4. The function module running in parallel can insert the processed data into a database table, and the main program can access it from there.
    Unfortunately, due to the limitation of not allowing generic data types in RFC, I'm unable to do this. Does anyone have any idea how to implement parallel processing with dynamic internal tables under these conditions?
    Any help will be highly appreciated.
    Thanks and regards,
    Ashin

    try the below code...
      DATA: w_subrc TYPE sy-subrc.
      DATA: w_infty(5) TYPE  c.
      data: w_string type string.
      FIELD-SYMBOLS: <f1> TYPE table.
      FIELD-SYMBOLS: <f1_wa> TYPE ANY.
      DATA: ref_tab TYPE REF TO data.
      CONCATENATE 'P' infty INTO w_infty.
      CREATE DATA ref_tab TYPE STANDARD TABLE OF (w_infty).
      ASSIGN ref_tab->* TO <f1>.
    * Create dynamic work area
      CREATE DATA ref_tab TYPE (w_infty).
      ASSIGN ref_tab->* TO <f1_wa>.
      IF begda IS INITIAL.
        begda = '18000101'.
      ENDIF.
      IF endda IS INITIAL.
        endda = '99991231'.
      ENDIF.
      CALL FUNCTION 'HR_READ_INFOTYPE'
        EXPORTING
          pernr           = pernr
          infty           = infty
          begda           = begda
          endda           = endda
        IMPORTING
          subrc           = w_subrc
        TABLES
          infty_tab       = <f1>
        EXCEPTIONS
          infty_not_found = 1
          OTHERS          = 2.
      IF sy-subrc <> 0.
        subrc = w_subrc.
      ELSE.
      ENDIF.
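
    The sample above still relies on a known DDIC structure name. A hedged sketch of one workaround for the generic-type limitation discussed in the question (Z_PROCESS_PACKET, lv_packet and iv_ddic_name are placeholder names, not from the thread): serialize the dynamically typed table to an asXML string with CALL TRANSFORMATION on the caller side, pass the string plus the structure name through the RFC interface, and rebuild the table inside the RFC-enabled function module.
    * Caller: pack the dynamic table (field symbol <f1> TYPE table) into a string
      DATA lv_packet TYPE string.
      CALL TRANSFORMATION id
        SOURCE data = <f1>
        RESULT XML lv_packet.
      CALL FUNCTION 'Z_PROCESS_PACKET'
        STARTING NEW TASK l_task
        DESTINATION IN GROUP l_group
        PERFORMING collect ON END OF TASK
        EXPORTING
          iv_packet             = lv_packet      "plain STRING is RFC-safe
          iv_ddic_name          = w_infty        "structure name, e.g. 'P0002'
        EXCEPTIONS
          communication_failure = 1
          system_failure        = 2
          resource_failure      = 3.
    * Inside Z_PROCESS_PACKET: rebuild the same table type and unpack
      DATA lr_tab TYPE REF TO data.
      FIELD-SYMBOLS <lt_data> TYPE STANDARD TABLE.
      CREATE DATA lr_tab TYPE STANDARD TABLE OF (iv_ddic_name).
      ASSIGN lr_tab->* TO <lt_data>.
      CALL TRANSFORMATION id
        SOURCE XML iv_packet
        RESULT data = <lt_data>.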

  • How to achieve parallel processing in a single request?

    Hi all,
    I have a method in a session EJB that will perform some business logic before it returns an answer to the client. The logic it performs is to collect data from the application's database and two external systems, before sending all the data to a third external system to get a response and send it back to the client. Each external system is quite slow, so I would like to do all the data collection concurrently, with parallel processing. How should I handle this? I'm not allowed to create my own threads in EJBs. Can I use MDBs in some way? To the calling client this should be a synchronous call...
    Grateful for any suggestions
    Cheers
    Anders =)

    Usually, the request is received by a component located in the web container, for example via an HTTP request (including web services). This component is able to start threads to allow parallel processing. Now, if for some reason the request arrives directly at the EJB level and you cannot move its receiver to a web component, I think JMS is not a viable solution, because you would switch to asynchronous processing and have no way to make your EJB wait for the responses while preserving the client request (waiting implies programmatic life-cycle management, which is forbidden in the EJB container). Maybe a resource adapter (JCA) can bring a solution. A resource adapter acts as a datasource (a datasource is a specialization of a resource adapter) and thus it is a logical way to implement an adapter to an external, possibly non-J2EE, resource, as the name implies :) But I don't have enough knowledge of JCA to be sure of this.
    Hope it helps.
    Bruno Collet
    http://www.practicalsoftwarearchitect.com

  • FORK is Not happening Parallel processing- It's working sequential

    Hi,
    We are on PI 7.0, SP 13.
    I am trying to test parallel processing using a Fork step (with two branches).
    My problem is that in SXMB_MONI both branches are not executed simultaneously; they execute one after the other.
    Has anybody done parallel processing in XI using BPM? Both calls have to finish at the same time: if the first call takes 10 minutes, the second call also has to finish within those same 10 minutes, not another 10 minutes after it.
    I have heard of this problem in XI 3.0 and PI 7.0. Has anybody tested parallel processing with a Fork step on PI 7.1?
    Please help me: will this issue be resolved if I go to PI 7.1?
    Regards,
    Venu.

    Hi Henrique,
    they would not necessarily start at the same time but should not be queued either - The customer expects the response within 17 or 20 seconds, but a response coming back in 34 seconds is not acceptable; tomorrow we need to add more targets and again it should take 17 seconds. They are checking how PI can handle the multi-threading. I am not sure whether this problem is fixed in PI 7.1 or not.
    there're # of connection restrictions in your system? Check that - Where can I check the connection restrictions? If you know, please throw some light on this.
    Also, how's your BPM transactional behaviour (did you flag the "create new transaction" steps)?
    - I did not check the flag for the "create new transaction" step; once my server is up I can check the flag and test.
    Regards,
    Venu.

  • Unable to set default date for Date Picker item using Auto Row Processing

    Okay, I have searched through the forum for an answer, and have not found anything to account for my problem.
    First, does anyone know whether Auto Row Processing has problems updating an item/field in a record where the Source is defined as Database Column and the 'Display As' is defined as 'Date Picker (MM/DD/YYYY)'?
    I ask this only because I found out the hard way that Auto Row Processing does NOT fetch the value for an item where the field is defined as TIMESTAMP in the database.
    My problem is as follows: I have a form that will CREATE a new record, allowing the user to select dates from Date Pickers, text from Select Lists, and to enter text into a Textarea item. The information is saved using a standard (created through the Auto Row Processing wizard) CREATE page-level button. After the record is created, the user is able to go into it and update the information. At that time, or later, they will click one of two buttons, 'ACCEPT' or 'DECLINE'. These are item-level buttons, which set the REQUEST value to 'APPLY' (Accept) and 'UPDATE' (Decline). The Accept button executes a process that changes the Status Code from 'Initiated' to 'Accepted' and sets the Declined_Accepted_Date to SYSDATE, then another process SAVEs the record. The Decline button runs a process that changes the Status Code from 'Initiated' to 'Declined' and sets the Declined_Accepted_Date to SYSDATE, then another process SAVEs the record.
    However, even though the Status Code field is updated in the database record in both Accepted and Declined processing, the Declined_Accepted_Date field remains NULL in the database record (by looking at the records via SQL Developer). WHY??? I looked at the Session State values for both the Status Code and the Declined_Accepted_Date fields and saw that the fields (items) had the expected values after the process that SAVEs the record.
    The following is the code from the Accept button Page Process Source/Process:
    BEGIN
    :P205_STATUS_CD := 'A';
    :P205_REF_DECLINE_ACCEPT_DT := SYSDATE;
    END;
    As can be seen, the Status Code and Declined_Accepted_Date items are set one right after the other.
    As an aside, just what is the difference between Temporary Session State vs Permanent Session State? And what is the sequence of events to differentiate the two?

    Here's yet another thing that I just looked into, further information...
    One other difference between the date field I am having problems with (Accepted_Declined_Date), and other dates (with Date Pickers) in the record is that the Accepted_Declined_Date never gets displayed until after it is set with a default date when the Accept and Decline buttons are pressed.
    One of the other dates that works, the Received Date, is able to write a default date to the record that is never typed into the box or selected from the calendar. That date is placed into the box via a Post Calculation Computation in the Source, which I set up as: NVL(:P205_REF_RECEIVED_DT,TO_CHAR(SYSDATE,'MM/DD/YYYY'))
    However, I do remember actually trying this also with the Accepted_Declined_Date, and setting the Post Calculation Computation did not work for the Accept_Decline_Date. Could this be because the Accept_Decline_Date is never rendered until the Status Code is set to Declined (in other words, there is no need to display the date and allow the user to change it until the record is actually declined)???
    The control of the displaying (rendering) of the date is set via the Conditions / Condition Type: Value of Item in Expression 1 = Expression 2
    Expression 1 = P205_STATUS_CD and Expression 2 = L
    Does this shed any light???

  • Parallel Processing in CRM 5.0 for Customer download

    Hi
    I was referring to OSS note 350176 for implementing parallel processing. I wanted to achieve parallel processing for request loads for Customer download.
    Is it possible to achieve it by setting the parameter CRM_MAX_QUEUE_NUMBER_INITIAL in CRMPAROLTP, or can it be achieved only by implementing exits?
    Thanks for your help in anticipation.
    Regards
    Karthik

    Hi,
    Karthik, in parallel processing you first need to set up different filters for the different types of data (basic or business data, customizing data and condition data) in R3AC1, R3AC4 and R3AC5 separately, and then start downloading the objects to CRM; the remaining leftover objects can be bypassed. Then set the status of the download to yellow, red or blue depending on the percentage of the download: Y for more than 100%, R for more than 75%, B for less than 50%.
    Reward if helpful
    Venkat
