Parallel Process Option for Optimization in Background.

Hi,
I am testing the SNP Optimizer with various settings this week on a demo version from SAP for a client. I am looking for any information on the SNP parallel processing option when executing the optimizer in the background. The information I could find so far is very thin, so I would be interested in any documentation or experience you have.
Sincerely,
Michael M. Stahl
[email protected]

Hello,
While running transaction /SAPAPO/SNPOP - Supply Network Optimization in the background, you can enter a parallel processing profile in the variant, in the field Paral. Proc. Profile.
You need to define this profile in Customizing (SPRO) before using it in the variant.
The path to maintain it is as follows. Use transaction SPRO:
Advanced Planning and Optimization --> Supply Chain Planning --> Supply Network Planning (SNP) --> Profiles --> Define Parallel Processing Profile
Here you define your profile, for example:
Paral. Proc. Profile SNP_OPT
Description          SNP OPTIMIZER PP PROFILE
Appl. (Parallel Pr.) : Optimization
Parallel Processes   2
Logical system :
Server Group :
Block Size:
You may need the Basis team's help to enter values for Server Group and Block Size.
I hope this information is helpful.
Regards,
Anjali

Similar Messages

  • CORUPROC (Process chain for confirmation, collective background processing)

    Hello Guru,
    Can you please explain the program CORUPROC (process chain for confirmation, collective background processing) that can be run from SE38, and how it is important for CO16N (reprocessing incorrect production orders)?
    CORUPROC is also the same as transaction code CO1P.
    Please explain the use of this program.
    Thanks
    Edited by: Ryan on Feb 3, 2009 7:55 AM

    Hi Carina,
    It might take some time for the attribute change run.
    What you can do is right-click on the running process, go to display messages, and check if everything is fine.
    Also go to the batch monitor from there and see if the job is still running.
    Alternatively, if you are not sure whether the job got stuck, kill the job from the batch monitor and run the attribute change run manually.
    You have two options now:
    1) Wait for some time for the process to complete and check the messages.
    2) Kill the job and run the attribute change run manually.
    I suggest you go for option 1, and if it is taking too long or there is an error message, analyze and then go for option 2.
    Hope this helps.
    cheers,
    Srinath.
    Edited by: Srinath Singamsetti on Aug 4, 2009 5:44 AM

  • Parallel process define for batch job

    Hi,
    I would like to run a batch job with a few processes running in parallel. May I know where I can define this? Is there a transaction code for it?
    Regards
    Lauran

    Hi Lauren,
    First of all, there is no transaction code as such.
    The report that needs to be run in the background must itself support parallel processing; the code has to be written accordingly.
    Check this link:
    http://help.sap.com/saphelp_nw2004s/helpdata/en/fa/096e92543b11d1898e0000e8322d00/content.htm
    It gives details of the function modules needed for this purpose.
    After this you need to create a variant for the report and schedule it to run in the background, either directly from SE38 or by explicitly creating a job in SM36.
    A standard report that has the parallel processing feature available is RBDAPP01.
    Also check transactions like BD18; they also make use of parallel processing.
    Regards.
    Ruchit.
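
    To illustrate the pattern the help link above describes, here is a minimal, hedged sketch of an asynchronous RFC call distributed over an RFC server group. The group name PARALLEL_GROUP is only an assumption (it must exist in RZ12), and the standard module RFC_SYSTEM_INFO merely stands in for your own RFC-enabled function module that would process one packet of data.
    REPORT z_arfc_sketch.
    DATA: gv_taskname(8) TYPE n VALUE '00000001', "unique task name per call
          gv_sent        TYPE i,                  "tasks started
          gv_received    TYPE i.                  "tasks returned
    START-OF-SELECTION.
      DO 5 TIMES.  "in real use: one call per work packet
        CALL FUNCTION 'RFC_SYSTEM_INFO'
          STARTING NEW TASK gv_taskname
          DESTINATION IN GROUP 'PARALLEL_GROUP'   "RFC server group from RZ12 (assumed name)
          PERFORMING receive_result ON END OF TASK
          EXCEPTIONS
            communication_failure = 1
            system_failure        = 2
            resource_failure      = 3.            "handle this: wait and retry when no WP is free
        IF sy-subrc = 0.
          gv_sent = gv_sent + 1.
        ENDIF.
        gv_taskname = gv_taskname + 1.
      ENDDO.
      " Wait until every started task has reported back before ending the report.
      WAIT UNTIL gv_received >= gv_sent.
    FORM receive_result USING p_taskname.
      DATA ls_info TYPE rfcsi.
      " Collect the result of the finished task.
      RECEIVE RESULTS FROM FUNCTION 'RFC_SYSTEM_INFO'
        IMPORTING rfcsi_export = ls_info
        EXCEPTIONS OTHERS = 1.
      gv_received = gv_received + 1.
    ENDFORM.
    RBDAPP01 follows essentially this skeleton: it cuts the IDocs to be posted into packets and dispatches each packet to the RFC server group.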

  • Processing options for EDI vendor Invoices

    I'm trying to get a better understanding of what my options are for processing EDI vendor invoices and would appreciate some help. I'm using message type INVOIC02 with process code INVL, and in configuration I've set the processing option in table T076S to 4 (tolerance corresponding to online processing). When an invoice fails either the vendor-specific tolerance check or the payment blocking checks, the invoice gets parked with error status 3. This is behaving as I would expect. The issue I have is that the price on the parked invoice is the price from the PO and not the price from the EDI segment. Is there any way to change this? This makes it difficult to analyze the cause of the error without going back to the IDoc segments. I've tried playing with the other processing options, but they also enter the PO price in the parked invoice. If it's not possible to change this, are there any tricks I'm missing that can help the users analyze the cause of the errors without going back into the IDoc?
    thanks,

    If anyone has any experience with this, I would appreciate any advice you may have.
    thanks,

  • ProcessAdd in processing options for keeping data fresh in dimensions (very large dimension)

    Hi all, maybe it's a stupid question, but when I open the Analysis Services Processing Task editor in SSIS to process a dimension, I just cannot find ProcessAdd in the Process Options list.
    thanks
    --Currently using Reporting Service 2000; Visual Studio .NET 2003; Visual Source Safe SSIS 2008 SSAS 2008, SVN --

    thanks sqlsaga, I did check but no luck ...
    --Currently using Reporting Service 2000; Visual Studio .NET 2003; Visual Source Safe SSIS 2008 SSAS 2008, SVN --
    This is what you should do for the ProcessAdd option on a dimension:
    http://www.mssqltips.com/sqlservertip/2997/using-processadd-to-add-rows-to-a-dimension-in-sql-server-analysis-services-ssas/
    http://www.purplefrogsystems.com/blog/2013/09/dimension-processadd-in-ssas/
    Please Mark This As Answer if it helps to solve the issue Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs

  • Disjoint Selections in Parallel Processing Profile

    Hello All,
             I wanted to use the parallel processing profile for my DP background jobs.
    When I referred to 'Note 853142 - Consulting: Parallel processing profile for DP batch runs', it says to use parallel processing with disjoint selections (external parallel processing). Can you please explain what disjoint selections means?
    Thanks,
    Siva.

    Hi Siva,
    With quite large background runs, the background work process can experience
    memory problems when the characteristic combinations are being imported.
    Therefore, an additional 'external' parallel processing (using several background jobs
    with disjoint selections) may be necessary. Disjoint selections simply means that the
    selections of the individual background jobs do not overlap, so each characteristic combination is processed by exactly one of the jobs.
    Regards
    R. Senthil Mareeswaran.

  • SAP job not using all dialog processes that are available for parallel processing

    He Experts,
    The customer is running a job which is not using all the dialog processes that are available for parallel processing. It appears to use all the available parallel processes (60) for the first 4-5 minutes of the job and then drops to about 3-5 processes for the remainder of the job.
    How do I analyze the job to find out the issue from a Basis perspective?
    Thanks,
    Zahra

    Hi Daniel,
    Thanks for replying!
    I don't believe its a standard job.
    I was thinking of starting a trace using ST05 before the job. What do you think?
    Thanks,
    Zahra

  • Dynamic Parallel Processing using Rule

    Hello,
    I am using a User Decision within a Block (ParForEach type) step to send work items to multiple approvers in parallel.
    For this I have created a Multi-line container LI_APPROVERS and bound &LI_APPROVERS[&_WF_PARFOREACH_INDEX&]& to &_LI_APPROVERS_LINE& in the "Parallel Processing" tab of the Block.
    Now in User Decision I am using Agent as Expression = &_LI_APPROVERS_LINE&. This is working perfectly fine if I fetch the values in LI_APPROVERS via a background method before "Block" step is executed.
    I want to know if we can do this using a "Rule" within the User Decision? Meaning approvers are determined by the Rule(through a FM) at the run time instead of fetching them beforehand. I created a custom Rule and tried passing it under Agents but it didn't work. I do not know what bindings need to be done and how each line will be passed to User Decision to create a work-item for each user.
    Or should I remove the Block step completely and directly use the User Decision task with the Parallel Processing option under the Miscellaneous tab?
    Can someone please explain how to achieve this using a Rule and exactly what bindings are required.
    Thanks.

    Hi Anjan,
    Yes, that's exactly what I want to know. I saw your response below in one of the threads but could not understand exactly how to do it. Can you please explain it?
    You have all your agents in one multiline container element in the workflow.
    Then you take a block step with ParForEach.
    Then create a custom rule which imports the multiline element of agents and a line number. In the rule you populate ACTOR_TAB with the agent taken from that multiline container element at index line_no.
    Then you take an activity step. As the agent, use your custom rule with proper binding of the multiline element of agents, and for line_no pass _***_LINE from the block container. The work item is then sent to n people in parallel.
    This is my current design:
    Activity returns agents in LI_APPROVERS.
    At Block: I have binding &LI_APPROVERS[&_WF_PARFOREACH_INDEX&]& --> &_LI_APPROVERS_LINE&
    At UD: I have Agents as Expression = &_LI_APPROVERS_LINE&
    I want to remove the Activity step (that gets the agents in the background) and replace it with a Rule within the UD. What binding do I need from the Rule to the Workflow? How do I get the "line_no" into the rule, as you mentioned above?
    Thanks for your response.
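
    For reference, here is a hedged sketch of the kind of rule function module described above (an agent determination rule of the "function module" category). The container element names LI_APPROVERS and LINE_NO are assumptions: they must be created in the rule container and filled through the rule binding (LI_APPROVERS from the workflow's multiline element, LINE_NO from the block's _..._LINE index element). The TABLES/EXCEPTIONS interface shown in the comments is the standard rule interface.
    FUNCTION z_rule_get_approver.
      " Interface (defined in SE37): TABLES ac_container STRUCTURE swcont,
      "                              actor_tab STRUCTURE swhactor;
      "                              EXCEPTIONS nobody_found.
      INCLUDE <cntn01>.   "container macros (often placed in the function group TOP include)
      DATA: lt_approvers TYPE TABLE OF xubname,  "assumed element type: user names
            lv_line_no   TYPE i,
            lv_user      TYPE xubname.
      " Read the multiline agent element and the requested line from the rule container.
      swc_get_table   ac_container 'LI_APPROVERS' lt_approvers.
      swc_get_element ac_container 'LINE_NO'      lv_line_no.
      READ TABLE lt_approvers INTO lv_user INDEX lv_line_no.
      IF sy-subrc <> 0 OR lv_user IS INITIAL.
        RAISE nobody_found.
      ENDIF.
      " Return exactly one agent per call, in OTYPE/OBJID form ('US' = user).
      CLEAR actor_tab.
      actor_tab-otype = 'US'.
      actor_tab-objid = lv_user.
      APPEND actor_tab.
    ENDFUNCTION.
    In the UD step you would then enter this rule under Agents; the rule binding must map the workflow's multiline element to LI_APPROVERS and the block's _..._LINE element to LINE_NO, so that each parallel branch resolves to exactly one approver.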

  • Parallel processing in ck40n

    Hi All,
    We are trying to use the parallel processing option in CK40N to reduce the time of the costing run.
    But I am not quite sure what to enter for the "Maximum No. of Servers/Modes" when selecting the variant for costing.
    Is it the number of application servers or the number of work processes we want to allocate to the costing run?
    Whatever value I enter, I get a dump.
    Documentation for system log message D0 1:
    The transaction has been terminated. This may be caused by a termination message from the application (MESSAGE Axxx) or by an error detected by the SAP System, due to which it makes no sense to proceed with the transaction. The actual reason for the termination is indicated by the T100 message and the parameters.
    Additional documentation for message 00 671: ABAP/4 processor: &
    No documentation exists for message 00671
    Could anyone guide me here and explain how exactly I should use this parallel processing option?
    Regards,
    Brahmeshwar.

    It is best to coordinate with your Basis team on this. You do not want to run this on all available processes, as that will cause problems for other jobs. We schedule it on 8 processes at a time, and we split our run into 10 jobs (to cost 7.5m materials).

  • CIF compare/reconcile -- Parallel process profile in R/3

    Dear experts, I'm opening a complete new discussion with this topic...
    does anybody have experience setting up parallel processing for CCR?
    I did create 2 parallel profiles in customizing (1 for APO and 1 for R3) and I executed the report in background.
    Most of the performance time that the report uses is spent on the R3 side reading tables like VAPMA and VLPMA. In order to improve runtime performance I set up the 2 parallel process profiles as shown above (defining server, number of processes and block size in customizing, following SAP recommendations).
    In APO I am perfectly able to see how more than one process is being triggered and therefore parallelization works.
    Nevertheless, I never see the same thing happening in R/3. No matter how I define the parallel process profile for R/3, I always see one unique process in transactions SM51 and SM66 on this system, as shown below.
    Is there a specific setting I need to maintain to achieve this process parallelization in R/3? Is it even possible to split processes in R3? and if not, what is the purpose of having Parallel Process profiles in R3?
    Thanks for your support
    Salvador

    Hi Rupesh,
    indeed a good remark, but we had already applied this note in R3 upfront.
    Regarding VBBE, we did run CCR without it being selected. This has an equivalent effect to running report SDRQCR21, as it will also update the index tables. SAP recommends scheduling it on a regular basis, and that's the direction we took.
    These certainly improve performance significantly, but we are still wondering about parallel processing in R3.
    If a profile for parallel processing is available in CCR, why are we not able to see parallel processes in R3 when executing the report? Getting this to work properly should improve performance even further.
    Further information...
    Is maybe someone aware of the following points?
    - The RFC call is using DIA processes. Are these suitable for parallelization in any case?
    - what are the profile parameters that need to be set up (RZ10) and is there a direct link with parallel processing? For example, I notice that rdisp/rfc_pool_size is equal to zero in my R3 system but it is 10% in APO.
    - are there other parameters like rdisp/rfc* that can really block me from getting parallel processes?
    Thanks for your interventions
    Salvador
    Message was edited by: Salvador García

  • Parallel processing of GRC AC 5.3 SP13 Batch Risk Analysis

    Hello,
    I have a question regarding the parallel processing of Batch Risk Analysis background jobs.
    We have just implemented GRC AC 5.3 SP13 in one of our test-systems. The test-system currently has 6 Server-Nodes.
    When running the initial Full Sync. Batch Risk Analysis the background job only gets executed by one of the Background Job Worker-Threads in one of the Server-Nodes. All the other ID0 Background Job Worker-Threads in the remaining Server-nodes remain idle.
    Is there any special configuration needed in RAR in order to enable parallel processing, or is this dependent on the configuration of the Java system itself? If it's a system configuration thing, what configuration do I need to ask the Basis team to look into?
    Regards,
    Benjamin

    >
    S. Pados wrote:
    > In RAR config - optimization you have to set webservice i/o and NetWeaver lock to NO. These give issues when set to yes. That resolved it in our system. Don't forget to restart the server after changing these values.
    >
    > Regards,
    > Stefan
    Hello Stefan,
    we are currently on SAP GRC AC 5.3 SP13 (without the patches). The changes we make in the web interface for the two settings you have mentioned are not written to the database (execute select * from virsa_cc_config in the CC Debugger web interface and look for IDs 250 and 251). This is a known bug.
    For parameter "Store WebService Input/Output parameters objects in database Yes/No?" there seems to be a workaround by inserting the setting into the table directly (SNOTE 1508611). Can you confirm that the entry with CNFGPARAM-ID=251 exists in the table in your system (with value NO)?
    For parameter "Use Net Weaver Logical Lock Yes/No?" there's the same issue. The workaround is described in SNOTE 1528592. Can you please confirm that there's an entry with CNFGPARAM-ID=250 and value NO in your table?
    If you confirm the two settings I will have our IT guys insert those entries into the DB table.
    P.S.: We are not planning to implement any Patches or SPs any time soon because we are doing the Ramp-up for SAP GRC 10.0. We want to check out the functionality in there first and not put too much effort into patching the "old" version right now.
    @Chris Chapman:
    I will have someone check the server logs for the errors that are described in the SNOTE you provided. Thanks for the help.

  • Allowing parallel processing of cube partitions using OWB mapping

    Hi All,
    I am using an OWB mapping to load a MOLAP cube partitioned on the TIME dimension. I configured the OWB mapping by checking the 'Allow parallel processing' option with the number of parallel jobs set to 2. I then deployed the mapping. The data loaded using the mapping is spread across multiple partitions.
    The server has 4 CPU's and 6 GB RAM.
    But when I kick off the mapping, I can see only one partition being processed at a time in XML_LOAD_LOG.
    If i process the same cube in AWM, using parallel processing, i can see that multiple partitions are processed.
    Could you please suggest whether I missed any setting on the OWB side?
    Thanks
    Chakri

    Hi,
    I have assigned OLAP_DBA to the user under which the OWB map is running, and the job started off.
    But it soon failed with the error below:
    ***Error Occured in __XML_MAIN_LOADER: Failed to Build(Refresh) XPRO_OLAP_NON_AGG.OLAP_NON_AGG Analytic Workspace. In __XML_VAL_MEASMAPS: In __XML_VAL_MEASMAPS_VAR: Error Validating Measure Mappings. In __XML_FND_PRT_TO_LOAD: In __XML_SET_LOAD_STATUS: In ___XML_LOAD_TEMPPRG:
    Here is the log :
    Load ID     Record ID     AW     Date     Actual Time     Message Time     Message
    3973     13     SYS.AWXML     12/1/2008 8:26     8:12:51     8:26:51     ***Error Occured in __XML_MAIN_LOADER: Failed to Build(Refresh) XPRO_OLAP_NON_AGG.OLAP_NON_AGG Analytic Workspace. In __XML_VAL_MEASMAPS: In __XML_VAL_MEASMAPS_VAR: Error Validating Measure Mappings. In __XML_FND_PRT_TO_LOAD: In __XML_SET_LOAD_STATUS: In ___XML_LOAD_TEMPPRG:
    3973     12     XPRO_OLAP_NON_AGG.OLAP_NON_AGG     12/1/2008 8:19     8:12:57     8:19:57     Attached AW XPRO_OLAP_NON_AGG.OLAP_NON_AGG in RW Mode.
    3973     11     SYS.AWXML     12/1/2008 8:19     8:12:56     8:19:56     Started Build(Refresh) of XPRO_OLAP_NON_AGG.OLAP_NON_AGG Analytic Workspace.
    3973     1     XPRO_OLAP_NON_AGG.OLAP_NON_AGG     12/1/2008 8:19     8:12:55     8:19:55     Job# AWXML$_3973 to Build(Refresh) Analytic Workspace XPRO_OLAP_NON_AGG.OLAP_NON_AGG Submitted to the Queue.
    I am using AWM (10.2.0.3 A with OLAP Patch A) and OWB (10.2.0.3).
    Can anyone suggest why the job failed this time ?
    Regards
    Chakri

  • Parallel processing in background

    Hi All,
    I am processing 1 million records in the background, which takes approximately 10 hours. I want to reduce the time to less than 1 hour and tried using parallel processing. But the tasks run in dialog work processes and give ABAP short dumps due to time-out.
    Is there any other solution with which I can reduce the total processing time?
    Please note that I cannot split. I am getting 1 million records from a select query, and after processing all those records in SAP, I send them to XI, and XI posts them in the legacy system.
    Please note that all other performance tuning has been done.
    Thanks,
    Rajesh.

    Hi Rajesh,
    Refer to the sample code for parallel processing below.
    By doing this, your processing time will be highly optimized.
    Go through the description given in the code at each level.
    This code checks the available work processes and assigns data in packets for processing. This way you save a lot of time, especially when the data runs into millions.
    Hope it helps.
    REPORT PARAJOB.
    * Data declarations
    DATA: GROUP LIKE RZLLITAB-CLASSNAME VALUE ' ',
    "Parallel processing group.
    "SPACE = group default (all
    "servers)
    WP_AVAILABLE TYPE I, "Number of dialog work processes
    "available for parallel processing
    "(free work processes)
    WP_TOTAL TYPE I, "Total number of dialog work
    "processes in the group
    MSG(80) VALUE SPACE, "Container for error message in
    "case of remote RFC exception.
    INFO LIKE RFCSI, C, "Message text
    JOBS TYPE I VALUE 10, "Number of parallel jobs
    SND_JOBS TYPE I VALUE 1, "Work packets sent for processing
    RCV_JOBS TYPE I VALUE 1, "Work packet replies received
    EXCP_FLAG(1) TYPE C, "Number of RESOURCE_FAILUREs
    TASKNAME(4) TYPE N VALUE '0001', "Task name (name of
    "parallel processing work unit)
    BEGIN OF TASKLIST OCCURS 10, "Task administration
    TASKNAME(4) TYPE C,
    RFCDEST LIKE RFCSI-RFCDEST,
    RFCHOST LIKE RFCSI-RFCHOST,
    END OF TASKLIST.
    * Optional call to SPBT_INITIALIZE to check the
    * group in which parallel processing is to take place.
    * Could be used to optimize sizing of work packets
    * (total work / WP_AVAILABLE).
    CALL FUNCTION 'SPBT_INITIALIZE'
    EXPORTING
    GROUP_NAME = GROUP
    "Name of group to check
    IMPORTING
    MAX_PBT_WPS = WP_TOTAL
    "Total number of dialog work
    "processes available in group
    "for parallel processing
    FREE_PBT_WPS = WP_AVAILABLE
    "Number of work processes
    "available in group for
    "parallel processing at this
    "moment
    EXCEPTIONS
    INVALID_GROUP_NAME = 1
    "Incorrect group name; RFC
    "group not defined. See
    "transaction RZ12
    INTERNAL_ERROR = 2
    "R/3 System error; see the
    "system log (transaction
    "SM21) for diagnostic info
    PBT_ENV_ALREADY_INITIALIZED = 3
    "Function module may be
    "called only once; is called
    "automatically by R/3 if you
    "do not call before starting
    "parallel processing
    CURRENTLY_NO_RESOURCES_AVAIL = 4
    "No dialog work processes
    "in the group are available;
    "they are busy or server load
    "is too high
    NO_PBT_RESOURCES_FOUND = 5
    "No servers in the group
    "met the criteria of >
    "two work processes
    "defined.
    CANT_INIT_DIFFERENT_PBT_GROUPS = 6
    "You have already initialized
    "one group and have now tried
    "initialize a different group.
    OTHERS = 7.
    CASE SY-SUBRC.
    WHEN 0.
    "Everything’s ok. Optionally set up for optimizing size of
    "work packets.
    WHEN 1.
    "Non-existent group name. Stop report.
    MESSAGE E836. "Group not defined.
    WHEN 2.
    "System error. Stop and check system log for error
    "analysis.
    WHEN 3.
    "Programming error. Stop and correct program.
    MESSAGE E833. "PBT environment was already initialized.
    WHEN 4.
    "No resources: this may be a temporary problem. You
    "may wish to pause briefly and repeat the call. Otherwise
    "check your RFC group administration: Group defined
    "in accordance with your requirements?
    MESSAGE E837. "All servers currently busy.
    WHEN 5.
    "Check your servers, network, operation modes.
    WHEN 6.
    "You have tried to initialize a second parallel processing group.
    ENDCASE.
    * Do parallel processing. Use CALL FUNCTION STARTING NEW TASK
    * DESTINATION IN GROUP to call the function module that does the
    * work. Make a call for each record that is to be processed, or
    * divide the records into work packets. In each case, provide the
    * set of records as an internal table in the CALL FUNCTION
    * keyword (EXPORT, TABLES arguments).
    DO.
    CALL FUNCTION 'RFC_SYSTEM_INFO' "Function module to perform
    "in parallel
    STARTING NEW TASK TASKNAME "Name for identifying this
    "RFC call
    DESTINATION IN GROUP group "Name of group of servers to
    "use for parallel processing.
    "Enter group name exactly
    "as it appears in transaction
    "RZ12 (all caps). You may
    "use only one group name in a
    "particular ABAP program.
    PERFORMING RETURN_INFO ON END OF TASK
    "This form is called when the
    "RFC call completes. It can
    "collect IMPORT and TABLES
    "parameters from the called
    "function with RECEIVE.
    EXCEPTIONS
    COMMUNICATION_FAILURE = 1 MESSAGE msg
    "Destination server not
    "reached or communication
    "interrupted. MESSAGE msg
    "captures any message
    "returned with this
    "exception (E or A messages
    "from the called FM, for
    "example. After exception
    "1 or 2, instead of aborting
    "your program, you could use
    "SPBT_GET_PP_DESTINATION and
    "SPBT_DO_NOT_USE_SERVER to
    "exclude this server from
    "further parallel processing.
    "You could then re-try this
    "call using a different
    "server.
    SYSTEM_FAILURE = 2 MESSAGE msg
    "Program or other internal
    "R/3 error. MESSAGE msg
    "captures any message
    "returned with this
    "exception.
    RESOURCE_FAILURE = 3. "No work processes are
    "currently available. Your
    "program MUST handle this
    "exception.
    * YOUR_EXCEPTIONS = X.  "Placeholder: add exceptions generated by
    "the called function module
    "here. Exceptions are
    "returned to you and you can
    "respond to them here.
    CASE SY-SUBRC.
    WHEN 0.
    "Administration of asynchronous RFC tasks
    "Save name of task...
    TASKLIST-TASKNAME = TASKNAME.
    "... and get server that is performing RFC call.
    CALL FUNCTION 'SPBT_GET_PP_DESTINATION'
    EXPORTING
    RFCDEST = TASKLIST-RFCDEST
    EXCEPTIONS
    OTHERS = 1.
    APPEND TASKLIST.
    WRITE: / 'Started task: ', TASKLIST-TASKNAME COLOR 2.
    TASKNAME = TASKNAME + 1.
    SND_JOBS = SND_JOBS + 1.
    "Mechanism for determining when to leave the loop. Here, a
    "simple counter of the number of parallel processing tasks.
    "In production use, you would end the loop when you have
    "finished dispatching the data that is to be processed.
    JOBS = JOBS - 1. "Number of existing jobs
    IF JOBS = 0.
    EXIT. "Job processing finished
    ENDIF.
    WHEN 1 OR 2.
    "Handle communication and system failure. Your program must
    "catch these exceptions and arrange for a recoverable
    "termination of the background processing job.
    "Recommendation: Log the data that has been processed when
    "an RFC task is started and when it returns, so that the
    "job can be restarted with unprocessed data.
    WRITE msg.
    "Remove server from further consideration for
    "parallel processing tasks in this program.
    "Get name of server just called...
    CALL FUNCTION 'SPBT_GET_PP_DESTINATION'
    EXPORTING
    RFCDEST = TASKLIST-RFCDEST
    EXCEPTIONS
    OTHERS = 1.
    "Then remove from list of available servers.
    CALL FUNCTION 'SPBT_DO_NOT_USE_SERVER'
    IMPORTING
    SERVERNAME = TASKLIST-RFCDEST
    EXCEPTIONS
    INVALID_SERVER_NAME = 1
    NO_MORE_RESOURCES_LEFT = 2
    "No servers left in group.
    PBT_ENV_NOT_INITIALIZED_YET = 3
    OTHERS = 4.
    WHEN 3.
    "No resources (dialog work processes) available at
    "present. You need to handle this exception, waiting
    "and repeating the CALL FUNCTION until processing
    "can continue or it is apparent that there is a
    "problem that prevents continuation.
    MESSAGE I837. "All servers currently busy.
    "Wait for replies to asynchronous RFC calls. Each
    "reply should make a dialog work process available again.
    IF EXCP_FLAG = SPACE.
    EXCP_FLAG = 'X'.
    "First attempt at RESOURCE_FAILURE handling. Wait
    "until all RFC calls have returned or up to 1 second.
    "Then repeat CALL FUNCTION.
    WAIT UNTIL RCV_JOBS >= SND_JOBS UP TO '1' SECONDS.
    ELSE.
    "Second attempt at RESOURCE_FAILURE handling
    WAIT UNTIL RCV_JOBS >= SND_JOBS UP TO '5' SECONDS.
    "SY-SUBRC 0 from WAIT shows that replies have returned.
    "The resource problem was therefore probably temporary
    "and due to the workload. A non-zero RC suggests that
    "no RFC calls have been completed, and there may be
    "problems.
    IF SY-SUBRC = 0.
    CLEAR EXCP_FLAG.
    ELSE. "No replies
    "Endless loop handling
    ENDIF.
    ENDIF.
    ENDCASE.
    ENDDO.
    * Wait for end of job: replies from all RFC tasks.
    * Receive remaining asynchronous replies.
    WAIT UNTIL RCV_JOBS >= SND_JOBS.
    LOOP AT TASKLIST.
    WRITE:/ 'Received task:', TASKLIST-TASKNAME COLOR 1,
    30 'Destination: ', TASKLIST-RFCDEST COLOR 1.
    ENDLOOP.
    * This routine is triggered when an RFC call completes and
    * returns. The routine uses RECEIVE to collect IMPORT and TABLE
    * data from the RFC function module.
    * Note that the WRITE keyword is not supported in asynchronous
    * RFC. If you need to generate a list, then your RFC function
    * module should return the list data in an internal table. You
    * can then collect this data and output the list at the conclusion
    * of processing.
    FORM RETURN_INFO USING TASKNAME.
    DATA: INFO_RFCDEST LIKE TASKLIST-RFCDEST.
    RECEIVE RESULTS FROM FUNCTION 'RFC_SYSTEM_INFO'
    IMPORTING RFCSI_EXPORT = INFO
    EXCEPTIONS
    COMMUNICATION_FAILURE = 1
    SYSTEM_FAILURE = 2.
    RCV_JOBS = RCV_JOBS + 1. "Receiving data
    IF SY-SUBRC NE 0.
    * Handle communication and system failure
    ELSE.
    READ TABLE TASKLIST WITH KEY TASKNAME = TASKNAME.
    IF SY-SUBRC = 0. "Register data
    TASKLIST-RFCHOST = INFO-RFCHOST.
    MODIFY TASKLIST INDEX SY-TABIX.
    ENDIF.
    ENDIF.
    ENDFORM.
    Reward points if that helps.
    Manish
    Message was edited by:
            Manish Kumar

  • Parallel processing in background using Job scheduling...

    (Note: Please understand my question completely before redirecting me to parallel processing links on SDN. I have gone through most of them.)
    Hi ABAP Gurus,
    I have read a bit till now about parallel processing. But I have a doubt.
    I am working on a data transfer of around 5 million accounting records from legacy to R/3 using batch input recording.
    Now, if all these records reside in one flat file and I then process that flat file in my batch input program, I guess it will take days. So my boss suggested
    using parallel processing in SAP.
    Now, from the SDN threads, it seems that we have to create a remote-enabled function module for it and so on.
    But I have a different idea. I thought of dividing these 5 million records into 10 flat files instead of just one, and then running the custom BDC program as 10 instances which process the 10 flat files in the background using job scheduling.
    Can this also be called parallel processing?
    Please let me know if this sounds wise to you.
    Regards,
    Tushar.

    Thanks for your reply...
    So what do you suggest: how can I use parallel processing for transferring the 5 million records that are present in one flat file using a custom BDC?
    I am posting my custom BDC code for the million-record transfer below (this code is for creation of material masters using BDC).
    report ZMMI_MATERIAL_MASTER_TEST
          no standard page heading line-size 255.
    include bdcrecx1.
    parameters: dataset(132) lower case default
                                 '/tmp/testmatfile.txt'.
    *** DO NOT CHANGE - the generated data section - DO NOT CHANGE ***
    *  If it is necessary to change the data section use the rules:
    *  1.) Each definition of a field exists of two lines
    *  2.) The first line shows exactly the comment
    *      '* data element: ' followed with the data element
    *      which describes the field.
    *      If you don't have a data element use the
    *      comment without a data element name
    *  3.) The second line shows the fieldname of the
    *      structure, the fieldname must consist of
    *      a fieldname and optional the character '_' and
    *      three numbers and the field length in brackets
    *  4.) Each field must be type C.
    *** Generated data section with specific formatting - DO NOT CHANGE ***
    data: begin of record,
    * data element: MATNR
           MATNR_001(018),
    * data element: MBRSH
           MBRSH_002(001),
    * data element: MTART
           MTART_003(004),
    * data element: XFELD
           KZSEL_01_004(001),
    * data element: MAKTX
           MAKTX_005(040),
    * data element: MEINS
           MEINS_006(003),
    * data element: MATKL
           MATKL_007(009),
    * data element: BISMT
           BISMT_008(018),
    * data element: EXTWG
           EXTWG_009(018),
    * data element: SPART
           SPART_010(002),
    * data element: PRODH_D
           PRDHA_011(018),
    * data element: MTPOS_MARA
           MTPOS_MARA_012(004),
         end of record.
    data: lw_record(200).
    *** End generated data section ***
    data: begin of t_data occurs 0,
          matnr(18),
          mbrsh(1),
          mtart(4),
          maktx(40),
          meins(3),
          matkl(9),
          bismt(18),
          extwg(18),
          spart(2),
          prdha(18),
          MTPOS_MARA(4),
        end of t_data.
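    * Read the comma-separated input file from the application server into t_data,
    * then build and post one MM01 BDC transaction per record (screens SAPLMGMM 0060,
    * 0070 and 4004, plus the SAPLSPO1 0300 confirmation popup).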
    start-of-selection.
    perform open_dataset using dataset.
    perform open_group.
    do.
    *read dataset dataset into record.
    read dataset dataset into lw_record.
    if sy-subrc eq 0.
    clear t_data.
    split lw_record
       at ','
    into t_data-matnr
          t_data-mbrsh
          t_data-mtart
          t_data-maktx
          t_data-meins
          t_data-matkl
          t_data-bismt
          t_data-extwg
          t_data-spart
          t_data-prdha
          t_data-MTPOS_MARA.
    append t_data.
    else.
    exit.
    endif.
    enddo.
    loop at t_data.
    *if sy-subrc <> 0. exit. endif.
    perform bdc_dynpro      using 'SAPLMGMM' '0060'.
    perform bdc_field       using 'BDC_CURSOR'
                                 'RMMG1-MATNR'.
    perform bdc_field       using 'BDC_OKCODE'
                                 '=AUSW'.
    perform bdc_field       using 'RMMG1-MATNR'
                                 t_data-MATNR.
    perform bdc_field       using 'RMMG1-MBRSH'
                                 t_data-MBRSH.
    perform bdc_field       using 'RMMG1-MTART'
                                 t_data-MTART.
    perform bdc_dynpro      using 'SAPLMGMM' '0070'.
    perform bdc_field       using 'BDC_CURSOR'
                                 'MSICHTAUSW-DYTXT(01)'.
    perform bdc_field       using 'BDC_OKCODE'
                                 '=ENTR'.
    perform bdc_field       using 'MSICHTAUSW-KZSEL(01)'
                                 'X'.
    perform bdc_dynpro      using 'SAPLMGMM' '4004'.
    perform bdc_field       using 'BDC_OKCODE'
                                 '/00'.
    perform bdc_field       using 'MAKT-MAKTX'
                                 t_data-MAKTX.
    perform bdc_field       using 'BDC_CURSOR'
                                 'MARA-PRDHA'.
    perform bdc_field       using 'MARA-MEINS'
                                 t_data-MEINS.
    perform bdc_field       using 'MARA-MATKL'
                                 t_data-MATKL.
    perform bdc_field       using 'MARA-BISMT'
                                 t_data-BISMT.
    perform bdc_field       using 'MARA-EXTWG'
                                 t_data-EXTWG.
    perform bdc_field       using 'MARA-SPART'
                                 t_data-SPART.
    perform bdc_field       using 'MARA-PRDHA'
                                 t_data-PRDHA.
    perform bdc_field       using 'MARA-MTPOS_MARA'
                                 t_data-MTPOS_MARA.
    perform bdc_dynpro      using 'SAPLSPO1' '0300'.
    perform bdc_field       using 'BDC_OKCODE'
                                 '=YES'.
    perform bdc_transaction using 'MM01'.
    endloop.
    *enddo.
    perform close_group.
    perform close_dataset using dataset.
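
    To illustrate the split-file idea from the question, here is a hedged sketch of a small driver report that schedules one background job per split file using the standard JOB_OPEN / SUBMIT ... VIA JOB / JOB_CLOSE pattern, so the BDC report above runs ten times in parallel (as far as free background work processes allow). The job names and the split-file naming pattern are assumptions, the split files are assumed to already exist on the application server, and the report's other bdcrecx1 selection parameters are left at their defaults.
    REPORT z_schedule_split_bdc_jobs.
    CONSTANTS gc_jobs TYPE i VALUE 10.            "number of split files / jobs
    DATA: lv_jobname   TYPE tbtcjob-jobname,
          lv_jobcount  TYPE tbtcjob-jobcount,
          lv_file(132) TYPE c,
          lv_index(2)  TYPE n.
    START-OF-SELECTION.
      DO gc_jobs TIMES.
        lv_index = sy-index.
        CONCATENATE 'ZBDC_MATMAS_' lv_index INTO lv_jobname.
        CONCATENATE '/tmp/testmatfile_' lv_index '.txt' INTO lv_file. "assumed split-file names
        CALL FUNCTION 'JOB_OPEN'
          EXPORTING
            jobname          = lv_jobname
          IMPORTING
            jobcount         = lv_jobcount
          EXCEPTIONS
            cant_create_job  = 1
            invalid_job_data = 2
            jobname_missing  = 3
            OTHERS           = 4.
        CHECK sy-subrc = 0.
        " One job step: the BDC report from this thread, with its own input file.
        SUBMIT zmmi_material_master_test
               WITH dataset = lv_file
               VIA JOB lv_jobname NUMBER lv_jobcount
               AND RETURN.
        CALL FUNCTION 'JOB_CLOSE'
          EXPORTING
            jobname              = lv_jobname
            jobcount             = lv_jobcount
            strtimmed            = 'X'            "release immediately
          EXCEPTIONS
            cant_start_immediate = 1
            invalid_startdate    = 2
            jobname_missing      = 3
            job_close_failed     = 4
            job_nosteps          = 5
            job_notex            = 6
            lock_failed          = 7
            OTHERS               = 8.
      ENDDO.
    This is not parallel processing in the aRFC sense, but running several independent batch jobs over disjoint files achieves a comparable effect for a one-off load, provided enough background work processes are free.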

  • Pointers for optimizing system performance (run time) while running DP process chain with parallel processing

    Hi Experts,
    We are running the APO DP process chain with parallel processing in our company. We are experiencing some issues regarding the run time of the process chain and need your help on the points below:
    - What are the ways we can optimize process chain run time.
    - Special points we need to take care of in case of parallel processing profiles used in process chain.
    - Any specific sequence to be followed for different processes in process chain - if there is some best practice followed.
    - Any notes suggesting ways to improve system performance for APO version 7 with different enhancement packs 1 and 2.
    Any help will be really appreciated.
    Regards

    HI Neelesh,
    There are many ways to optimize performance of the process chains (background jobs) in APO system.
    Firstly, I would recommend identifying the pain areas (steps) which are completing with long runtimes. Each of those steps then has its own approach for reducing the runtime.
    For example, you may end up with steps like InfoPackage executions, DTPs, DP mass processing jobs, etc. which run with long runtimes. Target each of them differently and find ways to optimize. At the same time, the approach you follow should be technically feasible from a Basis perspective (system load and utilization) as well.
    Coming to parallel processing, you can use it for many different kinds of jobs and explore it further there, e.g. loading an InfoCube, mass processing, InfoPackage execution, DTP, TSCOPY, etc.
    Check the below link for more info
    Performance problems in DP mass processing
    Let me know if you require further info.
    Regards,
    Raj
