Parallel Processing...requirements???
Hi,
I understand that implementing parallel processing also depends on your hardware, but to what extent? For example, if I have a server with 1 CPU, can I still take advantage of parallel processing, e.g. by using parallel hints when running a large query? And if I have a server with 2 CPUs, can I then use parallel processing?
Sorry, but I just read up on this and this is the only part that was unclear to me.
One of the best sources of information on parallel query is Doug Burns' blog. There are a couple of technical papers here: http://oracledoug.com/papers.html and if you do a search for "Parallel" on the main page of the website there are a number of other informal articles that have appeared over the last two or three years - including some links to articles by other writers.
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
PS: It is certainly possible to get some benefit from running parallel queries with a single CPU - other constraints on the action come from the number of disk drives, and the number of controllers, and a single fast CPU could easily keep half a dozen discs busy by controlling a few PX slaves.
The trouble is, the more "useful" parallel queries tend to use multiple layers of PX slaves, and once the disc activity is over the messaging between layers can become CPU intensive - which means a single CPU may do well with some types of parallel query and badly with others.
Similar Messages
-
Hi SDNites,
I have a requirement where I have a huge amount of data in my internal table, and we want that data to be processed using parallel processing. Can you please let me know the different ways in which I can meet my objective?
Please provide me details with some example as I
Regards,
Abhi
Also, can you please tell me whether the parallel processing requirement can be met using the FMs JOB_OPEN / JOB_CLOSE? If yes, how can that be done?
Yes, it can be done this way, that's what I was referring to when I mentioned multiple background jobs. If you're not familiar with how to do it, check out the ABAP online help on the [SUBMIT .. VIA JOB|http://help.sap.com/abapdocu_70/en/ABAPSUBMIT_VIA_JOB.htm] statement, which also contains a short example. A more in-depth and good description of your options can be found in the online help [Programming with the Background Processing System|http://help.sap.com/saphelp_nw70ehp2/helpdata/en/fa/096c53543b11d1898e0000e8322d00/frameset.htm]. Here you can also find example programs for the RFC and the background job approach: [Implementing parallel processing|http://help.sap.com/saphelp_nw70ehp2/helpdata/en/fa/096e92543b11d1898e0000e8322d00/frameset.htm].
I personally think the RFC option has the major advantage that you have already a built-in capability for managing resource consumption (using RFC server groups). I'm not aware that you get this for free with background jobs (though if you have long-running tasks you cannot use RFC, because they are executed in dialog processes and thus face the same maximum runtime limit as normal dialog users).
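As an illustration only (the function module name Z_PROCESS_PACKET and the packet size are assumptions, not from this thread), the basic aRFC pattern for the RFC route might be sketched like this:

```abap
* Sketch: split an internal table into packets and hand each packet
* to an asynchronous RFC task; wait for all replies at the end.
DATA: lt_data    TYPE STANDARD TABLE OF mara,
      lt_packet  TYPE STANDARD TABLE OF mara,
      ls_data    TYPE mara,
      lv_task(8) TYPE n,        "unique task name per call
      lv_sent    TYPE i,
      lv_recv    TYPE i.

CONSTANTS lc_size TYPE i VALUE 1000.   "assumed packet size

LOOP AT lt_data INTO ls_data.
  APPEND ls_data TO lt_packet.
  IF lines( lt_packet ) >= lc_size OR sy-tabix = lines( lt_data ).
    ADD 1 TO lv_sent.
    lv_task = lv_sent.
    CALL FUNCTION 'Z_PROCESS_PACKET'   "hypothetical RFC-enabled FM
      STARTING NEW TASK lv_task
      DESTINATION IN GROUP DEFAULT
      PERFORMING packet_done ON END OF TASK
      TABLES
        it_packet = lt_packet.
    CLEAR lt_packet.
  ENDIF.
ENDLOOP.

* Semaphore-style wait for all replies (the "join"/reduce step).
WAIT UNTIL lv_recv >= lv_sent.

FORM packet_done USING p_task.
  RECEIVE RESULTS FROM FUNCTION 'Z_PROCESS_PACKET'.
  ADD 1 TO lv_recv.
ENDFORM.
```

With DESTINATION IN GROUP, resource consumption is governed by the RFC server group (RZ12), which is exactly the built-in management mentioned above.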
Anyhow, in theory your task is rather simple, though of course the actual technical implementation can be quite challenging: essentially you're trying to implement a [divide and conquer algorithm|http://en.wikipedia.org/wiki/Divide_and_conquer_algorithm]; a prominent example is Google's [MapReduce framework|http://en.wikipedia.org/wiki/MapReduce] (it has nothing to do with ABAP, but the concept is of course valid and can be found nowadays in many applications).
So you just need to find a way to split up your problem into smaller sub-problems, which you then solve individually. Should you need a final step that combines all individual results (the reduce step in Google's framework), with background jobs you would need a way of figuring out whether all job steps are done (essentially a [semaphore|http://en.wikipedia.org/wiki/Semaphore_%28programming%29] or [fork-join queue|http://en.wikipedia.org/wiki/Fork-join_queue], however you want to look at it). -
Parallel Processing and Capacity Utilization
Dear Guru's,
We have following requirement.
Workcenter A Capacity is 1000. (Operations are similar)
Workcenter B Capacity is 1500. (Operations are similar)
Workcenter C Capacity is 2000. (Operations are similar)
1) For Product A: Production Order Qty is 4500. Can we use all workcenter as a parallel processing through Routing.
2) For Product B: Production Order Qty is 2500. Can we use only W/C A and B as a parallel processing through Routing.
If yes, plz explain how?
Regards,
Rashid Masood
Maybe you can create a virtual work center VWCA = A+B+C (connected with a hierarchy via transaction CR22) and another VWCB = A+B, and route your products to each virtual work center.
-
How to do parallel processing with dynamic internal table
Hi All,
I need to implement parallel processing that involves dynamically created internal tables. I tried doing so using RFC function modules (using STARTING NEW TASK and other such methods) but didn't succeed: this requires RFC-enabled function modules, and at the same time RFC-enabled function modules do not allow generic data types (STANDARD TABLE), which are needed for passing dynamic internal tables. My exact requirement is as follows:
1. I've a large chunk of data in two internal tables, one of them formed dynamically, hence its structure is not known at the time of coding.
2. This data has to be processed together to generate another internal table, whose structure is pre-defined. But this data processing is taking very long time as the number of records are close to a million.
3. I need to divide the dynamic internal table into (say) 1000 records each and pass to a function module and submit it to run in another task. Many such tasks will be executed in parallel.
4. The function module running in parallel can insert the processed data into a database table and the main program can access it from there.
Unfortunately, due to the limitation of not allowing generic data types in RFC, I'm unable to do this. Does anyone have any idea how to implement parallel processing using dynamic internal tables under these conditions?
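One possible workaround (a sketch only; the function module and variable names here are assumptions, not from this thread) is to serialize the dynamic table to an XSTRING with CALL TRANSFORMATION, pass the XSTRING plus the structure name through an RFC-enabled function module, and rebuild the table on the other side:

```abap
* Caller: serialize the dynamic table <f1> for transport over RFC.
DATA lv_xml TYPE xstring.
CALL TRANSFORMATION id
  SOURCE data = <f1>
  RESULT XML lv_xml.

CALL FUNCTION 'Z_PROCESS_DYN_TABLE'   "hypothetical RFC-enabled FM
  STARTING NEW TASK 'T001'
  EXPORTING
    iv_struct_name = lv_struct_name   "e.g. 'P0001'
    iv_data_xml    = lv_xml.

* Inside Z_PROCESS_DYN_TABLE: rebuild the table dynamically.
DATA lr_tab TYPE REF TO data.
FIELD-SYMBOLS <lt_tab> TYPE STANDARD TABLE.
CREATE DATA lr_tab TYPE STANDARD TABLE OF (iv_struct_name).
ASSIGN lr_tab->* TO <lt_tab>.
CALL TRANSFORMATION id
  SOURCE XML iv_data_xml
  RESULT data = <lt_tab>.
```

Since XSTRING is a fully specified type, it passes through the RFC interface without the generic-type restriction; only the structure name needs to travel alongside it.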
Any help will be highly appreciated.
Thanks and regards,
Ashin
Try the below code:
DATA: w_subrc TYPE sy-subrc.
DATA: w_infty(5) TYPE c.
FIELD-SYMBOLS: <f1> TYPE table.
FIELD-SYMBOLS: <f1_wa> TYPE any.
DATA: ref_tab TYPE REF TO data.
* Build the infotype structure name (e.g. infty = '0001' -> 'P0001')
CONCATENATE 'P' infty INTO w_infty.
* Create dynamic internal table
CREATE DATA ref_tab TYPE STANDARD TABLE OF (w_infty).
ASSIGN ref_tab->* TO <f1>.
* Create dynamic work area
CREATE DATA ref_tab TYPE (w_infty).
ASSIGN ref_tab->* TO <f1_wa>.
IF begda IS INITIAL.
begda = '18000101'.
ENDIF.
IF endda IS INITIAL.
endda = '99991231'.
ENDIF.
CALL FUNCTION 'HR_READ_INFOTYPE'
EXPORTING
pernr = pernr
infty = infty
begda = begda
endda = endda
IMPORTING
subrc = w_subrc
TABLES
infty_tab = <f1>
EXCEPTIONS
infty_not_found = 1
OTHERS = 2.
IF sy-subrc <> 0.
subrc = w_subrc.
ENDIF. -
Hi All,
I have multipe messages in a multiline container in a BPM.
Now i want to send each message individually to a webservice and get the response.Want to acheive this requirment in parallel process so that the total time is less.
Please let me know how to implement this requirement in BPM.
Regards,
Srinivas.
Hi Adish,
>> I want to send message to syn webservice
From which system are you sending data, and in what manner (synchronous or asynchronous)?
>>Where I dont know exactly what will be the value for n in transformations step. It can vary depend upon input message that BPM will receive.
For this you can use a container variable of type xsd:integer, store the value in it, and later use it for the next steps.
But please give an arrow diagram and briefly describe your exact requirement (e.g., File -> XI -> HTTP).
Thanks,
Gujjeit -
Parallel Process Option for Optimization in Background.
Hi,
I am testing the SNP Optimizer with various settings this week on demo version from SAP for a client. I am looking for information that anyone might have on the SNP Parallel Processing Option in the execution of the Optimizer in the background. The information that I could find is very thin. I would be interested in any documentation or experience that you have.
Sincerely,
Michael M. Stahl
[email protected]
Hello,
While running transaction /SAPAPO/SNPOP - Supply Network Optimization in the background, you can enter a parallel processing profile in the variant, in the field Paral. Proc. Profile.
You will need to define this profile in Customizing (SPRO) before using it in the variant.
The path to maintain it is as follows (transaction SPRO):
Advanced Planning and Optimization --> Supply Chain Planning --> Supply Network Planning (SNP) --> Profiles --> Define Parallel Processing Profile
Here you will need to define your profile, e.g. as below:
Paral. Proc. Profile SNP_OPT
Description SNP OPTIMIZER PP PROFILE
Appl. (Parallel Pr.) : Optimization
Parallel Processes 2
Logical system :
Server Group :
Block Size:
You will need the Basis team's help to enter values for Server Group and Block Size.
I hope the above information is helpful for you.
Regards,
Anjali -
Parallel processing in quotation approval
In approval process
1) no of approvers are determined at run time i.e w.r.t discount value( 2 or 3).
2) By using FM i got the approvers in my workflow container as (app1,app2,app3) (3 container elements).
3) Here I need to trigger the work item for all approvers at the same time, with information in a send-mail step. If all 3 of them approve, the status of the quotation should change to Released.
4) If any one of them rejects, no status change is required.
The problem here is how to send the work item to 2 or 3 approvers at a time, and how to know whether all of them have approved or rejected.
Hi SAUMYA,
I got the data into a multiline container using a method. I have created a method which gives a pop-up with Approve and Reject buttons, and I put the multiline container element in the Miscellaneous tab of the approval step. When the user presses Approve, the result element is '0'; if he presses Reject, the value '4' is passed to the result element. My requirement is that all approvers get the work item at the same time using dynamic parallel processing, but how do I know whether all of them have approved or rejected? The workflow should not execute the next step until all of them have either approved or rejected, and if any one of them rejects, the workflow should be completed at that point. -
Parallel processing in background
Hi All,
I am processing 1 million records in the background, which takes around 10 hours. I want to reduce this to less than 1 hour and tried using parallel processing, but the tasks run in dialog work processes and give ABAP short dumps due to time-outs.
Is there any other solution with which I can reduce the total processing time?
Please note that I cannot split the load: I am getting 1 million records from a select query, and after processing all those records in SAP, I send them to XI, and XI posts them to a legacy system.
Please note that all other performance tuning has been done.
Thanks,
Rajesh.
Hi Rajesh,
Refer to the sample code for parallel processing below.
By doing this your processing time will be highly optimized.
Go through the description given in the code at each level.
This code checks the available work processes and assigns data in packets for processing. This way you save a lot of time, especially when the data runs into millions of records.
Hope it helps.
REPORT PARAJOB.
* Data declarations
DATA: GROUP LIKE RZLLITAB-CLASSNAME VALUE ' ',
"Parallel processing group.
"SPACE = group default (all
"servers)
WP_AVAILABLE TYPE I, "Number of dialog work processes
"available for parallel processing
"(free work processes)
WP_TOTAL TYPE I, "Total number of dialog work
"processes in the group
MSG(80) VALUE SPACE, "Container for error message in
"case of remote RFC exception.
INFO LIKE RFCSI, C, "Message text
JOBS TYPE I VALUE 10, "Number of parallel jobs
SND_JOBS TYPE I VALUE 1, "Work packets sent for processing
RCV_JOBS TYPE I VALUE 1, "Work packet replies received
EXCP_FLAG(1) TYPE C, "Number of RESOURCE_FAILUREs
TASKNAME(4) TYPE N VALUE '0001', "Task name (name of
"parallel processing work unit)
BEGIN OF TASKLIST OCCURS 10, "Task administration
TASKNAME(4) TYPE C,
RFCDEST LIKE RFCSI-RFCDEST,
RFCHOST LIKE RFCSI-RFCHOST,
END OF TASKLIST.
* Optional call to SPBT_INITIALIZE to check the
* group in which parallel processing is to take place.
* Could be used to optimize sizing of work packets
* (work / WP_AVAILABLE).
CALL FUNCTION 'SPBT_INITIALIZE'
EXPORTING
GROUP_NAME = GROUP
"Name of group to check
IMPORTING
MAX_PBT_WPS = WP_TOTAL
"Total number of dialog work
"processes available in group
"for parallel processing
FREE_PBT_WPS = WP_AVAILABLE
"Number of work processes
"available in group for
"parallel processing at this
"moment
EXCEPTIONS
INVALID_GROUP_NAME = 1
"Incorrect group name; RFC
"group not defined. See
"transaction RZ12
INTERNAL_ERROR = 2
"R/3 System error; see the
"system log (transaction
"SM21) for diagnostic info
PBT_ENV_ALREADY_INITIALIZED = 3
"Function module may be
"called only once; is called
"automatically by R/3 if you
"do not call before starting
"parallel processing
CURRENTLY_NO_RESOURCES_AVAIL = 4
"No dialog work processes
"in the group are available;
"they are busy or server load
"is too high
NO_PBT_RESOURCES_FOUND = 5
"No servers in the group
"met the criteria of >
"two work processes
"defined.
CANT_INIT_DIFFERENT_PBT_GROUPS = 6
"You have already initialized
"one group and have now tried
"initialize a different group.
OTHERS = 7.
CASE SY-SUBRC.
WHEN 0.
"Everythings ok. Optionally set up for optimizing size of
"work packets.
WHEN 1.
"Non-existent group name. Stop report.
MESSAGE E836. "Group not defined.
WHEN 2.
"System error. Stop and check system log for error
"analysis.
WHEN 3.
"Programming error. Stop and correct program.
MESSAGE E833. "PBT environment was already initialized.
WHEN 4.
"No resources: this may be a temporary problem. You
"may wish to pause briefly and repeat the call. Otherwise
"check your RFC group administration: Group defined
"in accordance with your requirements?
MESSAGE E837. "All servers currently busy.
WHEN 5.
"Check your servers, network, operation modes.
WHEN 6.
"Programming error: a different group was already initialized.
ENDCASE.
* Do parallel processing. Use CALL FUNCTION STARTING NEW TASK
* DESTINATION IN GROUP to call the function module that does the
* work. Make a call for each record that is to be processed, or
* divide the records into work packets. In each case, provide the
* set of records as an internal table in the CALL FUNCTION
* keyword (EXPORT, TABLES arguments).
DO.
CALL FUNCTION 'RFC_SYSTEM_INFO' "Function module to perform
"in parallel
STARTING NEW TASK TASKNAME "Name for identifying this
"RFC call
DESTINATION IN GROUP group "Name of group of servers to
"use for parallel processing.
"Enter group name exactly
"as it appears in transaction
"RZ12 (all caps). You may
"use only one group name in a
"particular ABAP program.
PERFORMING RETURN_INFO ON END OF TASK
"This form is called when the
"RFC call completes. It can
"collect IMPORT and TABLES
"parameters from the called
"function with RECEIVE.
EXCEPTIONS
COMMUNICATION_FAILURE = 1 MESSAGE msg
"Destination server not
"reached or communication
"interrupted. MESSAGE msg
"captures any message
"returned with this
"exception (E or A messages
"from the called FM, for
"example. After exception
"1 or 2, instead of aborting
"your program, you could use
"SPBT_GET_PP_DESTINATION and
"SPBT_DO_NOT_USE_SERVER to
"exclude this server from
"further parallel processing.
"You could then re-try this
"call using a different
"server.
SYSTEM_FAILURE = 2 MESSAGE msg
"Program or other internal
"R/3 error. MESSAGE msg
"captures any message
"returned with this
"exception.
RESOURCE_FAILURE = 3. "No work processes are
"currently available. Your
"program MUST handle this
"exception.
* Add any exceptions generated by the called function module
* here. Exceptions are returned to you and you can respond
* to them in the CASE below.
CASE SY-SUBRC.
WHEN 0.
"Administration of asynchronous RFC tasks
"Save name of task...
TASKLIST-TASKNAME = TASKNAME.
"... and get server that is performing RFC call.
CALL FUNCTION 'SPBT_GET_PP_DESTINATION'
EXPORTING
RFCDEST = TASKLIST-RFCDEST
EXCEPTIONS
OTHERS = 1.
APPEND TASKLIST.
WRITE: / 'Started task: ', TASKLIST-TASKNAME COLOR 2.
TASKNAME = TASKNAME + 1.
SND_JOBS = SND_JOBS + 1.
"Mechanism for determining when to leave the loop. Here, a
"simple counter of the number of parallel processing tasks.
"In production use, you would end the loop when you have
"finished dispatching the data that is to be processed.
JOBS = JOBS - 1. "Number of existing jobs
IF JOBS = 0.
EXIT. "Job processing finished
ENDIF.
WHEN 1 OR 2.
"Handle communication and system failure. Your program must
"catch these exceptions and arrange for a recoverable
"termination of the background processing job.
"Recommendation: Log the data that has been processed when
"an RFC task is started and when it returns, so that the
"job can be restarted with unprocessed data.
WRITE msg.
"Remove server from further consideration for
"parallel processing tasks in this program.
"Get name of server just called...
CALL FUNCTION 'SPBT_GET_PP_DESTINATION'
EXPORTING
RFCDEST = TASKLIST-RFCDEST
EXCEPTIONS
OTHERS = 1.
"Then remove from list of available servers.
CALL FUNCTION 'SPBT_DO_NOT_USE_SERVER'
EXPORTING
SERVERNAME = TASKLIST-RFCDEST
EXCEPTIONS
INVALID_SERVER_NAME = 1
NO_MORE_RESOURCES_LEFT = 2
"No servers left in group.
PBT_ENV_NOT_INITIALIZED_YET = 3
OTHERS = 4.
WHEN 3.
"No resources (dialog work processes) available at
"present. You need to handle this exception, waiting
"and repeating the CALL FUNCTION until processing
"can continue or it is apparent that there is a
"problem that prevents continuation.
MESSAGE I837. "All servers currently busy.
"Wait for replies to asynchronous RFC calls. Each
"reply should make a dialog work process available again.
IF EXCP_FLAG = SPACE.
EXCP_FLAG = 'X'.
"First attempt at RESOURCE_FAILURE handling. Wait
"until all RFC calls have returned or up to 1 second.
"Then repeat CALL FUNCTION.
WAIT UNTIL RCV_JOBS >= SND_JOBS UP TO '1' SECONDS.
ELSE.
"Second attempt at RESOURCE_FAILURE handling
WAIT UNTIL RCV_JOBS >= SND_JOBS UP TO '5' SECONDS.
"SY-SUBRC 0 from WAIT shows that replies have returned.
"The resource problem was therefore probably temporary
"and due to the workload. A non-zero RC suggests that
"no RFC calls have been completed, and there may be
"problems.
IF SY-SUBRC = 0.
CLEAR EXCP_FLAG.
ELSE. "No replies
"Endless loop handling
ENDIF.
ENDIF.
ENDCASE.
ENDDO.
* Wait for end of job: replies from all RFC tasks.
* Receive remaining asynchronous replies.
WAIT UNTIL RCV_JOBS >= SND_JOBS.
LOOP AT TASKLIST.
WRITE:/ 'Received task:', TASKLIST-TASKNAME COLOR 1,
30 'Destination: ', TASKLIST-RFCDEST COLOR 1.
ENDLOOP.
* This routine is triggered when an RFC call completes and
* returns. The routine uses RECEIVE to collect IMPORT and TABLES
* data from the RFC function module.
* Note that the WRITE keyword is not supported in asynchronous
* RFC. If you need to generate a list, your RFC function
* module should return the list data in an internal table. You
* can then collect this data and output the list at the conclusion
* of processing.
FORM RETURN_INFO USING TASKNAME.
DATA: INFO_RFCDEST LIKE TASKLIST-RFCDEST.
RECEIVE RESULTS FROM FUNCTION 'RFC_SYSTEM_INFO'
IMPORTING RFCSI_EXPORT = INFO
EXCEPTIONS
COMMUNICATION_FAILURE = 1
SYSTEM_FAILURE = 2.
RCV_JOBS = RCV_JOBS + 1. "Receiving data
IF SY-SUBRC NE 0.
"Handle communication and system failure
ELSE.
READ TABLE TASKLIST WITH KEY TASKNAME = TASKNAME.
IF SY-SUBRC = 0. "Register data
TASKLIST-RFCHOST = INFO-RFCHOST.
MODIFY TASKLIST INDEX SY-TABIX.
ENDIF.
ENDIF.
ENDFORM.
Reward points if that helps.
Manish
Message was edited by:
Manish Kumar -
CAn anybody tell me the use of parallel processing in MD01 screen...
SAP help tells the following..
But I cant understand that..
PLease help me get a clarification.
Karthick P
Dear Karthick,
Parallel processing is used in a total planning run in MD01 or MD40, that means a planning run for a number of plants at a time. By using parallel processing procedures, you can significantly improve the total planning run. The parallel processing procedures can run either on several servers or in several sessions, and we define this setting in OMIQ.
The scope of planning is defined in OMIZ; here we combine the plants / MRP areas for the total planning run, which is used in MD01 to run MRP for all plants at the same time.
So if you want to run MRP for all plants or MRP areas as a total planning run, you need to define the scope of planning and parallel processing to improve system performance; if you don't define it, you can still execute MRP.
Summary:
1) If you want to run MRP via MD01 online, you require a scope of planning.
Reason: only with a scope of planning can you combine a number of plants / MRP areas, etc.
If you want to run MRP in the background, this is not required.
2) Parallel processing will always improve system speed by accessing the database layer from multiple application work processes.
Setup: go to SPRO --> Material Management --> Consumption-Based Planning --> Define Parallel Processing in MRP.
Refer to the parallel processing settings for MRP.
Hope this will be useful.
Hope this will be useful
Regards
JH -
Number of parallel process definition during data load from R/3 to BI
Dear Friends,
We are using BI 7.00. We have a requirement in which I need to increase the number of parallel processes during data load from R/3 to BI. I want to modify this for a particular DataSource and check. Can experts provide helpful answers to the following questions:
1) When a load is taking place or has taken place, where can we see how many parallel processes that particular load has used?
2) Where should I change the setting for the number of parallel processes for the data load (from R/3 to BI), and not within BI?
3) How does the system work, and what will be the net result of increasing or decreasing the number of parallel processes?
Expecting Experts help.
Regards,
M.M
Dear Des Gallagher,
Thank you very much for the useful information provided. The following was my observation.
From the posts in this forum, I was given to understand that the setting for a specific DataSource can be made at the InfoPackage and DTP level. I carried out the same and found that there is no change in the load, i.e., the system by default takes only one parallel process even though I maintained 6.
Can you kindly explain the above point, i.e.:
1) Even though the value is maintained at the InfoPackage level, will the system consider it or not? If not, from which transaction does the system derive the single parallel process?
Actually we wanted to increase the package size, but we failed because I could not understand what values have to be maintained. Can you explain in detail?
Can you clarify my doubt and provide a solution?
Regards,
M.M -
Hi Experts,
We are running an APO DP process chain with parallel processing in our company. We are experiencing some issues regarding the runtime of the process chain and need your help on the points below:
- What are the ways we can optimize process chain run time.
- Special points we need to take care of in case of parallel processing profiles used in process chain.
- Any specific sequence to be followed for different processes in process chain - if there is some best practice followed.
- Any notes suggesting ways to improve system performance for APO version 7 with different enhancement packs 1 and 2.
Any help will be really appreciated.
Regards
Hi Neelesh,
There are many ways to optimize performance of the process chains (background jobs) in APO system.
Firstly, I would recommend you identify the pain areas (steps) which are completing with the longest runtimes. Each step then has its own approach for decreasing the runtime.
You may end up with steps like InfoPackage executions, DTPs, DP mass processing jobs etc. which are running with long runtimes. Target each of them differently and find ways to optimize; at the same time, the approach you follow should be technically feasible from a Basis perspective (system load and utilization) as well.
Coming to parallel processing, you can use it for different jobs, such as loading an InfoCube, mass processing, InfoPackage execution, DTP, TSCOPY etc.
Check the below link for more info
Performance problems in DP mass processing
Let me know if you require further info.
Regards,
Raj -
Problem in Dynamic Parallel processing using Blocks
Hi All,
My requirement is to have parallel approvals, so I am trying to use dynamic parallel processing through blocks. I can't use forks, since the agents are determined at runtime.
I am using the ParForEach block. I am sending &AGENTS& as a multiline container element in the 'Parallel Processing' tab, and I have a user decision in the block. Right now I am hardcoding 2 agents in the 'AGENTS' multiline element. It is working fine, but I am getting 2 instances of the block. I understand that is because I am sending &AGENTS& in the Parallel Processing tab. I just need one instance of the block going to all the users in the &AGENTS& multiline element.
Please let me know how to achieve the same. I have already searched the forum but couldn't find anything that suits my requirement.
Pls help!
Regards,
Soumya
Yes, that's true: whenever you use a ParForEach block, a separate work item ID is created for each value entry in the table, i.e. a separate instance is created, so parallel processing is not possible like that.
Instead, what you can do is create a fork with 3 branches and define an end condition such that the fork waits until all 3 branches are executed.
Before the fork step, determine all the agents and store them in an internal table; you can access a single internal table entry by using the index value. Check this [wiki|https://www.sdn.sap.com/irj/scn/wiki?path=/display/abap/accessingSingleEntryfromMulti-line+Element] on accessing a single entry from a multiline element.
For each task in the fork, assign an agent.
So, since we have defined the condition that all three branches must be executed before leaving the fork step, the fork will wait until all the steps are completed.
Dynamic Parallel Processing using Rule
Hello,
I am using a User Decision within a Block (ParForEach type) step to send work-items to multiple Approvers parallelly.
For this I have created a Multi-line container LI_APPROVERS and bound &LI_APPROVERS[&_WF_PARFOREACH_INDEX&]& to &_LI_APPROVERS_LINE& in the "Parallel Processing" tab of the Block.
Now in User Decision I am using Agent as Expression = &_LI_APPROVERS_LINE&. This is working perfectly fine if I fetch the values in LI_APPROVERS via a background method before "Block" step is executed.
I want to know if we can do this using a "Rule" within the User Decision? Meaning approvers are determined by the Rule(through a FM) at the run time instead of fetching them beforehand. I created a custom Rule and tried passing it under Agents but it didn't work. I do not know what bindings need to be done and how each line will be passed to User Decision to create a work-item for each user.
Or
I should remove the Block step completely and directly use the User Decision Task with Parallel Processing option under Miscellaneous tab?
Can someone please explain how to achieve this using a Rule and exactly what bindings are required.
Thanks.
Hi Anjan,
Yes, that's exactly what I want to know. I saw your below response in one of the threads but could not understand exactly how to do it. Can you please explain it.
You have all your agents in one multiline container element in workflow.
Then you take a block step with ParForEach.
Then create a custom rule which imports the multiline element of agents and a line number. In the rule you populate ACTOR_TAB with the agent taken from that multiline container element at index line_no.
Then you take an activity step. As the agent, use your custom rule with proper binding of the multiline element of agents, and for line_no you pass _***_line from the block container. Then the work item will be sent to n people in parallel.
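A rule function module along those lines might look like this (a sketch only; the FM name and the container element names LI_APPROVERS and LINE_NO are assumptions that must match your workflow binding):

```abap
FUNCTION z_rule_nth_approver.
*"  TABLES      ACTOR_TAB STRUCTURE SWHACTOR
*"  CHANGING    AC_CONTAINER STRUCTURE SWCONT
*"  EXCEPTIONS  NOBODY_FOUND
* Container macros (normally included in the function group's
* top include).
  INCLUDE <cntn01>.

  DATA: lt_approvers TYPE STANDARD TABLE OF swhactor,
        ls_approver  TYPE swhactor,
        lv_line_no   TYPE i.

* Read the multiline agent list and the index from the rule container.
  swc_get_table ac_container 'LI_APPROVERS' lt_approvers.
  swc_get_element ac_container 'LINE_NO' lv_line_no.

* Return only the agent at position line_no.
  READ TABLE lt_approvers INDEX lv_line_no INTO ls_approver.
  IF sy-subrc <> 0.
    RAISE nobody_found.
  ENDIF.
  APPEND ls_approver TO actor_tab.
ENDFUNCTION.
```

The binding then passes the workflow's agent table to LI_APPROVERS and the block's index element to LINE_NO, so each parallel branch resolves to exactly one approver.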
This is my current design:
Activity returns agents in LI_APPROVERS.
At Block: I have binding &LI_APPROVERS[&_WF_PARFOREACH_INDEX&]& --> &_LI_APPROVERS_LINE&
At UD: I have Agents as Expression = &_LI_APPROVERS_LINE&
I want to remove the activity step (which gets the agents in the background) and replace it with a rule within the UD. What binding do I need from the rule to the workflow? How do I get the "line_no" in the rule, as you mentioned above?
Thanks for your response. -
Parallel processing using BLOCK step..
Hi,
I have used parallel processing with a BLOCK step, with a multiline container element. In the BLOCK step I have visibility of another container element generated by the block step (multiline container_LINE). The number of parallel processes is getting created as required, but the problem is that the value in multiline container_LINE is not getting passed to the send-mail step. I have checked the binding and everything is OK. Please help.
Sukumar.
Hi
When I am sure that I am doing properly a binding but it doesn't work then:
1. I activate workflow template definition (take a joke).
2. I write the magic word
/$sync
in the command line, to refresh buffers.
3. I delete the strange binding defined with drag & drop and define it once more using the old method from the former binding editors in R/3 4.6C systems: I take container elements from the lists of possible entries or write their names directly. I don't use drag & drop.
Regards
Mikolaj
There are no problems, just "issues" and "improvement opportunities". -
How to get BI background jobs to utilize parallel processing
Each step in our BI process chains creates exactly 1 active batch job (SM37) with in turn utilizes only 1 background process (SM50).
How do we get the active BI batch job to use more than 1 background process, similar to parallel processing (RZ20) in an ERP system?
Hi there,
Have you checked the number of background and parallel processes? Take a look at SAP Note 621400 - Number of required BTC processes for process chains. This may be helpful.
Minimum (with this setting, the chain runs more or less serially):
Number of parallel SubChains at the widest part of the chains + 1.
Recommended:
Number of parallel processes at the widest part of the chain + 1.
Optimal:
Number of parallel processes at the widest part of the chain + number of
parallel SubChains at the widest part + 1.
The optimal settings just avoids a delay if several SubChains are
started in parallel at the same time. In case of such a Process Chain
implementation and using the recommended number of background processes
there can be a short delay at the start of each SubChain (depends on the
frequency of the background scheduler, in general ~1 minute only).
Attention: Note that a higher degree of parallel processing and
therefore more batch processes only make sense if the system has
sufficient hardware capacity.
I hope this helps, or it may lead you to further checks to make.
Cheers,
Karen