Types of Parallel Processing
Hi SDNites,
I have a requirement where I have a huge amount of data in my internal table, and we want that data to be processed using parallel processing. Can you please let me know the different ways in which I can meet my objective?
Please provide details with some examples.
Regards,
Abhi
Also can you please tell me whether the parallel processing requirement can be done using FM (JOB_OPEN / JOB_CLOSE). If yes how can that be done?
Yes, it can be done this way; that's what I was referring to when I mentioned multiple background jobs. If you're not familiar with how to do it, check out the ABAP online help on the [SUBMIT .. VIA JOB|http://help.sap.com/abapdocu_70/en/ABAPSUBMIT_VIA_JOB.htm] statement, which also contains a short example. A more in-depth description of your options can be found in the online help [Programming with the Background Processing System|http://help.sap.com/saphelp_nw70ehp2/helpdata/en/fa/096c53543b11d1898e0000e8322d00/frameset.htm]. Here you can also find example programs for the RFC and the background job approach: [Implementing parallel processing|http://help.sap.com/saphelp_nw70ehp2/helpdata/en/fa/096e92543b11d1898e0000e8322d00/frameset.htm].
I personally think the RFC option has the major advantage that you already have a built-in capability for managing resource consumption (using RFC server groups). I'm not aware that you get this for free with background jobs (though if you have long-running tasks you cannot use RFC, because RFC calls are executed in dialog processes and thus face the same maximum runtime limit as normal dialog users).
Anyhow, in theory your task is rather simple, though of course the actual technical implementation can be quite challenging: essentially you're trying to implement a [divide and conquer algorithm|http://en.wikipedia.org/wiki/Divide_and_conquer_algorithm]; a prominent example is Google's [MapReduce framework|http://en.wikipedia.org/wiki/MapReduce] (it has nothing to do with ABAP, but the concept is of course valid and can be found nowadays in many applications).
So you just need to find a way to split up your problem into smaller sub-problems, which you then solve individually. Should you need a final step that combines all individual results (the reduce step in Google's framework), with background jobs you would need an approach for figuring out whether all job steps are done (so essentially a [semaphore|http://en.wikipedia.org/wiki/Semaphore_%28programming%29] or [fork-join queue|http://en.wikipedia.org/wiki/Fork-join_queue], however you want to look at it).
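To make the fork-join idea concrete, here is a rough, untested sketch of the RFC variant. The function module Z_PROCESS_PACKET, its table parameter, the MARA payload, and the packet size are all assumptions; the send/receive counters play the role of the semaphore guarding the final combine step:

```
DATA: lt_data    TYPE STANDARD TABLE OF mara,  " example payload
      lt_packet  TYPE STANDARD TABLE OF mara,
      ls_row     TYPE mara,
      lv_task(8) TYPE c,
      lv_snd     TYPE i,
      lv_rcv     TYPE i.

* Fork: one asynchronous RFC call per packet of 1000 rows.
LOOP AT lt_data INTO ls_row.
  APPEND ls_row TO lt_packet.
  IF lines( lt_packet ) >= 1000 OR sy-tabix = lines( lt_data ).
    lv_snd = lv_snd + 1.
    lv_task = lv_snd.
    CALL FUNCTION 'Z_PROCESS_PACKET'   " hypothetical RFC-enabled FM
      STARTING NEW TASK lv_task
      DESTINATION IN GROUP DEFAULT
      PERFORMING on_done ON END OF TASK
      TABLES
        it_packet = lt_packet.
    CLEAR lt_packet.
  ENDIF.
ENDLOOP.

* Join: the counter acts as the semaphore before the combine step.
WAIT UNTIL lv_rcv >= lv_snd.
* ... combine the individual results (the "reduce" step) here ...

*&---- called when each task finishes
FORM on_done USING p_task.
  RECEIVE RESULTS FROM FUNCTION 'Z_PROCESS_PACKET'.
  lv_rcv = lv_rcv + 1.
ENDFORM.
```

A production version would of course also handle COMMUNICATION_FAILURE, SYSTEM_FAILURE and RESOURCE_FAILURE on the call.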
Similar Messages
-
Parallel processing of mass data : sy-subrc value is not changed
Hi,
I have used parallel processing of mass data via "STARTING NEW TASK". In my function module I am handling the exceptions and finally raise the application-specific old exception to be handled in my main report program. Somehow sy-subrc is not getting changed and always returns 0, even if the exception is raised.
Can anyone help me about the same.
Thanks & Regards,
Nitin
Hi Silky,
I've built a block of code to explain this.
DATA: ls_edgar TYPE zedgar,
      l_task(40).

DELETE FROM zedgar.
COMMIT WORK.

l_task = 'task1'.
ls_edgar-matnr = '123'.
ls_edgar-text  = 'qwe'.
CALL FUNCTION 'Z_EDGAR_COMMIT_ROLLBACK'
  STARTING NEW TASK l_task
  PERFORMING f_go ON END OF TASK
  EXPORTING
    line = ls_edgar.

l_task = 'task2'.
ls_edgar-matnr = 'abc'.
ls_edgar-text  = 'def'.
CALL FUNCTION 'Z_EDGAR_COMMIT_ROLLBACK'
  STARTING NEW TASK l_task
  PERFORMING f_go ON END OF TASK
  EXPORTING
    line = ls_edgar.

l_task = 'task3'.
ls_edgar-matnr = '456'.
ls_edgar-text  = 'xyz'.
CALL FUNCTION 'Z_EDGAR_COMMIT_ROLLBACK'
  STARTING NEW TASK l_task
  PERFORMING f_go ON END OF TASK
  EXPORTING
    line = ls_edgar.
*&  Form f_go
FORM f_go USING p_c TYPE ctype.
  RECEIVE RESULTS FROM FUNCTION 'Z_EDGAR_COMMIT_ROLLBACK'
    EXCEPTIONS err = 2.
  IF sy-subrc = 2.
*   this won't affect the LUW of the received function
    ROLLBACK WORK.
  ELSE.
*   this won't affect the LUW of the received function
    COMMIT WORK.
  ENDIF.
ENDFORM.                    "f_go
and the function is:
FUNCTION z_edgar_commit_rollback.
*"*"Local interface:
*"  IMPORTING
*"     VALUE(LINE) TYPE ZEDGAR
*"  EXCEPTIONS
*"     ERR
  MODIFY zedgar FROM line.
  IF line-matnr CP 'a*'.
*   comment RAISE or ROLLBACK/COMMIT to test
*   RAISE err.
    ROLLBACK WORK.
  ELSE.
    COMMIT WORK.
  ENDIF.
ENDFUNCTION.
ok.
In your main program you have a Logical Unit of Work (LUW), which consists of an application transaction and is associated with a database transaction. Once you start a new task, you're creating an independent LUW with its own database transaction.
So if you do a commit or rollback in your function, the effect is only on the records you're processing in the function.
There is a way to capture, in the main LUW, the event when this LUW concludes: that is the PERFORMING ... ON END OF TASK addition. In there you can get the result of the function, but you cannot commit or roll back the LUW of the function, since that has already implicitly happened at the conclusion of the function. You can test it by commenting/uncommenting the code I've supplied accordingly.
So, if you want to roll back the LUW of the function, you'd better do it inside the function.
I don't think it matches your question exactly, but maybe it will lead you onto the right track. Give me more details if it doesn't.
Hope it helps,
Edgar -
How to do parallel processing with dynamic internal table
Hi All,
I need to implement parallel processing that involves dynamically created internal tables. I tried doing so using RFC function modules (using STARTING NEW TASK and other such methods) but didn't succeed: this requires RFC-enabled function modules, and at the same time RFC-enabled function modules do not allow generic data types (STANDARD TABLE), which are needed for passing dynamic internal tables. My exact requirement is as follows:
1. I have a large chunk of data in two internal tables, one of them formed dynamically, and hence its structure is not known at the time of coding.
2. This data has to be processed together to generate another internal table, whose structure is pre-defined. But this data processing is taking a very long time, as the number of records is close to a million.
3. I need to divide the dynamic internal table into (say) 1000 records each and pass them to a function module submitted to run in another task. Many such tasks will be executed in parallel.
4. The function module running in parallel can insert the processed data into a database table, and the main program can access it from there.
Unfortunately, due to the limitation of not allowing generic data types in RFC, I'm unable to do this. Does anyone have an idea how to implement parallel processing using dynamic internal tables under these conditions?
Any help will be highly appreciated.
Thanks and regards,
Ashin
Try the below code:
DATA: w_subrc TYPE sy-subrc.
DATA: w_infty(5) TYPE c.
DATA: w_string TYPE string.

FIELD-SYMBOLS: <f1>    TYPE table.
FIELD-SYMBOLS: <f1_wa> TYPE any.

DATA: ref_tab TYPE REF TO data.

CONCATENATE 'P' infty INTO w_infty.
CREATE DATA ref_tab TYPE STANDARD TABLE OF (w_infty).
ASSIGN ref_tab->* TO <f1>.

* Create dynamic work area
CREATE DATA ref_tab TYPE (w_infty).
ASSIGN ref_tab->* TO <f1_wa>.

IF begda IS INITIAL.
  begda = '18000101'.
ENDIF.
IF endda IS INITIAL.
  endda = '99991231'.
ENDIF.

CALL FUNCTION 'HR_READ_INFOTYPE'
  EXPORTING
    pernr           = pernr
    infty           = infty
    begda           = '18000101'
    endda           = '99991231'
  IMPORTING
    subrc           = w_subrc
  TABLES
    infty_tab       = <f1>
  EXCEPTIONS
    infty_not_found = 1
    OTHERS          = 2.

IF sy-subrc <> 0.
  subrc = w_subrc.
ENDIF. -
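One possible way around the restriction that RFC-enabled function modules refuse generic table types, sketched here purely as an idea: serialize the dynamic table into an XSTRING with EXPORT ... TO DATA BUFFER, pass the XSTRING together with the structure name through the RFC interface, and rebuild the table on the receiving side. The function module Z_PROCESS_PACKET and its parameter names are hypothetical:

```
* Caller: serialize the dynamic table <f1> into a byte string.
DATA: lv_buffer TYPE xstring.
EXPORT tab = <f1> TO DATA BUFFER lv_buffer.

CALL FUNCTION 'Z_PROCESS_PACKET'      " hypothetical RFC-enabled FM
  STARTING NEW TASK 'T001'
  EXPORTING
    iv_struct = w_infty               " name of the line type
    iv_data   = lv_buffer.            " serialized packet

* Inside Z_PROCESS_PACKET: rebuild the table dynamically.
DATA: lr_tab TYPE REF TO data.
FIELD-SYMBOLS: <lt_tab> TYPE STANDARD TABLE.
CREATE DATA lr_tab TYPE STANDARD TABLE OF (iv_struct).
ASSIGN lr_tab->* TO <lt_tab>.
IMPORT tab = <lt_tab> FROM DATA BUFFER iv_data.
```

Since both sides only exchange an XSTRING and a type name, the RFC interface never needs a generic table parameter.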
Parallel Processing in CRM 5.0 for Customer download
Hi
I was referring to OSS note 350176 for implementing parallel processing. I wanted to achieve parallel processing for request loads for Customer download.
Is it possible to achieve it by setting the parameter CRM_MAX_QUEUE_NUMBER_INITIAL in CRMPAROLTP or it can be achieved only through implementing exits?
Thanks for your help in anticipation.
Regards
Karthik
Hi Karthik,
In parallel processing, first you need to set up different filters for the different types of data (basic/business data, customizing and condition data) in R3AC1, R3AC4 and R3AC5 separately, then start downloading the objects to CRM; remaining leftover objects can be bypassed. Then the status of the download is set to yellow, red or blue depending on the percentage downloaded: Y for more than 100%, R for more than 75%, B for less than 50%.
Reward if helpful
Venkat -
Using BAPI_PO_CREATE1 in parallel processing
Hi experts,
I am using BAPI_PO_CREATE1 in parallel processing to create multiple purchase orders. The program is creating only one PO (the first one), though it should create multiple POs as per the program and data.
For the rest of the data it returns an error message saying "No instance of object type PurchaseOrder has been created".
Please suggest how I can fix this issue.
Points are sure.
Thanks & Regards.
Anirudh
Without looking at some portion of the code around this call and your parallel-processing logic, it will be difficult to say.
-
Hi All,
I have multipe messages in a multiline container in a BPM.
Now I want to send each message individually to a web service and get the response. I want to achieve this requirement with a parallel process so that the total time is less.
Please let me know how to implement this requirement in BPM.
Regards,
Srinivas.
Hi Adish,
>> I want to send message to syn webservice
From which system are you sending the data, and in what manner (synchronous or asynchronous)?
>> Where I don't know exactly what will be the value for n in the transformation step. It can vary depending upon the input message that BPM will receive.
For this you can use a container variable of type xsd:integer, store the value in it, and later use it for the next steps.
But please give an arrow diagram and briefly describe your exact requirement (e.g., File->XI->HTTP).
Thanks,
Gujjeit -
How to define "leading" random number in Infoset for parallel processing
Hello,
in Bank Analyzer we use an InfoSet which consists of a selection across 4 ODS tables to gather data.
No matter which PACKNO fields we check or uncheck in the InfoSet definition screen (transaction RSISET), the parallel framework always selects the same PACKNO field from one ODS table.
Unfortunately, the table that is selected by the framework is not suitable, because our "leading" ODS table, which holds most of our selection criteria, is another one.
How can we "convince" the parallel framework to select our leading table for the specification of the PACKNO in addition (this would be 20 times faster due to better select options)?
We even tried to assign "alternate characteristics" to the PACKNOs we do not like to use, but it seems that note 999101 only fixes this for non-system fields.
For the random number, however, a different form routine is used in /BA1/LF3_OBJ_INDEX_READF01: fill_range_random instead of fill_range.
Has anyone managed to assign the PACKNO of his choice to the infoset selection?
How?
Thanks in advance
Volker
Well, it is a bit more complicated.
ODS one, which the parallel framework selects as the one to deliver the PACKNO, is about equal in size (~120 GB each) to ODS two, which has two significant fields that cut down the amount of data to be retrieved.
Currently we execute the generated SQL in the best possible manner (by faking some stats). The problem is that I'd like to have a statement that has the PACKNO in the very same table.
PACKNO is a random number generated specifically for parallel processing. The job starts about 100 slaves. Each slave gets a packet to be processed from the framework, which is internally represented by a BETWEEN clause on this PACKNO. This is joined against ODS2, and then the selective fields can be compared, so that 90% of the already-fetched rows can be discarded.
Basically it goes like:
select ...
from
ods1 T_00,
ods2 T_01,
ods3 T_02,
ods4 T_03
where
... some key equivalence join-conditions ...
AND T_00.PACKNO BETWEEN '000000' and '000050' -- very selective on T_00
AND T_01.TYPE = '202' -- selective Value 10% on second table
I'm trying to change this to
AND T_01.PACKNO BETWEEN '000000' and '000050'
AND T_01.TYPE = '202' -- selective Value 10%
so I can use a combined index on T_01 (TYPE, PACKNO).
This would be 10 times more selective on the driving table, and because T_00 would then be joined for just the rows I need, a calculated 20-30 times faster overall.
It really boosts when I do this in sqlplus.
Hope this clarifies it a bit.
The problem is that I cannot change the code, either for building the packets or for executing the application.
I need to change the InfoSet so that the framework decides to build proper SQL with T_01.PACKNO instead of T_00.PACKNO.
Thanks a lot
Volker -
How to map expdp parallele process to output file
How do I map an expdp parallel process to its output file while it is running?
Say I use expdp dumpfile=test_%U.dmp parallel=5 ...
Each parallel process writes to its related output file; I want to know the mapping at run time.
I'm not sure if this information is reported in the status command, but it's worth a shot. You can get to the status command in 2 ways:
If you are running a Data Pump job from a terminal window, then while it is running, type Ctrl-C and you will get the Data Pump prompt, either IMPORT> or EXPORT>.
IMPORT> status
If you type status, it will tell you a bunch of information about the job and then each process. It may have dump-file information in there.
If you run it interactively, then you need to attach to the job. To do this, you need to know the job name. If you don't know it, you can look at sys.dba_datapump_jobs if privileged, or sys.user_datapump_jobs if not. You will see a job name and a schema name. Once you have that, you can:
expdp user/password attach=schema.job_name
This will bring you to the EXPORT>/IMPORT> prompt. Type status there.
Like I said, I'm not sure if file-name information is specified, but it might be. If it is not there, then I don't know of any other way to get it.
Dean -
ABAP OO and parallel processing
Hello ABAP community,
I am trying to implement an ABAP OO scenario where I have to take into account parallel processing and processing logic in the sense of update function modules (TYPE V1).
The scenario is defined as follows:
Frame class X creates an instance of class Y and an instance of class Z.
Classes Y and Z should be processed in parallel, so class X calls classes Y and Z.
Classes Y and Z call BAPIs and do different database changes.
When classes Y or Z have finished, the status of processing is written into a status table by caller class X.
The processing logic within class Y and class Z should be an SAP LUW in the sense of an update function module (TYPE V1).
Can I use events?
(How) should I use "CALL FUNCTION IN UPDATE TASK"?
(How) should I use "CALL FUNCTION STARTING NEW TASK"?
What is the best method to realise that behaviour?
Many thanks for your suggestions.
Hello Christian,
I will describe in detail how I have solved this problem. Maybe there is a newer way ... but it works.
STEPS:
I assume you have split your data into packages.
1.) Create an RFC-enabled FM: Z_WAIT.
It returns OK or NOT OK.
This FM does the following:
DO.
  call function TH_WPINFO -> until WPINFO has more
  than a certain number of lines (==> free tasks).
ENDDO.
If it is OK ==> free tasks are available.
Call your FM (RFC!) like this:
CALL FUNCTION <FM>
  STARTING NEW TASK ls_tasknam  " Unique identifier!
  DESTINATION IN GROUP p_group
  PERFORMING return_info ON END OF TASK
  EXPORTING
    ...
  TABLES
    ...
  IMPORTING
    ...
  EXCEPTIONS
*   --- Take care of the order of the exceptions!
    COMMUNICATION_FAILURE = 3
    SYSTEM_FAILURE        = 2
    UNFORCED_ERROR        = 4
    RESOURCE_FAILURE      = 5
    OTHERS                = 1.
* Then you must check the difference between
* the started calls and the received calls.
* If the number exceeds a certain value limit_tasks:
WAIT UNTIL called_task < limit_tasks UP TO '600' SECONDS.
The value should not be greater than 20!
DATA description:
PARAMETERS: p_group LIKE bdfields-rfcgr DEFAULT 'Server_alle'. " For example; use the F4 help
if you have defined the report parameter as above.
ls_tasknam ==> just the increasing number of RFC calls, as a character value.
RETURN_INFO is a form routine in which you can check the results. Within this form you must call:
RECEIVE RESULTS FROM FUNCTION <FM>
  TABLES ...  " The tables of your <FM>, in exactly the same order!
  EXCEPTIONS
    COMMUNICATION_FAILURE     = 3
    SYSTEM_FAILURE            = 2
    UNFORCED_ERROR            = 4
    NO_ACTIVATE_INFOSTRUCTURE = 1.
Here you must count the received calls!
And you can save them into an internal table for checking!
I hope I could help you a little bit.
Good luck
Michael -
Parallel process(ora_pnnn)
When I searched for background processes with ps -ef|grep ora_|grep DCC, eight ora_pnnn_DCC processes showed up. (Of course, parallel_min_servers=0 and parallel_max_servers=8 are set in the init file.)
I understand that ora_pnnn are parallel processes, and that I should check the parallel_min_servers and parallel_server_idle_time values in the init file... but on our 8i database, parallel_server_idle_time has become a hidden parameter. How can I check the value of this hidden parameter?
One more thing: what does the DEGREE value among the dba_tables/dba_indexes columns mean? When degree > 1 it seems to be related to parallel processes, but I'm not sure.
Edited by: sglim79
You check it the same way as an ordinary parameter:
SQL> show parameter parallel_server_idle_time
For reference, if you connect as the sys user and run the SQL below, you can see the complete list of available hidden parameters:
select ksppinm
from x$ksppi
where substr(ksppinm,1,1) = '_'
The DEGREE value refers to the degree of parallelism. If degree > 1, it is set on the table, so statements run in parallel even if no parallel hint is given in the SQL. However, a parallel hint overrides the degree of parallelism set on the table.
The notes below are things to be careful about when using the parallel_clause at table creation time:
Notes on the parallel_clause
If table contains any columns of LOB or user-defined object type, this statement as well as subsequent INSERT, UPDATE, or DELETE operations on table are executed serially without notification. Subsequent queries, however, will be executed in parallel.
For partitioned index-organized tables, CREATE TABLE ... AS SELECT is executed serially, as are subsequent DML operations. Subsequent queries, however, will be executed in parallel.
A parallel hint overrides the effect of the parallel_clause.
DML statements and CREATE TABLE ... AS SELECT statements that reference remote objects can run in parallel. However, the "remote object" must really be on a remote database. The reference cannot loop back to an object on the local database (for example, by way of a synonym on the remote database pointing back to an object on the local database).
For reference, setting a degree of parallelism at table creation time is not desirable; using a hint in the SQL statements that need it is recommended. -
Is PGA being used in parallel processes
Hi,
Is the PGA being used in parallel processing? If yes, can increasing the size of the PGA increase performance?
Thanks
First of all, we need to clarify the "PGA" in Oracle.
The PGA is a type of memory which contains data and control information for a server process. When a server process starts, a PGA value is assigned to a single user (if I remember correctly, its default is 5M). The collection of PGAs is defined as the total instance PGA. PGA_AGGREGATE_TARGET defines the maximum value it can reach.
Regarding your concept, in my opinion it is basically wrong. Caching happens in the buffer cache, which is a component of the SGA. When you first execute your SQL text, Oracle caches it to reduce the physical I/O. Afterwards, if you need to execute it again, this will be a logical I/O.
If I got your question wrong, please expand on it for further help. -
Parallel Processing through ABAP program
Hi,
We are trying to do the parallel processing through ABAP. As per SAP documentation we are using the CALL FUNCTION STARTING NEW TASK DESTINATION.
We have one Z function Module and as per SAP we are making this Function module (FM)as Remote -enabled module.
In this FM we would like to process data which we get it from internal table and would like to send back the processed data(through internal table) to the main program where we are using CALL FUNCTION STARTING NEW TASK DESTINATION.
Please suggest how to achieve this.
We tried the EXPORT/IMPORT option, meaning we used EXPORT of the internal table in the FM with some memory ID, and in the main program IMPORT of the internal table with the same memory ID. But this option is not working, even though the memory ID and the name of the internal table match.
Also, SAP documentation says that we can use RECEIVE RESULTS FROM FUNCTION 'RFC_SYSTEM_INFO'
IMPORTING RFCSI_EXPORT = INFO in conjunction with CALL FUNCTION STARTING NEW TASK DESTINATION. The documentation also specifies that "RECEIVE is needed to gather IMPORTING and TABLE returns of an asynchronously executed RFC Function module". But while creating the FM as remote-enabled we can't have EXPORT or IMPORT parameters.
Please help !
Thanks in advance
Santosh
<i>We tried the EXPORT/IMPORT option, meaning we used EXPORT of the internal table in the FM with some memory ID, and in the main program IMPORT of the internal table with the same memory ID. But this option is not working.</i>
I think that this is not working because that memory does not work across sessions/tasks. I think that
IMPORT FROM SHARED BUFFER and EXPORT TO SHARED BUFFER would work. I have used these in the past and they work pretty well.
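A minimal sketch of that shared-buffer idea (the INDX area, the ID 'ZDAT' and the MARA payload are just placeholders for illustration):

```
* Producer session: put the table into the cross-session shared buffer.
DATA: lt_data TYPE STANDARD TABLE OF mara.
EXPORT tab = lt_data TO SHARED BUFFER indx(st) ID 'ZDAT'.

* Consumer session: read it back under the same area and ID.
DATA: lt_copy TYPE STANDARD TABLE OF mara.
IMPORT tab = lt_copy FROM SHARED BUFFER indx(st) ID 'ZDAT'.
IF sy-subrc <> 0.
  " entry not found (e.g., displaced from the buffer) - handle the gap
ENDIF.
```

Note that the shared buffer can displace entries under memory pressure, so the consumer must always check sy-subrc.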
Also,
here is a quick sample of the "new task" and "receive" functionality. You cannot specify the importing parameters when calling the FM; you specify them at the receiving end.
report zrich_0001.

data: session(1) type c.
data: ccdetail type bapi0002_2.

start-of-selection.

* Call the function in another session... control will stop
* in the calling program and wait for a response from the other session
  call function 'BAPI_COMPANYCODE_GETDETAIL'
    starting new task 'TEST' destination 'NONE'
    performing set_session_done on end of task
    exporting
      companycodeid = '0010'.
*   importing
*     companycode_detail  = ccdetail
*     companycode_address =
*     return              =

* wait here till the other session is done
  wait until session = 'X'.
  write:/ ccdetail.

*&  FORM SET_SESSION_DONE
form set_session_done using taskname.
* Receive results into ccdetail from the function.......
* this will also close the session
  receive results from function 'BAPI_COMPANYCODE_GETDETAIL'
    importing
      companycode_detail = ccdetail.
* Set session as done.
  session = 'X'.
endform.
Hope this helps.
Rich Heilman -
Parallel processing in background
Hi All,
I am processing 1 million records in background, which takes approximately 10 hours. I wanted to reduce the time to less than 1 hour and tried using parallel processing. But the tasks run in dialog work processes and give ABAP short dumps due to timeout.
Are there any other solutions with which I can reduce the total processing time?
Please note that I cannot split: I am getting 1 million records from a select query, and after processing all those records in SAP, I am sending them to XI, and XI will post them in the legacy system.
Please note that all other performance tuning is done.
Thanks,
Rajesh.
Hi Rajesh,
Refer to the sample code below for parallel processing.
By doing this, your processing time will be highly optimized.
Go through the description given in the code at each level.
This code checks the available work processes and assigns data in packets for processing. This way you save a lot of time, especially when the data runs into millions.
Hope it helps.
REPORT PARAJOB.
* Data declarations
DATA: GROUP LIKE RZLLITAB-CLASSNAME VALUE ' ',
                               "Parallel processing group.
                               "SPACE = group default (all servers)
      WP_AVAILABLE TYPE I,     "Number of dialog work processes
                               "available for parallel processing
                               "(free work processes)
      WP_TOTAL TYPE I,         "Total number of dialog work
                               "processes in the group
      MSG(80) VALUE SPACE,     "Container for error message in
                               "case of remote RFC exception.
      INFO LIKE RFCSI, C,      "Message text
      JOBS TYPE I VALUE 10,    "Number of parallel jobs
      SND_JOBS TYPE I VALUE 1, "Work packets sent for processing
      RCV_JOBS TYPE I VALUE 1, "Work packet replies received
      EXCP_FLAG(1) TYPE C,     "Number of RESOURCE_FAILUREs
      TASKNAME(4) TYPE N VALUE '0001', "Task name (name of
                               "parallel processing work unit)
      BEGIN OF TASKLIST OCCURS 10,     "Task administration
        TASKNAME(4) TYPE C,
        RFCDEST LIKE RFCSI-RFCDEST,
        RFCHOST LIKE RFCSI-RFCHOST,
      END OF TASKLIST.
* Optional call to SPBT_INITIALIZE to check the
* group in which parallel processing is to take place.
* Could be used to optimize sizing of work packets
* (work / WP_AVAILABLE).
CALL FUNCTION 'SPBT_INITIALIZE'
  EXPORTING
    GROUP_NAME = GROUP               "Name of group to check
  IMPORTING
    MAX_PBT_WPS  = WP_TOTAL          "Total number of dialog work
                                     "processes available in group
                                     "for parallel processing
    FREE_PBT_WPS = WP_AVAILABLE      "Number of work processes
                                     "available in group for
                                     "parallel processing at this
                                     "moment
  EXCEPTIONS
    INVALID_GROUP_NAME = 1           "Incorrect group name; RFC
                                     "group not defined. See
                                     "transaction RZ12
    INTERNAL_ERROR = 2               "R/3 System error; see the
                                     "system log (transaction
                                     "SM21) for diagnostic info
    PBT_ENV_ALREADY_INITIALIZED = 3  "Function module may be
                                     "called only once; is called
                                     "automatically by R/3 if you
                                     "do not call it before starting
                                     "parallel processing
    CURRENTLY_NO_RESOURCES_AVAIL = 4 "No dialog work processes
                                     "in the group are available;
                                     "they are busy or server load
                                     "is too high
    NO_PBT_RESOURCES_FOUND = 5       "No servers in the group
                                     "met the criteria of >
                                     "two work processes defined.
    CANT_INIT_DIFFERENT_PBT_GROUPS = 6
                                     "You have already initialized
                                     "one group and have now tried
                                     "to initialize a different group.
    OTHERS = 7.
CASE SY-SUBRC.
  WHEN 0.
    "Everything's OK. Optionally set up for optimizing size of
    "work packets.
  WHEN 1.
    "Non-existent group name. Stop report.
    MESSAGE E836. "Group not defined.
  WHEN 2.
    "System error. Stop and check system log for error
    "analysis.
  WHEN 3.
    "Programming error. Stop and correct program.
    MESSAGE E833. "PBT environment was already initialized.
  WHEN 4.
    "No resources: this may be a temporary problem. You
    "may wish to pause briefly and repeat the call. Otherwise
    "check your RFC group administration: group defined
    "in accordance with your requirements?
    MESSAGE E837. "All servers currently busy.
  WHEN 5.
    "Check your servers, network, operation modes.
  WHEN 6.
    "Programming error: a different group was already
    "initialized. Stop and correct program.
ENDCASE.

* Do parallel processing. Use CALL FUNCTION STARTING NEW TASK
* DESTINATION IN GROUP to call the function module that does the
* work. Make a call for each record that is to be processed, or
* divide the records into work packets. In each case, provide the
* set of records as an internal table in the CALL FUNCTION
* keyword (EXPORT, TABLES arguments).
DO.
  CALL FUNCTION 'RFC_SYSTEM_INFO'   "Function module to perform
                                    "in parallel
    STARTING NEW TASK TASKNAME      "Name for identifying this
                                    "RFC call
    DESTINATION IN GROUP GROUP      "Name of group of servers to
                                    "use for parallel processing.
                                    "Enter the group name exactly
                                    "as it appears in transaction
                                    "RZ12 (all caps). You may
                                    "use only one group name in a
                                    "particular ABAP program.
    PERFORMING RETURN_INFO ON END OF TASK
                                    "This form is called when the
                                    "RFC call completes. It can
                                    "collect IMPORT and TABLES
                                    "parameters from the called
                                    "function with RECEIVE.
    EXCEPTIONS
      COMMUNICATION_FAILURE = 1 MESSAGE MSG
                                    "Destination server not
                                    "reached or communication
                                    "interrupted. MESSAGE msg
                                    "captures any message
                                    "returned with this
                                    "exception (E or A messages
                                    "from the called FM, for
                                    "example). After exception
                                    "1 or 2, instead of aborting
                                    "your program, you could use
                                    "SPBT_GET_PP_DESTINATION and
                                    "SPBT_DO_NOT_USE_SERVER to
                                    "exclude this server from
                                    "further parallel processing.
                                    "You could then re-try this
                                    "call using a different
                                    "server.
      SYSTEM_FAILURE = 2 MESSAGE MSG
                                    "Program or other internal
                                    "R/3 error. MESSAGE msg
                                    "captures any message
                                    "returned with this
                                    "exception.
      RESOURCE_FAILURE = 3.         "No work processes are
                                    "currently available. Your
                                    "program MUST handle this
                                    "exception.
*     YOUR_EXCEPTIONS = X.          "Add exceptions generated by
                                    "the called function module
                                    "here. Exceptions are
                                    "returned to you and you can
                                    "respond to them here.
  CASE SY-SUBRC.
    WHEN 0.
      "Administration of asynchronous RFC tasks:
      "save the name of the task...
      TASKLIST-TASKNAME = TASKNAME.
      "... and get the server that is performing the RFC call.
      CALL FUNCTION 'SPBT_GET_PP_DESTINATION'
        EXPORTING
          RFCDEST = TASKLIST-RFCDEST
        EXCEPTIONS
          OTHERS  = 1.
      APPEND TASKLIST.
      WRITE: / 'Started task: ', TASKLIST-TASKNAME COLOR 2.
      TASKNAME = TASKNAME + 1.
      SND_JOBS = SND_JOBS + 1.
      "Mechanism for determining when to leave the loop. Here, a
      "simple counter of the number of parallel processing tasks.
      "In production use, you would end the loop when you have
      "finished dispatching the data that is to be processed.
      JOBS = JOBS - 1.  "Number of existing jobs
      IF JOBS = 0.
        EXIT.  "Job processing finished
      ENDIF.
    WHEN 1 OR 2.
      "Handle communication and system failure. Your program must
      "catch these exceptions and arrange for a recoverable
      "termination of the background processing job.
      "Recommendation: log the data that has been processed when
      "an RFC task is started and when it returns, so that the
      "job can be restarted with unprocessed data.
      WRITE MSG.
      "Remove the server from further consideration for
      "parallel processing tasks in this program.
      "Get the name of the server just called...
      CALL FUNCTION 'SPBT_GET_PP_DESTINATION'
        EXPORTING
          RFCDEST = TASKLIST-RFCDEST
        EXCEPTIONS
          OTHERS  = 1.
      "... then remove it from the list of available servers.
      CALL FUNCTION 'SPBT_DO_NOT_USE_SERVER'
        EXPORTING
          SERVERNAME = TASKLIST-RFCDEST
        EXCEPTIONS
          INVALID_SERVER_NAME = 1
          NO_MORE_RESOURCES_LEFT = 2  "No servers left in group.
          PBT_ENV_NOT_INITIALIZED_YET = 3
          OTHERS = 4.
    WHEN 3.
      "No resources (dialog work processes) available at
      "present. You need to handle this exception, waiting
      "and repeating the CALL FUNCTION until processing
      "can continue or it is apparent that there is a
      "problem that prevents continuation.
      MESSAGE I837. "All servers currently busy.
      "Wait for replies to asynchronous RFC calls. Each
      "reply should make a dialog work process available again.
      IF EXCP_FLAG = SPACE.
        EXCP_FLAG = 'X'.
        "First attempt at RESOURCE_FAILURE handling. Wait
        "until all RFC calls have returned or up to 1 second.
        "Then repeat the CALL FUNCTION.
        WAIT UNTIL RCV_JOBS >= SND_JOBS UP TO '1' SECONDS.
      ELSE.
        "Second attempt at RESOURCE_FAILURE handling.
        WAIT UNTIL RCV_JOBS >= SND_JOBS UP TO '5' SECONDS.
        "SY-SUBRC 0 from WAIT shows that replies have returned.
        "The resource problem was therefore probably temporary
        "and due to the workload. A non-zero RC suggests that
        "no RFC calls have been completed, and there may be
        "problems.
        IF SY-SUBRC = 0.
          CLEAR EXCP_FLAG.
        ELSE.  "No replies
          "Endless loop handling
        ENDIF.
      ENDIF.
  ENDCASE.
ENDDO.
Wait for end of job: replies from all RFC tasks.
Receive remaining asynchronous replies
WAIT UNTIL RCV_JOBS >= SND_JOBS.
LOOP AT TASKLIST.
WRITE:/ 'Received task:', TASKLIST-TASKNAME COLOR 1,
30 'Destination: ', TASKLIST-RFCDEST COLOR 1.
ENDLOOP.
"This routine is triggered when an RFC call completes and
"returns. The routine uses RECEIVE to collect IMPORT and TABLE
"data from the RFC function module.
"Note that the WRITE keyword is not supported in asynchronous
"RFC. If you need to generate a list, then your RFC function
"module should return the list data in an internal table. You
"can then collect this data and output the list at the conclusion
"of processing.
FORM RETURN_INFO USING TASKNAME.
  DATA: INFO_RFCDEST LIKE TASKLIST-RFCDEST.
  RECEIVE RESULTS FROM FUNCTION 'RFC_SYSTEM_INFO'
    IMPORTING
      RFCSI_EXPORT = INFO
    EXCEPTIONS
      COMMUNICATION_FAILURE = 1
      SYSTEM_FAILURE        = 2.
  RCV_JOBS = RCV_JOBS + 1. "Receiving data
  IF SY-SUBRC NE 0.
    "Handle communication and system failure here.
  ELSE.
    READ TABLE TASKLIST WITH KEY TASKNAME = TASKNAME.
    IF SY-SUBRC = 0. "Register data
      TASKLIST-RFCHOST = INFO-RFCHOST.
      MODIFY TASKLIST INDEX SY-TABIX.
    ENDIF.
  ENDIF.
ENDFORM.
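For context, the sending side that pairs with the RETURN_INFO callback above looks roughly like this. This is a sketch based on the SAP help example; the variable names (TASKNAME, SND_JOBS) follow the excerpt, and the RFC server group name is an illustrative assumption:

```
"Initialize the parallel processing environment once,
"naming the RFC server group (group name is an assumption).
CALL FUNCTION 'SPBT_INITIALIZE'
  EXPORTING
    GROUP_NAME                     = 'parallel_generators'
  EXCEPTIONS
    INVALID_GROUP_NAME             = 1
    INTERNAL_ERROR                 = 2
    PBT_ENV_ALREADY_INITIALIZED    = 3
    CURRENTLY_NO_RESOURCES_AVAIL   = 4
    NO_PBT_RESOURCES_FOUND         = 5
    CANT_INIT_DIFFERENT_PBT_GROUPS = 6
    OTHERS                         = 7.

"Dispatch one asynchronous RFC call; RETURN_INFO is invoked
"when the task finishes.
CALL FUNCTION 'RFC_SYSTEM_INFO'
  STARTING NEW TASK TASKNAME
  DESTINATION IN GROUP 'parallel_generators'
  PERFORMING RETURN_INFO ON END OF TASK
  EXCEPTIONS
    COMMUNICATION_FAILURE = 1
    SYSTEM_FAILURE        = 2
    RESOURCE_FAILURE      = 3.
IF SY-SUBRC = 0.
  SND_JOBS = SND_JOBS + 1.
ENDIF.
```

The RESOURCE_FAILURE exception is the one handled in the WHEN 3 branch above: it is raised when the server group currently has no free dialog work processes.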
Reward points if that helps.
Manish
Parallel processing in background using Job scheduling...
(Note: Please understand my question completely before redirecting me to parallel processing links in SDN. I have gone through most of them.)
Hi ABAP Gurus,
I have read a bit till now about parallel processing. But I have a doubt.
I am working on a data transfer of around 5 million accounting records from legacy to R/3 using batch input recording.
Now if all these records reside in one flat file and I then process that flat file in my batch input program, I guess it will take days. So my boss suggested using parallel processing in SAP.
Now, from the SDN threads, it seems that we have to create a remote-enabled function module for it, and so on.
But I have a different idea. I thought to divide these 5 million records into 10 flat files instead of just one, and then run the custom BDC program as 10 instances that process the 10 flat files in background using job scheduling.
Can this also be called parallel processing?
Please let me know if this sounds wise to you guys...
Regards,
Tushar.
Thanks for your reply...
So what do you suggest: how can I use parallel processing for transferring 5 million records present in one flat file using a custom BDC?
I am posting my custom BDC code for the million-record transfer below (this code creates material masters using BDC):
report ZMMI_MATERIAL_MASTER_TEST
no standard page heading line-size 255.
include bdcrecx1.
parameters: dataset(132) lower case default
'/tmp/testmatfile.txt'.
*** DO NOT CHANGE - the generated data section - DO NOT CHANGE ***
*
*   If it is necessary to change the data section use the rules:
*   1.) Each definition of a field consists of two lines
*   2.) The first line shows exactly the comment
*       '* data element: ' followed by the data element
*       which describes the field.
*       If you don't have a data element use the
*       comment without a data element name
*   3.) The second line shows the fieldname of the
*       structure, the fieldname must consist of
*       a fieldname and optional the character '_' and
*       three numbers and the field length in brackets
*   4.) Each field must be type C.
*** Generated data section with specific formatting - DO NOT CHANGE ***
data: begin of record,
* data element: MATNR
        MATNR_001(018),
* data element: MBRSH
        MBRSH_002(001),
* data element: MTART
        MTART_003(004),
* data element: XFELD
        KZSEL_01_004(001),
* data element: MAKTX
        MAKTX_005(040),
* data element: MEINS
        MEINS_006(003),
* data element: MATKL
        MATKL_007(009),
* data element: BISMT
        BISMT_008(018),
* data element: EXTWG
        EXTWG_009(018),
* data element: SPART
        SPART_010(002),
* data element: PRODH_D
        PRDHA_011(018),
* data element: MTPOS_MARA
        MTPOS_MARA_012(004),
      end of record.
data: lw_record(200).
*** End generated data section ***
data: begin of t_data occurs 0,
matnr(18),
mbrsh(1),
mtart(4),
maktx(40),
meins(3),
matkl(9),
bismt(18),
extwg(18),
spart(2),
prdha(18),
MTPOS_MARA(4),
end of t_data.
start-of-selection.
perform open_dataset using dataset.
perform open_group.
do.
*read dataset dataset into record.
read dataset dataset into lw_record.
if sy-subrc eq 0.
clear t_data.
split lw_record
at ','
into t_data-matnr
t_data-mbrsh
t_data-mtart
t_data-maktx
t_data-meins
t_data-matkl
t_data-bismt
t_data-extwg
t_data-spart
t_data-prdha
t_data-MTPOS_MARA.
append t_data.
else.
exit.
endif.
enddo.
loop at t_data.
*if sy-subrc <> 0. exit. endif.
perform bdc_dynpro using 'SAPLMGMM' '0060'.
perform bdc_field using 'BDC_CURSOR'
'RMMG1-MATNR'.
perform bdc_field using 'BDC_OKCODE'
'=AUSW'.
perform bdc_field using 'RMMG1-MATNR'
t_data-MATNR.
perform bdc_field using 'RMMG1-MBRSH'
t_data-MBRSH.
perform bdc_field using 'RMMG1-MTART'
t_data-MTART.
perform bdc_dynpro using 'SAPLMGMM' '0070'.
perform bdc_field using 'BDC_CURSOR'
'MSICHTAUSW-DYTXT(01)'.
perform bdc_field using 'BDC_OKCODE'
'=ENTR'.
perform bdc_field using 'MSICHTAUSW-KZSEL(01)'
'X'.
perform bdc_dynpro using 'SAPLMGMM' '4004'.
perform bdc_field using 'BDC_OKCODE'
'/00'.
perform bdc_field using 'MAKT-MAKTX'
t_data-MAKTX.
perform bdc_field using 'BDC_CURSOR'
'MARA-PRDHA'.
perform bdc_field using 'MARA-MEINS'
t_data-MEINS.
perform bdc_field using 'MARA-MATKL'
t_data-MATKL.
perform bdc_field using 'MARA-BISMT'
t_data-BISMT.
perform bdc_field using 'MARA-EXTWG'
t_data-EXTWG.
perform bdc_field using 'MARA-SPART'
t_data-SPART.
perform bdc_field using 'MARA-PRDHA'
t_data-PRDHA.
perform bdc_field using 'MARA-MTPOS_MARA'
t_data-MTPOS_MARA.
perform bdc_dynpro using 'SAPLSPO1' '0300'.
perform bdc_field using 'BDC_OKCODE'
'=YES'.
perform bdc_transaction using 'MM01'.
endloop.
*enddo.
perform close_group.
perform close_dataset using dataset.
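The 10-file / 10-job idea discussed above could be sketched as follows. This is only an illustration: the report name ZMMI_MATERIAL_MASTER_TEST and its DATASET parameter come from the code just posted, but the job names, file numbering, and one-file-per-job scheme are assumptions, not from the thread:

```
"Sketch: schedule one background job per split file
"using JOB_OPEN / SUBMIT ... VIA JOB / JOB_CLOSE.
DATA: lv_jobname   TYPE tbtcjob-jobname,
      lv_jobcount  TYPE tbtcjob-jobcount,
      lv_file(132) TYPE c,
      lv_idx(2)    TYPE n.

DO 10 TIMES.
  lv_idx = sy-index.
  CONCATENATE 'ZBDC_MAT_' lv_idx INTO lv_jobname.
  CONCATENATE '/tmp/testmatfile_' lv_idx '.txt' INTO lv_file.

  CALL FUNCTION 'JOB_OPEN'
    EXPORTING
      jobname          = lv_jobname
    IMPORTING
      jobcount         = lv_jobcount
    EXCEPTIONS
      cant_create_job  = 1
      invalid_job_data = 2
      jobname_missing  = 3
      OTHERS           = 4.
  CHECK sy-subrc = 0.

  "Each job runs the BDC report against its own file slice.
  SUBMIT zmmi_material_master_test
    WITH dataset = lv_file
    VIA JOB lv_jobname NUMBER lv_jobcount
    AND RETURN.

  CALL FUNCTION 'JOB_CLOSE'
    EXPORTING
      jobname   = lv_jobname
      jobcount  = lv_jobcount
      strtimmed = 'X'   "start immediately
    EXCEPTIONS
      OTHERS    = 1.
ENDDO.
```

Note that the ten jobs only run truly in parallel if the system has enough free background work processes; otherwise some of them simply queue, which is one reason the RFC server group approach mentioned earlier gives you better control over resource consumption.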