Parallel processing of prg - avoid duplication.

Basically my program submits a file path to another program, and the file gets executed there. That program creates the sales order for the corresponding PO in the file.
Parallel processing:
I opened my program in two parallel sessions.
I used the same file path/test file, which creates a sales order for the PO number in the file.
I ran the two parallel sessions of the same program with this same file path/test file at the same time, and two sales orders were created for the same PO number.
But we don't want this; we have to make sure that duplicate sales orders are not created for one PO. For that reason, in the middle of the program we check for a VBAK entry, and if there is no entry in table VBAK we collect the SO data, call the BAPI to create the SO, and at the end issue the COMMIT which creates the SO. But this is not preventing duplicate SO creation.
This is because, with parallel processing, the program in each session checks VBAK, finds no entry, builds the sales order data, and finally both sessions execute the "commit", so two sales orders get created.
In this case, can I put a lock on the file?
Is there any way to set up a lock on the file so that it cannot be used by another session if it is already in use by one session?
Thanks in advance.

To lock
- the program (so that only one job is processed at a time) - use FM [ENQUEUE_ESRDIRE|https://www.sdn.sap.com/irj/sdn/advancedsearch?cat=sdn_all&query=enqueue_esrdire&adv=false&sortby=cm_rnd_rankvalue],
- the purchase order (so that only one job per purchase order runs at a time) - use FM [ENQUEUE_EMEKKOE|https://www.sdn.sap.com/irj/sdn/advancedsearch?cat=sdn_all&query=enqueue_emekkoe&adv=false&sortby=cm_rnd_rankvalue].
For the lock concept, see [The SAP Lock Concept (BC-CST-EQ)|http://help.sap.com/printdocu/core/Print46c/en/data/pdf/BCCSTEQ/BCCSTEQ_PT.pdf] or [Function Modules for Lock Requests|http://help.sap.com/saphelp_nwpi71/helpdata/en/cf/21eebf446011d189700000e8322d00/frameset.htm]. Both locks are released by COMMIT/ROLLBACK WORK or by an explicit call of the corresponding DEQUEUE FM. A minimal sketch of the purchase order lock follows below.
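The sketch assumes the PO number is already available in a variable lv_ebeln read from the file; the exact parameter names of the generated enqueue FM should be verified in SE37 before use:

  DATA lv_ebeln TYPE ebeln.            "PO number taken from the file (assumption)

  CALL FUNCTION 'ENQUEUE_EMEKKOE'
    EXPORTING
      ebeln          = lv_ebeln
    EXCEPTIONS
      foreign_lock   = 1               "another session is already processing this PO
      system_failure = 2
      OTHERS         = 3.
  IF sy-subrc <> 0.
*   Lock could not be set => the parallel session owns it; skip this PO instead of creating a duplicate SO
    RETURN.
  ENDIF.
* ... check VBAK, build the SO data, call the BAPI, COMMIT WORK ...
* The COMMIT WORK (or ROLLBACK WORK) at the end releases the lock again.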
Regards

Similar Messages

  • Parallel processing using ABAP objects

    Hello friends,
    I had posted this in the performance tuning forum regarding a performance issue; I am reposting it here as it involves the OO concept.
    The link to the previous posting:
    Link: [Independent processing of elements inside internal table;
    Here is the scenario:
    I have an internal table with 10 independent records, and I need to process them. The processing of one record has no influence on another. With a loop, the performance issue is that the 10th record has to wait until the first 9 records are processed, even though there is no dependency between the outputs.
    Could someone suggest a way to improve the performance?
    If I am not clear with the question, let me explain it further:
    An internal table has 5 numbers, say (1, 3, 4, 6, 7).
    We are trying to find the square of each number.
    In a loop, finding the square of 7 has to wait until 6 is completed, which is a waste of time.
    This is related to parallel processing; I have referred to the parallel processing documents, but I want to do this conceptually.
    I am not using the conventional procedural paradigm but object orientation. I have a method which performs this action. What am I supposed to do in that regard?
    Comradely ,
    K.Sibi

    Hi,
    As exemplified by Edward, there is no RFC/asynchronous support for methods of ABAP Objects as such. You would indeed need to "wrap" your method or ABAP Object in a function module, which you can then call with the addition "STARTING NEW TASK". Optionally, you can define a method that will process the results of the function module that is executed asynchronously, as demonstrated in Edward's program as well.
    You do need some additional code to avoid the situation where your program takes all the available resources on the Application Server. Theoretically, you cannot bring the server or system down, as there is a system profile parameter that determines the maximum number of asynchronous tasks that the system will allow. However, in a productive environment, it would be a good idea to limit the number of asynchronous tasks started from your program so that other programs can use some as well.
    Function Group SPBT contains a set of Function Modules to manage parallel processing. In particular, FM SPBT_INITIALIZE will "initialize" a Server Group and return the maximum number of Parallel Tasks, as well as the number of free ones at the time of the initialization. The other FM of interest is SPBT_GET_CURR_RESOURCE_INFO, that can be called after the Server Group has been initialized, whenever you want to "fork" a new asynchronous task. This FM will give you the number of free tasks available for Parallel Processing at the time of calling the Function Module.
    Below is a code snippet showing how these Function Modules could be used, so that your program always leaves a minimum of 2 tasks for Parallel Processing, that will be available for other programs in the system.
      IF md_parallel IS NOT INITIAL.
        IF md_parallel_init IS INITIAL.
    *----- Server Group not initialized yet => initialize it and get the number of available tasks
          CALL FUNCTION 'SPBT_INITIALIZE'
            EXPORTING
              group_name                     = ' '
            IMPORTING
              max_pbt_wps                    = ld_max_tasks
              free_pbt_wps                   = ld_free_tasks
            EXCEPTIONS
              invalid_group_name             = 1
              internal_error                 = 2
              pbt_env_already_initialized    = 3
              currently_no_resources_avail   = 4
              no_pbt_resources_found         = 5
              cant_init_different_pbt_groups = 6
              OTHERS                         = 7.
          md_parallel_init = 'X'.
        ELSE.
    *----- Server Group already initialized => check how many free tasks are available for parallel processing
          CALL FUNCTION 'SPBT_GET_CURR_RESOURCE_INFO'
            IMPORTING
              max_pbt_wps                 = ld_max_tasks
              free_pbt_wps                = ld_free_tasks
            EXCEPTIONS
              internal_error              = 1
              pbt_env_not_initialized_yet = 2
              OTHERS                      = 3.
        ENDIF.
        IF ld_free_tasks GE 2.
    *----- We have at least 2 remaining available tasks => reserve one
          ld_taskid = ld_taskid + 1.
        ENDIF.
      ENDIF.
    You may also need to program a WAIT statement, to wait until all asynchronous tasks "forked" from your program have completed their processing. Otherwise, you might find yourself in the situation where your main program has finished its processing, but some of the asynchronous tasks that it started are still running. If you do not need to report on the results of these asynchronous tasks, then that is not an issue. But, if you need to report on the success/failure of the processing performed by the asynchronous tasks, you would most likely report incomplete results in your program.
    In the example where you have 10 entries to process asynchronously in an internal table, if you do not WAIT until all asynchronous tasks have completed, your program might report success/failure for only 8 of the 10 entries, because it finishes before the asynchronous tasks for entries 9 and 10 of your internal table have returned.
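    A minimal sketch of that bookkeeping, assuming a hypothetical RFC-enabled FM Z_PROCESS_ENTRY and illustrative counters gv_started/gv_finished (none of these names come from the original program):
      DATA: gv_started  TYPE i,   "number of tasks forked so far
            gv_finished TYPE i.   "number of tasks that have returned
      START-OF-SELECTION.
    *   For each entry to be processed, fork a task and count it
        gv_started = gv_started + 1.
        CALL FUNCTION 'Z_PROCESS_ENTRY'
          STARTING NEW TASK 'TASK1'
          PERFORMING return_info ON END OF TASK.
    *   ... fork further tasks here ...
    *   Wait until every forked task has called the callback below
        WAIT UNTIL gv_finished >= gv_started.
      FORM return_info USING p_taskname TYPE clike.
    *   RECEIVE RESULTS FROM FUNCTION 'Z_PROCESS_ENTRY' would normally be coded here
        gv_finished = gv_finished + 1.
      ENDFORM.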
    Given the complexity of parallel processing, you would only consider it in a customer program for situations where you have many (i.e. thousands, if not tens of thousands of) records to process, where the processing of each record tends to take a long time (like creating a sales order or material via BAPI calls), and where you have a limited time window to process all of these records.
    Well, whatever your decision is, good luck.

  • Too  many parallel processes

    Hi
    I will have to build process chain to cube 0SD_C03 and the datasources are
    2LIS_12_VCITM
    2LIS_12_VCHDR
    2LIS_12_V_ITM
    2LIS_12_VDHDR
    2LIS_12_VAITM
    2LIS_12_VDITM
    2LIS_12_VADHDR
    Now the question is: after providing the links between the "Delete Index" process and the individual loading processes (InfoPackages), the message I am getting in the check view is "Too many parallel processes for chosen server", and the procedure suggested by the system is "Reduce the number of parallel processes in the chain or include sub-chains".
    How can I reduce the processes? Is there an alternative way of building this flow to avoid the warning messages?
    Though these are only warning messages, what is the correct way of building the process chain for this without getting any warnings?

    Hi,
    Based on the dependencies, you should go for at most 3 parallel processes at a time, which is what we do in our project.
    Check the scheduled time of each process chain that fetches data from the source system (InfoPackage) and reschedule them so that they do not all execute at the same time (make it 3 at most), then try again.
    Regards
    BVR

  • Parallel processing in OSB

    Hi everyone,
    I need to implement parallel processing in the OSB. I have a proxy service which implements a request reply pattern over JMS. This service needs to invoke three such services (request-reply over JMS) in parallel and aggregate the result and send it as a reply. How can I do this in OSB?
    I have looked at the Split-Join element but it seems to only support web services. Is that correct?
    Best regards,
    Dimo

    >
    Thank you for the fast reply. Wrapping in web services probably has a relatively high performance impact - I need to wrap my payload in SOAP, send it via HTTP, receive it over HTTP, parse the SOAP envelope, extract the payload and then send it via JMS. And the same steps in reverse order to send the response. Seems like a lot of overhead to me.
    >
    You don't have to do all of that. If you wrap your JMS-based services using WSDL-based service with local transport, then you avoid all HTTP communication because processing stays inside OSB.
    >
    Isn't there any other way to parallelize the execution - the application is processing synchronous requests from a voice frontend and it should be very responsive. Does anyone know how much overhead exactly goes into the whole SOAP wrapping and unwrapping?
    >
    I don't know of any other way for this case. Maybe someone else will bring something better.
    I have never measured the SOAP overhead, since for my business services it is only an insignificant fraction of the whole processing time. I would expect the SOAP overhead to be not far from the JMS overhead, especially when XML is used as the payload.

  • How to get BI background jobs to utilize parallel processing

    Each step in our BI process chains creates exactly 1 active batch job (SM37), which in turn utilizes only 1 background process (SM50).
    How do we get the active BI batch job to use more than 1 background process, similar to parallel processing (RZ20) in an ERP system?

    Hi there,
    Have you checked the number of background and parallel processes? Take a look at SAP Note 621400 - Number of required BTC processes for process chains. This may be helpful:
    Minimum (with this setting, the chain runs more or less serially): number of parallel sub-chains at the widest part of the chain + 1.
    Recommended: number of parallel processes at the widest part of the chain + 1.
    Optimal: number of parallel processes at the widest part of the chain + number of parallel sub-chains at the widest part + 1.
    The optimal setting just avoids a delay if several sub-chains are started in parallel at the same time. With such a process chain implementation and the recommended number of background processes, there can be a short delay at the start of each sub-chain (depending on the frequency of the background scheduler, in general only ~1 minute).
    Attention: a higher degree of parallel processing, and therefore more batch processes, only makes sense if the system has sufficient hardware capacity.
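    As an illustrative example (the numbers are assumed, not taken from the note): if the widest part of the chain runs 4 loading processes in parallel and starts 2 parallel sub-chains, the minimum would be 2 + 1 = 3 background processes, the recommended number 4 + 1 = 5, and the optimal number 4 + 2 + 1 = 7.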
    I hope this helps or it may lead you to further checks to make .
    Cheers,
    Karen

  • The parallel process for mrp.

    Hi experts,
    We plan to run the scope of planning for total planning as a background job.
    While doing that, the system asks for parallel processing for MRP.
    What are the customizing steps and the procedure to set up parallel processing for MRP?

    Dear Raj,
    With the help of parallel processing procedures, you can significantly improve the runtime of the total planning run.
    To process in parallel, you can either select various sessions on the application server or various servers.
    Parallel processing runs according to packages using the low-level code logic:
    The work package, with a fixed number of materials that are internally defined in the program, is distributed over the individual servers/sessions. Once a server/session has finished processing a package, it starts processing the next package.
    If a low-level code is being planned, the servers/sessions that have finished must wait until the last server/session has finished its package to avoid inconsistencies. Then the next low-level code is processed per packages.
    The parallel processing procedure is switched on in the initial screen of total planning.
    Activities
    Define the application server with the number of sessions that can be used:
    If you want to define various servers for parallel processing, enter the server with the number of sessions.
    If you only want to use one server, but several sessions, enter the application server and the appropriate number of sessions.
    Further notes
    Parallel processing shortens the time required for the calculation; however, it cannot shorten the database time, as the system still operates on only one database.
    The Customizing transaction is OMIQ.
    Regards
    PSV

  • Idocs parallel processing

    Hi,
    We have some goods movement IDocs (MBGMCR/MBGMCR03) getting posted simultaneously (standard ALE IDocs, no customization). We don't want to collect them and process them later (timing issue). When we allow them to be processed immediately, the material/batch combination gets locked, one IDoc goes through and the others fail. I know we can collect them and process them through a batch program by setting an RBDAPP01 variant with no parallel processing, but is there any way I can keep immediate processing and still have the IDocs processed one by one, avoiding parallel processing?
    Thanks,
    Larry

    Hi Larry,
    Please check your partner profile (WE20) and set it to "Trigger immediately" in the "Processing by Function Module" area.
    Regards,
    Ferry Lianto

  • Parallel Processing - Timeout issue

    Hi All,
    I have implemented "Parallel Processing" in one of my application. I am not executing this processes in background but instead in online mode (i can not do it in background mode because of how this application works).
    Each of my parallel process starts in a new dialog. Functionality wise my program is working fine but I have an issue when I am processing large amount of data. And this is also not in my program but in one of the standard SAP FM i am calling in my application. Some of my processes reaches the timeout limit (600 secs) and expires. I can not increase the timeout limit since time it takes to complete the process varies depending on the amount of data it is processing.
    Does anybody know how to resolve this issue?
    Thanks in advance,
    RS

    Hi,
    I am calling this FM with the lowest-level object, which is the WBS element in our case. The issue here is not how the FM is called but the amount of data posted to this WBS element. So, when I call this FM for a WBS with very many charges posted to it, it times out (even in parallel processing, since it basically just generates another dialog process).
    Is there any way to avoid this timeout without changing the system timeout parameter?
    Thanks,
    RS

  • Parallel Processing : Unable to capture return results using RECEIVE

    Hi,
    I am using parallel processing in one of my programs and it is working fine, but I am not able to collect the return results using the RECEIVE statement.
    I am using
      CALL FUNCTION <FUNCTION MODULE NAME>
             STARTING NEW TASK TASKNAME DESTINATION IN GROUP DEFAULT_GROUP
             PERFORMING RETURN_INFO ON END OF TASK
    and then in subroutine RETURN_INFO I am using RECEIVE statement.
    My RFC is calling another BAPI and doing explicit commit as well.
    Any pointer will be of great help.
    Regards,
    Deepak Bhalla
    Message was edited by: Deepak Bhalla
    I used the WAIT command after the RFC call and it worked. Additionally, I used the MESSAGE addition in the RECEIVE statement, because the RECEIVE statement was returning sy-subrc 2.
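    A minimal sketch of that pattern, assuming a hypothetical RFC-enabled FM Z_CREATE_SO and an assumed result parameter ev_return (the actual names from the original program are not known):
      DATA: gv_msg      TYPE c LENGTH 80,   "error text filled by the MESSAGE addition
            gv_received TYPE i.
      FORM return_info USING p_taskname TYPE clike.
        DATA lv_return TYPE bapiret2.        "assumed result structure
        RECEIVE RESULTS FROM FUNCTION 'Z_CREATE_SO'
          IMPORTING
            ev_return             = lv_return
          EXCEPTIONS
            communication_failure = 1 MESSAGE gv_msg
            system_failure        = 2 MESSAGE gv_msg
            OTHERS                = 3.
        gv_received = gv_received + 1.
      ENDFORM.
    * After all CALL FUNCTION ... STARTING NEW TASK ... PERFORMING return_info ON END OF TASK calls,
    * the main program waits until every started task has been received, e.g.
    * WAIT UNTIL gv_received >= <number of tasks started>.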


  • Parallel processing of mass data : sy-subrc value is not changed

    Hi,
    I have used parallel processing of mass data with "STARTING NEW TASK". In my function module I am handling the exceptions and finally raise an application-specific classic exception to be handled in my main report program. Somehow sy-subrc is not getting changed and always returns 0, even if the exception is raised.
    Can anyone help me with this?
    Thanks & Regards,
    Nitin

    Hi Silky,
    I've built a block of code to explain this.
      DATA: ls_edgar TYPE zedgar,
            l_task(40).
      DELETE FROM zedgar.
      COMMIT WORK.
      l_task = 'task1'.
      ls_edgar-matnr = '123'.
      ls_edgar-text = 'qwe'.
      CALL FUNCTION 'Z_EDGAR_COMMIT_ROLLBACK' STARTING NEW TASK l_task PERFORMING f_go ON END OF TASK
        EXPORTING
          line = ls_edgar.
      l_task = 'task2'.
      ls_edgar-matnr = 'abc'.
      ls_edgar-text = 'def'.
      CALL FUNCTION 'Z_EDGAR_COMMIT_ROLLBACK' STARTING NEW TASK l_task PERFORMING f_go ON END OF TASK
        EXPORTING
          line = ls_edgar.
      l_task = 'task3'.
      ls_edgar-matnr = '456'.
      ls_edgar-text = 'xyz'.
      CALL FUNCTION 'Z_EDGAR_COMMIT_ROLLBACK' STARTING NEW TASK l_task PERFORMING f_go ON END OF TASK
        EXPORTING
          line = ls_edgar.
    *&      Form  f_go
    FORM f_go USING p_c TYPE ctype.
      RECEIVE RESULTS FROM FUNCTION 'Z_EDGAR_COMMIT_ROLLBACK' EXCEPTIONS err = 2.
      IF sy-subrc = 2.
    *this won't affect the LUW of the received function
        ROLLBACK WORK.
      ELSE.
    *this won't affect the LUW of the received function
        COMMIT WORK.
      ENDIF.
    ENDFORM.                    "f_go
    and the function is:
    FUNCTION z_edgar_commit_rollback.
    *"*"Interface local:
    *"  IMPORTING
    *"     VALUE(LINE) TYPE  ZEDGAR
    *"  EXCEPTIONS
    *"      ERR
      MODIFY zedgar FROM line.
      IF line-matnr CP 'a*'.
    *comment raise or rollback/commit to test
    *    RAISE err.
        ROLLBACK WORK.
      ELSE.
        COMMIT WORK.
      ENDIF.
    ENDFUNCTION.
    ok.
    In your main program you have a Logical Unit of Work (LUW), which consists of an application transaction and is associated with a database transaction. Once you start a new task, you are creating an independent LUW with its own database transaction.
    So if you do a commit or rollback in your function, the effect is only on the records you are processing in that function.
    There is a way to capture the event when this LUW concludes in the main LUW: that is the PERFORMING ... ON END OF TASK addition. In there you can get the result of the function, but you cannot commit or roll back the LUW of the function, since that has already happened implicitly at the conclusion of the function. You can test this by commenting the code I've supplied accordingly.
    So, if you want to roll back the LUW of the function, you had better do it inside the function.
    I don't think this matches your question exactly, but maybe it leads you onto the right track. Give me more details if it doesn't.
    Hope it helps,
    Edgar

  • Parallel Processing and Capacity Utilization

    Dear Gurus,
    We have the following requirement:
    Work center A capacity is 1000.   (Operations are similar)
    Work center B capacity is 1500.   (Operations are similar)
    Work center C capacity is 2000.   (Operations are similar)
    1) For product A: the production order quantity is 4500. Can we use all work centers for parallel processing through the routing?
    2) For product B: the production order quantity is 2500. Can we use only work centers A and B for parallel processing through the routing?
    If yes, please explain how.
    Regards,
    Rashid Masood

    Maybe you can create a virtual work center VWCA = A + B + C (connected with a hierarchy in transaction CR22) and another VWCB = A + B, and route your products to each virtual work center.

  • Parallel processing open items (FPO4P)

    Hello,
    I have a question about transaction FPO4p (parallel processing of open items).
    When saving the parameters, the following message always appears: "Report cannot be evaluated in parallel". The information details say that when you use a specific parallel processing object, you also need to use that field to sort on.
    In my case I use the object GPART for parallel processing (see the Technical Settings tab). In the Output Control tab I selected a line layout which is sorted by business partner (GPART). Furthermore, no selection options are used.
    Does anyone know why the transaction cannot save the parameters and shows the error message specified above? I really don't know what goes wrong.
    Thank you in advance.
    Regards, Ramon.

    Ramon
    Apply note 1115456.
    Maybe that note can help you
    Regards
    Arcturus

  • How to do parallel processing with dynamic internal table

    Hi All,
    I need to implement parallel processing that involves dynamically created internal tables. I tried doing so using RFC function modules (with STARTING NEW TASK and similar methods) but didn't succeed: this requires RFC-enabled function modules, and RFC-enabled function modules do not allow generic data types (STANDARD TABLE), which is needed for passing dynamic internal tables. My exact requirement is as follows:
    1. I have a large amount of data in two internal tables, one of which is formed dynamically and hence its structure is not known at the time of coding.
    2. This data has to be processed together to generate another internal table, whose structure is pre-defined. This processing is taking a very long time, as the number of records is close to a million.
    3. I need to divide the dynamic internal table into packages of (say) 1000 records each, pass each package to a function module and submit it to run in another task. Many such tasks will be executed in parallel.
    4. The function module running in parallel can insert the processed data into a database table, and the main program can access it from there.
    Unfortunately, due to the limitation of not allowing generic data types in RFC, I'm unable to do this. Does anyone have any idea how to implement parallel processing using dynamic internal tables under these conditions?
    Any help will be highly appreciated.
    Thanks and regards,
    Ashin

    Try the code below:
      DATA: w_subrc TYPE sy-subrc.
      DATA: w_infty(5) TYPE c.
      DATA: w_string TYPE string.
      FIELD-SYMBOLS: <f1> TYPE table.
      FIELD-SYMBOLS: <f1_wa> TYPE any.
      DATA: ref_tab TYPE REF TO data.
    * Build the infotype structure name dynamically, e.g. 'P0001'
      CONCATENATE 'P' infty INTO w_infty.
    * Create the dynamic internal table
      CREATE DATA ref_tab TYPE STANDARD TABLE OF (w_infty).
      ASSIGN ref_tab->* TO <f1>.
    * Create the dynamic work area
      CREATE DATA ref_tab TYPE (w_infty).
      ASSIGN ref_tab->* TO <f1_wa>.
    * Default the date range if nothing was supplied
      IF begda IS INITIAL.
        begda = '18000101'.
      ENDIF.
      IF endda IS INITIAL.
        endda = '99991231'.
      ENDIF.
      CALL FUNCTION 'HR_READ_INFOTYPE'
        EXPORTING
          pernr           = pernr
          infty           = infty
          begda           = begda
          endda           = endda
        IMPORTING
          subrc           = w_subrc
        TABLES
          infty_tab       = <f1>
        EXCEPTIONS
          infty_not_found = 1
          OTHERS          = 2.
      IF sy-subrc <> 0.
        subrc = w_subrc.
      ENDIF.
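    The code above only shows how to build the dynamic table and work area; it does not address the RFC restriction itself. One common workaround (a sketch only, with hypothetical names Z_PROCESS_PACKAGE, collect_result, iv_type and iv_xml, none of which come from this thread) is to serialize each package of the dynamic table into a string with CALL TRANSFORMATION, pass the string plus the type name to the RFC-enabled FM, and rebuild the table there:
      DATA: lv_xml  TYPE string,
            lv_type TYPE string.
    * Caller side: serialize the package, since STRING is allowed in an RFC interface
      lv_type = w_infty.                       "e.g. 'P0001'
      CALL TRANSFORMATION id
        SOURCE tab = <f1>
        RESULT XML lv_xml.
      CALL FUNCTION 'Z_PROCESS_PACKAGE'        "hypothetical RFC-enabled FM
        STARTING NEW TASK 'PKG1'
        PERFORMING collect_result ON END OF TASK
        EXPORTING
          iv_type = lv_type
          iv_xml  = lv_xml.
    * Inside Z_PROCESS_PACKAGE: rebuild the dynamic table from the string
      DATA lr_tab TYPE REF TO data.
      FIELD-SYMBOLS <lt_data> TYPE STANDARD TABLE.
      CREATE DATA lr_tab TYPE STANDARD TABLE OF (iv_type).
      ASSIGN lr_tab->* TO <lt_data>.
      CALL TRANSFORMATION id
        SOURCE XML iv_xml
        RESULT tab = <lt_data>.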

  • How to avoid duplication of mails on Mac book and I phone

    I am using Apple Mail on a MacBook (Lion version) and an iPhone 4 with iOS.
    I am using a corporate mail server, i.e. [email protected]
    Despite having configured the mail settings with the option "Delete mail immediately on removing the mail from Inbox", the same mail gets downloaded on the other device. That is, I receive all mails on both my devices (MacBook and iPhone), even if a mail has been deleted from the Inbox on one device.
    I am looking for a way to avoid this duplication of mails - mails deleted on one device should not come back on the other device.
    Please help.
    Thanks


  • How to achieve parallel processing in a single request?

    Hi all,
    I have a method in a Session EJB that performs some business logic before it returns an answer to the client. The logic is to collect data from the application's database and two external systems, and then send all the data to a third external system to get a response and send that back to the client. Each external system is quite slow, so I would like to do all of the data collection concurrently, with parallel processing. How should I handle this? I'm not allowed to create my own threads in EJBs. Can I use MDB in some way? To the calling client this should be a synchronous call...
    Grateful for any suggestions
    Cheers
    Anders =)

    Usually, the request is received by a component located in the web container, for example via an HTTP request (including web services). Such a component is allowed to start threads for parallel processing. If, for some reason, the request arrives directly at the EJB level and you cannot move its receiver to a web component, I think JMS is not a viable solution, because you would switch to asynchronous processing and you have no way to make your EJB wait for the responses while preserving the client request (waiting implies programmatic life-cycle management, which is forbidden in the EJB container). Maybe a resource adapter (JCA) can provide a solution: a resource adapter acts like a datasource (a datasource is a specialization of a resource adapter), and thus it is a logical way to implement an adapter to an external, possibly non-J2EE, resource, as the name implies :) But I don't have enough knowledge of JCA to be sure of this.
    Hope it helps.
    Bruno Collet
    http://www.practicalsoftwarearchitect.com
