Parallel processing issues

First of all, I don't know whether the problem I am having is suitable for this forum, but I could not find a better place to post it.
I have written a program that has a web part and a backend part. As soon as the program is deployed onto an OC4J instance, it starts retrieving data from the database and performs its task in the backend part. The web part lets me check some basic information about the application and retrieve logs from the log directory.
As long as I deploy onto a single non-clustered server with only 1 OC4J instance, everything is fine, but whenever I have a clustered environment consisting of 4 hosts with the following configuration, I am toasted.
     Each host has its own OC4J instance with a copy of the program deployed onto it.
     2 of the 4 OC4J instances are turned on immediately and the other 2 serve as backups. Whenever an active OC4J shuts down, a backup OC4J becomes active and takes over the job.
     The 2 active OC4J instances run the program in parallel to share the workload.
The problem arises when the 2 active copies of the program run in parallel. The program retrieves a transaction id from the database, performs the necessary actions, and saves a new id that is larger than the retrieved one back to the database. These actions are repeated with a 1-minute interval between them. If 2 copies of the program run in parallel in different OC4J instances, there is a chance that both copies retrieve the same transaction id at the start, which must be strictly forbidden because of business logic constraints.
So my question is: how should the program be designed so that the 2 copies residing on 2 different OC4J instances can coordinate and know when they are about to work on the same transaction id? I still need an exact copy of the program deployed onto each OC4J instance.

Hello,
you need to lock the database record that contains the transaction id you are working on. Use a SELECT FOR UPDATE statement. If one application has a lock on the record, the second one will wait until the lock is released (the transaction is committed or rolled back) before its SELECT FOR UPDATE statement returns the record. Or you can specify SKIP LOCKED, so the second application won't get blocked on locked rows. Either way, only one application at a time will be able to read the record in question.
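For example, from the Java side it could look roughly like this (just a sketch - the table TRANSACTION_CONTROL and column LAST_TXN_ID are made-up names, adapt them to your schema):

    // Sketch only: whichever instance gets the row lock first allocates the id;
    // the other instance blocks on the SELECT ... FOR UPDATE until the commit below.
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    public class TransactionIdAllocator {

        private final DataSource dataSource;

        public TransactionIdAllocator(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        public long allocateNextId() throws SQLException {
            try (Connection con = dataSource.getConnection()) {
                con.setAutoCommit(false);
                try (PreparedStatement select = con.prepareStatement(
                         "SELECT last_txn_id FROM transaction_control FOR UPDATE");
                     ResultSet rs = select.executeQuery()) {
                    if (!rs.next()) {
                        throw new SQLException("control row is missing");
                    }
                    long nextId = rs.getLong(1) + 1;
                    try (PreparedStatement update = con.prepareStatement(
                             "UPDATE transaction_control SET last_txn_id = ?")) {
                        update.setLong(1, nextId);
                        update.executeUpdate();
                    }
                    con.commit();   // releases the row lock for the other instance
                    return nextId;
                } catch (SQLException e) {
                    con.rollback();
                    throw e;
                }
            }
        }
    }

Both OC4J instances can run exactly the same code; the row lock in the database is what serializes them, so no extra coordination between the instances is needed. If you use the SKIP LOCKED variant instead, the second instance simply gets no row back and can try again on its next 1-minute cycle.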
You can see examples here:
[http://www.techonthenet.com/oracle/cursors/for_update.php]
Zsom

Similar Messages

  • Parallel Processing Issue : call function starting new task

    Hi
    I am using the parallel processing functionality with CALL FUNCTION ... STARTING NEW TASK ... DESTINATION IN GROUP DEFAULT ... PERFORMING ... ON END OF TASK inside a loop (I am splitting the internal table).
    However, when I debug the code I can see the function module (it opens a new session in the debugger), and I see that the new task is started only after the RFC function module has been executed.
    How is this parallel processing? I mean, the new task starts after control returns to the main program, after the execution of the RFC FM.
    I thought the idea was to have the same FM executing in multiple tasks so that time is saved.

    Thanks for the answers.
    There were too many complications with the 'CALL FUNCTION ... STARTING NEW TASK' option.
    So we are trying it with the job submit option: we are splitting the data into smaller tables and submitting each one with this statement: SUBMIT 'prog' WITH SELECTION-TABLE s_sel VIA JOB 'job' AND RETURN.
    Thanks

  • Parallel processing issue withing same server

    hi,
    I need to perform parallel processing within the same server, using the work processes available on that server.
    Please suggest whether this can be accomplished and, if possible, explain the design.

    Hello Venkata,
    You can achieve parallel processing by using CALL FUNCTION ... STARTING NEW TASK <task name>.
    In this case the function module runs asynchronously, without stopping the calling program.
    For more details you can refer following link:
    https://wiki.sdn.sap.com/wiki/display/Snippets/Easilyimplementparallelprocessinginonlineandbatchprocessing
    Thanks,
    Augustin.

  • SAVE BAPI Issue in Parallel Processing

    Hello Friends,
    I have used the BAPI 'BAPI_POSRVAPS_SAVEMULTI3' to save some orders in SAP APO. I have given the option to run the program both in normal mode and in parallel processing mode.
    In normal mode it is able to save the data with the class characteristics, but in parallel processing it is unable to save the class characteristics values.
    We are passing the same set of data in both normal and parallel processing,
    but we are encountering this issue only in parallel processing. During this save, the POSEX field is getting cleared in the receipts table t_orders_consolidated after the commit; because of this it is unable to save the characteristics information stored in t_cfgh, t_cfgi and t_cfgv.
    We have also raised an OSS message.
    This works fine as expected in normal mode.
    Please advise; it is a little critical to close this this week. Attached are some screenshots of the values in debug mode.
    CALL FUNCTION 'BAPI_POSRVAPS_SAVEMULTI3'
      STARTING NEW TASK v_pp_taskname
      DESTINATION IN GROUP as_processing_options-server_group
      CALLING receive_update_orders_parallel ON END OF TASK
      EXPORTING
        logical_system        = im_v_logsys
        ext_number_assignment = abap_false
        plng_version          = im_v_vrsio
        no_create             = abap_true
      TABLES
        receipts              = t_orders_consolidated
        receipts_x            = t_orders_consolidated_x
        cfg_headers           = t_cfgh
        cfg_instances         = t_cfgi
        cfg_values            = t_cfgv
        return                = t_return
        extension_in          = t_extension_in
      EXCEPTIONS
        system_failure        = 1
        communication_failure = 2
        resource_failure      = 3.

    Hi Kunal,
    Check this links:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/b06c3f96-ed4f-2a10-1693-f2c76a39988f
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/90886731-21e4-2a10-2ebf-901c2c2b4e3d
    Basically, what you do is:
    - Define a source: in your case it will be a query.
    - Define a target: you'll need to create a transactional DSO to save the information.
    - Connect both: you can simply assign fields if the query is just what you need to download, or you can use routines.
    The transaction for creating an APD is RSANWB. It's pretty easy and it's a graphical interface, so you should find your way around easily.
    Hope this helps.
    Regards,
    Diego

  • Parallel Processing - Timeout issue

    Hi All,
    I have implemented parallel processing in one of my applications. I am not executing these processes in the background but in online mode (I cannot do it in background mode because of how this application works).
    Each of my parallel processes starts in a new dialog. Functionality-wise my program is working fine, but I have an issue when processing a large amount of data. And the problem is not in my program but in one of the standard SAP FMs I am calling in my application. Some of my processes reach the timeout limit (600 secs) and expire. I cannot increase the timeout limit, since the time it takes to complete a process varies depending on the amount of data it is processing.
    Does anybody know how to resolve this issue?
    Thanks in advance,
    RS

    Hi,
    I am calling this FM with the lowest-level object, which is the WBS element in our case. The issue here is not how the FM is called but the amount of data posted to this WBS element. So when I call this FM for a WBS with very many charges posted to it, it times out (even in parallel processing, since it basically generates another dialog process).
    Is there any way to avoid this timeout without changing the system timeout parameter?
    Thanks,
    RS

  • Issue in completing the block step for parallel processing

    Hi,
    I have created a workflow in which I have used a block step to send work items to multiple agents. I have used parallel processing in the block step. The number of agents is determined at runtime. Let's say I have two items in my multiline container (two agent ids). Inside the block I have put a user decision step, so the work item goes to the two approvers for approval at the same time. But even after both approvers have taken their decision, the flow does not come out of the block step. I want the flow to leave the block step at this point and continue to the next step of the workflow.
    Please suggest any helpful solution for it.
    Regards,
    Smit Shah

    I think there must be a binding problem. The binding from the WF container to the block container must be something like:
    &USERID[&_WF_PARFOREACH_INDEX&]& -----> &_USERID_LINE&
    When I checked in my system it behaves the way you want. I also included one decision step inside the block, hard-coded the user id values in the table USERID, made the above binding, and it works fine. For the decision agent I chose EXPRESSION and assigned the value &_USERID_LINE&.

  • FORK is Not happening Parallel processing- It's working sequential

    Hi,
    We are on PI 7.0 and SP 13.
    I am trying to test parallel processing using a Fork step (with two branches).
    My problem is that in SXMB_MONI both branches are not executed simultaneously; they execute one after the other.
    Has anybody done parallel processing in XI using BPM? Both calls have to finish at the same time. I mean if the first call takes 10 minutes, the second call also has to finish within those 10 minutes, not in another 10 minutes afterwards.
    I have heard of this problem in XI 3.0 and PI 7.0. Has anybody tested parallel processing with the Fork step on PI 7.1?
    Please help me: will this issue be resolved if I go to PI 7.1?
    Regards,
    Venu.

    Hi Henrique,
    they would not necessarily start at the same time but shouldn't be queued either
    - The customer expects the response within 17 or 20 seconds; a 34-second response will not be acceptable to the customer. Tomorrow we need to add some more targets and again it should take 17 seconds. They are checking how PI can handle the multithreading. I am not sure whether this problem is fixed in PI 7.1 or not.
    are there connection restrictions in your system? Check that
    - Where can I check the connection restrictions? If you know, please throw some light on this.
    Also, how's your BPM transactional behavior (did you flag the create new transaction steps)?
    - I did not check the flag for the create new transaction step; once my server is up I can check the flag and test.
    Regards,
    Venu.

  • Using BAPI_PO_CREATE1  in parallel processing

    Hi experts,
    I am using BAPI_PO_CREATE1 in parallel processing to create multiple purchase orders. The program is creating only one PO (the first one); however, it should create multiple POs as per the program and data.
    For the rest of the data it returns the error message "No instance of object type PurchaseOrder has been created".
    Please suggest how I can fix this issue.
    Points are sure.
    Thanks & Regards.
    Anirudh

    Without looking at some portion of the code around this call and your parallel processing logic, it is difficult to say.

  • Parallel Processing in Oracle 10g

    Dear Oracle Experts,
    I would like to use the parallel processing feature on my production database running on a Unix box.
    The number of CPUs in each node is 8, and it is a RAC database.
    Before going for this option I would like to clarify certain things regarding parallel processing:
    1. According to my server specification, how much DOP can I specify?
    2. Which option for setting parallelism is better:
    a. using 'ALTER TABLE a PARALLEL 4', or passing parallel hints in the SQL statements?
    3. We have batch processing jobs which load data into the tables from flat files (24*7) using SQL*Loader. Is it possible to parallelize this operation, and is there any negative effect if parallel is enabled?
    4. Query or DML - which one will perform best with the parallel option?
    5. What are the negative issues if the parallel option is enabled?
    6. What are the things to be taken care of while enabling the parallel option?
    Thanks in Advance

    Hi,
    first of all, you should read [Using Parallel Execution|http://download.oracle.com/docs/cd/B19306_01/server.102/b14223/usingpe.htm#DWHSG024] in the documentation for your version - almost all of these topics are covered there.
    1. According to my server specification, how much DOP can I specify?
    It depends not only on the number of CPUs. More important factors are the settings of PARALLEL_MAX_SERVERS and PARALLEL_ADAPTIVE_MULTI_USER.
    2. Which option for setting parallelism is better - 'ALTER TABLE a PARALLEL 4' or parallel hints in the SQL statements?
    It depends on your application. When you set PARALLEL on a table, all SQL dealing with that table is considered for parallel execution. So if it is normal for your app to access that table in parallel, it's OK. If you want to use PX only for a limited set of SQL, then hints or session settings are more appropriate.
    3. Is it possible to parallelize the SQL*Loader batch loads, and is there any negative effect?
    Yes, refer to the documentation.
    4. Query or DML - which one will perform best with the parallel option?
    Both may take advantage of PX (with some restrictions for parallel DML), and both may run slower than their non-PX versions.
    5. What are the negative issues if the parallel option is enabled?
    1) An object checkpoint happens before starting a parallel full table scan (true for >= 10gR2; before that version a tablespace checkpoint was used).
    2) More CPU and memory resources are used with PX - this can be both a benefit and an issue, especially with concurrent PX.
    6. What are the things to be taken care of while enabling the parallel option?
    Read the documentation - it contains almost everything you need to know. Since you are using RAC, you should not forget about the method of load balancing PX slaves between the nodes. If you are on 10g, refer to the INSTANCE_GROUPS/PARALLEL_INSTANCE_GROUPS parameters; if you are on 11g, configure services properly.
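    For point 2, the two forms look roughly like this when issued from a JDBC client (just a sketch - the table name SALES, the connection details and the DOP of 4 are placeholders):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class ParallelDemo {
            public static void main(String[] args) throws Exception {
                // placeholder connection details
                try (Connection con = DriverManager.getConnection(
                         "jdbc:oracle:thin:@//dbhost:1521/orcl", "scott", "tiger");
                     Statement st = con.createStatement()) {

                    // a) table-level setting: every statement touching SALES is now
                    //    considered for parallel execution with a default DOP of 4
                    st.execute("ALTER TABLE sales PARALLEL 4");

                    // b) statement-level hint: only this query is considered for PX
                    try (ResultSet rs = st.executeQuery(
                             "SELECT /*+ PARALLEL(s 4) */ COUNT(*) FROM sales s")) {
                        if (rs.next()) {
                            System.out.println("rows: " + rs.getLong(1));
                        }
                    }
                }
            }
        }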

  • Parallel processing using ABAP objects

    Hello friends,
    I had posted in the performance tuning forum regarding a performance issue; I am reposting it here as it involves OO concepts.
    The link for the previous posting:
    Link: [Independent processing of elements inside internal table]
    Here is the scenario:
    I have an internal table with 10 (independent) records, and I need to process them. The processing of one record does not have any influence on another. With a loop, the performance issue is that the 10th record has to wait until the first 9 records are processed, even though there is no dependency between the outputs.
    Could someone tell me a way to improve the performance?
    If I am not clear with the question, I will explain it further:
    An internal table has 5 numbers, say (1, 3, 4, 6, 7).
    We are trying to find the square of each number.
    In a loop, finding the square of 7 has to wait until 6 is completed, and that is a waste of time.
    This is related to parallel processing. I have referred to the parallel processing documents, but I want to do this conceptually.
    I am not using the conventional procedural paradigm but object orientation. I have a method which performs this action. What am I supposed to do in that regard?
    Comradely ,
    K.Sibi

    Hi,
    As examplified by Edward, there is no RFC/asynchronous support for Methods of ABAP Objects as such. You would indeed need to "wrap" your method or ABAP Object in a Function Module, that you can then call with the addition "STARTING NEW TASK". Optionally, you can define a Method that will process the results of the Function Module that is executed asynchronously, as demonstrated as well in Edward's program.
    You do need some additional code to avoid the situation where your program takes all the available resources on the Application Server. Theoretically, you cannot bring the server or system down, as there is a system profile parameter that determines the maximum number of asynchronous tasks that the system will allow. However, in a productive environment, it would be a good idea to limit the number of asynchronous tasks started from your program so that other programs can use some as well.
    Function Group SPBT contains a set of Function Modules to manage parallel processing. In particular, FM SPBT_INITIALIZE will "initialize" a Server Group and return the maximum number of Parallel Tasks, as well as the number of free ones at the time of the initialization. The other FM of interest is SPBT_GET_CURR_RESOURCE_INFO, that can be called after the Server Group has been initialized, whenever you want to "fork" a new asynchronous task. This FM will give you the number of free tasks available for Parallel Processing at the time of calling the Function Module.
    Below is a code snippet showing how these Function Modules could be used, so that your program always leaves a minimum of 2 tasks for Parallel Processing, that will be available for other programs in the system.
          IF md_parallel IS NOT INITIAL.
            IF md_parallel_init IS INITIAL.
    *----- Server Group not initialized yet => Initialize it, and get the number of tasks available
              CALL FUNCTION 'SPBT_INITIALIZE'
                EXPORTING
                  group_name                     = ' '
                IMPORTING
                  max_pbt_wps                    = ld_max_tasks
                  free_pbt_wps                   = ld_free_tasks
                EXCEPTIONS
                  invalid_group_name             = 1
                  internal_error                 = 2
                  pbt_env_already_initialized    = 3
                  currently_no_resources_avail   = 4
                  no_pbt_resources_found         = 5
                  cant_init_different_pbt_groups = 6
                  OTHERS                         = 7.
              md_parallel_init = 'X'.
            ELSE.
    *----- Server Group initialized => check how many free tasks are available
    *----- in the Server Group for parallel processing
              CALL FUNCTION 'SPBT_GET_CURR_RESOURCE_INFO'
                IMPORTING
                  max_pbt_wps                 = ld_max_tasks
                  free_pbt_wps                = ld_free_tasks
                EXCEPTIONS
                  internal_error              = 1
                  pbt_env_not_initialized_yet = 2
                  OTHERS                      = 3.
            ENDIF.
            IF ld_free_tasks GE 2.
    *----- We have at least 2 remaining available tasks => reserve one
              ld_taskid = ld_taskid + 1.
            ENDIF.
        ENDIF.
    You may also need to program a WAIT statement, to wait until all asynchronous tasks "forked" from your program have completed their processing. Otherwise, you might find yourself in the situation where your main program has finished its processing, but some of the asynchronous tasks that it started are still running. If you do not need to report on the results of these asynchronous tasks, then that is not an issue. But, if you need to report on the success/failure of the processing performed by the asynchronous tasks, you would most likely report incomplete results in your program.
    In the example where you have 10 entries to process asynchronously in an internal table, if you do not WAIT until all asynchronous tasks have completed, your program might report success/failure for only 8 of the 10 entries, because it finished before the asynchronous tasks for entries 9 and 10 of your internal table did.
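    Just to show the shape of that fork-then-WAIT pattern outside ABAP, here is a rough Java sketch (the pool size of 4 and the squaring task are made up; in ABAP the equivalent pieces are STARTING NEW TASK, the ON END OF TASK handler and the WAIT UNTIL statement):

        import java.util.List;
        import java.util.concurrent.CountDownLatch;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class ForkAndWait {
            public static void main(String[] args) throws InterruptedException {
                List<Integer> entries = List.of(1, 3, 4, 6, 7);
                // bounded pool plays the role of "leave some tasks free for others"
                ExecutorService pool = Executors.newFixedThreadPool(4);
                CountDownLatch done = new CountDownLatch(entries.size());
                for (Integer entry : entries) {
                    pool.submit(() -> {
                        try {
                            System.out.println(entry + " squared is " + entry * entry);
                        } finally {
                            done.countDown();   // signal completion, success or failure
                        }
                    });
                }
                done.await();     // the WAIT: do not report results before all tasks finish
                pool.shutdown();
                System.out.println("all tasks finished, results are complete");
            }
        }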
    Given the complexity of Parallel Processing, you would only consider it in a customer program for situations where you have many (ie, thousands, if not tens of thousands) records to process, that the processing for each record tends to take a long time (like creating a Sales Order or Material via BAPI calls), and that you have a limited time window to process all of these records.
    Well, whatever your decision is, good luck.

  • Error while using RSDRI_INFOPROV_READ : parallel processing error

    Hi
    I am also facing a parallel processing error while using the function module RSDRI_INFOPROV_READ in a transformation.
    When only one data package is there, the load happens without any issue. But when multiple data packages are involved, the load fails with the error "Exception in parallel processing".

    Hi Lijo,
    I got the following information from the function module documentation of the FM RSDRI_INFOPROV_READ.
    If neither I_SAVE_IN_FILE nor I_SAVE_IN_TABLE are set, then the return takes place in the form of packages (that is an internal table), of value I_PACKAGESIZE. A negative value means that the return should be in one package.
    Prathish.

  • Java Proxy Generation not working - Support for Parallel Processing

    Hi Everyone,
    As per SAP Note 1230721 - Java Proxy Generation - Support for Parallel Processing, when we generate a Java proxy from an interface we are supposed to get 2 archives (one for serial processing and another, suffixed with "PARALLEL", for parallel processing of Java proxies in the JPR).
    https://websmp230.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=1230721
    We are on the correct patch level as per the Note; however, when we generate the Java proxy from the IR for an outbound interface, it generates only 1 zip archive (whose name we provide ourselves in the create new archive section). This does not enable parallel processing of the messages in the JPR.
    Could you please help me with this issue and guide me as to how the archives can be generated for parallel processing.
    Thanks & Regards,
    Rosie Sasidharan.

    Hi,
    Thanks a lot for your reply, Prateek.
    I have already checked SAP Note 1142580 - "Java Proxy is not processing messages in parallel" - where they ask to modify the ejb-jar.xml. However, after performing the change in ejb-jar.xml and while building the EAR, I get the following error:
    Error! The state of the source cache is INCONSISTENT for at least one of the request DCs. The build might produce incorrect results.
    Then, on going through SAP Note 1142580 again, I realised that SAP Note 1230721 should also be looked at, as it is needed for generating the Java proxy from message interfaces in the IR for parallel processing.
    Kindly help me if any of you have worked on such a scenario.
    Thanks in advance,
    Regards,
    Rosie Sasidharan.

  • BPM Parallel Process with Exclusive Gateway

    Hi,
    I am facing an issue with an Exclusive Gateway in a parallel process.
    The issue is that the process always stays in the In-Progress state at the parallel join. I mean the process stops at the parallel join, and moreover there are no errors in the process. If I delete the Exclusive Gateway in the parallel branch, the process goes on to the next-level human task through the parallel join, i.e. it works fine.
    I have designed my process in such a way that the 1st task is a human task, then a parallel split with 2 human tasks, one of which goes through an Exclusive Gateway and the other is just a simple approval. Finally I merge these two human tasks using a parallel join, then trigger the final approval human task and close the process.
    Appreciate your quick suggestions to fix this issue.
    Thanks in advance,
    Dev...

    Hi Unni,
    Thanks for your reply.
    I have checked all the parallel tasks and all are in the completed state. No errors.
    If I delete the Exclusive Gateway it works fine. I have checked the tasks step by step in NWA, and everything goes well.
    Please let me know if I am missing anything.
    Thanks in advance,
    Dev

  • Parallel process run independently, but do not stop when vi completes

    So I posted a problem yesterday about getting the 'Elapsed Time' express VI to provide updates from a sub-VI to the calling VI. It was suggested that I create a parallel process in the sub-VI that runs the elapsed time function at the same time as the other processes. I tried to implement this idea but ran into a problem. The elapsed time process is a While loop running in parallel with the main While loop in the sub-VI. It updates every second, based on my time delay, and when I view the running sub-VI, the elapsed time indicator updates as it should. The calling VI, however, is not seeing these updates. I have wired an indicator to the sub-VI icon, but it does not change until the sub-VI finishes.
    Also, the other problem with the parallel process is that it runs forever, regardless of the other loop finishing. I have tried to wire an OR'd boolean to the stop terminal inside the While loop, but when I do that, the elapsed time process does not start.
    I have tried data binding a shared variable in the project, and then dragging that to my calling VI, but again I get no updates on the elapsed time.
    Any ideas????

    A VI must finish looping before it has an available output (the terminal on the sub-VI icon). Research how to communicate between loops. What you are doing can be accomplished with Notifiers (that's what I'd use), Queues, or Global/Shared Variables.
    Your issues appear to be due to a lack of understanding of the LabVIEW data-flow paradigm. Check out the Producer/Consumer example. Post code here and see if one of us can give more guidance.
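    Purely to illustrate the shape of that producer/consumer pattern, here is a rough sketch in Java rather than LabVIEW (in LabVIEW the queue below would be a Queue or Notifier shared between the two loops; all names are made up):

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;
        import java.util.concurrent.TimeUnit;

        public class ElapsedTimeDemo {
            public static void main(String[] args) throws InterruptedException {
                BlockingQueue<Long> updates = new LinkedBlockingQueue<>();
                final long start = System.currentTimeMillis();

                // "sub-VI" loop: produces an elapsed-time update every second
                Thread worker = new Thread(() -> {
                    try {
                        for (int i = 0; i < 5; i++) {
                            TimeUnit.SECONDS.sleep(1);        // simulated work
                            updates.put(System.currentTimeMillis() - start);
                        }
                        updates.put(-1L);                     // sentinel: work finished
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
                worker.start();

                // "calling VI" loop: consumes updates while the worker is still running
                long elapsed;
                while ((elapsed = updates.take()) >= 0) {
                    System.out.println("elapsed ms: " + elapsed);
                }
                worker.join();
                System.out.println("worker finished");
            }
        }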
    Richard

  • Need help with parallel process in background; not able to call FM in bgnd

    Hello,
    I have been trying for 2 days to solve the issue of parallel processing in the background without using FPP.
    For this I want to call a function module or class method in a new task, but have it processed by a background process and not a dialog process.
    I have searched so many websites, but everyone suggests 'CALL FUNCTION ... IN BACKGROUND TASK'. But the fact is that the processing of the function happens in a dialog process even in this case.
    I want to loop over a table and call the FM or class method inside each loop pass.
    Kindly suggest how I can call a function or class method in a new task on every call and have it processed in the background.
    thanks

    Balaji,
    Is the name of the button between single or double quotes?
    Regards,
    Dan
    Blog: http://DanielMcGhan.us/
    Work: http://SkillBuilders.com/
