Parallel Processing using OSB

Hi All,
I am working on a proxy service based on WSDL-A.
Stage-1
I have to do a service callout (sc1) to another business service with a particular parameter.
From the response of sc1 I need to pick two variables, var1 and var2, and use them in the requests of stage 2 and stage 3.
Stage-2
I have to do another service callout (sc2) to the same business service with particular parameters, including var1 returned by sc1.
Stage-3
I have to do another service callout (sc3) to the same business service with particular parameters, including var2 returned by sc1.
Stage-4
I need to use the responses of sc1, sc2 and sc3 and transform them into the desired response.
My only concern is that I have to wait for stage 2 to complete before starting stage 3, even though stage 3 has no real dependency on stage 2.
How can I ensure that stage 2 and stage 3 are processed in parallel?
Can anyone please explain how I can achieve that?

Hi Edwin,
I was trying the example on your blog and I am running into an issue: I am not sure how you initialize the response.
It always gives me an exception that the response is used before it is initialised.
If possible, can you send me the configuration jar to my email [email protected] ?
Cheers Nitin

Similar Messages

  • Parallel processing in OSB

    Hi everyone,
    I need to implement parallel processing in the OSB. I have a proxy service which implements a request reply pattern over JMS. This service needs to invoke three such services (request-reply over JMS) in parallel and aggregate the result and send it as a reply. How can I do this in OSB?
    I have looked at the Split-Join element but it seems to only support web services. Is that correct?
    Best regards,
    Dimo

    >
    Thank you for the fast reply. Wrapping in web services probably has a relatively high performance impact - I need to wrap my payload in SOAP, send it via HTTP, receive it over HTTP, parse the SOAP envelope, extract the payload and then send it via JMS. And the same steps in the reverse order to send the response. That seems like a lot of overhead to me.
    >
    You don't have to do all of that. If you wrap your JMS-based services with a WSDL-based service using the local transport, you avoid all HTTP communication because processing stays inside OSB.
    >
    Isn't there any other way to parallelize the execution? The application is processing synchronous requests from a voice frontend and it should be very responsive. Does anyone know how much overhead exactly goes into the whole SOAP wrapping and unwrapping?
    >
    I don't know of any other way for this case. Maybe someone else will suggest something better.
    I have never measured SOAP overhead, since for my business services it is only an insignificant fraction of the whole processing time. I would expect SOAP overhead to be not far from JMS overhead, especially when XML is used as the payload.

  • Parallel processing using ABAP objects

    Hello friends,
    I had posted in the performance tuning forum regarding a performance issue; I am reposting it here as it involves OO concepts.
    The link to the previous posting:
    Link: [Independent processing of elements inside internal table;
    Here is the scenario,
    I have an internal table with 10 independent records, and I need to process them. The processing of one record has no influence on another. With a loop, the performance issue is that the 10th record has to wait until the first 9 records are processed, even though there is no dependency between their outputs.
    Could someone suggest a way to improve the performance?
    If I am not clear with the question, I can explain it further...
    An internal table has 5 numbers, say (1, 3, 4, 6, 7),
    and we are trying to find the square of each number.
    In a loop, finding the square of 7 has to wait until 6 is completed, which is a waste of time...
    This is related to parallel processing; I have referred to the parallel processing documents, but I want to do this conceptually.
    I am not using the conventional procedural paradigm but object orientation. I have a method which performs this action. What am I supposed to do in that regard?
    Comradely ,
    K.Sibi

    Hi,
    As exemplified by Edward, there is no RFC/asynchronous support for methods of ABAP Objects as such. You would indeed need to "wrap" your method or ABAP Object in a Function Module, which you can then call with the addition "STARTING NEW TASK". Optionally, you can define a method that will process the results of the Function Module that is executed asynchronously, as demonstrated in Edward's program as well.
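    As an illustration of that wrap-and-call pattern, here is a minimal sketch of the spawning side. The function module Z_SQUARE_NUMBER, its parameter IV_NUMBER and the receiver routine COLLECT_RESULT are hypothetical names; the sketch assumes the FM has been created as RFC-enabled and wraps the square calculation from the question:
      DATA: lt_numbers TYPE STANDARD TABLE OF i,
            ld_number  TYPE i,
            ld_task(8) TYPE c.

    * Fill the table with some sample numbers
      DO 5 TIMES.
        ld_number = sy-index.
        APPEND ld_number TO lt_numbers.
      ENDDO.

    * Start one asynchronous task per record
      LOOP AT lt_numbers INTO ld_number.
        ld_task = sy-tabix.
        CONDENSE ld_task NO-GAPS.
        CALL FUNCTION 'Z_SQUARE_NUMBER'
          STARTING NEW TASK ld_task
          PERFORMING collect_result ON END OF TASK
          EXPORTING
            iv_number             = ld_number
          EXCEPTIONS
            communication_failure = 1
            system_failure        = 2.
      ENDLOOP.
    The results are picked up asynchronously in COLLECT_RESULT; see the WAIT discussion and the corresponding sketch further below.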
    You do need some additional code to avoid the situation where your program takes all the available resources on the Application Server. Theoretically, you cannot bring the server or system down, as there is a system profile parameter that determines the maximum number of asynchronous tasks that the system will allow. However, in a productive environment, it would be a good idea to limit the number of asynchronous tasks started from your program so that other programs can use some as well.
    Function Group SPBT contains a set of Function Modules to manage parallel processing. In particular, FM SPBT_INITIALIZE will "initialize" a Server Group and return the maximum number of Parallel Tasks, as well as the number of free ones at the time of the initialization. The other FM of interest is SPBT_GET_CURR_RESOURCE_INFO, that can be called after the Server Group has been initialized, whenever you want to "fork" a new asynchronous task. This FM will give you the number of free tasks available for Parallel Processing at the time of calling the Function Module.
    Below is a code snippet showing how these Function Modules could be used, so that your program always leaves a minimum of 2 tasks for Parallel Processing, that will be available for other programs in the system.
      IF md_parallel IS NOT INITIAL.
        IF md_parallel_init IS INITIAL.
    *----- Server Group not initialized yet => initialize it, and get the number of available tasks
          CALL FUNCTION 'SPBT_INITIALIZE'
            EXPORTING
              group_name                     = ' '
            IMPORTING
              max_pbt_wps                    = ld_max_tasks
              free_pbt_wps                   = ld_free_tasks
            EXCEPTIONS
              invalid_group_name             = 1
              internal_error                 = 2
              pbt_env_already_initialized    = 3
              currently_no_resources_avail   = 4
              no_pbt_resources_found         = 5
              cant_init_different_pbt_groups = 6
              OTHERS                         = 7.
          md_parallel_init = 'X'.
        ELSE.
    *----- Server Group initialized => check how many free tasks are available in the Server Group for parallel processing
          CALL FUNCTION 'SPBT_GET_CURR_RESOURCE_INFO'
            IMPORTING
              max_pbt_wps                 = ld_max_tasks
              free_pbt_wps                = ld_free_tasks
            EXCEPTIONS
              internal_error              = 1
              pbt_env_not_initialized_yet = 2
              OTHERS                      = 3.
        ENDIF.
        IF ld_free_tasks GE 2.
    *----- We have at least 2 remaining available tasks => reserve one
          ld_taskid = ld_taskid + 1.
        ENDIF.
      ENDIF.
    You may also need to program a WAIT statement, to wait until all asynchronous tasks "forked" from your program have completed their processing. Otherwise, you might find yourself in the situation where your main program has finished its processing, but some of the asynchronous tasks that it started are still running. If you do not need to report on the results of these asynchronous tasks, then that is not an issue. But, if you need to report on the success/failure of the processing performed by the asynchronous tasks, you would most likely report incomplete results in your program.
    In the example where you have 10 entries to process asynchronously in an internal table, if you do not WAIT until all asynchronous tasks have completed, your program might report success/failure for only 8 of the 10 entries, because it finishes before the asynchronous tasks for entries 9 and 10 have returned.
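    As an illustration, the receiving side of such a design could be sketched as follows. Again, this is only a sketch: Z_SQUARE_NUMBER, EV_SQUARE and COLLECT_RESULT are the hypothetical names from the snippet above, and gd_submitted / gd_received are assumed global counters maintained by the main program (gd_submitted incremented after each successful CALL FUNCTION ... STARTING NEW TASK):
    * At the end of the main program: do not report until every task has replied
      WAIT UNTIL gd_received >= gd_submitted.

    * Callback registered via PERFORMING collect_result ON END OF TASK
      FORM collect_result USING p_task TYPE clike.
        DATA ld_square TYPE i.
    *   Fetch the result of the finished asynchronous call
        RECEIVE RESULTS FROM FUNCTION 'Z_SQUARE_NUMBER'
          IMPORTING
            ev_square             = ld_square
          EXCEPTIONS
            communication_failure = 1
            system_failure        = 2.
    *   Count the reply so the WAIT above can complete
        ADD 1 TO gd_received.
      ENDFORM.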
    Given the complexity of parallel processing, you would only consider it in a customer program for situations where you have many (i.e. thousands, if not tens of thousands of) records to process, where the processing of each record tends to take a long time (like creating a Sales Order or Material via BAPI calls), and where you have a limited time window to process all of these records.
    Well, whatever your decision is, good luck.

  • Problem in Dynamic Parallel processing using Blocks

    Hi All,
    My requirement is to have parallel approvals, so I am trying to use dynamic parallel processing through blocks. I can't use forks since the agents are determined at runtime.
    I am using the ParForEach block and sending &AGENTS& as a multiline container element in the 'Parallel Processing' tab. I have a user decision inside the block. Right now I am hardcoding 2 agents in the 'agents' multiline element. It is working fine, but I am getting 2 instances of the block; I understand that is because I am sending &AGENTS& in the parallel processing tab. I just need one instance of the block going to all the users in the &AGENTS& multiline element.
    Please let me know how to achieve this. I have already searched the forum but I couldn't find anything that suits my requirement.
    Please help!
    Regards,
    Soumya

    Yes, that's true: whenever you use a ParForEach block, a separate work item ID is created for each value entry in the table, i.e. a separate instance is created, so parallel processing is not possible that way.
    Instead, you can create a fork with 3 branches and define an end condition such that the fork only completes once all 3 branches have been executed.
    Before the fork step, determine all the agents and store them in an internal table; you can access a single internal table entry by its index value. Check this [wiki|https://www.sdn.sap.com/irj/scn/wiki?path=/display/abap/accessingSingleEntryfromMulti-line+Element] on accessing a single entry from a multiline element.
    Assign an agent to each task in the fork.
    Since the condition states that you do not want to leave the fork step until all three branches are executed, the workflow will wait until all the steps are completed.

  • Dynamic Parallel Processing using Rule

    Hello,
    I am using a User Decision within a Block (ParForEach type) step to send work items to multiple approvers in parallel.
    For this I have created a Multi-line container LI_APPROVERS and bound &LI_APPROVERS[&_WF_PARFOREACH_INDEX&]& to &_LI_APPROVERS_LINE& in the "Parallel Processing" tab of the Block.
    Now in User Decision I am using Agent as Expression = &_LI_APPROVERS_LINE&. This is working perfectly fine if I fetch the values in LI_APPROVERS via a background method before "Block" step is executed.
    I want to know if we can do this using a "Rule" within the User Decision? Meaning approvers are determined by the Rule(through a FM) at the run time instead of fetching them beforehand. I created a custom Rule and tried passing it under Agents but it didn't work. I do not know what bindings need to be done and how each line will be passed to User Decision to create a work-item for each user.
    Or
    should I remove the Block step completely and directly use the User Decision task with the Parallel Processing option under the Miscellaneous tab?
    Can someone please explain how to achieve this using a Rule and exactly what bindings are required.
    Thanks.

    Hi Anjan,
    Yes, that's exactly what I want to know. I saw your response below in one of the threads but could not understand exactly how to do it. Can you please explain it?
    You have all your agents in one multiline container element in the workflow.
    Then you take a block step with ParForEach.
    Then create a custom rule which imports the multiline element of agents and a line_no. In the rule you populate the actor_tab with the agent from that multiline container element of agents; the logic takes the agent from multiline container[line_no].
    Then you take an activity step. As agent, use your custom rule with proper binding of the multiline element of agents, and for line_no you pass _***_line from the block container. The work item will then be sent to n number of people in parallel.
    This is my current design:
    Activity returns agents in LI_APPROVERS.
    At Block: I have binding &LI_APPROVERS[&_WF_PARFOREACH_INDEX&]& --> &_LI_APPROVERS_LINE&
    At UD: I have Agents as Expression = &_LI_APPROVERS_LINE&
    I want to remove the Activity step (which gets the agents in the background) and replace it with a Rule within the UD. What binding do I need from the Rule to the Workflow? How do I get the "line_no" from the rule, as you mentioned above?
    Thanks for your response.

  • Dynamic parallel processing using a multiline container element

    Hi All ,
    I just wanted to know how things work when we use "dynamic parallel processing" for a decision step. I came across a situation wherein a rule gets the approving user(s) and the work item should be sent to all those users. After getting approval from all the users, the workflow should proceed; otherwise it should terminate.
    I was just wondering whether "dynamic parallel processing" will do this job or not. I had also thought of using forks, but as the number of approvers is decided at runtime, I don't think that is possible.
       Any inputs ?
      Edit : We are working on CRM 5.0
    Thanks ,
    Shounak M.

    Hi Shounak,
    Just do as Mike says:
    use the multiline element for a subflow.
    The subflow consists of your user decision; if someone rejects it, remember that (this could be done by updating a small table using a method, or by using an event, or, as Mike suggested, by appending to a table).
    In the top flow, after the multiline element step, determine whether someone rejected it (wait for the event, or read the table).
    Kind regards, Rob Dielemans

  • Parallel processing using flow activity

    Hey, I have two input directories in the same process for two different partner links, and I want to use these two directories to load two flat files into two tables simultaneously. I tried using a flow to do this parallel processing, but when the process is uploaded onto the console it takes one input file and fails, changing the state of the process to 'off'. Can anyone let me know what the reason could be? Thanks and regards.

    Hi can someone help me please

  • Parallel processing using BLOCK step..

    Hi,
    I have used parallel processing with a BLOCK step and have put in a multiline container element. In the BLOCK step I have visibility of another container element generated because of the block step (multiline container_LINE). The required number of parallel processes is being created, but the problem is that the value in multiline container_LINE is not getting passed to the send mail step. I have checked the binding and everything looks OK. Please help.
    Sukumar.

    Hi
    When I am sure that I am doing a binding properly but it doesn't work, then:
    1. I activate workflow template definition (take a joke).
    2. I write the magic word
    /$sync
    in the command line, to refresh buffers.
    3. I delete the strange binding defined with drag&drop and define it one more time using the old method from the former binding editors in R/3 4.6c systems: I take the container elements from the lists of possible entries or write their names directly. I don't use drag&drop.
    Regards
    Mikolaj
    There are no problems, just "issues" and "improvement opportunities".

  • Parallel processing using aRFC

    Hi experts,
    Does anyone have a presentation on how to proceed with parallel processing with aRFC?
    Thanks in Advance.

    Look at this piece of code ...
    FORM start_onhand_extract_task .
      DO.
        IF g_num_running < g_avail_wps.
          EXIT.
        ENDIF.
        WAIT UP TO 5 SECONDS.
      ENDDO.
    * Create the file name with the task number
      ADD 1 TO g_task_num.
      CONCATENATE p_file1 g_task_num INTO g_task_name.
      CONCATENATE g_filename g_task_name  INTO task_tab-filename1.
      CONDENSE task_tab-filename1 NO-GAPS.
      task_tab-task_name    = g_task_name.
      APPEND task_tab.
      CLEAR g_msg_text.
      CALL FUNCTION 'ZMIO_GET_MARD_DATA'
        STARTING NEW TASK g_task_name
        DESTINATION IN GROUP p_grp
        PERFORMING decrease_wp ON END OF TASK
        EXPORTING
          i_filename = task_tab-filename1
        TABLES
          i_matnr    = r_matnr
          i_werks    = r_werks.
      CASE sy-subrc.
        WHEN 0.
          ADD 1 TO g_num_running.
          g_num_submitted = g_num_submitted + 1.
        WHEN 1.
          error_rec-task_name    = g_task_name.
          error_rec-filename1    = task_tab-filename1.
          APPEND error_rec.
          APPEND it_exp_t001w TO it_err_t001w.
          ADD 1 TO g_num_err.
          g_hold_num = g_num_running.
          WAIT UNTIL g_num_running < g_hold_num OR
                     g_hold_num = 0
                     UP TO 5 SECONDS.
        WHEN OTHERS.
          error_rec-task_name    = g_task_name.
          error_rec-filename1    = task_tab-filename1.
          error_rec-msg_text     = g_msg_text.
          APPEND error_rec.
          APPEND it_exp_t001w TO it_err_t001w.
          ADD 1 TO g_num_err.
      ENDCASE.
    ENDFORM.                    " start_onhand_extract_task
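    For completeness, here is a minimal sketch of what the DECREASE_WP routine registered above via PERFORMING decrease_wp ON END OF TASK could look like (an assumption on my part, the original poster did not include it); it picks up the outcome of the finished task and frees a slot in the running-task counter:
    FORM decrease_wp USING p_task_name TYPE clike.
    * Pick up the outcome of the asynchronous call; add IMPORTING/TABLES
    * parameters here if ZMIO_GET_MARD_DATA returns data to the caller
      RECEIVE RESULTS FROM FUNCTION 'ZMIO_GET_MARD_DATA'
        EXCEPTIONS
          communication_failure = 1 MESSAGE g_msg_text
          system_failure        = 2 MESSAGE g_msg_text.
      IF sy-subrc <> 0.
        ADD 1 TO g_num_err.
      ENDIF.
    * One task has finished, so a work process is free again
      SUBTRACT 1 FROM g_num_running.
    ENDFORM.                    " decrease_wp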

  • Dynamic Split Join: Parallel flow using OSB

    Hi,
    I am implementing a dynamic split-join in OSB, where in the for-loop I am invoking a database adapter.
    If I select parallel='yes' in the for-loop settings, I get a random number of records from the database as output.
    On the other hand, when running the for-loop sequentially, I get the exact number of records I am expecting.
    Could anybody figure out why this is happening?
    Regards,
    Tarun

    If I understand correctly, this is what you are doing:
    Configure a For loop based on a repeating element in source XML (for ex. for each customer id in xml message).
    Inside each loop you are making a DB lookup and getting one record for each element (for ex. customer details of each customer id)
    Finally you are aggregating them in the same variable (join)
    The problem you are seeing is that, when run in parallel, you do not see the details of all customers in the output variable?
    If yes, then can you please let me know how you are updating the details of each record in the out variable.

  • Parallel processing using NSOperation slows down application!

    For the "fun" of it, I'm creating a little application which matches images in iPhoto to my GPS logs (in GPX format) to geocode the photos.
    Currently, I've got it to the point where it will go through my images (about 1000, but only around 250 with matching GPS data) and gets the right data from the GPX files.
    The process was taking around 9 minutes to do, but with some alterations (using caching to eliminate GPX files outside the relevant date range) I've got it down to 3 minutes. The problem is that when I use multiple cores (Mac Pro 2x QX) it actually slows the process down to 3.5 minutes.
    All the processors are running with 50% load (were running at 100% before my other alterations that improved performance), but analysis shows 95% of the time being spent waiting in the XQuery routines.
    I imagined that this application would be perfect for multiple processors, as the matching section was processor intensive (complex XQuery expression) and appeared to be a very independent process.
    The shared resources passed to the function are:
    1) the path to the image being processed
    2) an array of GPX files (subclass of NSXMLDocument)
    Additionally, during the function it updates a dictionary related to the image (adds in the location data found).
    I've tried to eliminate the block by sending a copy of the GPX array in case the system is locking access to the array (although it is not mutable) while it is being enumerated (using Obj-C 2.0 enumeration).
    Can anybody give me a hint what would be causing these supposedly independent processes to be waiting? My only guess is that the XQuery library isn't thread safe, but can't find confirmation.
    Memory usage is being kept low, and there aren't any page swaps showing up.
    Thanks.
    Ray.

    etresoft wrote:
    Ray A wrote:
    Each photo is independent of the others, so I can't see any reason why multiple images couldn't be processed simultaneously.
    But there is only a single disk where all these images reside. But then, you said there was no disk activity. How are you accessing these images?
    I was a little inaccurate in my earlier posts. The "images" I speak of at this stage are just entries in the iPhoto albumdata.xml file, from which I obtain the date information for each image.
    The iPhoto file is read into memory and decomposed into the array of image data, from which the date is obtained and passed through to the xQuery methods.
    Perhaps not processor intensive (in terms of performing calculations) but reading from memory and comparing.
    Read from memory into where? Either it is on disk and you have serial access only or it has already been read into RAM.
    The GPX files are read into memory (i.e. RAM) when the programme launches, and converted into NSXMLDocuments. So there will be lots of memory access, but not disk access.
    As mentioned earlier, I've duplicated the array containing the files… (realisation happening)…
    Perhaps it is access to the NSXMLDocument files that is causing the lock. I've duplicated the array, but not a deep copy down to the actual XML objects. I'll see if that makes a difference.
    You shouldn't have to do even that. If you are just querying, there is no modification needed. They can all share the same data.
    That's what I figured too, especially since it is not a mutable array. I didn't have any luck with deep copying as my first attempt caused the programme to run out of available memory.
    But in reply to your question, when running as a single thread the bottleneck is also the XQuery execution. It takes about 1 second to execute a query on each relevant GPX file for a match. About 10 seconds when done in parallel! :-o
    How big are those files? How many GPX files do you have.
    Will have around 100 when it's running properly, but at the moment I'm using about a dozen. Each file is around 1MB.
    I would think you could merge all the data from all the GPX files first. Then start one thread to go through all the image files and pull out the search values and put them into a queue. Then have all the cores create an operator and grab a data element from the queue to process.
    My initial thought was to simplify things by merging them all, but the issue with that is the computer would need to scan one huge file for each image, rather than being able to omit the individual files that fall out of the date range relevant.
    Or, you could swap the problem space axes. You could divvy up the GPX data between all the cores. Then, for each image, look for a match in each subset - there should be only one. If NSXMLDocument is the bottleneck, this would solve it.
    May not be effective as it's only the relevant GPX file that takes a long time. The rest of the files are checked against to see if they cover the date in question, and skipped if not. Images without GPX data for their date (most of them since I'm using a test set of GPX files) get processed in the blink of an eye.
    But I appreciate the effort you're putting in to come up with possible solutions.
    While I'd like to know the reasons it's not working, I've got it to the point where it's processing images (with matching GPX files) in less than half a second on single thread, and putting the whole matching routine in separate thread has given me UI responsiveness. So if gets no better, at least the user (i.e. me) is kept entertained by the feedback while it runs.
    Ray.

  • Question on parallel processing using the STARTING NEW TASK keyword

    I have the following code in a program on my development system:
    call function 'FUNCTION_NAME'
        starting new task ld_taskname
        performing collect_output on end of task
          exporting
            pd_param1       = ld_param1
          tables
            pt_packet       = lt_packet.
    You'll notice in the code above I left out the following part of the function call:
    DESTINATION IN GROUP group
    In my one-server development system the topmost code executes fine, but I'm getting a 'REMOTE FUNCTION CALL ERROR' when I execute this in a multi-server QA system.
    My question: is the missing 'DESTINATION' keyword required for this technique to work in a multi-server environment, or is it an optional keyword? I assumed that, since it worked in my development environment, without the 'DESTINATION' addition the system simply used the current logged-on server to start the new processes.
    Any input appreciated.
    Thanks,
    Lee

    Hi Lee,
    Just take the F1 help on the CALL FUNCTION keyword and go to the variant
    CALL FUNCTION func STARTING NEW TASK task
                  [DESTINATION {dest|{IN GROUP {group|DEFAULT}}}]
                  parameter_list
                  [{PERFORMING subr}|{CALLING meth} ON END OF TASK].
    This gives you very clear information about this keyword.
    No one could give you better information than this, I hope.
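    To answer the original question directly: in a multi-server landscape it is usually safest to state explicitly where the asynchronous tasks may run. A minimal sketch (reusing the same hypothetical FUNCTION_NAME and variables as in Lee's snippet) with an explicit RFC server group would be:
    * DEFAULT lets the system choose from the generally available servers;
    * alternatively, pass a specific RFC server group (maintained in RZ12)
    call function 'FUNCTION_NAME'
        starting new task ld_taskname
        destination in group default
        performing collect_output on end of task
          exporting
            pd_param1       = ld_param1
          tables
            pt_packet       = lt_packet.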
    Cheers
    Ram

  • Parallel Processing using common SSIS variable

    Hi Guys,
    I am stuck in a scenario where I have to execute 2 script tasks in parallel, in such a way that task A initializes a value in variable X and the other script task, B, reads it.
    Since I have assigned a default wait time in script B before it actually tries to read variable X, script A should be completed by that time and the value should be assigned to variable X before it is read by script B.
    Both of these scripts are inside a sequence container.
    But the actual problem is that when I execute the package, script A waits for the completion of script B, which is the opposite of what I expected.
    I tried moving one script out of the sequence container as well, but that doesn't help.
    Any suggestions on this behaviour??
    Regards,
    Harsh

    "execute 2 script tasks in  parallel in such a way that task A initializes value to variable X and the other script task B reads it"
    To make them truly parralel
    Do not use the sequence container and do not connect the Scripts but then there is no control what starts 1st.
    Sharing a variable is thus difficult.
    But based on what you do there is a dependency on task B to wait for A. So it is a mutually exclusive design.
    Arthur

  • How to do parallel processing with dynamic internal table

    Hi All,
    I need to implement parallel processing that involves dynamically created internal tables. I tried doing so using RFC function modules (using STARTING NEW TASK and similar techniques) but did not succeed: this requires RFC-enabled function modules, and RFC-enabled function modules do not allow the generic data type (STANDARD TABLE) that is needed for passing dynamic internal tables. My exact requirement is as follows:
    1. I have a large chunk of data in two internal tables; one of them is formed dynamically and hence its structure is not known at the time of coding.
    2. This data has to be processed together to generate another internal table, whose structure is pre-defined. But this processing is taking a very long time as the number of records is close to a million.
    3. I need to divide the dynamic internal table into chunks of (say) 1000 records each and pass each chunk to a function module submitted to run in another task. Many such tasks will be executed in parallel.
    4. The function module running in parallel can insert the processed data into a database table, and the main program can access it from there.
    Unfortunately, due to the limitation of not allowing generic data types in RFC, I am unable to do this. Does anyone have any idea how to implement parallel processing using dynamic internal tables under these conditions?
    Any help will be highly appreciated.
    Thanks and regards,
    Ashin

    Try the code below...
      DATA: w_subrc TYPE sy-subrc.
      DATA: w_infty(5) TYPE c.
      DATA: w_string TYPE string.
      FIELD-SYMBOLS: <f1> TYPE table.
      FIELD-SYMBOLS: <f1_wa> TYPE any.
      DATA: ref_tab TYPE REF TO data.
    * Build the name of the infotype structure, e.g. 'P0002'
      CONCATENATE 'P' infty INTO w_infty.
    * Create a dynamic internal table typed on that structure
      CREATE DATA ref_tab TYPE STANDARD TABLE OF (w_infty).
      ASSIGN ref_tab->* TO <f1>.
    * Create a dynamic work area
      CREATE DATA ref_tab TYPE (w_infty).
      ASSIGN ref_tab->* TO <f1_wa>.
      IF begda IS INITIAL.
        begda = '18000101'.
      ENDIF.
      IF endda IS INITIAL.
        endda = '99991231'.
      ENDIF.
      CALL FUNCTION 'HR_READ_INFOTYPE'
        EXPORTING
          pernr           = pernr
          infty           = infty
          begda           = begda
          endda           = endda
        IMPORTING
          subrc           = w_subrc
        TABLES
          infty_tab       = <f1>
        EXCEPTIONS
          infty_not_found = 1
          OTHERS          = 2.
      IF sy-subrc <> 0.
        subrc = w_subrc.
      ENDIF.

  • FORK is not doing parallel processing - it's working sequentially

    Hi,
    We are on PI 7.0, SP 13.
    I am trying to test parallel processing using a Fork step (with two branches).
    My problem is that in SXMB_MONI both branches are not executed simultaneously; they are executed one after the other.
    Has anybody done parallel processing in XI using BPM? Both calls have to finish at the same time; I mean if the first call takes 10 minutes, the second call also has to finish within those first 10 minutes, not another 10 minutes.
    I have heard about this problem in XI 3.0 and PI 7.0, but has anybody tested parallel processing with the Fork step in PI 7.1?
    Please help me: will this issue be resolved if I go to PI 7.1?
    Regards,
    Venu.

    Hi Henrique,
    They would not necessarily start at the same time, but they should not be queued either. The customer expects the response within 17 or 20 seconds; a response coming back in 34 seconds will not be OK for the customer. Tomorrow we need to add some more targets and it should still take 17 seconds. They are checking how PI can handle the multithreading. I am not sure whether this problem is fixed in PI 7.1 or not.
    "Are there a number of connection restrictions in your system? Check that" - Where can I check connection restrictions? If you know, please throw some light on this.
    "Also, how's your BPM transactional behavior (did you flag the create new transaction steps)?"
    - I did not check the flag for the create new transaction step; once my server is up I can check the flag and test.
    Regards,
    Venu.
