Parallel processing and throttle

Hi experts,
Where exactly are parallel processing and throttling used in XI?

You can develop scenarios in BPM for parallel processing. A simple use case is splitting an incoming file into multiple files and sending them out in parallel. A block with ParForEach mode would be needed in this case.
Search in help for more information.
VJ
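Outside XI, the split-and-send pattern VJ describes (break the input into parts, dispatch each part concurrently, wait for all of them) can be sketched in plain Java. This is an illustrative analogue only; the class name and chunk size are invented, and "sending" is simulated by joining each chunk.

```java
import java.util.*;
import java.util.concurrent.*;

public class SplitAndSend {
    // Split the input records into fixed-size chunks and "send" each chunk
    // in parallel, like the branches of a ParForEach block.
    public static List<String> sendInParallel(List<String> records, int chunkSize)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<String>> futures = new ArrayList<>();
        for (int i = 0; i < records.size(); i += chunkSize) {
            List<String> chunk = records.subList(i, Math.min(i + chunkSize, records.size()));
            // Each chunk is handed to its own task (simulated send).
            futures.add(pool.submit(() -> String.join(",", chunk)));
        }
        List<String> sent = new ArrayList<>();
        for (Future<String> f : futures) sent.add(f.get()); // wait for every branch
        pool.shutdown();
        return sent;
    }
}
```

The futures are collected in submission order, so the output preserves the order of the chunks even though they run concurrently.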

Similar Messages

  • Parallel Processing and Capacity Utilization

    Dear Guru's,
    We have following requirement.
    Work center A capacity is 1000. (Operations are similar)
    Work center B capacity is 1500. (Operations are similar)
    Work center C capacity is 2000. (Operations are similar)
    1) For product A the production order quantity is 4500. Can we use all three work centers in parallel through the routing?
    2) For product B the production order quantity is 2500. Can we use only work centers A and B in parallel through the routing?
    If yes, please explain how.
    Regards,
    Rashid Masood

    Maybe you can create a virtual work center VWCA = A+B+C (connected with a hierarchy via transaction CR22) and another VWCB = A+B, and route your products to the appropriate virtual work center.

  • Parallel processing and time-out

    Hi all,
    I've got a problem doing a great number of postings.
    Because the elapsed time for these postings is too long, I tried doing it with a function module called IN BACKGROUND TASK. There is also the alternative STARTING NEW TASK.
    But I figured out that both variants start dialog work processes, and I think the standard time-out for dialog work processes is 300 seconds.
    Will this timeout kill the processes or not?
    And which alternative is best for parallel processing?
    Thanks in advance
    regards
    Olli

    Hi Oliver,
    Some solutions here:
    1. You could increase the value of the dialog time-out (although this can only go up to a maximum of 600 seconds). This parameter is in the SAP profiles (parameter name rdisp/max_wprun_time).
    2. As suggested by Christian, decrease the amount of work within one LUW. You can do this by inserting a COMMIT WORK from time to time. The COMMIT WORK also resets the timeslice counter of the running dialog process (thus granting an extra timeslice of work). The downside is that if you have many related objects to modify, your ROLLBACK options become limited.
    3. Split the process into several tasks and put them to work in the background (by scheduling jobs for them).
    4. Program your own parallel handler (see sample code). With this you could process document by document (as if each is done separately). The number of dialog processes (minus 2) is the limit you could use.
    Sample code:
    * Declarations
    CONSTANTS:
      opcode_arfc_noreq TYPE x VALUE 10.
    DATA:
       server       TYPE msname,
       reason       TYPE i,
       trace        TYPE i VALUE 0,
       dia_max      TYPE i,
       dia_free     TYPE i,
       taskid       TYPE i VALUE 0,
       taskname(20) TYPE c,
       servergroup  TYPE rzlli_apcl.
    * Parallel processes free check
    CALL 'ThSysInfo' ID 'OPCODE' FIELD opcode_arfc_noreq
                     ID 'SERVER' FIELD server
                     ID 'NOREQ'  FIELD dia_free
                     ID 'MAXREQ' FIELD dia_max
                     ID 'REASON' FIELD reason
                     ID 'TRACE'  FIELD trace.
    IF dia_free GT 1.
      SUBTRACT 2 FROM dia_free.
      SUBTRACT 2 FROM dia_max.
    ENDIF.
    * You must leave some dialogs free (otherwise no one can logon)
    IF dia_free LE 1.
      MESSAGE e000(38)
         WITH 'Not enough processes free'.
    ENDIF.
    * Prepare your run
    ADD 1 TO taskid.
    WRITE taskid DECIMALS 0 TO taskname LEFT-JUSTIFIED.
    CONDENSE taskname.
    * Run your pay load
    CALL FUNCTION 'ZZ_YOUR_FUNCTION'
      STARTING NEW TASK taskname
      DESTINATION IN GROUP servergroup
      EXPORTING
    *   Your exporting parameters come here
      EXCEPTIONS
        communication_failure = 1
        system_failure        = 2
        resource_failure      = 3
        OTHERS                = 4.
    Of course you would put this within a loop and let your "payload" function fire off for each document.
    You MUST check the number of free processes just before you run the payload.
    And as a last reminder: do NOT use the ABAP statement WAIT (it disrupts the counting of free processes).
    Hope this will help you,
    Regards,
    Rob.
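    Rob's pattern (check how many processes are free and never dispatch more work than that) is language-neutral. Here is a hedged Java analogue in which a Semaphore's permits play the role of free dialog work processes; the class name, slot count, and payload are all invented for illustration.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class Throttle {
    // Permits stand in for free dialog work processes.
    static final int FREE_SLOTS = 3;

    public static int runAll(int tasks) throws InterruptedException {
        Semaphore slots = new Semaphore(FREE_SLOTS);
        AtomicInteger done = new AtomicInteger();
        ExecutorService pool = Executors.newCachedThreadPool();
        for (int i = 0; i < tasks; i++) {
            slots.acquire();                  // block until a "process" is free
            pool.submit(() -> {
                try {
                    done.incrementAndGet();   // the payload work
                } finally {
                    slots.release();          // give the slot back
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return done.get();
    }
}
```

    As in Rob's ABAP version, the dispatcher itself blocks when no slot is free, which is exactly the throttling behaviour that protects the rest of the system.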

  • ABAP OO and parallel processing

    Hello ABAP community,
    I am trying to implement an ABAP OO scenario where I have to take into account parallel processing and processing logic in the sense of update function modules (TYPE V1).
    The scenario is defined as follows:
    Frame class X creates an instance of class Y and an instance of class Z.
    Classes Y and Z should be processed in parallel, so class X calls classes Y and Z.
    Classes Y and Z call BAPIs and make different database changes.
    When class Y or Z has finished, the processing status is written into a status table by the caller, class X.
    The processing logic within class Y and class Z should each form an SAP LUW in the sense of an update function module (TYPE V1).
    Can I use events?
    (How) should I use "CALL FUNCTION ... IN UPDATE TASK"?
    (How) should I use "CALL FUNCTION ... STARTING NEW TASK"?
    What is the best method to achieve this behaviour?
    Many thanks for your suggestions.

    Hello Christian,
    I will describe in detail how I solved this problem. Maybe there is a newer way, but this works.
    STEPS:
    I assume you have split your data into packages.
    1.) Create an RFC-enabled FM, e.g. Z_WAIT.
    It returns OK or NOT OK.
    This FM does the following:
    DO.
      CALL FUNCTION 'TH_WPINFO'. " loop until the result lists more
    ENDDO.                       " than a certain number of lines (==> free tasks)
    If the result is OK, free tasks are available.
    call your FM (RFC!) like this:
    CALL FUNCTION <fm>
      STARTING NEW TASK ls_tasknam           " unique identifier!
      DESTINATION IN GROUP p_group
      PERFORMING return_info ON END OF TASK
      EXPORTING
      TABLES
      IMPORTING
      EXCEPTIONS
    *   Take care of the order of the exceptions!
        communication_failure = 3
        system_failure        = 2
        unforced_error        = 4
        resource_failure      = 5
        OTHERS                = 1.
    *:--- Then you must check the difference between
    *:--- the started calls and the received calls.
    *:--- If the number exceeds a certain value limit_tasks:
    WAIT UNTIL called_task < limit_tasks UP TO 600 SECONDS.
    The value should not be greater than 20!
    Data description:
    PARAMETERS: p_group LIKE bdfields-rfcgr DEFAULT 'Server_alle'. " for example; use the F4 help
    if you have defined the report parameter as above.
    ls_tasknam ==> just the increasing number of RFC calls, as a character value.
    RETURN_INFO is a FORM routine in which you can check the results. Within this FORM you must call:
    RECEIVE RESULTS FROM FUNCTION <fm>
      TABLES: ...   " the tables of your <fm>, in exactly the same order!
      EXCEPTIONS
        communication_failure     = 3
        system_failure            = 2
        unforced_error            = 4
        no_activate_infostructure = 1.
    Here you must count the received calls!
    And you can save them into an internal table for checking.
    I hope I could help you a little bit
    Good luck
    Michael
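    Michael's bookkeeping (count the started calls, count the received callbacks, and wait whenever the gap reaches a limit) maps onto a completion-queue pattern. A hedged Java sketch of that idea follows; the class name, limit, and trivial payload are illustrative, not Michael's actual code.

```java
import java.util.concurrent.*;

public class RfcLikeDispatch {
    // Never let (started - received) exceed LIMIT_TASKS, mirroring
    // "WAIT UNTIL called_task < limit_tasks".
    static final int LIMIT_TASKS = 4;

    public static int process(int total) throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(LIMIT_TASKS);
        CompletionService<Integer> results = new ExecutorCompletionService<>(pool);
        int started = 0, received = 0, sum = 0;
        while (received < total) {
            // Dispatch while there is headroom below the limit.
            while (started < total && started - received < LIMIT_TASKS) {
                final int id = started++;
                results.submit(() -> id + 1);   // the "RFC payload"
            }
            sum += results.take().get();        // the RETURN_INFO callback
            received++;
        }
        pool.shutdown();
        return sum;                             // 1 + 2 + ... + total
    }
}
```

    The `take()` call is the analogue of receiving results ON END OF TASK: it blocks until some outstanding call has finished, which is what keeps the started/received gap bounded.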

  • Parallel Process in a Process chain design

    Hi
    Based on what factors can we decide how many parallel data loads (processes) to include while designing a process chain?
    Thanks

    Hi Maxi,
    There is no hard and fast rule for that. For a trial you can add a specific number of parallel processes and schedule the chain; if there are not enough background processes available to fulfil your request, SAP will give you a warning, and there you can see how many processes are available.
    But if you go for the maximum number of parallel processes, it actually depends on how many processes are available at the time the process chain is scheduled. Even if your server has enough processes, if they are being used by other work you will still get a warning while executing the process chain.
    So check how many background processes your server has and then make an informed decision.
    Regards,
    Durgesh.

  • Parallel Processing

    Hi,
    I am trying to implement parallel processing and am facing a problem where in the Function Module contains the statement :
    SUBMIT (iv_repid) TO SAP-SPOOL
      WITH SELECTION-TABLE it_rspar_tmp
      SPOOL PARAMETERS iv_print_parameters
      WITHOUT SPOOL DYNPRO
      VIA JOB iv_name NUMBER iv_number
      AND RETURN.
    I call the Function Module as such :
    CALL FUNCTION 'YFM_OTC_ZSD_ORDER'
        STARTING NEW TASK v_task_name
        DESTINATION 'NONE'
        PERFORMING receive_results ON END OF TASK
        EXPORTING
          iv_repid              = lv_repid
          iv_print_parameters   = v_print_parameters
          iv_name               = v_name
          iv_number             = v_number
        TABLES
          it_rspar_tmp          = t_rspar_tmp[]
        EXCEPTIONS
          communication_failure = 1
          OTHERS                = 2.
    But I keep getting the error: Output device "" unknown.
    Kindly advise.
    Thanks.

    I need the output of a report to be generated in the spool; I then retrieve it later from the spool and display it along with another ALV in my current program.
    I have called the JOB_OPEN and JOB_CLOSE function modules. Between these two FM calls I have written the code for the parallel processing.
    CALL FUNCTION 'YFM_OTC_ZSD_ORDER'
        STARTING NEW TASK v_task_name
        DESTINATION 'NONE'
        PERFORMING receive_results ON END OF TASK
        EXPORTING
          iv_repid              = lv_repid
          iv_print_parameters   = v_print_parameters
          iv_name               = v_name
          iv_number             = v_number
        TABLES
          it_rspar_tmp          = t_rspar_tmp[]
        EXCEPTIONS
          communication_failure = 1
          OTHERS                = 2.
    After this, I retrieve the data with function module RSPO_RETURN_SPOOLJOB.
    All the above steps work while I am in debugging mode. At the RFC call a new session opens; I execute that session completely, return to the main program execution, execute to the end, and I get the desired output.
    But in debug mode, if I reach the RFC and do not execute the FM in the new session, and instead go back to the main program and execute it directly, I can replicate the error: Output device "" unknown.
    So I guess it has something to do with the SUBMIT statement in the RFC.
    Any assistance would be great!
    Thanks!

  • How to get BI background jobs to utilize parallel processing

    Each step in our BI process chains creates exactly one active batch job (SM37), which in turn utilizes only one background process (SM50).
    How do we get the active BI batch job to use more than one background process, similar to parallel processing (RZ20) in an ERP system?

    Hi there,
    Have you checked the number of background and parallel processes? Take a look at SAP Note 621400 - Number of required BTC processes for process chains. This may be helpful:
    Minimum (with this setting, the chain runs more or less serially):
    number of parallel subchains at the widest part of the chain + 1.
    Recommended:
    number of parallel processes at the widest part of the chain + 1.
    Optimal:
    number of parallel processes at the widest part of the chain + number of parallel subchains at the widest part + 1.
    The optimal setting just avoids a delay if several subchains are started in parallel at the same time. With such a process chain implementation and the recommended number of background processes, there can be a short delay at the start of each subchain (depending on the frequency of the background scheduler; in general only ~1 minute).
    Attention: note that a higher degree of parallel processing, and therefore more batch processes, only makes sense if the system has sufficient hardware capacity.
    I hope this helps, or it may lead you to further checks to make.
    Cheers,
    Karen

  • Troubleshooting the lockwaits for parallel processing jobs

    Hi Experts,
    I am having difficulty tracing the job which is interfering with a business-critical job.
    The job in question uses parallel processing, and the other jobs running at the time do the same.
    If I see a lock wait for some process, which may be a dialog or update process spawned by these jobs, I have difficulty knowing which one is holding the lock and which one is waiting.
    So, is there any way to identify the dialog or update processes used for parallel processing by a particular background job?
    Please help me, as this is business critical and we have high visibility in this area.
    Any suggestions will be appreciated.
    Regards
    Raj

    Hi Raj,
    First of all, please indicate if you are using SAP Business One.  If yes, then you need to check those locks under SQL Management Studio.
    Thanks,
    Gordon

  • Use of parallel processing profiles with SNP background planning

    I am using APO V5.1.
    In SNP background planning jobs I notice different planning results depending on whether I use a parallel processing profile or not.
    For example, if I use a profile with 4 parallel processes and run a network heuristic to process 5 location products, I get an incomplete planning answer.
    Is this expected behaviour? What are the good practices for using these profiles?
    Any advice appreciated...

    Hello,
    I don't think using a parallel processing profile is a good idea when you run the network heuristic, since in the network heuristic the sequence of the location products is quite important. The sequence is determined by the low-level code, as you may already know.
    For example, in the case of external procurement it must first plan the distribution center and then the supplying plant, and in the case of in-house production it must first plan the final product and then the components.
    If you use parallel processing, the data set, which is sorted by low-level code, is divided into several blocks that are processed at the same time. This can upset the planning sequence: before the final product is planned in one block, its component may already have been planned in another block. When the final product is then planned, a new requirement for the component is generated, but the component will not be planned again, which results in a supply shortage of the component.
    If there are many location products, dividing the data set manually may be a good practice. You can put related location products in one job and set up several background jobs to plan the different data sets.
    Best Regards,
    Ada
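    Ada's point is an ordering constraint: a component's demand is only known after its parent has been planned, so blocks planned in parallel can miss requirements generated late. Sequential planning in low-level-code order can be sketched as follows; the product names, the two-level BOM, and the quantities are all invented for illustration.

```java
import java.util.*;

public class LowLevelCodePlanning {
    // parent -> component; planning a parent creates demand for its component.
    static final Map<String, String> COMPONENT_OF = Map.of("FERT", "HALB", "HALB", "ROH");

    // Plan in low-level-code order: each product is planned exactly once,
    // only after all demand for it exists.
    public static Map<String, Integer> plan(int demandForFert) {
        List<String> byLowLevelCode = List.of("FERT", "HALB", "ROH"); // codes 0, 1, 2
        Map<String, Integer> demand = new HashMap<>(Map.of("FERT", demandForFert));
        Map<String, Integer> planned = new LinkedHashMap<>();
        for (String product : byLowLevelCode) {
            int qty = demand.getOrDefault(product, 0);
            planned.put(product, qty);                                // create supply
            String comp = COMPONENT_OF.get(product);
            if (comp != null) demand.merge(comp, qty, Integer::sum);  // pass demand down
        }
        return planned;
    }
}
```

    If FERT and ROH were planned simultaneously in separate blocks, ROH would be planned before HALB's demand for it existed, which is exactly the shortage Ada describes.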

  • Parallel process with a queue and file?

    Hello, first of all sorry for my bad English.
    I have been working for days on a project where I have to demonstrate parallel processes while transferring information in different ways, together with their problems (timing and so on).
    I chose to transmit information to a parallel process by (1) a queue and by (2) a file (.txt). (Other ways are welcome; do you have one or two other ideas?)
    To do this I made three while loops: the first is the original one, where the original information (a signal) is created and sent by queue and by file to the other two while loops, where the information is evaluated to recreate the same signal.
    At the end you can compare all the signals to see whether they are the same, and so answer the question about the parallelism of the processes.
    But my VI has some problems:
    the version with the queue works pretty well - it is almost parallel -
    but the version with the file does not run in parallel, and I have no idea how to solve it.
    I'm a newbie.
    Can someone correct my file so that both the file and queue versions run parallel with the original one, or tell me what I can or must do?
    Attachments:
    Queue_Data_Parallel_FORUM.vi ‏23 KB

    LapLapLap wrote: (original question quoted in full above)
    A queue is technically never parallel, though you can have several if you really need parallelism. Other methods for transferring information between processes include Events, Action Engines, Notifiers (and why not web services).
    Due to limitations in the disk system you can only read/write one file at a time from one process, so I wouldn't recommend that. If you use a RAM disk it might work.
    /Y
    LabVIEW 8.2 - 2014
    "Only dead fish swim downstream" - "My life for Kudos!" - "Dumb people repeat old mistakes - smart ones create new ones."
    G# - Free award winning reference based OOP for LV

  • Explain Plan - Parallel Processing Degree of 2 and CPU_Cost

    When I use a hint to request parallel processing with a degree of 2, the I/O cost seems consistently divided by 1.8, but the CPU cost adjustment is inconsistent (between 2.17 and 2.62).
    Any ideas on why the CPU cost varies with each table?
    Is there a formula to adjust the CPU cost?
    Thanks,
    Thanks,
    Summary:
    The I/O cost reduction is consistent (divide by 1.8):
    Table 1: 763/424 = 1.8
    Table 2: 18774/10430 = 1.8
    Table 3 (not shown): 5/3 ≈ 1.7
    But the CPU cost reduction varies (between 2.17 and 2.62):
    Table 1: 275812018/122353500 = 2.25
    Table 2: 7924072407/3640755000 = 2.17
    Table 3 (not shown): 791890/301467 = 2.62
    Example:
    Oracle Database 10.2.0.4.0
    Table 1:
    1.) Full table scan on Table 1 without parallel processing.
    EXPLAIN PLAN FOR
      SELECT /*+ CPU_COSTING PARALLEL(table_1,1) */ * FROM table_1;
    SQL> SELECT cost, io_cost, cpu_cost FROM plan_table;
       IO_COST    CPU_COST
           763   275812018
    2.) Process Table 1 in parallel with a degree of 2.
    EXPLAIN PLAN FOR
      SELECT /*+ CPU_COSTING PARALLEL(table_1,2) */ * FROM table_1;
       IO_COST    CPU_COST
           424   122353500
    Table 2:
    3.) Full table scan on Table 2 without parallel processing.
    EXPLAIN PLAN FOR
      SELECT /*+ CPU_COSTING PARALLEL(table_2,1) */ * FROM table_2;
       IO_COST    CPU_COST
         18774  7924072407
    4.) Process Table 2 in parallel with a degree of 2.
    EXPLAIN PLAN FOR
      SELECT /*+ CPU_COSTING PARALLEL(table_2,2) */ * FROM table_2;
       IO_COST    CPU_COST
         10430  3640755000

    The COST value is for the benefit of the CBO, not you. What should matter more to you is the elapsed run time of the SQL.

  • Issues with parallel processing in Logical Database PCH and PNP

    Has anyone encountered issues when executing programs in parallel that utilize the logical database PCH or PNP?
    Our scenario is the following:
    We have 55 concurrent jobs that execute a program using the logical database PCH at a given time. We load the PCHINDEX table with the code below.
          wa_pchindex-plvar = '01'.
          wa_pchindex-otype = 'S'.
          wa_pchindex-objid_low = index_objid.
          APPEND wa_pchindex TO pchindex.
    We have seen instances where, when the program is executed in parallel with each process having its own range of position IDs, some positions are dropped or some are added that are outside the range of the given process.
    For example:
    process 1 has a range of position IDs 1-10
    process 2 has a range of position IDs 11-20
    process 3 has a range of position IDs 21-30
    Process 3 drops position 25 and adds position 46.
    Has anyone faced a similar issue?
    Thanks for your help.
    Best Regards,
    Duke

    Hi,
    first of all, you should read [Using Parallel Execution|http://download.oracle.com/docs/cd/B19306_01/server.102/b14223/usingpe.htm#DWHSG024] in the documentation for your version - almost all of these topics are covered there.
    1. According to my server specification, how much DOP can I specify?
    It depends not only on the number of CPUs. More important factors are the settings of PARALLEL_MAX_SERVERS and PARALLEL_ADAPTIVE_MULTI_USER.
    2. Which option for setting parallelism is good - using 'ALTER TABLE a PARALLEL 4' or passing parallel hints in the SQL statements?
    It depends on your application. When setting PARALLEL on a table, all SQL dealing with that table is considered for parallel execution. So if it is normal for your app to use parallel access to that table, it's OK. If you want to use PX on a limited set of SQL, then hints or session settings are more appropriate.
    3. We have batch processing jobs which load data into the tables from flat files (24*7) using SQL*Loader. Is it possible to parallelize this operation, and are there any negative effects if parallelism is enabled?
    Yes; refer to the documentation.
    4. Query or DML - which one will perform best with the parallel option?
    Both may take advantage of PX (with some restrictions on parallel DML), and both may run slower than their non-PX versions.
    5. What are the negative issues if the parallel option is enabled?
    1) An object checkpoint happens before starting a parallel FTS (true for >= 10gR2; before that version a tablespace checkpoint was used).
    2) More CPU and memory resources are used with PX - this can be both a benefit and an issue, especially with concurrent PX.
    6. What should be taken care of while enabling the parallel option?
    Read the documentation - it contains almost all you need to know. Since you are using RAC, you should not forget about the method of PX slave load balancing between nodes. If you are on 10g, refer to the INSTANCE_GROUPS/PARALLEL_INSTANCE_GROUPS parameters; if you are using 11g, configure services properly.

  • Dynamic Parallel Approval for HCM Process and Forms

    Hi everyone,
    I have a scenario where I need to use "Dynamic Parallel Approval" (or, to keep it simple, initially I tried the "Parallel Approval" wizard) for a workflow used in HCM Processes and Forms.
    The standard task for approval in Processes and Forms is TS17900101. I have specified a multiline container element in the Miscellaneous tab of this task. However, I was unable to use this task in the wizard. There are no results attached to this task, unlike other standard approval tasks (like TS30200147). I need to use task TS17900101 in the workflow assigned to Processes and Forms, but I am not sure how to handle this scenario (parallel approval).
    If this is not the right way of doing it, is there any workaround for "Parallel Approval" in HCM Processes and Forms?
    Could anybody throw some light on this area?
    Thanks for your help.
    - MM

    Thanks Anuj. But I believe the container element that I add in the Miscellaneous tab does not necessarily have to be used in the agent assignment. The multiline container element is just there to instantiate the work item 'n' number of times. Correct me if I am wrong.
    My concern is that I was unable to use this approval task (TS17900101) in the workflow wizard for dynamic parallel approval.
    Arghadip - thanks for your suggestion. I have seen some of your nice contributions in the WF forum.
    I actually tried using Blocks. But this is what I ran into: when I send multiple approval requests (say 3), if one person has approved and a second has rejected, I need to take the work item out of the third person's inbox (because someone in the group has rejected it). I am not sure this is possible using Blocks. In my case the third person still has the work item, but gets a dump/error when he tries to open it.
    Also, if anyone has rejected the request, I should not have to wait for the rest to act on the work item before proceeding. But I guess a Block will not let you out until every work item has been processed.
    To summarize, here's what I need: I need to come out of the block under two conditions. One, if everyone has approved, come out of the block with an approval flag. Two, if anyone has rejected (even if some have not processed their work item), delete the work items from the others' inboxes and come out of the block with a rejection flag.
    So, any kind of input or suggestions on how this could be handled would be highly appreciated.
    Thanks
    MM

  • Batch processing and parallelism

    I have recently taken over a project that is a batch application processing a number of reports. For the most part the application is pretty solid in terms of what it needs to do. However, one of its goals is to achieve good parallelism when running on a multi-CPU system. The application does a large number of calculations for each report, and each report is broken down into a series of data units. The threading model is such that only, say, 5 report threads are running, with each report thread processing, say, 9 data units at a time. When the batch process executes on a 16-CPU Sun box running Solaris 8 and JDK 1.4.2, the application utilizes on average 1 to 2 CPUs, with some spikes to around 5 or 8 CPUs. Average CPU utilization hovers around 8% to 22%. Another oddity is that when the system is doing the calculations, and not reading from the database, CPU utilization drops rather than increases. So the goal of good parallelism is not being met right now.
    There is a database involved in the app, and one of the things that concerns me is that the DAOs are implemented oddly. For one thing, they are implemented either as singletons or as classes with all static methods. Some of these DAOs also have a number of synchronized methods. Each of the worker threads that processes a piece of the report data makes calls to many of these static and single-instance DAOs. Furthermore, there is what I'll call a "master DAO" that handles the logic of what work to process next and writes the status of the completed work. The master DAO does not handle writing the results of the data processing. When each data unit completes, the master DAO is called to update the status of the data unit and to get the next group of data units to process for the report. This master DAO is completely static and every method is synchronized. Additionally, some classes that perform data calculations are also implemented as singletons, and their accessor methods are synchronized.
    My gut tells me that having each thread call a singleton, or a series of static synchronized methods, is not going to help achieve good parallelism. Being new to parallel systems, I am not sure I am right to even look there. Additionally, if my gut is right, I don't quite know how to articulate the reasons why this design hinders parallelism. I am hoping that anyone with experience in parallel system design in Java can offer some pointers. I hope I have been clear while trying not to reveal too many of the finer details of the application :)
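    The "master DAO" handing out the next group of data units is essentially a synchronized work queue. A minimal Java sketch of that role follows (all names, the group size of 9, and the trivial "calculation" are invented); the point it illustrates is that only the hand-out needs to be synchronized, so workers spend most of their time computing rather than waiting on a lock.

```java
import java.util.*;
import java.util.concurrent.*;

public class MasterDao {
    private final Queue<Integer> pending = new ArrayDeque<>();
    public MasterDao(int units) { for (int i = 0; i < units; i++) pending.add(i); }

    // Only the hand-out of work is synchronized.
    public synchronized List<Integer> nextGroup(int size) {
        List<Integer> group = new ArrayList<>();
        while (group.size() < size && !pending.isEmpty()) group.add(pending.poll());
        return group;
    }

    public static int runReport(int units, int threads) throws Exception {
        MasterDao dao = new MasterDao(units);
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Future<Integer>> parts = new ArrayList<>();
        for (int t = 0; t < threads; t++)
            parts.add(pool.submit(() -> {
                int done = 0;
                // Pull groups of 9 until none remain; the "calculation" here
                // (counting) runs outside any lock.
                for (List<Integer> g = dao.nextGroup(9); !g.isEmpty(); g = dao.nextGroup(9))
                    done += g.size();
                return done;
            }));
        int total = 0;
        for (Future<Integer> f : parts) total += f.get();
        pool.shutdown();
        return total;
    }
}
```

    If the calculations themselves also went through synchronized singletons, the workers would serialize on those locks, which matches the low CPU utilization described above.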

    (quoting the DAO description from the question above)
    What I've quoted above suggests to me that what you are looking at may actually be good for parallel processing. It could also be an attempt that didn't come off completely.
    You suggest that these synchronized methods do not promote parallelism. That is true, but you have to consider what you hope to achieve from parallelism. If you have 8 threads all running the same query at the same time, what have you gained? More strain on the DB and the possibility of inconsistencies in the data.
    For example:
    Scenario 1:
    Say you have a DAO retrieval that is synchronized, and the query takes 20 seconds (for the sake of the example). Thread A comes in and starts the retrieval. Thread B comes in and requests the same data 10 seconds later. It blocks because the method is synchronized. When Thread A's query finishes, the same data is given to Thread B almost instantly.
    Scenario 2:
    The method that does the retrieval is not synchronized. When Thread B calls the method, it starts a new 20-second query against the DB.
    Which one gets Thread B the data faster while using fewer resources?
    The point is that it sounds like you have a bunch of queries whose results are being used by different reports. It may be that the original authors set it up to fire off a bunch of queries and then start the threads that build the reports. Obviously the threads cannot create the reports unless the data is there, so the synchronization makes them wait for it. When the data gets back, the report thread can continue on to the next piece of data it needs; if that isn't back yet, it waits there.
    This is actually an effective way to manage parallelism. What you may be seeing is that the critical path of data retrieval must complete before the reports can be generated. The best you can do is retrieve the data in parallel and let the report writers run in parallel once the data they need is retrieved.
    I think this is what was suggested above by matfud.
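    Scenario 1 above is essentially synchronized memoization: the first caller pays for the query, and later callers for the same key get the cached result almost instantly. A hedged Java sketch (the "query" is simulated; the class and key names are invented):

```java
import java.util.*;

public class CachingDao {
    private final Map<String, String> cache = new HashMap<>();
    private int queriesRun = 0;   // counts real "DB round-trips"

    // Synchronized: if Thread B asks while Thread A's query is running,
    // B blocks, then receives A's cached result (Scenario 1 behaviour).
    public synchronized String retrieve(String key) {
        return cache.computeIfAbsent(key, k -> {
            queriesRun++;                 // the expensive query happens once per key
            return "rows-for-" + k;
        });
    }

    public synchronized int queriesRun() { return queriesRun; }
}
```

    With two threads both calling `retrieve("orders")`, `queriesRun()` ends at 1, not 2: the second caller gains nothing from running the query in parallel, which is the trade-off matfud describes.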

  • Esb parallel processing problems with jms adapter and bpel

    Hi,
    I have an ESB project which dequeues from a JMS adapter; the ESB router then calls a BPEL process. This BPEL process takes about 10 seconds. The first step of the BPEL process is to return true to the client (the ESB router) so the ESB thinks it has finished. This does not work.
    Now, if I use an async routing rule in the ESB router to the BPEL web service, the ESB dequeues all the messages from the queue. All the ESB entries are green. But the ESB starts the BPEL processes one by one and then updates each ESB entry with a new start time (very strange).
    If I use a sync routing rule, it dequeues and calls the BPEL process one at a time.
    How can I process the JMS messages in parallel, given that I have two quad-core CPUs? Async routing rules look like the solution.
    And how can the ESB detect that the BPEL process is still running even when the first action in BPEL is to return true? I expected that when the ESB receives true from the BPEL web service it would end the current ESB entry and go on with the next one, so that the total ESB time would be a few seconds rather than lasting until the last BPEL process finishes.
    And why is it updating the start time?
    Thanks, Edwin
    Message was edited by:
    biemond

    (quoting the ESB/BPEL description from the question above)
    I am not sure it can ever work if you reply and then still proceed with your BPEL process.
    Here is something I would like to suggest:
    Can you try ESB -- (sync) --> BPEL, where the BPEL process itself is async? In that case the ESB will consume all the messages very fast; BPEL will get each message, put it in its delivery queue, and reply back to the ESB. BPEL will then process the messages simultaneously, based on the Receiver/Worker threads defined in your environment.
    HTH,
    Chintan
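    Chintan's suggestion amounts to: acknowledge fast, then let a pool of worker threads drain a delivery queue concurrently. A hedged Java sketch of that decoupling follows (the thread count and names are invented; real ESB/BPEL engines do this internally):

```java
import java.util.concurrent.*;

public class AsyncDelivery {
    public static int consumeAndProcess(int messages) throws InterruptedException {
        BlockingQueue<Integer> delivery = new LinkedBlockingQueue<>();
        // "ESB" side: enqueue everything immediately and reply right away.
        for (int m = 0; m < messages; m++) delivery.put(m);

        // "BPEL" side: worker threads process the delivery queue simultaneously.
        ExecutorService workers = Executors.newFixedThreadPool(8);
        CountDownLatch done = new CountDownLatch(messages);
        for (int w = 0; w < 8; w++)
            workers.submit(() -> {
                Integer m;
                while ((m = delivery.poll()) != null)
                    done.countDown();           // the per-message work
            });
        done.await(10, TimeUnit.SECONDS);
        workers.shutdown();
        return (int) (messages - done.getCount()); // messages processed
    }
}
```

    The consuming side finishes in roughly the time it takes to enqueue, while the slow per-message work proceeds in parallel behind it, which is the behaviour Edwin was expecting from the async routing rule.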
