Thread Pool design

I have a service that processes messages, and I now want it to run on multiple threads so it can execute faster. My question is: how do I build the thread pool, and what is the best design? Since I don't want to create n threads to process n messages, I should create a thread pool with a predefined number of threads.
So my main program will do something like this:
1) Check the thread pool for an available thread.
2) If one is available, create a task and execute it.
3) If not, wait (how do I wait???).
4) Once a thread has finished, it should become available for the next message if required.
Thanks in advance.
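For step 3, one common approach is not to poll the pool yourself, but to give a ThreadPoolExecutor a bounded queue and a RejectedExecutionHandler that blocks the submitter (via the queue's put()) until a worker frees up a slot. A minimal sketch, assuming 5 worker threads and at most 5 queued messages; the class name and the fake "message" task are only illustrative:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BlockingSubmitExample {
    public static void main(String[] args) throws InterruptedException {
        // 5 worker threads, at most 5 tasks waiting in the queue.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                5, 5, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(5),
                new RejectedExecutionHandler() {
                    // Called when the queue is full: block the submitter until space frees up.
                    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
                        try {
                            executor.getQueue().put(r);
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                    }
                });

        for (int i = 0; i < 20; i++) {
            final int id = i;
            pool.execute(new Runnable() {
                public void run() {
                    System.out.println("Processing message " + id);
                }
            });
        }
        pool.shutdown();                                  // stop accepting new work
        pool.awaitTermination(1, TimeUnit.MINUTES);       // wait for queued work to finish
    }
}

An even simpler variant is to pass ThreadPoolExecutor.CallerRunsPolicy as the handler, which makes the submitting thread run the task itself whenever the pool is saturated.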

I am not using put, and I am not using the array queue directly; as I understand it, that is for the executor's internal use. Here is what I am trying.
Thread pool class
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class MyThreadPool extends ThreadPoolExecutor {
     public MyThreadPool(int corePoolSize, int maximumPoolSize, long keepAliveTime,
               TimeUnit unit, BlockingQueue<Runnable> queue) {
          super(corePoolSize, maximumPoolSize, keepAliveTime, unit, queue);
     }
}
Runnable task
public class MyThread implements Runnable {
     public void run() {
          System.out.println("This is MyThread running...");
          try {
               Thread.sleep(5 * 1000);   // sleep() is static; no need to call it via currentThread()
               System.out.println("Completed...");
          } catch (InterruptedException e) {
               e.printStackTrace();
          }
     }
}
Main program
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.TimeUnit;

public class MyClient {
     public static void main(String[] args) {
          ArrayBlockingQueue<Runnable> queue = new ArrayBlockingQueue<Runnable>(5, false);
          MyThreadPool myThreadPool = new MyThreadPool(5, 5, 1, TimeUnit.SECONDS, queue);
          for (int i = 0; i < 6; i++) {
               MyThread t = new MyThread();
               if (myThreadPool.getTaskCount() != 1)
                    myThreadPool.submit(t);
          }
     }
}
Now see: my queue size is 5 and I am submitting 6 jobs to execute. When I run the main class, it runs all the jobs, but the program never terminates.
Output:
This is MyThread running...
This is MyThread running...
This is MyThread running...
This is MyThread running...
This is MyThread running...
Completed...
This is MyThread running...
Completed...
Completed...
Completed...
Completed...
Completed...
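Two things are going on here. Nothing is rejected because a pool with 5 core threads and a queue of 5 only starts refusing work once more than 10 tasks are outstanding, so 6 submissions fit comfortably. And the program never exits because the pool's worker threads are non-daemon and stay alive waiting for more work. A minimal sketch of the usual fix, assuming it is added at the end of MyClient.main after the loop:

          myThreadPool.shutdown();                                   // no new tasks accepted
          try {
               myThreadPool.awaitTermination(60, TimeUnit.SECONDS);  // wait for queued tasks to finish
          } catch (InterruptedException e) {
               Thread.currentThread().interrupt();
          }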

Similar Messages

  • A good design for a single thread pool manager using java.util.concurrent

    Hi,
    I am developing a client-side project which, in distinct subparts, will execute some tasks in parallel.
    So, to spell it out, something like this:
    program\
                \--flow A\
                           \task A1
                           \task A2
                \--flow B\
                            \task B1
                            \task B2
                            \...
    I would like both flow A and flow B (and all their launched sub-tasks) to be executed by the same thread pool, because I want to set a fixed number of threads that my program can globally run.
    My idea would be something like:
    public class ThreadPoolManager {
        private static ExecutorService executor;
        private static final Object classLock = ThreadPoolManager.class;

        /**
         * Returns the single instance of the ExecutorService by means of
         * lazy initialization.
         * @return the single instance of ThreadPoolManager
         */
        public static ExecutorService getExecutorService() {
            synchronized (classLock) {
                if (executor != null) {
                    return executor;
                } else {
                    // TODO: put the dimension of the FixedThreadPool in a property file
                    executor = Executors.newFixedThreadPool(50);
                }
                return executor;
            }
        }

        /** Private constructor: deny creating a new object. */
        private ThreadPoolManager() {
        }
    }
    The tasks I have to execute will be of type Callable, since I expect some results, so you see an ExecutorService interface above.
    The flaws with this design is that I don't prevent the use (for example) of executor.shutdownNow(), which would cause problems.
    The alternative solution I have in mind would be to make ThreadPoolManager a singleton that implements ExecutorService, implementing all the methods by delegation to an ExecutorService object created when the ThreadPoolManager is instantiated for the first time and returned to the client:
    public class ThreadPoolManager implements ExecutorService {
        private static ThreadPoolManager pool;
        private static final Object classLock = ThreadPoolManager.class;
        private ExecutorService executor;

        /**
         * Returns the single instance of the ThreadPoolManager by means of
         * lazy initialization.
         * @return the single instance of ThreadPoolManager
         */
        public static ExecutorService getThreadPoolManager() {
            synchronized (classLock) {
                if (pool != null) {
                    return pool;
                } else {
                    // create the real thread pool
                    // TODO: put the dimension of the FixedThreadPool in a property file
                    pool = new ThreadPoolManager();
                    pool.executor = Executors.newFixedThreadPool(50);
                    // executor = Executors.newCachedThreadPool();
                    return pool;
                }
            }
        }

        /** Private constructor: deny creating a new object. */
        private ThreadPoolManager() {
        }

        /* ======================================== */
        /* Implement the ExecutorService interface methods via delegation to executor
         * (forbidden method calls, like shutdownNow(), will be "ignored").
         */
        // .....
    }
    I hope to have expressed everything, and I hope to receive an answer that clarifies my doubts or gives me a hint towards an alternative or already-made solution.
    ciao
    Alessio

    Two things. Firstly, it's better to use
    private static final Object classLock = new Object();
    because that saves you worrying about whether any other code synchronises on it. Secondly, if you do decide to go for the delegation route, then java.lang.reflect.Proxy may be a good way forward.
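    One way to act on the Proxy suggestion, sketched under the assumption that silently ignoring shutdown()/shutdownNow() is the behaviour you want (the class name and the pool size of 50 are taken from the post above):

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.Method;
    import java.lang.reflect.Proxy;
    import java.util.Collections;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ThreadPoolManager {
        private static ExecutorService instance;

        // Lazily build a proxy that forwards everything to one shared fixed pool,
        // but swallows shutdown()/shutdownNow() so no client can kill the pool.
        public static synchronized ExecutorService getExecutorService() {
            if (instance == null) {
                final ExecutorService real = Executors.newFixedThreadPool(50);
                instance = (ExecutorService) Proxy.newProxyInstance(
                        ExecutorService.class.getClassLoader(),
                        new Class<?>[] { ExecutorService.class },
                        new InvocationHandler() {
                            public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                                String name = method.getName();
                                if (name.equals("shutdown")) {
                                    return null;                      // ignored
                                }
                                if (name.equals("shutdownNow")) {
                                    return Collections.emptyList();   // pretend nothing was pending
                                }
                                return method.invoke(real, args);
                            }
                        });
            }
            return instance;
        }

        private ThreadPoolManager() { }
    }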

  • Pattern for Thread Pool?

    Hi
    I want to build a kind of download manager. The application should be able to handle several concurrent threads, each representing a download in progress.
    I thought it might be more efficient to reuse a download thread after the download has ended rather than create a new thread each time (like the connection object for DB queries). Is this right? If yes, I thought to build a thread pool that serves a limited number of threaded download objects as requested (am I on the right way?).
    Now, I have two basic problems: (a) is it right that, if the run() method of a thread has ended, the whole thread gets destroyed? If yes, how should I prevent the thread from being destroyed, so I can reuse it later on? Second, (b) how would that pool mechanism look? There must be some kind of vector where I put in and take out the threads.
    As you see, these are basic "pool" technique questions. So I thought maybe there is a design pattern that would give me the basic mechanism. Does anyone know such a pattern?
    Thanks for your help
    josh

    I thought it might be more efficient to reuse a download thread after the download has ended rather than create a new thread each time (like the connection object for DB queries). Is this right?
    It may be right, if creating new threads is wasting enough CPU cycles to justify the complication of a thread pool. Maybe for a high-load server it would be more efficient. You'll have to figure that out for your own specific application.
    Another good use for thread pools is to avoid putting time-consuming operations in ActionListeners, etc. Instead you can have them pass the task off to a thread pool, keeping the GUI responsive.
    Now, I have two basic problems: (a) is it right that, if the run() method of a thread has ended, the whole thread gets destroyed? If yes, how should I prevent the thread from being destroyed, so I can reuse it later on? Second, (b) how would that pool mechanism look? There must be some kind of vector where I put in and take out the threads.
    (a) You are right. Therefore, the worker threads should not exit their run() methods until interrupted. (b) Worker threads could check a job queue (containing Runnables, perhaps) and, if there are none, they should wait() on some object. When another thread adds a new job to the queue, it should call notify() on the same object, thus waking up one of the worker threads to perform the task.
    I wrote a thread pool once, just as an exercise. You will run into a number of problems and design issues (such as: what should the worker threads do when interrupted, exit immediately or clear the job queue and then exit?). If you have any more questions, ask in this thread.
    Krum
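    For reference, here is a minimal sketch of the worker-pool mechanism Krum describes; a BlockingQueue does the wait()/notify() bookkeeping for you, since take() blocks until a job arrives (the class and thread names are only illustrative):

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class SimpleWorkerPool {
        private final BlockingQueue<Runnable> jobs = new LinkedBlockingQueue<Runnable>();

        public SimpleWorkerPool(int nThreads) {
            for (int i = 0; i < nThreads; i++) {
                Thread worker = new Thread(new Runnable() {
                    public void run() {
                        try {
                            while (true) {
                                jobs.take().run();   // blocks until a job is available
                            }
                        } catch (InterruptedException e) {
                            // interrupted: let the worker thread exit
                        }
                    }
                }, "download-worker-" + i);
                worker.start();
            }
        }

        public void submit(Runnable job) {
            jobs.add(job);
        }
    }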

  • Using AsyncEventHandlers as a RTJ buildin thread pool

    Hi David
    1. I would like to use the AsyncEventHandler mechanism as a thread pool. Will there be a way in a future version to access the AsyncEventHandler's RT thread pool API?
    2. I am using the BoundAsyncEventHandler for long-duration tasks (since the BoundAsyncEventHandler is bound to a specific thread). Will there be a way in a future version to bind a specific RT thread to this handler - for example an RT thread that was created by my own thread pool?
    Thanks

    Gabi wrote:
    1. We need our code to access the AEH's "thread pool" and check the pool's availability; increase/decrease the pool size dynamically; check the pool size for enough free threads to run any given number of tasks; and so on...
    There are no APIs for this. How a VM supports AEH execution is an implementation detail, and while a "pool" is the logical form of implementation, the details can vary significantly, so there's no real way to define an API to control it without defining what form it must take. For example, in Java RTS the pool threads are initially at the highest system priority and then drop down to the priority of the AEH they execute. In a different design you might have a pool of threads per priority level (or set of priority levels), so even the notion of increasing/decreasing the pool size is not necessarily straightforward. In Java RTS the pool will grow as needed to service outstanding AEH activations. You can control the initial number of threads - and there are two pools: one for heap-using and one for non-heap AEH.
    2. I was thinking about adding the following method to the BAEH class:
    void setBoundThread(RTThread t) throws IllegalArgumentException
    and also adding the RTThread as a constructor parameter. Inside this method, the thread should be checked to see whether it has started, and if so an IllegalArgumentException should be thrown.
    Sure, but why do you need to do this? What is different about the RTT that you pass in? The thread executing an AEH acquires the characteristics of the AEH.
    An API like this raises some questions, such as: when would the thread be started? It also has a problem in that the run() method of the thread has to know how to execute an AEH, which means we'd (the RTSJ expert group) probably have to define a class BoundAEHThread that has a final run() method and you'd then subclass it.
    3. An additional question: what do you think about using a lock (CountDownLatch) during the AEH's executions? I've seen somewhere that AEH should only be used for short-lived tasks - is that true? (The JavaDoc says there is no problem doing so, but I want to be sure.)
    If an AEH blocks for any length of time and additional AEH are activated, then the pool management overhead may increase (unless you already sized it appropriately). But otherwise there's no restriction on what the code in an AEH can do. Note, though, that none of the java.util.concurrent synchronization tools support priority-inversion avoidance, so you need to be careful if using them (i.e. only use them across threads of the same priority and don't expect FIFO access).
    David Holmes

  • Thread Pool with Min,Max. and other thread parameters

    hello,
    I found an implementation of a ThreadPool on java.sun.com, but it didn't have the minimum/maximum number of threads implemented in the code.
    Could you help me find an implementation of a thread pool that keeps track of the minimum number of threads, the maximum number of threads in the pool, the increment size, the number of idle threads, the idle time allowed for a thread, etc.?
    I also need to know if it is possible to have >5000 threads in a ThreadPool, because I get an OutOfMemoryError when I use a ThreadPool (which didn't have any of the above-mentioned parameters!).
    Thanks!!!

    Having more than 5000 threads is a sign of a problem with your design. Threads are relatively costly beasts. The idea of using a pool for your threads is that you won't need to use 5000 threads simultaneously. Most OSes will have problems trying to create this many threads per process (unless you tune the OS itself, which is normally not too difficult).
    matfud
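    For what it's worth, java.util.concurrent.ThreadPoolExecutor already exposes most of the requested knobs (core/minimum size, maximum size, idle keep-alive time) plus runtime counters; a small sketch with illustrative numbers:

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class PoolWithLimits {
        public static void main(String[] args) {
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    10,                                        // core (minimum) pool size
                    200,                                       // maximum pool size
                    30, TimeUnit.SECONDS,                      // idle time before an extra thread is retired
                    new LinkedBlockingQueue<Runnable>(1000));  // bounded queue of pending work

            System.out.println("core=" + pool.getCorePoolSize()
                    + " max=" + pool.getMaximumPoolSize()
                    + " current=" + pool.getPoolSize()
                    + " active=" + pool.getActiveCount());
            pool.shutdown();
        }
    }

    Note that with a bounded queue the pool only grows beyond the core size once the queue fills up, and the OutOfMemoryError with thousands of threads is expected: each thread reserves its own stack, so >5000 threads exhausts memory long before the pool logic itself is the problem.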

  • Thread Pool in JRT

    Hi All,
    1) The first question is: what is the recommended way to allocate a thread pool in an RT system?
    2) What is the best way to handle this:
    - if my task takes a long time, can I just suspend the thread and then resume it again (is that possible at all?),
    or is the other option to return the thread to the pool and get it again each time?
    I'm asking the second question from the point of view of performance and low latency.
    I assumed that each time I return the thread to the pool and then activate it again it will cost CPU time.
    I hope my question is clear enough.
    TIA
    Gabi

    Hi Gabi,
    I'm not completely clear on your questions. You can create a thread pool in an RT system the same basic way as in a non-RT system, but you need to deal with priorities correctly. One way to do this would be to have the pool threads run at maximum priority normally and then drop to the actual priority of a submitted work task. Another model is to split the pool into groups based on priority, so effectively there is a work queue per priority level and a group of threads to service each queue. This second model is used in the Real-time CORBA "lanes" approach. Designing an effective thread pool is a non-trivial task (take a look at the java.util.concurrent ThreadPoolExecutor), and dealing with RT priorities and RT latency issues makes it even more complex.
    Alternatively rather than use an explicit thread pool consider whether you can convert the design to one using AsyncEvents and AsyncEventHandlers.
    Question 2 I don't really understand. Pool threads typically block on a queue waiting for work tasks to appear. But you can also have a non-terminating task that effectively picks up its own work from elsewhere - eg a task that reads from a socket to get the next "job" to process. Any interaction between threads is going to cost CPU time, whether a pool is involved or not, but yes there will be overhead associated with a pool, just as there is starting and terminating a thread, or performing any kind of thread coordination/synchronization.
    Regards,
    David Holmes

  • Optimization of Thread Pool Executor

    Hi,
    I am using ThreadPoolExecutor as a replacement for legacy Thread usage.
    I have created the executor as below:
    pool = new ThreadPoolExecutor(coreSize, size, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>(coreSize), new CustomThreadFactory(name), new CustomRejectionExecutionHandler());
    pool.prestartAllCoreThreads();
    Here the core size is maxpoolsize/5, and I have prestarted all the core threads on startup of the application, roughly around 160 threads.
    In the legacy design we were creating and starting around 670 threads.
    But the point is, even after using the Executor and replacing the legacy design, we are not getting much better results.
    For memory we are using the top command to see usage, and for time we have placed loggers of System.currentTimeMillis() to check elapsed time.
    Please tell me how to optimize this design.
    Thanks & Regards,
    tushar

    The first step is to decide what you want to optimize. I'm guessing you want to minimize the total elapsed time for the job to run, but anyway that's your first step. Determine what you're optimizing. Then measure the performance of the job, with respect to that metric, for various pool sizes, and see what happens.
    However I'm not sure why you expected that rewriting the application to use the ThreadPoolExecutor would improve the runtime performance of the job. I would expect it might simplify the code and make it easier for future programmers to understand, but it's possible that your thread pool implementation performs just as well.
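    A rough way to run the measurement suggested above: push the same batch of work through pools of several sizes and time each run. Everything here (task count, the sleep standing in for real work, the candidate sizes) is a placeholder to be replaced with the real job:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class PoolSizeBenchmark {
        public static void main(String[] args) throws Exception {
            int[] sizes = { 40, 80, 160, 320, 670 };
            for (int size : sizes) {
                ExecutorService pool = Executors.newFixedThreadPool(size);
                List<Callable<Void>> work = new ArrayList<Callable<Void>>();
                for (int i = 0; i < 10000; i++) {
                    work.add(new Callable<Void>() {
                        public Void call() throws Exception {
                            Thread.sleep(5);                  // stand-in for the real task
                            return null;
                        }
                    });
                }
                long start = System.currentTimeMillis();
                pool.invokeAll(work);                         // blocks until every task completes
                long elapsed = System.currentTimeMillis() - start;
                pool.shutdown();
                pool.awaitTermination(1, TimeUnit.MINUTES);
                System.out.println("poolSize=" + size + " elapsedMs=" + elapsed);
            }
        }
    }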

  • Blog Post on a Worker Pool Design Pattern without VI Server

    Sometimes you think there's nothing left to discover and then someone shows you something you never thought of...
    This was fascinating. Tomi, a frequent poster over at the Lava forums, posted a new blog entry on his ExpressionFlow blog on a Worker Pool design pattern that doesn't use VI Server to spawn the worker threads, but instead uses the new recursion functionality in LV8.6 and specifically 2009! This is truly an interesting premise, because it not only uses recursion, which people have wanted to use natively in LabVIEW for years, but it uses recursion in a way that standard imperative programming languages would not do well, even though they've supported recursion from the start!
    Check it out.
    Jarrod S.
    National Instruments

    I am sorry you feel uncomfortable about the license terms. Being a software company owner, I know the difficulty of figuring out the license restrictions of various software components. However, that is exactly why I provide ExpressionFlow example code under a Creative Commons Attribution license. The Creative Commons licenses do not restrict where the example code can be used, and the terms are rather widely known. Should the example code have no license terms, you would not know how you would be allowed to use or share the code. Now you have it in black and white.
    Human-readable summary of the Creative Commons Attribution 3.0 Unported license:
    You are free:
      to Share — to copy, distribute and transmit the work
      to Remix — to adapt the work
    Under the following conditions:
      Attribution — You must attribute the work in the manner specified by the author or licensor (but not in any way that suggests that they endorse you or your use of the work).
    With the understanding that:
      Waiver — Any of the above conditions can be waived if you get permission from the copyright holder.
      Other Rights — In no way are any of the following rights affected by the license:
        Your fair dealing or fair use rights;
        The author's moral rights;
        Rights other persons may have either in the work itself or in how the work is used, such as publicity or privacy rights.
      Notice — For any reuse or distribution, you must make clear to others the license terms of this work. The best way to do this is with a link to this web page.
    Tomi Maila

  • Implementing FSM's with Threads, soliciting design opinions

    I'm currently designing a system that has some unusual requirements and was hoping to solicit the esteemed opinions available at this forum regarding a potential design that may be interpreted as an abuse of Threads.
    This system consists of a large collection of Finite State Machines that transition from state to state based on events originating from both within and without the system.
    Each event coming into the system is targeted to a particular FSM. It is critical to the system that the events be serviced by the FSM instantaneously. The system should consume and process events as fast as they come in and not get backlogged.
    I would like to optimize the number of FSM's that can be hosted by a single instance of the system before backlogging occurs. Ideally the system should be adaptive and detect when it is not servicing events as quickly as possible and begin to shutdown FSM's until it detects that it is operating at peak efficiency.
    The design I'm considering entails using an instance of a Thread and a Queue for each FSM instance in the system. There may be as many as 3000 or so FSM's altogether. Additionally I would have a Thread servicing each event source, of which there are only a handful, and it would be this Thread's job to receive and dispatch an event to the appropriate Queue for the particular FSM for which the event is targeted.
    I envision the FSM's Thread wait()ing on the monitor on the Queue and having the event dispatcher Thread performing the notify() after enqueing the event. Using this design allows me to have yet another Thread that will periodically peek at the size of the Queues to ensure that they are consistently near empty. If the Queue's begin to get backlogged, then the Monitoring Thread can begin shutting down FSM Threads until the optimum number of active FSM's Threads is achieved.
    What I don't like about this design is the potentially large number of Threads being instantiated. I would appreciate and welcome suggestions for a better way of doing this.
    Thanks,
    -S

    Bob, Matfud,
    First I'd like to thank both of you for taking the time to reply. I must admit that I originally wanted to approach the problem in a manner similar to your respective suggestions and indeed solicited opinions here because I also was uncomfortable with the waste in having so many threads. However there are some issues which still steer me in that direction.
    If I were to use a single Queue up front as suggested then the disparate event sources would be forced to synchronize on every event thus limiting to some extent concurrency. By having separate Queues, the only time synchronization comes into play is when two or more event sources have events targeted to the same FSM at the same time. On a multiprocessor box this allows me to have disparate events dispatched simultaneously rather than sequentially in most cases. I feel that this would add a considerable degree of liveness and concurrency to the system.
    With regard to employing a Thread pool, I like the idea as it would give me better control over the number of simultaneously executing Threads as is indeed the case with Session beans. However, I'm not sure that it provides optimal utilization of resources, but rather consistent utilization of resources. Finding the Thread pool dry may not indicate that I'm already using maximum resources but may instead just be a sign that I did not allocate a sufficient number of Threads to the pool.
    Nevertheless, my main reason for considering independent Threads per Finite State Machine is also one of concurrency. Consider the following scenario: Two event sources receive two separate events, both targeted for the same FSM. If the event dispatcher were to pluck two Threads from the Thread pool and allow each to attempt to inflict their respective events on the FSM, then I'll need to provide a good deal of synchronization within the FSM as a single FSM cannot service multiple events concurrently, but by definition must service them in sequence thus triggering state transitions. On the other hand, if each FSM is a Thread of its own, then it will wait() until notified that there is an event on the Queue and will pull and process one event at a time until the Queue is drained upon which time it will return to its idle state and wait() again until notified. In this scenario the FSM's processing need not be synchronized since it is pulling the event, and only the momentary access of its Queue requires synchronization.
    Matfud, to your point the amount of work per transaction will involve a number of calculations and decision making. Overall considerably more processor effort will reside in the FSM logic as opposed to the event queueing and dispatching mechanism. With regard to interaction between FSM's, there is none.
    Again I want to thank you both for your suggestions. I look forward to your responses as I eagerly want someone to convince me to abandon the design I have so far in favor of something more elegant and performant.
    bob_boothby, matfud, mattbunch, dubwai, YATArchivist, DrClap, warnerja, sylviae, jschell, kdgregory, somebody, anybody, please talk me out of doing this.
    -S
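    For what it's worth, one middle ground between 3000 dedicated threads and a single shared queue is to give each FSM its own event queue but share one fixed pool, scheduling at most one "drain" task per FSM at a time. Events for one FSM then run strictly in order with no locking inside the FSM itself, while independent FSMs still run concurrently. A sketch under those assumptions (all names illustrative):

    import java.util.ArrayDeque;
    import java.util.Queue;
    import java.util.concurrent.ExecutorService;

    public class SerialFsm {
        private final Queue<Runnable> events = new ArrayDeque<Runnable>();
        private final ExecutorService pool;
        private boolean scheduled = false;

        public SerialFsm(ExecutorService pool) {
            this.pool = pool;
        }

        public synchronized void post(Runnable event) {
            events.add(event);
            if (!scheduled) {                    // at most one drain task in flight per FSM
                scheduled = true;
                pool.execute(new Runnable() {
                    public void run() {
                        drain();
                    }
                });
            }
        }

        private void drain() {
            while (true) {
                Runnable next;
                synchronized (this) {
                    next = events.poll();
                    if (next == null) {
                        scheduled = false;       // queue empty: a later post() schedules a new drain
                        return;
                    }
                }
                next.run();                      // FSM transitions run one at a time, in order
            }
        }
    }

    A single Executors.newFixedThreadPool(N) instance would be shared by all SerialFsm objects, and the monitoring thread could still inspect each events queue for backlog.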

  • JRun Thread Pool Issue

    I'm running CF 9.0.1 on Ubuntu on an "Medium" Amazon EC2 instance. CF has been crashing intermittently (several times per day). At such times, running top gets me this (or something similar):
    PID    USER    PR  NI  VIRT   RES   SHR  S  %CPU  %MEM  TIME+     COMMAND
    15855  wwwrun  20  0   1762m  730m  20m  S  99.3  19.4  13:22.96  coldfusion9
    So, it's obviously consuming most of the server resources. The following error has been showing up in my cfserver.log in the leadup to each crash:
    java.lang.RuntimeException: Request timed out waiting for an available thread to run. You may want to consider increasing the number of active threads in the thread pool.
    If I run /opt/coldfusion9/bin/coldfusion status, I get:
    Pg/Sec  DB/Sec  CP/Sec  Reqs  Reqs  Reqs  AvgQ   AvgReq AvgDB  Bytes  Bytes
    Now Hi  Now Hi  Now Hi  Q'ed  Run'g TO'ed Time   Time   Time   In/Sec Out/Sec
    0   0   0   0   -1  -1  150   25    0     0      -1352560      0      0
    In the administrator, under Server Settings > Request Tuning, the setting for Maximum number of simultaneous Template requests is 25. So this makes sense so far. I could just increase the thread pool to cover these sort of load spikes. I could make it 200. (Which I did just now as a test.)
    However, there's also this file /opt/coldfusion9/runtime/servers/coldfusion/SERVER-INF/jrun.xml. And some of the settings in there appear to conflict. For example, it reads:
    <service class="jrunx.scheduler.SchedulerService" name="SchedulerService">
      <attribute name="bindToJNDI">true</attribute>
      <attribute name="activeHandlerThreads">25</attribute>
      <attribute name="maxHandlerThreads">1000</attribute>
      <attribute name="minHandlerThreads">20</attribute>
      <attribute name="threadWaitTimeout">180</attribute>
      <attribute name="timeout">600</attribute>
    </service>
    Which a) has fewer active threads (what does this mean?), and b) has a maxHandlerThreads value that exceeds the simultaneous request limit set in the admin. So, I'm not sure. Are these independent configs that need to be made to match manually? Or is the jrun.xml file supposed to be written by the CF Admin when changes are made there? Hmm. But maybe this is different because presumably the CF Scheduler should only use a subset of all available threads, right...so we'd always have some threads for real live users. We also have this in there:
    <service class="jrun.servlet.http.WebService" name="WebService">
      <attribute name="port">8500</attribute>
      <attribute name="interface">*</attribute>
      <attribute name="deactivated">true</attribute>
      <attribute name="activeHandlerThreads">200</attribute>
      <attribute name="minHandlerThreads">1</attribute>
      <attribute name="maxHandlerThreads">1000</attribute>
      <attribute name="mapCheck">0</attribute>
      <attribute name="threadWaitTimeout">300</attribute>
      <attribute name="backlog">500</attribute>
      <attribute name="timeout">300</attribute>
    </service>
    This appears to have changed when I changed the CF Admin setting...maybe...but it's the activeHandlerThreads that matches my new maximum simulataneous requests setting...rather than the maxHandlerThreads, which again exceeds it. Finally, we have this:
    <service class="jrun.servlet.jrpp.JRunProxyService" name="ProxyService">
      <attribute name="activeHandlerThreads">200</attribute>
      <attribute name="minHandlerThreads">1</attribute>
      <attribute name="maxHandlerThreads">1000</attribute>
      <attribute name="mapCheck">0</attribute>
      <attribute name="threadWaitTimeout">300</attribute>
      <attribute name="backlog">500</attribute>
      <attribute name="deactivated">false</attribute>
      <attribute name="interface">*</attribute>
      <attribute name="port">51800</attribute>
      <attribute name="timeout">300</attribute>
      <attribute name="cacheRealPath">true</attribute>
    </service>
    So, I'm not certain which (if any) of these I should change and what exactly the relationship is between maximum requests and maximum threads. Also, since several of these list the maxHandlerThreads as 1000, I'm wondering if I should just set the maximum simultaneous requests to 1000. There must be some upper limit that depends on available server resources...but I'm not sure what it is and I don't really want to play around with it since it's a production environment.
    I'm not sure if it pertains to this issue at all, but when I run a ps aux | grep coldfusion I get the following:
    wwwrun   15853  0.0  0.0    8704    760 pts/1  S    20:22   0:00 /opt/coldfusion9/runtime/bin/coldfusion9 -jar jrun.jar -autorestart -start coldfusion
    wwwrun   15855  5.4 18.2 1678552 701932 pts/1  Sl   20:22   1:38 /opt/coldfusion9/runtime/bin/coldfusion9 -jar jrun.jar -start coldfusion
    There are always these two and never more than these two processes. So there does not appear to be a one-to-one relationship between processes and threads. I recall from an MX 6.1 install I maintained for many years that additional CF processes were visible in the process list. It seemed to me at the time like I had a process for each thread...so either I was wrong or something is quite different in version 9 since it's reporting 25 running requests and only showing these two processes. If a single process can have multiple threads in the background, then I'm given to wonder why I have two processes instead of one...just curious.
    So, anyway, I've been experimenting while composing this post. As noted above I adjusted the maximum simulataneous requests up to 200. I was hoping this would solve my problem, but CF just crashed again (rather it slogged down and requests started timing out...so effectively "crashed"). This time, top looked similar (still consuming more than 99% of the CPU), but CF status looked different:
    Pg/Sec  DB/Sec  CP/Sec  Reqs  Reqs  Reqs  AvgQ   AvgReq AvgDB  Bytes  Bytes
    Now Hi  Now Hi  Now Hi  Q'ed  Run'g TO'ed Time   Time   Time   In/Sec Out/Sec
    0   0   0   0   -1  -1  0     150   0     0      0      0      0      0
    Obviously, since I'd increased the maximum simultaneous requests, it was allowing more requests to run simultaneously...but it was still maxing out the server resources.
    Further experiments (after restarting CF) showed me that the server became unusably bogged down after about 30-35 "Reqs Run'g", with all additional requests headed for an inevitable timeout:
    Pg/Sec  DB/Sec  CP/Sec  Reqs  Reqs  Reqs  AvgQ   AvgReq AvgDB  Bytes  Bytes
    Now Hi  Now Hi  Now Hi  Q'ed  Run'g TO'ed Time   Time   Time   In/Sec Out/Sec
    0   0   0   0   -1  -1  0     33    0     0      -492   0      0      0
    So, it's clear that increasing the maximum simultaneous requests has not helped. I guess what it comes down to is this: What is it having such a hard time with? Where are these spikes coming from? Bursts of traffic? On what pages? What requests are running at any given time? I guess I simply need more information to continue troubleshooting. If there are long-running requests, or other issues, I'm not seeing it in the logs (although I do have that option checked in the admin). I need to know which requests exactly are those responsible for these spikes. Any help would be much appreciated. Thanks.
    ~Day

    I really appreciate your help. However, I haven't been able to find the JRun Thread settings you describe above.
    Under Request Tuning, I see:
    Server Settings > Request Tuning
    Request Limits
    Maximum number of simultaneous Template requests
      Restricts the number of simultaneously processed requests. Use this setting to increase overall system performance for heavy load applications. Requests beyond the specified limit are queued. On Standard Edition, you must restart ColdFusion to enable this setting. 
    Maximum number of simultaneous Flash Remoting requests
      The number of Flash Remoting requests that can be processed concurrently.
    Maximum number of simultaneous Web Service requests
      The number of Web Service requests that can be processed concurrently.
    Maximum number of simultaneous CFC function requests
      The number of ColdFusion Component methods that can be processed concurrently via HTTP. This does not affect invocation of CFC methods from within CFML, only methods requested via an HTTP request.
    Tag Limit Settings
    Maximum number of simultaneous Report threads
      The maximum number of ColdFusion reports that can be processed concurrently.
    Maximum number of threads available for CFTHREAD
      The maximum number of threads created by CFTHREAD that will be run concurrently. Threads created by CFTHREAD in excess of this are queued.  On Standard Edition, the maximum limit is 10. 
    And under Java and JVM, I see:
    Server Settings > Java and JVM
        Java and JVM settings control the way ColdFusion starts the Java Virtual Machine when it starts.  You can control settings like what classpaths are used and how memory is allocated as well as add custom command line arguments.  Changing these settings requires restarting ColdFusion.  If you enter an incorrect setting, ColdFusion may not restart properly. 
       Backups of the jvm.config file are created when you hit the submit button. You can use this backup to restore from a critical change. 
       Java Virtual Machine Path
      Specifies the location of the Java Virtual Machine.
       Minimum JVM Heap Size (MB)         Maximum JVM Heap Size  (MB)       
       The Memory Size settings determine the amount of memory that the JVM can use for programs and data. 
       ColdFusion Class Path
      Specifies any additional class paths for the JVM, with multiple directories separated by  commas.
       JVM Arguments
      -server -Dsun.io.useCanonCaches=false -XX:MaxPermSize=192m -XX:+UseParallelGC -Xbatch -Dcoldfusion.rootDir={application.home}/../ -Dcoldfusion.libPath={application.home}/../lib
      Specifies any specific JVM initialization options, separated by spaces.
    I did go take a look at FusionReactor and found it's not free (which would be fine, of course, if it would actually help). It looks like there's a fully functional demo, which is cool...but I haven't been able to get it to install yet, so we'll see.
    Thanks again!
    ~Day
    (By the way, I've cross-posted this inquiry on StackOverflow. So if you're able to help me arrive at a solution you might want to answer there as well.)

  • Custom thread pool for Java 8 parallel stream

    It seems that it is not possible to specify the thread pool for a Java 8 parallel stream. If that's so, the whole feature is useless in most situations. The only situation where I can safely use it is a small single-threaded application written by one person.
    In all other cases, if I cannot specify the thread pool, I have to share the default pool with other parts of the application. If someone submits a task that takes a lot of time, my tasks will get stuck. Is that correct, or am I overlooking something?
    Imagine that someone submits slow networking operation to the fork-join pool. It's not a good idea, but it's so tempting that it will be happening. In such case, all CPU intensive tasks executed on parallel streams will wait for the networking task to finish. There is nothing you can do to defend your part of the application against such situations. Is that so?

    You are absolutely correct. That isn't the only problem with using the F/J framework as the parallel engine for bulk operations. Have a look at http://coopsoft.com/ar/Calamity2Article.html
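    There is a commonly used workaround: start the parallel stream's terminal operation from inside a task submitted to your own ForkJoinPool, and the stream's work runs in that pool instead of the common pool. Note this exploits observed implementation behaviour rather than anything the specification guarantees. A small sketch:

    import java.util.concurrent.ForkJoinPool;
    import java.util.stream.LongStream;

    public class CustomPoolParallelStream {
        public static void main(String[] args) throws Exception {
            ForkJoinPool myPool = new ForkJoinPool(4);        // dedicated 4-thread pool
            long sum = myPool.submit(() ->
                    LongStream.rangeClosed(1, 1_000_000)
                              .parallel()
                              .sum()                          // terminal op runs on myPool's workers
            ).get();                                          // wait for the result
            System.out.println(sum);
            myPool.shutdown();
        }
    }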

  • Thread pool rejecting threads when I don't think it should, ideas?

    Hi,
    I have a server application in which I only want a specific number of simultaneous requests. If the server gets more than this number, it is supposed to close the connection (it sends an HTTP 503 error to the client). To do this I used a fixed thread pool. When I start the server and submit the max number of requests I get the expected behavior. However, if I resubmit the requests (within a small period of time, e.g. 1-15 seconds after the first batch) I get very odd behavior in that some of the requests are rejected. For example, if I set the max to 100, the first set of requests will work fine (100 requests, 100 responses). I then submit again and a small number will be rejected (I've seen it range from 1 to 15 rejected)....
    I made a small app which roughly duplicates this behavior (see below). Basically what I see is that the first round of submissions works fine, but on the second round I get a rejection. As best as I can tell, none should be rejected....
    Here is the code, I welcome your thoughts or if you see something I am doing wrong here...
    <pre>
    import java.util.concurrent.*;
    import java.util.concurrent.atomic.AtomicInteger;

    public class ThreadPoolTest {
        static AtomicInteger count = new AtomicInteger();

        public static class threaded implements Runnable {
            @Override
            public void run() {
                System.out.println("In thread: " + Thread.currentThread().getId());
                try {
                    Thread.sleep(500);
                } catch (InterruptedException e) {
                    System.out.println("Thread: " + Thread.currentThread().getId()
                            + " interrupted");
                }
                System.out.println("Exiting run: " + Thread.currentThread().getId());
            }
        }

        private static int maxThreads = 3;
        private ThreadPoolExecutor pool;

        public ThreadPoolTest() {
            super();
            pool = new java.util.concurrent.ThreadPoolExecutor(
                    1, maxThreads - 1, 60L, TimeUnit.SECONDS,
                    new ArrayBlockingQueue<Runnable>(1));
        }

        public static void main(String[] args) throws InterruptedException {
            ThreadPoolTest object = new ThreadPoolTest();
            object.doThreads();
            Thread.sleep(3000);
            object.doThreads();
            object.pool.shutdown();
            try {
                object.pool.awaitTermination(60, TimeUnit.SECONDS);
            } catch (InterruptedException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        }

        private void doThreads() {
            int submitted = 0, rejected = 0;
            int counter = count.getAndIncrement();
            for (int x = 0; x < maxThreads; x++) {
                try {
                    System.out.println("Run #: " + counter + " submitting " + x);
                    pool.execute(new threaded());
                    submitted++;
                } catch (RejectedExecutionException re) {
                    System.err.println("\tRun #: " + counter + ", submission " + x
                            + " was rejected");
                    System.err.println("\tQueue active: " + pool.getActiveCount());
                    System.err.println("\tQueue size: " + pool.getPoolSize());
                    rejected++;
                }
            }
            System.out.println("\n\n\tRun #: " + counter);
            System.out.println("\tSubmitted: " + (submitted + rejected));
            System.out.println("\tAccepted: " + submitted);
            System.out.println("\tRejected: " + rejected + "\n\n");
        }
    }
    </pre>

    First, thank you for taking the time to reply; I do appreciate it.
    jtahlborn - The code provided here is a contrived example trying to emulate the bigger app as best as I could. The actual program doesn't have any sleeps; the sleep in the secondary thread is to simulate the program doing some work and replying to a request. The sleep in the primary thread is to simulate a small delay between 'requests' to the pool. I can make this 1 second and up to (at least) 5 seconds with the same results. Additionally, I can take out the sleep in the secondary thread and still see a rejection.
    EJP - Yes, I am aware of the TCP/IP queue; however, I don't see that as relevant to my question. The idea is not to prevent the connection but to respond to the client saying we can't process the request (send an "HTTP 503" error). So basically, if we have, say, 100 threads running, then the 101st connection will get a 503 error and the connection will be closed.
    Also, my test platform: Windows 7 64-bit running Java 1.6.0_24-b07 (32-bit) on an Intel Core i7.
    It occurred to me that I did not show the output of the test program. As the output below shows, the first set of requests are all processed properly; the second set is not. The pool should have 2 threads and 1 slot in the queue, so by the time the second "request" is made at least 2 of the requests from the first call should be done processing, so I could possibly understand run 1, submission #2 failing, but not submission 1.
    <pre>
    Run #: 0 submitting 0
    Run #: 0 submitting 1
    Run #: 0 submitting 2
    In thread: 8
    In thread: 9
    Exiting run: 8
    Exiting run: 9
         Run #: 0
         Submitted: 3
         Accepted: 3
         Rejected: 0
    In thread: 8
    Exiting run: 8
    Run #: 1 submitting 0
    In thread: 9
    Run #: 1 submitting 1
         Run #: 1, submission 1 was rejected
         Queue active: 1
         Queue size: 2
    Run #: 1 submitting 2
         Run #: 1
         Submitted: 3
         Accepted: 2
         Rejected: 1
    In thread: 8
    Exiting run: 9
    Exiting run: 8
    </pre>
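    One way to make the real server's admission control independent of pool/queue timing is to gate it explicitly with a Semaphore sized to the request limit, and send the 503 whenever no permit is free; the executor itself then never has to reject anything. A sketch with illustrative names (sendError503() stands in for whatever writes the HTTP 503 response):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Semaphore;

    public class BoundedRequestHandler {
        private final ExecutorService pool;
        private final Semaphore slots;

        public BoundedRequestHandler(int maxConcurrent) {
            this.pool = Executors.newFixedThreadPool(maxConcurrent);
            this.slots = new Semaphore(maxConcurrent);
        }

        public void handle(final Runnable request) {
            if (!slots.tryAcquire()) {           // no free slot right now
                sendError503();
                return;
            }
            pool.execute(new Runnable() {
                public void run() {
                    try {
                        request.run();
                    } finally {
                        slots.release();         // free the slot as soon as the work is done
                    }
                }
            });
        }

        private void sendError503() {
            System.err.println("503 Service Unavailable");   // placeholder for the real response
        }
    }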

  • Thread Pool , Executors ...

    Sorry if I'm making a stupid post, but I'm looking for an implementation of a thread pool using the latest 1.5 java.util.concurrent classes and I can't find anything serious. Any implementation or link to a tutorial would be very helpful.
    Thnx

    but I'm looking for an implementation of a thread pool using the latest 1.5 java.util.concurrent classes and I can't find anything serious. Any implementation or link to a tutorial would be very helpful. Thnx
    Here is an example:
    import java.util.concurrent.*;

    public class UtilConcurrentTest {
        public static void main(String[] args) throws InterruptedException {
            int numThreads = 4;
            int numTasks = 20;
            ExecutorService service = Executors.newFixedThreadPool(numThreads);
            // do some tasks:
            for (int i = 0; i < numTasks; i++) {
                service.execute(new Task(i));
            }
            service.shutdown();
            log("called shutdown()");
            boolean isTerminated = service.awaitTermination(60, TimeUnit.SECONDS);
            log("service terminated: " + isTerminated);
        }

        public static void log(String msg) {
            System.out.println(System.currentTimeMillis() + "\t" + msg);
        }

        private static class Task implements Runnable {
            private final int id;

            public Task(int id) {
                this.id = id;
            }

            public void run() {
                log("begin:\t" + this);
                try { Thread.sleep(1000); } catch (InterruptedException e) {}
                log("end\t" + this);
            }

            public String toString() {
                return "Task " + id + " in thread " + Thread.currentThread().getName();
            }
        }
    }

  • Thread pool executor problem

    When using a thread pool executor (java.util.concurrent.ThreadPoolExecutor) to limit the number of threads executing at a time, the number of threads running still exceeds the limit number.
    This is the code:
    private ThreadPoolExecutor threadPoolExecutor = new ThreadPoolExecutor(2, 3, 20, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
    The number of tasks in my program is 4, so I always have 4 threads running, although I limited it to 3.
    Can anyone help me with this problem? Or can you propose another solution to limit the number of running threads? By the way, I also tried using newFixedThreadPool() and got the same problem.
    Thx.

    The number of tasks in my program is 4, so I always have 4 threads running, although I limited it to 3.
    How do you know that there are 4 threads running? If you're generating a JVM thread dump, you're going to see threads that are used internally by the JVM.
    Here's a simple program that creates a fixed-size thread pool and runs jobs. It limits the number of concurrent threads to 3. Compare it to what you're doing; I'm sure that you'll find something different. Also, verify that you're not creating threads somewhere else in your program; all of the ThreadPoolExecutor threads will have names of the form "pool-X-thread-Y".
    import java.util.concurrent.Executors;
    import java.util.concurrent.ExecutorService;

    public class ThreadPoolTest {
        public static void main(String[] args) {
            ExecutorService pool = Executors.newFixedThreadPool(3);
            for (int ii = 0; ii < 10; ii++) {
                pool.execute(new MyRunnable());
            }
            pool.shutdown();
        }

        private static class MyRunnable implements Runnable {
            public void run() {
                log("running");
                try {
                    Thread.sleep(1000L);
                } catch (InterruptedException e) {
                    log("interrupted");
                }
            }

            private void log(String msg) {
                System.err.println(
                        msg + " on " + Thread.currentThread().getName()
                        + " at " + System.currentTimeMillis());
            }
        }
    }

  • Thread pool in servlet container

    Hello all,
    I'm working on a webapp that has some bad response times, and I've identified an area where we could shave off a considerable amount of time. The app invokes a component that causes data to be cached for subsequently targeted apps in the environment. Our app does not need to wait for a response, so I'd like to make this an asynchronous call. So, how best to implement this? I considered JMS, but started working on a solution using the Java 1.4 backport of JSR 166 (java.util.concurrent).
    I've been testing the use of a ThreadPoolExecutor with an ArrayBlockingQueue. The work that each Runnable will perform involves a lot of waiting (the component we call invokes a web service, among a couple of other distributed calls). So I figure the pool will be much larger than the queue. Our container has 35 execute threads, so I've been testing with a thread pool size of 25 and a queue of 10.
    Any thoughts on this approach? I understand that some of this work could be simplified by JMS, but if I don't need to be tied to the container, I'd prefer not to be. The code is much easier to unit test, and it plays nicely with our continuous build integration (which runs our JUnit tests for us and notifies on errors).
    Any thoughts are greatly appreciated... Thanks!!

    Well, if it works, that's by far the best way to go - but note that creating threads in a servlet container means those threads are outside of the container's control. Many containers will refuse to give the new threads access to the JNDI context, even, and some may prevent you from creating threads at all.
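    If you do keep an in-webapp pool, one way to tie its lifecycle to the webapp (so the threads are created on deployment and torn down on undeploy rather than leaking) is a ServletContextListener. This is only a sketch: the listener class, attribute name, and sizes are illustrative, the listener would still need to be registered in web.xml, and with the JSR 166 backport the pool classes would come from the backport package instead of java.util.concurrent.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;
    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;

    public class AsyncCachePoolListener implements ServletContextListener {
        private ThreadPoolExecutor pool;

        public void contextInitialized(ServletContextEvent sce) {
            pool = new ThreadPoolExecutor(25, 25, 60, TimeUnit.SECONDS,
                    new ArrayBlockingQueue<Runnable>(10),
                    new ThreadPoolExecutor.CallerRunsPolicy());   // degrade to a synchronous call when saturated
            sce.getServletContext().setAttribute("asyncCachePool", pool);
        }

        public void contextDestroyed(ServletContextEvent sce) {
            pool.shutdown();   // let queued fire-and-forget calls drain on undeploy
        }
    }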
