JMS Thread Pool Size

Hi,
          I'm using WLS 6.1. The console has a setting for JMS Thread Pool Size, and I wanted to tune the number of threads used by JMS. I thought JMS async consumers would use threads from this pool; however, that doesn't seem to be the case (they all use the default execute threads and queues). Why is this setting available?
          Note the BEA WebLogic JMS Performance Guide talks about tuning this value from version 6.1 up to 8.1 and states "On the server, incoming JMS related requests execute in the JMS execute queue/thread pool."
          Thanks in advance for any responses,
          Mich

Disregarding what it is for, in my experience, tuning this setting rarely has much effect. For 6.1, the main thread pool related tunables to look at are the EJB thread pools and EJB max-beans... settings, the "default" thread pool, and the internal thread-pool for stand-alone clients -- all of which are mentioned in the performance guide.

Similar Messages

  • Thread Pool Size

    What is the maximum thread pool size that can be used in Java?
    What happens if I set a thread pool size in the range of 10k or 20k?
    How does that affect system performance?
    It's quite urgent for me to know. Can someone help out?
    Regards
    Hrushi

    I'd say it depends on the machine you are running on - the amount of memory, the number of CPUs and such - not on Java itself. And with that many threads, your implementation had better be darn good in order not to be the bottleneck...
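    For what it's worth, a common rule of thumb (not anything mandated by Java itself) is to size the pool from the hardware rather than picking a 10k-20k figure, and to let the extra work queue up. A rough, illustrative sketch; the task count and the doubling factor are arbitrary assumptions:
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class PoolSizing {
        public static void main(String[] args) throws InterruptedException {
            // Size the pool from the hardware instead of an arbitrary 10k/20k figure.
            int cpus = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(cpus * 2); // factor of 2 is an assumption

            int taskCount = 1000; // illustrative workload size
            for (int i = 0; i < taskCount; i++) {
                pool.execute(new Runnable() {
                    public void run() {
                        // do some work; extra tasks queue up instead of each getting its own thread
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
        }
    }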

  • Setting Thread pool size

    Hi,
    I want to know: if I set the system property "-Dweblogic.ThreadPoolSize", how will WLS find out that the pool size has been changed at run time?
    E.g. I pass -Dweblogic.ThreadPoolSize=30 from the command line. Then, if I change the pool size to 40 at runtime, is there any event that I can fire for the change in property through APIs?
    Thnx in advance.
    Best Regards
    Ali
              

    Disregarding what it is for, in my experience, tuning this setting rarely has much effect. For 6.1, the main thread pool related tunables to look at are the EJB thread pools and EJB max-beans... settings, the "default" thread pool, and the internal thread-pool for stand-alone clients -- all of which are mentioned in the performance guide.

  • Thread pool in servlet container

    Hello all,
    I'm working on a webapp that has some bad response times, and I've identified an area where we could shave off a considerable amount of time. The app is invoking a component that causes data to be cached for subsequently targeted apps in the environment. Our app does not need to wait for a response, so I'd like to make this an asynchronous call. So, how best to implement this? I considered JMS, but started working on a solution using the Java 1.4 backport of JSR 166 (java.util.concurrent).
    I've been testing the use of a ThreadPoolExecutor, using an ArrayBlockingQueue. The work that each Runnable will perform involves a lot of waiting (the component we call invokes a web service, among a couple of other distributed calls). So I figure the pool should be much larger than the queue. Our container has 35 execute threads, so I've been testing with a thread pool size of 25 and a queue of 10.
    Any thoughts on this approach? I understand that some of this work could be simplified by JMS, but if I don't need to be tied to the container, I'd prefer not to be. The code is much easier to unit test, and plays nicely with our continuous integration build (which runs our JUnit tests for us and notifies on errors).
    Any thoughts are greatly appreciated...Thanks!!

    Well, if it works, that's by far the best way to go - but note that creating threads in a servlet container means those threads are outside of the container's control. Many containers will even refuse to give the new threads access to the JNDI context, and some may prevent you from creating threads at all.
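    For reference, a minimal sketch of the fire-and-forget executor described above (up to 25 threads, a bounded queue of 10). The class and method names are made up, and CallerRunsPolicy is just one possible choice for what to do when the pool is saturated; with the JSR 166 backport mentioned in the post, the imports would come from the backport's package rather than java.util.concurrent:
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class AsyncCacheWarmer {
        // 25 worker threads and a bounded hand-off queue of 10, matching the sizing above.
        private final ThreadPoolExecutor pool = new ThreadPoolExecutor(
                25, 25, 60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(10),
                new ThreadPoolExecutor.CallerRunsPolicy()); // one option when saturated: run on the caller

        public void warmCacheAsync(final String key) {
            pool.execute(new Runnable() {
                public void run() {
                    // call the slow web service / distributed component here; no reply is needed
                }
            });
        }

        public void shutdown() {
            pool.shutdown();
        }
    }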

  • Config changes to avoid JMS thread deadlock

    From various postings and Weblogic documentation I'm trying to understand the specifics of how to avoid a JMS thread deadlock problem. The Performance and Tuning Guide states "...consider a servlet that reads messages from a designated JMS queue. If all execute threads in a server are used to process the servlet requests, then no threads are available to deliver messages from the JMS queue. A deadlock condition exists,...". I believe that is what I have, and am trying to understand whether the solution is to create a separate execute queue for the servlets, or to change the settings on the Execute and JMS thread queues, or both, or something else - and if so, what settings I should use.
    Specifically, I have servlets that use connections to a custom resource adapter, and each custom adapter connection writes to a single (shared by all adapter connection instances) permanent JMS queue and reads a response from a one-per-adapter-connection temporary queue. Messages are read from the permanent JMS queue by a client process (running outside of WLS). My servlets use the default execute queue - i.e., they don't use a custom execute queue - and my reads on the temporary JMS queues are timing out, I believe because no threads are available to process the JMS messages.
    So can I avoid deadlocks by:
    1. creating a custom execute queue that has n threads
    2. allowing a maximum of n custom adapter connection instances
    3. setting my JMS thread pool to n (assuming I have no other JMS activity)
    i.e., using the same number - n - for all three settings? I'm about to try this and see, but if it works I'd still like to have a better understanding as to why (and of course if it doesn't, I need that understanding even more).
    Or, is there text somewhere describing the specifics of how all of the Weblogic thread queues and JMS play together? There seem to be pieces scattered in various Weblogic documents, but nowhere have I found a single, coherent and complete description of how all of these factors interact.

    Thanks for the info. Allocating the servlets to their own execute queue solved my problem.
    Tom Barnes <[email protected]> wrote:
    >Glen wrote:
    >> From various postings and Weblogic documentation I'm trying to understand the specifics of how to avoid a JMS thread deadlock problem. The Performance and Tuning Guide states "...consider a servlet that reads messages from a designated JMS queue. If all execute threads in a server are used to process the servlet requests, then no threads are available to deliver messages from the JMS queue. A deadlock condition exists,...". I believe that is what I have,
    >
    >You can often verify by inspecting a thread dump. Note that there are other possible reasons for deadlock, including programming errors at the application level - and if that is the case, the thread dump will likely reveal that as well.
    >
    >> and am trying to understand whether the solution is to create a separate execute queue for the servlets, or change the settings on the Execute and JMS Thread queues, or both, or something else, and if so, what are the settings I should use?
    >>
    >> Specifically, I have servlets that use connections to a custom resource adapter, and each custom adapter connection writes to a single (for all adapter connection instances) permanent JMS queue and reads a response from a one-per-adapter-connection temporary queue. Messages are read from the permanent JMS queue by a client process (running outside of WLS). My servlets use the default execute queue - i.e., they don't use a custom execute queue - and my reads on the temporary JMS queues are timing out, I believe because no threads are available to process the JMS message.
    >>
    >> So can I avoid deadlocks by:
    >> 1. creating a custom execute queue that has n threads
    >
    >Yes, or configuring more threads for the default pool (somewhat more than you have concurrent servlets).
    >
    >> 2. allowing a maximum of n custom adapter connection instances
    >
    >No - I think your servlets would just end up blocking waiting for adapter connections (instead of blocking waiting for response messages).
    >
    >> 3. Setting my JMS thread pool to n (assuming I have no other JMS activity)
    >
    >I'm not sure, but I don't think this will help in your case - JMS uses the JMS thread pool for a limited purpose, and still uses the default thread pool otherwise (as documented in the perf guide). Plus the default thread pool is needed for RMI/timers/etc.
    >
    >If servlets are truly "stealing" all of the default threads, I think the best option is to give the servlets their own thread pool.
    >
    >> i.e., using the same number - n - for all three settings? I'm about to try this and see, but if it works I'd still like to have a better understanding as to why (and of course if it doesn't, I need that understanding even more).
    >>
    >> Or, is there text somewhere describing the specifics of how all of the Weblogic thread queues and JMS play together?
    >
    >The JMS performance guide white-paper is probably the best resource at the moment; it seems to be pointing you in the right direction (provided you confirm the problem is thread pool limits).
    >
    >> There seem to be pieces scattered in various Weblogic documents, but nowhere have I found a single, coherent and complete description of how all of these factors interact.
    >
    >You are welcome to email a suggestion to BEA support. Customer suggestions tend to have more weight than internally generated suggestions.
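    For anyone hitting the same thing, here is a minimal sketch (an assumption about the shape of the servlet code, not anything taken from this thread) of the synchronous request/reply pattern being described: send to the shared request queue, then block in receive(timeout) on a temporary reply queue. That blocking call is what pins an execute thread for the whole wait, which is why the servlet pool and message delivery can starve each other. The JNDI names are made up:
    import javax.jms.*;
    import javax.naming.InitialContext;

    public class RequestReplyClient {
        public String call(String payload, long timeoutMillis) throws Exception {
            InitialContext ctx = new InitialContext();
            QueueConnectionFactory cf = (QueueConnectionFactory) ctx.lookup("MyConnectionFactory"); // made-up name
            Queue requestQueue = (Queue) ctx.lookup("MyRequestQueue");                              // made-up name

            QueueConnection conn = cf.createQueueConnection();
            try {
                QueueSession session = conn.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                TemporaryQueue replyQueue = session.createTemporaryQueue();

                TextMessage msg = session.createTextMessage(payload);
                msg.setJMSReplyTo(replyQueue);
                session.createSender(requestQueue).send(msg);

                conn.start();
                // This blocks the calling (execute) thread until a reply arrives or the timeout fires.
                Message reply = session.createReceiver(replyQueue).receive(timeoutMillis);
                return (reply instanceof TextMessage) ? ((TextMessage) reply).getText() : null;
            } finally {
                conn.close();
            }
        }
    }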

  • The different thread pools in 6.1

    Boy, am I just flooded with 6.1 migration issues. Life was much simpler with the weblogic.properties file.
    Looking at Config.Dtd I couldn't see the different thread pools that WLS 5.1 offered.
    How do I set up thread pools for
    a) Servlets
    b) RMI
    c) JMS
    In 5.1 you had a servletthreadpool, an executethreadcount (for RMI) and a jms thread pool.

    In 6.1 there isn't a separate execute thread pool for servlets. The only thread pool that you can tune is the default queue, which can be done via the console. Of course, you can also configure user-defined thread queues.
    There are also some internal thread pools that are not exposed to developers, e.g. the admin queue, replication queue, console queue, RMI queue, JMS queue, etc.
    If you still want to route your servlet requests via a separate thread queue, you have to specify the dispatch policy and the queue name. See below:
    http://e-docs.bea.com/wls/docs61/perform/AppTuning.html#1105201
    Kumar
    Aswin Dinakar wrote:
    Boy, am I just flooded with 6.1 migration issues. Life was much simpler with the weblogic.properties file.
    Looking at Config.Dtd I couldn't see the different thread pools that WLS 5.1 offered.
    How do I set up thread pools for
    a) Servlets
    b) RMI
    c) JMS
    In 5.1 you had a servletthreadpool, an executethreadcount (for RMI) and a jms thread pool.

  • Fixed Size Thread Pool which infinitely serves tasks submitted to it

    Hi,
    I want to create a fixed-size thread pool, say of size 100, and I will submit around 200 tasks to it.
    Now I want it to serve them infinitely, i.e. once all tasks are completed, re-do them again and again.
    public void start(Vector<String> addresses) {
          // Create a Runnable object for each address in "addresses"
          Vector<FindAgentRunnable> runnables = new Vector<FindAgentRunnable>(1, 1);
          for (String address : addresses)
              runnables.addElement(new FindAgentRunnable(address));
          // Create a thread pool of size 100
          ExecutorService pool = Executors.newFixedThreadPool(100);
          // Here I added all the runnables to the thread pool
          for (FindAgentRunnable runnable : runnables)
              pool.submit(runnable);
          pool.shutdown();
    }
    Now I want this thread pool to execute the tasks infinitely, i.e. once all the tasks are done, restart all the tasks again.
    I have also tried to add them again and again, but it throws a java.util.concurrent.RejectedExecutionException:
    public void start(Vector<String> addresses) {
          // Create a Runnable object for each address in "addresses"
          Vector<FindAgentRunnable> runnables = new Vector<FindAgentRunnable>(1, 1);
          for (String address : addresses)
              runnables.addElement(new FindAgentRunnable(address));
          // Create a thread pool of size 100
          ExecutorService pool = Executors.newFixedThreadPool(100);
          for (;;) {
              for (FindAgentRunnable runnable : runnables)
                  pool.submit(runnable);
              pool.shutdown();
              try {
                  pool.awaitTermination(Long.MAX_VALUE, TimeUnit.SECONDS);
              } catch (InterruptedException ex) {
                  Logger.getLogger(AgentFinder.class.getName()).log(Level.SEVERE, null, ex);
              }
          }
    }
    Can anybody help me to solve this problem?
    Thnx in advance.

    Ravi_Gupta wrote:
    *@ kajbj*
    so what should i do?
    can you suggest me a solution?
    Consider this thread "closed". Continue to post in your other thread. I, and all others, don't want to give answers that have already been given.
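    For what it's worth, a rough sketch of one way to get the "run the same tasks forever" behaviour with a fixed pool: never call shutdown() while the work should continue, and have each wrapper resubmit its task when it finishes. FindAgentRunnable here is only a stand-in for the poster's own class; everything else is illustrative:
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class AgentFinder {
        private final ExecutorService pool = Executors.newFixedThreadPool(100);

        public void start(List<String> addresses) {
            // 200 addresses share 100 threads; each task puts itself back when it finishes.
            for (String address : addresses) {
                submitRepeating(new FindAgentRunnable(address));
            }
        }

        private void submitRepeating(final Runnable task) {
            pool.submit(new Runnable() {
                public void run() {
                    try {
                        task.run();
                    } finally {
                        if (!pool.isShutdown()) {
                            submitRepeating(task); // go around again instead of calling shutdown()
                        }
                    }
                }
            });
        }
    }

    // Stand-in for the poster's task class.
    class FindAgentRunnable implements Runnable {
        private final String address;
        FindAgentRunnable(String address) { this.address = address; }
        public void run() { /* contact the agent at this address */ }
    }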

  • Fixed size thread pool accepting more tasks than it should

    Hello,
    I have the following code in a simple program (code below)
    BlockingQueue<Runnable> q = new ArrayBlockingQueue<Runnable>(10, false);
    ThreadPoolExecutor newPool = new ThreadPoolExecutor(1, 10, 20, TimeUnit.SECONDS, q);
    for (int x = 0; x < 30; x++) {
        newPool.execute(new threaded());
    }
    My understanding is that this should create a thread pool that will accept 10 tasks; once 10 tasks have been submitted I should get RejectedExecutionException. However, I am seeing that when I execute the code the pool accepts 20 execute calls before throwing RejectedExecutionException. I am on Windows 7 using Java 1.6.0_21.
    Any thoughts on what I am doing incorrectly?
    Thanks
    import java.util.concurrent.*;
    public class ThreadPoolTest {
         public static class threaded implements Runnable {
              @Override
              public void run() {
                   System.out.println("In thread: " + Thread.currentThread().getId());
                   try {
                        Thread.sleep(5000);
                   } catch (InterruptedException e) {
                        System.out.println("Thread: " + Thread.currentThread().getId()
                                  + " interuptted");
                   }
                   System.out.println("Exiting thread: " + Thread.currentThread().getId());
              }
         }
         private static int MAX = 10;
         private Executor pool;
         public ThreadPoolTest() {
              super();
              BlockingQueue<Runnable> q = new ArrayBlockingQueue<Runnable>(MAX/2, false);
              ThreadPoolExecutor newPool = new ThreadPoolExecutor(1, MAX, 20, TimeUnit.SECONDS, q);
              pool = newPool;
         }
         /**
          * @param args
          */
         public static void main(String[] args) {
              ThreadPoolTest object = new ThreadPoolTest();
              object.doThreads();
         }
         private void doThreads() {
              int submitted = 0, rejected = 0;
              for (int x = 0; x < MAX * 3; x++) {
                   try {
                        System.out.println(Integer.toString(x) + " submitting");
                        pool.execute(new threaded());
                        submitted++;
                   } catch (RejectedExecutionException re) {
                        System.err.println("Submission " + x + " was rejected");
                        rejected++;
                   }
              }
              System.out.println("\n\nSubmitted: " + MAX*2);
              System.out.println("Accepted: " + submitted);
              System.out.println("Rejected: " + rejected);
         }
    }

    I don't know what is wrong because I tried this
    public static void main(String args[]) {
        BlockingQueue<Runnable> q = new ArrayBlockingQueue<Runnable>(10, false);
        ThreadPoolExecutor newPool = new ThreadPoolExecutor(1, 10, 20, TimeUnit.SECONDS, q);
        for (int x = 0; x < 100; x++) {
            System.err.println(x + ": " + q.size());
            newPool.submit(new Callable<Void>() {
                @Override
                public Void call() throws Exception {
                    Thread.sleep(1000);
                    return null;
                }
            });
        }
    }
    and it printed
    0: 0
    1: 0
    2: 1
    3: 2
    4: 3
    5: 4
    6: 5
    7: 6
    8: 7
    9: 8
    10: 9
    11: 10
    12: 10
    13: 10
    14: 10
    15: 10
    16: 10
    17: 10
    18: 10
    19: 10
    20: 10
    Exception in thread "main" java.util.concurrent.RejectedExecutionException
         at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:1768)
         at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:767)
         at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:658)
         at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:92)
         at Main.main(Main.java:36)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:597)
         at com.intellij.rt.execution.application.AppMain.main(AppMain.java:115)
    I have Java 6 update 24 on Linux, but I don't believe this should make a difference. Can you try my code?
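    For what it's worth, the numbers both posters see are consistent with how ThreadPoolExecutor grows: a new task first goes to a core thread, then into the queue, and only when the queue is full are extra threads created up to maximumPoolSize, so roughly queueCapacity + maximumPoolSize tasks can be in flight before the default AbortPolicy starts rejecting. A small illustrative sketch of that arithmetic (the sleep just keeps the workers busy):
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.RejectedExecutionException;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class SaturationDemo {
        public static void main(String[] args) {
            // core 1, max 10, bounded queue of 10 => about 10 + 10 = 20 tasks fit while workers are busy.
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    1, 10, 20, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(10));
            int accepted = 0, rejected = 0;
            for (int x = 0; x < 30; x++) {
                try {
                    pool.execute(new Runnable() {
                        public void run() {
                            try { Thread.sleep(5000); } catch (InterruptedException e) { }
                        }
                    });
                    accepted++;
                } catch (RejectedExecutionException e) {
                    rejected++;
                }
            }
            System.out.println("accepted=" + accepted + " rejected=" + rejected); // roughly 20 / 10
            pool.shutdownNow();
        }
    }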

  • How to correctly use a fixed size thread pool?

    I am quite new to using concurrency in Java, so please forgive if this is a trivial question.
    I would like to make use of something like pool=Executors.newFixedThreadPool(n) to automatically use a fixed number of threads to process pieces of work. I understand that I can asynchronously run some Runnable by one of the threads in the threadpool using pool.execute(someRunnable).
    My problem is this: I have some fixed number N of data structures myDS (which are not reentrant or sharable) that get initialized at program start and which are needed by the runnables to do the work. So what I really would like to do is not only reuse N threads but also reuse N of these data structures to do the work.
    So, let's say I want to have 10 threads; then I would want to create 10 myDS objects once and for all. Each time some work comes in, I want that work to get processed by the next free thread, using the next free data structure. What I was wondering is if there is something in the library that lets me do the reusing of threads AND data structures as simply as just reusing a pool of threads. Ideally, each thread would get associated with one data structure somehow.
    Currently I use an approach where I create 10 Runnable worker objects, each with its own copy of myDS. Those worker objects get stored in an ArrayBlockingQueue of size 10. Each time some work comes in, I get the next Runner from the queue, pass it the piece of work and submit it to the thread pool.
    The tricky part is how to get the worker object back into the Queue: currently I essentially do queue.put(this) at the very end of each Runnable's run method but I am not sure if that is safe or how to do it safely.
    What are the standard patterns and library classes to use for solving this problem correctly?

    Thank you for that feedback!
    There is one issue that worries me though and I obviously do not understand it enough: as I said I hand back the Runnable to the blocking queue at the end of the Runnable.run method using queue.put(this). This is done via a static method from the main class that creates the threads and runnable objects in a main method. Originally I tried to make that method for putting back the Runnable objects serialized but that inevitably always led to a deadlock or hang condition: the method for putting back the runnable was never actually run. So I ended up doing this without serializing the put action in any way and so far it seems to work ... but is this safe?
    To reiterate: I have a static class that creates a thread pool object and a ArrayBlockingQueue queue object of runnable objects. In a loop I use queue.take() to get the next free runnable object, and pass this runnable to pool.execute. Inside the runnable, in method run, i use staticclass.putBack(this) which in turn does queue.put(therunnableigot). Can I trust that this queue.put operation, which can happen from several threads at the same time works without problem without serializing it explicitly? And why would making the staticclass.putBack method serialized cause a hang? I also tried to serialize using the queue object itself instead of the static class, by doing serialize(queue) { queue.put(therunnable) } but that also caused a hang. I have to admit that I do not understand at all why that hang occurred and if I need the serialization here or not.
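    One fairly standard way to handle the "N threads, N non-shareable data structures" part is to keep the structures themselves in a BlockingQueue and check one out per task inside a try/finally, so the put-back always happens and needs no extra locking (the queue is already thread-safe). A rough sketch; MyDS and its handle() method are stand-ins for the poster's structure, not real APIs:
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class PooledWorkers {
        private final int n = 10;
        private final ExecutorService pool = Executors.newFixedThreadPool(n);
        private final BlockingQueue<MyDS> structures = new ArrayBlockingQueue<MyDS>(n);

        public PooledWorkers() {
            for (int i = 0; i < n; i++) {
                structures.add(new MyDS()); // one structure per worker, created up front
            }
        }

        public void process(final String work) {
            pool.execute(new Runnable() {
                public void run() {
                    MyDS ds = null;
                    try {
                        ds = structures.take();   // blocks until a structure is free
                        ds.handle(work);          // hypothetical method on the poster's structure
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    } finally {
                        if (ds != null) {
                            structures.add(ds);   // always give it back, even on failure
                        }
                    }
                }
            });
        }
    }

    // Placeholder for the non-shareable structure from the question.
    class MyDS {
        void handle(String work) { /* expensive, stateful work */ }
    }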

  • JRun Thread Pool Issue

    I'm running CF 9.0.1 on Ubuntu on a "Medium" Amazon EC2 instance. CF has been crashing intermittently (several times per day). At such times, running top gets me this (or something similar):
    PID   USER    PR  NI  VIRT   RES   SHR  S  %CPU  %MEM  TIME+    COMMAND
    15855 wwwrun  20  0   1762m  730m  20m  S  99.3  19.4  13:22.96 coldfusion9
    So, it's obviously consuming most of the server resources. The following error has been showing up in my cfserver.log in the leadup to each crash:
    java.lang.RuntimeException: Request timed out waiting for an available thread to run. You may want to consider increasing the number of active threads in the thread pool.
    If I run /opt/coldfusion9/bin/coldfusion status, I get:
    Pg/Sec  DB/Sec  CP/Sec  Reqs  Reqs  Reqs  AvgQ   AvgReq AvgDB  Bytes  Bytes
    Now Hi  Now Hi  Now Hi  Q'ed  Run'g TO'ed Time   Time   Time   In/Sec Out/Sec
    0   0   0   0   -1  -1  150   25    0     0      -1352560      0      0
    In the administrator, under Server Settings > Request Tuning, the setting for Maximum number of simultaneous Template requests is 25. So this makes sense so far. I could just increase the thread pool to cover these sort of load spikes. I could make it 200. (Which I did just now as a test.)
    However, there's also this file /opt/coldfusion9/runtime/servers/coldfusion/SERVER-INF/jrun.xml. And some of the settings in there appear to conflict. For example, it reads:
    <service class="jrunx.scheduler.SchedulerService" name="SchedulerService">
      <attribute name="bindToJNDI">true</attribute>
      <attribute name="activeHandlerThreads">25</attribute>
      <attribute name="maxHandlerThreads">1000</attribute>
      <attribute name="minHandlerThreads">20</attribute>
      <attribute name="threadWaitTimeout">180</attribute>
      <attribute name="timeout">600</attribute>
    </service>
    Which a) has fewer active threads (what does this mean?), and b) has a max threads value that exceeds the simultaneous request limit set in the admin. So, I'm not sure. Are these independent configs that need to be made to match manually? Or is the jrun.xml file supposed to be written by the CF Admin when changes are made there? Hmm. But maybe this is different because presumably the CF Scheduler should only use a subset of all available threads, right...so we'd always have some threads for real live users. We also have this in there:
    <service class="jrun.servlet.http.WebService" name="WebService">
      <attribute name="port">8500</attribute>
      <attribute name="interface">*</attribute>
      <attribute name="deactivated">true</attribute>
      <attribute name="activeHandlerThreads">200</attribute>
      <attribute name="minHandlerThreads">1</attribute>
      <attribute name="maxHandlerThreads">1000</attribute>
      <attribute name="mapCheck">0</attribute>
      <attribute name="threadWaitTimeout">300</attribute>
      <attribute name="backlog">500</attribute>
      <attribute name="timeout">300</attribute>
    </service>
    This appears to have changed when I changed the CF Admin setting...maybe...but it's the activeHandlerThreads that matches my new maximum simultaneous requests setting...rather than the maxHandlerThreads, which again exceeds it. Finally, we have this:
    <service class="jrun.servlet.jrpp.JRunProxyService" name="ProxyService">
      <attribute name="activeHandlerThreads">200</attribute>
      <attribute name="minHandlerThreads">1</attribute>
      <attribute name="maxHandlerThreads">1000</attribute>
      <attribute name="mapCheck">0</attribute>
      <attribute name="threadWaitTimeout">300</attribute>
      <attribute name="backlog">500</attribute>
      <attribute name="deactivated">false</attribute>
      <attribute name="interface">*</attribute>
      <attribute name="port">51800</attribute>
      <attribute name="timeout">300</attribute>
      <attribute name="cacheRealPath">true</attribute>
    </service>
    So, I'm not certain which (if any) of these I should change and what exactly the relationship is between maximum requests and maximum threads. Also, since several of these list the maxHandlerThreads as 1000, I'm wondering if I should just set the maximum simultaneous requests to 1000. There must be some upper limit that depends on available server resources...but I'm not sure what it is and I don't really want to play around with it since it's a production environment.
    I'm not sure if it pertains to this issue at all, but when I run a ps aux | grep coldfusion I get the following:
    wwwrun   15853  0.0  0.0    8704    760 pts/1  S   20:22   0:00 /opt/coldfusion9/runtime/bin/coldfusion9 -jar jrun.jar -autorestart -start coldfusion
    wwwrun   15855  5.4 18.2 1678552 701932 pts/1  Sl  20:22   1:38 /opt/coldfusion9/runtime/bin/coldfusion9 -jar jrun.jar -start coldfusion
    There are always these two and never more than these two processes. So there does not appear to be a one-to-one relationship between processes and threads. I recall from an MX 6.1 install I maintained for many years that additional CF processes were visible in the process list. It seemed to me at the time like I had a process for each thread...so either I was wrong or something is quite different in version 9 since it's reporting 25 running requests and only showing these two processes. If a single process can have multiple threads in the background, then I'm given to wonder why I have two processes instead of one...just curious.
    So, anyway, I've been experimenting while composing this post. As noted above, I adjusted the maximum simultaneous requests up to 200. I was hoping this would solve my problem, but CF just crashed again (rather, it bogged down and requests started timing out...so effectively "crashed"). This time, top looked similar (still consuming more than 99% of the CPU), but CF status looked different:
    Pg/Sec  DB/Sec  CP/Sec  Reqs  Reqs  Reqs  AvgQ   AvgReq AvgDB  Bytes  Bytes
    Now Hi  Now Hi  Now Hi  Q'ed  Run'g TO'ed Time   Time   Time   In/Sec Out/Sec
    0   0   0   0   -1  -1  0     150   0     0      0      0      0      0
    Obviously, since I'd increased the maximum simultaneous requests, it was allowing more requests to run simultaneously...but it was still maxing out the server resources.
    Further experiments (after restarting CF) showed me that the server became unusably bogged down after about 30-35 "Reqs Run'g", with all additional requests headed for an inevitable timeout:
    Pg/Sec  DB/Sec  CP/Sec  Reqs  Reqs  Reqs  AvgQ   AvgReq AvgDB  Bytes  Bytes
    Now Hi  Now Hi  Now Hi  Q'ed  Run'g TO'ed Time   Time   Time   In/Sec Out/Sec
    0   0   0   0   -1  -1  0     33    0     0      -492   0      0      0
    So, it's clear that increasing the maximum simultaneous requests has not helped. I guess what it comes down to is this: What is it having such a hard time with? Where are these spikes coming from? Bursts of traffic? On what pages? What requests are running at any given time? I guess I simply need more information to continue troubleshooting. If there are long-running requests, or other issues, I'm not seeing it in the logs (although I do have that option checked in the admin). I need to know which requests exactly are those responsible for these spikes. Any help would be much appreciated. Thanks.
    ~Day

    I really appreciate your help. However, I haven't been able to find the JRun Thread settings you describe above.
    Under Request Tuning, I see:
    Server Settings > Request Tuning
    Request Limits
    Maximum number of simultaneous Template requests
      Restricts the number of simultaneously processed requests. Use this setting to increase overall system performance for heavy load applications. Requests beyond the specified limit are queued. On Standard Edition, you must restart ColdFusion to enable this setting. 
    Maximum number of simultaneous Flash Remoting requests
      The number of Flash Remoting requests that can be processed concurrently.
    Maximum number of simultaneous Web Service requests
      The number of Web Service requests that can be processed concurrently.
    Maximum number of simultaneous CFC function requests
      The number of ColdFusion Component methods that can be processed concurrently via HTTP. This does not affect invocation of CFC methods from within CFML, only methods requested via an HTTP request.
    Tag Limit Settings
    Maximum number of simultaneous Report threads
      The maximum number of ColdFusion reports that can be processed concurrently.
    Maximum number of threads available for CFTHREAD
      The maximum number of threads created by CFTHREAD that will be run concurrently. Threads created by CFTHREAD in excess of this are queued.  On Standard Edition, the maximum limit is 10. 
    And under Java and JVM, I see:
    Server Settings > Java and JVM
        Java and JVM settings control the way ColdFusion starts the Java Virtual Machine when it starts.  You can control settings like what classpaths are used and how memory is allocated as well as add custom command line arguments.  Changing these settings requires restarting ColdFusion.  If you enter an incorrect setting, ColdFusion may not restart properly. 
       Backups of the jvm.config file are created when you hit the submit button. You can use this backup to restore from a critical change. 
       Java Virtual Machine Path
      Specifies the location of the Java Virtual Machine.
       Minimum JVM Heap Size (MB)         Maximum JVM Heap Size  (MB)       
       The Memory Size settings determine the amount of memory that the JVM can use for programs and data. 
       ColdFusion Class Path
      Specifies any additional class paths for the JVM, with multiple directories separated by  commas.
       JVM Arguments
      -server -Dsun.io.useCanonCaches=false -XX:MaxPermSize=192m -XX:+UseParallelGC -Xbatch -Dcoldfusion.rootDir={application.home}/../ -Dcoldfusion.libPath={application.home}/../lib
      Specifies any specific JVM initialization options, separated by spaces.
    I did go take a look at FusionReactor and found it's not free (which would be fine, of course, if it would actually help). It looks like there's a fully functional demo, which is cool...but I haven't been able to get it to install yet, so we'll see.
    Thanks again!
    ~Day
    (By the way, I've cross-posted this inquiry on StackOverflow. So if you're able to help me arrive at a solution you might want to answer there as well.)

  • Thread pool rejecting threads when I don't think it should, ideas?

    Hi,
    I have a server application in which I only want a specific number of simultaneous requests. If the server gets more than this number, it is supposed to close the connection (sending an HTTP 503 error to the client). To do this I used a fixed thread pool. When I start the server and submit the max number of requests I get the expected behavior. However, if I resubmit the requests (within a small period of time, e.g. 1-15 seconds after the first batch) I get very odd behavior in that some of the requests are rejected. For example, if I set the max to 100, the first set of requests will work fine (100 requests, 100 responses). I then submit again and a small number will be rejected (I've seen it range from 1 to 15 rejected)....
    I made a small app which kind of duplicates this behavior (see below). Basically, what I see is that the first time submitting requests works fine, but the second time I get a rejected one. As best as I can tell none should be rejected....
    Here is the code, I welcome your thoughts or if you see something I am doing wrong here...
    <pre>
    import java.util.concurrent.*;
    import java.util.concurrent.atomic.AtomicInteger;
    public class ThreadPoolTest {
         static AtomicInteger count = new AtomicInteger();
         public static class threaded implements Runnable {
              @Override
              public void run() {
                   System.out.println("In thread: " + Thread.currentThread().getId());
                   try {
                        Thread.sleep(500);
                   } catch (InterruptedException e) {
                        System.out.println("Thread: " + Thread.currentThread().getId()
                                  + " interuptted");
                   }
                   System.out.println("Exiting run: " + Thread.currentThread().getId());
              }
         }
         private static int maxThreads = 3;
         private ThreadPoolExecutor pool;
         public ThreadPoolTest() {
              super();
              pool = new java.util.concurrent.ThreadPoolExecutor(
                        1, maxThreads - 1, 60L, TimeUnit.SECONDS,
                        new ArrayBlockingQueue<Runnable>(1));
         }
         public static void main(String[] args) throws InterruptedException {
              ThreadPoolTest object = new ThreadPoolTest();
              object.doThreads();
              Thread.sleep(3000);
              object.doThreads();
              object.pool.shutdown();
              try {
                   object.pool.awaitTermination(60, TimeUnit.SECONDS);
              } catch (InterruptedException e) {
                   // TODO Auto-generated catch block
                   e.printStackTrace();
              }
         }
         private void doThreads() {
              int submitted = 0, rejected = 0;
              int counter = count.getAndIncrement();
              for (int x = 0; x < maxThreads ; x++) {
                   try {
                        System.out.println("Run #: " + counter + " submitting " + x);
                        pool.execute(new threaded());
                        submitted++;
                   } catch (RejectedExecutionException re) {
                        System.err.println("\tRun #: " + counter + ", submission " + x
                                  + " was rejected");
                        System.err.println("\tQueue active: " + pool.getActiveCount());
                        System.err.println("\tQueue size: " + pool.getPoolSize());
                        rejected++;
                   }
              }
              System.out.println("\n\n\tRun #: " + counter);
              System.out.println("\tSubmitted: " + (submitted + rejected));
              System.out.println("\tAccepted: " + submitted);
              System.out.println("\tRejected: " + rejected + "\n\n");
         }
    }
    </pre>

    First thank you for taking the time to reply, I do appreciate it.
    jtahlborn - The code provided here is a contrived example trying to emulate the bigger app as best I could. The actual program doesn't have any sleeps; the sleep in the secondary thread is to simulate the program doing some work and replying to a request. The sleep in the primary thread is to simulate a small delay between 'requests' to the pool. I can make this 1 second and up to (at least) 5 seconds with the same results. Additionally, I can take out the sleep in the secondary thread and still see a rejection.
    EJP - Yes, I am aware of the TCP/IP queue; however, I don't see that as relevant to my question. The idea is not to prevent the connection but to respond to the client saying we can't process the request (send an "HTTP 503" error). So basically, if we have, say, 100 threads running, then the 101st connection will get a 503 error and the connection will be closed.
    Also my test platform - Windows 7 64bit running Java 1.6.0_24-b07 (32bit) on an Intel core i7.
    It occurred to me that I did not show the output of the test program. As the output shows below, the first set of requests are all processed properly. The second set of requests is not. The pool should have 2 threads and 1 slot in the queue, so by the time the second "request" is made at least 2 of the requests from the first call should be done processing, so I could possibly understand run 1, submit #2 failing but not submit 1.
    <pre>
    Run #: 0 submitting 0
    Run #: 0 submitting 1
    Run #: 0 submitting 2
    In thread: 8
    In thread: 9
    Exiting run: 8
    Exiting run: 9
         Run #: 0
         Submitted: 3
         Accepted: 3
         Rejected: 0
    In thread: 8
    Exiting run: 8
    Run #: 1 submitting 0
    In thread: 9
    Run #: 1 submitting 1
         Run #: 1, submission 1 was rejected
         Queue active: 1
         Queue size: 2
    Run #: 1 submitting 2
         Run #: 1
         Submitted: 3
         Accepted: 2
         Rejected: 1
    In thread: 8
    Exiting run: 9
    Exiting run: 8
    </pre>
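    For the "cap concurrency and answer everything else with an HTTP 503" goal, one possible setup (a sketch, not the only way) is to make the core and max pool sizes equal and use a SynchronousQueue, so admission is governed purely by whether a worker thread is free, and to catch RejectedExecutionException at the accept point:
    import java.util.concurrent.RejectedExecutionException;
    import java.util.concurrent.SynchronousQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class RequestLimiter {
        private final int maxInFlight = 100;
        // core == max and a SynchronousQueue: at most maxInFlight tasks run, nothing sits in a queue.
        private final ThreadPoolExecutor pool = new ThreadPoolExecutor(
                maxInFlight, maxInFlight, 60L, TimeUnit.SECONDS,
                new SynchronousQueue<Runnable>(),
                new ThreadPoolExecutor.AbortPolicy());

        public boolean tryHandle(Runnable request) {
            try {
                pool.execute(request);
                return true;                 // accepted
            } catch (RejectedExecutionException e) {
                return false;                // caller sends the HTTP 503 here
            }
        }
    }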

  • Thread pool executor problem

    When using a thread pool executor (java.util.concurrent.ThreadPoolExecutor) to limit the number of threads executing at a time, the number of threads running still exceeds the limit number.
    This is the code:
    private ThreadPoolExecutor threadPoolExecutor = new ThreadPoolExecutor(2, 3, 20, TimeUnit.SECONDS, new LinkedBlockingQueue());
    The number of tasks in my program is 4, so I always have 4 threads running, although I limited it to 3.
    Can anyone help me with this problem? Or can you propose another solution to limit the number of running threads? By the way, I also tried using newFixedThreadPool() and got the same problem.
    Thx.

    The number of tasks in my program are 4, so i always have 4 threads running, although i limited it to 3.
    How do you know that there are 4 threads running? If you're generating a JVM thread dump, you're going to see threads that are used internally by the JVM.
    Here's a simple program that creates a fixed-size threadpool and runs jobs. It limits the number of concurrent threads to 3. Compare it to what you're doing, I'm sure that you'll find something different. Also, verify that you're not creating threads somewhere else in your program; all of the ThreadPoolExecutor threads will have names of the form "pool-X-thread-Y"
    import java.util.concurrent.Executors;
    import java.util.concurrent.ExecutorService;
    public class ThreadPoolTest {
        public static void main(String[] args) {
            ExecutorService pool = Executors.newFixedThreadPool(3);
            for (int ii = 0 ; ii < 10 ; ii++) {
                pool.execute(new MyRunnable());
            }
            pool.shutdown();
        }
        private static class MyRunnable implements Runnable {
            public void run() {
                log("running");
                try {
                    Thread.sleep(1000L);
                } catch (InterruptedException e) {
                    log("interrupted");
                }
            }
            private void log(String msg) {
                System.err.println(
                        msg + " on " + Thread.currentThread().getName()
                        + " at " + System.currentTimeMillis());
            }
        }
    }

  • Thread pool throws reject exceptions even though the queue is not full

    Hi. I am using org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor, which is based on java.util.concurrent.ThreadPoolExecutor, in an environment under load.
    I see in some cases that this thread pool rejects tasks with a rejection exception even though the queue size is 0.
    According to the documentation, this thread pool should increase the pool size up to the core size and then wait until the queue is full before creating new threads.
    This is not what happens: for some reason the queue is not filled, but exceptions are thrown.
    Any ideas why this can happen?

    This is the stack trace:
    taskExecutorStats-1 2010-04-27 11:01:43,324 ERROR [com.expand.expandview.infrastructure.task_executor] TaskExecutorController: RejectedExecutionException exception in thread: 18790970, failed on thread pool: [email protected]544f07, to run logic: com.expand.expandview.infrastructure.logics.DispatchLogicsProviderLogic
    org.springframework.core.task.TaskRejectedException: Executor [java.util.concurrent.ThreadPoolExecutor@dd9007] did not accept task: com.expand.expandview.infrastructure.task_executor.TaskExecuterController$1@141f728; nested exception is java.util.concurrent.RejectedExecutionException
         at org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor.execute(ThreadPoolTaskExecutor.java:305)
         at com.expand.expandview.infrastructure.task_executor.TaskExecuterController.operate(TaskExecuterController.java:68)
         at com.expand.expandview.infrastructure.proxies.DataProxy.callLogic(DataProxy.java:131)
         at com.expand.expandview.infrastructure.proxies.DataProxy.operate(DataProxy.java:109)
         at com.expand.expandview.infrastructure.logics.AbstractLogic.operate(AbstractLogic.java:455)
         at com.expand.expandview.server.app.logics.stats.StatsPersisterSingleChunkLogic.persistSlots(StatsPersisterSingleChunkLogic.java:362)
         at com.expand.expandview.server.app.logics.stats.StatsPersisterSingleChunkLogic.doLogic(StatsPersisterSingleChunkLogic.java:132)
         at com.expand.expandview.infrastructure.logics.AbstractLogic.execute(AbstractLogic.java:98)
         at com.expand.expandview.server.app.logics.ApplicationLogic.execute(ApplicationLogic.java:79)
         at com.expand.expandview.infrastructure.task_executor.TaskExecuterControllerDirect.operate(TaskExecuterControllerDirect.java:33)
         at com.expand.expandview.infrastructure.proxies.LogicProxy.service(LogicProxy.java:62)
         at com.expand.expandview.infrastructure.logics.AbstractLogic.service(AbstractLogic.java:477)
         at com.expand.expandview.server.app.logics.stats.StatsPersisterLogic.persist(StatsPersisterLogic.java:48)
         at com.expand.expandview.server.app.logics.stats.StatsPersisterLogic.doLogic(StatsPersisterLogic.java:19)
         at com.expand.expandview.infrastructure.logics.AbstractLogic.execute(AbstractLogic.java:98)
         at com.expand.expandview.server.app.logics.ApplicationLogic.execute(ApplicationLogic.java:79)
         at com.expand.expandview.infrastructure.task_executor.TaskExecuterController$1.run(TaskExecuterController.java:80)
         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
         at java.lang.Thread.run(Thread.java:619)
    Caused by: java.util.concurrent.RejectedExecutionException
         at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:1760)
         at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:767)
         at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:658)
         at org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor.execute(ThreadPoolTaskExecutor.java:302)
         ... 19 more
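    If it is acceptable for occasional bursts to run on the submitting thread rather than fail, one common mitigation (shown here with the plain java.util.concurrent API that Spring's ThreadPoolTaskExecutor wraps) is to install CallerRunsPolicy instead of the default AbortPolicy, so saturation throttles the producer instead of throwing:
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class ThrottlingExecutor {
        public static ThreadPoolExecutor create(int core, int max, int queueCapacity) {
            return new ThreadPoolExecutor(
                    core, max, 60L, TimeUnit.SECONDS,
                    new LinkedBlockingQueue<Runnable>(queueCapacity),
                    // When the threads and the queue are both full, run the task on the caller's
                    // thread instead of throwing RejectedExecutionException.
                    new ThreadPoolExecutor.CallerRunsPolicy());
        }
    }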

  • The problem in the thread pool implemented by myself

    Hello, I need a thread pool in J2ME CDC 1.0 + FP 1.0, so I implemented a simple one myself that also meets my own requirements.
    Here is the main idea:
    The thread pool creates a fixed number of threads in advance. When a task comes in, it is put in the waiting list. All threads try to get tasks from the waiting list. If no task exists, the threads wait until someone wakes them up.
    Here are my own requirements:
    1. When a task has finished its work in one execution, it is put in the waiting list for the next run.
    2. The task can control the delay between when the task owner tries to put it in the waiting list and when the task is actually put in the waiting list. I need this because sometimes I don't want the tasks to run too often and want to save some CPU usage.
    In my program, I create two thread pools. In one pool, no task uses the delay, and the thread pool works very well. The other pool has the tasks that use the delay, and sometimes, as I can see from the printed information, there are many tasks in the waiting list but only 0 or 1 threads executing tasks. It seems that the waiting threads cannot wake up when new tasks come.
    I suspect the code in addTask(), but cannot find the reason why it fails. Could anyone please help me find the bug in my code? I put the code of the thread pool below.
    Thank you in advance
    Zheng Da
    ThreadPool.java
    package j2me.concurrent;
    import java.util.LinkedList;
    import java.util.Timer;
    import java.util.TimerTask;
    import alvis.general.Util;
    public class ThreadPool {
         private int maxQueueSize;
         private boolean running = true;
         private Thread[] threads;
         private LinkedList tasks = new LinkedList();
         private Timer timer = new Timer(true);
         private AtomicInteger usingThreads = new AtomicInteger(0);
         private synchronized boolean isRunning() {
              return running;
         }
         private synchronized void stopRunning() {
              running = false;
         }
         private synchronized PoolTask getTask() {
              while (tasks.isEmpty() && isRunning()) {
                   try {
                        this.wait();
                   } catch (InterruptedException e) {
                        e.printStackTrace();
                   }
              }
              if (tasks.isEmpty())
                   return null;
              // Util.log.info(Thread.currentThread().getName() +
              // " gets a task, left tasks: " + tasks.size());
              return (PoolTask) tasks.removeFirst();
         }
         private synchronized void addTaskNoDelay(PoolTask task) {
              tasks.addLast(task);
              notifyAll();
         }
         private synchronized void addTask(final PoolTask task) {
              long delay = task.delay();
              if (delay == 0) {
                   addTaskNoDelay(task);
              } else {
                   timer.schedule(new TimerTask() {
                        public void run() {
                             addTaskNoDelay(task);
                        }
                   }, delay);
              }
         }
         private synchronized int numTasks() {
              return tasks.size();
         }
         private class PoolThread extends Thread {
              public void run() {
                   Util.poolThreads.inc();
                   while (isRunning()) {
                        PoolTask task = getTask();
                        if (task == null) {
                             Util.poolThreads.dec();
                             return;
                        }
                        usingThreads.inc();
                        long currentTime = System.currentTimeMillis();
                        task.run();
                        long elapsedTime = System.currentTimeMillis() - currentTime;
                        if (elapsedTime > 100)
                             System.err.println(task.toString() + " takes " + ((double) elapsedTime)/1000 + "s");
                        usingThreads.dec();
                        if (!task.finish()) {
                             addTask(task);
                        }
                   }
                   Util.poolThreads.dec();
              }
         }
         public ThreadPool(int size, int taskQueueSize) {
              maxQueueSize = taskQueueSize;
              threads = new Thread[size];
              for (int i = 0; i < threads.length; i++) {
                   threads[i] = new PoolThread();
                   threads[i].start();
              }
         }
         public synchronized boolean executor(PoolTask task) {
              if (!isRunning()) {
                   return false;
              }
              Util.log.info("Thread Pool gets " + task + ", there are "
                        + numTasks() + " waiting tasks");
              if (numTasks() >= maxQueueSize) {
                   return false;
              }
              addTask(task);
              return true;
         }
         public synchronized void destroy() {
              stopRunning();
              timer.cancel();
              // TODO: I am not sure it can wake up all threads and destroy them.
              this.notifyAll();
         }
         public synchronized void printSnapshot() {
              System.err.println("using threads: " + usingThreads + ", remaining tasks: " + tasks.size());
         }
    }
    PoolTask.java
    package j2me.concurrent;
    public interface PoolTask extends Runnable {
         /**
          * It shows if the task has already finished.
          * If it isn't, the task will be put in the thread pool for the next execution.
          * @return
          */
         boolean finish();
         /**
          * It shows the delay in milliseconds that the task is put in the thread pool.
          * @return
          */
         long delay();
    }
    Is receiving/sending task packets a time-consuming operation in your case or not? If it is not, you do not need to use thread pools at all. You can create a queue, as in your code, backed by a linked list and dispatch that queue periodically with minimal monitor usage. Try this:
    import java.util.LinkedList;
    public class PacketDispatcher extends Thread {
        LinkedList list = new LinkedList();
        public PacketDispatcher (String name) {
            super(name);
        }
        public void putTask(Task task) {
            synchronized (list) {
                list.add(task);
                list.notify();
            }
        }
        public void run() {
            while (true/* your condition */) {
                Task task = null;
                synchronized (list) {
                    while (list.isEmpty())
                        try {
                            list.wait();
                        } catch (InterruptedException e) {
                            e.printStackTrace();
                        }
                    task = (Task) list.poll();
                }
                if (task == null) {
                    try {
                        Thread.sleep(1);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    continue;
                }
                task.run();
                if (!task.isFinished()) {
                    putTask(task);
                }
                Thread.yield();
            }
        }
        public static void main(String[] args) {
            // just for test
            try {
                Thread.sleep(10000);
            } catch (InterruptedException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
            PacketDispatcher dispatcher = new PacketDispatcher("Packet Dispatcher");
            Task task = new Task();
            dispatcher.putTask(task);
            dispatcher.start();
            try {
                Thread.sleep(10000);
            } catch (InterruptedException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
            Task task2 = new Task();
            dispatcher.putTask(task2);
        }
    }
    class Task {
        long result = 0;
        public boolean isFinished () {
            if (getResult() >= 10000000) {
                return true;
            }
            return false;
        }
        public void run() {
            for (int i = 0; i < 1000; i++) {
                result += i;
            }
        }
        public long getResult () {
            return result;
        }
    }

  • JMS Threads disappear from JNI application

    My application is a combination of C++/FORTRAN/Java which runs on Solaris 5.9. Messages come into the application via a JMS topic (Sonic implementation). I use JNI (JNI_VERSION_1_2) to embed the JVM. I'm able to process incoming messages for a period of time - but then it appears that the JMS threads just 'go away'. I've tried increasing the stack/heap size of the JVM, but the threads still disappear. Sometimes the threads go away within minutes; other times it takes hours. I also have been unable to make a correlation between the rate of messages going into the application and the speed with which the threads disappear.
    When I enable JNI verbose messages, I see the following messages right before the JMS threads go away:
    [Loaded java/util/ResourceBundle$1.class from /usr/java1.2/jre/lib/rt.jar]
    [Loaded java/net/URLClassLoader$2.class from /usr/java1.2/jre/lib/rt.jar]
    Happy process thread dump:
    [Thread# 1] t@23: "JMS Session Delivery Thread"
    [Thread# 2] t@22: "ClientListener $CONNECTION$ Administrator"
    [Thread# 3] t@21: "ClientSender $CONNECTION$ Administrator"
    [Thread# 4] t@6: "Finalizer"
    [Thread# 5] t@5: "Reference Handler"
    [Thread# 6] t@4: "Signal dispatcher"
    [Thread# 7] t@1: "main"
    [Thread# 8] t@3: <Sync-GC thread>
    [Thread# 9] t@2: <Sync-GC thread>
    Unhappy process thread dump - JMS threads have disappeared:
    [Thread# 1] t@5: "Finalizer"
    [Thread# 2] t@4: "Reference Handler"
    [Thread# 3] t@3: "Signal dispatcher"
    [Thread# 4] t@1: "main"
    [Thread# 5] t@2: <Sync-GC thread>
    Any ideas??? I've tried other versions of Java - but with no success.
    Please help!

    Pffft.
    Is this some sort of joke? I mean you are joking right?
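    Setting the joke reply aside: one thing worth ruling out (an assumption, not a confirmed diagnosis for this case) is that the embedding code holds no live reference to the JMS Connection/Session/consumer objects, so they become garbage collectable and the provider's client threads shut down with them. Pinning them in fields and registering an ExceptionListener at least makes a silent disconnect visible. The JNDI names below are placeholders:
    import javax.jms.*;
    import javax.naming.InitialContext;

    public class JmsBridge implements ExceptionListener {
        // Keep strong references for the life of the process so nothing is finalized under us.
        private TopicConnection connection;
        private TopicSession session;
        private TopicSubscriber subscriber;

        public void start(MessageListener listener) throws Exception {
            InitialContext ctx = new InitialContext();
            TopicConnectionFactory cf = (TopicConnectionFactory) ctx.lookup("MyTopicConnectionFactory"); // placeholder
            Topic topic = (Topic) ctx.lookup("MyTopic");                                                 // placeholder

            connection = cf.createTopicConnection();
            connection.setExceptionListener(this); // get told if the provider drops the connection
            session = connection.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
            subscriber = session.createSubscriber(topic);
            subscriber.setMessageListener(listener);
            connection.start();
        }

        public void onException(JMSException e) {
            // Log and/or reconnect here instead of letting delivery die silently.
            e.printStackTrace();
        }
    }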
