ORB Connections/Thread pool

Does anyone know the maximum number of connections that AppServer 7 can handle? Any idea about the maximum size of the thread pool?

I have finally found the solution. The solution was to get hold of the ORB of the app server, and the following two lines did the trick:
org.omg.CORBA.ORB orb = null;
try {
    javax.naming.Context jndiRootCtx = new InitialContext();
    // The container binds its ORB instance at java:comp/ORB
    orb = (ORB) jndiRootCtx.lookup("java:comp/ORB");
} catch (NamingException e) {
    e.printStackTrace();
}

Similar Messages

  • Thread pool rejecting threads when I don't think it should, ideas?

    Hi,
    I have a server application in which I only want a specific number of simultaneous requests. If the server gets more than this number, it is supposed to close the connection (sending an HTTP 503 error to the client). To do this I used a fixed thread pool. When I start the server and submit the max number of requests I get the expected behavior. However, if I resubmit the requests (within a small period of time, e.g. 1-15 seconds after the first batch) I get very odd behavior in that some of the requests are rejected. For example, if I set the max to 100, the first set of requests will work fine (100 requests, 100 responses). I then submit again and a small number will be rejected (I've seen it range from 1 to 15 rejected)....
    I made a small app which kind of duplicates this behavior (see below). Basically what I see is that the first time submitting requests works fine, but the second time I get a rejected one. As best as I can tell, none should be rejected....
    Here is the code; I welcome your thoughts, or let me know if you see something I am doing wrong here...
    <pre>
    import java.util.concurrent.*;
    import java.util.concurrent.atomic.AtomicInteger;

    public class ThreadPoolTest {
        static AtomicInteger count = new AtomicInteger();

        public static class threaded implements Runnable {
            @Override
            public void run() {
                System.out.println("In thread: " + Thread.currentThread().getId());
                try {
                    Thread.sleep(500);
                } catch (InterruptedException e) {
                    System.out.println("Thread: " + Thread.currentThread().getId()
                            + " interrupted");
                }
                System.out.println("Exiting run: " + Thread.currentThread().getId());
            }
        }

        private static int maxThreads = 3;
        private ThreadPoolExecutor pool;

        public ThreadPoolTest() {
            super();
            // 1 core thread, maxThreads - 1 = 2 max threads, queue capacity 1:
            // at most 3 tasks can be in flight before submissions are rejected.
            pool = new java.util.concurrent.ThreadPoolExecutor(
                    1, maxThreads - 1, 60L, TimeUnit.SECONDS,
                    new ArrayBlockingQueue<Runnable>(1));
        }

        public static void main(String[] args) throws InterruptedException {
            ThreadPoolTest object = new ThreadPoolTest();
            object.doThreads();
            Thread.sleep(3000);
            object.doThreads();
            object.pool.shutdown();
            try {
                object.pool.awaitTermination(60, TimeUnit.SECONDS);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }

        private void doThreads() {
            int submitted = 0, rejected = 0;
            int counter = count.getAndIncrement();
            for (int x = 0; x < maxThreads; x++) {
                try {
                    System.out.println("Run #: " + counter + " submitting " + x);
                    pool.execute(new threaded());
                    submitted++;
                } catch (RejectedExecutionException re) {
                    System.err.println("\tRun #: " + counter + ", submission " + x
                            + " was rejected");
                    System.err.println("\tQueue active: " + pool.getActiveCount());
                    System.err.println("\tQueue size: " + pool.getPoolSize());
                    rejected++;
                }
            }
            System.out.println("\n\n\tRun #: " + counter);
            System.out.println("\tSubmitted: " + (submitted + rejected));
            System.out.println("\tAccepted: " + submitted);
            System.out.println("\tRejected: " + rejected + "\n\n");
        }
    }
    </pre>

    First thank you for taking the time to reply, I do appreciate it.
    jtahlborn - The code provided here is a contrived example trying to emulate the bigger app as best as I could. The actual program doesn't have any sleeps; the sleep in the secondary thread is to simulate the program doing some work and replying to a request. The sleep in the primary thread is to simulate a small delay between 'requests' to the pool. I can make this 1 second and up to (at least) 5 seconds with the same results. Additionally, I can take out the sleep in the secondary thread and still see a rejection.
    EJP - Yes, I am aware of the TCP/IP queue; however, I don't see that as relevant to my question. The idea is not to prevent the connection but to respond to the client saying we can't process the request (send an "HTTP 503" error). So basically, if we have, say, 100 threads running, then the 101st connection will get a 503 error and the connection will be closed. (A sketch of a rejection handler along those lines appears after the output below.)
    Also my test platform - Windows 7 64bit running Java 1.6.0_24-b07 (32bit) on an Intel Core i7.
    It occurred to me that I did not show the output of the test program. As the output below shows, the first set of requests is all processed properly. The second set of requests is not. The pool should have 2 threads and 1 slot in the queue, so by the time the second "request" is made at least 2 of the requests from the first call should be done processing, so I could possibly understand run 1, submit #2 failing, but not submit 1.
    <pre>
    Run #: 0 submitting 0
    Run #: 0 submitting 1
    Run #: 0 submitting 2
    In thread: 8
    In thread: 9
    Exiting run: 8
    Exiting run: 9
         Run #: 0
         Submitted: 3
         Accepted: 3
         Rejected: 0
    In thread: 8
    Exiting run: 8
    Run #: 1 submitting 0
    In thread: 9
    Run #: 1 submitting 1
         Run #: 1, submission 1 was rejected
         Queue active: 1
         Queue size: 2
    Run #: 1 submitting 2
         Run #: 1
         Submitted: 3
         Accepted: 2
         Rejected: 1
    In thread: 8
    Exiting run: 9
    Exiting run: 8
    </pre>
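    For the 503-on-overload behaviour described above, one option is to install a custom RejectedExecutionHandler on the pool instead of catching RejectedExecutionException at every submission point. The sketch below is only illustrative and is not from the original thread: it assumes each task carries its client Socket, and the raw HTTP response writing is simplified.
    <pre>
    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.Socket;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.RejectedExecutionHandler;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class OverloadRejectingPool {

        /** A task that carries its client socket so a rejection can still be answered. */
        static class HttpTask implements Runnable {
            final Socket client;
            HttpTask(Socket client) { this.client = client; }
            public void run() {
                // read the request, write the normal response, close the socket
            }
        }

        /** Instead of throwing RejectedExecutionException, answer the client with 503. */
        static class Respond503 implements RejectedExecutionHandler {
            public void rejectedExecution(Runnable r, ThreadPoolExecutor pool) {
                if (!(r instanceof HttpTask)) {
                    return;
                }
                Socket client = ((HttpTask) r).client;
                try {
                    OutputStream out = client.getOutputStream();
                    out.write("HTTP/1.1 503 Service Unavailable\r\nContent-Length: 0\r\n\r\n".getBytes());
                    out.flush();
                } catch (IOException ignored) {
                    // client already gone; nothing more to do
                } finally {
                    try { client.close(); } catch (IOException ignored) { }
                }
            }
        }

        static ThreadPoolExecutor newBoundedPool(int maxWorkers) {
            return new ThreadPoolExecutor(maxWorkers, maxWorkers, 60L, TimeUnit.SECONDS,
                    new ArrayBlockingQueue<Runnable>(1),   // tiny queue so overload is rejected quickly
                    new Respond503());
        }
    }
    </pre>
    CallerRunsPolicy is another stock handler, but it makes the accepting thread do the work rather than reject the request, which is not what is wanted here.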

  • Pattern for Thread Pool?

    Hi
    i want to build a kind of download manager. The application should be able to handle some concurrent threads, each representing a download in progress.
    I thought it might be more efficient to reuse a download thread after the download has ended rather than create a new thread each time (like the connection object for DB queries). Is this right? If yes, I thought of building a thread pool that serves a limited number of threaded download objects as requested (am I on the right track?).
    Now, I have two basic problems: (a) is it right that, once the run() method of a thread has ended, the whole thread gets destroyed? If yes, how should I prevent the thread from being destroyed so I can reuse it later on? And (b) what would that pool mechanism look like? There must be some kind of vector where I put in and take out the threads.
    As you see, these are basic "pool" technique questions. So I thought maybe there is a design pattern that would give me the basic mechanism. Does anyone know of such a pattern?
    Thanks for your help
    josh

    > I thought it might be more efficient to reuse a download thread after the
    > download has ended rather than create a new thread each time (like the
    > connection object for db queries). Is this right?
    It may be right, if creating new threads wastes enough CPU cycles to justify the complication of a thread pool. Maybe for a high-load server it would be more efficient. You'll have to figure that out for your own specific application.
    Another good use for thread pools is to avoid putting time-consuming operations in ActionListeners, etc. Instead you can have them pass the task off to a thread pool, keeping the GUI responsive.
    > Now, I have two basic problems: (a) is it right that,
    > if the run() method of a thread has ended, the whole
    > thread gets destroyed? If yes, how should I prevent
    > the thread from being destroyed, so I can reuse it
    > later on? Second (b) what would that pool mechanism look
    > like? There must be some kind of vector where I
    > put in and take out the threads.
    (a) You are right. Therefore, the worker threads should not exit their run() methods until interrupted. (b) Worker threads could check a job queue (containing Runnables, perhaps), and if there are none, they should wait() on some object. When another thread adds a new job to the queue, it should call notify() on the same object, thus waking up one of the worker threads to perform the task (see the sketch below).
    I wrote a thread pool once, just as an exercise. You will run into a number of problems and design issues (such as: what should the worker threads do when interrupted, exit immediately or clear the job queue and then exit?). If you have any more questions, ask in this thread.
    Krum
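    A minimal sketch of the wait()/notify() worker-pool mechanism described above (class and method names are illustrative; in practice java.util.concurrent's ExecutorService already provides this):
    <pre>
    import java.util.LinkedList;

    public class SimpleThreadPool {
        private final LinkedList<Runnable> jobs = new LinkedList<Runnable>();

        public SimpleThreadPool(int workers) {
            for (int i = 0; i < workers; i++) {
                Thread t = new Thread(new Worker(), "worker-" + i);
                t.setDaemon(true);
                t.start();
            }
        }

        // Called by producers; wakes up one idle worker.
        public void submit(Runnable job) {
            synchronized (jobs) {
                jobs.addLast(job);
                jobs.notify();
            }
        }

        private class Worker implements Runnable {
            public void run() {
                while (!Thread.currentThread().isInterrupted()) {
                    Runnable job;
                    synchronized (jobs) {
                        while (jobs.isEmpty()) {
                            try {
                                jobs.wait();       // idle until a job is queued
                            } catch (InterruptedException e) {
                                return;            // pool is being torn down
                            }
                        }
                        job = jobs.removeFirst();
                    }
                    try {
                        job.run();                 // thread is reused for the next job
                    } catch (RuntimeException e) {
                        e.printStackTrace();
                    }
                }
            }
        }
    }
    </pre>
    Submitting work is then just pool.submit(someRunnable); the worker that picks it up goes back to waiting afterwards, which is the thread reuse the question asks about.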

  • 100% thread pool/CPU usage prevent MII Applications from working

    Hello to all,
    our MII applications were running on MII 11.5 (Windows Server 2003, with IIS) without any problems.
    We migrated these MII applications to SAP xMII Version 12.0 SP8, SAP NW 7.00 SP20, Java HotSpot(TM) 64-Bit Server VM (build 1.4.2_22-rev-b03, mixed mode).
    Since then we have faced several problems that prevent the MII applications from working, e.g.
    - 100% application thread pool usage rate
    - 100% CPU usage
    - an awful lot of HTTP connections
    Several SAP support calls didn't find the root cause.
    Workaround: restart the SAP system weekly.
    Does anyone have any idea how to proceed?
    Thanks
    Simone

    Hi Mike,
    SAP support told us to install this Java version.
    Furthermore SAP Note 716604 says "do not use J2SE 5.0" and "Currently, we recommend 1.4.2_24 b06".
    Any more ideas?
    Regards,
    Simone

  • On Threads, Pools and other beasts

    I'm seeing some unexpected behaviours in our production system relating to
    (AFAICS) threads and connection pools, and hope you could bring some info
    about them.
    Our system is built over 4 PIII Xeon processors under Linux.
    We do currently have this configuration:
    15 threads for the "default" queue and 10 for a special servlet one; we
    decided to separate threads into two queues in order to assure our users
    always get a thread regardless of what the rest of the system is "doing".
    Even though we do not have a lot of users (about 20 or so), they generate a
    lot of load because of the business logic inherent to the application.
    As an example, so you can understand what happens with practically any user
    "action", consider this.
    After a user "confirms" some data via a servlet, and after executing the data
    validation and business rules, some messages are sent via JMS to the
    "asynchronous" part of the system (which runs in the same weblogic
    instance). After committing the user transaction, and thus releasing the
    servlet thread so it can be used by the same or another user, JMS messages
    are delivered to MDBs that must transform information from the on-line (servlet)
    processes in different ways so it can be stored in other systems, i.e.
    a mainframe, an XML DB and possibly another RDBMS. Our
    configuration allows as many as 10 MDBs of each type (I mean
    for each kind of servlet "action") to run concurrently, and as you can
    imagine those processes take some time to communicate with the destination
    systems and perform their work.
    We end up with a lot of concurrent processes in our system, which sometimes
    leaves the users complaining about system responsiveness.
    After all this explanation I would like to know if 25 threads for
    "background" and on-line processes is too low (as I'm afraid they are). The
    problem is we can't seem to increase the number of threads without being
    very careful with JDBC connection pools.
    Currently we have two connection pools. We do demarcate transactions in the
    clients (servlets, batch processes) we have a "transacted" pool and a "non
    transacted" one.
    We are delegating persistence to the container (formally, in our case we are
    using TopLink persistence, and it uses both types of pools in its deployment
    descriptor).
    Our configuration is as follows:
    Oracle pool NON Tx 60 connections
    Oracle pool Tx 30 connections
    initially we create 5 connections for each pool with an increment of 5 for
    each one too.
    From the tests I have made, I have discovered that setting more threads than
    the minimum size of the pools leads to this exception:
    weblogic.common.ResourceException: No available connections in pool myNonTxPool
        at weblogic.common.internal.ResourceAllocator.reserve(ResourceAllocator.java:578)
        at weblogic.common.internal.ResourceAllocator.reserve(ResourceAllocator.java:405)
        at weblogic.common.internal.ResourceAllocator.reserveNoWait(ResourceAllocator.java:373)
        at weblogic.jdbc.common.internal.ConnectionPool.reserve(ConnectionPool.java:165)
        at weblogic.jdbc.common.internal.ConnectionPool.reserveNoWait(ConnectionPool.java:126)
        at weblogic.jdbc.common.internal.RmiDataSource.getPoolConnection(RmiDataSource.java:194)
        at weblogic.jdbc.common.internal.RmiDataSource.getConnection(RmiDataSource.java:219)
    The behaviour I would expect is that when a thread needs a connection and
    there isn't any available, the thread would be blocked and would receive
    the connection once one is released; of course, as I can see in the stack, the
    ConnectionPool.reserveNoWait() method behaves just the other way.
    The main problem with this is that, as you can see, we are "forced" to spend
    90 (60+30) connections to the DB (even though we will never use more than 25
    (15+10) simultaneously) just because we must assure that there is at least
    one "reserved" connection for each thread.
    Our DBA thinks it is unacceptable that we tie up such a number of
    connections that could otherwise be used by other application(s) (as the DB is
    shared with other apps).
    Currently our DB system is not set up as "multithreaded", so each connection
    created against the DB is a process on the system, and of course those are a
    really scarce resource.
    My question is: what would be a "fine" number of threads for an application
    like this that is mainly "background-batch processing" while assuring on-line
    users always have a thread available?
    I have just one more doubt (maybe this is not the right thread to ask it
    in, but...): how does the UserTransaction actually work? I mean, is the
    connection given to the thread (and thus taken from the pool) as soon as the
    thread begins its work, or is it given at the instant of "committing" to the
    DB? I know using TopLink may change the default Weblogic CMP behaviour, but I
    would like to know what the "default" Weblogic behaviour is; and, what
    happens when you don't start a transaction in the client and total execution
    time exceeds 30 seconds? I have seen a rollback due to "exceeding" those 30
    seconds although I'm sure we do not open any transaction. What kind of
    "transaction" is that? Is it just a way for Weblogic to assure a thread is not
    "locked" for more than a certain period of time so the system never "stalls"?
    Thanks in advance.
    Regards.
    Ignacio.

    Hi Ignacio,
    See my answer inline.
    "Ignacio G. Dupont" <[email protected]> wrote in message
    news:[email protected]...
    > I'm seeing some unexpected behaviors in our production system relating to
    > (AFAICS) threads and connection pools, and hope you could bring some info
    > about them.
    > Our system is built over 4 PIII Xeon processors under Linux.
    > We do currently have this configuration:
    > 15 threads for the "default" queue and 10 for a special servlet one; we
    > decided to separate threads into two queues in order to assure our users
    > always get a thread regardless of what the rest of the system is "doing".
    Those numbers define the number of concurrent requests the services
    associated with those queues can handle. If monitoring shows that CPU
    utilization is not high, let's say less than 90%, you can increase them.
    > Even though we do not have a lot of users (about 20 or so), they generate a
    > lot of load because of the business logic inherent to the application.
    > As an example, so you can understand what happens with practically any user
    > "action", consider this. [eaten]
    > We end up with a lot of concurrent processes in our system, which sometimes
    > leaves the users complaining about system responsiveness.
    You will have to run a load test in your QA environment and play with queue
    sizes. In addition, you may want to run a profiler (like JProbe or OptimizeIt)
    at maximum load to find out whether there are bottlenecks in the application.
    > After all this explanation I would like to know if 25 threads for
    > "background" and on-line processes is too low (as I'm afraid they are).
    It all depends on the usage pattern. I'd say that for a production environment
    with any noticeable load, it's low.
    > The problem is we can't seem to increase the number of threads without being
    > very careful with JDBC connection pools.
    Yes, you will have to increase the size of the pools to match the maximum
    number of ongoing transactions. The minimum would be the number of execution
    threads. The actual number should be determined either by load testing or
    by setting it to a guaranteed high level.
    > Currently we have two connection pools. We demarcate transactions in the
    > clients (servlets, batch processes), so we have a "transacted" pool and a
    > "non transacted" one.
    > We are delegating persistence to the container (formally, in our case we are
    > using TopLink persistence, and it uses both types of pools in its deployment
    > descriptor).
    > Our configuration is as follows:
    > Oracle pool NON Tx 60 connections
    > Oracle pool Tx 30 connections
    > Initially we create 5 connections for each pool with an increment of 5 for
    > each one too.
    > From the tests I have made, I have discovered that setting more threads than
    > the minimum size of the pools leads to this exception:
    That's quite natural.
    > weblogic.common.ResourceException: No available connections in pool
    > myNonTxPool [eaten]
    > The behaviour I would expect is that when a thread needs a connection and
    > there isn't any available, the thread would be blocked and would receive
    > the connection once one is released; of course, as I can see in the stack,
    > the ConnectionPool.reserveNoWait() method behaves just the other way.
    That would lock exec threads very quickly. A connection pool is a vital
    resource that has to be available constantly, so weblogic uses a fail-fast
    approach so that you can adjust the setting to match the highest load.
    > The main problem with this is that, as you can see, we are "forced" to spend
    > 90 (60+30) connections to the DB (even though we will never use more than 25
    > (15+10) simultaneously) just because we must assure that there is at least
    > one "reserved" connection for each thread.
    That's right.
    > Our DBA thinks it is unacceptable that we tie up such a number of
    > connections that could otherwise be used by other application(s) (as the DB
    > is shared with other apps).
    I don't think that's a correct observation. Oracle can be configured to handle
    more connections. I have seen weblogic pools configured to handle 200
    connections.
    > Currently our DB system is not set up as "multithreaded", so each connection
    > created against the DB is a process on the system, and of course those are a
    > scarce resource.
    Application demand for resources should be satisfied.
    > My question is: what would be a "fine" number of threads for an application
    > like this that is mainly "background-batch processing" while assuring on-line
    > users always have a thread available?
    It should be high enough to satisfy the requirement to handle a given number
    of concurrent requests processed on the given hardware. Normally this is
    determined by load testing and a gradual increase of this number to the point
    where you see that the hardware (seen as CPU load) cannot handle it. By the
    way, this point is sometimes unreachable as the application becomes DB-bound,
    i.e. the bottleneck shifts to the database.
    > I have just one more doubt (maybe this is not the right thread to ask it in,
    > but...): how does the UserTransaction actually work? I mean, is the
    > connection given to the thread (and thus taken from the pool) as soon as the
    > thread begins its work, or is it given at the instant of "committing" to the
    > DB?
    It's given when a connection, assuming it's obtained from a TxDataSource,
    is requested.
    > I know using TopLink may change the default Weblogic CMP behavior, but I
    > would like to know what the "default" Weblogic behavior is; and, what
    > happens when you don't start a transaction in the client and total execution
    > time exceeds 30 seconds? I have seen a rollback due to "exceeding" those 30
    > seconds although I'm sure we do not open any transaction. What kind of
    > "transaction" is that?
    For instance, stateful session beans are transactional.
    > Is it just a way for Weblogic to assure a thread is not "locked" for more
    > than a certain period of time so the system never "stalls"?
    Basically, no, it's not. There is no way to "unlock" a thread after a certain
    period. When a queue has finished processing, the TX monitor checks the
    timeout, and if there is one, issues a corresponding rollback. So it's
    possible for a thread to run for 10 hours even if the timeout is 30 seconds.
    Since 7.0 weblogic is capable of detecting such situations so that the
    administrator can be informed about it and the required actions can be taken
    [on the application side].
    Hope this helps.
    Regards,
    Slava Imeshev

  • Thread Pool's decreasing application performance

    Hi,
    In my application, I need to make 10,000 threads for network calls at one instant and release the threads after the content is downloaded.
    Content downloading in an individual thread takes less than 1 minute.
    When I try this using TimerTask and normal threads, it works perfectly.
    But after introducing a thread pool (initial: 20,000; maximum: 50,000), the response has degraded.
    Are there any limitations or known issues like this for thread pools?
    Thanks!

    rock_win wrote:
    > In my application, I need to make 10,000 threads for network calls at one
    > instant and release the threads after the content is downloaded.
    10,000 threads all doing network connects at the same time? You had better be
    contacting 10,000 distinct servers and have tons of network bandwidth.
    You'll hit problems at that level long before the number of threads becomes a
    problem.
    > Content downloading in an individual thread takes less than 1 minute.
    > When I try this using TimerTask and normal threads, it works perfectly.
    What are "normal threads"? How many operations are you running in parallel
    here?
    > But after introducing a thread pool (initial: 20,000; maximum: 50,000), the
    > response has degraded.
    You want to have 10,000 threads and declare a pool with 20,000 initial
    threads? Why?
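    For comparison, a rough sketch of the kind of bounded pool the reply is hinting at: far fewer threads than tasks, with each thread reused as downloads complete. The pool size and the download() method are placeholders for illustration, not recommendations for any particular workload.
    <pre>
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;

    public class DownloadPoolSketch {

        public static void main(String[] args) throws InterruptedException {
            // A pool much smaller than the task count: threads are reused as downloads finish.
            ExecutorService pool = Executors.newFixedThreadPool(200);
            List<Future<?>> results = new ArrayList<Future<?>>();
            for (int i = 0; i < 10000; i++) {
                final int id = i;
                results.add(pool.submit(new Runnable() {
                    public void run() {
                        download(id);   // placeholder for the real network call
                    }
                }));
            }
            pool.shutdown();
            pool.awaitTermination(30, TimeUnit.MINUTES);
        }

        private static void download(int id) {
            // fetch the content for task 'id' here
        }
    }
    </pre>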

  • Submit a large number of tasks to a thread pool (more than 10,000)

    I want to submit a large number of tasks to a thread pool (more than 10,000).
    Since a thread pool takes Runnables as input, I have to create as many Runnable objects as there are tasks, but since the number of tasks is very large this causes a memory overflow and my application crashes.
    Can you suggest some way to overcome this problem?

    Ravi_Gupta wrote:
    > I have to serve them infinitely depending upon the choice of the user.
    > Take a look at my code (code of MyCustomRunnable is already posted)
    > public void start(Vector<String> addresses)
    > searching = true;
    What is this for? Is it a kind of comment?
    > Vector<MyCustomRunnable> runnables = new Vector<MyCustomRunnable>(1,1);
    > for (String address : addresses)
    > try
    > runnables.addElement(new MyCustomRunnable(address));
    > catch (IOException ex)
    > ex.printStackTrace();
    > }
    Why does MyCustomRunnable throw an IOException? Why is it using up resources when it hasn't started? Why build this vector at all?
    > //ThreadPoolExecutor pool = new ThreadPoolExecutor(100,100,50000L,TimeUnit.MILLISECONDS,new LinkedBlockingQueue());
    > ExecutorService pool = Executors.newFixedThreadPool(100);
    You have 100 CPUs? Wow! I can only assume your operations are blocking on a Socket connection most of the time.
    > boolean interrupted = false;
    > Vector<Future<String>> futures = new Vector<Future<String>>(1,1);
    You don't save much by reusing your vector here.
    > for(int i=1; !interrupted; i++)
    You are looping here until the thread is interrupted; why are you doing this? Are you trying to generate load on a remote server?
    > System.out.println("Cycle: " + i);
    > for(MyCustomRunnable runnable : runnables)
    Change the name of your Runnable, as it clearly does much more than that. Typically a Runnable is executed once and does not create resources in its constructor nor have a cleanup method.
    > futures.addElement((Future<String>) pool.submit(runnable));
    Again, it is unclear why you would use a Vector rather than a List here.
    > for(Future<String> future : futures)
    > try
    > future.get();
    > catch (InterruptedException ex)
    > interrupted = true;
    If you want this to break the loop, put the try/catch outside the loop.
    > ex.printStackTrace();
    > catch (ExecutionException ex)
    > ex.printStackTrace();
    If you are generating a load test you may want to record this kind of failure, e.g. count them.
    > futures.clear();
    > try
    > Thread.sleep(60000);
    Why do you sleep even if you have been interrupted? For better timing, you should sleep before checking whether your futures have finished.
    > catch(InterruptedException e)
    > searching = false;
    Again, this does nothing.
    > System.out.println("Thread pool terminated..................");
    > //return;
    Remove this comment; it's dangerous.
    > break;
    Why do you have two ways of breaking the loop? Why not interrupted = true here?
    > searching = false;
    > System.out.println("Shut downing pool");
    > pool.shutdownNow();
    > try
    > for(MyCustomRunnable runnable : runnables)
    > runnable.close(); //release resources associated with it.
    > catch(IOException e)
    Put the try/catch inside the loop. You may want to ignore the exception, but if one close() fails, the rest of the resources won't get cleaned up.
    > The above code serves the task infinitely until it is terminated by the user.
    > I had created a large number of runnables and future objects, and the fact that
    > they remain in memory until the user terminates the operation might be the
    > cause of the memory overflow.
    It could be the size of the resources each runnable holds. Have you tried increasing your maximum memory, e.g. -Xmx512m?
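    One way to keep memory bounded while feeding a very large number of tasks to a pool is to throttle submissions, for example with a Semaphore sized somewhat larger than the pool. This is only a sketch; the pool size, permit count, and task body are illustrative rather than taken from the thread.
    <pre>
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Semaphore;
    import java.util.concurrent.TimeUnit;

    public class ThrottledSubmit {

        public static void main(String[] args) throws InterruptedException {
            final ExecutorService pool = Executors.newFixedThreadPool(100);
            // Allow at most 200 tasks to be queued or running at any moment.
            final Semaphore inFlight = new Semaphore(200);

            for (int i = 0; i < 1000000; i++) {
                inFlight.acquire();              // blocks instead of building a huge backlog
                pool.execute(new Runnable() {
                    public void run() {
                        try {
                            // do the real work for one task here
                        } finally {
                            inFlight.release();  // free a slot for the next submission
                        }
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
        }
    }
    </pre>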

  • Error making initial connections for pool

              I'm using WLS6.1 on NT, trying my personal implementation of a connector (non
              CCI pattern) with a particular EIS.
              I'm getting this error... what does it depend on?
              ####<25-ott-00 19.52.41 CEST> <Info> <Connector> <semarmw0780> <examplesServer>
              <main> <system> <> <190014> <Initializing connection pool for resource adapter
              BlackBoxNoTx.>
              ####<25-ott-00 19.52.42 CEST> <Info> <Connector> <semarmw0780> <examplesServer>
              <main> <system> <> <190010> <Unable to determine Resource Principal for Container
              Managed Security Context.>
              ####<25-ott-00 19.52.42 CEST> <Info> <Connector> <semarmw0780> <examplesServer>
              <main> <system> <> <190035> <There was/were 1 physical connection(s) created with
              the following Meta Data: Product name of EIS instance: DBMS:cloudscape Product
              version of EIS instance: 3.5.1 Maximum number of connections supported from different
              processes: 0 User name for connection: null>
              ####<25-ott-00 19.52.42 CEST> <Info> <Connector> <semarmw0780> <examplesServer>
              <main> <system> <> <190026> << BlackBoxNoTx > Connection Pool initialization has
              completed successfully.>
              ####<25-ott-00 19.52.42 CEST> <Info> <Connector> <semarmw0780> <examplesServer>
              <main> <system> <> <190014> <Initializing connection pool for resource adapter
              SMJConnector.>
              ####<25-ott-00 19.52.42 CEST> <Info> <Connector> <semarmw0780> <examplesServer>
              <main> <system> <> <190010> <Unable to determine Resource Principal for Container
              Managed Security Context.>
              ####<25-ott-00 19.52.47 CEST> <Info> <Connector> <semarmw0780> <examplesServer>
              <main> <system> <> <190010> <Unable to determine Resource Principal for Container
              Managed Security Context.>
              ####<25-ott-00 19.53.06 CEST> <Error> <Connector> <semarmw0780> <examplesServer>
              <main> <system> <> <190024> << SMJConnector > Error making initial connections
              for pool. Reason: examplescnnct.ConnectionImpl>
              ####<25-ott-00 19.53.06 CEST> <Warning> <Connector> <semarmw0780> <examplesServer>
              <main> <system> <> <190025> << SMJConnector > Connection Pool has been initialized
              with no connections.>
              ####<25-ott-00 19.53.06 CEST> <Info> <Connector> <semarmw0780> <examplesServer>
              <main> <system> <> <190026> << SMJConnector > Connection Pool initialization has
              completed successfully.>
              ####<25-ott-00 19.54.30 CEST> <Info> <Connector> <semarmw0780> <examplesServer>
              <Application Manager Thread> <> <> <190027> << SMJConnector > Shutting down connections.>
              ####<25-ott-00 19.54.30 CEST> <Info> <Connector> <semarmw0780> <examplesServer>
              <Application Manager Thread> <> <> <190028> << SMJConnector > Connections shutdown
              successfully.>
              

    Have you configured your principal map? Connection pools are initially populated using the
              default ("*") principal from the map.
              http://edocs.bea.com/wls/docs61/jconnector/config.html#1237429
              HTH.
              FilAiello wrote:
              > I'm using WLS6.1 on NT, trying my personal implementation of a connector (non
              > CCI pattern) with a particular EIS.
              >
              > <examplesServer>
              > <main> <system> <> <190010> <Unable to determine Resource Principal for Container
              > Managed Security Context.>
              > ####<25-ott-00 19.52.42 CEST> <Info> <Connector> <semarmw0780> <examplesServer>
              Tom Mitchell
              [email protected]
              Very Current Beverly, MA Weather
              http://www.tom.org:8080
              

  • Basics of Thread Pool and MDB

    Hi,
    I am not able to connect the dots between self-tuning thread pool threads, the number of MDBs, and open connections to the queue manager (listeners).
    Following are my settings:
    1) Self-Tuning Thread Pool: default, i.e. 5
    2) Initial Beans in Free Pool: 100
    3) Max Beans in Free Pool: 200
    What I see:
    Pool Current Count: 100 (this is as expected)
    On startup of the server, 105 MDBs are created (no problem with this; I have a static variable incrementing in the constructor).
    When messages are sent to MQ (50-100), the "Beans In Use Count" under Monitoring never shows more than 16,
    and the "open MQ Count" in MQ Explorer is always 16.
    Questions:
    1) What do I need to change to increase the "Beans In Use Count" and the "open MQ Count" in MQ Explorer beyond 16?
    2) If the self-tuning thread pool size is 5, how come 16 beans are executed at once? Or is it just that 16 are picked from the pool and only 5 are executed at a given time?
    NOTE: I am using WebLogic app server to connect to IBM MQ with a JMS module, so creating a custom WorkManager and attaching it to my listener (MDB) is not supported by WebLogic; it says the MDB is not under the WebLogic thread pool, so this setting will have no effect.
    Thanks

    Hi ,
    You have to create a custom work manager and then associate that custom work manager with the dispatch policy of the MDB to increase the threads.
    The following links should help, as TomB has explained the same issue very well; do have a look at them.
    Re: WorkManager Max thread constraints not applied to MDB
    You can also go through these links, which will help you get more information:
    Topic: Configuring Enterprise Beans in WebLogic (search for "To map an EJB to a workmanager")
    http://middlewaremagic.com/weblogic/?p=5665
    Topic: WebLogic WorkManager with EJB3 And MaxThread Constraint
    http://middlewaremagic.com/weblogic/?p=5507
    Regards,
    Ravish Mody
    http://middlewaremagic.com/weblogic/
    Come, Join Us and Experience The Magic…

  • Number of open connections in pool

    Hi,
    I want to find out how many connections are being used from my connection pool. From the monitoring capabilities you can only know the number of timed-out connections, threads waiting, and connections which failed validation.
    How can I know the number of used connections?
    Also, can I monitor the opening and closing of connections, and when a connection is assigned to another thread, freed, etc.?
    Thanks
    Juan

    Hi,
    I have been using the same for the last 6 months on UR1, UR2 and even on SJS8 without any issues. All works fine. The setting in my case is like this:
    <jvm-options>-DMONITOR_JDBC=true</jvm-options>
    <jvm-options>-DMONITOR_TIME_PERIOD_SECONDS=60</jvm-options>
    <jvm-options>-DMONITOR_JDBC_TIME_PERIOD_SECONDS=60</jvm-options>
    But note that just having these settings in the JVM options is not enough. In the connection pool that you wish to monitor, you need to add one more property as follows:
    <property value="true" name="perf-monitor"/>
    This will make the connection pool available for monitoring. If you wish to turn it off, all you need to do is set the perf-monitor value to false.
    There is one other way to monitor the connection pool, though. This can be done at the system level, as follows:
    asadmin get --user <appservadmin user> --password <appservadmin password> --port <admin server port> --monitor <instance name>.resources.jdbc-connection-pool.<JDBC Connection pool name>.*
    To monitor the transactions, you can use this:
    asadmin get --user <appservadmin user> --password <appservadmin password> --port <admin server port> --monitor <instance name>.transaction-service.*
    Hope this is of help.
    Regards,
    Abrar

  • Unable to get a connection for pool - ResourceUnavailableException

    Hi
    I have a BPEL process which starts a child instance of another asynchronous BPEL process for each message in an XML file. The child BPEL process makes a call to the Oracle Apps JCA Adapter to push the data into E-Business Suite.
    All works perfectly except when the number of messages exceeds a certain limit (15 or so). The error received is as follows:
    "Exception occured when binding was invoked.
    Exception occured during invocation of JCA binding: "JCA Binding execute of Reference operation 'SyncPersonRecord' failed due to:
    JCA Binding Component connection issue. JCA Binding Component is unable to create an outbound JCA (CCI) connection.
    ebsPeoplesoftEmployees:SyncPersonRecord [ SyncPersonRecord_ptt::SyncPersonRecord(InputParameters,OutputParameters) ] :
    The JCA Binding Component was unable to establish an outbound JCA CCI connection due to the following issue:
    javax.resource.spi.ApplicationServerInternalException:
    Unable to get a connection for pool = 'eis/Apps/Apps',
    weblogic.common.resourcepool.ResourceUnavailableException:
    No resources currently available in pool eis/Apps/Apps to allocate to applications.
    Either specify a time period to wait for resources to become available, or increase the size of the pool and retry.. Please make sure that the JCA connection factory and any dependent connection factories have been configured with a sufficient limit for max connections.
    Please also make sure that the physical connection to the backend EIS is available and the backend itself is accepting connections. ".
    The invoked JCA adapter raised a resource exception.
    Please examine the above error message carefully to determine a resolution."
    Obviously what is happening is the connection pool maximum is reached (currently 15) and this is throwing the error.
    What I need to do is to implement the suggestion of "specifying a time period to wait", and I was hoping someone could tell me how to do this.
    I have tried setting the 'Connection Creation Retry Frequency' parameter to 30 seconds, which made no difference, and I have also checked the documentation on "Configuring and Managing JDBC Data Sources for Oracle WebLogic Server".
    Does anyone know if this is something that is implemented directly in the BPEL process/composite or in the connection source itself?
    Many thanks

    Open the JNDI entry eis/Apps/Apps in /console, go to the Configuration tab, increase the initial and max connection capacity, and save. Then retry the scenario.

  • A good design for a single thread pool manager using java.util.concurrent

    Hi,
    I am developing a client-side project which, in distinct subparts, will execute some tasks in parallel.
    So, at the risk of being long-winded, it looks something like this:
    program\
                \--flow A\
                           \task A1
                           \task A2
                \--flow B\
                            \task B1
                            \task B2
                            \...
    I would like both flow A and flow B (and all their launched sub-tasks) to be executed by the same thread pool, because I want to set a fixed number of threads that my program can globally run.
    My idea would be something like:
    public class ThreadPoolManager {
        private static ExecutorService executor;
        private static final Object classLock = ThreadPoolManager.class;

        /**
         * Returns the single instance of the ExecutorService by means of
         * lazy initialization.
         * @return the single instance of ThreadPoolManager
         */
        public static ExecutorService getExecutorService() {
            synchronized (classLock) {
                if (executor == null) {
                    // TODO: put the dimension of the FixedThreadPool in a property
                    executor = Executors.newFixedThreadPool(50);
                }
                return executor;
            }
        }

        /** Private constructor: deny creating a new object. */
        private ThreadPoolManager() {
        }
    }
    The tasks I have to execute will be of type Callable, since I expect some results, so you see an ExecutorService interface above.
    The flaws with this design is that I don't prevent the use (for example) of executor.shutdownNow(), which would cause problems.
    The alternative solution I have in mind would be something like having ThreadPoolManager to be a Singleton which implements ExecutorService, implementing all the methods with Delegation to an ExecutorService object created when the ThreadPoolManager object is instantiated for the first time and returned to client:
    public class ThreadPoolManager implements ExecutorService {
        private static ThreadPoolManager pool;
        private static final Object classLock = ThreadPoolManager.class;
        private ExecutorService executor;

        /**
         * Returns the single instance of the ThreadPoolManager by means of
         * lazy initialization.
         * @return the single instance of ThreadPoolManager
         */
        public static ExecutorService getThreadPoolManager() {
            synchronized (classLock) {
                if (pool == null) {
                    // create the real thread pool
                    // TODO: put the dimension of the FixedThreadPool in a property file
                    pool = new ThreadPoolManager();
                    pool.executor = Executors.newFixedThreadPool(50);
                    // pool.executor = Executors.newCachedThreadPool();
                }
                return pool;
            }
        }

        /** Private constructor: deny creating a new object. */
        private ThreadPoolManager() {
        }

        /* ======================================== */
        /* Implement the ExecutorService interface methods via delegation to executor
         * (forbidden method calls, like shutdownNow(), will be "ignored").
         */
        // .....
    }
    I hope I have explained everything, and hope to receive an answer that clarifies my doubts or gives me a hint about an alternative solution or an already-made one.
    ciao
    Alessio

    Two things. Firstly, it's better to use
    private static final Object classLock = new Object();
    because that saves you worrying about whether any other code synchronises on it. Secondly, if you do decide to go for the delegation route, then java.lang.reflect.Proxy may be a good way forward.
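    A sketch of that delegation route using java.lang.reflect.Proxy: the proxy forwards every ExecutorService call to the real pool but refuses the lifecycle methods. The names here are illustrative, and whether to throw or silently ignore the forbidden calls is a design choice.
    <pre>
    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.InvocationTargetException;
    import java.lang.reflect.Method;
    import java.lang.reflect.Proxy;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public final class GuardedExecutors {

        private GuardedExecutors() {
        }

        /** Wraps the delegate so clients cannot shut the shared pool down. */
        public static ExecutorService guard(final ExecutorService delegate) {
            InvocationHandler handler = new InvocationHandler() {
                public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                    String name = method.getName();
                    if (name.equals("shutdown") || name.equals("shutdownNow")) {
                        throw new UnsupportedOperationException("lifecycle is managed centrally");
                    }
                    try {
                        return method.invoke(delegate, args);
                    } catch (InvocationTargetException e) {
                        throw e.getCause();   // rethrow the delegate's real exception
                    }
                }
            };
            return (ExecutorService) Proxy.newProxyInstance(
                    ExecutorService.class.getClassLoader(),
                    new Class<?>[] { ExecutorService.class },
                    handler);
        }

        public static void main(String[] args) {
            ExecutorService real = Executors.newFixedThreadPool(2);
            ExecutorService shared = guard(real);
            shared.submit(new Runnable() {
                public void run() {
                    System.out.println("task ran");
                }
            });
            // shared.shutdownNow() would throw UnsupportedOperationException;
            // only the owner of the real pool shuts it down.
            real.shutdown();
        }
    }
    </pre>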

  • JRun Thread Pool Issue

    I'm running CF 9.0.1 on Ubuntu on a "Medium" Amazon EC2 instance. CF has been crashing intermittently (several times per day). At such times, running top gets me this (or something similar):
    PID    USER    PR  NI  VIRT   RES   SHR  S  %CPU  %MEM  TIME+     COMMAND
    15855  wwwrun  20  0   1762m  730m  20m  S  99.3  19.4  13:22.96  coldfusion9
    So, it's obviously consuming most of the server resources. The following error has been showing up in my cfserver.log in the leadup to each crash:
    java.lang.RuntimeException: Request timed out waiting for an available thread to run. You may want to consider increasing the number of active threads in the thread pool.
    If I run /opt/coldfusion9/bin/coldfusion status, I get:
    Pg/Sec  DB/Sec  CP/Sec  Reqs  Reqs  Reqs  AvgQ   AvgReq AvgDB  Bytes  Bytes
    Now Hi  Now Hi  Now Hi  Q'ed  Run'g TO'ed Time   Time   Time   In/Sec Out/Sec
    0   0   0   0   -1  -1  150   25    0     0      -1352560      0      0
    In the administrator, under Server Settings > Request Tuning, the setting for Maximum number of simultaneous Template requests is 25. So this makes sense so far. I could just increase the thread pool to cover these sort of load spikes. I could make it 200. (Which I did just now as a test.)
    However, there's also this file /opt/coldfusion9/runtime/servers/coldfusion/SERVER-INF/jrun.xml. And some of the settings in there appear to conflict. For example, it reads:
    <service class="jrunx.scheduler.SchedulerService" name="SchedulerService">
      <attribute name="bindToJNDI">true</attribute>
      <attribute name="activeHandlerThreads">25</attribute>
      <attribute name="maxHandlerThreads">1000</attribute>
      <attribute name="minHandlerThreads">20</attribute>
      <attribute name="threadWaitTimeout">180</attribute>
      <attribute name="timeout">600</attribute>
    </service>
    Which a) has fewer active threads (what does this mean?), and b) has a max threads that exceed the simultaneous request limit set in the admin. So, I'm not sure. Are these independent configs that need to be made to match manually? Or is the jrun.xml file supposed to be written by the CF Admin when changes are made there? Hmm. But maybe this is different because presumably the CF Scheduler should only use a subset of all available threads, right...so we'd always have some threads for real live users. We also have this in there:
    <service class="jrun.servlet.http.WebService" name="WebService">
      <attribute name="port">8500</attribute>
      <attribute name="interface">*</attribute>
      <attribute name="deactivated">true</attribute>
      <attribute name="activeHandlerThreads">200</attribute>
      <attribute name="minHandlerThreads">1</attribute>
      <attribute name="maxHandlerThreads">1000</attribute>
      <attribute name="mapCheck">0</attribute>
      <attribute name="threadWaitTimeout">300</attribute>
      <attribute name="backlog">500</attribute>
      <attribute name="timeout">300</attribute>
    </service>
    This appears to have changed when I changed the CF Admin setting...maybe...but it's the activeHandlerThreads that matches my new maximum simulataneous requests setting...rather than the maxHandlerThreads, which again exceeds it. Finally, we have this:
    <service class="jrun.servlet.jrpp.JRunProxyService" name="ProxyService">
      <attribute name="activeHandlerThreads">200</attribute>
      <attribute name="minHandlerThreads">1</attribute>
      <attribute name="maxHandlerThreads">1000</attribute>
      <attribute name="mapCheck">0</attribute>
      <attribute name="threadWaitTimeout">300</attribute>
      <attribute name="backlog">500</attribute>
      <attribute name="deactivated">false</attribute>
      <attribute name="interface">*</attribute>
      <attribute name="port">51800</attribute>
      <attribute name="timeout">300</attribute>
      <attribute name="cacheRealPath">true</attribute>
    </service>
    So, I'm not certain which (if any) of these I should change and what exactly the relationship is between maximum requests and maximum threads. Also, since several of these list the maxHandlerThreads as 1000, I'm wondering if I should just set the maximum simultaneous requests to 1000. There must be some upper limit that depends on available server resources...but I'm not sure what it is and I don't really want to play around with it since it's a production environment.
    I'm not sure if it pertains to this issue at all, but when I run a ps aux | grep coldfusion I get the following:
    wwwrun   15853  0.0  0.0   8704   760 pts/1    S   20:22   0:00 /opt/coldfusion9/runtime/bin/coldfusion9 -jar jrun.jar -autorestart -start coldfusion
    wwwrun   15855  5.4 18.2 1678552 701932 pts/1   Sl  20:22   1:38 /opt/coldfusion9/runtime/bin/coldfusion9 -jar jrun.jar -start coldfusion
    There are always these two and never more than these two processes. So there does not appear to be a one-to-one relationship between processes and threads. I recall from an MX 6.1 install I maintained for many years that additional CF processes were visible in the process list. It seemed to me at the time like I had a process for each thread...so either I was wrong or something is quite different in version 9 since it's reporting 25 running requests and only showing these two processes. If a single process can have multiple threads in the background, then I'm given to wonder why I have two processes instead of one...just curious.
    So, anyway, I've been experimenting while composing this post. As noted above, I adjusted the maximum simultaneous requests up to 200. I was hoping this would solve my problem, but CF just crashed again (rather, it slogged down and requests started timing out...so effectively "crashed"). This time, top looked similar (still consuming more than 99% of the CPU), but CF status looked different:
    Pg/Sec  DB/Sec  CP/Sec  Reqs  Reqs  Reqs  AvgQ   AvgReq AvgDB  Bytes  Bytes
    Now Hi  Now Hi  Now Hi  Q'ed  Run'g TO'ed Time   Time   Time   In/Sec Out/Sec
    0   0   0   0   -1  -1  0     150   0     0      0      0      0      0
    Obviously, since I'd increased the maximum simultaneous requests, it was allowing more requests to run simultaneously...but it was still maxing out the server resources.
    Further experiments (after restarting CF) showed me that the server became unusably bogged down after about 30-35 "Reqs Run'g", with all additional requests headed for an inevitable timeout:
    Pg/Sec  DB/Sec  CP/Sec  Reqs  Reqs  Reqs  AvgQ   AvgReq AvgDB  Bytes  Bytes
    Now Hi  Now Hi  Now Hi  Q'ed  Run'g TO'ed Time   Time   Time   In/Sec Out/Sec
    0   0   0   0   -1  -1  0     33    0     0      -492   0      0      0
    So, it's clear that increasing the maximum simultaneous requests has not helped. I guess what it comes down to is this: What is it having such a hard time with? Where are these spikes coming from? Bursts of traffic? On what pages? What requests are running at any given time? I guess I simply need more information to continue troubleshooting. If there are long-running requests, or other issues, I'm not seeing it in the logs (although I do have that option checked in the admin). I need to know which requests exactly are those responsible for these spikes. Any help would be much appreciated. Thanks.
    ~Day

    I really appreciate your help. However, I haven't been able to find the JRun Thread settings you describe above.
    Under Request Tuning, I see:
    Server Settings > Request Tuning
    Request Limits
    Maximum number of simultaneous Template requests
      Restricts the number of simultaneously processed requests. Use this setting to increase overall system performance for heavy load applications. Requests beyond the specified limit are queued. On Standard Edition, you must restart ColdFusion to enable this setting. 
    Maximum number of simultaneous Flash Remoting requests
      The number of Flash Remoting requests that can be processed concurrently.
    Maximum number of simultaneous Web Service requests
      The number of Web Service requests that can be processed concurrently.
    Maximum number of simultaneous CFC function requests
      The number of ColdFusion Component methods that can be processed concurrently via HTTP. This does not affect invocation of CFC methods from within CFML, only methods requested via an HTTP request.
    Tag Limit Settings
    Maximum number of simultaneous Report threads
      The maximum number of ColdFusion reports that can be processed concurrently.
    Maximum number of threads available for CFTHREAD
      The maximum number of threads created by CFTHREAD that will be run concurrently. Threads created by CFTHREAD in excess of this are queued.  On Standard Edition, the maximum limit is 10. 
    And under Java and JVM, I see:
    Server Settings > Java and JVM
        Java and JVM settings control the way ColdFusion starts the Java Virtual Machine when it starts.  You can control settings like what classpaths are used and how memory is allocated as well as add custom command line arguments.  Changing these settings requires restarting ColdFusion.  If you enter an incorrect setting, ColdFusion may not restart properly. 
       Backups of the jvm.config file are created when you hit the submit button. You can use this backup to restore from a critical change. 
       Java Virtual Machine Path
      Specifies the location of the Java Virtual Machine.
       Minimum JVM Heap Size (MB)         Maximum JVM Heap Size  (MB)       
       The Memory Size settings determine the amount of memory that the JVM can use for programs and data. 
       ColdFusion Class Path
      Specifies any additional class paths for the JVM, with multiple directories separated by  commas.
       JVM Arguments
      -server -Dsun.io.useCanonCaches=false -XX:MaxPermSize=192m -XX:+UseParallelGC -Xbatch -Dcoldfusion.rootDir={application.home}/../ -Dcoldfusion.libPath={application.home}/../lib
      Specifies any specific JVM initialization options, separated by spaces.
    I did go take a look at FusionReactor and found it's not free (which would be fine, of course, if it would actually help). It looks like there's a fully functional demo, which is cool...but I haven't been able to get it to install yet, so we'll see.
    Thanks again!
    ~Day
    (By the way, I've cross-posted this inquiry on StackOverflow. So if you're able to help me arrive at a solution you might want to answer there as well.)

  • Custom thread pool for Java 8 parallel stream

    It seems that it is not possible to specify the thread pool for a Java 8 parallel stream. If that's so, the whole feature is useless in most situations. The only situation where I can safely use it is a small single-threaded application written by one person.
    In all other cases, if I cannot specify the thread pool, I have to share the default pool with other parts of the application. If someone submits a task that takes a lot of time, my tasks will get stuck. Is that correct, or am I overlooking something?
    Imagine that someone submits a slow networking operation to the common fork-join pool. It's not a good idea, but it's so tempting that it will happen. In such a case, all CPU-intensive tasks executed on parallel streams will wait for the networking task to finish. There is nothing you can do to defend your part of the application against such situations. Is that so?

    You are absolutely correct. That isn't the only problem with using the F/J framework as the parallel engine for bulk operations. Have a look at http://coopsoft.com/ar/Calamity2Article.html
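    A commonly cited workaround is to invoke the terminal operation from inside a task submitted to your own ForkJoinPool; the stream's worker tasks then run in that pool rather than in the common pool. Note this relies on an implementation detail of the Fork/Join framework, not on anything the Stream API guarantees. A sketch:
    <pre>
    import java.util.Arrays;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ForkJoinPool;

    public class ParallelStreamInOwnPool {
        public static void main(String[] args) throws Exception {
            final List<Integer> data = Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8);

            // A dedicated pool, so slow tasks submitted to the common pool elsewhere
            // cannot stall this work.
            ForkJoinPool pool = new ForkJoinPool(4);
            try {
                Callable<Integer> work = new Callable<Integer>() {
                    public Integer call() {
                        // The terminal operation runs inside 'pool', so its subtasks use 'pool' too.
                        return data.parallelStream().mapToInt(i -> i * i).sum();
                    }
                };
                int sum = pool.submit(work).get();
                System.out.println("sum of squares = " + sum);
            } finally {
                pool.shutdown();
            }
        }
    }
    </pre>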

  • Thread Pool , Executors ...

    Sorry if I'm making a stupid post, but I'm looking for an implementation of a thread pool using the latest 1.5 java.util.concurrent classes and I can't find anything serious. Any implementation or link to a tutorial would be very helpful.
    Thanks

    > but I'm looking for an implementation of a thread pool using the latest
    > 1.5 java.util.concurrent classes and I can't find anything serious. Any
    > implementation or link to a tutorial would be very helpful.
    Here is an example:
    import java.util.concurrent.*;

    public class UtilConcurrentTest {

        public static void main(String[] args) throws InterruptedException {
            int numThreads = 4;
            int numTasks = 20;
            ExecutorService service = Executors.newFixedThreadPool(numThreads);
            // do some tasks:
            for (int i = 0; i < numTasks; i++) {
                service.execute(new Task(i));
            }
            service.shutdown();
            log("called shutdown()");
            boolean isTerminated = service.awaitTermination(60, TimeUnit.SECONDS);
            log("service terminated: " + isTerminated);
        }

        public static void log(String msg) {
            System.out.println(System.currentTimeMillis() + "\t" + msg);
        }

        private static class Task implements Runnable {
            private final int id;

            public Task(int id) {
                this.id = id;
            }

            public void run() {
                log("begin:\t" + this);
                try { Thread.sleep(1000); } catch (InterruptedException e) {}
                log("end\t" + this);
            }

            public String toString() {
                return "Task " + id + " in thread " + Thread.currentThread().getName();
            }
        }
    }
