Setting Thread pool size

Hi,
I want to know: if I set the system property "-Dweblogic.ThreadPoolSize", how will WLS know that the pool size has been changed at runtime?
E.g. I pass -Dweblogic.ThreadPoolSize=30 on the command line. If I then change the pool size to 40 at runtime, is there any event I can fire for the change in the property through the APIs?
Thanks in advance.
Best Regards
Ali

Disregarding what it is for, in my experience, tuning this setting rarely has much effect. For 6.1, the main thread pool related tunables to look at are the EJB thread pools and EJB max-beans... settings, the "default" thread pool, and the internal thread-pool for stand-alone clients -- all of which are mentioned in the performance guide.
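For what it's worth, a -D system property is read once when the JVM starts; changing the value later does not notify the server, and I am not aware of any property-change event you could fire for it. Purely as an illustration of what a genuine runtime resize looks like (standard java.util.concurrent, not a WebLogic API), a pool you create yourself can be resized on the fly:

import java.util.concurrent.*;

public class ResizeSketch {
    public static void main(String[] args) {
        // Start with a fixed pool of 30 worker threads.
        ThreadPoolExecutor pool =
                (ThreadPoolExecutor) Executors.newFixedThreadPool(30);

        // Grow it to 40 while it is running: raise the maximum first,
        // then the core size; the change takes effect immediately.
        pool.setMaximumPoolSize(40);
        pool.setCorePoolSize(40);

        System.out.println("Core pool size is now " + pool.getCorePoolSize());
        pool.shutdown();
    }
}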

Similar Messages

  • JMS Thread Pool Size

    Hi,
    I'm using WLS 6.1. The console has a setting for JMS Thread Pool Size. I wanted to tune the number of threads used by JMS. I thought JMS asynch consumers would use threads in this pool; however, that doesn't seem to be the case (they all use the default execute threads and queues). Why is this setting available?
    Note the BEA WebLogic JMS Performance Guide talks about tuning this value from version 6.1 up to 8.1 and states "On the server, incoming JMS related requests execute in the JMS execute queue/thread pool."
    Thanks in advance for any responses,
    Mich

    Disregarding what it is for, in my experience, tuning this setting rarely has much effect. For 6.1, the main thread pool related tunables to look at are the EJB thread pools and EJB max-beans... settings, the "default" thread pool, and the internal thread-pool for stand-alone clients -- all of which are mentioned in the performance guide.

  • Thread Pool Size

    What is the maximum thread pool size that can be used in Java?
    What happens if I set a thread pool size in the range of 10k or 20k?
    How does that affect system performance?
    It's quite urgent for me to know. Can someone help out?
    Regards
    Hrushi

    I'd say it depends on the machine you are running on, amount of memory and number of CPUs and such, not on Java itself. And with that many threads your implementation better be darn good in order not to be the bottleneck...
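    As a rough illustration (a sketch, not a benchmark): each live thread costs a native stack, commonly 512 KB to 1 MB by default and tunable with -Xss, so a pool of 10k-20k threads can reserve gigabytes of stack and add scheduling overhead before doing any work. The snippet below forces the threads to actually start so that cost shows up immediately; the pool size is just the hypothetical figure from the question:

    import java.util.concurrent.*;

    public class BigPoolSketch {
        public static void main(String[] args) {
            int poolSize = 10000;  // hypothetical figure from the question
            ThreadPoolExecutor pool =
                    (ThreadPoolExecutor) Executors.newFixedThreadPool(poolSize);
            // newFixedThreadPool creates threads lazily; prestart them so the
            // memory cost is paid up front (this may fail with
            // "OutOfMemoryError: unable to create new native thread").
            int started = pool.prestartAllCoreThreads();
            System.out.println("Threads actually started: " + started);
            pool.shutdown();
        }
    }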

  • Change of Min Pool Size has no effect?

    Hi, we are still using ODP 9.2.0.4 on a Windows Server 2003 web server and Oracle Database 9i on Sun Solaris. Recently we discovered a problem using Min Pool Size = 1: it seems that when new connections are required we sometimes get a connection timeout. So we decided to set Min Pool Size = 30, and on the test server this works fine. In the ODP trace file (tracelevel=2) you see the 30 connections building up immediately. But on the production server we see no difference; it looks like the (newly installed, application pool reset) application still uses the Min Pool Size = 1 setting.
    The connection string is built up in application code. On the test server it was sufficient to stop and start the website and its application pool; on the production server this has no effect.
    Does anyone have an idea what the problem is? We also tried restarting the IIS server, but that did not help either. In production there are more applications using ODP.
    Regards, Paul.

    What problem are you trying to solve at this point? Simply "why is odp not tracing?" The only suggestions I really have there are
    1) make sure you restart the app after enabling tracing parameters
    2) make sure the directory you've set tracing to is open as far as permissions
    3) make sure you've set tracing in the right registry setting if you have multiple versions of ODP installed
    4) if you're trying to write to c:\ root, try creating and pointing to a different (c:\odptrace for example) directory instead.
    Hope it helps,
    Greg

  • Thread pool in servlet container

    Hello all,
    I'm working on a webapp that has some bad response times, and I've identified an area where we could shave off a considerable amount of time. The app invokes a component that causes data to be cached for subsequently targeted apps in the environment. Our app does not need to wait for a response, so I'd like to make this an asynchronous call. How best to implement this? I considered JMS, but started working on a solution using the Java 1.4 backport of JSR 166 (java.util.concurrent).
    I've been testing the use of a ThreadPoolExecutor with an ArrayBlockingQueue. The work that each Runnable performs involves a lot of waiting (the component we call invokes a web service, among a couple of other distributed calls), so I figure the pool should be much larger than the queue. Our container has 35 execute threads, so I've been testing with a thread pool size of 25 and a queue of 10.
    Any thoughts on this approach? I understand that some of this work could be simplified by JMS, but if I don't need to be tied to the container, I'd prefer not to be. The code is much easier to unit test and plays nicely with our continuous build integration (which runs our JUnit tests for us and notifies on errors).
    Any thoughts are greatly appreciated...Thanks!!

    Well, if it works, that's by far the best way to go - but note that creating threads in a servlet container means those threads are outside of the container's control. Many containers will refuse to give the new threads access to the JNDI context, even, and some may prevent you from creating threads at all.
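    For reference, a minimal sketch of the setup described above, with the caveat from the reply that these threads live outside the container's control. The pool and queue sizes (25 and 10) are taken from the post; the core size of 5 and the CallerRunsPolicy fallback are my own assumptions. Note that a ThreadPoolExecutor only grows beyond its core size once the queue is full, which matters when the queue is deliberately small:

    import java.util.concurrent.*;

    public class AsyncCacheCaller {
        // Up to 25 threads, 10 queued requests (sizes from the post).
        private final ThreadPoolExecutor pool = new ThreadPoolExecutor(
                5, 25,                                // core and max threads
                60, TimeUnit.SECONDS,                 // idle threads above core retire
                new ArrayBlockingQueue<Runnable>(10),
                new ThreadPoolExecutor.CallerRunsPolicy());  // run in the caller when saturated

        public void cacheAsync(final String key) {
            pool.execute(new Runnable() {
                public void run() {
                    // call the slow caching component for 'key' here; the servlet
                    // thread that invoked cacheAsync() has already returned
                }
            });
        }

        public void shutdown() {
            pool.shutdown();
        }
    }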

  • Initial and Minimum Pool Size

    In the J2EE admin console, I set the parameter to 8. Then I go to Oracle and from SQL*Plus I execute the following command:
    select * from v$session
    However, when I run my sample program I only see one session from my computer. Is that right, or am I supposed to see 8 sessions?

    Well, that would depend on whether you set initial pool size to 8 or minimum pool size to 8, wouldn't it? All you said was that you set "the parameter" to 8.

  • How to set the correct shared pool size and db_buffer_cache using awr

    Hi All,
    I want to know how to set the correct sizes for shared_pool_size and db_cache_size using the shared pool advisory and buffer pool advisory sections of an AWR report. I have pasted the shared pool and buffer pool advisory sections of the AWR report below.
    Shared Pool Advisory
    * SP: Shared Pool Est LC: Estimated Library Cache Factr: Factor
    * Note there is often a 1:Many correlation between a single logical object in the Library Cache, and the physical number of memory objects associated with it. Therefore comparing the number of Lib Cache objects (e.g. in v$librarycache), with the number of Lib Cache Memory Objects is invalid.
    Shared Pool Size(M)     SP Size Factr     Est LC Size (M)     Est LC Mem Obj     Est LC Time Saved (s)     Est LC Time Saved Factr     Est LC Load Time (s)     Est LC Load Time Factr     Est LC Mem Obj Hits (K)
    4,096     1.00     471     25,153     184,206     1.00     149     1.00     9,069
    4,736     1.16     511     27,328     184,206     1.00     149     1.00     9,766
    5,248     1.28     511     27,346     184,206     1.00     149     1.00     9,766
    5,760     1.41     511     27,346     184,206     1.00     149     1.00     9,766
    6,272     1.53     511     27,346     184,206     1.00     149     1.00     9,766
    6,784     1.66     511     27,346     184,206     1.00     149     1.00     9,766
    7,296     1.78     511     27,346     184,206     1.00     149     1.00     9,766
    7,808     1.91     511     27,346     184,206     1.00     149     1.00     9,766
    8,320     2.03     511     27,346     184,206     1.00     149     1.00     9,766
    Buffer Pool Advisory
    * Only rows with estimated physical reads >0 are displayed
    * ordered by Block Size, Buffers For Estimate
    P     Size for Est (M)     Size Factor     Buffers (thousands)     Est Phys Read Factor     Estimated Phys Reads (thousands)     Est Phys Read Time     Est %DBtime for Rds
    D     4,096     0.10     485     1.02     1,002     1     0.00
    D     8,192     0.20     970     1.00     987     1     0.00
    D     12,288     0.30     1,454     1.00     987     1     0.00
    D     16,384     0.40     1,939     1.00     987     1     0.00
    D     20,480     0.50     2,424     1.00     987     1     0.00
    D     24,576     0.60     2,909     1.00     987     1     0.00
    D     28,672     0.70     3,394     1.00     987     1     0.00
    D     32,768     0.80     3,878     1.00     987     1     0.00
    D     36,864     0.90     4,363     1.00     987     1     0.00
    D     40,960     1.00     4,848     1.00     987     1     0.00
    D     45,056     1.10     5,333     1.00     987     1     0.00
    D     49,152     1.20     5,818     1.00     987     1     0.00
    D     53,248     1.30     6,302     1.00     987     1     0.00
    D     57,344     1.40     6,787     1.00     987     1     0.00
    D     61,440     1.50     7,272     1.00     987     1     0.00
    D     65,536     1.60     7,757     1.00     987     1     0.00
    D     69,632     1.70     8,242     1.00     987     1     0.00
    D     73,728     1.80     8,726     1.00     987     1     0.00
    D     77,824     1.90     9,211     1.00     987     1     0.00
    D     81,920     2.00     9,696     1.00     987     1     0.00
    My shared pool size is 4 GB and db_cache_size is 40 GB.
    Please help me in configuring the correct sizes for these.
    Thanks and Regards,

    Hi,
    Actually the batch load is taking too much time.
    Please find the 1-hour AWR report below.
         Snap Id     Snap Time     Sessions     Cursors/Session
    Begin Snap:     6557     27-Nov-11 16:00:06     126     1.3
    End Snap:     6558     27-Nov-11 17:00:17     130     1.6
    Elapsed:          60.17 (mins)          
    DB Time:          34.00 (mins)          
    Report Summary
    Cache Sizes
         Begin     End          
    Buffer Cache:     40,960M     40,960M     Std Block Size:     8K
    Shared Pool Size:     4,096M     4,096M     Log Buffer:     25,908K
    Load Profile
         Per Second     Per Transaction     Per Exec     Per Call
    DB Time(s):     0.6     1.4     0.00     0.07
    DB CPU(s):     0.5     1.2     0.00     0.06
    Redo size:     281,296.9     698,483.4          
    Logical reads:     20,545.6     51,016.4          
    Block changes:     1,879.5     4,667.0          
    Physical reads:     123.7     307.2          
    Physical writes:     66.4     164.8          
    User calls:     8.2     20.4          
    Parses:     309.4     768.4          
    Hard parses:     8.5     21.2          
    W/A MB processed:     1.7     4.3          
    Logons:     0.7     1.6          
    Executes:     1,235.9     3,068.7          
    Rollbacks:     0.0     0.0          
    Transactions:     0.4               
    Instance Efficiency Percentages (Target 100%)
    Buffer Nowait %:     100.00     Redo NoWait %:     100.00
    Buffer Hit %:     99.66     In-memory Sort %:     100.00
    Library Hit %:     99.19     Soft Parse %:     97.25
    Execute to Parse %:     74.96     Latch Hit %:     99.97
    Parse CPU to Parse Elapsd %:     92.41     % Non-Parse CPU:     98.65
    Shared Pool Statistics
         Begin     End
    Memory Usage %:     80.33     82.01
    % SQL with executions>1:     90.90     86.48
    % Memory for SQL w/exec>1:     90.10     86.89
    Top 5 Timed Foreground Events
    Event     Waits     Time(s)     Avg wait (ms)     % DB time     Wait Class
    DB CPU          1,789          87.72     
    db file sequential read     27,531     50     2     2.45     User I/O
    db file scattered read     26,322     30     1     1.47     User I/O
    row cache lock     1,798     20     11     0.96     Concurrency
    OJVM: Generic     36     15     421     0.74     Other
    Host CPU (CPUs: 24 Cores: 12 Sockets: )
    Load Average Begin     Load Average End     %User     %System     %WIO     %Idle
    0.58     1.50     2.8     0.7     0.1     96.6
    Instance CPU
    %Total CPU     %Busy CPU     %DB time waiting for CPU (Resource Manager)
    2.2     63.6     0.0
    Memory Statistics
         Begin     End
    Host Mem (MB):     131,072.0     131,072.0
    SGA use (MB):     50,971.4     50,971.4
    PGA use (MB):     545.5     1,066.3
    % Host Mem used for SGA+PGA:     39.30     39.70
    RAC Statistics
         Begin     End
    Number of Instances:     2     2
    Global Cache Load Profile
         Per Second     Per Transaction
    Global Cache blocks received:     3.09     7.68
    Global Cache blocks served:     1.86     4.62
    GCS/GES messages received:     78.64     195.27
    GCS/GES messages sent:     53.82     133.65
    DBWR Fusion writes:     0.52     1.30
    Estd Interconnect traffic (KB)     65.50     
    Global Cache Efficiency Percentages (Target local+remote 100%)
    Buffer access - local cache %:     99.65
    Buffer access - remote cache %:     0.02
    Buffer access - disk %:     0.34
    Global Cache and Enqueue Services - Workload Characteristics
    Avg global enqueue get time (ms):     0.0
    Avg global cache cr block receive time (ms):     1.7
    Avg global cache current block receive time (ms):     1.0
    Avg global cache cr block build time (ms):     0.0
    Avg global cache cr block send time (ms):     0.0
    Global cache log flushes for cr blocks served %:     1.4
    Avg global cache cr block flush time (ms):     0.9
    Avg global cache current block pin time (ms):     0.0
    Avg global cache current block send time (ms):     0.0
    Global cache log flushes for current blocks served %:     0.1
    Avg global cache current block flush time (ms):     0.0
    Global Cache and Enqueue Services - Messaging Statistics
    Avg message sent queue time (ms):     0.0
    Avg message sent queue time on ksxp (ms):     0.4
    Avg message received queue time (ms):     0.5
    Avg GCS message process time (ms):     0.0
    Avg GES message process time (ms):     0.0
    % of direct sent messages:     79.13
    % of indirect sent messages:     17.10
    % of flow controlled messages:     3.77
    Cluster Interconnect
         Begin      End
    Interface     IP Address     Pub     Source     IP     Pub     Src
    en9     10.51.10.61     N     Oracle Cluster Repository               
    Statistic Name     Time (s)     % of DB Time
    sql execute elapsed time     1,925.20     94.38
    DB CPU     1,789.38     87.72
    connection management call elapsed time     99.65     4.89
    PL/SQL execution elapsed time     89.81     4.40
    parse time elapsed     46.32     2.27
    hard parse elapsed time     25.01     1.23
    Java execution elapsed time     21.24     1.04
    PL/SQL compilation elapsed time     11.92     0.58
    failed parse elapsed time     9.37     0.46
    hard parse (sharing criteria) elapsed time     8.71     0.43
    sequence load elapsed time     0.06     0.00
    repeated bind elapsed time     0.02     0.00
    hard parse (bind mismatch) elapsed time     0.01     0.00
    DB time     2,039.77     
    background elapsed time     122.00     
    background cpu time     113.42     
    Statistic     Value     End Value
    NUM_LCPUS     0     
    NUM_VCPUS     0     
    AVG_BUSY_TIME     12,339     
    AVG_IDLE_TIME     348,838     
    AVG_IOWAIT_TIME     221     
    AVG_SYS_TIME     2,274     
    AVG_USER_TIME     9,944     
    BUSY_TIME     299,090     
    IDLE_TIME     8,375,051     
    IOWAIT_TIME     6,820     
    SYS_TIME     57,512     
    USER_TIME     241,578     
    LOAD     1     2
    OS_CPU_WAIT_TIME     312,200     
    PHYSICAL_MEMORY_BYTES     137,438,953,472     
    NUM_CPUS     24     
    NUM_CPU_CORES     12     
    GLOBAL_RECEIVE_SIZE_MAX     1,310,720     
    GLOBAL_SEND_SIZE_MAX     1,310,720     
    TCP_RECEIVE_SIZE_DEFAULT     16,384     
    TCP_RECEIVE_SIZE_MAX     9,223,372,036,854,775,807     
    TCP_RECEIVE_SIZE_MIN     4,096     
    TCP_SEND_SIZE_DEFAULT     16,384     
    TCP_SEND_SIZE_MAX     9,223,372,036,854,775,807     
    TCP_SEND_SIZE_MIN     4,096     
    Operating System Statistics - Detail
    Snap Time     Load     %busy     %user     %sys     %idle     %iowait
    27-Nov 16:00:06     0.58                         
    27-Nov 17:00:17     1.50     3.45     2.79     0.66     96.55     0.08
    Foreground Wait Class
    * s - second, ms - millisecond - 1000th of a second
    * ordered by wait time desc, waits desc
    * %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
    * Captured Time accounts for 95.7% of Total DB time 2,039.77 (s)
    * Total FG Wait Time: 163.14 (s) DB CPU time: 1,789.38 (s)
    Wait Class     Waits     %Time -outs     Total Wait Time (s)     Avg wait (ms)     %DB time
    DB CPU               1,789          87.72
    User I/O     61,229     0     92     1     4.49
    Other     102,743     40     31     0     1.50
    Concurrency     3,169     10     24     7     1.16
    Cluster     58,920     0     11     0     0.52
    System I/O     45,407     0     6     0     0.29
    Configuration     107     7     1     5     0.03
    Commit     383     0     0     1     0.01
    Network     15,275     0     0     0     0.00
    Application     52     8     0     0     0.00
    Foreground Wait Events
    * s - second, ms - millisecond - 1000th of a second
    * Only events with Total Wait Time (s) >= .001 are shown
    * ordered by wait time desc, waits desc (idle events last)
    * %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
    Event     Waits     %Time -outs     Total Wait Time (s)     Avg wait (ms)     Waits /txn     % DB time
    db file sequential read     27,531     0     50     2     18.93     2.45
    db file scattered read     26,322     0     30     1     18.10     1.47
    row cache lock     1,798     0     20     11     1.24     0.96
    OJVM: Generic     36     42     15     421     0.02     0.74
    db file parallel read     394     0     7     19     0.27     0.36
    control file sequential read     22,248     0     6     0     15.30     0.28
    reliable message     4,439     0     4     1     3.05     0.18
    gc current grant busy     7,597     0     3     0     5.22     0.16
    PX Deq: Slave Session Stats     2,661     0     3     1     1.83     0.16
    DFS lock handle     3,208     0     3     1     2.21     0.16
    direct path write temp     4,842     0     3     1     3.33     0.15
    library cache load lock     39     0     3     72     0.03     0.14
    gc cr multi block request     37,008     0     3     0     25.45     0.14
    IPC send completion sync     5,451     0     2     0     3.75     0.10
    gc cr block 2-way     4,669     0     2     0     3.21     0.09
    enq: PS - contention     3,183     33     1     0     2.19     0.06
    gc cr grant 2-way     5,151     0     1     0     3.54     0.06
    direct path read temp     1,722     0     1     1     1.18     0.05
    gc current block 2-way     1,807     0     1     0     1.24     0.03
    os thread startup     6     0     1     108     0.00     0.03
    name-service call wait     12     0     1     47     0.01     0.03
    PX Deq: Signal ACK RSG     2,046     50     0     0     1.41     0.02
    log file switch completion     3     0     0     149     0.00     0.02
    rdbms ipc reply     3,610     0     0     0     2.48     0.02
    gc current grant 2-way     1,432     0     0     0     0.98     0.02
    library cache pin     903     32     0     0     0.62     0.02
    PX Deq: reap credit     35,815     100     0     0     24.63     0.01
    log file sync     383     0     0     1     0.26     0.01
    Disk file operations I/O     405     0     0     0     0.28     0.01
    library cache lock     418     3     0     0     0.29     0.01
    kfk: async disk IO     23,159     0     0     0     15.93     0.01
    gc current block busy     4     0     0     35     0.00     0.01
    gc current multi block request     1,206     0     0     0     0.83     0.01
    ges message buffer allocation     38,526     0     0     0     26.50     0.00
    enq: FB - contention     131     0     0     0     0.09     0.00
    undo segment extension     8     100     0     6     0.01     0.00
    CSS initialization     8     0     0     6     0.01     0.00
    SQL*Net message to client     14,600     0     0     0     10.04     0.00
    enq: HW - contention     96     0     0     0     0.07     0.00
    CSS operation: action     8     0     0     4     0.01     0.00
    gc cr block busy     33     0     0     1     0.02     0.00
    latch free     30     0     0     1     0.02     0.00
    enq: TM - contention     49     6     0     0     0.03     0.00
    enq: JQ - contention     19     100     0     1     0.01     0.00
    SQL*Net more data to client     666     0     0     0     0.46     0.00
    asynch descriptor resize     3,179     100     0     0     2.19     0.00
    latch: shared pool     3     0     0     3     0.00     0.00
    CSS operation: query     24     0     0     0     0.02     0.00
    PX Deq: Signal ACK EXT     72     0     0     0     0.05     0.00
    KJC: Wait for msg sends to complete     269     0     0     0     0.19     0.00
    latch: object queue header operation     4     0     0     1     0.00     0.00
    gc cr block congested     5     0     0     0     0.00     0.00
    utl_file I/O     11     0     0     0     0.01     0.00
    enq: TO - contention     3     33     0     0     0.00     0.00
    SQL*Net message from client     14,600     0     219,478     15033     10.04     
    jobq slave wait     7,726     100     3,856     499     5.31     
    PX Deq: Execution Msg     10,556     19     50     5     7.26     
    PX Deq: Execute Reply     2,946     31     27     9     2.03     
    PX Deq: Parse Reply     3,157     35     3     1     2.17     
    PX Deq: Join ACK     2,976     28     2     1     2.05     
    PX Deq Credit: send blkd     7     14     0     4     0.00     
    Background Wait Events
    * ordered by wait time desc, waits desc (idle events last)
    * Only events with Total Wait Time (s) >= .001 are shown
    * %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
    Event     Waits     %Time -outs     Total Wait Time (s)     Avg wait (ms)     Waits /txn     % bg time
    os thread startup     140     0     13     90     0.10     10.35
    db file parallel write     8,233     0     6     1     5.66     5.08
    log file parallel write     3,906     0     6     1     2.69     4.62
    log file sequential read     350     0     5     16     0.24     4.49
    control file sequential read     13,737     0     5     0     9.45     3.72
    DFS lock handle     2,990     27     2     1     2.06     1.43
    db file sequential read     921     0     2     2     0.63     1.39
    SQL*Net break/reset to client     18     0     1     81     0.01     1.19
    control file parallel write     2,455     0     1     1     1.69     1.12
    ges lms sync during dynamic remastering and reconfig     24     100     1     50     0.02     0.98
    library cache load lock     35     0     1     24     0.02     0.68
    ASM file metadata operation     3,483     0     1     0     2.40     0.65
    enq: CO - master slave det     1,203     100     1     0     0.83     0.46
    kjbdrmcvtq lmon drm quiesce: ping completion     9     0     1     62     0.01     0.46
    enq: WF - contention     11     0     0     35     0.01     0.31
    CGS wait for IPC msg     32,702     100     0     0     22.49     0.19
    gc object scan     28,788     100     0     0     19.80     0.15
    row cache lock     535     0     0     0     0.37     0.14
    library cache pin     370     55     0     0     0.25     0.12
    ksxr poll remote instances     19,119     100     0     0     13.15     0.11
    name-service call wait     6     0     0     19     0.00     0.10
    gc current block 2-way     304     0     0     0     0.21     0.09
    gc cr block 2-way     267     0     0     0     0.18     0.08
    gc cr grant 2-way     355     0     0     0     0.24     0.08
    ges LMON to get to FTDONE     3     100     0     24     0.00     0.06
    enq: CF - contention     145     76     0     0     0.10     0.05
    PX Deq: reap credit     8,842     100     0     0     6.08     0.05
    reliable message     126     0     0     0     0.09     0.05
    db file scattered read     19     0     0     3     0.01     0.05
    library cache lock     162     1     0     0     0.11     0.04
    latch: shared pool     2     0     0     27     0.00     0.04
    Disk file operations I/O     504     0     0     0     0.35     0.04
    gc current grant busy     148     0     0     0     0.10     0.04
    gcs log flush sync     84     0     0     1     0.06     0.04
    ges message buffer allocation     24,934     0     0     0     17.15     0.02
    enq: CR - block range reuse ckpt     83     0     0     0     0.06     0.02
    latch free     22     0     0     1     0.02     0.02
    CSS operation: action     13     0     0     2     0.01     0.02
    CSS initialization     4     0     0     6     0.00     0.02
    direct path read     1     0     0     21     0.00     0.02
    rdbms ipc reply     153     0     0     0     0.11     0.01
    db file parallel read     2     0     0     8     0.00     0.01
    direct path write     5     0     0     3     0.00     0.01
    gc current multi block request     49     0     0     0     0.03     0.01
    gc current block busy     5     0     0     2     0.00     0.01
    enq: PS - contention     24     50     0     0     0.02     0.01
    gc cr multi block request     54     0     0     0     0.04     0.01
    ges generic event     1     100     0     10     0.00     0.01
    gc current grant 2-way     35     0     0     0     0.02     0.01
    kfk: async disk IO     183     0     0     0     0.13     0.01
    Log archive I/O     3     0     0     2     0.00     0.01
    gc buffer busy acquire     2     0     0     3     0.00     0.00
    LGWR wait for redo copy     123     0     0     0     0.08     0.00
    IPC send completion sync     18     0     0     0     0.01     0.00
    enq: TA - contention     11     0     0     0     0.01     0.00
    read by other session     2     0     0     2     0.00     0.00
    enq: TM - contention     9     89     0     0     0.01     0.00
    latch: ges resource hash list     135     0     0     0     0.09     0.00
    PX Deq: Slave Session Stats     12     0     0     0     0.01     0.00
    KJC: Wait for msg sends to complete     89     0     0     0     0.06     0.00
    enq: TD - KTF dump entries     8     0     0     0     0.01     0.00
    enq: US - contention     7     0     0     0     0.00     0.00
    CSS operation: query     12     0     0     0     0.01     0.00
    enq: TK - Auto Task Serialization     6     100     0     0     0.00     0.00
    PX Deq: Signal ACK RSG     24     50     0     0     0.02     0.00
    log file single write     6     0     0     0     0.00     0.00
    enq: WL - contention     2     100     0     1     0.00     0.00
    ADR block file read     13     0     0     0     0.01     0.00
    ADR block file write     5     0     0     0     0.00     0.00
    latch: object queue header operation     1     0     0     1     0.00     0.00
    gc cr block busy     1     0     0     1     0.00     0.00
    rdbms ipc message     103,276     67     126,259     1223     71.03     
    PX Idle Wait     6,467     67     12,719     1967     4.45     
    wait for unread message on broadcast channel     7,240     100     7,221     997     4.98     
    gcs remote message     218,809     84     7,213     33     150.49     
    DIAG idle wait     203,228     95     7,185     35     139.77     
    shared server idle wait     121     100     3,630     30000     0.08     
    ASM background timer     3,343     0     3,611     1080     2.30     
    Space Manager: slave idle wait     723     100     3,610     4993     0.50     
    heartbeat monitor sleep     722     100     3,610     5000     0.50     
    ges remote message     73,089     52     3,609     49     50.27     
    dispatcher timer     66     88     3,608     54660     0.05     
    pmon timer     1,474     82     3,607     2447     1.01     
    PING     1,487     19     3,607     2426     1.02     
    Streams AQ: qmn slave idle wait     125     0     3,594     28754     0.09     
    Streams AQ: qmn coordinator idle wait     250     50     3,594     14377     0.17     
    smon timer     18     50     3,505     194740     0.01     
    JOX Jit Process Sleep     73     100     976     13370     0.05     
    class slave wait     56     0     605     10806     0.04     
    KSV master wait     2,215     98     1     0     1.52     
    SQL*Net message from client     109     0     0     2     0.07     
    PX Deq: Parse Reply     27     44     0     1     0.02     
    PX Deq: Join ACK     30     40     0     1     0.02     
    PX Deq: Execute Reply     20     30     0     0     0.01     
    Streams AQ: RAC qmn coordinator idle wait     259     100     0     0     0.18     
    Wait Event Histogram
    * Units for Total Waits column: K is 1000, M is 1000000, G is 1000000000
    * % of Waits: value of .0 indicates value was <.05%; value of null is truly 0
    * % of Waits: column heading of <=1s is truly <1024ms, >1s is truly >=1024ms
    * Ordered by Event (idle events last)
              % of Waits
    Event     Total Waits     <1ms     <2ms     <4ms     <8ms     <16ms     <32ms     <=1s     >1s
    ADR block file read     13     100.0                                   
    ADR block file write     5     100.0                                   
    ADR file lock     6     100.0                                   
    ARCH wait for archivelog lock     3     100.0                                   
    ASM file metadata operation     3483     99.6     .1     .1                    .2     
    CGS wait for IPC msg     32.7K     100.0                                   
    CSS initialization     12     50.0                    50.0               
    CSS operation: action     21     28.6     9.5          61.9                    
    CSS operation: query     36     86.1     5.6     8.3                         
    DFS lock handle     6198     98.6     1.2     .1                    .1     
    Disk file operations I/O     909     95.7     3.6     .7                         
    IPC send completion sync     5469     99.9     .1     .0     .0                    
    KJC: Wait for msg sends to complete     313     100.0                                   
    LGWR wait for redo copy     122     100.0                                   
    Log archive I/O     3     66.7               33.3                    
    OJVM: Generic     36     55.6                              44.4     
    PX Deq: Signal ACK EXT     72     98.6     1.4                              
    PX Deq: Signal ACK RSG     2070     99.7               .0     .1     .0     .1     
    PX Deq: Slave Session Stats     2673     99.7     .2                         .1     .0
    PX Deq: reap credit     44.7K     100.0                                   
    SQL*Net break/reset to client     20     95.0                                   5.0
    SQL*Net message to client     14.7K     100.0                                   
    SQL*Net more data from client     32     100.0                                   
    SQL*Net more data to client     689     100.0                                   
    asynch descriptor resize     3387     100.0                                   
    buffer busy waits     2     100.0                                   
    control file parallel write     2455     96.6     2.2     .6     .6          .1          
    control file sequential read     36K     99.4     .3     .1     .1     .1     .1     .0     
    db file parallel read     397     8.8     .8     5.5     12.6     17.4     46.3     8.6     
    db file parallel write     8233     85.4     10.3     2.3     1.4     .4     .1          
    db file scattered read     26.3K     79.2     1.5     8.2     10.5     .6     .1     .0     
    db file sequential read     28.4K     60.2     3.3     18.0     18.1     .3     .1     .0     
    db file single write     2     100.0                                   
    direct path read     2     50.0                         50.0          
    direct path read temp     1722     95.8     2.8     .1     .5     .8     .1          
    direct path write     6     83.3                    16.7               
    direct path write temp     4842     96.3     2.7     .5     .2     .0     .0     .2     
    enq: AF - task serialization     1     100.0                                   
    enq: CF - contention     145     99.3     .7                              
    enq: CO - master slave det     1203     98.9     .8     .2                         
    enq: CR - block range reuse ckpt     83     100.0                                   
    enq: DR - contention     2     100.0                                   
    enq: FB - contention     131     100.0                                   
    enq: HW - contention     97     100.0                                   
    enq: JQ - contention     19     89.5     10.5                              
    enq: JS - job run lock - synchronize     3     100.0                                   
    enq: MD - contention     1     100.0                                   
    enq: MW - contention     2     100.0                                   
    enq: PS - contention     3207     99.5     .4     .1                         
    enq: TA - contention     11     100.0                                   
    enq: TD - KTF dump entries     8     100.0                                   
    enq: TK - Auto Task Serialization     6     100.0                                   
    enq: TM - contention     58     100.0                                   
    enq: TO - contention     3     100.0                                   
    enq: TQ - DDL contention     1     100.0                                   
    enq: TS - contention     1     100.0                                   
    enq: UL - contention     1     100.0                                   
    enq: US - contention     7     100.0                                   
    enq: WF - contention     11     81.8                              18.2     
    enq: WL - contention     2     50.0     50.0                              
    gc buffer busy acquire     2     50.0               50.0                    
    gc cr block 2-way     4934     99.9     .1                    .0     .0     
    gc cr block busy     35     68.6     31.4                              
    gc cr block congested     6     100.0                                   
    gc cr disk read     2     100.0                                   
    gc cr grant 2-way     4824     100.0     .0                              
    gc cr grant congested     2     100.0                                   
    gc cr multi block request     37.1K     99.8     .2     .0     .0     .0     .0     .0     
    gc current block 2-way     2134     99.9     .0                         .0     
    gc current block busy     7     14.3     14.3          14.3          28.6     28.6     
    gc current block congested     2     100.0                                   
    gc current grant 2-way     1337     99.9     .1                              
    gc current grant busy     7123     99.2     .2     .2     .0     .0     .3     .1     
    gc current grant congested     2     100.0                                   
    gc current multi block request     1260     99.8     .2                              
    gc object scan     28.8K     100.0                                   
    gcs log flush sync     65     95.4          3.1     1.5                    
    ges LMON to get to FTDONE     3                              100.0          
    ges generic event     1                         100.0               
    ges inquiry response     2     100.0                                   
    ges lms sync during dynamic remastering and reconfig     24                         16.7     29.2     54.2     
    ges message buffer allocation     63.1K     100.0                                   
    kfk: async disk IO     23.3K     100.0     .0     .0                         
    kjbdrmcvtq lmon drm quiesce: ping completion     9     11.1                              88.9     
    ksxr poll remote instances     19.1K     100.0                                   
    latch free     52     59.6     40.4                              
    latch: call allocation     2     100.0                                   
    latch: gc element     1     100.0                                   
    latch: gcs resource hash     1     100.0                                   
    latch: ges resource hash list     135     100.0                                   
    latch: object queue header operation     5     40.0     40.0     20.0                         
    latch: shared pool     5     40.0                    20.0     20.0     20.0     
    library cache load lock     74     9.5     5.4     8.1     17.6     10.8     13.5     35.1     
    library cache lock     493     99.2     .4     .4                         
    library cache pin     1186     98.4     .3     1.2     .1                    
    library cache: mutex X     6     100.0                                   
    log file parallel write     3897     72.9     1.5     17.1     7.5     .6     .3     .1     
    log file sequential read     350     4.6               3.1     59.4     30.0     2.9     
    log file single write     6     100.0                                   
    log file switch completion     3                         33.3          66.7     
    log file sync     385     90.4     3.6     4.7     .8     .5               
    name-service call wait     18          5.6     5.6     5.6     16.7     44.4     22.2     
    os thread startup     146                                   100.0     
    rdbms ipc reply     3763     99.7     .3                              
    read by other session     2     50.0          50.0                         
    reliable message     4565     99.7     .2     .0               .0     .1     
    row cache lock     2334     99.3     .2     .1                    .1     .3
    undo segment extension     8     50.0                    37.5     12.5          
    utl_file I/O     11     100.0                                   
    ASM background timer     3343     57.0     .3     .1     .1     .1          21.1     21.4
    DIAG idle wait     203.2K     3.4     .2     .4     18.0     41.4     14.8     21.8     
    JOX Jit Process Sleep     73                                   2.7     97.3
    KSV master wait     2213     99.4     .1     .2                    .3     
    PING     1487     81.0                                   19.0
    PX Deq Credit: send blkd     7     57.1          14.3     14.3          14.3          
    PX Deq: Execute Reply     2966     59.8     .8     9.5     5.6     10.2     2.6     11.4     
    PX Deq: Execution Msg     10.6K     72.4     12.1     2.6     2.5     .1     5.6     4.6     .0
    PX Deq: Join ACK     3006     77.9     22.1     .1                         
    PX Deq: Parse Reply     3184     67.1     31.1     1.6     .2                    
    PX Idle Wait     6466     .2     8.7     4.3     4.8     .3     .1     5.0     76.6
    SQL*Net message from client     14.7K     72.4     2.8     .8     .5     .9     .4     2.8     19.3
    Space Manager: slave idle wait     722                                        100.0
    Streams AQ: RAC qmn coordinator idle wait     259     100.0                                   
    Streams AQ: qmn coordinator idle wait     250     50.0                                   50.0
    Streams AQ: qmn slave idle wait     125                                        100.0
    class slave wait     55     67.3          7.3     1.8     5.5     1.8     7.3     9.1
    dispatcher timer     66     6.1                                   93.9
    gcs remote message     218.6K     7.7     1.8     1.2     1.6     1.7     15.7     70.3     
    ges remote message     72.9K     29.7     5.1     2.7     2.2     1.5     4.0     54.7     
    heartbeat monitor sleep     722                                        100.0
    jobq slave wait     7725                    .1          .0     99.9     
    pmon timer     1474     18.4                                   81.6
    rdbms ipc message     103.3K     20.7     2.7     1.5     1.3     .9     .7     40.7     31.6
    shared server idle wait     121                                        100.0
    smon timer     18                                        100.0
    wait for unread message on broadcast channel     7238                         .3          99.7     
    Wait Event Histogram Detail (64 msec to 2 sec)
    * Units for Total Waits column: K is 1000, M is 1000000, G is 1000000000
    * Units for % of Total Waits: ms is milliseconds s is 1024 milliseconds (approximately 1 second)
    * % of Total Waits: total waits for all wait classes, including Idle
    * % of Total Waits: value of .0 indicates value was <.05%; value of null is truly 0
    * Ordered by Event (only non-idle events are displayed)
              % of Total Waits
    Event     Waits 64ms to 2s     <32ms     <64ms     <1/8s     <1/4s     <1/2s     <1s     <2s     >=2s
    ASM file metadata operation     6     99.8          .1     .1                    
    DFS lock handle     6     99.9               .1     .0               
    OJVM: Generic     16     55.6               2.8          41.7          
    PX Deq: Signal ACK RSG     3     99.9     .0     .1                         
    PX Deq: Slave Session Stats     3     99.9          .0               .0     .0     
    SQL*Net break/reset to client     1     95.0                              5.0     
    control file sequential read     1     100.0          .0                         
    db file parallel read     34     91.4     8.6                              
    db file scattered read     4     100.0     .0          .0                    
    db file sequential read     6     100.0     .0     .0     .0                    
    direct path write temp     11     99.8     .1     .1     .0                    
    enq: WF - contention     2     81.8               18.2                    
    gc cr block 2-way     1     100.0          .0                         
    gc cr multi block request     1     100.0          .0                         
    gc current block 2-way     1     100.0     .0                              
    gc current block busy     2     71.4     28.6                              
    gc current grant busy     8     99.9     .0     .1                         
    ges lms sync during dynamic remastering and reconfig     13     45.8     20.8     33.3                         
    kjbdrmcvtq lmon drm quiesce: ping completion     8     11.1     11.1     77.8                         
    latch: shared pool     1     80.0     20.0                              
    library cache load lock     26     64.9     14.9     12.2     4.1     4.1               
    log file parallel write     2     99.9     .0               .0               
    log file sequential read     10     97.1     2.0     .6     .3                    
    log file switch completion     2     33.3               66.7                    
    name-service call wait     4     77.8          22.2                         
    os thread startup     146               100.0                         
    reliable message     4     99.9          .0               .1          
    row cache lock     2     99.7                    .0     .0          .3
    Wait Event Histogram Detail (4 sec to 2 min)
    * Units for Total Waits column: K is 1000, M is 1000000, G is 1000000000
    * Units for % of Total Waits: s is 1024 milliseconds (approximately 1 second) m is 64*1024 milliseconds (approximately 67 seconds or 1.1 minutes)
    * % of Total Waits: total waits for all wait classes, including Idle
    * % of Total Waits: value of .0 indicates value was <.05%; value of null is truly 0
    * Ordered by Event (only non-idle events are displayed)
              % of Total Waits
    Event     Waits 4s to 2m     <2s     <4s     <8s     <16s     <32s     < 1m     < 2m     >=2m
    row cache lock     6     99.7     .3                              
    Wait Event Histogram Detail (4 min to 1 hr)
    No data exists for this section of the report.
    Service Statistics
    * ordered by DB Time
    Service Name     DB Time (s)     DB CPU (s)     Physical Reads (K)     Logical Reads (K)
    ubshost     1,934     1,744     445     73,633
    SYS$USERS     105     45     1     404
    SYS$BACKGROUND     0     0     1     128
    ubshostXDB     0     0     0     0
    Service Wait Class Stats
    * Wait Class info for services in the Service Statistics section.
    * Total Waits and Time Waited displayed for the following wait classes: User I/O, Concurrency, Administrative, Network
    * Time Waited (Wt Time) in seconds
    Service Name     User I/O Total Wts     User I/O Wt Time     Concurcy Total Wts     Concurcy Wt Time     Admin Total Wts     Admin Wt Time     Network Total Wts     Network Wt Time
    ubshost      60232     90     2644     4     0     0     13302     0
    SYS$USERS      997     2     525     19     0     0     1973     0
    SYS$BACKGROUND      1456     2     1258     14     0     0     0     0
    I am not able to paste the whole AWR report; I have pasted some of its sections.
    Please help.
    Thanks and Regards,

  • How do I set the initial servlet pool size in WL 5.1

    In WL 4.5, I can set the initial servlet pool size using
    weblogic.httpd.servlet.SingleThreadedModelPoolSize. I tried to set this property in WL 5.1, and get a "Found undeclared property..." message when booting WL. Is this feature still supported in WL 5.1? If so, how do I set it?
    Thanks

    It appears that a pool size of 5 is hardcoded somewhere - no matter where you specify
    weblogic.httpd.servlet.SingleThreadedModelPoolSize, the following test servlet:
    import javax.servlet.*;
    import javax.servlet.http.*;

    public class SingleT extends HttpServlet implements SingleThreadModel {
        static int instanceCount = 0;

        public SingleT() {
            super();
            System.out.println("Instance " + (++instanceCount) + " created");
        }
    }
    always produces:
    Instance 1 created
    Wed Nov 01 11:15:36 PST 2000:<I> <ServletContext-General> SingleT: init
    Instance 2 created
    Wed Nov 01 11:15:36 PST 2000:<I> <ServletContext-General> SingleT: init
    Instance 3 created
    Wed Nov 01 11:15:36 PST 2000:<I> <ServletContext-General> SingleT: init
    Instance 4 created
    Wed Nov 01 11:15:36 PST 2000:<I> <ServletContext-General> SingleT: init
    Instance 5 created
    Wed Nov 01 11:15:36 PST 2000:<I> <ServletContext-General> SingleT: init
    Joe Trung <[email protected]> wrote:
    > Hi Huy,
    > There is a lot of 'undeclared' stuff if you move from 4.5.1 to 5.1.
    > However, if you run WLS with
    > '-Dweblogic.httpd.servlet.SingleThreadedModelPoolSize=10'
    > you will get what you want. I think BEA has moved this option to the <System props>, no more in its <config>.
    > Joe
    > "Huy Pham" <[email protected]> wrote:
    >>
    >>In WL 4.5, I can set the initial servlet pool size using
    >>weblogic.httpd.servlet.SingleThreadedModelPoolSize. I tried to set this property in WL 5.1, and get a "Found undeclared property..." message when booting WL. Is this feature still supported in WL 5.1? If so, how do I set it?
    >>
    >>Thanks
    Dimitri
              

  • Set maximum session bean pool size?

    Using the embedded OC4J, how can I set the maximum pool size for my session beans? I am using Jdeveloper 10g. Do I have to manually edit some XML file?

    Set the system property com.sun.jndi.ldap.connect.pool.maxsize
    System.setProperty("com.sun.jndi.ldap.connect.pool.maxsize", "25");

  • Fixed Size Thread Pool which infinitely serve task submitted to it

    Hi,
    I want to create a fixed-size thread pool, say of size 100, and I will submit around 200 tasks to it.
    Now I want it to serve them infinitely, i.e. once all tasks are completed, re-do them again and again.
    public void start(Vector<String> addresses) {
        // Create a Runnable for each address in "addresses"
        Vector<FindAgentRunnable> runnables = new Vector<FindAgentRunnable>(1, 1);
        for (String address : addresses) {
            runnables.addElement(new FindAgentRunnable(address));
        }
        // Create a thread pool of size 100
        ExecutorService pool = Executors.newFixedThreadPool(100);
        // Add all the runnables to the thread pool
        for (FindAgentRunnable runnable : runnables) {
            pool.submit(runnable);
        }
        pool.shutdown();
    }
    Now I want this thread pool to execute the tasks infinitely, i.e. once all the tasks are done, restart them again.
    I have also tried to add them again and again, but it throws a java.util.concurrent.RejectedExecutionException:
    public void start(Vector<String> addresses) {
        // Create a Runnable for each address in "addresses"
        Vector<FindAgentRunnable> runnables = new Vector<FindAgentRunnable>(1, 1);
        for (String address : addresses) {
            runnables.addElement(new FindAgentRunnable(address));
        }
        // Create a thread pool of size 100
        ExecutorService pool = Executors.newFixedThreadPool(100);
        for (;;) {
            for (FindAgentRunnable runnable : runnables) {
                pool.submit(runnable);
            }
            pool.shutdown();
            try {
                pool.awaitTermination(Long.MAX_VALUE, TimeUnit.SECONDS);
            } catch (InterruptedException ex) {
                Logger.getLogger(AgentFinder.class.getName()).log(Level.SEVERE, null, ex);
            }
        }
    }
    Can anybody help me to solve this problem?
    Thanks in advance.

    Ravi_Gupta wrote:
    @kajbj
    So what should I do? Can you suggest a solution?

    Consider this thread "closed". Continue to post in your other thread. I, and all the others, don't want to give answers that have already been given.
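    For the record, one way to get the behaviour asked for (my own sketch, not an answer given in the thread): the RejectedExecutionException comes from calling pool.shutdown() inside the loop, because a shut-down pool rejects every further submission. Keeping the pool open and resubmitting the same batch with invokeAll, which waits for each round to finish, repeats the tasks indefinitely. FindAgentRunnable is the poster's own class; wrapping it with Executors.callable is my adaptation:

    import java.util.*;
    import java.util.concurrent.*;

    // inside the poster's class, replacing the start(...) shown above
    public void start(Vector<String> addresses) throws InterruptedException {
        // Build the task list once.
        List<Callable<Object>> tasks = new ArrayList<Callable<Object>>();
        for (String address : addresses) {
            tasks.add(Executors.callable(new FindAgentRunnable(address)));
        }
        ExecutorService pool = Executors.newFixedThreadPool(100);
        for (;;) {
            // Submits the whole batch and blocks until this round completes;
            // shutdown() is never called here, so the next round is accepted.
            pool.invokeAll(tasks);
        }
    }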

  • Fixed size thread pool excepting more tasks then it should

    Hello,
    I have the following code in a simple program (code below)
    BlockingQueue<Runnable> q = new ArrayBlockingQueue<Runnable>(10, false);
    ThreadPoolExecutor newPool = new ThreadPoolExecutor(1, 10, 20, TimeUnit.SECONDS, q);
    for (int x = 0; x < 30; x++) {
        newPool.execute(new threaded());
    }
    My understanding is that this should create a thread pool that will accept 10 tasks; once 10 tasks have been submitted I should get RejectedExecutionException. However, when I execute the code the pool accepts 20 execute calls before throwing RejectedExecutionException. I am on Windows 7 using Java 1.6.0_21.
    Any thoughts on what I am doing incorrectly?
    Thanks
    import java.util.concurrent.*;

    public class ThreadPoolTest {
        public static class threaded implements Runnable {
            @Override
            public void run() {
                System.out.println("In thread: " + Thread.currentThread().getId());
                try {
                    Thread.sleep(5000);
                } catch (InterruptedException e) {
                    System.out.println("Thread: " + Thread.currentThread().getId()
                            + " interrupted");
                }
                System.out.println("Exiting thread: " + Thread.currentThread().getId());
            }
        }

        private static int MAX = 10;
        private Executor pool;

        public ThreadPoolTest() {
            super();
            BlockingQueue<Runnable> q = new ArrayBlockingQueue<Runnable>(MAX / 2, false);
            ThreadPoolExecutor newPool = new ThreadPoolExecutor(1, MAX, 20, TimeUnit.SECONDS, q);
            pool = newPool;
        }

        /**
         * @param args
         */
        public static void main(String[] args) {
            ThreadPoolTest object = new ThreadPoolTest();
            object.doThreads();
        }

        private void doThreads() {
            int submitted = 0, rejected = 0;
            for (int x = 0; x < MAX * 3; x++) {
                try {
                    System.out.println(Integer.toString(x) + " submitting");
                    pool.execute(new threaded());
                    submitted++;
                } catch (RejectedExecutionException re) {
                    System.err.println("Submission " + x + " was rejected");
                    rejected++;
                }
            }
            System.out.println("\n\nSubmitted: " + MAX * 3);
            System.out.println("Accepted: " + submitted);
            System.out.println("Rejected: " + rejected);
        }
    }

    I don't know what is wrong because I tried this
    public static void main(String args[]) {
        BlockingQueue<Runnable> q = new ArrayBlockingQueue<Runnable>(10, false);
        ThreadPoolExecutor newPool = new ThreadPoolExecutor(1, 10, 20, TimeUnit.SECONDS, q);
        for (int x = 0; x < 100; x++) {
            System.err.println(x + ": " + q.size());
            newPool.submit(new Callable<Void>() {
                @Override
                public Void call() throws Exception {
                    Thread.sleep(1000);
                    return null;
                }
            });
        }
    }
    and it printed
    0: 0
    1: 0
    2: 1
    3: 2
    4: 3
    5: 4
    6: 5
    7: 6
    8: 7
    9: 8
    10: 9
    11: 10
    12: 10
    13: 10
    14: 10
    15: 10
    16: 10
    17: 10
    18: 10
    19: 10
    20: 10
    Exception in thread "main" java.util.concurrent.RejectedExecutionException
         at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:1768)
         at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:767)
         at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:658)
         at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:92)
         at Main.main(Main.java:36)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:597)
          at com.intellij.rt.execution.application.AppMain.main(AppMain.java:115)
    I have Java 6 update 24 on Linux, but I don't believe this should make a difference. Can you try my code?
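    A possible explanation for the 20 (my reading, not something stated in the thread): execute() queues work while the queue has room and only starts threads beyond the core once the queue is full, so admission is limited by queue capacity plus maximum threads. With a queue of 10 and a maximum of 10 threads that is 20 tasks before the first rejection. A small sketch that makes the count deterministic by parking every worker on a latch:

    import java.util.concurrent.*;

    public class CapacityDemo {
        public static void main(String[] args) {
            // Hold every worker so no thread or queue slot frees up mid-test.
            final CountDownLatch hold = new CountDownLatch(1);
            BlockingQueue<Runnable> q = new ArrayBlockingQueue<Runnable>(10, false);
            ThreadPoolExecutor pool = new ThreadPoolExecutor(1, 10, 20, TimeUnit.SECONDS, q);

            int accepted = 0;
            try {
                for (int x = 0; x < 100; x++) {
                    pool.execute(new Runnable() {
                        public void run() {
                            try { hold.await(); } catch (InterruptedException ignored) { }
                        }
                    });
                    accepted++;
                }
            } catch (RejectedExecutionException re) {
                // expected once the queue (10) and the max threads (10) are both full
            }
            System.out.println("Accepted before rejection: " + accepted);  // prints 20
            hold.countDown();
            pool.shutdown();
        }
    }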

  • Guidelines for setting Application Module Pool Size Parameters?

    Are there guidelines for setting the application module pool size parameters, such as initial pool size, maximum pool size, etc., based on the expected number of users or other factors? I've read the developer guide sections (ch 28-29), but I still don't have a good feel for how to set optimal values for the pool configuration parameters. Even more importantly, how do I monitor the pool's efficiency at runtime to determine whether the pooling parameters are configured correctly?
    This will be critical to performance and scalability, so I'm looking for a way to get some visibility into how the pooling is working during production operation to assess whether there are bottlenecks, constraints, or inefficiencies.
    Note I am using Tomcat as the java runtime container; ADF BC / JSF jdev 10.1.3.1
    Thanks in advance and Merry Christmas!

    KUBA - were you able to resolve these issues and if so are there any lessons learned you can share?
    I'm hoping someone from the ADF team can answer our original question including guidelines for setting pool parameters and how to monitor the pool's performance while running in production.
    thanks

  • Setting max bean pool size in MDB

    Hi,
    I need to set the max bean pool size for my MDB to 1. This MDB is a part of my application and is packaged in an ear.
    I tried to set it with the following annotation -
    import javax.ejb.*;
     @MessageDriven(
         mappedName = "MyQueue",
         name = "MyMDB",
         activationConfig = {
             @ActivationConfigProperty(propertyName = "maxBeansInFreePool", propertyValue = "1"),
             @ActivationConfigProperty(propertyName = "initialBeansInFreePool", propertyValue = "1"),
             @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue")
         })
    However, this does not seem to work since I see the Current pool count on the WLS console as 3 after processing is done.
    After looking at various posts in this forum, I also tried it with weblogic ejbgen as follows-
    import weblogic.ejbgen.*;
    @MessageDriven(ejbName = "MyMDB",
    destinationType = "javax.jms.Queue",
    initialBeansInFreePool = "1",
    maxBeansInFreePool = "1",
    destinationJndiName = "MyQueue")
    However, with this the MDB did not get deployed in WLS.
    I am using Weblogic 10.3 / EJB 3.0.
    Any help on this is greatly appreciated.
    Thanks
    Meera

    As far as I know, it currently isn't possible to set max-beans-in-free-pool via annotations. You can use a deployment plan (configurable from console and/or follow the link supplied by atheek1).
     I think you can also automatically generate descriptors based on javadoc text via ejb-gen, though I'm not quite sure whether that tooling works in conjunction with EJB 3.0 annotations. See http://download.oracle.com/docs/cd/E12840_01/wls/docs103/ejb/EJBGen_reference.html
    Tom

  • How to correctly use a fixed size thread pool?

    I am quite new to using concurrency in Java, so please forgive if this is a trivial question.
     I would like to make use of something like pool=Executors.newFixedThreadPool(n) to automatically use a fixed number of threads to process pieces of work. I understand that I can asynchronously run some Runnable on one of the threads in the thread pool using pool.execute(someRunnable).
     My problem is this: I have a fixed number N of data structures myDS (which are not reentrant or shareable) that get initialized at program start and which are needed by the runnables to do the work. So, what I would really like to do is reuse not only the N threads but also the N data structures.
     So, let's say I want to have 10 threads; then I would want to create 10 myDS objects once and for all. Each time some work comes in, I want that work to get processed by the next free thread, using the next free data structure. What I was wondering is whether there is something in the library that lets me reuse threads AND data structures as simply as just reusing a pool of threads. Ideally, each thread would get associated with one data structure somehow.
    Currently I use an approach where I create 10 Runnable worker objects, each with its own copy of myDS. Those worker objects get stored in an ArrayBlockingQueue of size 10. Each time some work comes in, I get the next Runner from the queue, pass it the piece of work and submit it to the thread pool.
    The tricky part is how to get the worker object back into the Queue: currently I essentially do queue.put(this) at the very end of each Runnable's run method but I am not sure if that is safe or how to do it safely.
    What are the standard patterns and library classes to use for solving this problem correctly?

    Thank you for that feedback!
     There is one issue that worries me, though, and that I obviously do not understand well enough: as I said, I hand back the Runnable to the blocking queue at the end of the Runnable.run method using queue.put(this). This is done via a static method from the main class that creates the threads and runnable objects in a main method. Originally I tried to make that method for putting back the Runnable objects serialized, but that invariably led to a deadlock or hang condition: the method for putting back the runnable was never actually run. So I ended up doing this without serializing the put action in any way, and so far it seems to work ... but is this safe?
     To reiterate: I have a static class that creates a thread pool object and an ArrayBlockingQueue of runnable objects. In a loop I use queue.take() to get the next free runnable object and pass this runnable to pool.execute. Inside the runnable, in method run, I use staticclass.putBack(this), which in turn does queue.put(therunnableigot). Can I trust that this queue.put operation, which can happen from several threads at the same time, works reliably without serializing it explicitly? And why would making the staticclass.putBack method serialized cause a hang? I also tried to serialize using the queue object itself instead of the static class, by doing serialize(queue) { queue.put(therunnable) }, but that also caused a hang. I have to admit that I do not understand at all why that hang occurred, or whether I need the serialization here at all.
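     A common way to set this up (sketched below, with a hypothetical MyDS class and process() method standing in for your real data structure and work) is to keep the data structures themselves in a BlockingQueue, let each task check one out at the start of run(), and hand it back in a finally block. ArrayBlockingQueue is thread-safe, so concurrent take/put calls from different worker threads need no extra synchronization around them; conversely, holding an extra lock while also blocking on the queue is exactly the kind of arrangement that can hang.
     import java.util.concurrent.*;
     public class ResourcePoolSketch {
         // Placeholder for the real, non-shareable data structure
         static class MyDS {
             void process(String work) { /* do the real work here */ }
         }
         public static void main(String[] args) throws InterruptedException {
             final int N = 10;
             // One data structure per worker thread, checked out and returned per task
             final BlockingQueue<MyDS> resources = new ArrayBlockingQueue<MyDS>(N);
             for (int i = 0; i < N; i++) {
                 resources.put(new MyDS());
             }
             ExecutorService pool = Executors.newFixedThreadPool(N);
             for (int i = 0; i < 100; i++) {
                 final String work = "work-" + i;
                 pool.execute(new Runnable() {
                     public void run() {
                         MyDS ds = null;
                         try {
                             ds = resources.take();   // blocks until a structure is free
                             ds.process(work);
                         } catch (InterruptedException ie) {
                             Thread.currentThread().interrupt();
                         } finally {
                             // offer() never blocks and always succeeds here: capacity is N
                             // and this task removed one element above
                             if (ds != null) {
                                 resources.offer(ds);
                             }
                         }
                     }
                 });
             }
             pool.shutdown();
         }
     }
     With this arrangement the work item is just data handed to each task, and the Runnable itself never has to be put back into a queue, so the step you were worried about disappears entirely.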

  • JRun Thread Pool Issue

    I'm running CF 9.0.1 on Ubuntu on an "Medium" Amazon EC2 instance. CF has been crashing intermittently (several times per day). At such times, running top gets me this (or something similar):
       PID USER    PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
     15855 wwwrun  20   0 1762m 730m  20m S 99.3 19.4 13:22.96 coldfusion9
     So, it's obviously consuming most of the server resources. The following error has been showing up in my cfserver.log in the lead-up to each crash:
    java.lang.RuntimeException: Request timed out waiting for an available thread to run. You may want to consider increasing the number of active threads in the thread pool.
    If I run /opt/coldfusion9/bin/coldfusion status, I get:
    Pg/Sec  DB/Sec  CP/Sec  Reqs  Reqs  Reqs  AvgQ   AvgReq AvgDB  Bytes  Bytes
    Now Hi  Now Hi  Now Hi  Q'ed  Run'g TO'ed Time   Time   Time   In/Sec Out/Sec
    0   0   0   0   -1  -1  150   25    0     0      -1352560      0      0
     In the administrator, under Server Settings > Request Tuning, the setting for Maximum number of simultaneous Template requests is 25. So this makes sense so far. I could just increase the thread pool to cover these sorts of load spikes. I could make it 200. (Which I did just now as a test.)
    However, there's also this file /opt/coldfusion9/runtime/servers/coldfusion/SERVER-INF/jrun.xml. And some of the settings in there appear to conflict. For example, it reads:
    <service class="jrunx.scheduler.SchedulerService" name="SchedulerService">
      <attribute name="bindToJNDI">true</attribute>
      <attribute name="activeHandlerThreads">25</attribute>
      <attribute name="maxHandlerThreads">1000</attribute>
      <attribute name="minHandlerThreads">20</attribute>
      <attribute name="threadWaitTimeout">180</attribute>
      <attribute name="timeout">600</attribute>
    </service>
     Which a) has fewer active threads (what does this mean?), and b) has a maxHandlerThreads value that exceeds the simultaneous request limit set in the admin. So, I'm not sure. Are these independent configs that need to be made to match manually? Or is the jrun.xml file supposed to be written by the CF Admin when changes are made there? Hmm. But maybe this is different because presumably the CF Scheduler should only use a subset of all available threads, right...so we'd always have some threads for real live users. We also have this in there:
    <service class="jrun.servlet.http.WebService" name="WebService">
      <attribute name="port">8500</attribute>
      <attribute name="interface">*</attribute>
      <attribute name="deactivated">true</attribute>
      <attribute name="activeHandlerThreads">200</attribute>
      <attribute name="minHandlerThreads">1</attribute>
      <attribute name="maxHandlerThreads">1000</attribute>
      <attribute name="mapCheck">0</attribute>
      <attribute name="threadWaitTimeout">300</attribute>
      <attribute name="backlog">500</attribute>
      <attribute name="timeout">300</attribute>
    </service>
     This appears to have changed when I changed the CF Admin setting...maybe...but it's the activeHandlerThreads that matches my new maximum simultaneous requests setting...rather than the maxHandlerThreads, which again exceeds it. Finally, we have this:
    <service class="jrun.servlet.jrpp.JRunProxyService" name="ProxyService">
      <attribute name="activeHandlerThreads">200</attribute>
      <attribute name="minHandlerThreads">1</attribute>
      <attribute name="maxHandlerThreads">1000</attribute>
      <attribute name="mapCheck">0</attribute>
      <attribute name="threadWaitTimeout">300</attribute>
      <attribute name="backlog">500</attribute>
      <attribute name="deactivated">false</attribute>
      <attribute name="interface">*</attribute>
      <attribute name="port">51800</attribute>
      <attribute name="timeout">300</attribute>
      <attribute name="cacheRealPath">true</attribute>
    </service>
    So, I'm not certain which (if any) of these I should change and what exactly the relationship is between maximum requests and maximum threads. Also, since several of these list the maxHandlerThreads as 1000, I'm wondering if I should just set the maximum simultaneous requests to 1000. There must be some upper limit that depends on available server resources...but I'm not sure what it is and I don't really want to play around with it since it's a production environment.
    I'm not sure if it pertains to this issue at all, but when I run a ps aux | grep coldfusion I get the following:
     wwwrun   15853  0.0  0.0    8704    760 pts/1  S   20:22   0:00 /opt/coldfusion9/runtime/bin/coldfusion9 -jar jrun.jar -autorestart -start coldfusion
     wwwrun   15855  5.4 18.2 1678552 701932 pts/1  Sl  20:22   1:38 /opt/coldfusion9/runtime/bin/coldfusion9 -jar jrun.jar -start coldfusion
    There are always these two and never more than these two processes. So there does not appear to be a one-to-one relationship between processes and threads. I recall from an MX 6.1 install I maintained for many years that additional CF processes were visible in the process list. It seemed to me at the time like I had a process for each thread...so either I was wrong or something is quite different in version 9 since it's reporting 25 running requests and only showing these two processes. If a single process can have multiple threads in the background, then I'm given to wonder why I have two processes instead of one...just curious.
     So, anyway, I've been experimenting while composing this post. As noted above I adjusted the maximum simultaneous requests up to 200. I was hoping this would solve my problem, but CF just crashed again (rather, it bogged down and requests started timing out...so effectively "crashed"). This time, top looked similar (still consuming more than 99% of the CPU), but CF status looked different:
    Pg/Sec  DB/Sec  CP/Sec  Reqs  Reqs  Reqs  AvgQ   AvgReq AvgDB  Bytes  Bytes
    Now Hi  Now Hi  Now Hi  Q'ed  Run'g TO'ed Time   Time   Time   In/Sec Out/Sec
    0   0   0   0   -1  -1  0     150   0     0      0      0      0      0
    Obviously, since I'd increased the maximum simultaneous requests, it was allowing more requests to run simultaneously...but it was still maxing out the server resources.
     Further experiments (after restarting CF) showed me that the server became unusably bogged down after about 30-35 "Reqs Run'g", with all additional requests headed for an inevitable timeout:
    Pg/Sec  DB/Sec  CP/Sec  Reqs  Reqs  Reqs  AvgQ   AvgReq AvgDB  Bytes  Bytes
    Now Hi  Now Hi  Now Hi  Q'ed  Run'g TO'ed Time   Time   Time   In/Sec Out/Sec
    0   0   0   0   -1  -1  0     33    0     0      -492   0      0      0
     So, it's clear that increasing the maximum simultaneous requests has not helped. I guess what it comes down to is this: What is it having such a hard time with? Where are these spikes coming from? Bursts of traffic? On what pages? What requests are running at any given time? I guess I simply need more information to continue troubleshooting. If there are long-running requests or other issues, I'm not seeing them in the logs (although I do have that option checked in the admin). I need to know exactly which requests are responsible for these spikes. Any help would be much appreciated. Thanks.
    ~Day

    I really appreciate your help. However, I haven't been able to find the JRun Thread settings you describe above.
    Under Request Tuning, I see:
    Server Settings > Request Tuning
    Request Limits
    Maximum number of simultaneous Template requests
      Restricts the number of simultaneously processed requests. Use this setting to increase overall system performance for heavy load applications. Requests beyond the specified limit are queued. On Standard Edition, you must restart ColdFusion to enable this setting. 
    Maximum number of simultaneous Flash Remoting requests
      The number of Flash Remoting requests that can be processed concurrently.
    Maximum number of simultaneous Web Service requests
      The number of Web Service requests that can be processed concurrently.
    Maximum number of simultaneous CFC function requests
      The number of ColdFusion Component methods that can be processed concurrently via HTTP. This does not affect invocation of CFC methods from within CFML, only methods requested via an HTTP request.
    Tag Limit Settings
    Maximum number of simultaneous Report threads
      The maximum number of ColdFusion reports that can be processed concurrently.
    Maximum number of threads available for CFTHREAD
      The maximum number of threads created by CFTHREAD that will be run concurrently. Threads created by CFTHREAD in excess of this are queued.  On Standard Edition, the maximum limit is 10. 
    And under Java and JVM, I see:
    Server Settings > Java and JVM
        Java and JVM settings control the way ColdFusion starts the Java Virtual Machine when it starts.  You can control settings like what classpaths are used and how memory is allocated as well as add custom command line arguments.  Changing these settings requires restarting ColdFusion.  If you enter an incorrect setting, ColdFusion may not restart properly. 
       Backups of the jvm.config file are created when you hit the submit button. You can use this backup to restore from a critical change. 
       Java Virtual Machine Path
      Specifies the location of the Java Virtual Machine.
       Minimum JVM Heap Size (MB)         Maximum JVM Heap Size  (MB)       
       The Memory Size settings determine the amount of memory that the JVM can use for programs and data. 
       ColdFusion Class Path
      Specifies any additional class paths for the JVM, with multiple directories separated by  commas.
       JVM Arguments
      -server -Dsun.io.useCanonCaches=false -XX:MaxPermSize=192m -XX:+UseParallelGC -Xbatch -Dcoldfusion.rootDir={application.home}/../ -Dcoldfusion.libPath={application.home}/../lib
      Specifies any specific JVM initialization options, separated by spaces.
     I did go take a look at FusionReactor and found it's not free (which would be fine, of course, if it would actually help). It looks like there's a fully functional demo, which is cool...but I haven't been able to get it to install yet, so we'll see.
    Thanks again!
    ~Day
    (By the way, I've cross-posted this inquiry on StackOverflow. So if you're able to help me arrive at a solution you might want to answer there as well.)
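     One more thought on the "what is actually running" question, independent of FusionReactor: the JVM itself can tell you what every thread is doing. If a JDK is available on the box, running jstack against the coldfusion9 process id during a spike gives you a full thread dump. Alternatively, something like the sketch below (the class name and structure are my own, not part of ColdFusion) uses the standard ThreadMXBean API; it has to execute inside the CF JVM, for example invoked from a CFML page, to show ColdFusion's threads:
     import java.lang.management.ManagementFactory;
     import java.lang.management.ThreadInfo;
     import java.lang.management.ThreadMXBean;
     public class ThreadDumper {
         // Returns the name, state and stack trace of every live thread
         // in the JVM this code runs in.
         public static String dump() {
             ThreadMXBean mx = ManagementFactory.getThreadMXBean();
             StringBuilder sb = new StringBuilder();
             for (ThreadInfo info : mx.dumpAllThreads(false, false)) {
                 sb.append('"').append(info.getThreadName()).append("\" ")
                   .append(info.getThreadState()).append('\n');
                 for (StackTraceElement frame : info.getStackTrace()) {
                     sb.append("    at ").append(frame).append('\n');
                 }
                 sb.append('\n');
             }
             return sb.toString();
         }
         public static void main(String[] args) {
             System.out.println(dump());
         }
     }
     Threads that sit in the same spot across two dumps taken a few seconds apart are usually the requests responsible for the spike.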
