Garbage collection every 60 seconds

Hi.
I have a problem with the (default) garbage collector doing a needless Full GC every 60 seconds:
1003.575: [Full GC 1003.576: [Tenured: 255242K->255199K(443476K), 3.7030366 secs] 255388K->255199K(476820K), [Perm : 5011K->4999K(5120K)], 3.7031405 secs]
1067.281: [Full GC 1067.281: [Tenured: 255199K->255199K(443476K), 3.6781538 secs] 255199K->255199K(476820K), [Perm : 4999K->4999K(5120K)], 3.6782605 secs]
1130.963: [Full GC 1130.963: [Tenured: 255199K->255199K(443476K), 3.6929562 secs] 255199K->255199K(476820K), [Perm : 4999K->4999K(5120K)], 3.6930599 secs]
1194.658: [Full GC 1194.658: [Tenured: 255199K->255199K(443476K), 3.6852628 secs] 255199K->255199K(476820K), [Perm : 4999K->4999K(5120K)], 3.6853695 secs]
Why is it doing this? There is basically no activity in the application, and there is clearly nothing to garbage collect. According to http://java.sun.com/docs/hotspot/gc1.4.2/ the garbage collector only performs a Full GC when the Tenured generation becomes full. However, that is clearly not the case here.
Any hints will be much appreciated.

A bit more info:
I am running JDK 1.4.2_04, on both Windows 2000 and Linux.
I am not doing any System.gc() calls.
The java start command is:
java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution -Xms256m -mx512m -cp %ODDSENGINE_HOME%\bin;%ODDSENGINE_HOME%\conf;%ODDSENGINE_HOME%\lib\common.jar;%ODDSENGINE_HOME%\lib\oddsengine.jar;%ODDSENGINE_HOME%\lib\activation.jar;%ODDSENGINE_HOME%\lib\mail.jar;%ODDSENGINE_HOME%\lib\mysql-connector-java-3.0.8-stable-bin.jar -Djava.rmi.server.codebase="file:/%ODDSENGINE_HOME%/lib/oddsengine.jar file:/%ODDSENGINE_HOME%/lib/common.jar" -Djava.security.policy=%ODDSENGINE_HOME%\conf\serverapp.policy com.betbrain.odds.engine.application.server.ServerApp
I have another program that does not use RMI or a security policy, and there the garbage collector runs as expected, i.e. only when the tenured generation becomes full.
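For what it's worth, the 60-second period matches the default interval of RMI's distributed garbage collector: on JDK 1.4 the sun.rmi.dgc.client.gcInterval and sun.rmi.dgc.server.gcInterval properties both default to 60000 ms, and DGC periodically forces a full collection when RMI is in use. If that is the cause here (an assumption based only on the fact that this program uses RMI and the other one doesn't), raising the intervals is one way to test it:

```shell
# Assumption: the periodic Full GCs come from RMI distributed GC.
# Raise both DGC intervals from the 60000 ms default to one hour:
java -Dsun.rmi.dgc.client.gcInterval=3600000 \
     -Dsun.rmi.dgc.server.gcInterval=3600000 \
     -verbose:gc -XX:+PrintGCDetails ... com.betbrain.odds.engine.application.server.ServerApp
```

If the Full GCs then occur hourly instead of every minute, the DGC is confirmed as the trigger.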

Similar Messages

  • Garbage collection Java Virtual Machine : Hewlett-Packard Hotspot release 1.3.1.01

    Hi,
    I am trying to understand the garbage collection mechanism of the Hewlett-Packard Java Virtual Machine: Hotspot release 1.3.1.01.
    This mechanism is described in the PDF file "memory management and garbage collection", available under the paragraph "Java performance tuning tutorial" at:
    http://h21007.www2.hp.com/dspp/tech/tech_TechDocumentDetailPage_IDX/1,1701,1607,00.html
    Regarding my question :
    Below is an extract of the log file of garbage collections. This extract has 2 consecutive garbage collections.
    (each begins with "<GC:").
    <GC: 1 387875.630047 554 1258496 1 161087488 0 161087488 20119552 0 20119552
    334758064 238778016 335544320
    46294096 46294096 46399488 5.319209 >
    <GC: 5 387926.615209 555 1258496 1 161087488 0 161087488 0 0 20119552
    240036512 242217264 335544320
    46317184 46317184 46399488 5.206192 >
    There are 2 "full garbage collections", one of reason "1" and one of reason "5".
    For the first one, "Old generation After" = 238778016.
    For the second, "Old generation Before" = 240036512.
    Thus, the "Old generation Before garbage collection" of the second is higher than the "Old generation After garbage collection" of the first. Why?
    I expected all objects to be allocated in the "Eden" space. And therefore I did not expect to s

    I agree, but my current HP support is not very good on JVM issues.
    Rob Woollen <[email protected]> wrote:
    You'd probably be better off asking this question to HP.
    -- Rob
    Martial wrote:
    The object of this mail is the Hewlett-Packard 1.3.1.01 Hotspot Java Virtual Machine release and its garbage collection mechanism.
    I am interested in the "-Xverbosegc" option for garbage collection monitoring.
    I have been through the online document:
    http://www.hp.com/products1/unix/java/infolibrary/prog_guide/java1_3/hotspot.html#-Xverbosegc
    I would like to find out more about the garbage collection mechanism and need further information to understand the result of the log file generated with "-Xverbosegc".
    For example, here is an extract of a garbage collection log file generated with the Hewlett-Packard Hotspot Java Virtual Machine, release 1.3.1.01.
    These are 2 consecutive rows of the file:
    <GC: 5 385565.750251 543 48 1 161087488 0 161087488 0 0 20119552 264184480 255179792
    335544320 46118384 46118384 46137344 5.514721 >
    <GC: 1 385876.530728 544 1258496 1 161087488 0 161087488 20119552 0 20119552 334969696
    255530640 335544320 46121664 46106304 46137344 6.768760 >
    We have 2 full garbage collections, one of Reason 5 and the next one of Reason 1.
    What happened between these 2 garbage collections, given that the "Old generation Before" of row 2 is higher than the "Old generation After" of row 1? I expected objects to be initially allocated in eden, so the old generation should not be modified between the end of one garbage collection and the start of the next.
    Could you please clarify this issue and/or give more information about garbage collection mechanisms with the Hewlett-Packard Hotspot Java Virtual Machine, release 1.3.1.01.

  • [scjp exam] garbage collection questions

    Can someone please post me some garbage collection questions + correct answers? I've taken the SCJP test again and didn't pass it again. A co-worker of mine didn't pass either.
    (We both got 0% on Garbage Collection every time.) And I really thought I had it correct the last time.
    So please, can someone mail/post me some garbage collection questions? Thanks!

    The garbage collector collects unreachable objects.
    You have no control over when the objects are
    collected, but there is a guarantee that all
    unreachable objects will be collected before an
    OutOfMemoryError is thrown.
    Exactly my thoughts...
    That's it. It really is just that simple. I scored 94%
    on the Programmer's Exam, with 100% in the garbage
    collection section.
    Hmm, I got 54% on the last exam I took. I really begin to doubt myself, because I find the questions on the exam so tricky. My last book was Exam Cram (Java), and I found it a very nice book to study for the exam. Garbage collection is just one question, so it's either 0% or 100%. So far it hasn't gone any higher than 0% with me.
    Here are some questions:
    1. When is the Object first available for garbage
    collection (when does it become unreachable)?
    Object o = new Object();
    Object p = o;
    o = null;
    p = null;p = null; -> available for gc.
    2. When is the Object first available for garbage
    collection?
    Object o = new Object();
    Vector v = new Vector();
    v.add(o);
    o = null;
    v = null;v = null; -> available for gc.
    3. Can the Vectors be garbage collected at the end of
    this code?
    Vector v = new Vector();
    Vector w = new Vector();
    v.add(w);
    w.add(v);
    v = null;
    w = null;yes
    Now i have a question:
    public int foo {
    Integer result = new Integer(10);
    result = null;
    return result;
    when is result available for gc?
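    The answer to question 3 can be checked empirically. Below is a hedged, self-contained sketch (the class name GcQuiz and the use of WeakReference are mine, not from the exam material): it tracks the two mutually-referencing Vectors through WeakReferences, and because Java uses tracing collectors rather than reference counting, the cycle does not keep them alive. System.gc() is only a hint, so the loop retries a few times.

```java
import java.lang.ref.WeakReference;
import java.util.Vector;

public class GcQuiz {
    public static void main(String[] args) throws InterruptedException {
        // Question 3: two Vectors referencing each other (a reference cycle).
        Vector v = new Vector();
        Vector w = new Vector();
        v.add(w);
        w.add(v);
        // Weak references observe the objects without keeping them reachable.
        WeakReference refV = new WeakReference(v);
        WeakReference refW = new WeakReference(w);
        v = null;
        w = null;
        // System.gc() is only a hint, so retry a few times if needed.
        for (int i = 0; i < 5 && (refV.get() != null || refW.get() != null); i++) {
            System.gc();
            Thread.sleep(50);
        }
        System.out.println("v collected: " + (refV.get() == null));
        System.out.println("w collected: " + (refW.get() == null));
    }
}
```

    On HotSpot this typically prints true for both, confirming that cyclic garbage is collected.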

  • Collecting the stats every 5 seconds

    Hello,
    I would like to collect data from the WebLogic server every 5 seconds. With WLST, I am able to get all the information I need. The problem is how to automate it to run every 5-10 seconds. If I am at the WLST Online prompt, can I create a script and make it run frequently? It would be great to get advice on how to go about it and the best way to do it.

    Hi Satya, thanks for the solution; that should really help. But I am having trouble implementing it. Let me give you a little detail. I have scripts running on my remote server which connect to our WebLogic servers and collect data.
    java weblogic.WLST monitor.py > output.dat
    where monitor.py:
    waitTime=300000
    THRESHOLD=100000000
    username='system'
    password='password'
    connect(username,password,'t3://prod.web.com:56159')
    runtime()
    serverNames = adminHome.getMBeansByType("ServerRuntime")
    print 'Server State TotalJVM FreeJVM UsedJVM TotalThreads IdleThreads QueueLength'
    for name in serverNames:
        cd("/ServerRuntimes/"+name.getName()+"/JVMRuntime/"+name.getName())
        totalJVM = cmo.getHeapSizeCurrent()
        freeJVM = cmo.getHeapFreeCurrent()
        usedJVM = (totalJVM - freeJVM)
        cd("/ServerRuntimes/"+name.getName()+"/ExecuteQueueRuntimes/")
        cd('weblogic.kernel.Default')
        TotalThreads = get('ExecuteThreadTotalCount')
        IdleThreads = get('ExecuteThreadCurrentIdleCount')
        QueueLength = get('PendingRequestCurrentCount')
        print name.getName(),' ',name.getState(),' ',totalJVM,' ',freeJVM,' ',usedJVM,' ',TotalThreads,' ',IdleThreads,' ',QueueLength
    I want the same results in vmstat format, which I believe the collectStats() function will achieve. Can you please give me a little more detail on how to make it work? Will the function print the stats within the WLST prompt or after it exits? Please advise, as I am not getting an idea of how to implement it.
    Thanks
    Ravi
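    On the scheduling question itself: independent of WLST, plain Java has had a periodic scheduler since JDK 1.3 in java.util.Timer. A minimal sketch (the class and task names are mine, and the printed line is a stand-in for the real MBean queries, not the actual WLST calls):

```java
import java.util.Timer;
import java.util.TimerTask;

public class PeriodicPoller {
    public static void main(String[] args) throws InterruptedException {
        Timer timer = new Timer(true); // daemon thread, dies with the JVM
        TimerTask collect = new TimerTask() {
            public void run() {
                // Stand-in for the real work (querying server MBeans, etc.).
                System.out.println("collecting stats at " + System.currentTimeMillis());
            }
        };
        // Run immediately, then every 5000 ms (5 seconds).
        timer.scheduleAtFixedRate(collect, 0, 5000);
        Thread.sleep(12000); // keep the demo alive for a few iterations
    }
}
```

    In a WLST (Jython) script, the equivalent would simply be a while loop around the collection code with a sleep between iterations.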

  • High cpu usage for garbage collection (uptime vs total gc time)

    Hi Team,
    We have a very high cpu usage issue in the production.
    When we restart the server, the CPU idle time is around 95%, and it comes down as the days go by. Today idle CPU is 30%, and it is only the 6th day after the server restart.
    Environment details:
    Jrockit version:
    Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.2_05-b04)
    BEA WebLogic JRockit(TM) 1.4.2_05 JVM R24.4.0-1 (build ari-38120-20041118-1131-linux-ia32, Native Threads, GC strategy: parallel)
    Gc Algorithm: JRockit Garbage Collection System currently running strategy: Single generational, parallel mark, parallel sweep
    Number Of Processors: 4
    Max Heap Size: 1073741824
    Total Garbage Collection Time: 21:43:56.5
    Uptime: 114:33:4.1
    Total Garbage Collection Count: 420872
    Total Number Of Threads: 198
    Number Of Daemon Threads: 191
    Can you guys please tell me what problem in the server could be causing the high CPU usage? (Note the stats above: about 21.7 hours of GC time in 114.5 hours of uptime, i.e. roughly 19% of all time spent in GC, and 420872 collections over that period averages about one collection per second.)
    One more thing I would like to know: why is the total number of threads 198 when we specified the Executor pool size as 25? I accept that WebLogic creates some threads for its own maintenance, but around 160 threads!!! Something is wrong, I guess.
    Santhosh.
    [email protected]

    Hi,
    I'm having a similar problem, but haven't been able to resolve it yet. Troubleshooting is made even harder by the fact that this is only happening on our production server, and I've been unable to reproduce it in the lab.
    I'll post whatever findings I have and hopefully we'll be able to find a solution with the help of BEA engineers.
    In my case, I have a stand-alone Tomcat server that runs fine for about 1-2 days, and then the JVM suddenly starts using more CPU, and as a result, the server load shoots up (normal CPU utilization is ~5% but eventually goes up to ~95%; load goes from 0.1 to 4+).
    What I have found so far is that this corresponds to increased GC activity.
    Let me list my environment specs before I proceed, though:
    CPU: Dual Xeon 3.06GHz
    RAM: 2GB
    OS: RHEL4.4 (2.6.9-42.0.2.ELsmp)
    JVM build 1.5.0_03-b07 (BEA JRockit(R) (build dra-45238-20050523-2008-linux-ia32, R25.2.0-28))
    Tomcat version 5.5.12
    JAVA_OPTS="-Xms768m -Xmx768m -XXtlasize16k -XXlargeobjectlimit16k -Xverbose:memory,cpuinfo -Xverboselog:/var/log/tomcat5/jvm.log -Xverbosetimestamp"
    Here are excerpts from my verbose log (I'm getting some HT warning, not sure if that's a problem):
    [Fri Oct 20 15:54:18 2006][22855][cpuinfo] Detected SMP with 2 CPUs that support HT.
    [Fri Oct 20 15:54:18 2006][22855][cpuinfo] Trying to determine if HT is enabled.
    [Fri Oct 20 15:54:18 2006][22855][cpuinfo] Trying to read from /dev/cpu/0/cpuid
    [Fri Oct 20 15:54:18 2006][22855][cpuinfo] Warning: Failed to read from /dev/cpu/0/cpuid
    [Fri Oct 20 15:54:18 2006][22855][cpuinfo] Trying to read from /dev/cpu/1/cpuid
    [Fri Oct 20 15:54:18 2006][22855][cpuinfo] Warning: Failed to read from /dev/cpu/1/cpuid
    [Fri Oct 20 15:54:18 2006][22855][cpuinfo] HT is: supported by the CPU, not enabled by the OS, enabled in JRockit.
    [Fri Oct 20 15:54:18 2006][22855][cpuinfo] Warning: HT enabled even though OS does not seem to support it.
    [Fri Oct 20 15:54:55 2006][22855][memory ] GC strategy: System optimized over throughput (initial strategy singleparpar)
    [Fri Oct 20 15:54:55 2006][22855][memory ] heap size: 786432K, maximal heap size: 786432K
    [Fri Oct 20 16:07:30 2006][22855][memory ] Changing GC strategy to generational, parallel mark and parallel sweep
    [Fri Oct 20 16:07:30 2006][22855][memory ] 791.642-791.874: GC 786432K->266892K (786432K), 232.000 ms
    [Fri Oct 20 16:08:02 2006][22855][memory ] 824.122: nursery GC 291998K->274164K (786432K), 175.873 ms
    [Fri Oct 20 16:09:51 2006][22855][memory ] 932.526: nursery GC 299321K->281775K (786432K), 110.879 ms
    [Fri Oct 20 16:10:24 2006][22855][memory ] 965.844: nursery GC 308151K->292222K (786432K), 174.609 ms
    [Fri Oct 20 16:11:54 2006][22855][memory ] 1056.368: nursery GC 314718K->300068K (786432K), 66.032 ms
    [Sat Oct 21 23:21:09 2006][22855][memory ] 113210.427: nursery GC 734274K->676137K (786432K), 188.985 ms
    [Sat Oct 21 23:30:41 2006][22855][memory ] 113783.140: nursery GC 766601K->708592K (786432K), 96.007 ms
    [Sat Oct 21 23:36:15 2006][22855][memory ] 114116.332-114116.576: GC 756832K->86835K (786432K), 243.333 ms
    [Sat Oct 21 23:48:20 2006][22855][memory ] 114841.653: nursery GC 182299K->122396K (786432K), 175.252 ms
    [Sat Oct 21 23:48:52 2006][22855][memory ] 114873.851: nursery GC 195060K->130483K (786432K), 142.122 ms
    [Sun Oct 22 00:01:31 2006][22855][memory ] 115632.706: nursery GC 224096K->166618K (786432K), 327.264 ms
    [Sun Oct 22 00:16:37 2006][22855][memory ] 116539.368: nursery GC 246564K->186328K (786432K), 173.888 ms
    [Sun Oct 22 00:26:21 2006][22855][memory ] 117122.577: nursery GC 279056K->221543K (786432K), 170.367 ms
    [Sun Oct 22 00:26:21 2006][22855][memory ] 117123.041: nursery GC 290439K->225833K (786432K), 69.170 ms
    [Sun Oct 22 00:29:10 2006][22855][memory ] 117291.795: nursery GC 298947K->238083K (786432K), 207.200 ms
    [Sun Oct 22 00:39:05 2006][22855][memory ] 117886.478: nursery GC 326956K->263441K (786432K), 87.009 ms
    [Sun Oct 22 00:55:22 2006][22855][memory ] 118863.947: nursery GC 357229K->298971K (786432K), 246.643 ms
    [Sun Oct 22 01:08:17 2006][22855][memory ] 119638.750: nursery GC 381744K->322332K (786432K), 147.996 ms
    [Sun Oct 22 01:11:22 2006][22855][memory ] 119824.249: nursery GC 398678K->336478K (786432K), 93.046 ms
    [Sun Oct 22 01:21:35 2006][22855][memory ] 120436.740: nursery GC 409150K->345186K (786432K), 81.304 ms
    [Sun Oct 22 01:21:38 2006][22855][memory ] 120439.582: nursery GC 409986K->345832K (786432K), 153.534 ms
    [Sun Oct 22 01:21:42 2006][22855][memory ] 120443.544: nursery GC 410632K->346473K (786432K), 121.371 ms
    [Sun Oct 22 01:21:44 2006][22855][memory ] 120445.508: nursery GC 411273K->347591K (786432K), 60.688 ms
    [Sun Oct 22 01:21:44 2006][22855][memory ] 120445.623: nursery GC 412391K->347785K (786432K), 68.935 ms
    [Sun Oct 22 01:21:45 2006][22855][memory ] 120446.576: nursery GC 412585K->348897K (786432K), 152.333 ms
    [Sun Oct 22 01:21:45 2006][22855][memory ] 120446.783: nursery GC 413697K->349080K (786432K), 70.456 ms
    [Sun Oct 22 01:34:16 2006][22855][memory ] 121197.612: nursery GC 437378K->383392K (786432K), 165.771 ms
    [Sun Oct 22 01:37:37 2006][22855][memory ] 121398.496: nursery GC 469709K->409076K (786432K), 78.257 ms
    [Sun Oct 22 01:37:37 2006][22855][memory ] 121398.730: nursery GC 502490K->437713K (786432K), 65.747 ms
    [Sun Oct 22 01:44:03 2006][22855][memory ] 121785.259: nursery GC 536605K->478156K (786432K), 132.293 ms
    [Sun Oct 22 01:44:04 2006][22855][memory ] 121785.603: nursery GC 568408K->503635K (786432K), 71.751 ms
    [Sun Oct 22 01:50:39 2006][22855][memory ] 122180.985: nursery GC 591332K->530811K (786432K), 131.831 ms
    [Sun Oct 22 02:13:52 2006][22855][memory ] 123573.719: nursery GC 655566K->595257K (786432K), 117.311 ms
    [Sun Oct 22 02:36:04 2006][22855][memory ] 124905.507: nursery GC 688896K->632129K (786432K), 346.990 ms
    [Sun Oct 22 02:50:24 2006][22855][memory ] 125765.715-125765.904: GC 786032K->143954K (786432K), 189.000 ms
    [Sun Oct 22 02:50:26 2006][22855][memory ] 125767.535-125767.761: GC 723232K->70948K (786432K), 225.000 ms
    vvvvv
    [Sun Oct 22 02:50:27 2006][22855][memory ] 125768.751-125768.817: GC 712032K->71390K (786432K), 64.919 ms
    [Sun Oct 22 02:50:28 2006][22855][memory ] 125769.516-125769.698: GC 711632K->61175K (786432K), 182.000 ms
    [Sun Oct 22 02:50:29 2006][22855][memory ] 125770.753-125770.880: GC 709632K->81558K (786432K), 126.000 ms
    [Sun Oct 22 02:50:30 2006][22855][memory ] 125771.699-125771.878: GC 708432K->61368K (786432K), 179.000 ms
    So, I'm running with the default GC strategy which lets the GC pick the most suitable approach (single space or generational). It seems to switch to generational almost immediately and runs well - most GC runs are in the nursery, and only once in a while it goes through the older space.
    Now, if you look at [Sun Oct 22 02:50:27 2006], that's when everything changes. GC starts running every second (later on it's running 3 times a second) doing huge sweeps. It never goes through the nursery again, although the strategy is still generational.
    It's all downhill from this point on, and it's a matter of hours (maybe a day) before we restart the server.
    I guess my only question is: What would cause such GC behavior?
    I would appreciate your ideas/comments!
    Thanks,
    Tenyo
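    One experiment that might isolate the problem (a hedged suggestion, assuming JRockit R25's documented -Xgc and -Xns options; it is not a confirmed fix): pin the collector to a single strategy with a fixed nursery, so the adaptive strategy behaviour seen in the log above is taken out of the equation.

```shell
# Assumption: the runaway single-space collections are tied to JRockit's
# adaptive strategy selection. Pinning the generational parallel collector
# and a fixed nursery size removes that variable:
JAVA_OPTS="-Xms768m -Xmx768m -Xgc:genpar -Xns64m -XXtlasize16k \
  -XXlargeobjectlimit16k -Xverbose:memory -Xverboselog:/var/log/tomcat5/jvm.log"
```

    If the problem disappears with a pinned strategy, that narrows it down to the adaptive heuristics rather than the application's allocation pattern.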

  • Cache query results in too much garbage collection activity

    Oracle Coherence Version 3.6.1.3 Enterprise Edition: Production mode
    JRE 6 Update 21
    Linux OS 64 bit
    The application uses a Customer object with the following structure:
    Customer(CustID, FirstName, LastName, CCNumber, OCNumber)
    Each property of Customer is an inner class with a getValue method returning a value. The getValue methods of CCNumber and OCNumber return a Long value. There are 150m instances of Customer in the cache. To hold this much data in cache we are running several nodes on 2 machines.
    The following code is used to create indexes on CCNumber and OCNumber:
         ValueExtractor[] valExt = new ValueExtractor[]{
              new ReflectionExtractor("getCCNumber"), new ReflectionExtractor("getValue")};
         ChainedExtractor chExt = new ChainedExtractor(valExt);
         Long value = new Long(0);
         Filter f = new NotEqualsFilter(chExt, value);
         ValueExtractor condExtractor = new ConditionalExtractor(f, chExt, true);
         cache.addIndex(condExtractor, false, null);
    The client code queries the cache with the following code:
         ValueExtractor[] valExt1 = new ValueExtractor[]{
              new ReflectionExtractor("getCCNumber"), new ReflectionExtractor("getValue")};
         ChainedExtractor chExt1 = new ChainedExtractor(valExt1);
         EqualsFilter filter1 = new EqualsFilter(chExt1, ccnumber);
         ValueExtractor[] valExt2 = new ValueExtractor[]{
              new ReflectionExtractor("getOCNumber"), new ReflectionExtractor("getValue")};
         ChainedExtractor chExt2 = new ChainedExtractor(valExt2);
         EqualsFilter filter2 = new EqualsFilter(chExt2, ocnumber);
         AnyFilter anyFilter = new AnyFilter(new Filter[]{filter1, filter2});
         cache.entrySet(anyFilter);
    The observation is that with 20 client threads the application performs well (avg response time = 200ms), but as the number of client threads increases the performance degrades disproportionately (queries return anywhere between 1000ms and 8000ms for 60 threads). I think this is because eden fills up very fast as the number of client threads goes up. The number of collections per second grows with the number of client threads: there are almost 2-3 ParNew collections every second with 60 client threads, versus only 1 collection per second with 20 client threads. Even a 100-200ms pause degrades the overall query performance.
    My question is: why is Coherence creating so many objects that fill up eden so fast? Is there anything I need to do in my code?

    Hi Coh,
    The reason for so much garbage is that you are using ReflectionExtractors in your filters; I assume you do not have any indexes on your caches either. This means that each time you execute a query, Coherence has to scan the cache for matches to the filter, like a full table scan in a DB. For each entry in the cache, Coherence has to deserialize that entry into a real object and then call the methods in the filters via reflection. Once the query is finished, all these deserialized objects are garbage that needs to be collected. For a big cache this can be a lot of garbage.
    You can change to POF extractors to avoid the deserialization step, which should reduce the garbage quite a bit, although not eliminate it. You could also use indexes, which should eliminate pretty much all of the garbage you are seeing during queries.
    JK

  • Help needed!! Novice with Garbage Collection problems.

    Hi Guys,
    Really hoping somebody can help me here. I am a relative novice when it comes to all things Java, but I am slowly trying to learn. I have come across an issue which I have identified, but I am just not sure what to do about it.
    Ok, in a nutshell the issue seems to revolve around the frequency of garbage collection. From the default-err.log file I am seeing (on average) an Allocation Failure occur every 2 secs. Here is a sample from the log with verbose:gc active:
    <AF[4986]: Allocation Failure. need 208480 bytes, 78 ms since last AF>
    <AF[4986]: managing allocation failure, action=2 (559165976/1342176248)>
    <GC: Mon Oct 11 11:51:12 2004
    <GC(4986): freed 4101528 bytes in 1559 ms, 41% free (563267504/1342176248)>
    <GC(4986): mark: 1301 ms, sweep: 258 ms, compact: 0 ms>
    <GC(4986): refs: soft 0 (age >= 32), weak 0, final 0, phantom 0>
    <AF[4986]: completed in 1563 ms>
    <AF[4987]: Allocation Failure. need 208536 bytes, 78 ms since last AF>
    <AF[4987]: managing allocation failure, action=2 (559138336/1342176248)>
    <GC: Mon Oct 11 11:51:14 2004
    <GC(4987): freed 4105128 bytes in 1563 ms, 41% free (563243464/1342176248)>
    <GC(4987): mark: 1293 ms, sweep: 270 ms, compact: 0 ms>
    <GC(4987): refs: soft 0 (age >= 32), weak 0, final 0, phantom 0>
    <AF[4987]: completed in 1563 ms>
    As you can see, allocation failures are occurring all the time, with 2 secs between events, and with each GC taking around 1.5 secs I am having massive problems with the responsiveness of the server. The javaw.exe process is pegged at 100% CPU the whole time, and it eventually grinds to a halt and the users get terrible response times.
    OK, the questions are:
    - Even to me (a novice) the above extract from the log doesn't look good. Am I right?
    - What would be causing this? (I know, how long is a piece of string, but I am hoping somebody can point me in the right direction so I can look some more.)
    - What can I do about it? Are there any parameters I can put into the java args to help me out?
    Currently I am running an -Xms of 128m and an -Xmx of 1024m with no other settings. There are at the moment about 200 users logged onto this server concurrently, and it seems to die a couple of hours into them all being logged on. I then have to kick everybody out and reboot to get it into a working state again.
    I am in some serious need of help from some gurus!! Any help would be invaluable, thanks heaps guys.
    Tim

    Hi again guys, thanks for all your replies.
    I have been working my butt off on this issue and I just can't seem to get anywhere... probably due to my complete lack of knowledge on this whole GC thing! :)
    One thing that I have noticed that seems to be very consistent is that as soon as I get an "action=2" in my default-err.log from an allocation failure, that's when the system goes nuts and tends not to recover.
    At all other times it is action=1 and the system seems to be running OK, but as soon as I get action=2, the time between GC events drops from seconds to milliseconds, and the bytes required just skyrocket. It keeps on this upward spiral until I just have to reboot the box.
    From all my reading, I have found that action=2 means "2 - The Garbage Collector has tried to allocate out of the wilderness, and failed."
    This is the only item in the logs that I can see is directly related to the server performance.
    Can anybody please explain to me (in layman's terms) what action=2 means, what may cause it, and what I should be looking at changing to fix it?
    The extract from the log files (in my first post in this thread) are still valid, as are the min/max memory settings.
    Any help at all would be invaluable.
    Thanks very much.
    Tim
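    The <AF...> log format above is characteristic of the IBM JDKs, so one hedged avenue (an assumption about the vendor, and the flag values are illustrative defaults, not a prescription) is the IBM heap-expansion tuning flags: setting -Xms equal to -Xmx avoids repeated heap growth, and -Xminf/-Xmaxf control how much free heap the collector tries to keep after each GC.

```shell
# Assumption: IBM JDK, given the <AF...>/action=2/wilderness log format.
# Fix the heap size and keep 30-60% of it free after each collection
# (0.3 and 0.6 are the IBM documented defaults, shown here explicitly):
java -Xms1024m -Xmx1024m -Xminf0.3 -Xmaxf0.6 ...
```

    The real fix may still be an application-level leak; these flags only change how much headroom the collector maintains.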

  • WLS not garbage collecting enough?

    WLS 5.1 SP5
    SunOS 5.6 (2.6)
    JDK 1.2.2_005a (with the JVMARGS directive set as per BEA's docs. The
    JVMARGS directive solved the SIGBUS 10 for us when requesting the
    AdminMain servlet)
    We tried some stress testing of our web app by running about 100 to 200
    virtual clients requesting the same URL (servlet that forwards to a
    JSP). These virtual clients are actually Java threads running
    simultaneously.
    After each test, we notice that, via WL console, the heap usage of WL
    increased. However, after leaving WL server by itself (no HTTP requests
    coming to it) for a long time (say 30 minutes to 1 hour), we noticed
    that the heap usage has not gone down, except for about 1%. As soon as
    we explicitly tell WLS to garbage collect via the WL console, then the
    heap usage goes down ... sometimes even as low as just 10% when we tell
    it to gc again just after a gc a few seconds earlier.
    Is this proper behaviour by WLS? ... or does this have something to do
    with the JVMARGS directive? Why doesn't it gc even after a long time?
    Thanks in advance,
    John Salvo
    Homepage: http://homepages.tig.com.au/~jmsalvo/

    There are a number of problems with memory usage and garbage collection. Going to WLS 5.1 will fix
    some of them. You should also consider having your application call System.runFinalization(); and
    System.gc() from time to time.
    Mike
    "Jim Zhou" <[email protected]> wrote:
    I see the same behavior in my stress test/pre-production run. I think it's
    normal behavior. The GC will kick in when the heap gets 100% full; you might
    see a pause in your WLS because all threads stop for the GC (you
    should see the CPU really busy when GC is running). JDK 1.2 should have a better GC
    algorithm than JDK 1.1. For people still using JDK 1.1.x, tuning the
    heap size for a tolerable GC pause is usually a pain; the tuning guide wants you to
    run multiple WLS instances in a box to stagger the effects of GC pauses. If
    you use JDK 1.2, you probably set your heap size comparable to your box's
    memory.
    In one of my stress tests, I set the heap size to 256m; I see on my console that the
    heap gets full every 30 seconds, and it takes 1 or 2 seconds to GC. But if
    your heap is 2G, it might take longer.
    WLS 4.51 SP11
    Solaris 2.6
    JDK 1.2.2_06.
    Regards,
    Jim Zhou.
    Jesus M. Salvo Jr. <[email protected]> wrote in message
    news:[email protected]...

  • Garbage Collection Pauses in a client application: Thrashing.

    Hi,
    we have put a financial trading application written in Java into production. We are seeing strange behaviour from the garbage collector: GC pauses every 4-5 hours which raise CPU time to 100% for 30 seconds. These pauses are non-deterministic. Memory consumption of the entire application is very low (about 30MB).
    This is the strange behaviour we have detected in the GC log file:
    5020.027: [GC 27847K->14659K(63936K), 0.0086360 secs]
    5020.036: [Inc GC 14659K->24546K(63936K), 0.0149416 secs]
    5020.107: [GC 27842K->14658K(63936K), 0.0086947 secs]
    5020.116: [Inc GC 14658K->24546K(63936K), 0.0094716 secs]
    5020.181: [GC 27842K->14658K(63936K), 0.0086846 secs]
    5020.190: [Inc GC 14658K->24546K(63936K), 0.0095778 secs]
    5020.255: [GC 27842K->14658K(63936K), 0.0102155 secs]
    5020.266: [Inc GC 14658K->24546K(63936K), 0.0084659 secs]
    5020.335: [GC 27842K->14658K(63936K), 0.0088606 secs]
    5020.344: [Inc GC 14658K->24554K(63936K), 0.0084514 secs]
    As you can see, in one second the GC and the Inc GC are called many times. WHY?
    We have read articles and tried all the various known settings, but we didn't find any solution, and at the moment we don't know which way to move.
    Thank you for your hint
    Piero

    Which settings have you tried? By the looks of things you've not set any of your JVM memory variables.
    Try bumping up your maximum heap space using -Xmx.
    If you are running on a nice server, try using -Xms256m -Xmx256m -Xmn64m; these should help for a start.
    Also, why are you using the incremental garbage collector when you have such a tiny application? As you can see, a full garbage collection is only taking 0.01 seconds.
    Also try reading http://java.sun.com/docs/hotspot/gc/

  • FULL GC happens every 3 seconds

    Our application runs on JBoss 4.2.1.GA on Windows 2003 with 1.5.0_11 64-bit Java:
    java version "1.5.0_11"
    Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_11-b03)
    Java HotSpot(TM) 64-Bit Server VM (build 1.5.0_11-b03, mixed mode)
    NOTE: Our application uses Hibernate and Spring, and a client application talks to our application over RMI.
    Some time back the performance of our application really dropped, and we saw that GC was happening very frequently, so we added parameters to control the frequent GC.
    This fixed the issue and everything was working fine for a while, but recently the performance dropped really badly again; this time we see a Full GC happening every 3 to 4 seconds.
    Here is a snapshot of the GC log
    4947.522: [Full GC 794153K->660654K(934144K), 3.2550211 secs]
    4951.129: [Full GC 795246K->662649K(934144K), 3.0934485 secs]
    4954.505: [Full GC 797241K->659436K(934144K), 4.0668057 secs]
    4958.838: [Full GC 794028K->659892K(934144K), 3.1585596 secs]
    4962.252: [Full GC 794484K->660532K(934144K), 3.2631397 secs]
    4965.800: [Full GC 795124K->660816K(934144K), 3.2850792 secs]
    4969.470: [Full GC 795408K->659146K(934144K), 3.8331614 secs]
    4973.547: [Full GC 793738K->659801K(934144K), 3.3380928 secs]
    4977.143: [Full GC 794393K->660286K(934144K), 3.7536836 secs]
    4981.454: [Full GC 794878K->660600K(934144K), 4.0316665 secs]
    4985.775: [Full GC 795192K->659256K(934144K), 4.3867282 secs]
    4990.466: [Full GC 793848K->659881K(934144K), 3.2915828 secs]
    4994.053: [Full GC 794473K->660415K(934144K), 3.3071425 secs]
    4997.596: [Full GC 795007K->661241K(934144K), 3.4741858 secs]
    We also see that the CPU utilization of the java process always spikes up to 100%.
    Please help me resolve this issue.

    Adding more seed data that gets loaded during server startup sounds to me exactly like the sort of thing that's going to have an impact on the application's memory profile! If the heap has more "permanent" data in it now, there is less space for your normal operational garbage, which means it needs to be collected more frequently.
    Also notice that there is a bigger overhead for a 64 bit system in the sense that "pointers" in the JVMs internal data structures (e.g. the heap) will all be twice as big (unless you use the JVM compressed pointer option) so if you haven't increased your heap size to compensate then that will also be a source of stress on the heap. Anecdotally, I would start with a heap 50% bigger for a 64 bit app.
    But really, to understand what's wrong, you're going to need to invest some of your own time and effort: attach one of the many heap analysis tools to the running system and see exactly how much memory is free in a stable state, where the objects that are being garbage collected are being created, etc.
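    As a very first step before attaching a profiler, you can log how much heap is actually free in a steady state from inside the JVM itself. A minimal sketch using the standard java.lang.Runtime API (the class name is illustrative):

```java
// Sketch: report current heap usage via the standard java.lang.Runtime API.
public class HeapCheck {

    /** Bytes currently used on the heap (total allocated minus free). */
    public static long usedBytes() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.println("max   heap: " + rt.maxMemory() + " bytes");
        System.out.println("total heap: " + rt.totalMemory() + " bytes");
        System.out.println("used  heap: " + usedBytes() + " bytes");
    }
}
```

    Calling this periodically (or just after a Full GC) gives a rough baseline before reaching for heavier tooling.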

  • High garbage collection times

    Our app is currently experiencing quite high garbage collection times. I've been looking for some causes for this and haven't had a ton of luck in finding anything. We have not run into an OutOfMemoryError.
    We have run with both an 800M heap size and a 2G heap size. With the 2G heap size, garbage collection takes ~40% of total time; with 800M it takes ~75% of total time. In both instances, OldGen memory is using between 60 and 75% of the total heap size.
    MarkSweep is running 1 time for every 11 scavenge runs (roughly). Each MarkSweep run takes around 30 seconds. Each Scavenge run takes around .1 seconds. This is for 2GB heap size.
    My question is, could it just be possible that the app needs more memory? It seems like it could be a combination of needing a larger heap size with a code review to get rid of object references whenever possible.
    If more details are needed to answer, let me know. Also, if I missed something obvious, let me know as well.

    Joe.Cavanagh wrote:
    Our app is currently experiencing quite high garbage collection times. I've been looking for some causes for this and haven't had a ton of luck in finding anything. We have not run into an OutOfMemoryError.
    We have run with both 800M heap size and 2G heap size....
    MarkSweep is running 1 time for every 11 scavenge runs (roughly)...
    >My question is, could it just be possible that the app needs more memory?
    Incredibly difficult to say without knowing a bit more about what you're doing, but 800Mb is a lot of memory.
    >It seems like it could be a combination of needing a larger heap size
    Hmmmm.
    >with a code review to get rid of object references whenever possible.
    Bingo (I suspect). What application could possibly need to keep references to 800Mb worth of objects? If that's truly the case, then some kind of paging system might be in order.
    >If more details are needed to answer, let me know. Also, if I missed something obvious, let me know as well.
    Uhm, yes. What are you doing?
    Winston
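    For the record, the "paging system" idea can start as simply as capping how many objects you keep strong references to, for example with the standard LinkedHashMap eviction hook. A sketch with illustrative names and capacity, not a full cache implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: a bounded LRU cache, so the heap never holds more than maxEntries
// strong references; evicted entries become eligible for garbage collection.
public class BoundedCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public BoundedCache(int maxEntries) {
        super(16, 0.75f, true); // accessOrder = true gives LRU behaviour
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries; // drop the least recently used entry
    }

    public static void main(String[] args) {
        BoundedCache<Integer, String> cache = new BoundedCache<>(10);
        for (int i = 0; i < 100; i++) {
            cache.put(i, "record-" + i); // hypothetical records
        }
        System.out.println("entries kept: " + cache.size()); // capped at 10
    }
}
```

    Anything evicted would be re-fetched from disk or the database on demand, trading some latency for a much smaller live set.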

  • Reducing full Garbage Collection frequency.

    I've been trying to improve the performance while inserting a large number of records into an embedded H2 database. Monitoring memory usage suggests that it's being used rather inefficiently. A lot of the objects created by H2 seem to find their way into "tenured" space before being freed. Full mark-and-sweep garbage collections are occurring every couple of seconds, despite the fact that only about 10% of the available heap is occupied.
    Any advice on tuning the garbage collector to improve throughput in this case?

    I think if surviving objects max out the to-space (part of the young generation) the remaining objects are copied into the tenured generation.
    You can get more info via:
    -XX:+PrintHeapAtGC -XX:+PrintGCDetails -XX:+PrintGC -Xloggc:<filename>
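    For example, the full launch command might look something like this (the classpath and main class are placeholders for your own):

```
java -XX:+PrintHeapAtGC -XX:+PrintGCDetails -XX:+PrintGC -Xloggc:gc.log -cp h2.jar:. MyBatchInsert
```

    The resulting gc.log will show per-generation occupancy before and after each collection, which should reveal whether objects are being promoted prematurely.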

  • Quiz for lesson 1 - reading sensor every 30 seconds

    I do not agree with the "scheduleAtFixedRate" not accepted as a correct answer.
    I work with sensors and DSPs every day and, at least for me, "every 30 seconds" means exactly every 30 seconds. The only acceptable choice here would be "scheduleAtFixedRate". If there is not enough time to process the sensor's readings, that problem should be addressed properly; and if there is no need to have a sensor reading exactly every 30 seconds, the question should say so.
    All comments are welcome.

    Well, perhaps the question is misleading, but....
    The difference between schedule and scheduleAtFixedRate is that schedule is designed for a fixed delay.  So if you want a delay between reads of 30 seconds, schedule is the better answer.
    The scheduleAtFixedRate is designed for a fixed-rate of execution - as the Javadoc says (emphasis is mine):
    "Schedules the specified task for repeated fixed-rate execution, beginning after the specified delay. Subsequent executions take place at approximately regular intervals, separated by the specified period.
    In fixed-rate execution, each execution is scheduled relative to the scheduled execution time of the initial execution. If an execution is delayed for any reason (such as garbage collection or other background activity), two or more executions will occur in rapid succession to "catch up." In the long run, the frequency of execution will be exactly the reciprocal of the specified period (assuming the system clock underlying Object.wait(long) is accurate).
    Fixed-rate execution is appropriate for recurring activities that are sensitive to absolute time, such as ringing a chime every hour on the hour, or running scheduled maintenance every day at a particular time. It is also appropriate for recurring activities where the total time to perform a fixed number of executions is important, such as a countdown timer that ticks once every second for ten seconds."
    So scheduleAtFixedRate is not the better choice in this case: if executions are ever delayed, the interval between successive events can shrink to 29 seconds, then 28, then 27, and so on, as the timer catches up.
    Tom
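    The difference is easy to see if you compute the scheduled start times under each policy. This is a simplified model of the bookkeeping described in the Timer javadoc, not the actual implementation; assume each run takes 35 s against a 30 s period:

```java
// Sketch: next-execution-time bookkeeping for fixed-delay vs fixed-rate
// scheduling, simplified from the java.util.Timer javadoc description.
public class SchedulingModel {

    /** Fixed delay: next run starts `period` after the previous run FINISHES. */
    public static long nextFixedDelay(long lastFinish, long period) {
        return lastFinish + period;
    }

    /** Fixed rate: next run is scheduled `period` after the previous run was
     *  SCHEDULED, regardless of how long it actually took. */
    public static long nextFixedRate(long lastScheduled, long period) {
        return lastScheduled + period;
    }

    public static void main(String[] args) {
        long period = 30, taskTime = 35; // seconds; the task overruns the period
        long delayStart = 0, rateStart = 0;
        for (int i = 0; i < 3; i++) {
            delayStart = nextFixedDelay(delayStart + taskTime, period);
            rateStart = nextFixedRate(rateStart, period);
        }
        System.out.println("fixed-delay 3rd start: t=" + delayStart); // t=195
        System.out.println("fixed-rate  3rd start: t=" + rateStart);  // t=90
    }
}
```

    With fixed delay the schedule drifts but the gap between runs never shrinks; with fixed rate the scheduled times stay on the 30 s grid, so an overrunning task is immediately followed by the next one.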

  • High Eden Java Memory Usage/Garbage Collection

    Hi,
    I am trying to make sure that my ColdFusion server is optimised as far as possible, and to find out what the normal limits are.
    Basically it looks like at times my servers can run slow but it is possible that this is caused by a very old bloated code base.
    Jrun can sometimes have very high CPU usage so I purchased Fusion Reactor to see what is going on under the hood.
    Here are my current Java settings (running v6u24):
    java.args=-server -Xmx4096m -Xms4096m -XX:MaxPermSize=256m -XX:PermSize=256m -Dsun.rmi.dgc.client.gcInterval=600000 -Dsun.rmi.dgc.server.gcInterval=600000 -Dsun.io.useCanonCaches=false -XX:+UseParallelGC -Xbatch ........
    With regards to memory, the only space that seems to be running a lot of garbage collection is the Eden space. It climbs to nearly 1.2GB in just under a minute, at which point GC kicks in and the usage drops to about 100MB.
    Survivor memory grows to about 80-100MB over the space of 10 minutes but drops to 0 after the scheduled full GC runs. Old Gen memory fluctuates between 225MB and 350MB with small steps (~50MB) up or down when full GC runs every 10 minutes.
    I initially had the heap set to 2GB in total, giving about 600MB to the Eden space. When I looked at the graphs from Fusion Reactor I could see that there was (minor) garbage collection about 2-3 times a minute, when the memory usage maxed out the entire 600MB, which seemed a high frequency to my untrained eye. I then upped the memory to 4GB in total (~1.2GB automatically given to the Eden space) to see the difference, and saw that GC happened 1-2 times per minute.
    Is it normal in Coldfusion that the Eden memory would grow so quickly and have garbage collection run so often? i.e do these graphs look normal?
    Also should I somehow redistribute the memory available to give the Eden memory more since it seems to be where all the action is?
    Any other advice for performance improvements would be much appreciated.
    Note: These graphs are not from a period where jrun had high CPU.
    Here are the graphs:
    PS Eden Space Graph
    PS Survivor Space Graph
    PS Old Gen Graph
    PS Perm Gen Graph
    Heap Memory Graph
    Heap/Non Heap Memory Graph
    CPU Graph
    Request Average Execution Time Graph
    Request Activity Graph
    Code Cache Graph

    Hi,
    >Is it normal in Coldfusion that the Eden memory would grow so quickly and have garbage collection run so often?
    Yes normal to garbage collect Eden often. That is a minor garbage collection.
    >Also should I somehow redistribute the memory available to give the Eden memory more since it seems to be where all the action is?
    Sometimes it is good to set Eden (Eden and its two Survivor Spaces combined make up the New or Young Generation part of the JVM heap) to a smaller size. I know what you're thinking: why make it smaller when I want to make it bigger? Give less a try (sometimes less is more, and bigger is not always better) and monitor the situation. I like to use the -Xmn switch; some sources say to use other methods. Perhaps you could try java.args=-server -Xmx4096m -Xms4096m -Xmn172m etc. I'd better mention: make a backup copy of jvm.config before applying changes. Having said that, you now know how to set the size bigger if you want.
    I think the JVM is perhaps making some poor decisions with sizing the heap. With Eden growing to 1GB and then being evacuated, not many objects are surviving and therefore not being promoted to the Old Generation. This ultimately means objects will need to be loaded into Eden again later rather than being referenced from the Old Generation part of the heap, which adds up to poor performance.
    >Any other advice for performance improvements would be much appreciated.
    You are using the Parallel garbage collector. Perhaps you could enable it to run multi-threaded, reducing the duration of the garbage collections, with the jvm args ...-XX:+UseParallelGC -XX:ParallelGCThreads=N etc., where N = CPU cores (e.g. quad core = 4).
    HTH, Carl.

  • How can I execute a command every 10 seconds in a specific time-frame

    Hello,
    I would like to create a script which in a specific time-frame collects some outputs and also pings every 10 seconds.
    To collect the outputs every minute from 22:00 to 22:10 I have the following:
    event manager applet snmp_output
    event timer cron cron-entry "0-10/1 22 * * *" maxrun 30
    action 010 cli command "enable"
    action 020 cli command "show clock"
    action 030 cli command "terminal exec prompt timestamp"
    action 040 cli command "show snmp stats oid | append bootdisk:show_snmp_stats_oid.txt "
    action 045 wait 5
    action 050 cli command "show snmp pending | append bootdisk:show_snmp_pending.txt "
    action 055 wait 5
    action 060 cli command "show snmp sessions | append bootdisk:show_snmp_sessions.txt "
    action 065 wait 5
    action 070 cli command "end"
    To confirm connectivity to the device doing the SNMP polls I would like to execute a ping every 10 seconds in the same timeframe.
    Cron seems only to support minutes. Is it possible to combine a watchdog timer + a cron timer?
    Can this ping function be incorporated in the SNMP output applet or will I have to write a new one?
    Will I need TCL here (I have no experience in TCL)?
    Best regards,
    Tim

    If you wanted the pings to run in parallel, you could have this applet configure another applet to do the pinging, then remove it on the last run.  This will require an amount of programmatic logic, though.  If you wanted to keep things a bit simpler, add another applet that runs at 22:00 that configures a watchdog pinging applet, then a third applet that runs at 22:10 that removes the pinging applet.
    When it comes to embedded quotes when you configure your nested pinging applet, you'll need to use $q to stand for the embedded quotes.  You'll also need to configure:
    event manager environment q "
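    For reference, the nested pinging applet might be configured along these lines (a sketch only; the applet names, IP address, and interval are placeholders, and note $q standing in for the embedded quotes):

```
event manager applet start_ping
 event timer cron cron-entry "0 22 * * *"
 action 010 cli command "enable"
 action 020 cli command "config t"
 action 030 cli command "event manager applet ping_watchdog"
 action 040 cli command "event timer watchdog time 10"
 action 050 cli command "action 010 cli command $q ping 10.1.1.1 repeat 1 $q"
 action 060 cli command "end"
!
event manager applet stop_ping
 event timer cron cron-entry "10 22 * * *"
 action 010 cli command "enable"
 action 020 cli command "config t"
 action 030 cli command "no event manager applet ping_watchdog"
 action 040 cli command "end"
```

    The first applet builds the watchdog pinger at 22:00 and the second removes it at 22:10, matching the simpler three-applet approach described above.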
