Prevent long GC pauses

Hello,
I'm using jdk1.6.
The problem is that sometimes I'm facing a long GC pause, especially during a "concurrent mode failure".
Is there any JVM option to set a max AppStopTime? And what would be the impact of setting such an option?
Here are the jvm settings I'm currently using:
-Xms4G -Xmx4G -XX:MaxNewSize=128M -XX:NewSize=128M -XX:MaxPermSize=256M -XX:PermSize=256M
-server -showversion -Xbatch -XX:+UseLargePages -XX:LargePageSizeInBytes=32M
-XX:+UseParNewGC -XX:ParallelGCThreads=4 -XX:ParallelGCToSpaceAllocBufferSize=192k -XX:SurvivorRatio=6 -XX:TargetSurvivorRatio=75
-XX:+UseConcMarkSweepGC -XX:CMSMarkStackSize=32M -XX:CMSMarkStackSizeMax=32M -XX:+CMSParallelRemarkEnabled -XX:-CMSClassUnloadingEnabled -XX:CMSInitiatingOccupancyFraction=80 -XX:+UseCMSInitiatingOccupancyOnly -XX:+DisableExplicitGC
As far as I know there is no other way to improve the GC settings. Do you see any other possibilities?
Thanks,
Christian

Depending on how long-lived the majority of the objects in your heap are, you can better adjust the sizes of the new and old generation spaces. If they are short-lived, make your new gen space larger. You could make it as large as half your total heap using "-XX:NewRatio=1". (NewRatio would replace MaxNewSize; see the sketch at the end of this reply.)
How do you know if objects are short-lived or long-lived? If you watch the action in your heap with VisualGC/VisualVM, and in the display you see everything get cleaned up in the eden and survivor spaces with each GC, then you probably have mostly short-lived objects.
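A quick way to watch this without a GUI is jstat, which ships with the JDK (the pid 12345 below is just an example):
jstat -gcutil 12345 1000
This samples the heap every 1000 ms and prints eden (E), survivor (S0/S1) and old generation (O) occupancy as percentages. If E and the survivors empty out on each young GC while O stays flat, your objects are mostly short-lived.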
You may also see that your survivor spaces are too small. If too small, objects get promoted to the old generation space too quickly, filling up your old gen space unnecessarily, and creating long GC pauses.
The best situation is one where eden and the survivor spaces are sized such that as many objects as possible get cleaned up on an ongoing basis, without moving to the old generation space as they age.
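As a rough sketch, the poster's command line could be adapted along these lines (the 4G heap and SurvivorRatio=6 are taken from their settings; NewRatio=1 replaces the fixed 128M NewSize/MaxNewSize and is illustrative, not a verified recommendation):
-Xms4G -Xmx4G -XX:NewRatio=1 -XX:SurvivorRatio=6 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC
With NewRatio=1 the young generation becomes roughly half of the 4G heap (about 2G instead of 128M), giving short-lived objects far more room to die in eden before being promoted.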

Similar Messages

  • Strange Long ParNewGC Pauses During Application Startup

    Recently we started seeing long ParNewGC pauses when starting up Kafka that were causing session timeouts:
    [2015-04-24 13:26:23,244] INFO 0 successfully elected as leader (kafka.server.ZookeeperLeaderElector)
    2.111: [GC (Allocation Failure) 2.111: [ParNew: 136320K->10236K(153344K), 0.0235777 secs] 648320K->522236K(2080128K), 0.0237092 secs] [Times: user=0.03 sys=0.01, real=0.02 secs]
    2.599: [GC (Allocation Failure) 2.599: [ParNew: 146556K->3201K(153344K), 9.1514626 secs] 658556K->519191K(2080128K), 9.1515757 secs] [Times: user=18.25 sys=0.01, real=9.15 secs]
    [2015-04-24 13:26:33,443] INFO New leader is 0 (kafka.server.ZookeeperLeaderElector$LeaderChangeListener)
    After much investigation I found that the trigger was the allocation of a 500M static object early in the startup code. It of course makes no sense that a single large static object in old memory would impact ParNew collections, but it does seem to. I have created a bug report, but it is still under investigation.
    I have reproduced the problem with a simple application on several Linux platforms including an EC2 instance and the following JREs:
    OpenJDK: 6, 7, and 8
    Oracle: 7 and 8
    Oracle 6 does not seem to have an issue.  All the ParNewGC times are small.
    Here is the simple program that demonstrates the issue:
    import java.util.ArrayList;
    public class LongParNewPause {
        static byte[] bigStaticObject;
        public static void main(String[] args) throws Exception {
            // defaults: a 500M static object, 100-byte allocations, keep 1 in 10
            int bigObjSize    = args.length > 0 ? Integer.parseInt(args[0]) : 524288000;
            int littleObjSize = args.length > 1 ? Integer.parseInt(args[1]) : 100;
            int saveFraction  = args.length > 2 ? Integer.parseInt(args[2]) : 10;
            bigStaticObject = new byte[bigObjSize];
            ArrayList<byte[]> holder = new ArrayList<byte[]>();
            int i = 0;
            while (true) {
                byte[] local = new byte[littleObjSize];
                if (i++ % saveFraction == 0) {
                    holder.add(local);  // keep every saveFraction-th small object alive
                }
            }
        }
    }
    I run it with the following options:
    -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Xmx2G -Xms2G
    Note that I have not seen the issue with 1G heaps.  4G heaps exhibit the issue (as do heaps as small as 1.2G)
    Here is the output:
    0.321: [GC (Allocation Failure) 0.321: [ParNew: 272640K->27329K(306688K), 0.0140537 secs] 784640K->539329K(2063104K), 0.0141584 secs] [Times: user=0.05 sys=0.02, real=0.02 secs]
    0.368: [GC (Allocation Failure) 0.368: [ParNew: 299969K->34048K(306688K), 0.7655383 secs] 811969K->572321K(2063104K), 0.7656172 secs] [Times: user=2.89 sys=0.02, real=0.77 secs]
    1.165: [GC (Allocation Failure) 1.165: [ParNew: 306688K->34048K(306688K), 13.8395969 secs] 844961K->599389K(2063104K), 13.8396650 secs] [Times: user=54.38 sys=0.05, real=13.84 secs]
    15.036: [GC (Allocation Failure) 15.036: [ParNew: 306688K->34048K(306688K), 0.0287254 secs] 872029K->628028K(2063104K), 0.0287876 secs] [Times: user=0.08 sys=0.01, real=0.03 secs]
    15.096: [GC (Allocation Failure) 15.096: [ParNew: 306688K->34048K(306688K), 0.0340727 secs] 900668K->657717K(2063104K), 0.0341386 secs] [Times: user=0.09 sys=0.00, real=0.03 secs]
    Even stranger is the fact that the problem seems to be limited to objects in the range of about 480M to 512M.  Specifically:
    [503316465,536870384]
    Values outside this range appear to be OK.  Anyone have any thoughts?  Can you reproduce the issue on your machine?
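    (For reference, those bounds line up almost exactly with the 480M and 512M marks: 480 * 2^20 = 503316480 and 512 * 2^20 = 536870912, so the reported range begins a few bytes under 480M and ends a few hundred bytes under 512M.)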

    I have started a discussion on this issue on the hotspot-gc-dev list:
    Strange Long ParNew GC Pauses (Sample Code Included)
    One of the engineers on that list was able to reproduce the issue and there is some discussion there about what might be going on. I am a GC novice, but am of the opinion that there is a bug to be found in the ParNew GC code introduced in Java 7.
    Here is a more frightening example. The ParNew GCs keep getting longer and longer - the times never stabilized the way they did in the previous example I sent. I killed the process once the ParNew GC times reached almost 1 minute each.
    Bad Case - 500M Static Object:
    java -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Xmx6G -Xms6G LongParNewPause $((500*1024*1024)) 100 100
    0.309: [GC0.309: [ParNew: 272640K->3028K(306688K), 0.0287780 secs] 784640K->515028K(6257408K), 0.0288700 secs] [Times: user=0.08 sys=0.01, real=0.03 secs]
    0.372: [GC0.372: [ParNew: 275668K->7062K(306688K), 0.0228070 secs] 787668K->519062K(6257408K), 0.0228580 secs] [Times: user=0.07 sys=0.00, real=0.03 secs]
    0.430: [GC0.430: [ParNew: 279702K->11314K(306688K), 0.0327930 secs] 791702K->523314K(6257408K), 0.0328510 secs] [Times: user=0.08 sys=0.01, real=0.03 secs]
    0.497: [GC0.497: [ParNew: 283954K->15383K(306688K), 0.0336020 secs] 795954K->527383K(6257408K), 0.0336550 secs] [Times: user=0.08 sys=0.00, real=0.03 secs]
    0.565: [GC0.565: [ParNew: 288023K->21006K(306688K), 0.0282110 secs] 800023K->533006K(6257408K), 0.0282740 secs] [Times: user=0.08 sys=0.00, real=0.03 secs]
    0.627: [GC0.627: [ParNew: 293646K->26805K(306688K), 0.0265270 secs] 805646K->538805K(6257408K), 0.0266220 secs] [Times: user=0.07 sys=0.01, real=0.03 secs]
    0.688: [GC0.688: [ParNew: 299445K->20215K(306688K), 1.3657150 secs] 811445K->535105K(6257408K), 1.3657830 secs] [Times: user=3.97 sys=0.01, real=1.36 secs]
    2.087: [GC2.087: [ParNew: 292855K->17914K(306688K), 6.6188870 secs] 807745K->535501K(6257408K), 6.6189490 secs] [Times: user=19.71 sys=0.03, real=6.61 secs]
    8.741: [GC8.741: [ParNew: 290554K->17433K(306688K), 14.2495190 secs] 808141K->537744K(6257408K), 14.2495830 secs] [Times: user=42.34 sys=0.10, real=14.25 secs]
    23.025: [GC23.025: [ParNew: 290073K->17315K(306688K), 21.1579920 secs] 810384K->540348K(6257408K), 21.1580510 secs] [Times: user=70.10 sys=0.08, real=21.16 secs]
    44.216: [GC44.216: [ParNew: 289955K->17758K(306688K), 27.6932380 secs] 812988K->543511K(6257408K), 27.6933060 secs] [Times: user=103.91 sys=0.16, real=27.69 secs]
    71.941: [GC71.941: [ParNew: 290398K->17745K(306688K), 35.1077720 secs] 816151K->546225K(6257408K), 35.1078600 secs] [Times: user=130.86 sys=0.10, real=35.11 secs]
    107.081: [GC107.081: [ParNew: 290385K->21826K(306688K), 41.4425020 secs] 818865K->553022K(6257408K), 41.4425720 secs] [Times: user=158.25 sys=0.31, real=41.44 secs]
    148.555: [GC148.555: [ParNew: 294466K->21834K(306688K), 45.9826660 secs] 825662K->555757K(6257408K), 45.9827260 secs] [Times: user=180.91 sys=0.14, real=45.98 secs]
    194.570: [GC194.570: [ParNew: 294474K->21836K(306688K), 51.5779770 secs] 828397K->558485K(6257408K), 51.5780450 secs] [Times: user=204.05 sys=0.20, real=51.58 secs]
    246.180: [GC246.180: [ParNew^C: 294476K->18454K(306688K), 58.9307800 secs] 831125K->557829K(6257408K), 58.9308660 secs] [Times: user=232.31 sys=0.23, real=58.93 secs]
    Heap
      par new generation   total 306688K, used 40308K [0x000000067ae00000, 0x000000068fac0000, 0x000000068fac0000)
       eden space 272640K,   8% used [0x000000067ae00000, 0x000000067c357980, 0x000000068b840000)
       from space 34048K,  54% used [0x000000068b840000, 0x000000068ca458f8, 0x000000068d980000)
       to   space 34048K,   0% used [0x000000068d980000, 0x000000068d980000, 0x000000068fac0000)
      concurrent mark-sweep generation total 5950720K, used 539375K [0x000000068fac0000, 0x00000007fae00000, 0x00000007fae00000)
      concurrent-mark-sweep perm gen total 21248K, used 2435K [0x00000007fae00000, 0x00000007fc2c0000, 0x0000000800000000)
    Good Case - 479M Static Object:
    java -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Xmx6G -Xms6G LongParNewPause $((479*1024*1024)) 100 100
    0.298: [GC0.298: [ParNew: 272640K->3036K(306688K), 0.0152390 secs] 763136K->493532K(6257408K), 0.0153450 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
    0.346: [GC0.346: [ParNew: 275676K->7769K(306688K), 0.0193840 secs] 766172K->498265K(6257408K), 0.0194570 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
    0.398: [GC0.398: [ParNew: 280409K->11314K(306688K), 0.0203460 secs] 770905K->501810K(6257408K), 0.0204080 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
    0.450: [GC0.450: [ParNew: 283954K->17306K(306688K), 0.0222390 secs] 774450K->507802K(6257408K), 0.0223070 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
    0.504: [GC0.504: [ParNew: 289946K->18380K(306688K), 0.0169000 secs] 780442K->508876K(6257408K), 0.0169630 secs] [Times: user=0.07 sys=0.01, real=0.02 secs]
    0.552: [GC0.552: [ParNew: 291020K->26805K(306688K), 0.0203990 secs] 781516K->517301K(6257408K), 0.0204620 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
    0.604: [GC0.604: [ParNew: 299445K->21153K(306688K), 0.0230980 secs] 789941K->514539K(6257408K), 0.0231610 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
    0.659: [GC0.659: [ParNew: 293793K->29415K(306688K), 0.0170240 secs] 787179K->525498K(6257408K), 0.0170970 secs] [Times: user=0.07 sys=0.01, real=0.02 secs]
    0.708: [GC0.708: [ParNew: 302055K->23874K(306688K), 0.0202970 secs] 798138K->522681K(6257408K), 0.0203600 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
    0.759: [GC0.760: [ParNew: 296514K->26842K(306688K), 0.0238600 secs] 795321K->528371K(6257408K), 0.0239390 secs] [Times: user=0.07 sys=0.00, real=0.03 secs]
    0.815: [GC0.815: [ParNew: 299482K->25343K(306688K), 0.0237580 secs] 801011K->529592K(6257408K), 0.0238030 secs] [Times: user=0.06 sys=0.01, real=0.02 secs]
    0.870: [GC0.870: [ParNew: 297983K->25767K(306688K), 0.0195800 secs] 802232K->532743K(6257408K), 0.0196290 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
    0.921: [GC0.921: [ParNew: 298407K->21795K(306688K), 0.0196310 secs] 805383K->531488K(6257408K), 0.0196960 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
    0.972: [GC0.972: [ParNew: 294435K->25910K(306688K), 0.0242780 secs] 804128K->538329K(6257408K), 0.0243440 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
    1.028: [GC1.028: [ParNew: 298550K->21834K(306688K), 0.0235000 secs] 810969K->536979K(6257408K), 0.0235600 secs] [Times: user=0.06 sys=0.00, real=0.03 secs]
    1.083: [GC1.083: [ParNew: 294474K->26625K(306688K), 0.0188330 secs] 809619K->544497K(6257408K), 0.0188950 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
    1.133: [GC1.133: [ParNew: 299265K->26602K(306688K), 0.0210780 secs] 817137K->547186K(6257408K), 0.0211380 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
    1.185: [GC1.185: [ParNew: 299242K->26612K(306688K), 0.0236720 secs] 819826K->549922K(6257408K), 0.0237230 secs] [Times: user=0.07 sys=0.00, real=0.03 secs]
    1.240: [GC1.241: [ParNew: 299252K->26615K(306688K), 0.0188560 secs] 822562K->552651K(6257408K), 0.0189150 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
    1.291: [GC1.291: [ParNew: 299255K->26615K(306688K), 0.0195090 secs] 825291K->555378K(6257408K), 0.0195870 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
    1.342: [GC1.342: [ParNew: 299255K->22531K(306688K), 0.0229010 secs] 828018K->554021K(6257408K), 0.0229610 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
    1.396: [GC1.396: [ParNew: 295171K->24505K(306688K), 0.0265920 secs] 826661K->560810K(6257408K), 0.0266360 secs] [Times: user=0.07 sys=0.00, real=0.03 secs]
    1.453: [GC1.453: [ParNew: 297145K->24529K(306688K), 0.0296490 secs] 833450K->563560K(6257408K), 0.0297070 secs] [Times: user=0.09 sys=0.00, real=0.03 secs]
    1.514: [GC1.514: [ParNew: 297169K->27700K(306688K), 0.0259820 secs] 836200K->569458K(6257408K), 0.0260310 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
    1.571: [GC1.572: [ParNew: 300340K->27666K(306688K), 0.0199210 secs] 842098K->572150K(6257408K), 0.0199650 secs] [Times: user=0.07 sys=0.01, real=0.02 secs]
    1.623: [GC1.623: [ParNew: 300306K->27658K(306688K), 0.0237020 secs] 844790K->574868K(6257408K), 0.0237630 secs] [Times: user=0.08 sys=0.00, real=0.02 secs]
    1.678: [GC1.678: [ParNew: 300298K->31737K(306688K), 0.0237820 secs] 847508K->581674K(6257408K), 0.0238530 secs] [Times: user=0.08 sys=0.00, real=0.03 secs]
    1.733: [GC1.733: [ParNew: 304377K->21022K(306688K), 0.0265400 secs] 854314K->573685K(6257408K), 0.0265980 secs] [Times: user=0.08 sys=0.00, real=0.02 secs]
    1.791: [GC1.791: [ParNew: 293662K->25359K(306688K), 0.0249520 secs] 846325K->580748K(6257408K), 0.0250050 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
    1.847: [GC1.847: [ParNew: 297999K->19930K(306688K), 0.0195120 secs] 853388K->581179K(6257408K), 0.0195650 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
    1.898: [GC1.898: [ParNew: 292570K->20318K(306688K), 0.0233960 secs] 853819K->584294K(6257408K), 0.0234650 secs] [Times: user=0.07 sys=0.00, real=0.03 secs]
    1.953: [GC1.953: [ParNew: 292958K->20415K(306688K), 0.0233530 secs] 856934K->587117K(6257408K), 0.0234130 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
    2.007: [GC2.007: [ParNew: 293055K->20439K(306688K), 0.0301410 secs] 859757K->589868K(6257408K), 0.0302070 secs] [Times: user=0.09 sys=0.00, real=0.03 secs]
    2.068: [GC2.068: [ParNew: 293079K->20445K(306688K), 0.0289190 secs] 862508K->592600K(6257408K), 0.0289690 secs] [Times: user=0.09 sys=0.00, real=0.03 secs]
    ^C2.129: [GC2.129: [ParNew: 293085K->29284K(306688K), 0.0218880 secs] 865240K->604166K(6257408K), 0.0219350 secs] [Times: user=0.09 sys=0.00, real=0.02 secs]
    Heap
      par new generation   total 306688K, used 40135K [0x000000067ae00000, 0x000000068fac0000, 0x000000068fac0000)
       eden space 272640K,   3% used [0x000000067ae00000, 0x000000067b898a78, 0x000000068b840000)
       from space 34048K,  86% used [0x000000068d980000, 0x000000068f619320, 0x000000068fac0000)
       to   space 34048K,   0% used [0x000000068b840000, 0x000000068b840000, 0x000000068d980000)
      concurrent mark-sweep generation total 5950720K, used 574881K [0x000000068fac0000, 0x00000007fae00000, 0x00000007fae00000)
      concurrent-mark-sweep perm gen total 21248K, used 2435K [0x00000007fae00000, 0x00000007fc2c0000, 0x0000000800000000)

  • Sync or Sleep – dozing Mac prevents long Apple TV syncs

    *Should iTunes prevent the host Mac going into sleep mode when syncing?*
    Lengthy syncs between iTunes and my Apple TV are interrupted by my Mac going to sleep. When I wake the laptop, unsurprisingly the sync has stopped.
    Am I right in thinking that iTunes usually keeps the computer awake as long as the sync is in progress? If so, what could have gone wrong? As an interim measure I have set the laptop to remain awake when connected to power, but this is not ideal.
    If it helps the great minds of the forum, here's my setup:
    * MacBook Pro 15" unibody connected via WIFI
    * Airport Extreme N router
    * USB HD shared via Air Disk
    * TV connected to Airport via ethernet
    All software is fully updated across all devices.
    Help!

    Wardotron wrote:
    I was sure that iTunes prevented the laptop from going to sleep whilst a sync was in progress, but maybe not.
    that would be an itunes specific question, and not an appletv one.
    whether itunes doing something qualifies as the system doing something is another question (i would assume the same question would come up when syncing an ipod or iphone).
    maybe asking that specific question in the itunes forum would give some answers.

  • Long GC pauses due to 'wb processing'

    Hi all,
    I'm in the process of evaluating/optimizing JRockit for a standalone application we're developing with a really large heap (around 40GB). After reading about EVERY guide and forum thread there is about GC tuning, I think I have a pretty good understanding of what's going on in the JVM. But there's still one thing that's killing me, and I can't seem to find a single mention of it anywhere: 'wb processing', which happens during old collections while all threads are stopped, even with the concurrent garbage collector. E.g.
    [INFO ][gcpause] (pause includes wb processing: 4142.326 ms, compaction: 6.892 ms (external), update ref: 0.002 ms)
    Most of the time the pauses are fine, maybe 100-200ms, which is totally OK, but there are occasional spikes that take several seconds (I've seen up to 15s), and since I'm trying to optimize for pause time that's really, really bad.
    Now before I start going into details of our application and start doing JRA recordings or sending you logs and stuff: could you tell me what exactly is happening during that phase? Is it a common problem or normal behaviour? And can it be avoided somehow?
    Any help is appreciated.
    Regards,
    Matthias

    When it comes to the stopping of Java threads, it is usually one of two things that is the problem:
    1) Roll forwarding takes too long. This is at least easy to debug. Run with -Xverbose:thread=debug. You might get debug info saying:
    Thread X (NAME) stopped roll forwarding (5 times, rfLimit: 100), will sleep and try again.
    This is not a problem. However, if you see the info-level output saying
    Thread X (NAME) stopped roll forwarding (100 times, rfLimit: 100), will do unlimited roll forwarding
    Then you probably have a problem. What happens is that if we roll forward too long without finding a safe place to stop a Java thread, we release the thread, let it run for a short while and then try to stop it again. We do this a maximum of 100 times, then we do unlimited roll forwarding. If this happens, we can roll forward for a very long time.
    The solution for this is to run with the flag "-XXrollforwardretrylimit:-1". This allows us to bail out of a roll forward an unlimited number of times without falling back to unlimited roll forwarding (see the sketch below).
    2) The other common problem with stopping Java threads is that the application is blocking signals. JRockit uses signals to stop threads, but some operations block signals at the OS level. The most common of these is reading from and writing to an NFS share. This is a Linux limitation.
    To debug this, I would suggest taking a JRA recording with latency data (see http://download.oracle.com/docs/cd/E13188_01/jrockit/geninfo/diagnos/intromiscon.html#wp1077578). This will show file I/O events, and you can search for long latencies there.
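    Putting point 1 together, a minimal sketch of the two launch lines involved (YourApp and any remaining flags are placeholders, not from the original post):
    java -Xverbose:thread=debug YourApp        (diagnose: watch for the roll-forwarding messages above)
    java -XXrollforwardretrylimit:-1 YourApp   (mitigate: bail out instead of unlimited roll forwarding)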

  • Long GC pauses; full gc eventually takes 30 secs. (from Developer Forums)

    from http://forum.java.sun.com/thread.jspa?threadID=5145522
    jre 1.5.0_06,
    -Dcom.sun.management.jmxremote -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution -XX:NewRatio=3 -XX:MaxTenuringThreshold=3 -XX:NewSize=90M -XX:MaxNewSize=90M -Xmx800M
    windows 2003 server
    normally, full gc's occur with < 1 second durations, but eventually, the duration peaks at 30 seconds, then 7 seconds, then back to normal
    the heap was at about 290MB (far from full).
    setting maxgcpausemillis has no effect (something extraordinary happens?)
    there is no obvious pattern in the long pauses (not due to perm resizing or so)
    does anyone have experience with sudden long pauses?
    we could think of heap being swapped - but we don't know how to monitor this (and nobody was starting memory consuming applications at the time)
    does anyone know how we best monitor swapping on a windows?
    we are not able to use CMS immediately without testing for a period (had crashes due to http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6386633)
    here are samples of long pauses
    here are some long pauses (days apart - the gc log is embedded in our own loggings)
    061221 102753 0750 EV coreservice.taskoutput.container 75.537: [GC [PSYoungGen: 70133K->19827K(53632K)] 173056K->138149K(172736K), 11.5205585 secs] ##[
    070301 122542 0812 EV coreservice.taskoutput.container 6146.116: [Full GC [PSYoungGen: 768K->0K(90944K)] [PSOldGen: 286976K->213826K(259328K)] 287744K->213826K(350272K) [PSPermGen: 19432K->19432K(23040K)], 30.1171480 secs] ##[
    070305 163701 0140 EV coreservice.taskoutput.container [PSYoungGen: 1256K->0K(90176K)] [PSOldGen: 239309K->197132K(221632K)] 240565K->197132K(311808K) [PSPermGen: 19260K->19039K(23808K)], 20.3872070 secs] ##[
    tak!
    /aksel

    does anyone know how we best monitor swapping on a windows?
    I'm not sure if this is the best, but you can use ctrl-shift-escape, click on the Processes tab, then go to the menu View->Select Columns... and add the "Page Fault Delta" column. I would also suggest you add the VM Memory Size column too.

  • Extremely long GC pauses

    Hi,
    We're running the following cluster on a single machine:
    - 2 x extend nodes running at 1Gb
         - 6 x storage nodes running at 2Gb
         - Coherence 3.6.1
         - Java 6u16
    After a period of about 20 hours use we see massive GC pauses of between 246 and 1440 seconds in the storage enabled nodes with messages including "concurrent mode failure" and "promotion failed".
    Is anyone able to advise which of the many GC parameters we should be using for a simple cluster such as ours?
    Thanks
    Richard

    Here is an extract of the GC log from the extend node:
    2012-01-18 20:19:16.136/23277.871 Oracle Coherence GE 3.6.1.0 <D5> (thread=Cluster, member=1): Service guardian is 13187ms late, indicating that this JVM may be running slowly or experienced a long GC
    2012-01-18T20:19:16.417+0100: [GC Before GC:
    Statistics for BinaryTreeDictionary:
    Total Free Space: 39786722
    Max   Chunk Size: 5166196
    Number of Blocks: 1323
    Av.  Block  Size: 30073
    Tree      Height: 29
    Before GC:
    Statistics for BinaryTreeDictionary:
    Total Free Space: 49152
    Max   Chunk Size: 49152
    Number of Blocks: 1
    Av.  Block  Size: 49152
    Tree      Height: 1
    [ParNew
    Desired survivor size 6684672 bytes, new threshold 1 (max 4)
    - age   1:   13317624 bytes,   13317624 total
    : 118016K->13056K(118016K), 29.7267156 secs] 603664K->577106K(1035520K)After GC:
    Statistics for BinaryTreeDictionary:
    Total Free Space: 30371405
    Max Chunk Size: 5166196
    Number of Blocks: 768
    Av. Block Size: 39546
    Tree Height: 28
    After GC:
    Statistics for BinaryTreeDictionary:
    Total Free Space: 49152
    Max Chunk Size: 49152
    Number of Blocks: 1
    Av. Block Size: 49152
    Tree Height: 1
    , 29.7272082 secs] [Times: user=0.84 sys=0.34, real=29.73 secs]
    2012-01-18 20:19:46.151/23307.886 Oracle Coherence GE 3.6.1.0 <D5> (thread=Cluster, member=1): Service guardian is 25015ms late, indicating that this JVM may be running slowly or experienced a long GC
    2012-01-18T20:19:46.526+0100: [GC Before GC:
    Statistics for BinaryTreeDictionary:
    Total Free Space: 30371405
    Max   Chunk Size: 5166196
    Number of Blocks: 768
    Av.  Block  Size: 39546
    Tree      Height: 28
    Before GC:
    Statistics for BinaryTreeDictionary:
    Total Free Space: 49152
    Max   Chunk Size: 49152
    Number of Blocks: 1
    Av.  Block  Size: 49152
    Tree      Height: 1
    [ParNew
    Desired survivor size 6684672 bytes, new threshold 1 (max 4)
    - age   1:   13336344 bytes,   13336344 total
    : 118016K->13056K(118016K), 28.6324107 secs] 682066K->651460K(1035520K)After GC:
    Statistics for BinaryTreeDictionary:
    Total Free Space: 21568859
    Max Chunk Size: 5166196
    Number of Blocks: 383
    Av. Block Size: 56315
    Tree Height: 20
    After GC:
    Statistics for BinaryTreeDictionary:
    Total Free Space: 49152
    Max Chunk Size: 49152
    Number of Blocks: 1
    Av. Block Size: 49152
    Tree Height: 1
    , 28.6328374 secs] [Times: user=6.59 sys=0.27, real=28.62 secs]
    2012-01-18 20:20:15.149/23336.884 Oracle Coherence GE 3.6.1.0 <D5> (thread=Cluster, member=1): Service guardian is 23998ms late, indicating that this JVM may be running slowly or experienced a long GC

  • Long GC pause - grey object rescan

    We observe occasional GC pauses on our app server which seem to be caused by a "grey object rescan". In the past they ranged from 20-30s, but yesterday we had an 84s pause. That's a bit much...
    [GC 1593754.769: [ParNew: 450212K->46400K(471872K), 0.1076340 secs] 1984347K->1585395K(3019584K), 0.1079070 secs]
    1593757.147: [GC[YG occupancy: 339101 K (471872 K)]1593757.147: [Rescan (non-parallel) 1593757.147: [grey object rescan, 84.6104290 secs]1593841.758: [root rescan, 0.3187110 secs], 84.9293110 secs]1593842.076: [weak refs processing, 0.0100700 secs]1593842.087: [class unloading, 0.1208600 secs]1593842.207: [scrub symbol & string tables, 0.0189270 secs] [1 CMS-remark: 1538995K(2547712K)] 1878096K(3019584K), 85.1028700 secs]
    My question is: what exactly causes this? Any way to avoid it?
    I've found a couple of bugs that might be related: 6367204, 6298694. Although I don't think we have any huge arrays.
    Our GC settings are as follows:
    /usr/java/jdk1.5.0_06/bin/java -server -Xms3000m -Xmx3000m -XX:LargePageSizeInBytes=2m -XX:+UseCMSCompactAtFullCollection -XX:CMSFullGCsBeforeCompaction=0 -XX:+CMSCompactWhenClearAllSoftRefs -XX:SoftRefLRUPolicyMSPerMB=200 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:-CMSParallelRemarkEnabled -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled -XX:NewSize=512m -XX:MaxNewSize=512m -XX:SurvivorRatio=8 -XX:PermSize=156m -XX:MaxPermSize=156m -XX:CMSInitiatingOccupancyFraction=60 -Xloggc:/var/ec/gc.log -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -XX:+PrintClassHistogram -XX:+TraceClassUnloading -Duser.region=US -Dsun.rmi.dgc.client.gcInterval=86400000 -Dsun.rmi.dgc.server.gcInterval=86400000
    Any help/suggestions are highly appreciated. Thanks!

    When using the concurrent mark-sweep (CMS) collector there is a phase where CMS looks for all the live objects in the heap (referred to as concurrent marking of live objects) while the application is running and likely changing objects in the heap. In order to assure correctness, CMS has a phase (referred to as the remark) where all the application's threads are stopped and it looks for objects that have changed while it was doing the concurrent marking. The "grey object rescan" refers to looking at the objects that have changed. Its length depends on the number of objects changed during the concurrent marking, so the level of activity of the application can affect this phase.
    I note that you have turned off the parallel remark on the command line (-XX:-CMSParallelRemarkEnabled). If you've had problems with the parallel remark, then turning it on is not an option. If you have not had problems with it, turn it on and see if it helps.
    If you use the flag -XX:PrintCMSStatistics=1, you will get additional output. In it you can look for lines such as
    (re-scanned XXX dirty cards in cms gen)
    If XXX is smaller in the cases where the "grey object rescan" is shorter, and larger in the cases where it is longer, then the problem is due to lots of activity by the application during the concurrent mark. If you're not on the 5.0 JDK and can move to it, please do. The parallel remark will be more stable there.
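    Concretely, a sketch of the two changes against the command line quoted earlier in this thread (everything else stays as posted):
    /usr/java/jdk1.5.0_06/bin/java ... -XX:+CMSParallelRemarkEnabled -XX:PrintCMSStatistics=1 ...
    That is, flip -XX:-CMSParallelRemarkEnabled to -XX:+CMSParallelRemarkEnabled and add -XX:PrintCMSStatistics=1, then compare the "(re-scanned XXX dirty cards in cms gen)" counts between short and long remarks.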

  • June 2009 15" MBP - 5400 Hitachi Drive - Long BeachBall Pauses

    I've got a new late June Unibody 15" MBP with a 5400 rpm Hitachi drive and 4Gb RAM and I periodically experience long (30 or so second) freezes.
    I suspected (and continue to suspect) Safari 4. It seems that this pause always happens while I am in Safari. When it happens it always freezes iTunes output. Sometimes other windows (like Terminal or Mail) are fine, sometimes they are frozen.
    When these happen, I see no CPU load. I've started monitoring iostat to see if any i/o is going on and I see i/o go to zero when the freeze occurs. (I have an external drive attached for time machine, but in the example below it wasn't actively doing anything.)
    disk0 disk2 cpu load average
    KB/t tps MB/s KB/t tps MB/s us sy id 1m 5m 15m
    49.77 14 0.68 0.00 0 0.00 12 12 76 0.48 0.48 0.42
    12.34 28 0.33 0.00 0 0.00 10 10 80 0.41 0.46 0.42
    14.43 91 1.28 0.00 0 0.00 18 12 71 0.43 0.46 0.42
    17.96 35 0.61 0.00 0 0.00 12 11 77 0.83 0.55 0.45
    11.87 44 0.52 0.00 0 0.00 13 11 76 0.71 0.53 0.44
    7.87 297 2.28 0.00 0 0.00 13 12 75 0.82 0.56 0.45
    *0.00 0 0.00* 0.00 0 0.00 12 11 77 0.77 0.56 0.45 (iTunes is frozen during these samples)
    *0.00 0 0.00* 0.00 0 0.00 3 8 89 0.65 0.54 0.45
    86.55 2 0.19 0.00 0 0.00 5 9 86 0.55 0.52 0.44
    I've noticed that it occurs more often when I use my higher power GPU.
    I've tried resetting Safari and disabling top sites (although I haven't been successful at that.) I've also reinstalled Safari.
    Any ideas?
    Thanks,
    John

    It does appear that I am running that update. From System Profiler:
    Hardware Overview:
    Model Name: MacBook Pro
    Model Identifier: MacBookPro5,3
    Processor Name: Intel Core 2 Duo
    Processor Speed: 2.8 GHz
    Number Of Processors: 1
    Total Number Of Cores: 2
    L2 Cache: 6 MB
    Memory: 4 GB
    Bus Speed: 1.07 GHz
    Boot ROM Version: MBP53.00AC.B03
    SMC Version (system): 1.48f2
    Serial Number (system): W89236DU644
    Hardware UUID: B06E8157-1F18-5D8B-AC83-6D515B508B3B
    Sudden Motion Sensor:
    State: Enabled
    Thanks,
    John

  • Long GC pauses; full gc eventually takes 30 seconds+

    jre 1.5.0_06,
    -Dcom.sun.management.jmxremote -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution -XX:NewRatio=3 -XX:MaxTenuringThreshold=3 -XX:NewSize=90M -XX:MaxNewSize=90M -Xmx800M
    normally, full gc's occur with < 1 second durations, but eventually, the duration peaks at 30 seconds, then 7 seconds, then back to normal
    the heap was at about 290MB (far from full).
    setting maxgcpausemillis has no effect (something extraordinary happens?)
    there is no obvious pattern in the long pauses (not due to perm resizing or so)
    does anyone have experience with sudden long pauses?
    we could think of heap being swapped - but we don't know how to monitor this (and nobody was starting memory consuming applications at the time)
    tak!
    /aksel

    EDIT: Also I think NewSize/MaxNewSize should be set to the same value as NewRatio (fourth of the heap)
    Yes - there is some confusion there... the 90MB is used by the VM though.
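    For reference, the arithmetic behind that confusion: with the flags quoted above, -XX:NewRatio=3 alone would make the young generation 1/(3+1) of the 800M heap, i.e. about 200M, but the explicit -XX:NewSize=90M -XX:MaxNewSize=90M pin it at 90M, so NewRatio is effectively overridden.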
    Unfortunately we are not able to try out CMS immediately (as we are in production) - and even if this might be the answer, we would have to test CMS for a long time before entering production. We were hit by crashes caused by CMS a year ago, which is when we decided not to use it (http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6386633).
    here are some long pauses (days apart - the gc log is embedded in our own loggings)
    061221 102753 0750 EV coreservice.taskoutput.container 75.537: [GC [PSYoungGen: 70133K->19827K(53632K)] 173056K->138149K(172736K), 11.5205585 secs] ##[
    070301 122542 0812 EV coreservice.taskoutput.container 6146.116: [Full GC [PSYoungGen: 768K->0K(90944K)] [PSOldGen: 286976K->213826K(259328K)] 287744K->213826K(350272K) [PSPermGen: 19432K->19432K(23040K)], 30.1171480 secs] ##[
    070305 163701 0140 EV coreservice.taskoutput.container [PSYoungGen: 1256K->0K(90176K)] [PSOldGen: 239309K->197132K(221632K)] 240565K->197132K(311808K) [PSPermGen: 19260K->19039K(23808K)], 20.3872070 secs] ##[

  • Safari 6.0 developer tools automatically pause on javascript errors. Prevent this automatic pausing?

    When developing with the built-in tools in Safari 6.0, the developer tools automatically pause the scripts on minor errors. Is there a way to prevent this default action, while still allowing manual pausing of scripts?
    Here's a photo of the breakpoint management tab in the Safari 6.0 developer tools:

    Hopefully this isn't stating the obvious... You have set the debugger to break on exceptions. Look in the breakpoint navigator for break on All Exceptions and break on All Uncaught Exceptions. I bet you have one or both enabled.

  • Upgrade R/3 4.7 to EHP 5 - Long time pause at Step 6.Downtime - TABIM_POST!

    At step 6, Downtime, in the TABIM_POST (import of additional requests) phase, the upgrade has now been processing for a very long time! Server time: 24/02/2012 10:30
    Status is: "Phase MAIN_NEWBAS/TABIM_POST is running..."
    The last log files stopped yesterday at 23/02/2012 20:25!
    Edited by: Baurzhan Lekerov on Feb 24, 2012 6:16 AM

    I created a message to SAP because we don't have 2 methods at this step during the upgrade process:
    - GET_COMPS_FOR_ENH
    - GET_COMPS_FOR_SPOT
    for class CL_WB_ENH_UTILITIES.
    Can we copy the source code from another ECC 6.0 system to our upgrade system manually?
    Our upgrade R3 system is a homogeneous copy of the source R3 4.7 system.
    This test-trial upgrade system is not present in the system landscape with SolMan, and now we have a problem with Change Request Management - when I try to activate the corrections from note 1369430, the following message occurs:
    "You cannot assign any requests to project SID_P00001.
    Diagnosis
    You want to create a request in project SID_P00001, or assign an existing request to this project. However, you are not currently allowed to use this function for this project."
    How can we reset this requirement from ChaRM?
    Edited by: Baurzhan Lekerov on Feb 25, 2012 7:54 AM

  • Long Audio Pause Between Tracks. Can that be changed?

    I get a three second pause of silence between all my audio tracks. For the life of me, I can't seem to find a way to get rid of the pauses. Any idea? Thanks! Bill
    Message Edited by bsherck on 03-05-2007 04:2 PM

    What is your player? My Zen Micro will take ~3 seconds to change if I manually do it, but if I wait until it changes itself, it seems to pre-load the next song and changes instantly at the end of a song.
    Maybe you need to Clean-up or reformat, also?

  • Long GC pause with 1.5

    We recently tried to upgrade one of our servers from 1.4.2_05 to 1.5 SP1. Everything worked fine for about 4 hours, but then the application stopped for 90+ seconds, most likely doing garbage collection - the GC log entries show an old collection phase 4-0 pause time of 104654.538000 ms.
    The memory usage pattern did not change (no extensive memory usage) and the load on the system was low.
    Any idea why the application paused for an extensive period of time? Please find command line parameters, version info and relevant GC log entries below.
    Thanks,
    Malte
    Starting JVM: /usr/j2se/bin/java -server -Xms512m -Xmx1518m -Xgcprio:pausetime -Xgcreport -Xgcpause -Xnoclassgc -Xverbosetimestamp -Xverbose:memory -Duser.region=US -Djava.rmi.server.hostname=127.0.0.1 -Xverboselog:/var/ec/gc.log -Xss1m -Dresin.home=/opt/ec/resin-3.0.8 -Djava.util.logging.manager=com.caucho.log.LogManagerImpl -Djavax.management.builder.initial=com.caucho.jmx.MBeanServerBuilderImpl com.caucho.server.resin.Resin -conf /opt/broker/conf/resin.conf-DONTTOUCH -stdout /usr/opt/var/ec/resin_stdout.log -stderr /usr/opt/var/ec/resin_stderr.log
    [Wed Jun  8 02:22:32 2005][10594][memory ] 16394.810-16396.048: GC 855928K->637947K (982468K), 38.596 ms
    [Wed Jun  8 02:23:32 2005][10594][memory ] old collection phase 0-2 pause time: 4.982000 ms
    [Wed Jun  8 02:23:33 2005][10594][memory ] total mark time: 1206.293 ms
    [Wed Jun  8 02:23:34 2005][10594][memory ] old collection phase 4-5 pause time: 282.940000 ms
    [Wed Jun  8 02:23:34 2005][10594][memory ] (pause includes wb processing: 11.989 ms, compaction: 187.881 ms, update ref: 82.460 ms)
    [Wed Jun  8 02:23:34 2005][10594][memory ] old collection phase 5 pause time: 1.489000 ms
    [Wed Jun  8 02:23:34 2005][10594][memory ] total sweep time: 362.131 ms
    [Wed Jun  8 02:23:34 2005][10594][memory ] old collection phase 5-0 pause time: 3.600000 ms
    [Wed Jun  8 02:23:34 2005][10594][memory ] 16456.124-16457.694: GC 875844K->602150K (982468K), 293.011 ms
    [Wed Jun  8 02:23:36 2005][10594][memory ] old collection phase 0-2 pause time: 5.443000 ms
    [Wed Jun  8 02:23:37 2005][10594][memory ] total mark time: 1193.165 ms
    [Wed Jun  8 02:23:37 2005][10594][memory ] Changing GC strategy to single generation, concurrent mark and parallel sweep
    [Wed Jun  8 02:23:37 2005][10594][memory ] total sweep time: 188.485 ms
    [Wed Jun  8 02:23:37 2005][10594][memory ] thread waited for memory 1221.038 ms
    [Wed Jun  8 02:23:37 2005][10594][memory ] old collection phase 4-0 pause time: 201.460000 ms
    [Wed Jun  8 02:23:37 2005][10594][memory ] (pause includes wb processing: 11.796 ms, compaction: 89.731 ms, update ref: 174.185 ms)
    [Wed Jun  8 02:23:37 2005][10594][memory ] 16459.696-16461.079: GC 897617K->644996K (982468K), 206.903 ms
    [Wed Jun  8 02:23:43 2005][10594][memory ] old collection phase 0-2 pause time: 5.478000 ms
    [Wed Jun  8 02:23:44 2005][10594][memory ] total mark time: 1250.001 ms
    [Wed Jun  8 02:25:29 2005][10594][memory ] total sweep time: 104636.154 ms
    [Wed Jun  8 02:25:29 2005][10594][memory ] thread waited for memory 105716.032 ms
    [Wed Jun  8 02:25:29 2005][10594][memory ] old collection phase 4-0 pause time: 104654.538000 ms
    [Wed Jun  8 02:25:29 2005][10594][memory ] (pause includes wb processing: 17.199 ms, compaction: 104566.365 ms, update ref: 50.661 ms)
    [Wed Jun  8 02:25:29 2005][10594][memory ] 16467.014-16572.902: GC 873943K->692479K (982468K), 104660.016 ms
    [Wed Jun  8 02:25:29 2005][10594][memory ] old collection phase 0-2 pause time: 7.802000 ms
    [Wed Jun  8 02:25:29 2005][10594][memory ]
    [Wed Jun  8 02:25:29 2005][10594][memory ] Memory usage report
    [Wed Jun  8 02:25:29 2005][10594][memory ]
    [Wed Jun  8 02:25:29 2005][10594][memory ] young collections
    [Wed Jun  8 02:25:29 2005][10594][memory ] number of collections = 12
    [Wed Jun  8 02:25:29 2005][10594][memory ] total promoted = 5557728 (size 311089312)
    [Wed Jun  8 02:25:29 2005][10594][memory ] max promoted = 653552 (size 36719336)
    [Wed Jun  8 02:25:29 2005][10594][memory ] total GC time = 3.995 s
    [Wed Jun  8 02:25:29 2005][10594][memory ] mean GC time = 332.910 ms
    [Wed Jun  8 02:25:29 2005][10594][memory ] maximum GC Pauses = 429.677 , 452.820, 471.384 ms
    [Wed Jun  8 02:25:29 2005][10594][memory ]
    [Wed Jun  8 02:25:29 2005][10594][memory ] old collections
    [Wed Jun  8 02:25:29 2005][10594][memory ] number of collections = 304
    [Wed Jun  8 02:25:29 2005][10594][memory ] total promoted = 357976 (size 18460880)
    [Wed Jun  8 02:25:29 2005][10594][memory ] max promoted = 122556 (size 6577440)
    [Wed Jun  8 02:25:29 2005][10594][memory ] total GC time = 471.349 s (pause 156.340 s)
    [Wed Jun  8 02:25:29 2005][10594][memory ] mean GC time = 1550.490 ms (pause 514.276 ms)
    [Wed Jun  8 02:25:29 2005][10594][memory ] maximum GC Pauses = 540.426 , 557.177, 104654.538 ms
    [Wed Jun  8 02:25:29 2005][10594][memory ]
    [Wed Jun  8 02:25:29 2005][10594][memory ] number of concurrent mark phases = 304
    [Wed Jun  8 02:25:29 2005][10594][memory ] number of concurrent sweep phases = 290
    [Wed Jun  8 02:25:29 2005][10594][memory ] number of parallel sweep phases = 13

    We seem to be getting a similar thing happening using 1.4.2_05 and -XX:+UseParNewGC on Solaris. It runs for about 40 mins in our production system, then the ParNew GCs blow out to anywhere between 20 and 300 seconds or more!
    I'm stumped, since there's no indication of any problems; it just seems as though the VM has decided to change approaches or is contending for resources in some other area (e.g. threads).

  • iMac i3 hangs, 3 long beeps, pause, 3 long beeps, etc - requires hard restart to clear

    Recently added two 2-Gig memory sticks (now 4 of them). One of the added sticks does not show a manufacturer or serial number, but does show "OK".
    No other changes, but the beeps don't stop until reboot.

    I think you still have some bad RAM.
    The memory test can really only be trusted if it finds a problem, not if it doesn't find one.
    Memtest OS X...
    http://www.memtestosx.org/joomla/index.php
    Rember is a freeware GUI for the memtest ...
    http://tech.kateva.org/2005/10/rember-freeware-memory-test-utility.html

  • ParNew long pauses in Tomcat 6 on T5120.

    I am running some simulated user load for 50 concurrent users on a Java application running inside Tomcat 6 on Sun's T5120 (8 processors, 8G RAM) on Solaris 10. The utilization of the application seems to be OK (90-95%), but sometimes the JVM goes into long GC pauses that are not acceptable in my case. The longest pause I can afford is about 10 seconds. The only other thing that's running on the machine is Apache (mod_jk), which forwards the requests to the Tomcat process.
    My GC Settings:
    GC_VERBOSE_OPTS="-XX:+PrintTenuringDistribution -Xloggc:${CATALINA_HOME}/logs/gc_${TSTAMP}.log"
    GC_VERBOSE="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplicationConcurrentTime $GC_VERBOSE_OPTS"
    MEMORY_SETTINGS="-Xms6912m -Xmx6912m -XX:NewSize=1728m -XX:MaxNewSize=1728m -XX:SurvivorRatio=8 -Xss256k -d64 -XX:PermSize=192M -XX:MaxPermSize=192M"
    GC_SETTINGS=" -XX:+DisableExplicitGC -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=31 -X:ParallelGCThreads=6
    The output of verbose GC just before it goes into a very long 55-second pause:
    76824.094: [GC 76824.094: [ParNew
    Desired survivor size 163027352 bytes, new threshold 16 (max 31)
    - age   1:   28658536 bytes,   28658536 total
    - age   2:   78258856 bytes,  106917392 total
    - age   3:   47935016 bytes,  154852408 total
    : 1592576K->152602K(1592576K), 0.7859117 secs] 4605449K->3213877K(6900992K), 0.7864152 secs]
    Total time for which application threads were stopped: 0.7876342 seconds
    Application time: 17.9823540 seconds
    Total time for which application threads were stopped: 0.0037981 seconds
    Application time: 0.0002300 seconds
    76842.868: [Full GC 76842.868: [ParNew
    Desired survivor size 163027352 bytes, new threshold 16 (max 31)
    - age   1:   16586944 bytes,   16586944 total
    - age   2:   24580024 bytes,   41166968 total
    - age   3:   77453264 bytes,  118620232 total
    - age   4:   41331496 bytes,  159951728 total
    : 1568283K->157516K(1592576K), 0.5095165 secs] 4629559K->3218791K(6900992K), 0.5099791 secs]
    Total time for which application threads were stopped: 0.5109060 seconds
    Application time: 21.7442444 seconds
    76865.132: [GC 76865.132: [ParNew
    Desired survivor size 163027352 bytes, new threshold 5 (max 31)
    - age   1:   70949544 bytes,   70949544 total
    - age   2:     325496 bytes,   71275040 total
    - age   3:   16995328 bytes,   88270368 total
    - age   4:   51961024 bytes,  140231392 total
    - age   5:   39539328 bytes,  179770720 total
    : 1573196K->176896K(1592576K), 4.8568739 secs] 4634471K->3402338K(6900992K), 4.8573679 secs]
    Total time for which application threads were stopped: 4.8667158 seconds
    Application time: 0.0000997 seconds
    76870.002: [GC [1 CMS-initial-mark: 3225442K(5308416K)] 3425594K(6900992K), 1.2304280 secs]
    Total time for which application threads were stopped: 1.2421993 seconds
    76871.233: [CMS-concurrent-mark-start]
    Application time: 3.9732836 seconds
    76875.212: [GC 76875.212: [ParNew
    Desired survivor size 163027352 bytes, new threshold 5 (max 31)
    - age   1:   89811184 bytes,   89811184 total
    - age   2:   47431856 bytes,  137243040 total
    - age   3:     229704 bytes,  137472744 total
    - age   4:   10895976 bytes,  148368720 total
    - age   5:   31371840 bytes,  179740560 total
    : 1592576K->176896K(1592576K), 19.5701529 secs] 4818018K->3616356K(6900992K), 19.6394327 secs]
    *Total time for which application threads were stopped: 19.6456021 seconds*
    Application time: 5.3914509 seconds
    76900.622: [GC 76900.622: [ParNew
    Desired survivor size 163027352 bytes, new threshold 3 (max 31)
    - age   1:   73583152 bytes,   73583152 total
    - age   2:   73057080 bytes,  146640232 total
    - age   3:   32981616 bytes,  179621848 total
    - age   4:      98336 bytes,  179720184 total
    - age   5:      12880 bytes,  179733064 total
    : 1592576K->176896K(1592576K), 54.7521974 secs] 5032036K->3921814K(6900992K), 54.7527833 secs]
    *Total time for which application threads were stopped: 55.1387374 seconds*
    Application time: 5.3171414 seconds
    76960.700: [GC 76960.700: [ParNew
    Desired survivor size 163027352 bytes, new threshold 3 (max 31)
    - age   1:   66359848 bytes,   66359848 total
    - age   2:   62758584 bytes,  129118432 total
    - age   3:   50477568 bytes,  179596000 total
    : 1592576K->176896K(1592576K), 7.1058428 secs] 5337494K->4095887K(6900992K), 7.1063802 secs]
    *Total time for which application threads were stopped: 7.1077796 seconds*
    Application time: 12.7908379 seconds
    Total time for which application threads were stopped: 0.0042312 seconds
    Application time: 0.3358069 seconds
    76980.938: [GC 76980.939: [ParNew
    Desired survivor size 163027352 bytes, new threshold 3 (max 31)
    - age   1:   69012896 bytes,   69012896 total
    - age   2:   56309432 bytes,  125322328 total
    - age   3:   54304696 bytes,  179627024 total
    : 1592576K->176896K(1592576K), 2.5164799 secs] 5512702K->4233158K(6900992K), 2.5169843 secs]
    Total time for which application threads were stopped: 2.5178621 seconds
    Application time: 14.8490191 seconds
    Total time for which application threads were stopped: 0.0070005 seconds
    Application time: 0.0003056 seconds
    76998.312: [Full GC 76998.313: [ParNew
    Desired survivor size 163027352 bytes, new threshold 16 (max 31)
    - age   1:   38875304 bytes,   38875304 total
    - age   2:   63941104 bytes,  102816408 total
    - age   3:   56090408 bytes,  158906816 total
    : 1592576K->156275K(1592576K), 1.1885482 secs] 5648848K->4265573K(6900992K), 1.1891583 secs]
    Total time for which application threads were stopped: 1.1899499 seconds
    Application time: 8.8337996 seconds
    I would appreciate some help tuning GC parameters to eliminate the long pauses.
    Edited by: MikhailPDX on May 6, 2008 11:53 AM
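    As a sanity check on the log, the recurring "Desired survivor size 163027352 bytes" follows from the flags above: with -XX:NewSize=1728m and -XX:SurvivorRatio=8 each survivor space is roughly 1728M / (8 + 2) ≈ 172.8M, and -XX:TargetSurvivorRatio=90 asks for 90% of that, about 163M; VM alignment accounts for the small remainder.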

    I saw this posting before about pauses and I don't think I am having the same issue, since I am using Java(TM) 2 Runtime Environment, Standard Edition 1.5.0_15-b04 (Java HotSpot(TM) 64-Bit Server VM), where the mentioned pauses bug was supposedly fixed.
    Edited by: MikhailPDX on May 7, 2008 10:30 AM

Maybe you are looking for

  • 790FX-GD70 LAN Problems

    Hello all; I recently purchased a bunch of components on NCIX, and assembled the computer together, and everything is working extremely well - minus the LAN connections. I've never seen the LEDs light up, and all I'm ever getting in the windows prope

  • How to use multiple WSDL operations in One BPEL process Recieve Activity ?

    Is there anyway to attach multiple WSDL operations with a Single BPEL process ? How ?

  • Can't do a save as disc image

    I have iMovie '09 (8.0.6) iDVD 7.1.2 and am having a problem saving a large file (3.9gb) as a disc image. This file was made in iMovie with chapters and I "Shared to Media Browser" (I am using the .m4v file).I did a short (2 min.) test file in iDVD t

  • Color correcting w/ Effects tab......

    I've applied the color correction from the Effects tabl under video filter to a particular clip in a particular sequence. And, I'm attempting to simply start the clip at zero saturation and then end the clip at 100 saturation. I've set key frame poin

  • Additional fields for bp duplication check

    Hello, I'm in a process of implementing ADDRESS_SEARCH BADI. Is there any way to bring additianal fields (phone number and date of birth) to the BADI interface? Thanks