Extremely long GC pauses

Hi,
We're running the following cluster on a single machine:
- 2 x extend nodes running at 1Gb
- 6 x storage nodes running at 2Gb
- Coherence 3.6.1
- Java 6u16
After a period of about 20 hours of use we see massive GC pauses of between 246 and 1440 seconds in the storage-enabled nodes, with messages including "concurrent mode failure" and "promotion failed".
Is anyone able to advise which of the many GC parameters we should be using for a simple cluster such as ours?
Thanks
Richard

Here is an extract of the GC log from the extend node:
2012-01-18 20:19:16.136/23277.871 Oracle Coherence GE 3.6.1.0 <D5> (thread=Cluster, member=1): Service guardian is 13187ms late, indicating that this JVM may be running slowly or experienced a long GC
2012-01-18T20:19:16.417+0100: [GC Before GC:
Statistics for BinaryTreeDictionary:
Total Free Space: 39786722
Max   Chunk Size: 5166196
Number of Blocks: 1323
Av.  Block  Size: 30073
Tree      Height: 29
Before GC:
Statistics for BinaryTreeDictionary:
Total Free Space: 49152
Max   Chunk Size: 49152
Number of Blocks: 1
Av.  Block  Size: 49152
Tree      Height: 1
[ParNew
Desired survivor size 6684672 bytes, new threshold 1 (max 4)
- age   1:   13317624 bytes,   13317624 total
: 118016K->13056K(118016K), 29.7267156 secs] 603664K->577106K(1035520K)After GC:
Statistics for BinaryTreeDictionary:
Total Free Space: 30371405
Max Chunk Size: 5166196
Number of Blocks: 768
Av. Block Size: 39546
Tree Height: 28
After GC:
Statistics for BinaryTreeDictionary:
Total Free Space: 49152
Max Chunk Size: 49152
Number of Blocks: 1
Av. Block Size: 49152
Tree Height: 1
, 29.7272082 secs] [Times: user=0.84 sys=0.34, real=29.73 secs]
2012-01-18 20:19:46.151/23307.886 Oracle Coherence GE 3.6.1.0 <D5> (thread=Cluster, member=1): Service guardian is 25015ms late, indicating that this JVM may be running slowly or experienced a long GC
2012-01-18T20:19:46.526+0100: [GC Before GC:
Statistics for BinaryTreeDictionary:
Total Free Space: 30371405
Max   Chunk Size: 5166196
Number of Blocks: 768
Av.  Block  Size: 39546
Tree      Height: 28
Before GC:
Statistics for BinaryTreeDictionary:
Total Free Space: 49152
Max   Chunk Size: 49152
Number of Blocks: 1
Av.  Block  Size: 49152
Tree      Height: 1
[ParNew
Desired survivor size 6684672 bytes, new threshold 1 (max 4)
- age   1:   13336344 bytes,   13336344 total
: 118016K->13056K(118016K), 28.6324107 secs] 682066K->651460K(1035520K)After GC:
Statistics for BinaryTreeDictionary:
Total Free Space: 21568859
Max Chunk Size: 5166196
Number of Blocks: 383
Av. Block Size: 56315
Tree Height: 20
After GC:
Statistics for BinaryTreeDictionary:
Total Free Space: 49152
Max Chunk Size: 49152
Number of Blocks: 1
Av. Block Size: 49152
Tree Height: 1
, 28.6328374 secs] [Times: user=6.59 sys=0.27, real=28.62 secs]
2012-01-18 20:20:15.149/23336.884 Oracle Coherence GE 3.6.1.0 <D5> (thread=Cluster, member=1): Service guardian is 23998ms late, indicating that this JVM may be running slowly or experienced a long GC
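A hedged starting point, not a tested prescription: "promotion failed" and "concurrent mode failure" usually mean the CMS old generation filled up (or fragmented) before the concurrent cycle finished, so the usual first knobs are an explicitly sized young generation and an earlier CMS trigger. The values below are illustrative assumptions for a 2Gb storage node, not settings measured for this cluster:

```
-Xms2g -Xmx2g
-XX:+UseParNewGC -XX:+UseConcMarkSweepGC
-XX:NewSize=256m -XX:MaxNewSize=256m
-XX:SurvivorRatio=6
-XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly
-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log
```

One thing worth ruling out before tuning collector flags: in the log above, the 29.7-second ParNew shows user=0.84 sys=0.34, i.e. the JVM did almost no CPU work during the pause. That pattern often indicates the process was swapped out or starved rather than busy collecting, so check that all eight JVMs actually fit in physical RAM on the single machine.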

Similar Messages

  • Extremely long (10 minute) Full GC times

    I have a problem that seems unrelated to traditional GC tuning. Full garbage collection on my server (x86, Linux, 2 proc, 2GB memory, 1GB heap) is taking in excess of 10 minutes. The frequency of the Full GC is about every 8-10 hours. I've combed some posts for issues related to long GC, but "long" is a relative term, and usually refers to the 2-10 second range.
    My question is: what kinds of things could cause such an extremely long GC? My theory is that this particular machine is hosting some other applications, most notably a mysql server that is connected to remotely. With this setup, another application could be using resources on this server while the Full GC is executing, causing thrashing from a CPU perspective. But even so, it seems that this wouldn't make a Full GC take two orders of magnitude longer.
    I've currently changed to the CMS garbage collector, so the long pauses are no longer a problem, but I'd really like to understand why such a long Full GC pause is even possible, and what can be done about it using the serial collector.

    Possibly the objects that are being garbage collected are having their finalize() method called. It's not normally guaranteed that the GC will call an object's finalize() method... but you never know.
    In that case, the finalize() method(s) may be timing out, e.g. trying to close database or network connections, and taking 60 seconds or more to time out individually.
    It's just a wild guess...
    regards,
    Owen
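    Owen's guess can be sketched in a few lines. The class below is hypothetical (not the poster's code) and only simulates the effect he describes: finalizable objects must survive at least one extra collection and are then processed one at a time by the single finalizer thread, so a slow finalize() lets dead objects pile up across many GCs.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical class whose finalize() simulates a cleanup that blocks,
// e.g. a network close waiting on a timeout. Because finalizable objects
// are queued and finalized one at a time, a slow finalize() creates a
// growing backlog of garbage the collector cannot yet reclaim.
public class SlowFinalizer {
    static final AtomicInteger created = new AtomicInteger();
    static final AtomicInteger finalized = new AtomicInteger();

    SlowFinalizer() { created.incrementAndGet(); }

    @Override
    protected void finalize() throws Throwable {
        try {
            Thread.sleep(5);           // stand-in for a slow close/timeout
            finalized.incrementAndGet();
        } finally {
            super.finalize();
        }
    }

    public static void main(String[] args) throws Exception {
        for (int i = 0; i < 10000; i++) {
            new SlowFinalizer();       // immediately unreachable
        }
        System.gc();
        Thread.sleep(200);
        // Typically far fewer objects have been finalized than created at
        // this point, showing the backlog a slow finalize() builds up.
        System.out.println("created=" + created.get()
                + ", finalized so far=" + finalized.get());
    }
}
```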

  • Extremely long "reading" times in media browser

    We are running CC2014 on 3 edit systems with a shared 10Gbit 36TB Synology NAS.  Each workstation has approximately 12TB of local raid storage as well. We shoot P2 primarily and I have a little over 17TB archived on the Synology.  Lately, we have had extremely long "reading" times in Media Browser on all the machines.  I don't quite understand how Premiere is indexing and displaying as sometimes the "reading" time is relatively quick, ie. under 10 seconds and then other times it can take 3-5 minutes to display clips within a directory.
    My Directory Structure is:
    Media
         P2 footage
              (P2 folders/ie. cards are all saved with their shoot dates here)
              Our total archive of P2 footage is around 70 cards.
    It appears that when Media Browser is pointed to a P2 card folder it is indexing the entire "P2 footage" directory, i.e. all 70 cards, rather than just the individual folder containing one P2 card's footage. Is this the case?
    Would I get better read speeds in Media Browser if I organized my P2 footage into more directories, ie. by year and quarter or by year and month?  
    Really need to know how the indexing works. Does Premiere cache preview files for each clip, and if so, how long are these previews valid? It seems that when I media browse a new P2 card/folder, it has forgotten any previous preview files that have been made and cached.
    Any explanation or help would be appreciated.
    We have copied large numbers of cards/folders from the Synology to our local raids to see if the latency was due to the network but the results are the same when we media browse on the local raid copies.

    We are working on this issue. Could you open a support case and submit your
    project so we can have support see if it is fixed in a later release?
    br
    William Wheeler wrote:
    Is anyone experiencing long deploy times using Eclipse with the Portal/XML Beans facets...it takes approx 30 mins to deploy our ear file. The ear contains:
    3 large schemas projects (using XML Bean Builder facets)
    1 Control util project
    1 Portal Web Project
    1 Ear Project
    3 Util projects
    Is there some validation options that can be turned off or other actions to speed this up?
    Bill Wheeler

  • Using Table.Join formula takes extremely long time to get results (Why doesn't Query Folding work?)

    Hi,
    I built a query with 4 tables inside (loaded from an Oracle DB; two of them are quite big, more than a million rows each). After filtering, I tried to build relationships between the tables using the Table.Join formula. However, the process took an extremely long time to
    bring out results (I ended the process after 15 mins of processing). There was a status bar that kept updating while the query was processing. I suppose
    this is because query folding didn't work, so PQ had to load all the data into local memory first and then do the operation, instead of doing all the work on the source system side. Am I right? If yes, is there any way to solve this issue?
    Thanks.
    Regards,
    Qilong 

    Hi Curt,
    Here's the query that I'm refering,
    let
        Source = Oracle.Database("reporting"),
        AOLOT_HISTS = Source{[Schema="GEN",Item="MVIEW$_AOLOT_HISTS"]}[Data],
        WORK_WEEK = Source{[Schema="GEN",Item="WORK_WEEK"]}[Data],
        DEVICES = Source{[Schema="GEN",Item="MVIEW$_DEVICES"]}[Data],
        AO_LOTS = Source{[Schema="GEN",Item="MVIEW$_AO_LOTS"]}[Data],
        Filter_WorkWeek = Table.SelectRows(WORK_WEEK, each ([WRWK_YEAR] = 2015) and (([WORK_WEEK] = 1) or ([WORK_WEEK] = 2) or ([WORK_WEEK] = 3))), 
        Filter_AlotHists = Table.SelectRows(AOLOT_HISTS, each ([STEP_NAME] = "BAKE" or [STEP_NAME] = "COLD TEST-IFLEX" or [STEP_NAME] = "COLD TEST-MFLEX") and ([OUT_QUANTITY] <> 0)),
        #"Added Custom" = Table.AddColumn(Filter_AlotHists, "Custom", each Table.SelectRows(Filter_WorkWeek, (table2Row) => [PROCESS_END_TIME] >= table2Row[WRWK_START_DATE] and [PROCESS_END_TIME] <= table2Row[WRWK_END_DATE])),
        #"Expand Custom" = Table.ExpandTableColumn(#"Added Custom", "Custom", {"WRWK_YEAR", "WORK_WEEK", "WRWK_START_DATE", "WRWK_END_DATE"}, {"WRWK_YEAR", "WORK_WEEK", "WRWK_START_DATE", "WRWK_END_DATE"}),
        Filter_AolotHists_byWeek = Table.SelectRows(#"Expand Custom", each ([WORK_WEEK] <> null)),
        SelectColumns_AolotHists = Table.SelectColumns(Filter_AolotHists_byWeek,{"ALOT_NUMBER", "STEP_NAME", "PROCESS_START_TIME", "PROCESS_END_TIME", "START_QUANTITY", "OUT_QUANTITY", "REJECT_QUANTITY", "WRWK_FISCAL_YEAR", "WRWK_WORK_WEEK_NO"}),
        Filter_Devices= Table.SelectRows(DEVICES, each ([DEPARTMENT] = "TEST1")),
        SelectColumns_Devices = Table.SelectColumns(Filter_Devices,{"DEVC_NUMBER", "PCKG_CODE"}),
        Filter_AoLots = Table.SelectRows(AO_LOTS, each Text.Contains([DEVC_NUMBER], "MC09XS3400AFK") or Text.Contains([DEVC_NUMBER], "MC09XS3400AFKR2") or Text.Contains([DEVC_NUMBER], "MC10XS3412CHFK") or Text.Contains([DEVC_NUMBER], "MC10XS3412CHFKR2")),
        SelectColumns_AoLots = Table.SelectColumns(Filter_AoLots,{"ALOT_NUMBER", "DEVC_NUMBER", "TRACECODE", "WAFERLOTNUMBER"}),
        TableJoin = Table.Join(SelectColumns_AolotHists, "ALOT_NUMBER", Table.PrefixColumns(SelectColumns_AoLots, "AoLots"), "AoLots.ALOT_NUMBER"),
        TableJoin1 = Table.Join(TableJoin, "AoLots.DEVC_NUMBER", Table.PrefixColumns(SelectColumns_Devices, "Devices"), "Devices.DEVC_NUMBER")
    in
        TableJoin1
    Could you please give me some hints why it needs so long to process?
    Thanks.

  • Anybody having probs w/extremely long synchs?

    I lost my 1G Nano 2 weeks ago, two days before the 2G Nano's went on sale, so I bought one. Did the "upgrade" to iTunes v7 (and subsequently v7.0.1) and have also updated my Nano firmware to 1.0.2, so I think I'm running the latest versions. I am, however, experiencing extremely long synchs with the new Nano. Generally what I get is about a 20-second "handshake" where I can see the synch wheel on the Nano display spinning, then it goes away for a LONG time. Meanwhile iTunes just sits there using nearly 100% of my resources and saying "Updating iPod. Do not disconnect". Generally after somewhere between 5 and 10 minutes goes by, the synch wheel reappears on the Nano screen and then if songs need to be transferred that happens (at normal speed) and then everything's over.
    So, what this means is that it takes usually about 10 minutes for the thing to synch, even if no songs, podcasts, etc. are being added or deleted. The weird thing is that every 6 or 7 synchs I get a quick synch like I should. I can't figure out when this happens and have even synched the thing multiple times, leaving it in the dock and just repeatedly selecting "synch" from the Device menu, and not using the computer for anything else. I'll get a few long synchs, then one normal one, then back to long ones.
    Very frustrating.
    Thanks for any help!

    Your software's definitely up to date for your iPod.
    I've seen a couple of reports of people having long syncs with the latest iTunes. From what I've seen, the newer iTunes seems to take more of the computer's processor than the older iTunes versions did.
    If your PC is a desktop make sure you have the iPod plugged in to a USB port on the rear of the computer. You might also want to make sure you have the latest drivers for your computer installed... you can check that in Device Manager (double click on what you want to check, and you'll see the update button).
    To get to Device Manager, click on Start, right click on My Computer, and click on Manage. Then, click on Device Manager in the left-hand column.
    Sorry it's not the magic fix you're probably looking for, but I hope it helps for the time being.
    CG

  • [SOLVED] pacman -Syu took an extremely long time today

    Hey,
    this is the first time I've encountered such an issue:
    today I upgraded as usual (pacman -Syu) but the process took an extremely long time
    (~20 minutes). The last time I upgraded was three days ago.
    The culprits seem to have been the libreoffice-still and texlive packages.
    Has anyone else had this problem? What could have caused this?
    Last edited by Aton (2014-08-31 09:40:25)

    hoschi wrote:Note to myself: Texlive is huge, really huge.
    Unless you've installed https://www.archlinux.org/groups/any/texlive-lang/ and https://www.archlinux.org/packages/extr … ontsextra/ too, texlive is not that big.

  • I have downloaded many themes and it seems as though something is wrong in my toolbar. i am not seeing a refresh button and others, meanwhile my address bar is extremely long like it needs to be made smaller so that my other icons will show

    i am not seeing all of my icons in the top toolbar. i am using green kitties, there is only 1 kitty showing. i do not have a refresh button, nor do i have a drop down button to go back to previous pages. my address bar is extremely long and i can't shorten it to allow missing icons to be displayed.
    does anyone have any suggestions for me please?
    thanks

    Clean the dock connectors.
    Try different sync cables.

  • Regex Pattern Match in an extremely long string

    I need to search a file containing one extremely long line (approximately 1 million characters). The pattern I want to search for is "ABC", as long as it appears at least n times in a row, for whatever n the user inputs. I need the position where this pattern is found. What is the best way to do this? I tried to break the input into blocks of 100000 characters at a time, as reading too many characters at once caused a 'java out of memory' error.
    Then I converted each block to a string in order to search it with a regex. My problem is how to ensure that the last few characters of the current block are also being searched. How do I write the regex expression to do this? Would breaking the input file into multiple lines help?
    eg:
    Searching for ABC as long as it appears at least 3 times continuously ie (ABCABCABC)
    Original Line = XXXXXXXABCABCABCXXXXXXABCX
    The first block of 10 characters read is XXXXXXXABCABC
    The second block of 10 characters read is ABCXXXXXXABCX
    The search result should be position 7 and position 22

    If the sequence of characters is longer than a few hundred KB, then turning it into a String requires you to have enough heap space available in the JVM to store the entire String.
    If that is a problem, an alternative solution is to have a while loop over an InputStream that reads from the source of characters (a file, a network connection, stdin etc.) and looks for the string. Keep a ring buffer the size of the query string, and read the data from the InputStream into it. Then for each character read, compare the content of the ring buffer to the query string.
    This way you will not use more heap space than the size of the query-string, and the size of whatever buffer you use in your InputStream (8KB for the empty constructor of BufferedInputStream at the moment) plus the odds'n ends from the implementation.
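    The block-overlap idea from the question can also be done without a full ring buffer: read the stream in fixed-size blocks and carry the last (pattern length - 1) characters of each block over to the front of the next, so a match that straddles a block boundary is still seen. A minimal sketch, with class and method names made up for illustration:

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.io.UncheckedIOException;
import java.util.ArrayList;
import java.util.List;

// Sketch: report every absolute position of `pattern` in a character stream,
// scanning blockSize chars at a time and keeping a (pattern.length() - 1)
// overlap so cross-boundary matches are not missed.
public class StreamSearch {

    public static List<Long> findAll(Reader in, String pattern, int blockSize) {
        List<Long> positions = new ArrayList<Long>();
        int overlap = pattern.length() - 1;
        StringBuilder window = new StringBuilder();
        long windowStart = 0;              // absolute offset of window.charAt(0)
        char[] buf = new char[blockSize];
        try {
            int n;
            while ((n = in.read(buf)) != -1) {
                window.append(buf, 0, n);
                int from = 0;
                int hit;
                while ((hit = window.indexOf(pattern, from)) != -1) {
                    positions.add(windowStart + hit);
                    from = hit + 1;        // also report overlapping matches
                }
                // Keep only the tail that could begin a cross-boundary match;
                // it is shorter than the pattern, so nothing is reported twice.
                if (window.length() > overlap) {
                    int drop = window.length() - overlap;
                    windowStart += drop;
                    window.delete(0, drop);
                }
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return positions;
    }

    public static void main(String[] args) {
        // The example line from the question, searching for "ABC" repeated 3 times.
        String line = "XXXXXXXABCABCABCXXXXXXABCX";
        System.out.println(findAll(new StringReader(line), "ABCABCABC", 10)); // prints [7]
    }
}
```

    Memory use stays bounded by the block size plus the pattern length, regardless of how long the input line is.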

  • Prevent long GC pauses

    Hello,
    I'm using jdk1.6.
    The problem is, that sometimes I'm facing a long gc pause, specially during "concurrent mode failure".
    Is there any option for jvm to set a max AppStopTime? And what could be the impact setting such a option ?
    Here are the jvm settings I'm currently using:
    -Xms4G -Xmx4G -XX:MaxNewSize=128M -XX:NewSize=128M -XX:MaxPermSize=256M -XX:PermSize=256M
    -server -showversion -Xbatch -XX:+UseLargePages -XX:LargePageSizeInBytes=32M
    -XX:+UseParNewGC -XX:ParallelGCThreads=4 -XX:ParallelGCToSpaceAllocBufferSize=192k -XX:SurvivorRatio=6 -XX:TargetSurvivorRatio=75
    -XX:+UseConcMarkSweepGC -XX:CMSMarkStackSize=32M -XX:CMSMarkStackSizeMax=32M -XX:+CMSParallelRemarkEnabled -XX:-CMSClassUnloadingEnabled -XX:CMSInitiatingOccupancyFraction=80 -XX:+UseCMSInitiatingOccupancyOnly -XX:+DisableExplicitGC
    As far as I know there is nothing else I can change to improve the GC settings. Do you see any other possibilities?
    Thanks,
    Christian

    Depending on how long-lived the majority of the objects in your heap are, you can better adjust the sizes of the new and old generation spaces. If short-lived, make your new gen space larger. You could make it as large as half your total heap using "-XX:NewRatio=1". (NewRatio would replace using MaxNewSize.)
    How do you know if objects are short lived or long lived? If you watch the action in your heap with VisualGC/Visual VM, and in the display you see everything get cleaned up in the eden & survivor spaces with each GC, then you probably have mostly short lived objects.
    You may also see that your survivor spaces are too small. If too small, objects get promoted to the old generation space too quickly, filling up your old gen space unnecessarily, and creating long GC pauses.
    The best situation is where eden and the survivor spaces are sized such that as many objects as possible get cleaned up on an ongoing basis, without moving to the old generation space as they age.
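    As an illustrative sketch of the sizing advice above (values are assumptions, not a recommendation for any particular workload), a 4G heap biased toward short-lived objects might be started with:

```
-Xms4G -Xmx4G
-XX:NewRatio=1
-XX:SurvivorRatio=6
-XX:+UseParNewGC -XX:+UseConcMarkSweepGC
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
```

    NewRatio=1 makes the young generation half the heap (in place of NewSize/MaxNewSize), and the GC log output lets you confirm, in VisualGC or by hand, that eden and the survivor spaces really do empty out at each collection.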

  • Ipad updated to 7.1 but after update takes extremely long time to bring up itunes, everything else works fine

    I updated my ipad to 7.1 this weekend. I am using the same wireless network, etc. Nothing else has changed, but it takes an extremely long time to load itunes on the ipad. Everything from the store itself and the items I have purchased takes anywhere from 3 to 5 minutes to load.
    I have reset the ipad, reset the "network" from the router to the option on the ipad.
    Any other suggestions?

    Check App Store
    1. Tap "Settings"
    2. Tap "iTunes & App Stores"
    3. Tap on your Apple ID
    4. Tap "View Apple ID"
    5. Enter your user name and password.
    6. Tap "Country/Region."
    7. Tap "Change Country/Region"
    8. Select the country/region where you are located.
    9. Tap "Done".
    Note: If the change doesn't take effect, sign out of account and sign in again.

  • Extremely long compression time

    I am trying to compress a 6 minute video into a quicktime movie. It starts out saying it will only take 3 minutes to share. Then it quickly moves up to 17 minutes before coming to a halt.
    Anybody have any ideas on what I am doing wrong or is this common? On previous videos this has never happened. I've tried doing a different video and it is also taking an extremely long time to compress.
    Thanks for the help

    I think possibly it could have something to do with my video settings. I don't know much about video settings.
    We are producing these videos that range from 7-30min from a video camera. We don't need audio (already took that out) and we don't need the best of quality on the video. It obviously needs to be decent enough to see what is going on without straining, but beyond that I don't need much.
    Does anybody have any suggestions on the type of video settings I could use in the expert settings? I believe iMovie is trying to compress much more information than I really need. Thanks for the help.

  • Extremely long video preview export for Audition

    I'm using Premiere Pro CC, and since my last update (I'm on Build 8.2.0 (65)) I have had extremely long export times when attempting to roundtrip to Audition. For example, a 10 minute sequence can take 4 hours before I cancel and send to Audition without video. I know there have been other discussions about this -- is there a solution?

    My version of Photoshop ( 13.0.1 x 64) doesn't have this option listed in the File/export menu. Am I missing something?

  • Iphone 5 -? Extremely long "finishing sync" message on PC, Why

    Extremely long "finishing sync" message on PC, Why

    I've noticed this too at times. But it DOES eventually finish.

  • Extremely long execution time

    Hi,
    lately, I realized that Jubula takes extremely long for everything it does...
    For example, after deleting test results or after executing test cases, the progress steps "Collecting Information" and "Writing Report to Database" take about 15-20 minutes!!
    I use the embedded database and the HTML toolkit, and my project is not big at all!
    I have already had larger projects and never experienced such long delays!
    Any ideas/suggestions why Jubula is suddenly working that slow?
    Thanks

    Hi Nicole,
    Thanks for your question! I think the problem is that the h2 database isn't designed for productive use - it doesn't scale well and will indeed get slower and slower with time (not necessarily size of the project, just time).
    If you need to keep using it then you can do the following:
    - stop Jubula
    - perform an export all to export all of the projects
    - you can also copy the database files (from your home directory under .jubula) to back them up
    - delete the database files from your home directory
    - connect to the database: this will recreate the database
    - then you can import the projects you exported
    Like I say, this isn't the preferred way of working, but the steps above should help with the current performance. Bear in mind that you should ensure that you have backups before you delete the database!
    Hope that helps,
    Alex

  • Strange Long ParNewGC Pauses During Application Startup

    Recently we started seeing long ParNewGC pauses when starting up Kafka that were causing session timeouts:
    [2015-04-24 13:26:23,244] INFO 0 successfully elected as leader (kafka.server.ZookeeperLeaderElector)
    2.111: [GC (Allocation Failure) 2.111: [ParNew: 136320K->10236K(153344K), 0.0235777 secs] 648320K->522236K(2080128K), 0.0237092 secs] [Times: user=0.03 sys=0.01, real=0.02 secs]
    2.599: [GC (Allocation Failure) 2.599: [ParNew: 146556K->3201K(153344K), 9.1514626 secs] 658556K->519191K(2080128K), 9.1515757 secs] [Times: user=18.25 sys=0.01, real=9.15 secs]
    [2015-04-24 13:26:33,443] INFO New leader is 0 (kafka.server.ZookeeperLeaderElector$LeaderChangeListener)
    After much investigation I found that the trigger was the allocation of a 500M static object early in the startup code.  It of course makes no sense that a single large static object in old memory would impact ParNew collections, but it does seem to.  I have created a bug report, but it is still under investigation.
    I have reproduced the problem with a simple application on several Linux platforms including an EC2 instance and the following JREs:
    OpenJDK: 6, 7, and 8
    Oracle: 7 and 8
    Oracle 6 does not seem to have an issue.  All the ParNewGC times are small.
    Here is the simple program that demonstrates the issue:
    import java.util.ArrayList;
    public class LongParNewPause {
        static byte[] bigStaticObject;
        public static void main(String[] args) throws Exception {
            int bigObjSize    = args.length > 0 ? Integer.parseInt(args[0]) : 524288000;
            int littleObjSize = args.length > 1 ? Integer.parseInt(args[1]) : 100;
            int saveFraction  = args.length > 2 ? Integer.parseInt(args[2]) : 10;
            bigStaticObject = new byte[bigObjSize];
            ArrayList<byte[]> holder = new ArrayList<byte[]>();
            int i = 0;
            while (true) {
                byte[] local = new byte[littleObjSize];
                if (i++ % saveFraction == 0) {
                    holder.add(local);
                }
            }
        }
    }
    I run it with the following options:
    -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Xmx2G -Xms2G
    Note that I have not seen the issue with 1G heaps.  4G heaps exhibit the issue (as do heaps as small as 1.2G)
    Here is the output:
    0.321: [GC (Allocation Failure) 0.321: [ParNew: 272640K->27329K(306688K), 0.0140537 secs] 784640K->539329K(2063104K), 0.0141584 secs] [Times: user=0.05 sys=0.02, real=0.02 secs]
    0.368: [GC (Allocation Failure) 0.368: [ParNew: 299969K->34048K(306688K), 0.7655383 secs] 811969K->572321K(2063104K), 0.7656172 secs] [Times: user=2.89 sys=0.02, real=0.77 secs]
    1.165: [GC (Allocation Failure) 1.165: [ParNew: 306688K->34048K(306688K), 13.8395969 secs] 844961K->599389K(2063104K), 13.8396650 secs] [Times: user=54.38 sys=0.05, real=13.84 secs]
    15.036: [GC (Allocation Failure) 15.036: [ParNew: 306688K->34048K(306688K), 0.0287254 secs] 872029K->628028K(2063104K), 0.0287876 secs] [Times: user=0.08 sys=0.01, real=0.03 secs]
    15.096: [GC (Allocation Failure) 15.096: [ParNew: 306688K->34048K(306688K), 0.0340727 secs] 900668K->657717K(2063104K), 0.0341386 secs] [Times: user=0.09 sys=0.00, real=0.03 secs]
    Even stranger is the fact that the problem seems to be limited to objects in the range of about 480M to 512M.  Specifically:
    [503316465,536870384]
    Values outside this range appear to be OK.  Anyone have any thoughts?  Can you reproduce the issue on your machine?

    I have started a discussion on this issue on the hotspot-gc-dev list:
    Strange Long ParNew GC Pauses (Sample Code Included)
    One of the engineers on that list was able to reproduce the issue and there is some discussion there about what might be going on.  I am a GC novice, but, am of the opinion that there is a bug to be found in the ParNew GC code introduced in Java 7.
    Here is a more frightening example. The ParNew GCs keep getting longer and longer - it never stabilized like the previous example I sent did. I killed the process once the ParNew GC times reached almost 1 minute each.
    Bad Case - 500M Static Object:
    java -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Xmx6G -Xms6G LongParNewPause $((500*1024*1024)) 100 100
    0.309: [GC0.309: [ParNew: 272640K->3028K(306688K), 0.0287780 secs] 784640K->515028K(6257408K), 0.0288700 secs] [Times: user=0.08 sys=0.01, real=0.03 secs]
    0.372: [GC0.372: [ParNew: 275668K->7062K(306688K), 0.0228070 secs] 787668K->519062K(6257408K), 0.0228580 secs] [Times: user=0.07 sys=0.00, real=0.03 secs]
    0.430: [GC0.430: [ParNew: 279702K->11314K(306688K), 0.0327930 secs] 791702K->523314K(6257408K), 0.0328510 secs] [Times: user=0.08 sys=0.01, real=0.03 secs]
    0.497: [GC0.497: [ParNew: 283954K->15383K(306688K), 0.0336020 secs] 795954K->527383K(6257408K), 0.0336550 secs] [Times: user=0.08 sys=0.00, real=0.03 secs]
    0.565: [GC0.565: [ParNew: 288023K->21006K(306688K), 0.0282110 secs] 800023K->533006K(6257408K), 0.0282740 secs] [Times: user=0.08 sys=0.00, real=0.03 secs]
    0.627: [GC0.627: [ParNew: 293646K->26805K(306688K), 0.0265270 secs] 805646K->538805K(6257408K), 0.0266220 secs] [Times: user=0.07 sys=0.01, real=0.03 secs]
    0.688: [GC0.688: [ParNew: 299445K->20215K(306688K), 1.3657150 secs] 811445K->535105K(6257408K), 1.3657830 secs] [Times: user=3.97 sys=0.01, real=1.36 secs]
    2.087: [GC2.087: [ParNew: 292855K->17914K(306688K), 6.6188870 secs] 807745K->535501K(6257408K), 6.6189490 secs] [Times: user=19.71 sys=0.03, real=6.61 secs]
    8.741: [GC8.741: [ParNew: 290554K->17433K(306688K), 14.2495190 secs] 808141K->537744K(6257408K), 14.2495830 secs] [Times: user=42.34 sys=0.10, real=14.25 secs]
    23.025: [GC23.025: [ParNew: 290073K->17315K(306688K), 21.1579920 secs] 810384K->540348K(6257408K), 21.1580510 secs] [Times: user=70.10 sys=0.08, real=21.16 secs]
    44.216: [GC44.216: [ParNew: 289955K->17758K(306688K), 27.6932380 secs] 812988K->543511K(6257408K), 27.6933060 secs] [Times: user=103.91 sys=0.16, real=27.69 secs]
    71.941: [GC71.941: [ParNew: 290398K->17745K(306688K), 35.1077720 secs] 816151K->546225K(6257408K), 35.1078600 secs] [Times: user=130.86 sys=0.10, real=35.11 secs]
    107.081: [GC107.081: [ParNew: 290385K->21826K(306688K), 41.4425020 secs] 818865K->553022K(6257408K), 41.4425720 secs] [Times: user=158.25 sys=0.31, real=41.44 secs]
    148.555: [GC148.555: [ParNew: 294466K->21834K(306688K), 45.9826660 secs] 825662K->555757K(6257408K), 45.9827260 secs] [Times: user=180.91 sys=0.14, real=45.98 secs]
    194.570: [GC194.570: [ParNew: 294474K->21836K(306688K), 51.5779770 secs] 828397K->558485K(6257408K), 51.5780450 secs] [Times: user=204.05 sys=0.20, real=51.58 secs]
    246.180: [GC246.180: [ParNew^C: 294476K->18454K(306688K), 58.9307800 secs] 831125K->557829K(6257408K), 58.9308660 secs] [Times: user=232.31 sys=0.23, real=58.93 secs]
    Heap
      par new generation   total 306688K, used 40308K [0x000000067ae00000, 0x000000068fac0000, 0x000000068fac0000)
       eden space 272640K,   8% used [0x000000067ae00000, 0x000000067c357980, 0x000000068b840000)
       from space 34048K,  54% used [0x000000068b840000, 0x000000068ca458f8, 0x000000068d980000)
       to   space 34048K,   0% used [0x000000068d980000, 0x000000068d980000, 0x000000068fac0000)
      concurrent mark-sweep generation total 5950720K, used 539375K [0x000000068fac0000, 0x00000007fae00000, 0x00000007fae00000)
      concurrent-mark-sweep perm gen total 21248K, used 2435K [0x00000007fae00000, 0x00000007fc2c0000, 0x0000000800000000)
    Good Case - 479M Static Object:
    java -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Xmx6G -Xms6G LongParNewPause $((479*1024*1024)) 100 100
    0.298: [GC0.298: [ParNew: 272640K->3036K(306688K), 0.0152390 secs] 763136K->493532K(6257408K), 0.0153450 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
    0.346: [GC0.346: [ParNew: 275676K->7769K(306688K), 0.0193840 secs] 766172K->498265K(6257408K), 0.0194570 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
    0.398: [GC0.398: [ParNew: 280409K->11314K(306688K), 0.0203460 secs] 770905K->501810K(6257408K), 0.0204080 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
    0.450: [GC0.450: [ParNew: 283954K->17306K(306688K), 0.0222390 secs] 774450K->507802K(6257408K), 0.0223070 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
    0.504: [GC0.504: [ParNew: 289946K->18380K(306688K), 0.0169000 secs] 780442K->508876K(6257408K), 0.0169630 secs] [Times: user=0.07 sys=0.01, real=0.02 secs]
    0.552: [GC0.552: [ParNew: 291020K->26805K(306688K), 0.0203990 secs] 781516K->517301K(6257408K), 0.0204620 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
    0.604: [GC0.604: [ParNew: 299445K->21153K(306688K), 0.0230980 secs] 789941K->514539K(6257408K), 0.0231610 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
    0.659: [GC0.659: [ParNew: 293793K->29415K(306688K), 0.0170240 secs] 787179K->525498K(6257408K), 0.0170970 secs] [Times: user=0.07 sys=0.01, real=0.02 secs]
    0.708: [GC0.708: [ParNew: 302055K->23874K(306688K), 0.0202970 secs] 798138K->522681K(6257408K), 0.0203600 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
    0.759: [GC0.760: [ParNew: 296514K->26842K(306688K), 0.0238600 secs] 795321K->528371K(6257408K), 0.0239390 secs] [Times: user=0.07 sys=0.00, real=0.03 secs]
    0.815: [GC0.815: [ParNew: 299482K->25343K(306688K), 0.0237580 secs] 801011K->529592K(6257408K), 0.0238030 secs] [Times: user=0.06 sys=0.01, real=0.02 secs]
    0.870: [GC0.870: [ParNew: 297983K->25767K(306688K), 0.0195800 secs] 802232K->532743K(6257408K), 0.0196290 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
    0.921: [GC0.921: [ParNew: 298407K->21795K(306688K), 0.0196310 secs] 805383K->531488K(6257408K), 0.0196960 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
    0.972: [GC0.972: [ParNew: 294435K->25910K(306688K), 0.0242780 secs] 804128K->538329K(6257408K), 0.0243440 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
    1.028: [GC1.028: [ParNew: 298550K->21834K(306688K), 0.0235000 secs] 810969K->536979K(6257408K), 0.0235600 secs] [Times: user=0.06 sys=0.00, real=0.03 secs]
    1.083: [GC1.083: [ParNew: 294474K->26625K(306688K), 0.0188330 secs] 809619K->544497K(6257408K), 0.0188950 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
    1.133: [GC1.133: [ParNew: 299265K->26602K(306688K), 0.0210780 secs] 817137K->547186K(6257408K), 0.0211380 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
    1.185: [GC1.185: [ParNew: 299242K->26612K(306688K), 0.0236720 secs] 819826K->549922K(6257408K), 0.0237230 secs] [Times: user=0.07 sys=0.00, real=0.03 secs]
    1.240: [GC1.241: [ParNew: 299252K->26615K(306688K), 0.0188560 secs] 822562K->552651K(6257408K), 0.0189150 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
    1.291: [GC1.291: [ParNew: 299255K->26615K(306688K), 0.0195090 secs] 825291K->555378K(6257408K), 0.0195870 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
    1.342: [GC1.342: [ParNew: 299255K->22531K(306688K), 0.0229010 secs] 828018K->554021K(6257408K), 0.0229610 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
    1.396: [GC1.396: [ParNew: 295171K->24505K(306688K), 0.0265920 secs] 826661K->560810K(6257408K), 0.0266360 secs] [Times: user=0.07 sys=0.00, real=0.03 secs]
    1.453: [GC1.453: [ParNew: 297145K->24529K(306688K), 0.0296490 secs] 833450K->563560K(6257408K), 0.0297070 secs] [Times: user=0.09 sys=0.00, real=0.03 secs]
    1.514: [GC1.514: [ParNew: 297169K->27700K(306688K), 0.0259820 secs] 836200K->569458K(6257408K), 0.0260310 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
    1.571: [GC1.572: [ParNew: 300340K->27666K(306688K), 0.0199210 secs] 842098K->572150K(6257408K), 0.0199650 secs] [Times: user=0.07 sys=0.01, real=0.02 secs]
    1.623: [GC1.623: [ParNew: 300306K->27658K(306688K), 0.0237020 secs] 844790K->574868K(6257408K), 0.0237630 secs] [Times: user=0.08 sys=0.00, real=0.02 secs]
    1.678: [GC1.678: [ParNew: 300298K->31737K(306688K), 0.0237820 secs] 847508K->581674K(6257408K), 0.0238530 secs] [Times: user=0.08 sys=0.00, real=0.03 secs]
    1.733: [GC1.733: [ParNew: 304377K->21022K(306688K), 0.0265400 secs] 854314K->573685K(6257408K), 0.0265980 secs] [Times: user=0.08 sys=0.00, real=0.02 secs]
    1.791: [GC1.791: [ParNew: 293662K->25359K(306688K), 0.0249520 secs] 846325K->580748K(6257408K), 0.0250050 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
    1.847: [GC1.847: [ParNew: 297999K->19930K(306688K), 0.0195120 secs] 853388K->581179K(6257408K), 0.0195650 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
    1.898: [GC1.898: [ParNew: 292570K->20318K(306688K), 0.0233960 secs] 853819K->584294K(6257408K), 0.0234650 secs] [Times: user=0.07 sys=0.00, real=0.03 secs]
    1.953: [GC1.953: [ParNew: 292958K->20415K(306688K), 0.0233530 secs] 856934K->587117K(6257408K), 0.0234130 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
    2.007: [GC2.007: [ParNew: 293055K->20439K(306688K), 0.0301410 secs] 859757K->589868K(6257408K), 0.0302070 secs] [Times: user=0.09 sys=0.00, real=0.03 secs]
    2.068: [GC2.068: [ParNew: 293079K->20445K(306688K), 0.0289190 secs] 862508K->592600K(6257408K), 0.0289690 secs] [Times: user=0.09 sys=0.00, real=0.03 secs]
    2.129: [GC2.129: [ParNew: 293085K->29284K(306688K), 0.0218880 secs] 865240K->604166K(6257408K), 0.0219350 secs] [Times: user=0.09 sys=0.00, real=0.02 secs]
    Heap
      par new generation   total 306688K, used 40135K [0x000000067ae00000, 0x000000068fac0000, 0x000000068fac0000)
       eden space 272640K,   3% used [0x000000067ae00000, 0x000000067b898a78, 0x000000068b840000)
       from space 34048K,  86% used [0x000000068d980000, 0x000000068f619320, 0x000000068fac0000)
       to   space 34048K,   0% used [0x000000068b840000, 0x000000068b840000, 0x000000068d980000)
      concurrent mark-sweep generation total 5950720K, used 574881K [0x000000068fac0000, 0x00000007fae00000, 0x00000007fae00000)
      concurrent-mark-sweep perm gen total 21248K, used 2435K [0x00000007fae00000, 0x00000007fc2c0000, 0x0000000800000000)
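
Since the "concurrent mode failure" and "promotion failed" messages mean the CMS concurrent cycle is losing the race against promotion into the old generation, a common starting point is to pin the heap size, size the young generation explicitly, and make CMS start its cycle earlier. The flags below are only a sketch to experiment with, not a tested recipe — the 2g/512m sizes are placeholders based on the 2Gb storage-node configuration described above, and `com.tangosol.net.DefaultCacheServer` is just the standard Coherence storage-node entry point:

```shell
# Illustrative CMS settings for a 2Gb Coherence storage node (Java 6 era flags).
# - Equal -Xms/-Xmx avoids heap resizing; -Xmn fixes the young generation size.
# - CMSInitiatingOccupancyFraction=70 starts the concurrent cycle well before the
#   old generation fills, the usual mitigation for "concurrent mode failure".
# - UseCMSInitiatingOccupancyOnly stops the JVM from adjusting that threshold itself.
java -server \
  -Xms2g -Xmx2g -Xmn512m \
  -XX:+UseConcMarkSweepGC -XX:+UseParNewGC \
  -XX:CMSInitiatingOccupancyFraction=70 \
  -XX:+UseCMSInitiatingOccupancyOnly \
  -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
  -cp coherence.jar com.tangosol.net.DefaultCacheServer
```

If the pauses persist with these settings, the promotion-failed messages can also indicate old-generation fragmentation, in which case a larger heap or fewer, bigger nodes per machine may help more than further flag tuning.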