Reducing full Garbage Collection frequency.

I've been trying to improve performance while inserting a large number of records into an embedded H2 database. Monitoring memory usage suggests that it's being used rather inefficiently. A lot of the objects created by H2 seem to find their way into "tenured" space before being freed. Full mark-and-sweep garbage collections are occurring every couple of seconds, despite the fact that only about 10% of the available heap is occupied.
Any advice on tuning the garbage collector to improve throughput in this case?

I think that if surviving objects max out the to-space (part of the young generation), the remaining objects are copied into the tenured generation.
You can get more info via:
-XX:+PrintHeapAtGC -XX:+PrintGCDetails -XX:+PrintGC -Xloggc:<filename>
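If attaching those flags is awkward, roughly the same information can be sampled from inside the process. The following is a minimal sketch using the standard java.lang.management API (the one-minute sampling interval is an arbitrary choice for illustration):

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    public class GcSampler {
        public static void main(String[] args) throws InterruptedException {
            while (true) {
                // One bean per collector, e.g. "PS Scavenge" (young) and "PS MarkSweep" (full);
                // the names vary with the JVM and the collector selected.
                for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                    System.out.printf("%s: count=%d totalTimeMs=%d%n",
                            gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
                }
                Thread.sleep(60000); // sample once a minute
            }
        }
    }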

Similar Messages

  • Can I force full garbage collection?

    Hi, my program is memory bound: as users load more files, more memory is required, and if the user loads a very large number of files they will eventually run out of memory. So I am trying to detect when there is less than 15% of heap memory left, then force garbage collection, and if that can't free up more than 15% of the heap I will stop the user from loading any more files. The problem is that although I call System.gc() to try and force a full garbage collection, it rarely retrieves enough memory to get below the 15% limit. Using the YourKit profiler, however, I can select the Force Garbage Collection option and it always manages to free up enough memory to get the figure under the 15% limit. In support of this, I found that my program sometimes stops me from loading more files when there is still quite a bit of memory available.
    So my questions are:
    1. I know System.gc() is only a hint to garbage collect, but the docs imply it returns only after the garbage collection (if any) has been done. Is this right, or do I have to wait?
    2. Is there any way to force a complete garbage collection, as the profiler appears to do?
    3. Is there a VM option I could set instead to force the JVM to completely garbage collect at, say, 83%, so that if I then polled that 85% of the heap was being used I would know that it really was, and I wouldn't need to bother trying to garbage collect further? (I'm using Sun's 1.6 JVM on Windows and Linux, and Apple's 1.5 or 1.6 JVM on Macs.)
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;

    public static void checkMemoryWhilstLoadingFiles() throws LowMemoryException {
        MemoryUsage mu = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        // No max defined - future proofing
        if (mu.getMax() == -1)
            return;
        // Only bother once more than 85% of the heap is in use
        if (mu.getUsed() > (mu.getMax() * 0.85f)) {
            MainWindow.logger.warning("Memory low:" + mu);
            System.gc();
            MainWindow.logger.warning("Memory low gc1:" + ManagementFactory.getMemoryMXBean().getHeapMemoryUsage());
            System.gc();
            MainWindow.logger.warning("Memory low gc2:" + ManagementFactory.getMemoryMXBean().getHeapMemoryUsage());
            System.gc();
            MainWindow.logger.warning("Memory low gc3:" + ManagementFactory.getMemoryMXBean().getHeapMemoryUsage());
            mu = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            // Still above the threshold after three GC attempts: refuse to load more files
            if (mu.getUsed() > (mu.getMax() * 0.85f)) {
                MainWindow.logger.severe("Memory too low:" + mu);
                throw new LowMemoryException("Running out of memory:" + mu.getUsed());
            } else {
                MainWindow.logger.warning("Memory usage reduced to:" + mu);
            }
        }
    }
    Thanks for any help, Paul
    Edited by: paultaylor on 27-Jun-2008 11:10

    On all of the current Sun HotSpot JVMs, calling System.gc() will cause a full compacting collection, unless you have -XX:+DisableExplicitGC on your command line, in which case the call is a no-op. Or, if you are running the mostly-concurrent collector (-XX:+UseConcMarkSweepGC) and have the -XX:+ExplicitGCInvokesConcurrent flag on your command line, calling System.gc() will start a concurrent collection (and the calling thread will block until the cycle is finished).
    But calling System.gc() isn't enough to recover all the space that might be recovered. For example, System.gc() will identify objects that are unreferenced but need to have their finalize() methods called before their space becomes available again. So one call to System.gc() won't recover their space. Those finalize() methods need some cycles to run in, so back-to-back (or back-to-back-to-back :-) calls to System.gc() won't help. If you use a lot of finalize() methods, you should leave a lot of time for the finalize() methods to run between the calls to System.gc(). (Better would be to convert your code to use WeakReferences and run your own reference processing queues, and then you could tell when you were done processing references. But that's real work.) Some people try calling System.runFinalization() and wait for that to return, but that has at least two failure modes (details left to the reader).
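    As a rough illustration of the reference-queue approach mentioned above (a sketch only; the class and method names here are invented for the example), a PhantomReference plus a ReferenceQueue lets the application discover when an object has actually become unreachable, instead of relying on finalize():

        import java.lang.ref.PhantomReference;
        import java.lang.ref.Reference;
        import java.lang.ref.ReferenceQueue;
        import java.util.HashSet;
        import java.util.Set;

        public class CleanupTracker {
            private final ReferenceQueue<Object> queue = new ReferenceQueue<Object>();

            // Keep strong references to the PhantomReference objects themselves,
            // otherwise they could be collected before they are ever enqueued.
            private final Set<PhantomReference<Object>> pending =
                    new HashSet<PhantomReference<Object>>();

            public void track(Object resource) {
                pending.add(new PhantomReference<Object>(resource, queue));
            }

            // Poll after a collection; anything that appears here is definitely unreachable.
            public int drain() {
                int reclaimed = 0;
                Reference<?> ref;
                while ((ref = queue.poll()) != null) {
                    pending.remove(ref);
                    reclaimed++; // release any associated external state here
                }
                return reclaimed;
            }
        }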
    In addition, there are details like: if there is still 15% of the heap free, then we won't aggressively clear SoftReferences when you call System.gc(). We might if you waited until the heap was full and we collected it on our own, since we know how much free space there will be after a collection at the point where we are choosing which SoftReferences to clear, and use that to decide how aggressively to clear SoftReferences.
    There is no method to force the collector to do a compacting collection at, say, 85% full. There is an option to have the mostly-concurrent collector start a collection cycle that way, but there's no way to find out if a collection cycle is running.
    You are skating on the edge of the qualities of service offered by the different collectors in the various JVM's available. That weakens your ability to "write once, run anywhere".

  • Full Garbage Collection Problem

    Hi All,
    We are working on NetWeaver Application Server JAVA 7.0
    I am getting an error message in one of the EWA reports for a JAVA system. The red alert says:
    The maximum ratio of full garbage collections to total garbage collections in the reported interval was higher than 90%.
    In order to solve the above problem, I increased the heap memory for all JAVA server nodes to 3072 MB (earlier it was 2048 MB for all the server nodes). However, I am still getting the same error in the EWA report.
    Can anyone help me in further analysing and solving the above problem?
    Your help is appreciated.

    Here are links to some of the tools. I have worked with [IBM GC for IBM JVM|http://www.ibm.com/developerworks/java/library/j-ibmtools2/index.html]. You may have to try others that can read Sun JVM's GC log.
    http://www.tagtraum.com/gcviewer.html
    http://www.yourkit.com/overview/index.jsp
    https://h20392.www2.hp.com/portal/swdepot/displayProductInfo.do?productNumber=HPJMETER
    http://java.sun.com/performance/jvmstat/visualgc.html
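    If all you need is the full-to-total GC ratio that the EWA alert is based on, a few lines over the raw log can compute it. This is only a sketch and assumes the classic HotSpot -verbose:gc format ("[GC ...]" and "[Full GC ...]" lines, as seen in other threads on this page); pass the log file path as the first argument:

        import java.io.BufferedReader;
        import java.io.FileReader;

        public class GcRatio {
            public static void main(String[] args) throws Exception {
                int full = 0, minor = 0;
                BufferedReader in = new BufferedReader(new FileReader(args[0]));
                String line;
                while ((line = in.readLine()) != null) {
                    if (line.contains("Full GC")) {
                        full++;                      // major/full collections
                    } else if (line.contains("[GC")) {
                        minor++;                     // ParNew / DefNew / scavenge lines
                    }
                }
                in.close();
                int total = full + minor;
                System.out.printf("full=%d minor=%d ratio=%.1f%%%n",
                        full, minor, total == 0 ? 0.0 : 100.0 * full / total);
            }
        }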

  • Full garbage collection issue, not releasing/flagging memory

    I have the following problem running on a multi-CPU Windows server with Java 1.4.2_05 using WebLogic 8.1:
    During the lifecycle of the web application (under load, but not too heavy), memory usage seems OK and garbage collection is called regularly. Suddenly the used heap starts to rise very fast, and after a while even a full garbage collection cycle does not release any memory anymore.
    I am sure that, from our coding, we release memory correctly, and normally we should only use about 5 to 10 MB for each user max (with normal DefNew garbage collections).
    I tried changing the garbage collection parameters, but this does not solve the problem. The best scenario was with the concurrent collector, and I got this output at roughly the end:
    [GC 100202K->93511K(115628K), 0.0091472 secs]
    [GC 148480K->139612K(163808K), 0.0225914 secs]
    [Full GC[Unloading class sun.reflect.GeneratedSerializationConstructorAccessor289]
    [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor290]
    [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor273]
    153750K->133006K(164064K), 1.2434402 secs]
    [GC 148939K->137948K(203264K), 0.0223085 secs]
    [GC 188789K->177116K(203264K), 0.0180729 secs]
    [Full GC[Unloading class sun.reflect.GeneratedSerializationConstructorAccessor312]
    [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor322]
    [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor309]
    189788K->170264K(203264K), 1.1851945 secs]
    [Full GC 203228K->203227K(203264K), 1.2876122 secs]
    [Full GC 203263K->203233K(203264K), 1.3354548 secs]
    [Full GC 203263K->203258K(203264K), 1.2873518 secs]
    <Jan 17, 2007 9:40:40 AM EST> <Error> <HTTP> <BEA-101017> <[ServletContext(id=33114655,name=console,context-path=/console)] Root cause of ServletException.
    java.lang.OutOfMemoryError
    >
    [Full GC 203263K->203233K(203264K), 1.2814516 secs]
    [Full GC 203233K->203231K(203264K), 1.6029044 secs]
    [Full GC 203263K->203242K(203264K), 1.3081352 secs]
    <Jan 17, 2007 9:41:51 AM EST> <Emergency> <WebLogicServer> <BEA-000210> <The WebLogic Server is no longer listening for connections.>
    [Full GC 203263K->203247K(203264K), 1.3161194 secs]
    [Full GC 203263K->203249K(203264K), 1.2954988 secs]
    [Full GC 203263K->203247K(203264K), 1.6423404 secs]
    <Jan 17, 2007 9:41:57 AM EST> <Alert> <WebLogicServer> <BEA-000218> <Server shutdown has been requested by <WLS Kernel>>
    [Full GC 203263K->203250K(203264K), 1.3161025 secs]
    Another strange item: I set the maximum amount of memory to 512 MB with the -Xmx parameter, and I am almost sure that value is being used, but the heap never gets higher than 203 MB. Does anyone know why this is?
    Another strange item: the monitoring in the WebLogic code indicates 32 MB of usage (relative memory usage seems to be OK, but the quantity indication is just plain wrong) with 15 threads running.
    This problem does not exist when using JBoss 4.0.2 or 4.0.3 (standard j2ee settings).
    If anyone has an idea or can help me, I would appreciate it very very much. :)

    Hi,
    Is this issue resolved?
    We are facing the same problem.
    1. We have checked the CPU and memory utilization; everything is normal.
    2. The GC logs show continuous Full GC calls.
    3. After restarting the Resin server, the system works normally.
    Environment details:
    Resin ./resin-pro-3.0.18 on SUSE Linux
    Java JDK 1.4.2_08
    Please suggest.
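    One thing worth checking, given that the heap in the log above tops out around 203MB even though -Xmx512m was intended: confirm which maximum the JVM actually picked up, since in WebLogic or Resin it is easy to set the flag in a startup script that is not the one being used. A minimal check (plain Java, works on 1.4 as well):

        public class HeapCheck {
            public static void main(String[] args) {
                // Should be close to the -Xmx value you think you configured.
                long maxBytes = Runtime.getRuntime().maxMemory();
                System.out.println("Max heap reported by the JVM: " + (maxBytes / (1024 * 1024)) + " MB");
            }
        }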

  • How to specify when Full Garbage Collections occur in the Old Generation

    Hi. We seem to be having a problem with a number of JVMs (1.5.0_17-b04) that run a component of a Document Management application. This component stores a large amount of information in caches which reside in the Old Generation. Although these cache sizes can be somewhat controlled by the application, they are currently taking about 85% of the Old Generation space. Fortunately, very few objects get tenured into the Old Generation - they all are cleaned up in the New Generation space.
    The problem we are seeing is that with the Old Generation at 85% full, there are constant full GCs occurring. Since the caches cannot be removed, the system frantically tries to collect objects that cannot be collected.
    We have three solutions in mind. The first is to increase the memory allocation to the Old Generation so that the caches take a smaller percentage of the available memory allocation. The second would be to decrease the size of the caches; but this is set more by the number of documents in the application and cannot be made much smaller.
    The third solution is to configure the JVM so that Garbage Collections in the Old Generation do not occur until the memory is more than a specific percentage of memory in the Old Generation. We would then set this percentage to be higher than the amount of memory being used by the caches.
    So, is it possible to tell the JVM to only run a Full GC when the memory in the Old Generation is greater than a specific value (say 85% full)?
    Thanks for your help.
    Andre Fischer.

    afischer wrote:
    The third solution is to configure the JVM so that Garbage Collections in the Old Generation do not occur until the memory is more than a specific percentage of memory in the Old Generation. We would then set this percentage to be higher than the amount of memory being used by the caches.
    So, is it possible to tell the JVM to only run a Full GC when the memory in the Old Generation is greater than a specific value (say 85% full)?
    Switch to the CMS collector.
    -XX:+UseConcMarkSweepGC
    -XX:CMSInitiatingOccupancyFraction=86
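    If the collector still starts cycles earlier than the configured fraction, there is a related HotSpot flag that makes CMS honour the configured occupancy rather than its own heuristics; worth verifying against the exact JVM version in use:
    -XX:+UseCMSInitiatingOccupancyOnly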

  • Full Garbage Collection

    Hi Friends,
    I'm using WebLogic Workflow for my project. Last night I got an error; once I went through the BEA documentation, I learned that this error comes up when "the application calls a webservice and the webservice in turn calls a stateless or stateful EJB which fails". But my application is not using any sort of webservices. So I tried to find the problem, and finally I found that it is because of garbage collection of the heap. It is taking 3.8508577 seconds, and I believe that during this time the JVM's GC thread gets the highest priority and it is killing the application thread which should be executing as usual.
    Can you guide me on how to catch this exception so that my application won't get affected? The actual error, which is related to garbage collection, says: [Full GC 313152K -> 105060K (1004928K), 3.8508577 secs]. I'm using JDK 1.4.
    Thanks & Regards
    [email protected]

    Replies in this thread.

  • Simulate full garbage collection pauses

    Hi,
    I'm doing some performance tuning for our Java systems, and one thing I want to test is how the applications behave when "stop-the-world" full GC pauses of various durations occur in our JVMs. We have seen problems in our systems where longer GC pauses have negative effects. However, it is difficult to recreate these conditions using the applications themselves. Does anyone know if it is possible to force a JVM to do a "stop-the-world" pause of a specified duration, to simulate a GC?
    Notes -
    1) I'm not talking about forcing a GC - I know how to do this, but it gives no control over the duration of the GC. I appreciate I might be able to work up a solution by writing a program to use up X amount of memory followed by a forced GC, but I'm hoping for something simpler!
    2) We have already exhaustively tuned our JVMs to reduce the frequency and duration of full GCs as much as possible, but they do still need to occur occasionally. We have already engaged with the forums on how best to do this, so no need for more info on that!
    3) We are using multiple JVMs which communicate with each other (RMI, etc.), and a large part of what I'm testing is the effect on JVM A of a long GC pause in JVM B while A is communicating with B.
    4) JVM version is 1.5.0_u18, normally running on Solaris 9/10
    Thanks in advance for any insights!
    Regards,
    Adrian

    Hi,
    The docs here:
    [http://docs.sun.com/app/docs/doc/806-1367/6jalj6mv1?a=view|http://docs.sun.com/app/docs/doc/806-1367/6jalj6mv1?a=view]
    seem to suggest that something like what I want to do could have been achieved in Java 1.2 by sending SIGQUIT to the process. However, in Java 5, SIGQUIT just creates a thread dump. Does anyone know if it's possible to achieve the behaviour from that link in Java 5 by any other means / signal?
    Regards,
    Adrian
    Edited by: AdrianFitz on 05-Mar-2010 13:40
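    Following the poster's own idea of using up memory and then forcing a collection, a throwaway class like the sketch below (all names invented for the example) can be dropped into JVM B to provoke long full GCs on demand. The pause length is only loosely controllable: run it with a modest -Xmx so the retained data is a large fraction of the heap, and tune the object count to taste.

        import java.util.ArrayList;
        import java.util.List;

        public class GcPauseGenerator {
            // A large set of small live objects gives the collector plenty of work
            // to trace and copy, which is what makes the induced full GC slow.
            private static final List<byte[]> retained = new ArrayList<byte[]>();

            public static void main(String[] args) throws InterruptedException {
                while (true) {
                    // Fill a good chunk of the heap with live data...
                    for (int i = 0; i < 2000000; i++) {
                        retained.add(new byte[64]);
                    }
                    // ...then request a full collection and drop the data for the next round.
                    System.gc();
                    retained.clear();
                    Thread.sleep(30000); // interval between induced pauses (arbitrary)
                }
            }
        }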

  • Garbage collection Java Virtual Machine : Hewlett-Packard Hotspot release 1.3.1.01

    "Hi,
    I try and understand the mechanism of garbage collection of the Java Virtual Machine : Hewlett-Packard Hotspot release 1.3.1.01.
    There is description of this mechanism in the pdf file : "memory management and garbage collection" available at the paragraph "Java performance tuning tutorial" at the page :
    http://h21007.www2.hp.com/dspp/tech/tech_TechDocumentDetailPage_IDX/1,1701,1607,00.html
    Regarding my question :
    Below is an extract of the log file of garbage collections. This extract has 2 consecutive garbage collections.
    (each begins with "<GC:").
    <GC: 1 387875.630047 554 1258496 1 161087488 0 161087488 20119552 0 20119552 334758064 238778016 335544320 46294096 46294096 46399488 5.319209 >
    <GC: 5 387926.615209 555 1258496 1 161087488 0 161087488 0 0 20119552 240036512 242217264 335544320 46317184 46317184 46399488 5.206192 >
    There are 2 "full garbage collections", one of reason "1" and one of reason "5".
    For the first one "Old generation After " =238778016
    For the second "Old generation After " =238778016
    Thus, "Old generation Before garbage collection" of the second is higher than "Old generation After garbage collection". Why?
    I expected all objects to be allocated in the "Eden" space. And therefore I did not expect to s

    I agree, but my current HP support is not very good on JVM issues.
    Rob Woollen <[email protected]> wrote:
    You'd probably be better off asking this question to HP.
    -- Rob
    Martial wrote:
    The object of this mail is the Hewlett-Packard 1.3.1.01 HotSpot Java Virtual Machine release and its garbage collection mechanism.
    I am interested in the "-Xverbosegc" option for garbage collection monitoring.
    I have been through the online document:
    http://www.hp.com/products1/unix/java/infolibrary/prog_guide/java1_3/hotspot.html#-Xverbosegc
    I would like to find out more about the garbage collection mechanism and need further information to understand the result of the log file generated with the "-Xverbosegc" option.
    For example, here is an extract of a garbage collection log file generated with the Hewlett-Packard HotSpot Java Virtual Machine, release 1.3.1.01.
    These are 2 consecutive rows of the file:
    <GC: 5 385565.750251 543 48 1 161087488 0 161087488 0 0 20119552 264184480 255179792 335544320 46118384 46118384 46137344 5.514721 >
    <GC: 1 385876.530728 544 1258496 1 161087488 0 161087488 20119552 0 20119552 334969696 255530640 335544320 46121664 46106304 46137344 6.768760 >
    We have 2 full garbage collections, one of Reason 5 and the next one of Reason 1.
    What happened between these 2 garbage collections, given that the "Old generation Before" of row 2 is higher than the "Old generation After" of row 1? I expected objects to be initially allocated in Eden, and so the old generation should not change between the end of one garbage collection and the start of the next.
    Could you please clarify this issue and/or give more information about garbage collection mechanisms with the Hewlett-Packard HotSpot Java Virtual Machine, release 1.3.1.01.

  • EWA Alert A- high ratio of full garbage

    Hi All,
    I have been receiving "RED" EWA report for pure JAVA stack from past 3 weeks. Earlier also I had got it and then the heap memory size was 1024. I had increased to 3GB and then I started getting a "YELLO" EWA report alert. However, from past 3 weeks I have been getting a red EWA report for full Garbage Collection problem.
    I had gone throug some of the threads here and found some usefule info like, Paul mentioned in below link:
    [EWA Alert A- high ratio of garbage collection|EarlyWatch Alert A - high ratio of full garbage;
    How have gone to th dev_server0 file and saw some of the garbage collections there. I am quite unable to get the details mentioned there like:
    1.
    Mon Dec 12 07:43:12 2011
    796020.432: [GC 796020.432: [ParNew: 319604K->28169K(320128K),
    0.0588451 secs] 2671315K->2382219K(3116672K), 0.0589239 secs]
    2.
    Thu Dec 08 10:40:24 2011
    461123.817: [Full GC 461123.817: [Tenured
    Thu Dec 08 10:40:25 2011
    [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor150]
    [Unloading class sun.reflect.GeneratedMethodAccessor543]
    [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor220]......
    I think the first one is a minor garbage collection and the latter one is a "Full GC", as the name itself says. Correct me if I am wrong.
    Can anyone help me in understanding the above details?
    What should I do to remove this from my EWA alert?
    Regards,
    Faisal

    Hi,
    I have gone through the server nodes' std_server<n>.out files (I have 3 server nodes) and found that on today's date "Full GC" is written in the logs multiple times. Can anyone help me diagnose this and reach the cause of the problem?
    Regards
    Faisal

  • Garbage Collection Pauses in a client application: Thrashing.

    Hi,
    we have put in production a financial trading application written in Java. We are experiencing strange behaviour from the garbage collector: GC pauses every 4-5 hours which raise CPU usage to 100% for 30 seconds. These pauses are non-deterministic. Memory consumption of the entire application is very low (about 30MB).
    This is the strange behaviour we have detected in the GC log file:
    5020.027: [GC 27847K->14659K(63936K), 0.0086360 secs]
    5020.036: [Inc GC 14659K->24546K(63936K), 0.0149416 secs]
    5020.107: [GC 27842K->14658K(63936K), 0.0086947 secs]
    5020.116: [Inc GC 14658K->24546K(63936K), 0.0094716 secs]
    5020.181: [GC 27842K->14658K(63936K), 0.0086846 secs]
    5020.190: [Inc GC 14658K->24546K(63936K), 0.0095778 secs]
    5020.255: [GC 27842K->14658K(63936K), 0.0102155 secs]
    5020.266: [Inc GC 14658K->24546K(63936K), 0.0084659 secs]
    5020.335: [GC 27842K->14658K(63936K), 0.0088606 secs]
    5020.344: [Inc GC 14658K->24554K(63936K), 0.0084514 secs]
    As you can see, within one second the GC and the Inc GC are called many times. WHY?
    We have read articles and tried all the various known settings, but we didn't find any solution and at the moment we don't know how to proceed.
    Thank you for your hint
    Piero

    Which settings have you tried? By the looks of things you've not set any of your JVM memory variables.
    Try bumping up your maximum heap space using -Xmx.
    If you are running on a nice server, try using -Xms256m -Xmx256m -Xmn64m; these should help for a start.
    Also, why are you using the incremental garbage collector when you have such a tiny application? As you can see, a full garbage collection is only taking 0.01 seconds.
    Also try reading this http://java.sun.com/docs/hotspot/gc/

  • Garbage Collection Mysteries

    I have an application that at times consumes huge amounts of heap, on the order of 100 MB. I added some code to call System.gc() when I dispose of the window that is responsible for making such huge allocations. My understanding is that System.gc() is supposed to do a full garbage collection, but when I am watching the heap in the Eclipse profiler it does not seem to reclaim very much. In almost every case, when I click the garbage collect button in the profiler it seems to do a much better job of collecting a lot.
    What is the difference between explicitly invoking the garbage collector from the Eclipse profiler and calling System.gc() in code when the window is disposed?
    How can I get my code to automatically make the garbage collector work so well?
    Cheers, Eric

    jschell wrote:
    Might note however that calling gc() is unlikely to do anything to make your application better.
    Ain't it the truth. In fact it may even make it worse. Sane applications should not need manual GC calls: the runtime does what it is supposed to do itself, and if it doesn't, you should investigate and fix whatever is preventing it from doing so. First make sure that there is in fact an issue, of course; perhaps the app simply needs 100 MB of heap space at some point in its lifetime. That isn't exactly a huge amount of memory, especially in a Java or .NET VM environment.

  • High Eden Java Memory Usage/Garbage Collection

    Hi,
    I am trying to make sure that my ColdFusion server is optimised to the max, and to find out what the normal limits are.
    Basically, it looks like at times my servers can run slow, but it is possible that this is caused by a very old, bloated code base.
    JRun can sometimes have very high CPU usage, so I purchased Fusion Reactor to see what is going on under the hood.
    Here are my current Java settings (running v6u24):
    java.args=-server -Xmx4096m -Xms4096m -XX:MaxPermSize=256m -XX:PermSize=256m -Dsun.rmi.dgc.client.gcInterval=600000 -Dsun.rmi.dgc.server.gcInterval=600000 -Dsun.io.useCanonCaches=false -XX:+UseParallelGC -Xbatch ........
    With regard to memory, the only space that seems to see a lot of garbage collection is the Eden space. It climbs to nearly 1.2GB in total just under every minute, at which point GC kicks in and the usage drops to about 100MB.
    Survivor memory grows to about 80-100MB over the space of 10 minutes but drops to 0 after the scheduled full GC runs. Old Gen memory fluctuates between 225MB and 350MB with small steps (~50MB) up or down when full GC runs every 10 minutes.
    I initially had the heap set to 2GB in total, giving about 600MB to the Eden space. When I looked at the graphs from Fusion Reactor I could see that there was (minor) garbage collection about 2-3 times a minute when the memory usage maxed out the entire 600MB, which seemed a high frequency to my untrained eye. I then upped the memory to 4GB in total (~1.2GB automatically given to the Eden space) to see the difference and saw that GC happened 1-2 times per minute.
    Is it normal in Coldfusion that the Eden memory would grow so quickly and have garbage collection run so often? i.e do these graphs look normal?
    Also should I somehow redistribute the memory available to give the Eden memory more since it seems to be where all the action is?
    Any other advice for performance improvements would be much appreciated.
    Note: These graphs are not from a period where jrun had high CPU.
    Here are the graphs:
    PS Eden Space Graph
    PS Survivor Space Graph
    PS Old Gen Graph
    PS Perm Gen Graph
    Heap Memory Graph
    Heap/Non Heap Memory Graph
    CPU Graph
    Request Average Execution Time Graph
    Request Activity Graph
    Code Cache Graph

    Hi,
    >Is it normal in Coldfusion that the Eden memory would grow so quickly and have garbage collection run so often?
    Yes, it is normal to garbage collect Eden often. That is a minor garbage collection.
    >Also should I somehow redistribute the memory available to give the Eden memory more since it seems to be where all the action is?
    Sometimes it is good to set Eden (Eden and its two Survivor Spaces combined make up the New or Young Generation part of the JVM heap) to a smaller size. I know what you're thinking: why make it smaller when I want to make it bigger? Give less a try (sometimes less = more, and bigger is not always better) and monitor the situation. I like to use the -Xmn switch; some sources say to use other methods. Perhaps you could try java.args=-server -Xmx4096m -Xms4096m -Xmn172m etc. I had better mention: make a backup copy of jvm.config before applying changes. Having said that, now you know how to set the size bigger if you want.
    I think the JVM is perhaps making some poor decisions with sizing the heap. With Eden growing to 1GB and then being evacuated, not many objects are surviving and therefore not being promoted to the Old Generation. This ultimately means an object will need to be loaded into Eden again later, rather than being referenced from the Old Generation part of the heap. That adds up to poor performance.
    >Any other advice for performance improvements would be much appreciated.
    You are using the Parallel garbage collector. Perhaps you could enable it to run multi-threaded, reducing the duration of the garbage collections, with JVM args ... -XX:+UseParallelGC -XX:ParallelGCThreads=N etc., where N = CPU cores (e.g. quad core = 4).
    HTH, Carl.
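    To see exactly how the heap is being carved up between Eden, the survivor spaces and the old generation before and after a change like -Xmn, something like this sketch can be run inside the server (standard management API; pool names such as "PS Eden Space" depend on the collector in use):

        import java.lang.management.ManagementFactory;
        import java.lang.management.MemoryPoolMXBean;
        import java.lang.management.MemoryType;

        public class HeapPools {
            public static void main(String[] args) {
                for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                    if (pool.getType() != MemoryType.HEAP) {
                        continue; // skip perm gen / code cache style pools
                    }
                    // e.g. "PS Eden Space", "PS Survivor Space", "PS Old Gen" with -XX:+UseParallelGC
                    System.out.printf("%-20s used=%dMB committed=%dMB max=%dMB%n",
                            pool.getName(),
                            pool.getUsage().getUsed() >> 20,
                            pool.getUsage().getCommitted() >> 20,
                            pool.getUsage().getMax() >> 20);
                }
            }
        }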

  • Cache query results in too much garbage collection activity

    Oracle Coherence Version 3.6.1.3 Enterprise Edition: Production mode
    JRE 6 Update 21
    Linux OS 64 bit
    The application uses a Customer object with the following structure:
    Customer(CustID, FirstName, LastName, CCNumber, OCNumber)
    Each property of Customer is an inner class having getValue as one of its methods returning a value. The getValue methods of CCNumber and OCNumber return a Long value. There are 150m instances of Customer in the cache. To hold this much data in the cache we are running several nodes on 2 machines.
    The following code is used to create indexes on CCNumber and OCNumber:
         ValueExtractor[] valExt = new ValueExtractor[]{
              new ReflectionExtractor("getCCNumber"), new ReflectionExtractor("getValue")};
         ChainedExtractor chExt = new ChainedExtractor(valExt);
         Long value = new Long(0);
         Filter f = new NotEqualsFilter(chExt, value);
         // Only index entries whose CCNumber value is non-zero
         ValueExtractor condExtractor = new ConditionalExtractor(f, chExt, true);
         cache.addIndex(condExtractor, false, null);
    The client code queries the cache with the following code:
         ValueExtractor[] valExt1 = new ValueExtractor[]{
              new ReflectionExtractor("getCCNumber"), new ReflectionExtractor("getValue")};
         ChainedExtractor chExt1 = new ChainedExtractor(valExt1);
         EqualsFilter filter1 = new EqualsFilter(chExt1, ccnumber);
         ValueExtractor[] valExt2 = new ValueExtractor[]{
              new ReflectionExtractor("getOCNumber"), new ReflectionExtractor("getValue")};
         ChainedExtractor chExt2 = new ChainedExtractor(valExt2);
         EqualsFilter filter2 = new EqualsFilter(chExt2, ocnumber);
         // Match on either CCNumber or OCNumber
         AnyFilter anyFilter = new AnyFilter(new Filter[]{filter1, filter2});
         cache.entrySet(anyFilter);
    The observation is that for 20 client threads the application performs well (average response time = 200ms), but as the number of client threads increases the performance degrades disproportionately (a query takes anywhere between 1000ms and 8000ms with 60 threads). I think this is because the Eden space fills up very fast as the number of client threads goes up. The number of collections per second rises with the number of client threads: there are almost 2-3 ParNew collections every second with 60 client threads, whereas there is only 1 collection per second with 20 client threads. Even a 100-200ms pause degrades the overall query performance.
    My question is: why is Coherence creating so many objects that Eden fills up so fast? Is there anything I need to do in my code?

    Hi Coh,
    The reason for so much garbage is that you are using ReflectionExtractors in your filters; I assume you do not have any indexes on your caches either. This means that each time you execute a query, Coherence has to scan the cache for matches to the filter, like a full table scan in a DB. For each entry in the cache, Coherence has to deserialize that entry into a real object and then call the methods in the filters using reflection. Once the query is finished, all these deserialized objects are garbage that needs to be collected. For a big cache this can be a lot of garbage.
    You can change to POF extractors to save the deserialization step, which should reduce the garbage quite a bit, although not eliminate it. You could also use indexes, which should eliminate pretty much all of the garbage you are seeing during queries.
    JK
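    As a rough sketch of that suggestion (the POF indices, cache name and lookup values below are invented; they must match your actual Customer POF serializer), the reflection-based chain could be replaced with PofExtractors used both for the indexes and for the query, so the filter never has to deserialize whole Customer objects:

        import com.tangosol.io.pof.reflect.SimplePofPath;
        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import com.tangosol.util.Filter;
        import com.tangosol.util.ValueExtractor;
        import com.tangosol.util.extractor.PofExtractor;
        import com.tangosol.util.filter.AnyFilter;
        import com.tangosol.util.filter.EqualsFilter;

        public class PofQueryExample {
            // Placeholder POF indices - these must match the Customer POF serialization.
            private static final int CC_NUMBER_POF = 3;
            private static final int OC_NUMBER_POF = 4;
            private static final int VALUE_POF     = 0;

            public static void main(String[] args) {
                NamedCache cache = CacheFactory.getCache("customers"); // cache name assumed

                // Extract CCNumber.getValue / OCNumber.getValue straight from the POF stream.
                ValueExtractor ccExtractor = new PofExtractor(Long.class,
                        new SimplePofPath(new int[] { CC_NUMBER_POF, VALUE_POF }));
                ValueExtractor ocExtractor = new PofExtractor(Long.class,
                        new SimplePofPath(new int[] { OC_NUMBER_POF, VALUE_POF }));

                // Index both attributes so entrySet() does not scan and deserialize the cache.
                cache.addIndex(ccExtractor, false, null);
                cache.addIndex(ocExtractor, false, null);

                Long ccnumber = Long.valueOf(1234L); // sample lookup values
                Long ocnumber = Long.valueOf(5678L);
                Filter query = new AnyFilter(new Filter[] {
                        new EqualsFilter(ccExtractor, ccnumber),
                        new EqualsFilter(ocExtractor, ocnumber) });
                System.out.println("matches: " + cache.entrySet(query).size());
            }
        }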

  • Aggressive Heap and garbage collection in 1.4.2

    We are trying to monitor the memory usage and the garbage collection for one of our application servers. We are using JDK 1.4.2 b-19 on Solaris 8. The machine has four 900 MHz CPUs and 8GB RAM. The VM options are:
    -server -XX:+AggressiveHeap -Xms3512m -Xmx3512m -verbose:gc
    -XX:+PrintHeapAtGC -XX:+PrintTenuringDistribution -XX:+PrintGCTimestamps
    The first full GC kicks in when the old generation gets full, which is understandable. After that, however, full GCs kick in much more frequently and even the GC pauses are fairly long. Below is some of the output from the GC log.
    For the first GC the old generation was almost full.
    PSYoungGen total 1379840K, used 1324288K
    PSOldGen total 2123648K, used 2103378K
    For the second GC, the young generation is at 25MB out of 1300MB and the old generation at 1100MB out of 2100MB. Still, the full GC kicked in and took 23 secs.
    PSYoungGen total 1356608K, used 25856K
    PSOldGen total 2123648K, used 1169626K
    I understand that adaptive sizing is kicking in (by default), but it should make life better, not worse. If the full GC could wait until the old generation gets full again, I would get fewer full GC pauses, further apart.
    Any idea as to what I can do?
    thanks
    vinay
    {Heap before GC invocations=108:
    Heap
    PSYoungGen      total 1379840K, used 1324288K [0x1a000000, 0x73e20000, 0x73e20000)
      eden space 228096K, 100% used [0x1a000000,0x27ec0000,0x27ec0000)
      from space 1151744K, 95% used [0x27ec0000,0x6ad40000,0x6e380000)
      to   space 92800K, 0% used [0x6e380000,0x6e380000,0x73e20000)
    PSOldGen        total 2123648K, used 2103378K [0x73e20000, 0xf5800000, 0xf5800000)
      object space 2123648K, 99% used [0x73e20000,0xf4434af8,0xf5800000)
    PSPermGen       total 31872K, used 31768K [0xf5800000, 0xf7720000, 0xf9800000)
      object space 31872K, 99% used [0xf5800000,0xf7706200,0xf7720000)
    2842.142: [Full GC 3427666K->1083494K(3503488K), 26.6495189 secs]
    Heap after GC invocations=108:
    Heap
    PSYoungGen      total 1379840K, used 0K [0x1a000000, 0x73e20000, 0x73e20000)
      eden space 1287040K, 0% used [0x1a000000,0x1a000000,0x688e0000)
      from space 92800K, 0% used [0x6e380000,0x6e380000,0x73e20000)
      to   space 92800K, 0% used [0x688e0000,0x688e0000,0x6e380000)
    PSOldGen        total 2123648K, used 1083494K [0x73e20000, 0xf5800000, 0xf5800000)
      object space 2123648K, 51% used [0x73e20000,0xb6039978,0xf5800000)
    PSPermGen       total 63744K, used 31768K [0xf5800000, 0xf9640000, 0xf9800000)
      object space 63744K, 49% used [0xf5800000,0xf7706200,0xf9640000)
    {Heap before GC invocations=114:
    Heap
    PSYoungGen      total 1356608K, used 25856K [0x1a000000, 0x73e20000, 0x73e20000)
      eden space 1240576K, 0% used [0x1a000000,0x1a000000,0x65b80000)
      from space 116032K, 22% used [0x6ccd0000,0x6e610000,0x73e20000)
      to   space 116032K, 0% used [0x65b80000,0x65b80000,0x6ccd0000)
    PSOldGen        total 2123648K, used 1169626K [0x73e20000, 0xf5800000, 0xf5800000)
      object space 2123648K, 55% used [0x73e20000,0xbb456858,0xf5800000)
    PSPermGen       total 65536K, used 31773K [0xf5800000, 0xf9800000, 0xf9800000)
      object space 65536K, 48% used [0xf5800000,0xf7707578,0xf9800000)
    3153.022: [Full GC 1195482K->1149586K(3482688K), 23.3155823 secs]
    Heap after GC invocations=114:
    Heap
    PSYoungGen      total 1359040K, used 0K [0x1a000000, 0x73e20000, 0x73e20000)
      eden space 1245440K, 0% used [0x1a000000,0x1a000000,0x66040000)
      from space 113600K, 0% used [0x6cf30000,0x6cf30000,0x73e20000)
      to   space 113600K, 0% used [0x66040000,0x66040000,0x6cf30000)
    PSOldGen        total 2123648K, used 1149586K [0x73e20000, 0xf5800000, 0xf5800000)
      object space 2123648K, 54% used [0x73e20000,0xba0c4830,0xf5800000)
    PSPermGen       total 65536K, used 31773K [0xf5800000, 0xf9800000, 0xf9800000)
      object space 65536K, 48% used [0xf5800000,0xf7707578,0xf9800000)
    }

    We also had similar issues... If you are using RMI, it might cause full GCs. This is a snippet from http://java.sun.com/docs/hotspot/gc/ :
    <snip>
    Garbage can't be collected in these distributed applications without occasional local collection, so RMI forces periodic full collection. The frequency of these collections can be controlled with properties. For example,
    java -Dsun.rmi.dgc.client.gcInterval=3600000
    -Dsun.rmi.dgc.server.gcInterval=3600000
    </snip>
    By default this interval is 1 minute, so try changing it and see how it goes.
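    For reference, the same intervals can also be set programmatically, as long as it happens before any RMI/DGC activity starts; this is just a sketch of the -D flags quoted above, with one hour as an arbitrary example:

        public class RmiGcIntervalSetup {
            public static void main(String[] args) throws Exception {
                // Must run before RMI is first used, otherwise the defaults already apply.
                System.setProperty("sun.rmi.dgc.client.gcInterval", "3600000"); // 1 hour, in ms
                System.setProperty("sun.rmi.dgc.server.gcInterval", "3600000");

                // ... start the RMI registry / export remote objects after this point ...
            }
        }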

  • Disabling RMI Explicit Garbage Collection

    I have an application that uses RMI. From different articles on the JavaSoft web site, I have read that RMI explicitly triggers a full GC every minute (the default setting). This is undesirable, because these full GCs are causing some objects I need to use later to be garbage collected. From the site, the options are to:
    1) Disable Full GC
    2) Delay Full GC
    For point 1), what are the consequences of disabling the explicit full GC that RMI does? Documentation on the JavaSoft website only says that "this may also cause some objects to take much longer to be reclaimed", etc. To me, disabling the explicit full GC seems like a bad idea for RMI applications (unless they are really short lived, but that is not my case). It would seem safer to go with option 2) and delay the full GCs, since it's not entirely clear what the consequences are of completely disabling explicit GCs, but this still causes objects I need to use later to be garbage collected. The last option, which I feel is not very elegant, would be to just keep something referencing the object that is being GC'ed so that when a full GC is performed it doesn't get cleaned up. Does anyone have any experience with this, or any better suggestions?
    I am using Java 1.4.2 currently, but am also interested in learning what would be good for Java 1.2.x and 1.3.x as well.
    Thanks!
    These are the links that I have already visited or docs that I have already read on this topic:
    1) Java 2 Platform, Standard Edition v 1.4 Performance and Scalability Guide
    http://java.sun.com/j2se/1.4/performance.guide.html
    2) Frequently Asked Questions about Garbage Collection in the HotspotTM JavaTM Virtual Machine
    http://java.sun.com/docs/hotspot/gc1.4.2/faq.html
    3) Improving Java Application Performance and Scalability by Reducing Garbage Collection Times and Sizing Memory Using JDK 1.4.1
    http://developers.sun.com/techtopics/mobility/midp/articles/garbagecollection2/#11.1.3.15
    4) Garbage Collection for Remote Objects (1.4.2)
    http://java.sun.com/j2se/1.4.2/docs/guide/rmi/spec/rmi-arch4.html

    Towards the end of my post I wrote:
    The last option which I feel is not very elegant, would be to just set something to reference or use that object that is being gc'ed so when a Full GC is performed it doesn't get cleaned up. Does anyone have any experience with this or have any better suggestions?
    I realize that I can reference these objects, but I don't think that is really a great solution. It would mean deliberately referencing them (a hack of a solution in my view), when really it's the explicit GC running every minute (because of RMI) that is collecting them. If I turn the explicit GC off, my objects are not collected like this.
    My question was really: does anyone have better ideas or experience with RMI's explicit garbage collection? I don't need a tutorial telling me how to keep my objects alive by referencing them. Thanks.
