Memory Leak / Strange Garbage Collection

Help!
We are having strange problems that appear to be related to a memory leak. The
strange part is that even if we don't hit the site, it appears to leak. Can someone
please explain the output below? Notice how free space is decreasing, even with
no direct site activity:
[GC 48857K->35514K(130560K), 0.0136978 secs]
[GC 49018K->35548K(130560K), 0.0144821 secs]
[GC 49052K->35550K(130560K), 0.0128796 secs]
[GC 49054K->35549K(130560K), 0.0121789 secs]
[GC 49053K->35547K(130560K), 0.0126394 secs]
[GC 49051K->35582K(130560K), 0.0161642 secs]
[GC 49086K->35770K(130560K), 0.0209171 secs]
[GC 49247K->36005K(130560K), 0.0188181 secs]
[GC 49509K->36198K(130560K), 0.0129967 secs]
etc...
If I understand the numbers correctly, we have less and less free space available.
If anyone has any insights into this, it would be greatly appreciated. This is
holding up our move into production.
Our environment: Solaris 8, Jdk1.3.1, WL 5.1
Chris

Chris - turn off verbose GC and don't worry about it.
Visit java.sun.com and read all about Java, garbage collection and JVMs.
WebLogic does 'stuff' all on its own even when it is not being accessed - just
like your refrigerator runs while you are on vacation (please tell me you don't worry
about that too). Objects get created and deleted. There is no pressing need for
the garbage collector to recover every scrap of unused memory - so it doesn't.
When the JVM does desperately need memory, it will run a Full GC and recover (almost)
all of it.
Then again it's nice to see someone who is curious about how the darn thing works.
:) Mike
"Chris" <[email protected]> wrote:
>
Thanks for the information. I guess I didn't understand it properly.
Is there a reason why the numbers keep increasing, even with no site activity?
It looks like there is less and less free space every few minutes...? After running
the whole night after posting the original message, the numbers now look like:
[GC 55586K->42276K(130560K), 0.0136978 secs]
i.e. it just keeps going up. Why does it increase? Thanks for any explanations!
Dimitri Rakitine <[email protected]> wrote:
You are not 'leaking memory' (hopefully!) - these are minor collections (quickly
copying objects which have lived long enough into the old generation portion of the
heap and reclaiming the space used by objects which died young) - wait until a major
collection (when it says [Full GC ...]).
Chris <[email protected]> wrote:
Help!
We are having strange problems that appear to be related to a memoryleak. The
strange part is that even if we don't hit the site, it appears to
leak.
Can someone
please explain the output below: Notice how freespace is decreasing,even with
no direct site activity:
[GC 48857K->35514K(130560K), 0.0136978 secs]
[GC 49018K->35548K(130560K), 0.0144821 secs]
[GC 49052K->35550K(130560K), 0.0128796 secs]
[GC 49054K->35549K(130560K), 0.0121789 secs]
[GC 49053K->35547K(130560K), 0.0126394 secs]
[GC 49051K->35582K(130560K), 0.0161642 secs]
[GC 49086K->35770K(130560K), 0.0209171 secs]
[GC 49247K->36005K(130560K), 0.0188181 secs]
[GC 49509K->36198K(130560K), 0.0129967 secs]
etc...
If I understand the numbers correctly, we have less and less free
space
available.
If anyone has any insights into this it will be greatly appreciated.We have problems
moving into production.
Our environment: Solaris 8, Jdk1.3.1, WL 5.1
Chris--
Dimitri
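
For anyone else reading this thread: each line of the form [GC 48857K->35514K(130560K), 0.0136978 secs] means a minor collection dropped heap usage from 48857K to 35514K in a 130560K heap and took about 0.0137 seconds. The slow climb of the second number between collections is survivors being promoted to the old generation; a later [Full GC ...] is where most of that space comes back. A minimal sketch of the same effect observed from inside the JVM (the class name and numbers are made up for illustration, and System.gc() is only a hint):

// Run with -verbose:gc to watch minor collections while this allocates short-lived
// garbage, roughly what an "idle" application server does in the background. The
// explicit System.gc() at the end plays the role of the eventual Full GC.
public class GcDrift {
    public static void main(String[] args) throws InterruptedException {
        Runtime rt = Runtime.getRuntime();
        for (int i = 0; i < 200; i++) {
            byte[] scratch = new byte[256 * 1024]; // short-lived garbage
            if (i % 50 == 0) {
                System.out.println("used after " + i + " allocations: "
                        + (rt.totalMemory() - rt.freeMemory()) + " bytes");
            }
            Thread.sleep(10);
        }
        System.gc(); // a request for a full collection, not a guarantee
        System.out.println("used after full GC: "
                + (rt.totalMemory() - rt.freeMemory()) + " bytes");
    }
}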

Similar Messages

  • Memory Leak Detector data collection timing

    Hello,
    I am Yoshizo Aori working at HP Japan.
    I would like to know the timing of the data collection used to
    update the object type byte size increase rate.
    Is the data collected at garbage collection time, or at some other time?
    Is it possible to change the timing of the data collection?

    Yoshizo,
    The actual "Growth(bytes/sec)" column is updated along with the other columns at every normal GC and currently, independent of any GC, also every ten seconds. This interval is not yet configurable. (While the trend analysis is running, it is possible to manually press "Refresh" to get shorter time between updates.)
    However, the Growth column is primarily calculated from historic data collected during normal GCs while the trend analysis is active. The current value shown in the "Size (KB)" column is only taken into account if the historic data shows a difference in heap usage. The effect is that the "Refresh" button only updates Growth column rows that already had non-zero values.
    This also means that if an application leaks slowly and doesn't generate enough garbage to trigger a GC in the near future, you may not notice it by looking at the Growth column. If so, you can trigger a GC, and thus a possible collection of historic data, by selecting "Garbage Collect" from the Action menu.
    Remember though, since the Growth column represents the growth rate over the entire time that the trend analysis has been running, you may want to avoid very long-running analyses. In fact, after a while, historic data is no longer collected at every GC but less and less frequently.
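
    The Growth figure is essentially a rate: bytes of heap growth for an object type divided by the time the trend analysis has been running. A rough sketch of that arithmetic (the sampling scheme and names here are assumptions for illustration, not the Memory Leak Detector's actual implementation):

    // Rough sketch of a bytes-per-second growth rate derived from heap-usage
    // samples taken at GC time. The sample source and names are assumptions for
    // illustration; they are not the Memory Leak Detector's code.
    public class GrowthRate {
        private long baselineBytes = -1;
        private long baselineTimeMs;

        // Called with the byte size of one object type each time a sample is taken.
        public double update(long usedBytes, long nowMs) {
            if (baselineBytes < 0) {          // first sample: establish the baseline
                baselineBytes = usedBytes;
                baselineTimeMs = nowMs;
                return 0.0;
            }
            double elapsedSec = (nowMs - baselineTimeMs) / 1000.0;
            return elapsedSec > 0 ? (usedBytes - baselineBytes) / elapsedSec : 0.0;
        }

        public static void main(String[] args) {
            GrowthRate r = new GrowthRate();
            System.out.println(r.update(10000000, 0));      // baseline, prints 0.0
            System.out.println(r.update(10600000, 60000));  // prints 10000.0 bytes/sec
        }
    }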

  • Getting memory dump without garbage collection

    Hi all.
    Does anyone know of a way to get a memory dump from the Sun JVM (version 1.5.0_11 for Windows) without garbage collection occurring first? I've tried the -XX:+PrintClassHistogram option, but this always seems to garbage collect before printing the histogram.
    My problem is that I see heap usage increasing very rapidly in the JVM, then garbage collection occurs and reduces memory usage back down to what it was before. However, this results in the JVM spending a large amount of time garbage collecting. I would like to be able to see the contents of the heap before GC occurs.
    These are the options I've tried so far:
    * Using -XX:+PrintClassHistogram. As mentioned above, this always garbage collects before printing the histogram.
    * Using -XX:+HeapDumpOnOutOfMemoryError. The problem is that the JVM always manages to GC before running out of memory, so never dumps the heap.
    * Using the jmap tool. Unfortunately I'm running Windows (in production), so this is not available for 1.5.
    * Using HPROF. However this seems to slow the JVM down hugely (whenever I use -agentlib:hprof=heap=sites or -agentlib:hprof=heap=dump).
    * Using the HeapViewer demo tool that comes with the JVM. This has the same effect as PrintClassHistogram and garbage collects before outputting.
    * Using JProfiler. Unfortunately it seems (with the 1.5 JVM anyway) the Concurrent Garbage Collector cannot be used in conjunction with JProfiler (I think this is a JVM TI issue?). With the Parallel GC we don't see the same problem (probably mostly because throughput is crippled with the Parallel GC).
    * Using jstat. This only gives us statistics about how much has been garbage collected, not which objects were collected.
    Has anybody got any other suggestions?
    Thanks.
    Neil.

    Hi all.
    Just an update on this -- I couldn't find any way to do this in Java 1.5 (on Windows).
    In Java 1.6 (and maybe in 1.5 on other platforms) jmap will do a heap dump without garbage collecting.
    I also came across an open source memory profiling tool called Ariadna (see http://mernst.org/ariadna/) which seems to work quite well. It was only of limited use in Java 1.5 however, since JVM TI doesn't support the concurrent garbage collector in this version.
    Hope this is helpful anyway. I'll be trying to get upgraded to 1.6 ASAP!
    Thanks.
    Neil.
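
    Following up on Neil's note about Java 1.6: on a Sun/Oracle JDK 6, a heap dump can also be triggered from inside the JVM through the HotSpotDiagnostic MXBean. Passing live=false asks for all objects, reachable or not, and so avoids the collection that the live option implies. A minimal sketch, with the output file name chosen arbitrarily:

    import java.lang.management.ManagementFactory;
    import com.sun.management.HotSpotDiagnosticMXBean;

    // Minimal sketch for Sun/Oracle JDK 6+: write a heap dump without restricting
    // it to live objects, so no full GC is forced first. The file name is an
    // arbitrary example.
    public class DumpHeapNoGc {
        public static void main(String[] args) throws Exception {
            HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                    ManagementFactory.getPlatformMBeanServer(),
                    "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
            bean.dumpHeap("before-gc.hprof", false); // false = include unreachable objects
            System.out.println("Heap dump written to before-gc.hprof");
        }
    }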

  • Memory handling and garbage collection?

    Sorry, these are the correct snippets of code, of course:
    public class Test {
         public static void main(String[] args) {
              byte[] b = new byte[10000000];
              b = null;
              while (true) {}
         }
    }

    public class Test {
         public static void main(String[] args) {
              new Test();
         }

         public Test() {
              byte[] b = new byte[10000000];
              b = null;
              zzz();
         }

         public synchronized void zzz() {
              try {
                   wait();
              } catch (Throwable t) {}
         }
    }

    Oh god this is all messed up... Original message then:
    Hi all!
    I'm just interested in the way that Java handles garbage and frees memory, as I have seen some problems with this in the chat server I'm building. I was under the impression that you could remove a reference to something and the memory allocated by it would automatically free up.
    I wrote a stupid little test program that allocates a 10MB byte array and then immediately removes the reference to it. Using the Windows Task Manager I just compared the memory usage when allocating the huge array, to the usage when not. When using the array my program eats about 15MB of memory, while the amount when not using the array is about 5MB. So it's obvious that no memory at all is freed when I remove the reference to that array.
    public class Test {
         public static void main(String[] args) {
              byte[] b = new byte[10000000];
              b = null;
              while (true) {}
         }
    }
    Ok, so perhaps the Garbage Collector doesn't operate while in an endless loop. A little change to let the program get stuck in a wait() instead:
    public class Test {
         public static void main(String[] args) {
              new Test();
         }

         public Test() {
              byte[] b = new byte[10000000];
              b = null;
              zzz();
         }

         public synchronized void zzz() {
              try {
                   wait();
              } catch (Exception e) {}
         }
    }
    Unfortunately, the result is the same. The program eats about 10MB of memory too much, and it didn't free up in the few minutes I waited anyway.
    Anyone at all have thoughts on this?
    Thanks
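
    One note for anyone hitting this: the Windows Task Manager shows memory the process has reserved from the OS, not the live data inside the Java heap, and most JVMs keep that reservation even after the collector has reclaimed the objects in it. Measuring from inside the JVM shows the collection happening. A minimal sketch (System.gc() is only a hint, and the sleep just gives the collector a moment):

    // Minimal sketch: observe heap usage from inside the JVM instead of from the
    // Windows Task Manager. The process-level number the Task Manager reports
    // usually stays put even after the byte array has been collected.
    public class HeapUsage {
        public static void main(String[] args) throws InterruptedException {
            Runtime rt = Runtime.getRuntime();
            byte[] b = new byte[10000000];
            System.out.println("with array:    " + used(rt) + " bytes used");
            b = null;
            System.gc();        // a request, not a guarantee
            Thread.sleep(1000); // give the collector a moment
            System.out.println("after release: " + used(rt) + " bytes used");
        }

        private static long used(Runtime rt) {
            return rt.totalMemory() - rt.freeMemory();
        }
    }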

  • Memory Leak in BlazeDS

    I have used BlazeDS in an application to push data in real time. However, we have been experiencing a heavy memory leak in the application. The application has a scheduler that pushes data every minute into the default queue. Without any browsers open, the application remains stable, and closing the browsers also cleans up the FlexClient object and the other objects associated with those browsers. But the leak occurs when we open browsers and keep them open for a long period, say 2 to 3 days. The memory usage of the application gradually increases and eventually throws a "java.lang.OutOfMemoryError: Java heap space" exception. On analyzing the heap dump, I found that there are thousands of AsyncMessage objects in memory that are not getting garbage-collected. This is the channel definition configured in the services-config.xml file:
            <channel-definition id="my-polling-amf" class="mx.messaging.channels.AMFChannel">
                <endpoint url="http://{server.name}:{server.port}/{context.root}/messagebroker/amfpolling" class="flex.messaging.endpoints.AMFEndpoint"/>
                <properties>
                    <polling-enabled>true</polling-enabled>
                    <polling-interval-seconds>4</polling-interval-seconds>
                    <invalidate-session-on-disconnect>true</invalidate-session-on-disconnect>
                </properties>
            </channel-definition>
    Does anyone know the cause of this issue and how it can be overcome? Please help.

    We also have the same problem. The JVM runs out of memory every 24 to 50 hours. I have documented a working fix for the problem on the JVM side:
    http://in-finite.me/fixing-blazeds-polling-amf-memory-leak/@

  • Fixing Memory Leaks in AIR App?

    Hi Friends,
    I'm been facing this memory leaks issue in our app and this has taken enough of our time and resources and we are not being able to find a solution for it.
    I have identified the problem in the module where we primarily need memory related fixes which is - We are setting Repeater's recycleChildren() property to true/false based upon certain conditions which we cant change. Now when this property is set to false Repeater is supposed to be removing its last created objects from memory and creating fresh ones. In our case repeater is unable to delete those. When I managed to get their instances (using createdChildren()) and freed them in code I called System.gc() for releasing the memory back to OS. Now what is happening is that this approach works fine when I run the app from code but when I create its installer (from Installsheild) and formally out in on machines it does not work. I came to know the reason from following blogs:
    http://jvalentino.blogspot.com/2009/05/flex-memory-issue-3-garbage-collection.html
    http://gskinner.com/blog/archives/2006/06/as3_resource_ma.html
    http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/system/System.htm l#gc()
    http://stackoverflow.com/questions/192373/force-garbage-collection-in-as3
    http://gskinner.com/blog/archives/2006/08/as3_resource_ma_2.html
    Guys, can anyone of you suggest what should I do here? This has become a delivery bottleneck and we need to give a fix where the memroy is released periodically and efficiently so that the end user's system does not hang.
    Please help.
    Shubhra

    Are you sure it doesn't? Maybe Flash does release the memory but the OS leaves it assigned, as described in the comment below from http://www.mikechambers.com/blog/2008/08/06/what-are-your-biggest-issues-with-adobe-air/comment-page-3/#comment-26330
    "I just finished doing more experiments. It looks like the AIR app does free the memory, but the OS leaves it assigned to the app process until other apps require that memory. So it looks like it is natural behaviour and the memory leak is not as terrible as I thought."

  • Memory Leak when just launched and Idle..  fixes when being used ??  [HELP]

    So I'm in the debugging and testing phase of my app and using this tool for tracking memory leaks ( https://github.com/mrdoob/Hi-ReS-Stats )
    When I launch my app my numbers are
    FPS: 61/60
    MS: 17
    MEM: 3.157
    MAX: 3.157
    Now immediately my memory starts increasing from 3.157, 3.167, 3.177, 3.187, 3.197 and so on.
    Now if I make any nav selection in my app
    MEM changes back down to about 3.215
    but then it starts its count again: 3.215, 3.225, 3.235, 3.445, 3.455
    I don't have any loops happening.
    Has anyone run into this?
    I'm almost tempted to force garbage collection every 60 seconds that the app is idle, or something. Not the best way to handle this... I just don't know where the leak is happening.
    Any support is appreciated!
    Cheers!

    Hi there - I just had the same query a couple of days ago (http://forums.adobe.com/thread/977174?tstart=30).
    I saw the same symptoms on my app so I built a blank app with just the profiler on stage. I've been monitoring it for a few days now and notice that memory does creep up even when the app is left idle (apart from the profiler) - but ... and this is the important bit ... it does periodically get reduced back to the starting point (when the garbage collector kicks in and memory is released).
    When I was monitoring my app the time through this cycle could be well over 5 mins.
    If you actually use the monitor when putting your app through its paces you'll see memory being gobbled up more rapidly and hopefully (if you've no leaks) the garbage collection kicking in more regularly and bringing the reported usage back down.

  • Garbage collection Java Virtual Machine : Hewlett-Packard Hotspot release 1.3.1.01

    "Hi,
    I am trying to understand the garbage collection mechanism of the Java Virtual Machine: Hewlett-Packard Hotspot release 1.3.1.01.
    There is a description of this mechanism in the PDF file "Memory Management and Garbage Collection", available under the paragraph "Java performance tuning tutorial" at the page:
    http://h21007.www2.hp.com/dspp/tech/tech_TechDocumentDetailPage_IDX/1,1701,1607,00.html
    Regarding my question :
    Below is an extract of the log file of garbage collections. This extract has 2 consecutive garbage collections (each begins with "<GC:").
    <GC: 1 387875.630047 554 1258496 1 161087488 0 161087488 20119552 0 20119552 334758064 238778016 335544320 46294096 46294096 46399488 5.319209 >
    <GC: 5 387926.615209 555 1258496 1 161087488 0 161087488 0 0 20119552 240036512 242217264 335544320 46317184 46317184 46399488 5.206192 >
    There are 2 "full garbage collections", one of reason "1" and one of reason "5".
    For the first one, "Old generation After" = 238778016.
    For the second, "Old generation Before" = 240036512.
    Thus, the "Old generation Before" of the second collection is higher than the "Old generation After" of the first. Why?
    I expected all objects to be allocated in the "Eden" space, and therefore I did not expect to see the old generation grow between two consecutive collections.

    I agree, but my current HP support is not very good on JVM issues.
    Rob Woollen <[email protected]> wrote:
    You'd probably be better off asking this question to HP.
    -- Rob
    Martial wrote:
    The object of this mail is the Hewlett-Packard 1.3.1.01 Hotspot Java Virtual Machine
    release and its garbage collection mechanism.
    I am interested in the "-Xverbosegc" option for garbage collection monitoring.
    I have been through the online document:
    http://www.hp.com/products1/unix/java/infolibrary/prog_guide/java1_3/hotspot.html#-Xverbosegc
    I would like to find out more about the garbage collection mechanism and need
    further information to understand the result of the log file generated with the
    "-Xverbosegc" option.
    For example, here is an extract of a garbage collection log file generated with the
    Hewlett-Packard Hotspot Java Virtual Machine, release 1.3.1.01.
    These are 2 consecutive rows of the file:
    <GC: 5 385565.750251 543 48 1 161087488 0 161087488 0 0 20119552 264184480 255179792 335544320 46118384 46118384 46137344 5.514721 >
    <GC: 1 385876.530728 544 1258496 1 161087488 0 161087488 20119552 0 20119552 334969696 255530640 335544320 46121664 46106304 46137344 6.768760 >
    We have 2 full garbage collections, one of Reason 5 and the next one of Reason 1.
    What happened between these 2 garbage collections, given that "Old generation Before"
    of row 2 is higher than "Old generation After" of row 1? I expected objects
    to be initially allocated in Eden, so the old generation should not change
    between the end of one garbage collection and the start of the next one.
    Could you please clarify this issue and/or give more information about garbage
    collection mechanisms with the Hewlett-Packard Hotspot Java Virtual Machine, release
    1.3.1.01.
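
    For readers trying to line these numbers up with HP's documented field list, a small helper that tokenizes a record saves some squinting. A minimal sketch; the old-generation positions used at the end are an assumption inferred from the records quoted above, not taken from HP's documentation, so verify them against the -Xverbosegc description before relying on them:

    // Minimal sketch: split an HP -Xverbosegc record into numbered tokens so the
    // values can be matched against HP's documented field list. The old-generation
    // positions printed at the end are an assumption inferred from the records
    // above, not taken from HP's documentation.
    public class VerboseGcRecord {
        static String[] fields(String line) {
            // Strip the "<GC:" prefix and the trailing ">", then split on whitespace.
            String body = line.replace("<GC:", "").replace(">", "").trim();
            return body.split("\\s+");
        }

        public static void main(String[] args) {
            String row = "<GC: 1 387875.630047 554 1258496 1 161087488 0 161087488 "
                       + "20119552 0 20119552 334758064 238778016 335544320 "
                       + "46294096 46294096 46399488 5.319209 >";
            String[] f = fields(row);
            for (int i = 0; i < f.length; i++) {
                System.out.println((i + 1) + ": " + f[i]);
            }
            // Assumed layout: tokens 12-14 = old generation before / after / capacity.
            System.out.println("old before=" + f[11] + " after=" + f[12] + " capacity=" + f[13]);
        }
    }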

  • SystemManager and Garbage Collection

    Hi everyone, I have a question regarding the SystemManager and Garbage Collection. I have an application that loads in its assets via a swc created in Flash. In that swc I have different MovieClips that act as the different screens of my application, each one being tied to its own custom class. For example, one MovieClip is a login screen, another is the main menu, etc. These screens contain components, text fields and animations.
    The problem that I am having is that when I move from one screen to the other, the garbage collector is not cleaning up everything. There are still references to the MovieClips that have animations, for example. So even though I remove the screen via removeChild and set the variable reference to that object to null, it is not releasing the MovieClips. When I pull it up in the profiler it shows me that SystemManager is holding references to them. Also, if I debug the application and look inside the instance of the MovieClip, I can see that the private property "listeners" has values, but I am not adding listeners. It appears that the SystemManager is adding listeners.
    Does anyone know how I can clear these listeners or force the SystemManager to release these items? Any ideas or help would be greatly appreciated. I am fairly new to dealing with memory management and garbage collection in Flex. I am using Flash CS4 to create the swc and Flash Builder 4 Beta with the 3.4 framework and Flash Player 10 to create the app. If you need me to clarify any of this please let me know. Again, any help or ideas on where to go from here would be great!

    This chain says that the focusManager is referencing UserMenu.  Is there a default button or focusable objects in UserMenu and references from UserMenu to the objects in question?
    BTW, the CS4 fl.managers.FocusManager and any fl.. classes are incompatible with Flex, so make sure you're not using them in your MovieClips.
    Alex Harui
    Flex SDK Developer
    Adobe Systems Inc.
    Blog: http://blogs.adobe.com/aharui

  • WLS not garbage collecting enough?

    WLS 5.1 SP5
    SunOS 5.6 (2.6)
    JDK 1.2.2_005a (with the JVMARGS directive set as per BEA's docs. The
    JVMARGS directive solved the SIGBUS 10 for us when requesting the
    AdminMain servlet)
    We tried some stress testing of our web app by running about 100 to 200
    virtual clients requesting the same URL (servlet that forwards to a
    JSP). These virtual clients are actually Java threads running
    simultaneously.
    After each test, we notice that, via WL console, the heap usage of WL
    increased. However, after leaving WL server by itself (no HTTP requests
    coming to it) for a long time (say 30 minutes to 1 hour), we noticed
    that the heap usage has not gone down, except for about 1%. As soon as
    we explicitly tell WLS to garbage collect via the WL console, then the
    heap usage goes down ... sometimes even as low as just 10% when we tell
    it to gc again just after a gc a few seconds earlier.
    Is this proper behaviour by WLS? ... or does this have something to do
    with the JVMARGS directive? Why doesn't it gc even after a long time?
    Thanks in advance,
    John Salvo
    Homepage: http://homepages.tig.com.au/~jmsalvo/

    There are a number of problems with memory usage and garbage collection. Going to WLS 5.1 will fix
    some of them. You should also consider having your application call System.runFinalization(); and
    System.gc() from time to time.
    Mike
    "Jim Zhou" <[email protected]> wrote:
    I see the same behavior in my stress test/pre-production run. I think it's
    normal behavior. The GC will kick in when heap usage gets to 100%; you might
    see a pause in your WLS because all threads stop for the GC (you
    should see the CPU really busy while the GC is running). JDK 1.2 should have a better GC
    algorithm than JDK 1.1. Usually, for people still using JDK 1.1.x, tuning the
    heap size for a tolerable GC pause is a pain; the tuning guide wants you to
    run multiple WLS instances in a box to stagger the effects of the GC pause. If
    you use JDK 1.2, you probably set your heap size comparable to your box's
    memory.
    In one of my stress tests, I set the heap size to 256m; I see on my console that the
    heap gets full every 30 seconds, and it takes 1 or 2 seconds to GC. But if
    your heap is 2G, it might take longer.
    WLS 4.51 SP11
    Solaris 2.6
    JDK 1.2.2_06.
    Regards,
    Jim Zhou.
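
    Mike's suggestion above (having the application call System.runFinalization() and System.gc() from time to time) can be wired up as a small background thread. A minimal sketch only; the one-hour interval and the startup hook are assumptions, and both calls are merely hints that the JVM is free to ignore:

    // Minimal sketch of a periodic "nudge the collector" thread, as suggested in
    // the reply above. The one-hour interval is an arbitrary example, and both
    // System.runFinalization() and System.gc() are hints the JVM may ignore.
    public class PeriodicGc implements Runnable {
        private final long intervalMillis;

        public PeriodicGc(long intervalMillis) {
            this.intervalMillis = intervalMillis;
        }

        public void run() {
            while (true) {
                try {
                    Thread.sleep(intervalMillis);
                } catch (InterruptedException e) {
                    return; // stop quietly if interrupted
                }
                System.runFinalization();
                System.gc();
            }
        }

        // Call once from application startup code, for example a WebLogic startup class:
        public static void start(long intervalMillis) {
            Thread t = new Thread(new PeriodicGc(intervalMillis), "periodic-gc");
            t.setDaemon(true); // don't keep the server alive just for this thread
            t.start();
        }
    }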

  • Avoiding Garbage Collection

    Hi,
    Does anyone know of a general design pattern that allows an object to remain
    in memory without being garbage collected? I'm not clear whether or not
    the singleton pattern fulfills this requirement. I basically want to have a
    global constants class (constants are loaded from a properties file) that
    remains in memory so that it can be used by various components. The
    constants are loaded from a properties file initially. So if that class
    gets garbage collected, then the next time that class is accessed, it will
    have to reload from the props file. This is a performance issue, and I
    would like to find a way around it.
    I will probably want to have other services such as a LoggingService and
    JNDIService that I want started up, and for them to remain in memory. I know
    you can register startup classes with WL, but do those classes remain in
    memory?
    I've been trying to find an answer to these questions. Hopefully someone
    will have them.
    Thanks.

    One way to do it is to bind your constants class into JNDI during
    start-up.
    -- Rob
    Coming Soon: Building J2EE Applications & BEA WebLogic Server
    by Michael Girdley, Rob Woollen, and Sandra Emerson
    http://learnweblogic.com
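
    To make the question concrete: a class's static fields keep whatever they reference reachable for as long as the class (and its class loader) stays loaded, so a lazily initialized singleton is enough to avoid re-reading the properties file on every access. A minimal sketch; the file name and method names are invented for illustration:

    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Properties;

    // Minimal sketch of a constants holder that stays in memory because it is
    // reachable from a static field. The properties file name and method names
    // are illustrative only.
    public final class GlobalConstants {
        private static GlobalConstants instance; // strong static reference keeps it alive
        private final Properties props = new Properties();

        private GlobalConstants() {
            InputStream in = GlobalConstants.class.getResourceAsStream("/app.properties");
            if (in != null) {
                try {
                    props.load(in);
                    in.close();
                } catch (IOException e) {
                    throw new RuntimeException("Could not load app.properties", e);
                }
            }
        }

        public static synchronized GlobalConstants getInstance() {
            if (instance == null) {
                instance = new GlobalConstants(); // loaded once, then reused
            }
            return instance;
        }

        public String get(String key) {
            return props.getProperty(key);
        }
    }

    As Rob suggests, binding the loaded object into JNDI at start-up achieves the same effect on the server side.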

  • Memory leak in java / forcing garbage collection for unused resource?

    Is it possible for a big program, if not designed properly, to leak memory?
    If, say, I forget to force garbage collection of unused resources, what will happen?
    Even if I am forcing garbage collection, how much assurance is there that it actually runs?
    I need answers with respect to typical programming examples; if someone can provide them I will be happy.
    Or any useful link.
    Thanks
    Vijendra

    Memory leaks are usually associated with C/C++ programming, since in those languages you have direct access to memory through pointers.
    In Java you do not have access to pointers; however, you can still tie up your objects in a way that the garbage collector cannot remove them.
    Basically, the garbage collector examines all objects and sees whether they are still referenced or not. If not, it frees that memory. However, if your code somehow keeps a reference to an object, the garbage collector will not dispose of that object.
    An example I can think of is when developing web applications. Storing objects in the session means that you have a reference to the object from the session, therefore the garbage collector will not free up the memory taken by those objects until the session has expired.
    That is how I know it... at least that is how they taught it to me!
    regards,
    sim085
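
    A classic Java example of tying objects up in this way is a long-lived collection that only ever grows. A minimal sketch (the class and method names are invented for illustration); every entry stays reachable from the static list, so the collector can never reclaim it:

    import java.util.ArrayList;
    import java.util.List;

    // Minimal sketch of an unintentional Java "memory leak": objects stay reachable
    // from a static collection, so the garbage collector can never reclaim them.
    public class LeakyCache {
        // A static reference lives as long as the class is loaded.
        private static final List cache = new ArrayList();

        public static void remember(Object value) {
            cache.add(value); // entries are never removed, so the heap grows with every call
        }

        public static void main(String[] args) {
            for (int i = 0; i < 1000000; i++) {
                remember(new byte[1024]); // eventually exhausts the heap
            }
        }
    }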

  • Huge memory leaks in using PL/SQL tables and collections

    I have faced a very interesting problem recently.
    I use PL/SQL tables ( Type TTab is table of ... index by binary_integer; ) and collections ( Type TTab is table of ...; ) in my packages very widely, and have noticed a very strange thing Oracle does. It seems to me that there are memory leaks in the PGA when I use PL/SQL tables or collections. Let me give a little example.
    CREATE OR REPLACE PACKAGE rds_mdt_test IS
    TYPE TNumberList IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
    PROCEDURE test_plsql_table(cnt INTEGER);
    END rds_mdt_test;
    CREATE OR REPLACE PACKAGE BODY rds_mdt_test IS
    PROCEDURE test_plsql_table(cnt INTEGER) IS
    x TNumberList;
    BEGIN
    FOR indx IN 1 .. cnt LOOP
    x(indx) := indx;
    END LOOP;
    END;
    END rds_mdt_test;
    I run the following test code:
    BEGIN
    rds_mdt_test.test_plsql_table (1000000);
    END;
    and see that my session uses about 40M in PGA.
    If I repeat this example in the same session creating the PL/SQL table of smaller size, for instance:
    BEGIN
    rds_mdt_test.test_plsql_table (1);
    END;
    I see again that the amount of PGA memory used by my session has not decreased and is still the same.
    I get the same result if I use collections or varrays instead of PL/SQL tables.
    I have tried some techniques to make Oracle free the memory, for instance rewriting my procedure in the following ways:
    PROCEDURE test_plsql_table(cnt INTEGER) IS
    x TNumberList;
    BEGIN
    FOR indx IN 1 .. cnt LOOP
    x(indx) := indx;
    END LOOP;
    x.DELETE;
    END;
    or
    PROCEDURE test_plsql_table(cnt INTEGER) IS
    x TNumberList;
    BEGIN
    FOR indx IN 1 .. cnt LOOP
    x(indx) := indx;
    END LOOP;
    FOR indx in 1 .. cnt LOOP
    x.DELETE(indx);
    END LOOP;
    END;
    or
    PROCEDURE test_plsql_table(cnt INTEGER) IS
    x TNumberList;
    empty TNumberList;
    BEGIN
    FOR indx IN 1 .. cnt LOOP
    x(indx) := indx;
    END LOOP;
    x := empty;
    END;
    and so on, but result was the same.
    This is a huge problem for me, as I have to manipulate collections and PL/SQL tables of very big size (from tens of thousands of rows to millions of rows), and just a few sessions running my procedure may bring the server down due to lack of memory.
    I cannot understand what Oracle reserves so much memory for (I use local variables) -- is it a bug or a feature?
    I would appreciate any help.
    I use an Oracle 9.2.0.1.0 server under Windows 2000.
    Thank you in advance.
    Dmitriy.

    Thank you, William!
    Your advice about using DBMS_SESSION.FREE_UNUSED_USER_MEMORY was very useful. Indeed it is the tool I was looking for.
    Now I write my code like this
    declare
    type TTab is table of ... index by binary_integer;
    res TTab;
    empty_tab TTab;
    begin
    res(1) := ...;
    res := empty_tab;
    DBMS_SESSION.FREE_UNUSED_USER_MEMORY;
    end;
    I use the construct "res := empty_tab;" to mark all memory allocated to the PL/SQL table as unused, according to Tom Kyte's advice. And I could live a happy life if everything were so easy. Unfortunately, some tests I have done showed that there are some troubles in cleaning complex nested PL/SQL tables indexed by VARCHAR2, which I use in my current project.
    Let me another example.
    CREATE OR REPLACE PACKAGE rds_mdt_test IS
    TYPE TTab0 IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
    TYPE TRec1 IS RECORD(
    NAME VARCHAR2(4000),
    rows TTab0);
    TYPE TTab1 IS TABLE OF TRec1 INDEX BY BINARY_INTEGER;
    TYPE TRec2 IS RECORD(
    NAME VARCHAR2(4000),
    rows TTab1);
    TYPE TTab2 IS TABLE OF TRec2 INDEX BY BINARY_INTEGER;
    TYPE TStrTab IS TABLE OF NUMBER INDEX BY VARCHAR2(256);
    PROCEDURE test_plsql_table(cnt INTEGER);
    PROCEDURE test_str_tab(cnt INTEGER);
    x TTab2;
    empty_tab2 TTab2;
    empty_tab1 TTab1;
    empty_tab0 TTab0;
    str_tab TStrTab;
    empty_str_tab TStrTab;
    END rds_mdt_test;
    CREATE OR REPLACE PACKAGE BODY rds_mdt_test IS
    PROCEDURE test_plsql_table(cnt INTEGER) IS
    BEGIN
    FOR indx1 IN 1 .. cnt LOOP
    FOR indx2 IN 1 .. cnt LOOP
    FOR indx3 IN 1 .. cnt LOOP
    x(indx1) .rows(indx2) .rows(indx3) := indx1;
    END LOOP;
    END LOOP;
    END LOOP;
    x := empty_tab2;
    dbms_session.free_unused_user_memory;
    END;
    PROCEDURE test_str_tab(cnt INTEGER) IS
    BEGIN
    FOR indx IN 1 .. cnt LOOP
    str_tab(indx) := indx;
    END LOOP;
    str_tab := empty_str_tab;
    dbms_session.free_unused_user_memory;
    END;
    END rds_mdt_test;
    1. Running the script
    BEGIN
    rds_mdt_test.test_plsql_table ( 100 );
    END;
    I see that the PGA memory usage of my session is close to zero. So I can judge that the nested PL/SQL table indexed by BINARY_INTEGER, and the memory allocated to it, were cleaned up successfully.
    2. Running the script
    BEGIN
    rds_mdt_test.test_str_tab ( 1000000 );
    END;
    I can see that the plain PL/SQL table indexed by VARCHAR2, and the memory allocated to it, were also cleaned up.
    3. Changing the package's type
    TYPE TTab2 IS TABLE OF TRec2 INDEX BY VARCHAR2(256);
    and running the script
    BEGIN
    rds_mdt_test.test_plsql_table ( 100 );
    END;
    I see that my session uses about 62M in PGA. If I run this script twice, the memory usage is doubled and so on.
    I get the same result if I rewrite not the highest, but the middle PL/SQL type:
    TYPE TTab1 IS TABLE OF TRec1 INDEX BY VARCHAR2(256);
    And only if I change the third, most nested type:
    TYPE TTab0 IS TABLE OF NUMBER INDEX BY VARCHAR2(256);
    I get the desired result -- all memory was returned to OS.
    So, as far as I can judge, in some cases Oracle does not clean up complex PL/SQL tables indexed by VARCHAR2.
    Is this true or not? Perhaps there are some peculiarities in using tables indexed this way?

  • Full garbage collection issue, not releasing/flagging memory

    I have the following problem running on a multi-cpu windows server with Java 1.4.2_05 using WebLogic 8.1:
    During the lifecycle of the web application (under load, but not too heavy), memory usage seems OK and garbage collection runs regularly. Suddenly, the used heap starts to rise very fast and after a while even a full garbage collection cycle does not release any memory anymore.
    I am sure that, from our coding, we release memory correctly, and normally we should only use about 5 to 10 MB per user at most (with normal DefNew garbage collections).
    I tried changing the garbage collection parameters, but this does not solve the problem. The best scenario was with the concurrent collector, and I got this output near the end:
    [GC 100202K->93511K(115628K), 0.0091472 secs]
    [GC 148480K->139612K(163808K), 0.0225914 secs]
    [Full GC[Unloading class sun.reflect.GeneratedSerializationConstructorAccessor289]
    [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor290]
    [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor273]
    153750K->133006K(164064K), 1.2434402 secs]
    [GC 148939K->137948K(203264K), 0.0223085 secs]
    [GC 188789K->177116K(203264K), 0.0180729 secs]
    [Full GC[Unloading class sun.reflect.GeneratedSerializationConstructorAccessor312]
    [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor322]
    [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor309]
    189788K->170264K(203264K), 1.1851945 secs]
    [Full GC 203228K->203227K(203264K), 1.2876122 secs]
    [Full GC 203263K->203233K(203264K), 1.3354548 secs]
    [Full GC 203263K->203258K(203264K), 1.2873518 secs]
    <Jan 17, 2007 9:40:40 AM EST> <Error> <HTTP> <BEA-101017> <[ServletContext(id=33114655,name=console,context-path=/console)] Root cause of ServletException.
    java.lang.OutOfMemoryError
    >
    [Full GC 203263K->203233K(203264K), 1.2814516 secs]
    [Full GC 203233K->203231K(203264K), 1.6029044 secs]
    [Full GC 203263K->203242K(203264K), 1.3081352 secs]
    <Jan 17, 2007 9:41:51 AM EST> <Emergency> <WebLogicServer> <BEA-000210> <The WebLogic Server is no longer listening for connections.>
    [Full GC 203263K->203247K(203264K), 1.3161194 secs]
    [Full GC 203263K->203249K(203264K), 1.2954988 secs]
    [Full GC 203263K->203247K(203264K), 1.6423404 secs]
    <Jan 17, 2007 9:41:57 AM EST> <Alert> <WebLogicServer> <BEA-000218> <Server shutdown has been requested by <WLS Kernel>>
    [Full GC 203263K->203250K(203264K), 1.3161025 secs]
    Another strange item: I set the maximum heap to 512 MB with the -Xmx parameter, and I am almost sure that value is being used, but the heap never gets higher than about 203 MB. Does anyone know why this is?
    Another strange item: the monitoring in the WebLogic code indicates 32 MB of usage (relative memory usage seems to be OK, but the quantity indication is just plain wrong) with 15 threads running.
    This problem does not exist when using JBoss 4.0.2 or 4.0.3 (standard j2ee settings).
    If anyone has an idea or can help me, I would appreciate it very very much. :)

    Hi,
    Is this issue resolved?
    We are facing the same problem.
    1. We have checked the CPU and memory utilization; everything is normal.
    2. The GC logs show continuous Full GC calls.
    3. After restarting the Resin server, the system works normally.
    Environment details:
    Resin resin-pro-3.0.18 on SUSE Linux
    Java JDK 1.4.2_08
    Please suggest.

  • High Eden Java Memory Usage/Garbage Collection

    Hi,
    I am trying to make sure that my ColdFusion server is optimised to the max and to find out what the normal limits are.
    Basically it looks like at times my servers can run slow but it is possible that this is caused by a very old bloated code base.
    Jrun can sometimes have very high CPU usage so I purchased Fusion Reactor to see what is going on under the hood.
    Here are my current Java settings (running v6u24):
    java.args=-server -Xmx4096m -Xms4096m -XX:MaxPermSize=256m -XX:PermSize=256m -Dsun.rmi.dgc.client.gcInterval=600000 -Dsun.rmi.dgc.server.gcInterval=600000 -Dsun.io.useCanonCaches=false -XX:+UseParallelGC -Xbatch ........
    With regards Memory, the only memory that seems to be running a lot of Garbage Collection is the Eden Memory Space. It climbs to nearly 1.2GB in total just under every minute at which time it looks like GC kicks in and the usage drops to about 100MB.
    Survivor memory grows to about 80-100MB over the space of 10 minutes but drops to 0 after the scheduled full GC runs. Old Gen memory fluctuates between 225MB and 350MB with small steps (~50MB) up or down when full GC runs every 10 minutes.
    I had the heap set to 2GB in total initially, giving about 600MB to the Eden Space. When I looked at the graphs from Fusion Reactor I could see that there was (minor) Garbage Collection about 2-3 times a minute when the memory usage maxed out the entire 600MB, which seemed a high frequency to my untrained eye. I then upped the memory to 4GB in total (~1.2GB automatically given to the Eden space) to see the difference and saw that GC happened 1-2 times per minute.
    Is it normal in Coldfusion that the Eden memory would grow so quickly and have garbage collection run so often? i.e do these graphs look normal?
    Also should I somehow redistribute the memory available to give the Eden memory more since it seems to be where all the action is?
    Any other advice for performance improvements would be much appreciated.
    Note: These graphs are not from a period where jrun had high CPU.
    Here are the graphs:
    PS Eden Space Graph
    PS Survivor Space Graph
    PS Old Gen Graph
    PS Perm Gen Graph
    Heap Memory Graph
    Heap/Non Heap Memory Graph
    CPU Graph
    Request Average Execution Time Graph
    Request Activity Graph
    Code Cache Graph

    Hi,
    >Is it normal in Coldfusion that the Eden memory would grow so quickly and have garbage collection run so often?
    Yes normal to garbage collect Eden often. That is a minor garbage collection.
    >Also should I somehow redistribute the memory available to give the Eden memory more since it seems to be where all the action is?
    Sometimes it is good to set Eden (Eden and its two Survivor Spaces combined make up the New or Young Generation part of the JVM heap) to a smaller size. I know what you're thinking: why make it smaller when I want to make it bigger? Give less a try (sometimes less = more, and bigger does not = better) and monitor the situation. I like to use the -Xmn switch; some sources say to use other methods. Perhaps you could try java.args=-server -Xmx4096m -Xms4096m -Xmn172m etc. I had better mention: make a backup copy of jvm.config before applying changes. Having said that, now you know how you can set the size bigger if you want.
    I think the JVM is perhaps making some poor decisions with sizing the heap. With Eden growing to 1GB and then being evacuated, not many objects are surviving and therefore not being promoted to the Old Generation. This ultimately means the objects will need to be loaded into Eden again later rather than being referenced in the Old Generation part of the heap. That adds up to poor performance.
    >Any other advice for performance improvements would be much appreciated.
    You are using the parallel garbage collector. Perhaps you could enable it to run multi-threaded, reducing the duration of the garbage collections: JVM args ...-XX:+UseParallelGC -XX:ParallelGCThreads=N etc., where N = CPU cores (e.g. quad core = 4).
    HTH, Carl.
