Heap usage

Hi
I am using the JRockit JVM. During load tests I see the heap used almost entirely, and then GC happens. GC times are still in milliseconds and I do not see any out-of-memory issue. I would like to know: does almost 99% heap consumption call for an increase in heap size? I checked the heap for memory leaks too and couldn't find anything out of the usual.
Thanks
Krupa

If the heap is 'used' in the sense that it is full of live objects even after a collection, then GC isn't going to reclaim much. Conversely, if the heap is cleaned up after a GC, so that the live objects use much less space, then there is no problem.
Given that you are not getting an out-of-memory error, and you are testing the application in a way that mimics production usage, it would suggest there is no problem.
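One way to sanity-check this yourself is to look at heap usage just after a collection rather than at its peak. A minimal sketch using the standard java.lang.management API (the class name here is mine; System.gc()/MemoryMXBean.gc() is only a hint to the JVM, so treat the number as an approximation of the live set):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class LiveSetCheck {
    // Heap bytes in use after requesting a GC -- a rough
    // approximation of the live set.
    static long usedAfterGc() {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        mem.gc(); // equivalent to System.gc(); only a hint
        return mem.getHeapMemoryUsage().getUsed();
    }

    public static void main(String[] args) {
        long live = usedAfterGc();
        long max = ManagementFactory.getMemoryMXBean()
                .getHeapMemoryUsage().getMax();
        System.out.println("live set ~" + live + " of " + max + " bytes");
    }
}
```

If the post-GC number stays well below the maximum, the 99% peaks just mean the collector lets the heap fill before running, which is normal.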

Similar Messages

  • Capturing the JVM heap usage information to a log

    In the WebLogic 6.1sp3 console, under Monitoring/Performance, a graph is
    displayed with historical JVM heap usage information. Is there any way to
    capture this information to a log?

    For heap size before and after each gc, you could pass the -verbose:gc option to the JVM
    on startup:
    WLS C:\alex>java -verbose:gc weblogic.Admin PING 10 10
    [GC 512K->154K(1984K), 0.0068905 secs]
    [GC 666K->164K(1984K), 0.0069037 secs]
    [GC 676K->329K(1984K), 0.0029822 secs]
    [GC 841K->451K(1984K), 0.0038960 secs]
    [GC 963K->500K(1984K), 0.0015452 secs]
    [GC 1012K->598K(1984K), 0.0027509 secs]
    [GC 1110K->608K(1984K), 0.0029370 secs]
    [GC 1120K->754K(1984K), 0.0027361 secs]
    [GC 1266K->791K(1984K), 0.0019639 secs]
    [GC 1303K->869K(1984K), 0.0028314 secs]
    [GC 1381K->859K(1984K), 0.0012957 secs]
    [GC 1367K->867K(1984K), 0.0012504 secs]
    [GC 1379K->879K(1984K), 0.0018592 secs]
    [GC 1391K->941K(1984K), 0.0036871 secs]
    [GC 1453K->988K(1984K), 0.0027143 secs]
    Sending 10 pings of 10 bytes.
    RTT = ~47 milliseconds, or ~4 milliseconds/packet
    Looks like it might be too much info though...
    Cheerio,
    -alex
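If the console output is too noisy, HotSpot JVMs of that era can also write the same per-GC lines to a file at startup rather than to stdout (flag availability varies by JVM vendor and version; JRockit, for instance, uses -Xverbose:memory instead):

```
# HotSpot: append each GC event to gc.log instead of stdout
java -verbose:gc -Xloggc:gc.log weblogic.Admin PING 10 10
```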

  • Does entity cache cause high heap usage ? better setClearCacheOnCommit ?

    Hi all,
    Hi all,
    During peak load (150-200 users) of our production ADF application (10.1.3.3), heap usage can reach 3GB, causing the JVM to be very busy doing frequent GC.
    Is this possibly because of the (by default uncleared) entity cache?
    What is the implication if I do setClearCacheOnCommit()?
    Thank you for your help,
    xtanto

    The EO cache will be cleared when the AM is released in stateless mode. By default that would occur when your web session times out, but you can eagerly release it in stateless mode (when the user is finished with the task that uses that AM).
    Using setClearCacheOnCommit() will clear the EO cache more eagerly. However, doing so also clears the VO caches for the VOs related to those EOs, so it may end up causing more database requerying than you were doing before. Effectively, after a commit you'll need to requery any data that's needed for the subsequent pages the user visits. If your workflow is such that the user does not commit and then continue processing other rows you've already queried, it might be a slight overall win on memory usage. However, if the user does issue a commit (say, from an edit form) and then returns to a "list" page to process some other record, clearCacheOnCommit=true will force the list page to requery its data (which it's not doing now, while the entity cache isn't being eagerly cleared).
    So, like many performance-related questions, it depends on exactly what your app is doing.
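The memory-versus-requery tradeoff described above can be seen in miniature with a toy cache. This is an illustration only, not the ADF API; the class and counter are invented for the example:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of an entity cache that can optionally be cleared on commit.
public class ClearOnCommitDemo {
    private final Map<Integer, String> cache = new HashMap<>();
    private final boolean clearCacheOnCommit;
    int databaseQueries = 0; // counts simulated database round trips

    ClearOnCommitDemo(boolean clearCacheOnCommit) {
        this.clearCacheOnCommit = clearCacheOnCommit;
    }

    String find(int id) {
        return cache.computeIfAbsent(id, k -> {
            databaseQueries++;            // cache miss: simulated requery
            return "row-" + k;
        });
    }

    void commit() {
        if (clearCacheOnCommit) cache.clear(); // frees heap, forces requery
    }

    public static void main(String[] args) {
        ClearOnCommitDemo keep = new ClearOnCommitDemo(false);
        ClearOnCommitDemo clear = new ClearOnCommitDemo(true);
        for (ClearOnCommitDemo d : new ClearOnCommitDemo[] {keep, clear}) {
            d.find(1); d.commit(); d.find(1); // edit, commit, revisit list page
        }
        System.out.println("keep=" + keep.databaseQueries
                + " clear=" + clear.databaseQueries); // keep=1 clear=2
    }
}
```

With clearing enabled the revisit after commit costs a second query; without it, the cached row is reused at the price of keeping it on the heap.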

  • Crystal Report export to PDF cause high Heap usage ?

    Hi all,
    As part of the reporting integrated with our JSF/JSP application, a Crystal report is converted to PDF and then sent to the browser for the user to display. Meanwhile, during peak load our heap usage can reach 3.5GB - 4GB, so I suspect the unclosed ByteArrayInputStream is the cause.
    (This is a production application, so I am collecting information before changing the code.)
    Could the unclosed ByteArrayInputStream really cause the problem? (The code is below.)
    Thank you,
    Krist
    ByteArrayInputStream byteArrayInputStream = (ByteArrayInputStream)
            reportClientDoc.getPrintOutputController().export(exportOptions);
    reportClientDoc.close();
    writeToBrowser(byteArrayInputStream, response, "application/csv", EXPORT_FILE);

    private void writeToBrowser(ByteArrayInputStream byteArrayInputStream,
            HttpServletResponse response, String mimetype, String exportFile)
            throws Exception {
        byte[] buffer = new byte[byteArrayInputStream.available()];
        int bytesRead = 0;
        response.reset();
        response.setHeader("Content-disposition", "inline;filename=" + exportFile);
        response.setContentType(mimetype);
        // Stream the byte array to the client.
        while ((bytesRead = byteArrayInputStream.read(buffer)) != -1) {
            response.getOutputStream().write(buffer, 0, bytesRead);
        }
        // Flush and close the output stream.
        response.getOutputStream().flush();
        response.getOutputStream().close();
    }

    I do not know if my solution to my heap problem will help any of you, but I thought I would post it here
    in case you or others come looking for possible solutions.
    I created a very simple report with 2 groups and not much in the way of complex functions. While reporting against
    about 100 pages of output everything worked fine, but as soon as we pushed the report up to 500+ pages we got all sorts
    of issues:
    java.lang.OutOfMemoryError: Java heap space
    After much hair pulling and trial and error, I discovered that the issue came about where I did not declare formula variables as Local. I was concatenating various street address details for the envelope windows.
    Stringvar Address;           // I was using this declaration
    Global Stringvar Address;    // Specific Global declaration
    Local Stringvar Address;     // Changed to this declaration
    After changing to Local, my report now runs with no hassles, and the memory usage while exporting the report has gone from maxing out at over 1GB to almost nothing (it doesn't even register).
    I am sure someone can come up with a better explanation and give reasons, but I just thought I would share.
    Cheers
    Darren

  • Could this unclosed() byteArrayInputStream cause high Heap usage ?

    Hi all,
    As part of the reporting integrated with our JSF/JSP application, the
    report is converted to PDF and then sent to the browser for the user to
    display. Meanwhile, during peak load our heap usage can reach 3.5GB - 4GB,
    so I suspect the unclosed ByteArrayInputStream is the cause.
    (This is a production application, so I am collecting information
    before changing the code.)
    Could the unclosed ByteArrayInputStream really cause the problem?
    (The code is below.)
    Thank you,
    Krist
    ByteArrayInputStream byteArrayInputStream = (ByteArrayInputStream)
            reportClientDoc.getPrintOutputController().export(exportOptions);
    reportClientDoc.close();
    writeToBrowser(byteArrayInputStream, response, "application/csv", EXPORT_FILE);

    private void writeToBrowser(ByteArrayInputStream byteArrayInputStream,
            HttpServletResponse response, String mimetype, String exportFile)
            throws Exception {
        byte[] buffer = new byte[byteArrayInputStream.available()];
        int bytesRead = 0;
        response.reset();
        response.setHeader("Content-disposition", "inline;filename=" + exportFile);
        response.setContentType(mimetype);
        // Stream the byte array to the client.
        while ((bytesRead = byteArrayInputStream.read(buffer)) != -1) {
            response.getOutputStream().write(buffer, 0, bytesRead);
        }
        // Flush and close the output stream.
        response.getOutputStream().flush();
        response.getOutputStream().close();
    }

    xtanto wrote:
    Is the unclosed ByteArrayInputStream really causing the problem? The source code from 1.6.12 shows that close() is a no-op:
    class ByteArrayInputStream extends InputStream {
        /**
         * Closing a <tt>ByteArrayInputStream</tt> has no effect. The methods in
         * this class can be called after the stream has been closed without
         * generating an <tt>IOException</tt>.
         */
        public void close() throws IOException {
        }
    }
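Since close() is a no-op, the stream itself is not the leak; the heap cost comes from buffering the whole exported report in one byte array, plus sizing the copy buffer via available() (the same size again). A hedged rewrite of the copy loop that at least bounds the buffer; the class name is mine, and the demo byte array stands in for the real export:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class StreamCopy {
    // Copies with a fixed 8 KB buffer instead of one sized to the whole
    // payload; works for any InputStream, not just ByteArrayInputStream.
    static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[8192];
        long total = 0;
        int n;
        while ((n = in.read(buffer)) != -1) {
            out.write(buffer, 0, n);
            total += n;
        }
        out.flush();
        return total;
    }

    public static void main(String[] args) throws IOException {
        byte[] report = new byte[100_000]; // stand-in for the exported PDF bytes
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        System.out.println(copy(new ByteArrayInputStream(report), sink));
    }
}
```

In the servlet, `sink` would be `response.getOutputStream()`; dropping the reference to the byte array as soon as the copy finishes lets GC reclaim it promptly.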

  • Find out current old heap usage from within the process

    Hello!
    We use the CMS garbage collector and need a way to find out how much memory is used of the old heap by reachable objects. This we have to do from within the process (not using jvmstat or jstat etc.).
    Since there is no way to distinguish between reachable and non-reachable objects (except for traversing the entire heap... -- or are there other possibilities?) our idea is to get the amount of used memory right after a garbage collection in the old heap.
    Using Java 1.5, this can be done by:
    java.lang.management.MemoryPoolMXBean pool = <Pool for Old Generation>;
    pool.getUsage().getUsed();
    However, java.lang.management is only available in Java 1.5.
    Therefore my first question: Is there a similar way of finding out old heap usage in Java 1.4?
    There is another problem with this method: to call pool.getUsage().getUsed(), one has to know when a GC has occurred (this could be done by calling it at an interval of x seconds -- if the current value is lower than the one before, a GC must have occurred). A better way would be to use pool.getCollectionUsage().getUsed();, but this seems not to work for the CMS collector.
    Second question: Is pool.getCollectionUsage().getUsed(); really not working with CMS, or are we just doing it in a wrong way? Are there other ways of finding out the used memory in the old heap after a GC even when using the CMS?
    Thanks for any help!
    Regards,
    Nicolas Michael

    Hi Nicolas,
    There is no API in 1.4 to get the after-GC memory usage of the old generation. The closest thing is (Runtime.totalMemory - Runtime.freeMemory), but that is the approximate amount of memory used for the whole heap (not just the old generation).
    MemoryPoolMXBean.getCollectionUsage() returns the after GC MemoryUsage. This method should work for all collectors. I have a simple test case that shows it working fine with CMS. It shows the same value as the -XX:+PrintGCDetails shows.
    If you have a test case showing that this method doesn't work correctly, please submit a bug along with the test case. We'll investigate it.
    Thanks
    Mandy
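For the Java 1.5+ path, locating the old-generation pool and reading its after-GC usage can be sketched like this (the class name is mine; pool names vary by collector, e.g. "Tenured Gen", "PS Old Gen", "CMS Old Gen", "G1 Old Gen", so the name match is heuristic, and getCollectionUsage() returns null before the first collection):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class OldGenAfterGc {
    // Heuristically finds the old-generation pool by name.
    static MemoryPoolMXBean oldGenPool() {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            String name = pool.getName();
            if (name.contains("Old Gen") || name.contains("Tenured")) {
                return pool;
            }
        }
        return null; // collector with different pool naming
    }

    public static void main(String[] args) {
        MemoryPoolMXBean old = oldGenPool();
        if (old == null) {
            System.out.println("no old-gen pool found on this JVM");
        } else if (old.getCollectionUsage() == null) {
            System.out.println(old.getName() + ": no collection has run yet");
        } else {
            System.out.println(old.getName() + " after last GC: "
                    + old.getCollectionUsage().getUsed() + " bytes");
        }
    }
}
```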

  • Keeping Heap Usage lower than a given threshold level

    Hello,
    We are trying to find out which are the most suitable parameters for the gc for our application.
    We have an application using a 1GB heap.
    We have tried different combinations of parameters, such as generational gc vs single spaced, different nursery sizes, dynamic gc with priority on memory throughput vs pause time, …
    We plot the usage of the heap before and after the gc has run.
    There is one common thing that we see with all the different configurations: The heap usage always reaches levels of around 95% of the heap size.
    We would like to find a way to reduce the heap usage (even if it increases the pause times. If the pause time is higher, we will analyze the final impact on our application responsiveness and decide afterwards).
    The ideal would be to find a way to keep it under a given threshold level (let’s say, 70% of the total heap size).
    Your ideas will be very welcome.
    Thanks and regards,
    Ramiro

    I'm not sure I understand your question, but I'll give you an answer and
    you can tell me if it helped :-).
    Basically the gc doesn't trigger until the heap is full. For
    generational GCs, nursery collections will trigger when the nursery is
    full. Some objects get promoted, and when the heap is entirely full an
    old-space collection is performed.
    So you can't really get JRockit to GC before the heap is full.
    If you want to know how small a heap you can use, look at how much space
    is used after a full GC. Add some margin of error, then use that value
    for -Xmx.
    I don't know what tool you're using for plotting the heap usage, but you
    could use JRockit's Management Console (bin/jrcc). Also, if you want to
    know where your heap memory is going, try the memleak detector from
    http://dev2dev.bea.com/jrockit/tools.html.
    Regards //Johan

  • High heap usage even when system idle, is this caused by AMPool setting ?

    Hi All,
    We are running ADF BC 10.1.3.3 with 2 OC4J JVM instances. Max heap is 3.5 GB.
    I notice that when the system is idle, the heap usage for each JVM can be between 1.5 and 2 GB. This is confusing, because essentially no users are accessing the application.
    I suspect that the settings in the Application Module pool may be the cause, i.e.:
    <jbo.recyclethreshold>50</jbo.recyclethreshold>
    <jbo.ampool.maxavailablesize>90</jbo.ampool.maxavailablesize>
    <jbo.ampool.minavailablesize>25</jbo.ampool.minavailablesize>
    Is there anything wrong with these settings that may cause the AM Pool Monitor to busy itself doing the wrong thing?
    Thank you for your help,
    xtanto


  • Constant increase of resident memory while heap usage is constant

    Why does the memory use (resident memory reported by top) grow to be much larger than the heap? I have a Linux CentOS 5 server running JBoss 5 with JDK1.6_29. We are using the latest collectors and a maximum heap of 12GB. I can see that the young generation is never larger than 3.5GB, the old generation is never larger than 300MB, and the perm gen is never larger than 100MB. I can see the heap being collected when it approaches 3.8GB, and the JVM has set limits of 4GB and 8GB for the young and old generations.
    My settings are as follows:
    JAVA_OPTS="-d64 -Xms12288m -Xmx12288m -XX:MaxPermSize=512m -XX:+UseParallelGC -XX:+UseParallelOldGC -XX:GCTimeRatio=19 -XX:+AggressiveOpts -Dsun.lang.ClassLoader.allowArraySyntax=true -Dorg.jboss.resolver.warning=true -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000"
    The physical memory of the server is about 16GB. When I restart JBoss, the resident memory is about 4GB, and during the week it increases steadily up to 8.4GB, at which point we start to see the OS paging into swap. Is the resident memory mainly non-heap, or soft references, or both?
    Why should the server page anything when the resident memory of the JBoss process reaches about 8.5GB, when there is still another 8GB of total memory available to the OS?

    The answer to all of the above comes from understanding, in detail, how virtual memory works and what impact its management (by the OS) has on various tools.
    You might want to start by researching exactly what top tells you and what it doesn't.
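As a starting point for the "is resident memory mainly non-heap?" question: RSS covers the Java heap plus everything else the process maps -- thread stacks, the JIT code cache, the permanent generation, direct buffers, and native allocations. The management API can at least show the non-heap portion the JVM itself accounts for (a rough first cut, not a full native-memory breakdown; the class name is mine):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class NonHeapSnapshot {
    public static void main(String[] args) {
        MemoryUsage heap =
                ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        MemoryUsage nonHeap =
                ManagementFactory.getMemoryMXBean().getNonHeapMemoryUsage();
        int threads = ManagementFactory.getThreadMXBean().getThreadCount();
        System.out.println("heap used:     " + heap.getUsed());
        System.out.println("non-heap used: " + nonHeap.getUsed()); // code cache, perm gen, ...
        System.out.println("live threads:  " + threads
                + " (each carries a native stack outside the heap)");
    }
}
```

Comparing these numbers against top's RSS over the week would show whether the growth is JVM-accounted or purely native.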

  • Desktop Heap usage with 1 Application Pool running as Web Garden

    We just upgraded our web server from 2003 to 2012 R2. Originally we set it up using 8 worker processes, which was good enough on our 2003 server with 8GB of memory. Since our new server has 16GB, I increased the worker processes to 16. It worked fine until we suddenly got mysterious errors (note: we didn't change any web files during this migration; everything worked fine for a day, and then the problem just started appearing throughout the day). As a quick fix we restarted the application pool, and we noticed strange errors in Event Viewer about COM+ (Event ID 4689), which led us to suspect it's a Desktop Heap issue.
    Could anybody please direct us on how to troubleshoot? We want to increase performance and use the memory more efficiently, but it will be hard if we do trial-and-error to find the good number of worker process to use.
    FYI the application pool is set to use Application Pool Identity. We also keep all the registry settings as default (HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\SubSystems\Windows\ SharedSection=1024,20480,768
    and HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\WAS\Parameters\UseSharedWPDesktop=0)
    Thanks.

    This one might help.
    http://www.airesoft.co.uk/desktopheapmonitor#install
    Regards, Dave Patrick ....
    Microsoft Certified Professional
    Microsoft MVP [Windows]
    Disclaimer: This posting is provided "AS IS" with no warranties or guarantees , and confers no rights.

  • Websphere/Oracle 11 - much more Heap Usage than with Oracle 10

    Hi all,
    While testing our application with Oracle 11 (previously we had Oracle 10), we saw that our server uses much more heap space.
    It seems to have something to do with T4CConnection/T4CPreparedStatement; there are 500 objects of T4CPreparedStatement allocated. Someone told me that Oracle 11 uses SoftReferences to keep the connection pool, but we don't need that.
    Is that correct? Could that be the reason for the increased heap space? If yes, how can we avoid connection pooling?
    Thanks a lot!!
    Edited by: 840550 on 28.02.2011 23:37

    I suspect you are getting a connection for each user and holding onto it for the duration that the user is logged in, and also that you are not closing ResultSets and PreparedStatements correctly. Rewrite all your code to get/use/close a connection each time it's needed, as quickly as possible, in a try/catch/finally block. There is no performance penalty in opening and closing connections countless times in your code (actually, you are not really closing the connection, but returning it to the pool).
    Here is a previous post I provided on how to get/use/close a connection:
    Re: can not get the right query result using JDBC
    Note: within the above example, you can pass the already open connection to another function within the above function for it to use. Within that other function, you create a new preparedStatement and resultSet. After using it, you close the resultSet and preparedStatement (in that second function), but not the connection (its closed in the outer function).
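The get/use/close pattern from the reply can be sketched with stand-in classes so the close ordering is observable without a real database (Resource is an invented stub, not JDBC; with real JDBC the three objects would be Connection, PreparedStatement, and ResultSet):

```java
import java.util.ArrayList;
import java.util.List;

public class CloseOrderDemo {
    static final List<String> closed = new ArrayList<>();

    // Stand-in for a JDBC resource, so we can record close() calls.
    static class Resource {
        final String name;
        Resource(String name) { this.name = name; }
        void close() { closed.add(name); }
    }

    // Innermost resources are closed first, and each close() runs
    // even if the "use" step throws.
    static void getUseClose() {
        Resource conn = new Resource("connection"); // borrow from the pool
        try {
            Resource stmt = new Resource("statement");
            try {
                Resource rs = new Resource("resultset");
                try {
                    // ... use the result set ...
                } finally {
                    rs.close();
                }
            } finally {
                stmt.close();
            }
        } finally {
            conn.close(); // really: return the connection to the pool
        }
    }

    public static void main(String[] args) {
        getUseClose();
        System.out.println(closed); // [resultset, statement, connection]
    }
}
```

On Java 7+ the same ordering falls out of try-with-resources, but the nested try/finally form matches the era of this thread.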

  • Engine heap usage exceeded when calling Subflow

    Hello all,
    We are running a single-server UCCX 7.0(1)SR05_Build504 with two CUCM version 7.1.5.11901-1 servers.
    We recently added a subflow call to seven of our scripts. The subflow checks an XML file that we loaded to Applications > Document Management in UCCX. The XML file is very small. See screenshot.
    This is the Subflow:
    We had no problems when we had only one script calling this subflow. However, now that we have seven scripts calling it, we get this message when attempting to update or refresh an application.
    I found this article (see link below), even though it referred to version 8.0, and as soon as I removed the subflow call from the seven scripts, this message stopped appearing. Do I need to somehow delete a cached document at the end of the subflow script, or reset some value? We would like to continue using the subflow for checking for holidays.
    article - http://www.cisco.com/en/US/products/sw/custcosw/ps1846/products_tech_note09186a0080b51d17.shtml
    Thank you in advance for any help anyone can provide. :-)

    You have to open a Cisco TAC case and they will give you access to a special download area.
    ES05 was issued around June last year.
    7.0(2) was released around June this year, and that bug is fixed in 7.0(2).
    If you have a contract you can order the 7.0(2) upgrade via the PUT tool. You can't download it; Cisco has to ship you the CD.
    http://www.cisco.com/upgrade
    If you don't have a contract, you have a problem.
    Regards
    Graham

  • XML parser memory usage

    I am trying to prove the advantage of SAX (and StAX) parsers, i.e. that memory usage is very low and roughly constant over time while parsing a large XML file.
    DOM APIs create a DOM that is stored in memory, so memory usage is at least the file size.
    To analyse SAX heap usage over time I used the following source:
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.InputStream;
    import org.xml.sax.InputSource;
    import org.xml.sax.XMLReader;
    import org.xml.sax.helpers.XMLReaderFactory;
    public class ParserMemTest {
        public static void main(String[] args) {
            System.out.println("Start");
            try {
                InputStream xmlIS = new FileInputStream(
                        new File("xmlTestFile-0.xml"));
                HeapAnalyser ha = new HeapAnalyser(xmlIS);
                InputSource insource = new InputSource(ha);
                XMLReader SAX2parser = XMLReaderFactory.createXMLReader();
                //SAX2EventHandler handler = new SAX2EventHandler();
                //SAX2parser.setContentHandler(handler);
                SAX2parser.parse(insource);
            } catch (Exception e) {
                e.printStackTrace();
            }
            System.out.println("Finished.");
        }
    }
    and the HeapAnalyser class:
    import java.io.IOException;
    import java.io.InputStream;

    public class HeapAnalyser extends InputStream {
        private InputStream is = null;
        private int byteCounter = 0;
        private int lastByteCounter = 0;
        private int byteStepLogging = 200000; // bytes between logged measurements

        public HeapAnalyser(InputStream is) {
            this.is = is;
        }

        @Override
        public int read() throws IOException {
            int b = is.read();
            if (b != -1) {
                byteCounter++;
            }
            return b;
        }

        @Override
        public int read(byte b[]) throws IOException {
            int i = is.read(b);
            if (i != -1) {
                byteCounter += i;
            }
            // LOG
            if ((byteCounter - lastByteCounter) > byteStepLogging) {
                lastByteCounter = byteCounter;
                System.out.println(byteCounter + ": " + getHeapSize() + " bytes.");
            }
            return i;
        }

        @Override
        public int read(byte b[], int off, int len) throws IOException {
            int i = is.read(b, off, len);
            if (i != -1) {
                byteCounter += i;
            }
            // LOG
            if ((byteCounter - lastByteCounter) > byteStepLogging) {
                lastByteCounter = byteCounter;
                System.out.println(byteCounter + ": " + getHeapSize() + " bytes.");
            }
            return i;
        }

        public static String getHeapSize() {
            Runtime.getRuntime().gc();
            // note: divides by 1000, so the logged "bytes" are really KB
            return Long.toString((Runtime.getRuntime().totalMemory()
                    - Runtime.getRuntime().freeMemory()) / 1000);
        }
    }
    and these are the results:
    Start
    204728: 1013 bytes.
    409415: 1713 bytes.
    614073: 2400 bytes.
    818763: 3085 bytes.
    1023449: 3772 bytes.
    1228130: 4458 bytes.
    1432802: 5145 bytes.
    1637473: 5832 bytes.
    1842118: 6519 bytes.
    2046789: 7206 bytes.
    2251470: 7894 bytes.
    2456134: 8580 bytes.
    2660814: 9268 bytes.
    2865496: 9955 bytes.
    3070177: 10625 bytes.
    3274775: 11287 bytes.
    3479418: 11950 bytes.
    3684031: 12612 bytes.
    3888695: 13275 bytes.
    4093364: 13937 bytes.
    4298027: 14600 bytes.
    4502694: 15262 bytes.
    4707372: 15925 bytes.
    4912040: 16586 bytes.
    5116662: 17249 bytes.
    5321331: 17912 bytes.
    5525975: 18574 bytes.
    5730640: 19237 bytes.
    5935308: 19898 bytes.
    Finished.
    As you can see, while parsing the XML file (200k elements, about 6MB) the heap memory rises. I would expect this result when a DOM API is analysed, but not with SAX.
    What could be the reason? The Runtime class measurement, the SAX implementation, or something else?
    thanks!

    http://img214.imageshack.us/img214/7277/jprobeparser.jpg
    Test with jProbe while parsing the 64MB XML file.
    Testsystem: Windows 7 64bit, java version "1.6.0_20" Java(TM) SE Runtime Environment (build 1.6.0_20-b02), Java HotSpot(TM) 64-Bit Server VM (build 16.3-b01, mixed mode), Xerces 2.10.0
    Eclipse Console System output:
    25818828: 116752 bytes.
    26018980: 117948 bytes.
    26219154: 99503 bytes.
    26419322: 100852 bytes.
    26619463: 102275 bytes.
    26819642: 103624 bytes.
    27019805: 104974 bytes.
    27220008: 105649 bytes.
    27420115: 106998 bytes.
    27620234: 108348 bytes.
    27820330: 109697 bytes.
    Exception in thread "main" java.lang.OutOfMemoryError: PermGen space
    at java.lang.String.intern(Native Method)
    at org.apache.xerces.util.SymbolTable$Entry.<init>(Unknown Source)
    at org.apache.xerces.util.SymbolTable.addSymbol(Unknown Source)
    at org.apache.xerces.impl.XMLEntityScanner.scanQName(Unknown Source)
    at org.apache.xerces.impl.XMLNSDocumentScannerImpl.scanStartElement(Unknown Source)
    at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl$FragmentContentDispatcher.dispatch(Unknown Source)
    at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source)
    at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
    at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
    at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
    at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown Source)
    at ParserMemTest.main(ParserMemTest.java:47)
    Edited by: SUNMrFlipp on Sep 13, 2010 11:48 AM
    Edited by: SUNMrFlipp on Sep 13, 2010 11:50 AM

  • Java heap won't shrink

    java version "1.6.0_11"
    Java(TM) SE Runtime Environment (build 1.6.0_11-b03)
    Java HotSpot(TM) 64-Bit Server VM (build 11.0-b16, mixed mode)
    I have an application that sees bursts of activity and long periods of no work. Recently I checked and the memory usage was 104,856,816 (used) / 3,145,924,608 (total). Why didn't the JVM shrink the heap? The default value of MaxHeapFreeRatio is supposed to be 70.

    The HotSpot JVM prefers to leave the heap expanded, rather than shrinking it to your current usage and then having to expand it back up when work comes in. So there's some hysteresis built into the heap shrinking algorithm. If your heap usage stayed small across several full collections, I would expect the heap to shrink. On the other hand, if you aren't doing any work, then I wouldn't expect there to be any full collections.
    You don't say what your command line parameters are. If, for example you used "-Xms3g", you've told the JVM that you want the heap to start at and shrink no smaller than 3GB. In that case, I would expect the heap to start out at 3GB and stay at least that large no matter how little of it you were using at any time.
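If shrinking is the goal, the free-ratio flags can be lowered to make HotSpot release memory back to the OS more aggressively after full collections. A hedged sketch of the relevant JAVA_OPTS; the exact behavior varies by collector and JVM version, and the parallel collector in particular may still keep the heap expanded:

```
# Start small, allow growth to 3 GB, and shrink when more than
# 40% of the heap is free after a full GC (defaults: 40/70).
-Xms256m -Xmx3g -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40
```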

  • System heap value in LC10

    Hi,
    This is related to liveCache 7.5 on AIX.
    I go to LC10 - liveCache Monitoring - Current Status - Memory Areas - Heap Usage.
    On the right side, in the "Current value" section, there is a "System Heap". How does this value get assigned? I was under the impression that the "System Heap" value and OMS_HEAP_LIMIT are the same, but they do not seem to be.
    What is the impact of "System Heap" and how can we increase it? In production it is 30% higher than OMS_HEAP_LIMIT. In the test box it is the same.
    Please let me know if anybody has a good understanding of this issue.
    Thank you.
    Sume

    1) "What is dimensions of data cache and how it can be increased.
    Does it mean that increase the data cache. I know how to increase
    the data cache."
    -> As I wrote that you could increase the value of the CACHE_SIZE
       < Size of the I/O buffer cache in pages > liveCache parameter.
       As you know, the primary consumers of the main memory managed in
       the I/O cache are the converter and the data cache.
    You could monitor the data cache usage in LC10 -> liveCache:Monitoring
    -> Choose Current Status -> Memory Areas -> Caches.
    => you will see the current size values in KB & pages for
       I/O Buffer Cache
          Data-Cache    -> current size of the data cache !!
          Converter
          Other ("Sonstiges")
        Catalog-Cache
        Sequence-Cache
       && the current data cache usage information ...
    You could review the DBAN_OMS_CACHE_OCCUPANCY.csv file of the DB Analyzer
    Statistics information.
    Please check the Permanently Used Area of the Data Area in liveCache, and
    review the statistics in the DBAN_FILLING.csv file, if the DB Analyzer is
    running. If the applications are using all the data created in liveCache,
    then for the best performance that data needs to be cached => this will define
    the dimensions of your data cache ... + the history data is saved as
    pages in the data cache.
    If you have problems identifying the values for the liveCache parameters
    on your system => please create a ticket for the component 'BC-DB-LVC'.
    For SAP liveCache documentation in English: http://help.sap.com/saphelp_nw04/helpdata/en/f2/0271f49770f0498d32844fc0283645/frameset.htm
    < -> Database Administration in CCMS: SAP liveCache -> liveCache Assistant ->
    liveCache: Monitoring -> Current Status -> Memory Areas < -> Caches !! >
    2) "How can I find the current filling level in % of OMS Heap.
    I know that total size is defined by OMS_HEAP_LIMIT. Please let me
    know if I am missing anything."
    As you know, the liveCache allocates memory statically as well as
    dynamically. The DATA CACHE is allocated statically.
    When the database instance is started, the I/O buffer cache is created in
    the main memory in accordance with the size entered in the general
    database parameter CACHE_SIZE.
    Additionally the liveCache allocates the OMS HEAP dynamically. 
    The amount of the OMS HEAP memory depends on the runtime of OMS versions              
    and the amount of data which is used by the transactions.
    The OMS heap can grow dynamically until it reaches the maximum size
    specified in the liveCache parameter OMS_HEAP_LIMIT.
    For SAP liveCache documentation in English: http://help.sap.com/saphelp_nw04/helpdata/en/f2/0271f49770f0498d32844fc0283645/frameset.htm
    < -> Database Administration in CCMS: SAP liveCache -> liveCache Assistant ->
    liveCache: Monitoring -> Current Status -> Memory Areas < -> Heap Usage !! >
    So:
    -> You could use Heap Usage display in the LC10.
       < Currently used column -
           Size of the memory actually occupied by the OMS heap
        You could mark the column + click on 'Sum' => see the sum ... >
    -> You could use the Memory node in the liveCache Alert Monitor under
       'Space Management' for LCA connection & you will see, for example:
           Heap Usage                         1 %     < day/time> 
           Heap Usage in KB                   18999 KB < day/time > 
    -> Sorry, I didn't fully understand the other questions you posted.
       Could you please give a reference to the documents or notes where you saw
       those sentences.
    Thank you and best regards, Natalia Khlopina
