Heap dump file size vs heap size

Hi,
I'd like to clarify a doubt.
At the moment we're analyzing Sun JVM heap dumps from a Solaris platform.
The observation is that the heap dump file is around 1.1 GB, while after loading it into SAP Memory Analyzer the statistics show "Heap: 193,656,968", which, as I understood it, is the size of the heap.
After I run:
jmap -heap <PID>
I get the following information:
using thread-local object allocation
Parallel GC with 8 thread(s)
Heap Configuration:
   MinHeapFreeRatio = 40
   MaxHeapFreeRatio = 70
   MaxHeapSize      = 3221225472 (3072.0MB)
   NewSize          = 2228224 (2.125MB)
   MaxNewSize       = 4294901760 (4095.9375MB)
   OldSize          = 1441792 (1.375MB)
   NewRatio         = 2
   SurvivorRatio    = 32
   PermSize         = 16777216 (16.0MB)
   MaxPermSize      = 67108864 (64.0MB)
Heap Usage:
PS Young Generation
Eden Space:
   capacity = 288620544 (275.25MB)
   used     = 26593352 (25.36139678955078MB)
   free     = 262027192 (249.88860321044922MB)
   9.213949787302736% used
From Space:
   capacity = 2555904 (2.4375MB)
   used     = 467176 (0.44553375244140625MB)
   free     = 2088728 (1.9919662475585938MB)
   18.27830779246795% used
To Space:
   capacity = 2490368 (2.375MB)
   used     = 0 (0.0MB)
   free     = 2490368 (2.375MB)
   0.0% used
PS Old Generation
   capacity = 1568669696 (1496.0MB)
   used     = 1101274224 (1050.2569427490234MB)
   free     = 467395472 (445.74305725097656MB)
   70.20434109284916% used
PS Perm Generation
   capacity = 67108864 (64.0MB)
   used     = 40103200 (38.245391845703125MB)
   free     = 27005664 (25.754608154296875MB)
   59.75842475891113% used
So I'm just wondering what this "Heap" value in the Statistic Information field of SAP Memory Analyzer actually is.
When I go to the Dominator Tree view and look at the Retained Heap column, the values roughly sum up to 193,656,968.
Could someone shed some more light on this?
Thanks
Michal

Hi Michal,
that does indeed look very odd. First, let me ask which version you are using? We had a problem in the past where classes loaded by the system class loader were not marked as garbage collection roots and hence were removed. This problem is fixed in the current version (1.1). If it is version 1.1, then I would love to have a look at the heap dump and find out whether the problem is on our side.
Having said that, this is what we do: after parsing the heap dump, we remove objects which are not reachable from garbage collection roots. This is necessary because the heap dump can contain garbage. For example, the mark-sweep-compact collector of the old/perm generation leaves some dead space behind in the form of int arrays or java.lang.Object instances to save time during the compacting phase: by leaving dead objects behind, not every live object has to be moved, which means not every object needs a new address. This is the kind of garbage we remove.
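In other words, the pass is essentially a mark phase over the object graph. Here is a hypothetical sketch of the idea (not Memory Analyzer's actual code; the object ids and the reference map are stand-ins for the parsed dump):

    import java.util.ArrayDeque;
    import java.util.Arrays;
    import java.util.Collections;
    import java.util.Deque;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    final class ReachabilitySketch {
        // Mark every object reachable from the GC roots; anything unmarked is garbage.
        static Set<Long> reachable(Set<Long> gcRoots, Map<Long, List<Long>> outboundRefs) {
            Set<Long> marked = new HashSet<Long>(gcRoots);
            Deque<Long> pending = new ArrayDeque<Long>(gcRoots);
            while (!pending.isEmpty()) {
                Long objectId = pending.poll();
                for (Long ref : outboundRefs.getOrDefault(objectId, Collections.<Long>emptyList())) {
                    if (marked.add(ref)) {   // first visit: remember it and follow its references
                        pending.add(ref);
                    }
                }
            }
            return marked;                   // objects not in this set are discarded
        }

        public static void main(String[] args) {
            Map<Long, List<Long>> refs = new HashMap<Long, List<Long>>();
            refs.put(1L, Arrays.asList(2L, 3L));   // the GC root 1 references 2 and 3
            refs.put(4L, Arrays.asList(5L));       // 4 and 5 are unreachable, i.e. garbage
            Set<Long> roots = new HashSet<Long>(Arrays.asList(1L));
            System.out.println(reachable(roots, refs));   // prints the ids 1, 2, 3 (in some order)
        }
    }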
Of course, we do not remove objects kept alive only by weak or soft references. To see what memory is kept alive only through weak or soft references, one can run the "Soft Reference Statistics" from the menu.
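To illustrate what "kept alive only by a soft reference" means, here is a made-up example (class name and sizes are purely illustrative); the byte array below stays in the heap dump, and the "Soft Reference Statistics" query is what reports how much memory is retained this way:

    import java.lang.ref.SoftReference;

    public class SoftlyHeldExample {
        public static void main(String[] args) {
            // The only reference to the 10 MB array is a SoftReference.
            SoftReference<byte[]> cache = new SoftReference<byte[]>(new byte[10 * 1024 * 1024]);
            // The VM may clear it under memory pressure, but Memory Analyzer
            // does not treat it as unreachable garbage when parsing the dump.
            System.out.println(cache.get() != null ? "still softly reachable" : "already cleared");
        }
    }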
Kind regards,
   - Andreas.
Edited by: Andreas Buchen on Feb 14, 2008 6:23 PM

Similar Messages

  • Heap Dump file generation problem

    Hi,
    I've configured configtool to have these 2 parameters:
    -XX:+HeapDumpOnOutOfMemoryError
    -XX:+HeapDumpOnCtrlBreak
    In my understanding, with these 2 parameters the heap dump files will only be generated in 2 situations, i.e. when an out-of-memory error occurs, or when a user manually presses CTRL + BREAK in the MMC.
    1) Unfortunately, many heap dump files (9 in total) were generated when neither of the above situations occurred. I couldn't find "OutOfMemoryError" in the default trace, nor is the shallow heap size of those heap dump files anywhere near the memory limit. The consequence is that our server ran out of disk space.
    My question is: under what other circumstances will a heap dump file be generated?
    2) In the Memory Consumption graph (NWA (http://host:port/nwa) -> System Management -> Monitoring -> Java Systems Reports), the out-of-memory error occurred when memory usage reached about 80% of the allocated memory. What is the remaining 20% or so reserved for?
    Any help would be much appreciated.
    Thanks.

    Hi,
    Having the -XX:+HeapDumpOnCtrlBreak option makes the VM trigger a heap dump whenever a CTRL_BREAK event arrives. The same event is also used to trigger a thread dump, an action you can perform manually from the SAP Management Console; I think it is called "Dump stacks". So if someone was triggering thread dumps to analyze other kinds of problems, this had the side effect of also writing a heap dump.
    Additionally, the server itself may trigger a thread dump (and with it a heap dump, if the option is present). It does this, for example, when a timeout occurs during the start or stop of the server. A thread dump from such a moment lets us see, for example, which service is unable to start.
    Therefore, I would recommend that you keep only -XX:+HeapDumpOnOutOfMemoryError, as long as you don't plan to trigger any heap dumps on your own. That option causes the VM to write a heap dump only once - on the first appearance of an OutOfMemoryError.
    In case you need to trigger heap dumps manually, leave the -XX:+HeapDumpOnCtrlBreak option in place for the duration of the troubleshooting, but consider whether you want to keep it afterwards.
    If heap dumps were written because of an OutOfMemoryError, you should be able to see this in the dev_server file in /usr/sap/<SID>/<inst>/work/ . There you should also be able to see whether thread dumps were indeed triggered (just search for "Full Thread ").
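    To make that concrete, a minimal sketch of how the two parameters behave (keep only the first one for normal operation):
       -XX:+HeapDumpOnOutOfMemoryError   (heap dump written once, on the first OutOfMemoryError)
       -XX:+HeapDumpOnCtrlBreak          (heap dump written on every CTRL_BREAK / "Dump stacks" event)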
    I hope this helps.
    Regards,
    Krum

  • Analyse large heap dump file

    Hi,
    I have to analyse a large heap dump file (3.6 GB) from a production environment. However, when I open it in Eclipse MAT, it gives an OutOfMemoryError. I tried increasing the Eclipse workbench Java heap size as well, but it doesn't help. I also tried VisualVM. Can we split the heap dump file into smaller pieces? Or is there any way to set a maximum heap dump file size in the JVM options so that we collect heap dumps of a reasonable size?
    Thanks,
    Prasad

    Hi Prasad,
    Have you tried opening it in a 64-bit MAT on a 64-bit platform, with a large heap size and the CMS GC policy set in the MemoryAnalyzer.ini file? MAT is a good toolkit for analysing Java heap dump files; if it doesn't work, you can try Memory Dump Diagnostic for Java (MDD4J) in the 64-bit IBM Support Assistant, again with a large heap size.
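    For illustration, the MemoryAnalyzer.ini change mentioned above could look roughly like this (the heap size is an example value; keep the launcher lines that ship with MAT and only adjust what follows -vmargs):
       -vmargs
       -Xmx4g
       -XX:+UseConcMarkSweepGC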

  • Heap dump file - Generate to a different folder

    Hello,
    When the AS Java is generating the heap dump file, is it possible to generate it in a different folder rather than the standard one: /usr/sap// ?
    Best regards,
    Gonçalo  Mouro Vaz

    Hello Gonçalo
    I don't think this is possible.
    As per SAP Note 1004255:
    On the first occurrence (only) of an OutOfMemoryError, the JVM will write a heap dump in the /usr/sap/ directory.
    Can I ask why you would like it in a different folder?
    Is it a space issue?
    Thanks
    Kenny

  • JVMDG315: JVM Requesting Heap dump file

    Hi expert,
    Using a 10.2.0.4 DB with 11.5.10.2 on AIX 64-bit.
    While running an audit report that generates PDF-format output, it failed with the following error:
    JVMDG217: Dump Handler is Processing OutOfMemory - Please Wait.
    JVMDG315: JVM Requesting Heap dump file
    ..............................JVMDG318: Heap dump file written to /d04_testdbx/appltop/testcomn/admin/log/TEST_tajorn3/heapdump7545250.1301289300.phd
    JVMDG303: JVM Requesting Java core file
    JVMDG304: Java core file written to /d04_testdbx/appltop/testcomn/admin/log/TEST_tajorn3/javacore7545250.1301289344.txt
    JVMDG274: Dump Handler has Processed OutOfMemory.
    JVMST109: Insufficient space in Javaheap to satisfy allocation request
    Exception in thread "main" java.lang.OutOfMemoryError
         at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java(Compiled Code))
         at java.io.OutputStream.write(OutputStream.java(Compiled Code))
         at oracle.apps.xdo.common.log.LogOutputStream.write(LogOutputStream.java(Compiled Code))
         at oracle.apps.xdo.generator.pdf.PDFStream.output(PDFStream.java(Compiled Code))
         at oracle.apps.xdo.generator.pdf.PDFGenerator$ObjectManager.write(PDFGenerator.java(Compiled Code))
         at oracle.apps.xdo.generator.pdf.PDFGenerator$ObjectManager.writeWritable(PDFGenerator.java(Inlined Compiled Code))
         at oracle.apps.xdo.generator.pdf.PDFGenerator.closePage(PDFGenerator.java(Compiled Code))
         at oracle.apps.xdo.generator.pdf.PDFGenerator.newPage(PDFGenerator.java(Compiled Code))
         at oracle.apps.xdo.generator.ProxyGenerator.genNewPageWithSize(ProxyGenerator.java(Compiled Code))
         at oracle.apps.xdo.generator.ProxyGenerator.processCommand(ProxyGenerator.java(Compiled Code))
         at oracle.apps.xdo.generator.ProxyGenerator.processCommandsFromDataStream(ProxyGenerator.java(Compiled Code))
         at oracle.apps.xdo.generator.ProxyGenerator.close(ProxyGenerator.java:1230)
         at oracle.apps.xdo.template.fo.FOProcessingEngine.process(FOProcessingEngine.java:336)
         at oracle.apps.xdo.template.FOProcessor.generate(FOProcessor.java:1043)
         at oracle.apps.xdo.oa.schema.server.TemplateHelper.runProcessTemplate(TemplateHelper.java:5888)
         at oracle.apps.xdo.oa.schema.server.TemplateHelper.processTemplate(TemplateHelper.java:3593)
         at oracle.apps.pay.pdfgen.server.PayPDFGen.processForm(PayPDFGen.java:243)
         at oracle.apps.pay.pdfgen.server.PayPDFGen.runProgram(PayPDFGen.java:795)
         at oracle.apps.fnd.cp.request.Run.main(Run.java:161)
    oracle.apps.pay.pdfgen.server.PayPDFGen
    Program exited with status 1
    There is no error in the DB alert log...
    Please suggest where the problem is.
    Thanks in advance

    Hello,
    * JVMST109: Insufficient space in Javaheap to satisfy allocation request
    Exception in thread "main" java.lang.OutOfMemoryError
    This is your problem. There is not enough memory. Change the Java heap settings.
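    As a generic illustration only (where exactly these options are set for an EBS concurrent program depends on your configuration, so treat the values and the main-class placeholder as examples): the heap ceiling of a JVM is controlled with the -Xms/-Xmx options, e.g.
       java -Xms512m -Xmx1024m <main class>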
    Regards,
    Shomoos
    Edited by: shomoos.aldujaily on Mar 29, 2011 12:39 AM

  • Heap dumps on very large heap memory

    We are experiencing memory leak issues with one of our applications deployed on JBoss (Sun JVM 1.5, Win32 OS). The application is already memory intensive and consumes the maximum heap (1.5 GB) allowed for a 32-bit JVM on Win32.
    This leaves very little memory for the heap dump, and the JVM crashes with a "malloc error" whenever we try adding the heap dump flag (-agentlib:).
    Has anyone faced a scenario like this?
    Alternatively, for investigation purposes, we are trying to deploy it on Windows x64 - but the vendor advises running only on a 32-bit JVM. Here are my questions:
    1) Can we run a 32-bit JVM on Windows x64? Even if we can, can I allocate more than 2 GB of heap memory?
    2) I don't see the rationale for why we cannot run on a 64-bit JVM - Java programs are supposed to be 'platform-independent', and the application in the form of byte code should run no matter whether the JVM is 32-bit or 64-bit, shouldn't it?
    3) Are there any better tools (other than HPROF heap dumps) to analyze memory leaks? We tried using profiling tools, but they too fail because of the very limited memory available.
    Any help is really appreciated! :-)
    Anush

    anush_tv wrote:
    1) Can we run 32bit JVM on Windows 64? Even if we run, can i allocate more than 2 GB for heap memory?
    Yes, but you're limited to 2GB like any other 32-bit process.
    2) I dont see the rational why we cannot run on 64bit JVM - because, JAVA programs are supposed to be 'platform-independent' and the application in form of byte code should be running no matter if it is a 32bit or 64-bit JVM?
    It's probably related to JBoss itself, which is likely using native code. I don't have experience with JBoss though.
    3) Do we have any other better tools (except HPROF heapdumps) to analyze memory leaks - we tried using profiling tools but they too fail becos of very few memory available.
    You could try "jmap", which can dump the heap.
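    If you go the jmap route, a typical invocation looks like this (JDK 6+ syntax; on a 1.5 VM the rough equivalent was "jmap -heap:format=b <pid>", which writes a heap.bin file to the current directory):
       jmap -dump:format=b,file=heap.hprof <pid>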

  • Very Large Heap Dump

    Hi,
    I have to analyze a huge heap dump file (ca. 1 GB).
    I've tried several tools (HAT, YourKit, Profiler4J, etc.).
    An OutOfMemoryError always occurs.
    My machine has 2 GB of physical memory and 3 GB of swap space, on a dual-core Intel processor at 2.6 GHz.
    Is there a way to load the file on my machine, or is there a tool which is able to load dump files partially?
    ThanX ToX

    1GB isn't very large :-) When you say you tried HAT, did you mean jhat? Just checking as jhat requires less memory than the original HAT. Also, did you include an option such as -J-mx1500m to increase the maximum heap for the tools that you tried? Another one to try is the HeapWalker tool in VisualVM. That does a better job than HAT/jhat with bigger heaps.
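    For example (the file name is illustrative), running jhat with a bigger maximum heap and then browsing the results on its default port 7000:
       jhat -J-mx1500m heap_dump.hprof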

  • View a Head Dump file

    Hi,
      Is there an application to view the head dump from a Java server node? The files that I want to view are named dev_serverX.
    Thank you and best regards
    Ivá

    It is not a head dump (maybe it's your head banging against the wall); it would be a heap dump. But a heap dump is not in a dev_serverX file - that would be a thread dump.
    Heap dumps can be analyzed with the Memory Analyzer from the link above. This has to be done if you experience OutOfMemory exceptions and suspect a memory leak.
    Thread dumps can be analyzed with Solman E2E root cause analysis or with the thread dump viewer -> [SDN search for Thread Dump Viewer|http://www.sdn.sap.com/irj/scn/advancedsearch?query=threaddumpviewer]
    Cheers Michael

  • JMap heap dump against gcore file

    Hi,
    We are running Java 1.5.0_06 on Solaris 9 on a Sun V245, and are trying to analyze some memory issues we have been experiencing.
    We have used the Solaris gcore utility to take a complete core dump of the process. This dump file is ~1.8 GB.
    We are now trying to create a heap dump from the core file using jmap, so that we can use tools such as the Eclipse Memory Analyzer (http://www.eclipse.org/mat) to examine our heap for memory issues.
    However, we are having issues creating the heap dump, as jmap appears to be running out of system memory (swap). The command we are running is "jmap -heap:format=b /usr/bin/java ms04Core.29918". We can run some other jmap commands, such as "jmap -histo", against the core without encountering these issues. The server we are running on has 8 GB of physical memory, but jmap seems to crash when its swap usage reaches ~3.8 GB (according to the Solaris prstat command). The error returned by jmap is:
    Exception in thread "main" java.lang.OutOfMemoryError: requested 8192 bytes for jbyte in /BUILD_AREA/jdk1.5.0_06/hotspot/src/share/vm/prims/jni.cpp. Out of swap space?
    Would anyone have any ideas as to why we are seeing these issues?
    Also, can anyone comment on whether this approach makes sense as a way to analyze memory issues? Other suggestions would be welcome!
    Thanks very much,
    Adrian

    Hi, we have solved this issue now - apparently this is a memory bug in the 1.5.0_06 version of jmap. We installed a newer JRE (5u15) and were able to complete the heap dump without issues.
    Cheers,
    Adrian

  • *** DUMP FILE SIZE IS LIMITED TO 0 BYTES ***

    I always get a trace file with
    *** DUMP FILE SIZE IS LIMITED TO 0 BYTES ***

    Do you have a cleardown script on your dump directory (probably the user dump directory) that deletes trace files while sessions are still running? When that happens, you get a trace in the background dump directory (from PMON, I think) containing the message you describe.
    OTOH it might be something completely different.
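    Another thing worth checking (this is an assumption on my part, since the message can also simply mean the trace size limit is set to zero): the MAX_DUMP_FILE_SIZE parameter, e.g.
       SQL> show parameter max_dump_file_size
       SQL> alter session set max_dump_file_size = unlimited;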

  • Size of export dump file

    I read an article saying that an EXPORT dump file cannot exceed the limit of 2 GB. This is because prior to 8.1.3 there was no large file support in the Oracle Import, Export, or SQL*Loader utilities.
    How about the current versions, 9i or 10g? Can the exp dump file be larger than 2 GB?

    Under AIX Unix, there is an fsize parameter for every user in the /etc/security/limits file. If fsize = -1, the user is allowed to create files of any size. To make this change, though, you need root access on the Unix environment.
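    For illustration, the corresponding stanza in /etc/security/limits would look roughly like this ("oracle" is an example user name):
       oracle:
               fsize = -1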

  • Dump file size

    Hi,
    on 10g R2 on AIX 6.1, I use the following expdp command:
    expdp system@DB SCHEMAS=USER1 DIRECTORY=dpump_dir1 DUMPFILE=exp_USER Logfile=log_expdpuser
    which results in a 3 GB dump file. Is there any option to decrease the dump file size?
    I saw in the documentation:
    COMPRESSION=(METADATA_ONLY | NONE)
    but it seems to me that COMPRESSION=METADATA_ONLY is already in use, since it is the default value, and COMPRESSION=NONE cannot reduce the size.
    Thank you.

    You can use the FILESIZE parameter and specify multiple dump files.
    http://download.oracle.com/docs/cd/B10501_01/server.920/a96652/ch01.htm
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_export.htm
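    For example (sizes and file names are illustrative), using the %U substitution variable so that Data Pump creates as many pieces as needed, each at most 1 GB:
       expdp system@DB SCHEMAS=USER1 DIRECTORY=dpump_dir1 DUMPFILE=exp_USER_%U.dmp FILESIZE=1G LOGFILE=log_expdpuser.log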
    Otherwise you can use RMAN Compressed backup sets.
    Thanks
    Edited by: Cj on Dec 28, 2010 2:15 AM

  • Estimate the Import Time by viewing the DUMP File Size

    It's a very generic question, and I know it can vary from DB to DB.
    But is there any standard way to estimate the Oracle import time from the size of the export dump file?

    Well, it's going to be vaguely linear, in that it probably takes about twice as long to load a 2 GB dump file as to load a 1 GB dump file, all else being equal. Of course, all things rarely are equal.
    For example,
    - Since only index DDL is stored in the dump file, dumps from systems with a lot of indexes will take longer to load than dumps from systems with few indexes, all else being equal.
    - Doing a direct path load will generally be more efficient than doing a conventional path load
    Justin

  • Dump File Size vs Database Table Size

    Hi all!
    Hope you're all well. If Datapump estimates that 18 million records will produce a 2.5GB dumpfile, does this mean that 2.5GB will also be consumed on the database table when this dump file is imported into a database?
    Many thanks in advance!
    Regards
    AC

    does this mean that 2.5GB will also be consumed on the database table when this dump file is imported into a database?
    No, since the size after import depends on various factors like block size, block storage parameters, etc.

  • What is the dump file size limit?

    Hello all,
    Can somebody tell me the maximum size of a dump file created by the export utility in Oracle? Is there any maximum size applicable to dump files?
    Thanks in advance
    Himanshu

    Hi,
    Since Oracle 8.1.6 there is no longer an Oracle-imposed limit on the dump file size, apart from the 2 GB restriction on fully 32-bit systems (a 32-bit OS with 32-bit files). The maximum size a file can reach depends on your operating system.
    http://download-west.oracle.com/docs/cd/B14117_01/server.101/b10825/exp_imp.htm#g1040908
    Nicolas.
