JHAT unable to read heap dump

I have a heap dump taken on a 64-bit Linux 1.5.0_16 JVM.
I'm unable to read the file using jhat (latest 1.6.0 JDK); I get the following error: WARNING: Ignoring unrecognized record type 32
I have tried on both Win32 and 64-bit Linux platforms.
Regards,

Hi Mike,
your heap dump is probably broken. Taking a heap dump with jmap under JDK 1.5 is more suited to recovering a heap dump from a core dump file than to taking the heap dump of a running Java application: jmap can observe the heap in an inconsistent state, and hence the dump file is not always usable. You can try to create a core dump of the Java application (if you can) and then use jmap to get a heap dump from the core dump. Note that there is no such problem when you are running on JDK/JRE 6.
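A sketch of that core-dump workflow on Linux (the PID and JDK paths are placeholders, and this assumes gcore from gdb is available; the JDK 5 jmap syntax for reading a core file is shown):

```shell
# Force a core dump of the running JVM; gcore leaves the process alive
gcore -o /tmp/java.core 12345

# Point the JDK 5 jmap at the java binary plus the core file to write
# a binary heap dump (heap.bin in the current directory)
/usr/java/jdk1.5.0_16/bin/jmap -heap:format=b \
    /usr/java/jdk1.5.0_16/bin/java /tmp/java.core.12345
```

These commands need a live JVM and a matching JDK install, so treat them as a template rather than something to paste verbatim.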
Bye,
Tomas Hurka <mailto:[email protected]>
NetBeans Profiler http://profiler.netbeans.org
VisualVM http://visualvm.dev.java.net
Software Engineer, Developer Platforms Group
Sun Microsystems, Praha Czech Republic

Similar Messages

  • AD4J: Unable to start Heap Dump due to OS Error

    When Java 'Heap Dump' is requested, the following message is shown:
    Unable to start Heap Dump due to OS Error.
    Details of the monitored JDK and application server are given below:
    JDK: Sun JDK 1.4.2_08 on Solaris 9
    Application Server: WebLogic 8.1 SP4
    1. What could be the possible cause? No errors are logged in jamserv/logs/error_log. Is there any way to enable detailed logging?
    2. Each time the heap dump is requested, a file of the format heapdump<n> gets created in /tmp (e.g. /tmp/heapdump12.txt). On inspecting the file, it contains the following:
    a) a line containing a summary of the heap usage, and
    b) stack traces of all the threads
    Thanks!

    Wrong Forum?

  • Generate Heap dump in AD4j

    Hi All,
    I'm unable to generate a heap dump from AD4J. I am running a Java EE application on 1.5.0_04 and have deployed the jamagent.war file. When I click on the Dump Heap button, it throws up a JSP error page:
    JSP Error:
    Request URI:/jvmHeapDump.jsp
    Exception:
    java.lang.ClassFormatError: _jvmHeapDump (Bad magic number)
    Any ideas to resolve this error will be very helpful.
    Thanks.

    Please provide the exact error, and also tell us which JDK vendor you are using to get the heap dump.

  • ERROR while analysing 2.13 GB heap dump using HAT and JHAT.

    I am trying to analyse a 2.13 GB heap dump taken from a Java 1.4 "IBM HotSpot" JVM running on Solaris 8, but I get an error: java.io.IOException: Bad record length of -2021068228 at byte 0x00da146e. Is there any way I can read this dump to analyse the objects?

    I have no idea what "IBM HotSpot" is, but it sounds like you are running into this bug:
    http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6614052
    Grab the latest build of JDK 6 update 10 or JDK 7 and use that to examine the dump file.
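    As a sketch, once a newer JDK is installed (the install path and dump filename here are placeholders), the dump can be opened with that JDK's jhat, giving the tool a heap larger than the dump itself:

```shell
# Run the JDK 6u10+ jhat with a 4 GB heap (-J passes options to the
# underlying JVM; -mx4g needs a 64-bit JVM with enough RAM/swap)
/opt/jdk1.6.0_10/bin/jhat -J-mx4g heap.hprof
# then browse the object graph at http://localhost:7000/
```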

  • How to take regular heap dumps using HPROF

    Hi Folks,
    I am using Oracle App Server as my application server. I found that the memory grows gradually and maxes out within 1 hour. I am using a 1 GB heap.
    I definitely feel this is a memory leak issue. Once heap usage reaches 100%, full GCs start, the whole server hangs, and nothing works. Sometimes the JVM even crashes and restarts.
    I did not find an OutOfMemoryError in any of my logs either.
    I came to know that we can use HPROF to deal with this.
    I use the following JVM args:
    -agentlib:hprof=heap=all,format=b,depth=10,file=$ORACLE_HOME\hprof\Data.hprof
    I ran my load test for 10 minutes, and the heap usage grew to some extent.
    My Questions:
    1. Why are 2 files generated, one named Data.hprof and another Data.hprof.tmp? Which is which?
    2. How do I get a dump at 2 different points in time, so that I can compare the 2 dumps and say which object is growing?
    I downloaded HAT, and if I open this Data.hprof file with HAT, I get the error below. This error appears if I open the file without stopping the JVM process.
    java.io.EOFException
    at java.io.DataInputStream.readFully(DataInputStream.java:178)
    at java.io.DataInputStream.readFully(DataInputStream.java:152)
    at hat.parser.HprofReader.read(HprofReader.java:202)
    at hat.parser.Reader.readFile(Reader.java:90)
    at hat.Main.main(Main.java:149)
    If I stop the JVM process and then open the file through HAT, I get this error:
    Started HTTP server on port 7000
    Reading from hprofData.hprof...
    Dump file created Wed Dec 13 02:35:03 MST 2006
    Warning: Weird stack frame line number: -688113664
    java.io.IOException: Bad record length of -1551478782 at byte 0x0008ffab of file.
    at hat.parser.HprofReader.read(HprofReader.java:193)
    at hat.parser.Reader.readFile(Reader.java:90)
    at hat.Main.main(Main.java:149)
    The JVM version I am using is Sun JVM 1.5.0_06.
    I am seriously fed up with this memory leak issue. Please help me out, folks; I need this as early as possible.
    I hope I get early replies.
    Thanks in advance.

    First, the suggestion of using jmap is an excellent one; you should try it. On large applications the hprof agent forces you to restart your VM, and hprof can disturb your JVM process, so you may not be able to see the problem as quickly. With jmap, you can get a heap snapshot of a running JVM when it is in the state you want to understand, and it is really fast compared to using the hprof agent. The hprof dump file you get from jmap will not have the stack traces of where objects were allocated, which was a concern of mine a while back, but all indications are that these stack traces are not critical to finding memory leak problems. The allocation sites can usually be found with a good IDE or search tool, like the NetBeans 'Find Usages' feature.
    On hprof: a temporary file is created while the heap dump is being written; ignore the .tmp file.
    The HAT utility has been added to JDK6 (as jhat) and many problems have been fixed. But most importantly, this JDK6 jhat can read ANY hprof dump file, from JDK5 or even JDK1.4.2. So even though the JDK6 jhat is using JDK6 itself, the hprof dump file it is given could have come from pretty much anywhere, including jmap. As long as it's a valid hprof binary dump file.
    So even if it's just to have jhat handy, you should get JDK6.
    Also, the Netbeans profiler (http://www.netbeans.org) might be helpful too. But it will require a restart of the VM.
    -kto
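    The two-snapshot comparison asked about in question 2 might look like this with the JDK 6 tools the reply recommends (the PID and file names are placeholders):

```shell
# Snapshot 1: before the load run (12345 is a hypothetical PID)
jmap -dump:format=b,file=before.hprof 12345

# ... run the 10-minute load test ...

# Snapshot 2: after the load run
jmap -dump:format=b,file=after.hprof 12345

# JDK 6 jhat can take the first dump as a baseline: objects that also
# appear in the baseline are marked "not new", so the "new" objects in
# its views are the ones that accumulated during the load run
jhat -baseline before.hprof after.hprof
```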

  • Very Large Heap Dump

    Hi,
    I have to analyze a huge heap dump file (ca. 1 GB).
    I've tried some tools (HAT, YourKit, Profiler4J, etc.).
    An OutOfMemoryError always occurs.
    My machine has 2 GB physical memory and 3 GB swap space on a 2.6 GHz Dual Core Intel processor.
    Is there a way to load the file on my machine, or is there a tool which is able to load dump files partially?
    ThanX ToX

    1GB isn't very large :-) When you say you tried HAT, did you mean jhat? Just checking as jhat requires less memory than the original HAT. Also, did you include an option such as -J-mx1500m to increase the maximum heap for the tools that you tried? Another one to try is the HeapWalker tool in VisualVM. That does a better job than HAT/jhat with bigger heaps.

  • Unable to read file from application server

    Hi guys,
    I am reading a file (it could have any extension) from the application server, but sometimes I am successfully able to read the file and sometimes not. Why is this happening?
    my code is here
    OPEN DATASET E_FILE FOR INPUT IN BINARY MODE.
      IF SY-SUBRC = 0.
        DO.
          " Read the next chunk; SY-SUBRC <> 0 signals end of file
          READ DATASET E_FILE INTO GS_PDF_TAB.
          IF SY-SUBRC = 0.
            APPEND GS_PDF_TAB TO GT_PDF_TAB.
          ELSE.
            EXIT.
          ENDIF.
        ENDDO.
        CLOSE DATASET E_FILE.
      ENDIF.
    Thanks
    Ankur Sharma

    Hi,
    What actually happens? Do you get a short dump? Do you get a return code ne 0? Does it run fine but you get no data in your table?
    We aren't mind-readers and can't help much without more information.
    Try using transaction AL11 to see if you can access the files you are trying to open.
    Gareth.

  • Reading Short Dump in ST22

    Hi, is there any function module or another way to read the short dump generated in ST22 for a particular program?
    I have a Z program running in background on a daily basis. For error handling, when the program generates a short dump, I want to read that short dump. I tried the FM /SDF/GET_DUMP_LOG; with it I am able to read the runtime error, exception, and error short text, but I am unable to read the entire log description,
    i.e. the "What happened" and "What can you do" sections that can be viewed in ST22. Any inputs appreciated.
    Thanks & Regards,
    John.

    Hi,
    Try these FMs:
    "STRUCTURE_DUMP" -- prints the current contents of internal tables
    RS_SNAP_DUMP_DISPLAY
    Best regards,
    Prashant

  • Heap Dump file generation problem

    Hi,
    I've configured configtool to have these 2 parameters:
    -XX:+HeapDumpOnOutOfMemoryError
    -XX:+HeapDumpOnCtrlBreak
    In my understanding, with these 2 parameters, heap dump files will only be generated in 2 situations, i.e. when an out-of-memory error occurs, or when a user manually sends CTRL + BREAK in the MMC.
    1) Unfortunately, many heap dump files (9 in total) were generated when neither of the above situations occurred. I couldn't find "OutOfMemoryError" in the default trace, nor is the shallow heap size of those heap dump files anywhere near the memory limit. As a consequence, our server ran out of disk space.
    My question is, what are the other possibilities that heap dump file will be generated?
    2) In the Memory Consumption graph (NWA (http://host:port/nwa) -> System Management -> Monitoring -> Java Systems Reports), an out-of-memory error occurred when memory usage reached about 80% of the allocated memory. What is the remaining 20% or so reserved for?
    Any help would be much appreciated.
    Thanks.

    Hi,
    Having the -XX:+HeapDumpOnCtrlBreak option makes the VM trigger a heap dump, whenever a CTRL_BREAK event appears. The same event is used also to trigger a thread dump, an action you can do manually from the SAP Management Console, I think it is called "Dump stacks". So if there was someone triggering thread dumps for analysis of other types of problems, this has the side effect of writing also a heap dump.
    Additionally, the server itself may trigger a thread dump (and by this also a heap dump if the option is present). It does this for example when a timeout appears during the start or stop of the server. A thread dump from such a moment allows us to see for example which service is unable to start.
    Therefore, I would recommend that you leave only the -XX:+HeapDumpOnOutOfMemoryError, as long as you don't plan to trigger any heap dumps on your own. The latter will cause the VM to write a heap dump only once - on the first appearance of an OutOfMemoryError.
    In case you need to trigger the heap dumps manually, leave the -XX:+HeapDumpOnCtrlBreak option for the moment of troubleshooting, but consider if you want to keep it afterwards.
    If heap dumps were written because of an OutOfMemoryError you should be able to see this in the dev_server file in /usr/sap/<SID>/<inst>/work/ . Also there you should be able to see if indeed thread dumps were triggered (just search for "Full Thread ").
    I hope this helps.
    Regards,
    Krum
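    A sketch of the reduced option set recommended above; -XX:HeapDumpPath is a standard HotSpot option for directing dumps to a disk with free space, and the path shown here is only a placeholder:

```
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/usr/sap/heapdumps
```

    With only these set, the VM writes a single dump on the first OutOfMemoryError, and thread dumps no longer produce heap dumps as a side effect.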

  • How can I get heap dump for 1.4.2_11 when OutOfMemory Occured

    Hi guys,
    How can I get a heap dump for 1.4.2_11 when an OutOfMemoryError occurs, since it has no options like -XX:+HeapDumpOnOutOfMemoryError and -XX:+HeapDumpOnCtrlBreak?
    We are running WebLogic 8.1 SP3 applications on this Sun 1.4.2_11 JVM and it is throwing OutOfMemoryError, but we cannot find a heap dump. The application is running as a service on Windows Server 2003. How can I do some more analysis on this issue?
    Thanks.

    The HeapDumpOnOutOfMemoryError option was added to 1.4.2 in update 12. Further work to support all collectors was done in update 15.

  • Error accessing device data in netweaver."Unable to read data;can't connect

    Hi Friends,
    We have implemented MAM at one of our clients here in India. It has been around 7 months since the project went live.
    For the last few weeks we have been facing a problem accessing devices in NetWeaver.
    We prepared a patch to be applied to devices, but on searching by mobile device name, e.g. MOBILE_000011, it gives an error which says "Unable to read data; cannot connect to middleware".
    It finds the device successfully but does not display its device details, components, etc.
    I checked the MI server for short dumps and found TSV_TNEW_PAGE_ALLOC_FAILED.
    All the measures have been taken to remove this dump, but the problem still persists.
    Please suggest how to identify the problem.
    Thanks, Amit Sharma

    Hello friends,
    still expecting your views.

  • JVMPI_GC_ROOT_MONITOR_USED - what does this mean in a heap dump?

    I'm having some OutOfMemory errors in my application, so I turned on a profiler, and took a heap dump before and after an operation that is blowing up the memory.
    What changes after the operation is that I get an enormous amount of data that is reported under the node JVMPI_GC_ROOT_MONITOR_USED. This includes some Oracle PreparedStatements which are holding a lot of data.
    I tried researching the meaning of JVMPI_GC_ROOT_MONITOR_USED, but found little help. Should this be objects that are ready for garbage collection? If so, they are not being garbage collected, but I'm getting OutOfMemoryError instead (I thought the JVM was supposed to guarantee GC would be run before OutOfMemory occurred).
    Any help on how to interpret what it means for objects to be reported under JVMPI_GC_ROOT_MONITOR_USED and any ways to eliminate those objects, will be greatly appreciated!
    Thanks

    Quoting the question: "I tried researching the meaning of JVMPI_GC_ROOT_MONITOR_USED, but found little help. Should this be objects that are ready for garbage collection?"
    Disclaimer: I haven't written code to use JVMPI, so anything here is speculation. However, after reading this: http://java.sun.com/j2se/1.4.2/docs/guide/jvmpi/jvmpi.html
    it appears that the "ROOT" flags in a level-2 dump are used with objects that are considered a "root reference" for GC (those references that are undeniably alive). Most descriptions of "roots" are static class members and variables in a stack frame. My interpretation of this doc is that objects used in a synchronized statement are also considered roots, at least for the life of the synchronized block (which makes a lot of sense when you think about it).

  • Analyse large heap dump file

    Hi,
    I have to analyse a large heap dump file (3.6 GB) from a production environment. However, if I open it in Eclipse MAT, it gives an OutOfMemoryError. I tried increasing the Eclipse workbench Java heap size as well, but it doesn't help. I also tried VisualVM. Can we split the heap dump file into smaller pieces? Or is there any way to set a maximum heap dump file size in the JVM options so that we collect heap dumps of reasonable size?
    Thanks,
    Prasad

    Hi Prasad,
    Have you tried opening it in a 64-bit MAT on a 64-bit platform, with a large heap size and the CMS GC policy set in the MemoryAnalyzer.ini file? MAT is a good toolkit for analysing Java heap dump files; if it doesn't work, you can try Memory Dump Diagnostic for Java (MDD4J) in the 64-bit IBM Support Assistant, again with a large heap size.

  • Full heap dump - weblogic 8.1

    Hi to everyone,
    Is there any way to take a full heap dump (core dump) of WebLogic 8.1 manually?
    Best regards,
    Peter

    In JDK6 there are the jhat and jmap utils which are described here
    http://weblogs.java.net/blog/kellyohair/archive/2005/09/heap_dump_snaps.html
    But, prior to that (JDK5 and earlier), you have to use the HAT utility which can be found at
    http://hat.dev.java.net/
    If you are using JRockit, you can use Mission Control for this, I believe. There's an intro to this tool at
    http://dev2dev.bea.com/pub/a/2005/12/jrockit-mission-control.html?page=1

  • Thread Dump vs Heap Dump

    Hi,
    The below question are related to J2EE 7.00.
    - What is the difference between a thread dump and a heap dump?
    - When I go to the Config Tool and then to an Instance ID,
    there are two nodes under the Instance ID: Dispatcher and Server (there could be multiple servers).
    I see different tabs on the Instance ID: Message Server, Bootstrap, and Server General. There are lots of settings under the Server General tab.
    My question is: what is the usage of the "Server General" tab?
    Please let me know.
    I will appreciate your reply.
    Regards.
    Sume

    A Java engine runs all its work in 'threads', much like work processes in SM50, though unfortunately not as standalone in 7.0.
    A thread dump is a bit like taking a photo of SM50; that is why you will be asked to take 2 or 3, so as to see what stays stuck in threads for a while.
    A Java engine uses memory to store all its objects. Most of that memory is a single area called the heap. A heap dump lists all objects in memory at that moment.
    Thread dump = activities working/waiting
    Heap dump = objects in use
    As for the Config Tool, I do not have one in front of me to double-check, but I believe you are looking at the general settings as opposed to the local instance settings. You can set parameters at a local level using an instance number, or for all instances by setting them as a general setting. Quite a few settings that are set at the local level are set globally anyway, so be aware that that may be the case.
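    The "take 2 or 3 thread dumps" advice above can be sketched as follows on Unix, where sending SIGQUIT to a JVM makes it print a thread dump to its standard output or log file (the PID is a placeholder):

```shell
# Trigger three thread dumps 30 seconds apart on a hypothetical PID;
# kill -3 sends SIGQUIT, which a HotSpot JVM answers with a thread
# dump instead of terminating
for i in 1 2 3; do
  kill -3 12345
  sleep 30
done
```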
