Stack Traces and Heap Dumps on JDK 1.4.2_12

Hi all,
Right now I have the following arguments for my JVM container
BROKER_JVM_ARGS="-Xms512m -Xmx1024m -XX:+HeapDumpOnOutOfMemoryError -XX:+HeapDumpOnCtrlBreak"
So on a kill -3 I get the heap dumps. I also want to get stack traces. The process runs as a daemon; is there any way I can do this?
PS: this is on a RHEL 3 box running Sun Java 1.4.2_12
Thank you.

kill -3 (SIGQUIT) triggers a thread dump, i.e. the stack traces of all threads, written to the JVM's stdout (the console), not to a file. Since your process runs as a daemon, redirect its stdout to a file when you start it so the dump is captured.
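A minimal sketch of capturing both for a daemon, assuming you control its start script (the jar name, log path, and pid file here are illustrative, not from the original post):

#!/bin/sh
# Start the broker with stdout/stderr redirected so the kill -3 output is kept:
java $BROKER_JVM_ARGS -jar broker.jar > /var/log/broker/console.log 2>&1 &
echo $! > /var/run/broker.pid

# Later, request stack traces (and, via -XX:+HeapDumpOnCtrlBreak, a heap dump):
kill -3 `cat /var/run/broker.pid`
# The thread dump lands in /var/log/broker/console.log; the heap dump is
# written by the JVM as before.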

Similar Messages

  • AD4J: Unable to start Heap Dump due to OS Error

    When Java 'Heap Dump' is requested, the following message is shown:
    Unable to start Heap Dump due to OS Error.
    Details of the monitored JDK and application server are given below:
    JDK: Sun JDK 1.4.2_08 on Solaris 9
    Application Server: WebLogic 8.1 SP4
    1. What could be the possible cause? No errors are logged in jamserv/logs/error_log. Is there any way to enable detailed logging?
    2. Each time the heap dump is requested, a file of the format heapdump<n> gets created in /tmp (e.g. /tmp/heapdump12.txt). Opening the file shows:
    a) a line containing summary of the heap usage, and
    b) stack traces of all the threads
    Thanks!

    Wrong Forum?

  • Why does hprof=heap=dump have so much overhead?

    I understand why the HPROF option heap=sites incurs a massive performance overhead; it has to intercept every allocation and record the current call stack.
    However, I don't understand why the HPROF option heap=dump incurs so much of a performance overhead. Presumably it could do nothing until invoked, and only then trace from the system roots the entire heap.
    Can anyone speak to why it doesn't work that way?
    - Gordon @ IA

    Traditionally agents like hprof had to be loaded into the virtual machine at startup, and this was the only way to capture these object allocations. The new hprof in the JDK 5.0 release (Tiger) was written using the newer VM interface JVM TI and this new hprof was mostly meant to reproduce the functionality of the old hprof from JDK 1.4.2 that used JVMPI. (Just FYI: run 'java -Xrunhprof:help' for help on hprof).
    The JDK 5.0 hprof will, at startup, instrument java.lang.Object.<init>() and all classes and methods that use the newarray bytecodes. This instrumentation doesn't take long and is just an initial startup cost; it's the run time, and what happens then, that is the performance bottleneck. At run time, as any object is allocated, the instrumented methods trigger an extra call into a Java tracker class, which in turn makes a JNI call into the hprof agent and native code. At that point, hprof needs to track all the objects that are live (the JVM TI free event tells it when an object is freed), which requires a table inside the hprof agent and memory space. So if the machine you are using is low on RAM, using hprof will cause drastic slowdowns. You might try heap=sites, which uses less memory but just tracks allocations per allocation site, not individual objects.
    The more likely run-time performance issue is that at each allocation hprof wants to get the stack trace, which can be expensive, depending on how many objects are allocated. You could try using depth=0 and see if the stack trace samples are a serious issue for your situation. If you don't need stack traces, then you would be better off looking at the jmap command, which gets you an hprof binary dump on the fly with no overhead; you can then use jhat (or HAT) to browse the heap. This may require use of the JDK 6 (Mustang) release for this experiment; see http://mustang.dev.java.net for the free downloads of JDK 6 (Mustang).
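    As a hedged illustration of the options discussed here (MyApp is a placeholder; JDK 5 syntax first, then the JDK 1.4.2 JVMPI spelling):

    # depth=0 disables the per-allocation stack-trace sampling:
    java -agentlib:hprof=heap=dump,format=b,depth=0 MyApp
    # equivalent on JDK 1.4.2:
    java -Xrunhprof:heap=dump,format=b,depth=0 MyApp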
    There is an RFE for hprof to allow the tracking of allocations to be turned on/off in the Java tracker methods that were injected, at the Java source level. But this would require adding some Java APIs to control sun/tools/hprof/Tracker, which is in rt.jar. This, and more, is very possible with the JVM TI interfaces.
    If you haven't tried the NetBeans Profiler (http://www.netbeans.org), you may want to look at it. It takes an incremental approach to instrumentation, tries to focus in on the areas of interest, and allows you to limit the overhead of the profiler. It works with the latest JDK 5 (Tiger) update release; see http://java.sun.com/j2se.
    Oh yes, also look at some of the JVM TI demos that come with the JDK 5 download. Look in the demo/jvmti directory and try the small agents HeapTracker and HeapViewer; they have much lower overhead, and the binaries and all the source are right there for you to use, modify, and customize for yourself.
    Hope this helps.
    -kto

  • Heap Dump file generation problem

    Hi,
    I've configured configtool to have these 2 parameters:
    -XX:+HeapDumpOnOutOfMemoryError
    -XX:+HeapDumpOnCtrlBreak
    In my understanding, with these 2 parameters, the heap dump files will only be generated under 2 situations, i.e., when out of memory occurs, or when a user manually presses CTRL + BREAK in the MMC.
    1) Unfortunately, there are many heap dump files (9 in total) generated when none of the above situations occurred. I couldn't find "OutOfMemoryError" in the default trace, nor is the shallow heap size of those heap dump files anywhere near the memory limit. As a consequence, our server ran out of disk space.
    My question is, what are the other possibilities that heap dump file will be generated?
    2) In the Memory Consumption graph (NWA (http://host:port/nwa) -> System Management -> Monitoring -> Java Systems Reports), out of memory error occurred when the memory usage is reaching about 80% of the allocated memory. What are the remaining 20% or so reserved for ?
    Any help would be much appreciated.
    Thanks.

    Hi,
    Having the -XX:+HeapDumpOnCtrlBreak option makes the VM trigger a heap dump whenever a CTRL_BREAK event appears. The same event is also used to trigger a thread dump, an action you can perform manually from the SAP Management Console; I think it is called "Dump stacks". So if someone was triggering thread dumps to analyze other types of problems, this has the side effect of also writing a heap dump.
    Additionally, the server itself may trigger a thread dump (and by this also a heap dump if the option is present). It does this for example when a timeout appears during the start or stop of the server. A thread dump from such a moment allows us to see for example which service is unable to start.
    Therefore, I would recommend that you leave only the -XX:+HeapDumpOnOutOfMemoryError, as long as you don't plan to trigger any heap dumps on your own. The latter will cause the VM to write a heap dump only once - on the first appearance of an OutOfMemoryError.
    In case you need to trigger the heap dumps manually, leave the -XX:+HeapDumpOnCtrlBreak option for the moment of troubleshooting, but consider if you want to keep it afterwards.
    If heap dumps were written because of an OutOfMemoryError, you should be able to see this in the dev_server file in /usr/sap/<SID>/<inst>/work/. There you should also be able to see whether thread dumps were indeed triggered (just search for "Full Thread ").
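    As a sketch of this recommendation in VM-option form (keep only the first flag in normal operation):

    # dump once, on the first OutOfMemoryError only:
    -XX:+HeapDumpOnOutOfMemoryError
    # add only while troubleshooting; every CTRL_BREAK / "Dump stacks" request
    # will then also write a full heap dump:
    -XX:+HeapDumpOnCtrlBreak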
    I hope this helps.
    Regards,
    Krum

  • Java heap dump location

    Hi Everyone.
    How do I configure where I can output the java heap dumps?
    Right now they're dumped in /tmp automatically.
    Thanks

    Hi Khaled,
    It depends on the operating system and the JDK. For the Sun JDK:
    -XX:HeapDumpPath=<directory where to save the heap dumps>
    for IBM:
    the IBM_HEAPDUMPDIR environment variable should be set to point to the desired location.
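    A short sketch of both (the directory path is an example):

    # Sun JDK: pass the target directory as a VM option
    java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/heapdumps ...
    # IBM JDK: set the environment variable before starting the VM
    export IBM_HEAPDUMPDIR=/var/heapdumps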
    Greetings, Myriana

  • How to take regular heap dumps using HPROF

    Hi Folks,
    I am using Oracle App Server as my application server. I found that the memory grows gradually and maxes out within 1 hour. I am using 1 GB of heap.
    I definitely feel this is a memory leak issue. Once the heap usage reaches 100%, I start getting full GCs, my whole server hangs, and nothing works. Sometimes the JVM even crashes and restarts.
    I didn't find an OutOfMemory exception in any of my logs either.
    I came to know that we can use Hprof to deal with this.
    I use the below as my JVM args...
    -agentlib:hprof=heap=all,format=b,depth=10,file=$ORACLE_HOME\hprof\Data.hprof
    I ran my load run for 10 minutes; now my heap usage has grown to some extent.
    My Questions:
    1. Why are there 2 files generated, one with the name Data.hprof and another with Data.hprof.tmp? Which is which?
    2. How do I get the dump at 2 different points, so that I can compare the 2 dumps and say which object is growing more?
    I downloaded HAT, and if I open this Data.hprof file in HAT, I get this error. This error comes if I open the file without stopping the JVM process.
    java.io.EOFException
    at java.io.DataInputStream.readFully(DataInputStream.java:178)
    at java.io.DataInputStream.readFully(DataInputStream.java:152)
    at hat.parser.HprofReader.read(HprofReader.java:202)
    at hat.parser.Reader.readFile(Reader.java:90)
    at hat.Main.main(Main.java:149)
    If I stop the JVM process, and then open through HAT I am getting this error,
    Started HTTP server on port 7000
    Reading from hprofData.hprof...
    Dump file created Wed Dec 13 02:35:03 MST 2006
    Warning: Weird stack frame line number: -688113664
    java.io.IOException: Bad record length of -1551478782 at byte 0x0008ffab of file.
    at hat.parser.HprofReader.read(HprofReader.java:193)
    at hat.parser.Reader.readFile(Reader.java:90)
    at hat.Main.main(Main.java:149)
    The JVM version I am using is: Sun JVM 1.5.0_06
    I am seriously fed up with this memory leak issue... Please help me out, folks... I need this as early as possible...
    I hope I get early replies...
    Thanks in advance...

    First, the suggestion of using jmap is an excellent one; you should try it. On large applications, to use the hprof agent you have to restart your VM, and hprof can disturb your JVM process, so you may not be able to see the problem as quickly. With jmap, you can get a heap snapshot of a running JVM when it is in the state you want to understand more of, and it's really fast compared to using the hprof agent. The hprof dump file you get from jmap will not have the stack traces of where objects were allocated, which was a concern of mine a while back, but all indications are that these stack traces are not critical to finding memory leak problems. The allocation sites can usually be found with a good IDE or search tool, like the NetBeans 'Find Usages' feature.
    On hprof, there is a temp file created during the heap dump creation; ignore the tmp file.
    The HAT utility has been added to JDK6 (as jhat) and many problems have been fixed. But most importantly, this JDK6 jhat can read ANY hprof dump file, from JDK5 or even JDK1.4.2. So even though the JDK6 jhat is using JDK6 itself, the hprof dump file it is given could have come from pretty much anywhere, including jmap. As long as it's a valid hprof binary dump file.
    So even if it's just to have jhat handy, you should get JDK6.
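    A hedged sketch of that jmap + jhat workflow (the flag spelling differs between JDK 5 and JDK 6; the file name is an example):

    jmap -heap:format=b <pid>                   # JDK 5: writes heap.bin in the current directory
    jmap -dump:format=b,file=app.hprof <pid>    # JDK 6 form of the same request
    jhat -J-Xmx1g app.hprof                     # browse the dump at http://localhost:7000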
    Also, the NetBeans Profiler (http://www.netbeans.org) might be helpful too. But it will require a restart of the VM.
    -kto

  • JHAT unable to read heap dump

    I have a heap dump taken on a 64-bit Linux 1.5.0_16 JVM.
    I'm unable to read the file using JHAT (latest 1.6.0 JDK); I get the following error: WARNING: Ignoring unrecognized record type 32
    I have tried on both Win32 and 64-bit Linux platforms.
    Regards,

    Hi Mike,
    your heap dump is probably broken. Taking a heap dump with jmap under JDK 1.5 is more suited to recovering a heap dump from a core dump file than to taking the heap dump of a running Java application. jmap can observe the heap in an inconsistent state, and hence the dump file is not always useful. You can try to create a core dump of the Java application (if you can) and then use jmap to get a heap dump from the core dump. Note that there is no such problem when you are running on JDK/JRE 6.
    Bye,
    Tomas Hurka <mailto:[email protected]>
    NetBeans Profiler http://profiler.netbeans.org
    VisualVM http://visualvm.dev.java.net
    Software Engineer, Developer Platforms Group
    Sun Microsystems, Praha Czech Republic

  • Difficulty generating Heap dump...

    Hi,
    We are on NW04s SPS 12 on JAVA with Solaris and Oracle.
    We get OutOfMemory on intermittent basis and I want to take a heap dump.
    It takes forever to produce the dump, and generating it consumes lots of memory on the server.
    Last time it ran for 30 minutes, consumed almost 6 GB, and dumped only 50 MB; the dump was not complete. It starts writing to a file, but eventually I kill it because it takes too long and uses too much memory.
    I am taking the dump using this jmap command:
    /usr/sap/j2se/j2sdk1.4.2_13/bin/jmap -d64 -heap:format=b <pid>
    I am a little afraid to put the parameter in Visual Admin to get the dump, because it may crash the box or the dump may never complete.
    Please let me know if I am doing anything wrong.
    I will appreciate your help.
    Regards.
    SC

    Hi,
    you should create the DUMP using jcmon.
    Go to your SYS/profile directory:
    e.g. /usr/sap/<SID>/SYS/profile or X:\usr\sap.... on Windows
    Start jcmon using:
    jcmon pf=<PROFILE>
    The Profile to be used is <SID>_JC<InstanceNo>_<hostname>
    This will bring up jcmon:
    ============================================================
    JControl Monitor Program - Main Menu
    ============================================================
    0 : exit
    10 : Cluster Administration Menu
    20 : Local Administration Menu
    30 : Shared Memory Menu (Solid Rock, experimental)
    Choose 20 for Local Administration Menu
    You see a list of processes:
    Idx  Name        PID    State    Error  Restart
    0    dispatcher  16670  Running  0      yes
    1    server0     16671  Running  0      yes
    2    SDM         16672  Running  0      yes
    ============================================================
    JControl Monitor Program - Administration Menu  (Local)
    Instance : JC_<hostname>_<SID>_<InstanceNo>
    ============================================================
    0  : exit
    1  : Refresh list
    2  : Shutdown instance
    3  : Enable process
    4  : Disable process
    5  : Restart process
    6  : Enable bootstrapping on restart
    7  : Disable bootstrapping on restart
    8  : Enable debugging
    9  : Disable debugging
    10 : Dump stacktrace
    11 : Process list
    12 : Port list
    13 : Activate debug session
    14 : Deactivate debug session
    15 : Increment trace level
    16 : Decrement trace level
    17 : Enable process restart
    18 : Disable process restart
    19 : Restart instance
    40 : Enable bootstrapping for all processes with specified process type
    41 : Enable bootstrapping for all processes excluding specified process type
    99 : Extended process list on/off
    Now use option 10 to dump the stack trace.
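    Condensed as commands (the profile name follows the form given above):

    cd /usr/sap/<SID>/SYS/profile
    jcmon pf=<SID>_JC<InstanceNo>_<hostname>
    # choose 20 (Local Administration Menu), then 10 (Dump stacktrace)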
    Hope this helps (reward points for helpful answers are appreciated).
    Cheers

  • Generate Heap dump in AD4j

    Hi All,
    I'm unable to generate a heap dump from AD4J. I use a Java EE application on 1.5.0_04 and have deployed the jamagent.war file. When I click on the Dump Heap button, it throws up a JSP error page:
    JSP Error:
    Request URI:/jvmHeapDump.jsp
    Exception:
    java.lang.ClassFormatError: _jvmHeapDump (Bad magic number)
    Any ideas to resolve this error will be very helpful.
    Thanks.

    Please provide the exact error, and also tell us which JDK vendor you are using to get the heap dump.

  • Heap dump file - Generate to a different folder

    Hello,
    When the AS Java is generating the heap dump file, is it possible to generate it to a different folder rather than the standard one: /usr/sap// ?
    Best regards,
    Gonçalo  Mouro Vaz

    Hello Gonçalo
    I don't think this is possible.
    As per SAP Note 1004255:
    On the first occurrence (only) of an OutOfMemoryError the JVM will write a heap dump in the /usr/sap/ directory.
    Can I ask why you would like it in a different folder?
    Is it a space issue?
    Thanks
    Kenny

  • Problems with creation of a HEAP Dump

    Dear all,
    I have tried to create a HEAP Dump (CtrlBreak) with the Java-based SAP MC Console (Process Table => server<no.> => DumpStack), but I get no dump on the operating system (/usr/sap/<SID>/JC01/j2ee/cluster/server0/). I set the parameters -XX:HeapDumpOnOutOfMemoryError -XX:HeapDumpOnCtrlBreak in the VM. Our Java version is 1.4.12, the OS is Sun Solaris.
    Does anyone know why it does not work in our installation?
    Many thanks in advanced.
    Patrick

    Hi,
    The Java-based MC should provide the same functionality as the MMC. I tested triggering a heap dump today as you have described, and I had success.
    As the "Dump Stack" action is a protected one, the user should be asked for a user/password. Is that the case for you as well?
    I have also found several notes describing problems with the authentication of the sapstartsrv user on different Unix platforms. As no action at all is logged in your case, I guess that this could be the problem.
    Here are the notes I found:
    [Note 927637 - Web service authentication in sapstartsrv as of Release 7.00|https://service.sap.com/sap/support/notes/927637]
    [Note 992907 - sapstartsrv user authentication on Solaris|https://service.sap.com/sap/support/notes/992907]
    I hope this helps.
    Have you tried already to perform the action from an MMC?
    Regards,
    Krum

  • Java 1.4.2 - full heap dump?

    Hello,
    Is there any possibility to generate a full heap dump (core dump) in Java 1.4.2 on demand?
    Best regards,
    Peter

    If you are on a Unix platform, you can use this script to get thread dumps.
    I am not sure whether you can generate a core dump without an application crash.
    kill -3 <java pid> will provide a full stack trace (thread dump) of the Java process.
    Note: kill -3 will not terminate the Java process; it only writes the full stack trace to your log file and is safe to use while the Java process is running.
    You can get the Java process id using the Unix command "ps -ef | grep java".
    #!/bin/ksh
    # Capture CPU stats, native stacks, and a Java thread dump, twice, 1 second apart.
    [ $# -le 0 ] && echo "USAGE: $0 <pid>" && exit
    for i in 1 2
    do
      DT=`date +%Y%m%d_%H%M`
      prstat -Lmp $1 1 1 >> prstat_Lmp_${i}_${DT}.dmp   # per-thread CPU/microstate stats (Solaris)
      pstack $1 >> pstack_${i}_${DT}.dmp                # native stack traces
      kill -3 $1                                        # Java thread dump (goes to the JVM's stdout/log)
      echo "prstat, pstack, and thread dump done. #" $i
      sleep 1
      echo "Done sleeping."
    done
    Please go through some of these links; they explain how to debug the issue using the logs generated by the script:
    http://support.bea.com/application_content/product_portlets/support_patterns/wls/UnexpectedHighCPUUsageWithWLSPattern.html
    http://www.unixville.com/~moazam/stories/2004/05/18/debuggingHangsInTheJvm.html

  • How can I get a heap dump for 1.4.2_11 when OutOfMemory occurred

    Hi guys,
    How can I get a heap dump for 1.4.2_11 when OutOfMemory occurred, since it has no options like -XX:+HeapDumpOnOutOfMemoryError and -XX:+HeapDumpOnCtrlBreak?
    We are running WebLogic 8.1 SP3 applications on this Sun 1.4.2_11 JVM and it's throwing OutOfMemory errors, but we cannot find a heap dump. The application is running as a service on Windows Server 2003. How can I do some more analysis on this issue?
    Thanks.

    The HeapDumpOnOutOfMemoryError option was added to 1.4.2 in update 12. Further work to support all collectors was done in update 15.
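    So the fix, sketched, is to upgrade the JVM used by the Windows service to 1.4.2_12 or later and then add the flag:

    # available only from 1.4.2_12 onwards:
    java -XX:+HeapDumpOnOutOfMemoryError ...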

  • JVMPI_GC_ROOT_MONITOR_USED - what does this mean in a heap dump?

    I'm having some OutOfMemory errors in my application, so I turned on a profiler, and took a heap dump before and after an operation that is blowing up the memory.
    What changes after the operation is that I get an enormous amount of data that is reported under the node JVMPI_GC_ROOT_MONITOR_USED. This includes some Oracle PreparedStatements which are holding a lot of data.
    I tried researching the meaning of JVMPI_GC_ROOT_MONITOR_USED, but found little help. Should this be objects that are ready for garbage collection? If so, they are not being garbage collected, but I'm getting OutOfMemoryError instead (I thought the JVM was supposed to guarantee GC would be run before OutOfMemory occurred).
    Any help on how to interpret what it means for objects to be reported under JVMPI_GC_ROOT_MONITOR_USED and any ways to eliminate those objects, will be greatly appreciated!
    Thanks

    I tried researching the meaning of JVMPI_GC_ROOT_MONITOR_USED, but found little help. Should this be objects that are ready for garbage collection?
    Disclaimer: I haven't written code to use JVMPI, so anything here is speculation.
    However, after reading this: http://java.sun.com/j2se/1.4.2/docs/guide/jvmpi/jvmpi.html
    It appears that the "ROOT" flags in a level-2 dump are used with objects that are considered a "root reference" for GC (those references that are undeniably alive). Most descriptions of "roots" are static class members and variables in a stack frame. My interpretation of this doc is that objects used in a synchronized() statement are also considered roots, at least for the life of the synchronized block (makes a lot of sense when you think about it).

  • Analyse large heap dump file

    Hi,
    I have to analyse a large heap dump file (3.6 GB) from a production environment. However, if I open it in Eclipse MAT, it gives an OutOfMemoryError. I tried to increase the Eclipse workbench Java heap size as well, but it doesn't help. I also tried VisualVM. Can we split the heap dump file into smaller pieces? Or is there any way to set a maximum heap dump file size in the JVM options so that we collect heap dumps of a reasonable size?
    Thanks,
    Prasad

    Hi Prasad,
    Have you tried opening it in 64-bit MAT on a 64-bit platform, with a large heap size and the CMS GC policy set in the MemoryAnalyzer.ini file? MAT is a good toolkit for analysing Java heap dump files. If it doesn't work, you can try Memory Dump Diagnostic for Java (MDD4J) in the 64-bit IBM Support Assistant, with a large heap size.
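    For example, a sketch of the MemoryAnalyzer.ini settings for a 3.6 GB dump (the heap value is illustrative and requires a 64-bit JVM; everything after -vmargs is passed to MAT's JVM):

    -vmargs
    -Xmx6g
    -XX:+UseConcMarkSweepGC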
