Generate Heap dump in AD4j

Hi All,
I'm unable to generate a heap dump from AD4j. I'm running a Java EE application on JDK 1.5.0_04 with the jamagent.war file deployed. When I click the Dump Heap button, it throws up a JSP error page:
JSP Error:
Request URI:/jvmHeapDump.jsp
Exception:
java.lang.ClassFormatError: _jvmHeapDump (Bad magic number)
Any ideas to resolve this error will be very helpful.
Thanks.

Please provide the exact error, and also tell us which JDK vendor you are using to get the heap dump.
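For reference: java.lang.ClassFormatError (Bad magic number) means the JVM rejected the compiled JSP class because it does not begin with the class-file magic bytes CA FE BA BE, i.e. the generated _jvmHeapDump class file is corrupt or is not a class file at all. A quick hedged check, assuming you can locate the compiled class under the agent's JSP work directory (the path below is hypothetical):
# A valid .class file starts with the bytes: ca fe ba be
od -A x -t x1 /path/to/work/_jvmHeapDump.class | head -1
If the magic bytes are wrong, deleting the stale class file usually forces the JSP to be recompiled on the next request.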

Similar Messages

  • Difficulty generating heap dump

    Hi,
    We are on NW04s SPS 12 on JAVA with Solaris and Oracle.
    We get OutOfMemory errors on an intermittent basis, and I want to take a heap dump.
    It takes forever to produce the dump, and it consumes a lot of memory on the server.
    Last time it ran for 30 minutes, consumed almost 6 GB, and the dump was still incomplete; only 50 MB had been written. It starts writing to a file, but eventually I kill it because it takes too long and uses too much memory.
    I am taking the dump using a jmap command:
    /usr/sap/j2se/j2sdk1.4.2_13/bin/jmap -d64 -heap:format=b <pid>
    I am a little afraid to put the parameter in Visual Admin to get the dump, because it may crash the box or the dump may never complete.
    Please let me know if I am doing anything wrong.
    I will appreciate your help.
    Regards.
    SC

    Hi,
    you should create the dump using jcmon.
    Go to your SYS/profile directory:
    e.g. /usr/sap/<SID>/SYS/profile or X:\usr\sap.... on Windows
    Start jcmon using:
    jcmon pf=<PROFILE>
    The profile to be used is <SID>_JC<InstanceNo>_<hostname>
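    A minimal sketch with illustrative values (SID C11, instance number 00, hostname myhost):
    cd /usr/sap/C11/SYS/profile
    jcmon pf=C11_JC00_myhost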
    jcmon then shows its main menu:
    ============================================================
    JControl Monitor Program - Main Menu
    ============================================================
    0 : exit
    10 : Cluster Administration Menu
    20 : Local Administration Menu
    30 : Shared Memory Menu (Solid Rock, experimental)
    Choose 20 for Local Administration Menu
    You see a list of processes:
    Idx | Name       | PID   | State   | Error | Restart
    ----+------------+-------+---------+-------+--------
    0   | dispatcher | 16670 | Running | 0     | yes
    1   | server0    | 16671 | Running | 0     | yes
    2   | SDM        | 16672 | Running | 0     | yes
    ============================================================
    JControl Monitor Program - Administration Menu  (Local)
    Instance : JC_<hostname>_<SID>_<InstanceNo>
    ============================================================
    0  : exit
    1  : Refresh list
    2  : Shutdown instance
    3  : Enable process
    4  : Disable process
    5  : Restart process
    6  : Enable bootstrapping on restart
    7  : Disable bootstrapping on restart
    8  : Enable debugging
    9  : Disable debugging
    10 : Dump stacktrace
    11 : Process list
    12 : Port list
    13 : Activate debug session
    14 : Deactivate debug session
    15 : Increment trace level
    16 : Decrement trace level
    17 : Enable process restart
    18 : Disable process restart
    19 : Restart instance
    40 : Enable bootstrapping for all processes with specified process type
    41 : Enable bootstrapping for all processes excluding specified process type
    99 : Extended process list on/off
    Now use option 10 to dump the stack trace.
    Hope this helps. (Reward points for helpful answers are appreciated.)
    Cheers

  • EM Agent gets restarted on heap dump generation with jmap command

    Hi all,
    To find the cause of an OutOfMemory error, we are generating a heap dump of a particular JVM process using the command "jmap -dump:format=b,file=heap.bin <pid>".
    But when we execute this command, all our other JVM processes crash, including the emagent process.
    I would like to know why these processes, and especially the emagent process, crash.
    Regards,
    $

    Hi Michal,
    that looks very odd indeed. First, let me ask which version you are using. We had a problem in the past where classes loaded by the system class loader were not marked as garbage collection roots and hence were removed. This problem is fixed in the current version (1.1). If it is version 1.1, then I would love to have a look at the heap dump and find out if it is our fault.
    Having said that, this is what we do: after parsing the heap dump, we remove objects which are not reachable from garbage collection roots. This is necessary because the heap dump can contain garbage. For example, the mark-sweep-compact of the old/perm generation leaves some dead space in the form of int arrays or java.lang.Object instances to save time during the compacting phase: by leaving behind dead objects, not every live object has to be moved, which means not every object needs a new address. This is the kind of garbage we remove.
    Of course, we do not remove objects kept alive only by weak or soft references. To see what memory is kept alive only through weak or soft references, one can run the "Soft Reference Statistics" from the menu.
    Kind regards,
       - Andreas.
    Edited by: Andreas Buchen on Feb 14, 2008 6:23 PM

  • Heap dump file - Generate to a different folder

    Hello,
    When the AS Java is generating the heap dump file, is it possible to generate it to a different folder rather than the standard one: /usr/sap// ?
    Best regards,
    Gonçalo  Mouro Vaz

    Hello Gonçalo
    I don't think this is possible.
    As per SAP Note 1004255:
    On the first occurrence (only) of an OutOfMemoryError, the JVM will write a heap dump in the /usr/sap/ directory.
    Can I ask why you would like it in a different folder?
    Is it a space issue?
    Thanks
    Kenny
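    For what it's worth: HotSpot-based JVMs that support heap dumps on OutOfMemoryError also have an -XX:HeapDumpPath option to redirect where the dump is written. A hedged sketch (verify first whether your AS Java's JVM version honours the option and whether SAP supports changing it; the path is illustrative):
    -XX:+HeapDumpOnOutOfMemoryError
    -XX:HeapDumpPath=/bigdisk/heapdumps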

  • AD4J: Unable to start Heap Dump due to OS Error

    When Java 'Heap Dump' is requested, the following message is shown:
    Unable to start Heap Dump due to OS Error.
    Details of the monitored JDK and application server are given below:
    JDK: Sun JDK 1.4.2_08 on Solaris 9
    Application Server: WebLogic 8.1 SP4
    1. What could be the possible cause? No errors are logged in jamserv/logs/error_log. Is there any way to enable detailed logging?
    2. Each time a heap dump is requested, a file of the form heapdump<n> gets created in /tmp (e.g. /tmp/heapdump12.txt). Looking inside the file, it contains the following:
    a) a line containing summary of the heap usage, and
    b) stack traces of all the threads
    Thanks!

    Wrong Forum?

  • Heap Dump file generation problem

    Hi,
    I've configured configtool to have these 2 parameters:
    -XX:+HeapDumpOnOutOfMemoryError
    -XX:+HeapDumpOnCtrlBreak
    In my understanding, with these 2 parameters, heap dump files will only be generated in 2 situations: when an out-of-memory error occurs, or when a user manually presses CTRL + BREAK in the MMC.
    1) Unfortunately, many heap dump files (9 in total) were generated when neither of the above situations occurred. I couldn't find "OutOfMemoryError" in the default trace, nor is the shallow heap size of those heap dump files anywhere near the memory limit. As a consequence, our server ran out of disk space.
    My question is: under what other circumstances will a heap dump file be generated?
    2) In the Memory Consumption graph (NWA (http://host:port/nwa) -> System Management -> Monitoring -> Java Systems Reports), the out-of-memory error occurred when memory usage was at about 80% of the allocated memory. What is the remaining 20% or so reserved for?
    Any help would be much appreciated.
    Thanks.

    Hi,
    Having the -XX:+HeapDumpOnCtrlBreak option makes the VM trigger a heap dump whenever a CTRL_BREAK event appears. The same event is also used to trigger a thread dump, an action you can perform manually from the SAP Management Console; I think it is called "Dump stacks". So if someone was triggering thread dumps to analyse other types of problems, this has the side effect of also writing a heap dump.
    Additionally, the server itself may trigger a thread dump (and, with this option present, also a heap dump). It does this, for example, when a timeout occurs during the start or stop of the server. A thread dump from such a moment allows us to see, for example, which service is unable to start.
    Therefore, I would recommend that you leave only -XX:+HeapDumpOnOutOfMemoryError, as long as you don't plan to trigger any heap dumps on your own. The latter causes the VM to write a heap dump only once: on the first occurrence of an OutOfMemoryError.
    In case you need to trigger heap dumps manually, leave the -XX:+HeapDumpOnCtrlBreak option in place for the troubleshooting period, but consider whether you want to keep it afterwards.
    If heap dumps were written because of an OutOfMemoryError, you should be able to see this in the dev_server file in /usr/sap/<SID>/<inst>/work/ . There you should also be able to see whether thread dumps were triggered (just search for "Full Thread "); see the sketch after this post.
    I hope this helps.
    Regards,
    Krum
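    A hedged illustration of those checks (adjust <SID>, <inst>, and the node name to your instance; the paths follow the pattern in the post above):
    grep "Full Thread" /usr/sap/<SID>/<inst>/work/dev_server0
    grep -i OutOfMemoryError /usr/sap/<SID>/<inst>/work/dev_server0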

  • Java 1.4.2 - full heap dump?

    Hello,
    Is there any way to generate a full heap dump (core dump) in Java 1.4.2 on demand?
    Best regards,
    Peter

    If you are on a Unix platform, you can use this script to take a thread dump.
    I am not sure whether you can generate a core dump without an application crash.
    kill -3 <java pid> will write a full stack trace of every thread in the Java process.
    Note: kill -3 will not terminate the Java process; it only writes the full stack traces to your log file and is safe to use while the Java process is running.
    You can get the Java process id using the Unix command "ps -ef | grep java".
    #!/bin/ksh
    [ $# -le 0 ] && echo "USAGE: $0 <pid>" && exit
    for i in 1 2
    do
        DT=`date +%Y%m%d_%H%M`
        # ${i} and ${DT} need braces: $i_ would be parsed as a variable named "i_"
        prstat -Lmp $1 1 1 >> prstat_Lmp_${i}_${DT}.dmp
        pstack $1 >> pstack_${i}_${DT}.dmp
        kill -3 $1
        echo "prstat, pstack, and thread dump done. #" $i
        sleep 1
        echo "Done sleeping."
    done
    Pls go through some of this links, this will provide you on how to debug the issue with the logs generated by the scripts:
    http://support.bea.com/application_content/product_portlets/support_patterns/wls/UnexpectedHighCPUUsageWithWLSPattern.html
    http://www.unixville.com/~moazam/stories/2004/05/18/debuggingHangsInTheJvm.html

  • Generating a heap file fails

    Hi
    I'm trying to generate a heap file for about 500 MB of RAM, but it always shows the message
    file size limit
    I use the command:
    jmap -dump:file=out.dump,format=b 3059
    And the file cannot be opened in MAT; it shows up as plain text with character-encoding errors.
    Can anyone help me?
    Thanks and best regards.

    Hunters wrote:
    I'm trying to generate a heap file for about 500 MB of RAM, but it always shows the message
    file size limit
    On Linux? Check your file size limits (the last command raises the limit for the current shell; note that ulimit is a shell builtin, so it cannot be run through sudo, and raising the hard limit may require root):
    $ ulimit -a
    $ echo -e "(Hard) file size limit (blocks): \c"; ulimit -Hf
    $ echo -e "(Soft) file size limit (blocks): \c"; ulimit -Sf
    $ ulimit -f unlimited

  • JVMDG315: JVM Requesting Heap dump file

    Hi experts,
    Using a 10.2.0.4 DB with 11.5.10.2 on AIX 64-bit.
    While running an audit report that generates PDF output, it failed with the following error:
    JVMDG217: Dump Handler is Processing OutOfMemory - Please Wait.
    JVMDG315: JVM Requesting Heap dump file
    ..............................JVMDG318: Heap dump file written to /d04_testdbx/appltop/testcomn/admin/log/TEST_tajorn3/heapdump7545250.1301289300.phd
    JVMDG303: JVM Requesting Java core file
    JVMDG304: Java core file written to /d04_testdbx/appltop/testcomn/admin/log/TEST_tajorn3/javacore7545250.1301289344.txt
    JVMDG274: Dump Handler has Processed OutOfMemory.
    JVMST109: Insufficient space in Javaheap to satisfy allocation request
    Exception in thread "main" java.lang.OutOfMemoryError
         at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java(Compiled Code))
         at java.io.OutputStream.write(OutputStream.java(Compiled Code))
         at oracle.apps.xdo.common.log.LogOutputStream.write(LogOutputStream.java(Compiled Code))
         at oracle.apps.xdo.generator.pdf.PDFStream.output(PDFStream.java(Compiled Code))
         at oracle.apps.xdo.generator.pdf.PDFGenerator$ObjectManager.write(PDFGenerator.java(Compiled Code))
         at oracle.apps.xdo.generator.pdf.PDFGenerator$ObjectManager.writeWritable(PDFGenerator.java(Inlined Compiled Code))
         at oracle.apps.xdo.generator.pdf.PDFGenerator.closePage(PDFGenerator.java(Compiled Code))
         at oracle.apps.xdo.generator.pdf.PDFGenerator.newPage(PDFGenerator.java(Compiled Code))
         at oracle.apps.xdo.generator.ProxyGenerator.genNewPageWithSize(ProxyGenerator.java(Compiled Code))
         at oracle.apps.xdo.generator.ProxyGenerator.processCommand(ProxyGenerator.java(Compiled Code))
         at oracle.apps.xdo.generator.ProxyGenerator.processCommandsFromDataStream(ProxyGenerator.java(Compiled Code))
         at oracle.apps.xdo.generator.ProxyGenerator.close(ProxyGenerator.java:1230)
         at oracle.apps.xdo.template.fo.FOProcessingEngine.process(FOProcessingEngine.java:336)
         at oracle.apps.xdo.template.FOProcessor.generate(FOProcessor.java:1043)
         at oracle.apps.xdo.oa.schema.server.TemplateHelper.runProcessTemplate(TemplateHelper.java:5888)
         at oracle.apps.xdo.oa.schema.server.TemplateHelper.processTemplate(TemplateHelper.java:3593)
         at oracle.apps.pay.pdfgen.server.PayPDFGen.processForm(PayPDFGen.java:243)
         at oracle.apps.pay.pdfgen.server.PayPDFGen.runProgram(PayPDFGen.java:795)
         at oracle.apps.fnd.cp.request.Run.main(Run.java:161)
    oracle.apps.pay.pdfgen.server.PayPDFGen
    Program exited with status 1
    There are no errors in the DB alert log.
    Please suggest where the problem is.
    Thanks in advance

    Hello,
    * JVMST109: Insufficient space in Javaheap to satisfy allocation request
    Exception in thread "main" java.lang.OutOfMemoryError
    This is your problem: there is not enough memory. Increase the Java heap settings; see the sketch below.
    Regards,
    Shomoos
    Edited by: shomoos.aldujaily on Mar 29, 2011 12:39 AM
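    A minimal sketch, assuming the report's JVM heap is controlled by standard HotSpot flags (where to set them depends on how the concurrent program launches Java; the values are illustrative):
    # raise the maximum heap of the JVM that renders the PDF
    java -Xms256m -Xmx1024m ...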

  • How to take regular heap dumps using HPROF

    Hi Folks,
    I am using Oracle App Server as my application server. I found that the memory grows gradually and maxes out within 1 hour. I am using 1 GB of heap.
    I definitely feel this is a memory leak issue. Once heap usage reaches 100%, I start getting full GCs, and my whole server hangs and nothing works. Sometimes the JVM even crashes and restarts.
    I didn't find an OutOfMemory exception in any of my logs either.
    I came to know that we can use Hprof to deal with this.
    I use the following JVM args:
    -agentlib:hprof=heap=all,format=b,depth=10,file=$ORACLE_HOME\hprof\Data.hprof
    I run my load run for 10 mins, now my heap usage has been grown to some extent.
    My Questions:
    1. Why are 2 files generated, one named Data.hprof and another named Data.hprof.tmp? Which is which?
    2. How do I get a dump at 2 different points in time, so that I can compare the 2 dumps and say which object is growing?
    I downloaded HAT, and if I open this Data.hprof file with HAT, I get the error below. This error occurs if I open the file without stopping the JVM process.
    java.io.EOFException
    at java.io.DataInputStream.readFully(DataInputStream.java:178)
    at java.io.DataInputStream.readFully(DataInputStream.java:152)
    at hat.parser.HprofReader.read(HprofReader.java:202)
    at hat.parser.Reader.readFile(Reader.java:90)
    at hat.Main.main(Main.java:149)
    If I stop the JVM process, and then open through HAT I am getting this error,
    Started HTTP server on port 7000
    Reading from hprofData.hprof...
    Dump file created Wed Dec 13 02:35:03 MST 2006
    Warning: Weird stack frame line number: -688113664
    java.io.IOException: Bad record length of -1551478782 at byte 0x0008ffab of file.
    at hat.parser.HprofReader.read(HprofReader.java:193)
    at hat.parser.Reader.readFile(Reader.java:90)
    at hat.Main.main(Main.java:149)
    The JVM version I am using is Sun JVM 1.5.0_06.
    I am seriously fed up with this memory leak issue. Please help me out, folks; I need this as early as possible.
    I hope I get early replies.
    Thanks in advance.

    First, the suggestion of using jmap is an excellent one; you should try it. On large applications, the hprof agent requires you to restart your VM, and hprof can disturb your JVM process, so you may not be able to see the problem as quickly. With jmap, you can get a heap snapshot of a running JVM when it is in the state you want to understand, and it's really fast compared to the hprof agent. The hprof dump file you get from jmap will not have the stack traces of where objects were allocated, which was a concern of mine a while back, but all indications are that those stack traces are not critical to finding memory leak problems. The allocation sites can usually be found with a good IDE or search tool, like the NetBeans 'Find Usages' feature.
    With hprof, a temp file is created while the heap dump is being written; ignore the .tmp file.
    The HAT utility has been added to JDK6 (as jhat) and many problems have been fixed. But most importantly, this JDK6 jhat can read ANY hprof dump file, from JDK5 or even JDK1.4.2. So even though the JDK6 jhat is using JDK6 itself, the hprof dump file it is given could have come from pretty much anywhere, including jmap. As long as it's a valid hprof binary dump file.
    So even if it's just to have jhat handy, you should get JDK6.
    Also, the Netbeans profiler (http://www.netbeans.org) might be helpful too. But it will require a restart of the VM.
    -kto
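    A hedged sketch of that workflow on the poster's JDK 5 (jmap's -dump option only arrived in JDK 6; on JDK 5 the binary heap option below writes heap.bin to the current directory):
    jmap -heap:format=b <pid>
    jhat -J-Xmx1g heap.bin
    Then browse http://localhost:7000 to explore the dump.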

  • Difference between heap dump/core dump/thread dump?

    Whats the difference between heap dump/core dump/thread dump?

    To be very precise: a core dump is generated when your JVM is shut down abnormally; it contains the recorded state of the program's working memory at a specific time, plus some platform-specific details.
    A heap dump can be generated to see the live objects at a specific point in time.
    A thread dump can be generated to see the threads present (running, blocked, and so on) in a live JVM at a specific point in time.
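    Hedged examples of how each is commonly produced (availability and exact syntax depend on JDK version and platform):
    kill -3 <pid>                               # thread dump, written to the JVM's stdout/stderr
    jmap -dump:format=b,file=heap.bin <pid>     # heap dump (JDK 6+ syntax)
    gcore <pid>                                 # OS-level core dump (Solaris; via gdb's gcore on Linux)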

  • Problems with creating a heap dump

    Dear all,
    I have tried to create a heap dump (CtrlBreak) with the Java-based SAP MC console (Process Table => server<no.> => DumpStack), but I get no dump on the operating system (/usr/sap/<SID>/JC01/j2ee/cluster/server0/). I set the parameters -XX:+HeapDumpOnOutOfMemoryError -XX:+HeapDumpOnCtrlBreak in the VM. Our Java version is 1.4.12, the OS is Sun Solaris.
    Does anyone know why it does not work in our installation?
    Many thanks in advanced.
    Patrick

    Hi,
    The Java-based MC should provide the same functionality as the MMC. I tested triggering a heap dump today as you described, and it worked.
    As the "Dump Stack" action is a protected one, the user should be asked for a user/password. Is that the case for you as well?
    I have found several notes describing problems with the authentication of the sapstartsrv user on different Unix platforms. As no action at all is logged in your case, I guess this could be the problem.
    Here are the notes I found:
    [Note 927637 - Web service authentication in sapstartsrv as of Release 7.00|https://service.sap.com/sap/support/notes/927637]
    [Note 992907 - sapstartsrv user authentication on Solaris|https://service.sap.com/sap/support/notes/992907]
    I hope this helps.
    Have you tried already to perform the action from an MMC?
    Regards,
    Krum

  • How can I get a heap dump for 1.4.2_11 when OutOfMemory occurs

    Hi guys,
    How can I get a heap dump for 1.4.2_11 when an OutOfMemory error occurs, since it has no options like -XX:+HeapDumpOnOutOfMemoryError and -XX:+HeapDumpOnCtrlBreak?
    We are running WebLogic 8.1 SP3 applications on this Sun 1.4.2_11 JVM and it's throwing OutOfMemory errors, but we cannot find a heap dump. The application runs as a service on Windows Server 2003. How can I do more analysis on this issue?
    Thanks.

    The HeapDumpOnOutOfMemoryError option was added to 1.4.2 in update 12. Further work to support all collectors was done in update 15.
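    So a hedged path forward is to update to 1.4.2_12 or later and add the flag to the Windows service's JVM options (how to edit the service's options depends on the wrapper WebLogic was installed with; the flag itself is a standard HotSpot option):
    java -XX:+HeapDumpOnOutOfMemoryError ...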

  • JVMPI_GC_ROOT_MONITOR_USED - what does this mean in a heap dump?

    I'm having some OutOfMemory errors in my application, so I turned on a profiler, and took a heap dump before and after an operation that is blowing up the memory.
    What changes after the operation is that I get an enormous amount of data that is reported under the node JVMPI_GC_ROOT_MONITOR_USED. This includes some Oracle PreparedStatements which are holding a lot of data.
    I tried researching the meaning of JVMPI_GC_ROOT_MONITOR_USED, but found little help. Should this be objects that are ready for garbage collection? If so, they are not being garbage collected, but I'm getting OutOfMemoryError instead (I thought the JVM was supposed to guarantee GC would be run before OutOfMemory occurred).
    Any help on how to interpret what it means for objects to be reported under JVMPI_GC_ROOT_MONITOR_USED, and any ways to eliminate those objects, will be greatly appreciated!
    Thanks

    I tried researching the meaning of JVMPI_GC_ROOT_MONITOR_USED, but found little help. Should this be objects that are ready for garbage collection?
    Disclaimer: I haven't written code to use JVMPI, so anything here is speculation.
    However, after reading this: http://java.sun.com/j2se/1.4.2/docs/guide/jvmpi/jvmpi.html
    It appears that the "ROOT" flags in a level-2 dump are used with objects that are considered a "root reference" for GC (those references that are undeniably alive). Most descriptions of "roots" are static class members and variables in a stack frame. My interpretation of this doc is that objects used in a synchronized() statement are also considered roots, at least for the life of the synchronized block (which makes a lot of sense when you think about it).

  • Analyse large heap dump file

    Hi,
    I have to analyse a large heap dump file (3.6 GB) from a production environment. However, if I open it in Eclipse MAT, it gives an OutOfMemoryError. I tried increasing the Eclipse workbench Java heap size as well, but it doesn't help. I also tried VisualVM. Can we split the heap dump file into smaller pieces? Or is there any way to set a maximum heap dump file size via JVM options, so that we collect heap dumps of a reasonable size?
    Thanks,
    Prasad

    Hi, Prasad
    Have you tried opening it in 64-bit MAT on a 64-bit platform, with a large heap size and the CMS GC policy set in the MemoryAnalyzer.ini file? MAT is a good toolkit for analysing a Java heap dump file; if that doesn't work, you can try Memory Dump Diagnostic for Java (MDD4J) in the 64-bit IBM Support Assistant with a large heap size. See the sketch below.
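    A minimal sketch of the MemoryAnalyzer.ini change described above (the heap value is illustrative; everything after -vmargs is passed to MAT's own JVM):
    -vmargs
    -Xmx8g
    -XX:+UseConcMarkSweepGC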
