Heap dumps again & again ...

Hi,
I'm looking into an application that is frequently producing heap dumps. I have an idea where the problem occurs; trouble is, I cannot run the app on my local machine due to some issue. I've tried HeapAnalyser on the dump file, and the results show a huge collection of primitives and java.lang.Object instances, but I don't see where the trouble is occurring. Is there a way I can analyse the code better? How can we go about solving the heap dump problem (without having to increase the memory allocated to the heap)?

Hi,
App server: WAS 5.0, and the heap size is 512 MB. Looking at the log, the java.lang.OutOfMemoryError occurs only when changes made to a user's profile are activated by the owner, which is the point where the most data is fetched from the DB and stored into a Vector.
I'm currently planning to add checkpoints at various locations to log memory usage, as sketched below. Any better suggestions?
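A minimal sketch of such a checkpoint at the OS level, assuming a Unix host (the <pid> placeholder stands for the WAS server process id, and the log file name is made up for illustration):

# Log the resident set size of the server process every 10 seconds.
while true
do
    echo "`date '+%H:%M:%S'` RSS(KB): `ps -o rss= -p <pid>`" >> mem_checkpoints.log
    sleep 10
done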

Similar Messages

  • How to take regular heap dumps using HPROF

    Hi Folks,
I am using Oracle App Server as my application server. I found that the memory grows gradually and maxes out within 1 hour. I am using a 1 GB heap.
I definitely feel this is a memory leak issue. Once the heap usage reaches 100%, I start getting full GCs and the whole server hangs; nothing works. Sometimes the JVM even crashes and restarts.
I didn't find an OutOfMemory exception in any of my logs either.
I came to know that we can use HPROF to deal with this.
I use the following JVM args:
    -agentlib:hprof=heap=all,format=b,depth=10,file=$ORACLE_HOME\hprof\Data.hprof
I ran my load test for 10 minutes, and the heap usage has now grown to some extent.
My questions:
1. Why are 2 files generated, one named Data.hprof and another named Data.hprof.tmp? Which is which?
2. How do I get a dump at 2 different points, so that I can compare the two dumps and say which objects are growing?
I downloaded HAT, and if I open this Data.hprof file with HAT, I get the error below. This error comes if I open the file without stopping the JVM process.
    java.io.EOFException
    at java.io.DataInputStream.readFully(DataInputStream.java:178)
    at java.io.DataInputStream.readFully(DataInputStream.java:152)
    at hat.parser.HprofReader.read(HprofReader.java:202)
    at hat.parser.Reader.readFile(Reader.java:90)
    at hat.Main.main(Main.java:149)
If I stop the JVM process and then open it through HAT, I get this error:
    Started HTTP server on port 7000
    Reading from hprofData.hprof...
    Dump file created Wed Dec 13 02:35:03 MST 2006
    Warning: Weird stack frame line number: -688113664
    java.io.IOException: Bad record length of -1551478782 at byte 0x0008ffab of file.
    at hat.parser.HprofReader.read(HprofReader.java:193)
    at hat.parser.Reader.readFile(Reader.java:90)
    at hat.Main.main(Main.java:149)
The JVM version I am using is Sun JVM 1.5.0_06.
I am seriously fed up with this memory leak issue. Please help me out, folks; I need this resolved as early as possible.
I hope I get early replies.
Thanks in advance.

First, the suggestion of using jmap is an excellent one; you should try it. On large applications, using the hprof agent means you have to restart your VM, and hprof can disturb your JVM process, so you may not be able to see the problem as quickly. With jmap, you can get a heap snapshot of a running JVM when it is in the state you want to understand better, and it's really fast compared to using the hprof agent. The hprof dump file you get from jmap will not have the stack traces of where objects were allocated, which was a concern of mine a while back, but all indications are that these stack traces are not critical to finding memory leak problems. The allocation sites can usually be found with a good IDE or search tool, like the NetBeans 'Find Usages' feature.
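To illustrate the jmap route, a minimal sketch (JDK 6 command syntax; <pid> and the file name are placeholders):

# Take a binary heap snapshot of the running JVM, no restart needed.
jmap -dump:format=b,file=heap.hprof <pid>
# Read the dump with jhat and browse the results at http://localhost:7000.
jhat heap.hprof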
On hprof, a temp file is created while the heap dump is being written; ignore the .tmp file.
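On question 2, one possible approach is sketched below, assuming a Unix host; per the HPROF documentation, sending the process a SIGQUIT (Ctrl-\) makes the agent write its data at that moment:

# Force an hprof dump before and after the suspect operation,
# then compare the two results. <pid> is the Java process id.
kill -QUIT <pid>
# ...apply the load...
kill -QUIT <pid>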
The HAT utility has been added to JDK6 (as jhat) and many problems have been fixed. Most importantly, this JDK6 jhat can read ANY hprof dump file, from JDK5 or even JDK1.4.2. So even though jhat itself ships with JDK6, the hprof dump file it is given could have come from pretty much anywhere, including jmap, as long as it's a valid hprof binary dump file.
    So even if it's just to have jhat handy, you should get JDK6.
Also, the NetBeans profiler (http://www.netbeans.org) might be helpful too, but it will require a restart of the VM.
    -kto

  • Heap dump file - Generate to a different folder

    Hello,
When the AS Java is generating the heap dump file, is it possible to generate it to a different folder rather than the standard one: /usr/sap// ?
    Best regards,
    Gonçalo  Mouro Vaz

    Hello Gonçalo
I don't think this is possible.
As per SAP Note 1004255: on the first occurrence (only) of an OutOfMemoryError, the JVM will write a heap dump in the /usr/sap/ directory.
Can I ask why you would like it in a different folder?
    Is it a space issue?
    Thanks
    Kenny

  • Problems with creation of an HEAP Dump

    Dear all,
I have tried to create a heap dump (CtrlBreak) with the Java-based SAP MC console (Process Table => server<no.> => DumpStack), but I get no dump on the operating system (/usr/sap/<SID>/JC01/j2ee/cluster/server0/). I set the parameters -XX:HeapDumpOnOutOfMemoryError -XX:HeapDumpOnCtrlBreak in the VM. Our Java version is 1.4.12, and the OS is Sun Solaris.
Does anyone know why it does not work in our installation?
Many thanks in advance.
    Patrick

    Hi,
The Java-based MC should provide the same functionality as the MMC. I tested triggering a heap dump today as you have described, and it worked.
As the "Dump Stack" action is a protected one, the user should be asked for a user/password. Is that the case for you as well?
I have also found several notes describing problems with the authentication of the sapstartsrv user on different Unix platforms. As no action at all is logged in your case, I guess that this could be the problem.
    Here are the notes I found:
    [Note 927637 - Web service authentication in sapstartsrv as of Release 7.00|https://service.sap.com/sap/support/notes/927637]
    [Note 992907 - sapstartsrv user authentication on Solaris|https://service.sap.com/sap/support/notes/992907]
    I hope this helps.
Have you already tried to perform the action from an MMC?
    Regards,
    Krum

  • Heap Dump file generation problem

    Hi,
    I've configured configtool to have these 2 parameters:
    -XX:+HeapDumpOnOutOfMemoryError
    -XX:+HeapDumpOnCtrlBreak
In my understanding, with these 2 parameters, heap dump files will only be generated in 2 situations, i.e., when an out-of-memory error occurs, or when the user manually presses CTRL+BREAK in the MMC.
1) Unfortunately, many heap dump files (9 in total) were generated when neither of the above situations occurred. I couldn't find "OutOfMemoryError" in the default trace, nor is the shallow heap size of those heap dump files anywhere near the memory limit. The consequence is that our server ran out of disk space.
My question is: under what other circumstances can a heap dump file be generated?
2) In the Memory Consumption graph (NWA (http://host:port/nwa) -> System Management -> Monitoring -> Java Systems Reports), the out-of-memory error occurred when memory usage reached about 80% of the allocated memory. What is the remaining 20% or so reserved for?
    Any help would be much appreciated.
    Thanks.

    Hi,
Having the -XX:+HeapDumpOnCtrlBreak option makes the VM trigger a heap dump whenever a CTRL_BREAK event arrives. The same event is also used to trigger a thread dump, an action you can perform manually from the SAP Management Console (I think it is called "Dump stacks"). So if someone was triggering thread dumps to analyse other types of problems, this had the side effect of also writing a heap dump.
Additionally, the server itself may trigger a thread dump (and, with this option present, also a heap dump). It does this, for example, when a timeout occurs during the start or stop of the server. A thread dump from such a moment lets us see, for example, which service is unable to start.
Therefore, I would recommend that you leave only the -XX:+HeapDumpOnOutOfMemoryError option, as long as you don't plan to trigger any heap dumps on your own. It will cause the VM to write a heap dump only once, on the first appearance of an OutOfMemoryError.
In case you need to trigger heap dumps manually, leave the -XX:+HeapDumpOnCtrlBreak option in place for the duration of the troubleshooting, but consider whether you want to keep it afterwards.
If heap dumps were written because of an OutOfMemoryError, you should be able to see this in the dev_server file in /usr/sap/<SID>/<inst>/work/. There you should also be able to see whether thread dumps were indeed triggered (just search for "Full Thread "), for example as shown below.
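For reference, a sketch of that check from the command line (using the placeholder path from above):

# Look for OutOfMemoryError and for triggered thread dumps.
grep "OutOfMemoryError" /usr/sap/<SID>/<inst>/work/dev_server*
grep "Full Thread" /usr/sap/<SID>/<inst>/work/dev_server*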
    I hope this helps.
    Regards,
    Krum

  • Problem with Premiere always conforming Audio again & again

I've got a problem with Premiere CC, which keeps conforming audio time and again. This problem is not new in Premiere, and I faced it in previous versions too, but I've never found a good way to avoid it, especially when I work with heavier clips or with formats like MPEG or MOV.
What should I do? Is it a network issue?
    Thanks, Jim.

    No Antivirus.
    Audio is 5.1 audio from the video AVCHD, and music tracks are mp3 files.
    Hardware:  Intel i7 970 hex core processor, system drive (C:) 600GB 7500rpm, data drive (F:) where audio and video files are stored 10000rpm 500GB velociraptor, NVidia GTX 470 video card.

  • Java 1.4.2. - full heap dump?

    Hello,
Is there any way to generate a full heap dump (core dump) in Java 1.4.2 on demand?
    Best regards,
    Peter

If you are on a Unix platform, you can use the script below to take a thread dump.
I am not sure whether you can generate a core dump without an application crash.
kill -3 <java pid> will produce a full stack trace of the Java process (a thread dump).
Note: kill -3 will not terminate the Java process; it only writes the full stack trace to your log file and is safe to use while the process is running.
You can get the Java process id using the Unix command "ps -ef | grep java".
#!/bin/ksh
# Capture prstat output, pstack output, and a JVM thread dump twice,
# one second apart, for the process id given as the first argument.
[ $# -le 0 ] && echo "USAGE: $0 <pid>" && exit
for i in 1 2
do
    DT=`date +%Y%m%d_%H%M`
    prstat -Lmp $1 1 1 >> prstat_Lmp_${i}_${DT}.dmp   # per-thread CPU/memory stats (Solaris)
    pstack $1 >> pstack_${i}_${DT}.dmp                # native stack traces
    kill -3 $1                                        # thread dump, written to the JVM's log
    echo "prstat, pstack, and thread dump done. #" $i
    sleep 1
    echo "Done sleeping."
done
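For example, assuming the script is saved as dumps.ksh (a made-up name) and the Java process id is 12345:

chmod +x dumps.ksh
./dumps.ksh 12345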
Please go through some of these links; they explain how to debug such issues using the logs generated by the script:
    http://support.bea.com/application_content/product_portlets/support_patterns/wls/UnexpectedHighCPUUsageWithWLSPattern.html
    http://www.unixville.com/~moazam/stories/2004/05/18/debuggingHangsInTheJvm.html

• How can I get a heap dump for 1.4.2_11 when OutOfMemory occurred

    Hi guys,
How can I get a heap dump for 1.4.2_11 when an OutOfMemoryError occurs, since it has no options like -XX:+HeapDumpOnOutOfMemoryError and -XX:+HeapDumpOnCtrlBreak?
We are running WebLogic 8.1 SP3 applications on this Sun 1.4.2_11 JVM and it's throwing OutOfMemoryError, but we cannot find a heap dump. The application is running as a service on Windows Server 2003. How can I do more analysis on this issue?
    Thanks.

    The HeapDumpOnOutOfMemoryError option was added to 1.4.2 in update 12. Further work to support all collectors was done in update 15.
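So once you are on 1.4.2_12 or later, the flag can be added to the server's Java options; a sketch, assuming the usual WebLogic start-script variable:

# 1.4.2_12 or later only; writes a heap dump on the first OutOfMemoryError.
JAVA_OPTIONS="$JAVA_OPTIONS -XX:+HeapDumpOnOutOfMemoryError"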

• Installed Mozilla yesterday but still an IE pop-up message appears that IE needs to be closed; close it, and it comes back again and again

I installed Mozilla yesterday, but the IE pop-up message still appears saying that IE needs to be closed; when I close it, it comes back again and again.
How can I get IE to disappear and stay gone?

    Hi,
Try disabling the pop-up option in IE, then enjoy Mozilla Firefox. Happy browsing!
Cheers :)
Regards,
Deepak Krishnan P.R

  • JVMPI_GC_ROOT_MONITOR_USED - what does this mean in a heap dump?

I'm having some OutOfMemory errors in my application, so I turned on a profiler and took a heap dump before and after an operation that is blowing up the memory.
What changes after the operation is that I get an enormous amount of data reported under the node JVMPI_GC_ROOT_MONITOR_USED. This includes some Oracle PreparedStatements which are holding a lot of data.
I tried researching the meaning of JVMPI_GC_ROOT_MONITOR_USED, but found little help. Should these be objects that are ready for garbage collection? If so, they are not being garbage collected; I'm getting OutOfMemoryError instead (I thought the JVM was supposed to guarantee that GC would be run before OutOfMemory occurred).
Any help on how to interpret what it means for objects to be reported under JVMPI_GC_ROOT_MONITOR_USED, and any ways to eliminate those objects, will be greatly appreciated!
    Thanks

I tried researching the meaning of JVMPI_GC_ROOT_MONITOR_USED, but found little help. Should this be objects that are ready for garbage collection?
Disclaimer: I haven't written code to use JVMPI, so anything here is speculation.
However, after reading this: http://java.sun.com/j2se/1.4.2/docs/guide/jvmpi/jvmpi.html
it appears that the "ROOT" flags in a level-2 dump are used with objects that are considered a "root reference" for GC (those references that are undeniably alive). Most descriptions of "roots" cover static class members and variables in a stack frame. My interpretation of this doc is that objects used in a synchronized() statement are also considered roots, at least for the life of the synchronized block (which makes a lot of sense when you think about it).

  • Analyse large heap dump file

    Hi,
I have to analyse a large heap dump file (3.6 GB) from a production environment. However, when I open it in Eclipse MAT, it gives an OutOfMemoryError. I tried increasing the Eclipse workbench Java heap size as well, but it doesn't help. I also tried VisualVM. Can we split the heap dump file into smaller pieces? Or is there any JVM option to set a maximum heap dump file size, so that we collect heap dumps of a reasonable size?
    Thanks,
    Prasad

Hi Prasad,
Have you tried opening it in 64-bit MAT on a 64-bit platform, with a large heap size and the CMS GC policy set in the MemoryAnalyzer.ini file? MAT is a good toolkit for analysing Java heap dump files. If it doesn't work, you can try Memory Dump Diagnostic for Java (MDD4J) in the 64-bit IBM Support Assistant, also with a large heap size.
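For instance, a sketch of launching MAT with those settings from the command line (the 4g value is a guess to be sized to the dump, and the launcher name may differ by platform; the same -vmargs can instead be placed in MemoryAnalyzer.ini):

# Start 64-bit MAT with a larger heap and the CMS collector.
./MemoryAnalyzer -vmargs -Xmx4g -XX:+UseConcMarkSweepGC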

  • AD4J: Unable to start Heap Dump due to OS Error

    When Java 'Heap Dump' is requested, the following message is shown:
    Unable to start Heap Dump due to OS Error.
    Details of the monitored JDK and application server are given below:
    JDK: Sun JDK 1.4.2_08 on Solaris 9
    Application Server: WebLogic 8.1 SP4
    1. What could be the possible cause? No errors are logged in jamserv/logs/error_log. Is there any way to enable detailed logging?
2. Each time a heap dump is requested, a file named heapdump<n> gets created in /tmp (e.g. /tmp/heapdump12.txt). If you look at the file, it contains the following:
    a) a line containing summary of the heap usage, and
    b) stack traces of all the threads
    Thanks!

    Wrong Forum?

• Internal error occurred while parsing heap dump

    Hello,
I have a huge heap dump .hprof file, 800 MB in size, and I tried to open it with Memory Analyzer in Eclipse. After parsing gets to 4%, I get "An internal error occurred: Java heap space". My system has 2 GB of memory. Below is the command I used to launch Eclipse. Can someone help me with this? Many thanks for your time.
    C:\eclipse\eclipse.exe -vmargs -Xms512M -Xmx512M -XX:PermSize=256M -XX:MaxPermSize=256M
    I have JRE 1.6.0_05-b13 installed on the system.
    Thanks,
    Hari

    Hello Hari,
I would recommend that you first try to give more memory to Eclipse and see if this helps. Try 1200m, for example.
I can't give you a precise estimate of how much memory will be needed, as the limiting factor is the number of objects in the heap dump (not the size of the file), and this number varies from case to case. In 800 MB you may have only a few huge objects, but it may also happen that there are more than 20,000,000 very small objects.
So please try with more memory, and let me know if you still encounter the problem.
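Concretely, that would be the same command as above with the larger values:

C:\eclipse\eclipse.exe -vmargs -Xms1200M -Xmx1200M -XX:PermSize=256M -XX:MaxPermSize=256M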
    Regards,
    Krum

  • Full heap dump - weblogic 8.1

    Hi to everyone,
Is there any way to take a full heap dump (core dump) in WebLogic 8.1 manually?
    Best regards,
    Peter

In JDK6 there are the jhat and jmap utilities, which are described here:
http://weblogs.java.net/blog/kellyohair/archive/2005/09/heap_dump_snaps.html
Prior to that (JDK5 and earlier), you have to use the HAT utility, which can be found at
http://hat.dev.java.net/
If you are using JRockit, you can use Mission Control for this, I believe. There's an intro to that tool at
http://dev2dev.bea.com/pub/a/2005/12/jrockit-mission-control.html?page=1
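If you are on JDK5, jmap can already produce a binary heap dump, just with an older syntax; a sketch (as I recall, the output is written to heap.bin in the current directory):

# JDK 5 syntax; the resulting heap.bin can be fed to jhat later.
jmap -heap:format=b <pid>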

  • ERROR while analysing 2.13 GB heap dump using HAT and JHAT.

I am trying to analyse a 2.13 GB heap dump taken from a Java 1.4 "IBM HotSpot" JVM running on Solaris 8, but I get an error: java.io.IOException: Bad record length of -2021068228 at byte 0x00da146e. Is there any way I can read this dump to analyse the objects?

    I have no idea what "IBM HotSpot" is but it sounds like you are running this bug:
    http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6614052
    Grab the latest build of jdk6u update 10 or jdk7 and use that to examine the dump file.
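Since the dump is over 2 GB, jhat's own JVM will also need a large heap; a sketch, assuming a 64-bit JDK 6 install (the path and size are placeholders):

# Pass a bigger heap to jhat's runtime via -J, then browse port 7000.
/path/to/jdk6/bin/jhat -J-mx4g heap.hprof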
