Thread Dump vs Heap Dump

Hi,
The questions below are related to J2EE 7.00.
- What is the difference between a Thread dump and a Heap dump?
- When I go to the Config Tool, I go to the Instance ID.
There are two nodes under the Instance ID: Dispatcher and Server (there could be multiple servers).
I see different tabs on the Instance ID, namely Message Server and Bootstrap, and Server General. There are lots of settings under the Server General tab.
My question is: what is the usage of the "Server General" tab?
Please let me know.
I would appreciate your reply.
Regards.
Sume

A Java engine runs all its work in 'threads', much like the work processes (WPs) in SM50, but unfortunately they are not as self-contained in 7.0.
A thread dump is a bit like taking a photo of SM50; that is why you will be asked to take 2 or 3, so you can see what stays stuck in the threads for a while.
A Java engine uses memory to store all its objects. Most of that memory is a single area called the heap. A heap dump lists all objects in memory at that point in time.
Thread dump = activities working/waiting
Heap dump = objects used
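As a small illustration of the "take 2 or 3" advice, here is one way to snap several thread dumps a few seconds apart on a Unix host (a minimal sketch; the pgrep pattern is only an example of finding the server's Java PID, and the dumps end up in the process's stdout log):

# Hedged sketch: take three thread dumps ~10 seconds apart so you can see which threads stay stuck.
# The pgrep pattern is just an example of locating the AS Java server process; adjust it to your system.
PID=$(pgrep -f 'jlaunch.*server0' | head -1)
for i in 1 2 3
do
    kill -3 "$PID"               # non-destructive: the JVM keeps running and appends the dump to its stdout log
    echo "thread dump $i requested"
    sleep 10
done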
As for the configtool, I do not have one in front of me to double-check, but I believe you are looking at the general settings as opposed to the local instance settings. So you can set parameters at the local level for a specific instance number, or for all instances by setting them as a general setting. Be aware that quite a few settings that can be set at the local level are in practice set globally anyway.

Similar Messages

  • Difference between heap dump/core dump/thread dump?

    What's the difference between a heap dump, a core dump and a thread dump?

    To be very precise: a core dump is generated when your JVM is shut down abnormally; it contains the recorded state of the program's working memory at a specific point in time, along with some platform-specific details.
    A heap dump can be generated to see the live objects at a specific point in time.
    A thread dump can be generated to see the threads present (running, blocked, and so on) in a live JVM at a specific point in time.
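    To make the distinction concrete, here is roughly how each type of dump is obtained on a Unix JVM (a hedged sketch; the exact commands and flags differ by JDK vendor and version, and the pgrep line is only an example of finding the PID):

    # Hedged sketch: one typical command per dump type; adjust for your JDK version/vendor.
    PID=$(pgrep -f java | head -1)                      # example only: pick the JVM's process id
    kill -3 "$PID"                                      # thread dump: all thread stacks, JVM keeps running
    jmap -dump:format=b,file=/tmp/heap.hprof "$PID"     # heap dump (Sun/Oracle JDK 6+ jmap syntax; older versions differ)
    gcore -o /tmp/core "$PID"                           # OS core dump: the whole process image, taken with a platform tool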

  • Heap Dump file generation problem

    Hi,
    I've configured configtool to have these 2 parameters:
    -XX:+HeapDumpOnOutOfMemoryError
    -XX:+HeapDumpOnCtrlBreak
    In my understanding, with these 2 parameters the heap dump files will only be generated in 2 situations, i.e. when an out-of-memory error occurs, or when a user manually presses CTRL + BREAK in the MMC.
    1) Unfortunately, there are many heap dump files (9 in total) generated when neither of the above situations occurred. I couldn't find "OutOfMemoryError" in the default trace, nor is the shallow heap size of those heap dump files anywhere near the memory limit. As a consequence, our server ran out of disk space.
    My question is: under what other circumstances will a heap dump file be generated?
    2) In the Memory Consumption graph (NWA (http://host:port/nwa) -> System Management -> Monitoring -> Java Systems Reports), the out-of-memory error occurred when memory usage was reaching about 80% of the allocated memory. What are the remaining 20% or so reserved for?
    Any help would be much appreciated.
    Thanks.

    Hi,
    Having the -XX:+HeapDumpOnCtrlBreak option makes the VM trigger a heap dump whenever a CTRL_BREAK event appears. The same event is also used to trigger a thread dump, an action you can perform manually from the SAP Management Console (I think it is called "Dump stacks"). So if someone was triggering thread dumps to analyze other types of problems, this has the side effect of also writing a heap dump.
    Additionally, the server itself may trigger a thread dump (and thereby also a heap dump if the option is present). It does this for example when a timeout appears during the start or stop of the server. A thread dump from such a moment allows us to see, for example, which service is unable to start.
    Therefore, I would recommend that you leave only the -XX:+HeapDumpOnOutOfMemoryError option, as long as you don't plan to trigger any heap dumps on your own. That option will cause the VM to write a heap dump only once - on the first appearance of an OutOfMemoryError.
    In case you need to trigger the heap dumps manually, leave the -XX:+HeapDumpOnCtrlBreak option for the moment of troubleshooting, but consider if you want to keep it afterwards.
    If heap dumps were written because of an OutOfMemoryError, you should be able to see this in the dev_server file in /usr/sap/<SID>/<inst>/work/. There you should also be able to see whether thread dumps were indeed triggered (just search for "Full Thread").
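    A quick way to check both points from the shell (a hedged sketch; the <SID>/<inst> placeholders are the same as in the path above):

    # Hedged sketch: look in the work directory for OOM-triggered heap dumps and for thread-dump markers.
    cd /usr/sap/<SID>/<inst>/work
    grep -l "OutOfMemoryError" dev_server*    # which dev_server files recorded an OutOfMemoryError
    grep -c "Full Thread" dev_server*         # how many thread dumps (CTRL_BREAK events) were triggered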
    I hope this helps.
    Regards,
    Krum

  • Java 1.4.2. - full heap dump?

    Hello,
    Is there any way to generate a full heap dump (core dump) in Java 1.4.2 on demand?
    Best regards,
    Peter

    If you are on a Unix platform, you can use this script to take thread dumps.
    I am not sure whether you can generate a core dump without an application crash.
    kill -3 <java pid> will write the full stack traces of all threads in the Java process (a thread dump).
    Note: kill -3 will not terminate the Java process; it only writes the full stack traces to your log file and is safe to use while the Java process is running.
    You can get the Java process id with the Unix command "ps -ef | grep java".
    #!/bin/ksh
    # Take two snapshots of a Java process: per-thread CPU usage (prstat), native stacks (pstack)
    # and a JVM thread dump (kill -3). Solaris tools; pass the Java PID as the only argument.
    [ $# -le 0 ] && echo "USAGE: $0 <pid>" && exit 1
    for i in 1 2
    do
        DT=`date +%Y%m%d_%H%M`
        prstat -Lmp $1 1 1 >> prstat_Lmp_${i}_${DT}.dmp   # one sample of per-LWP CPU usage
        pstack $1 >> pstack_${i}_${DT}.dmp                # native stack traces
        kill -3 $1                                        # JVM thread dump goes to the Java process's stdout/log
        echo "prstat, pstack, and thread dump done. #" $i
        sleep 1
        echo "Done sleeping."
    done
    Please go through some of these links; they explain how to debug the issue using the logs generated by the script:
    http://support.bea.com/application_content/product_portlets/support_patterns/wls/UnexpectedHighCPUUsageWithWLSPattern.html
    http://www.unixville.com/~moazam/stories/2004/05/18/debuggingHangsInTheJvm.html

  • AD4J: Unable to start Heap Dump due to OS Error

    When Java 'Heap Dump' is requested, the following message is shown:
    Unable to start Heap Dump due to OS Error.
    Details of the monitored JDK and application server are given below:
    JDK: Sun JDK 1.4.2_08 on Solaris 9
    Application Server: WebLogic 8.1 SP4
    1. What could be the possible cause? No errors are logged in jamserv/logs/error_log. Is there any way to enable detailed logging?
    2. Each time a heap dump is requested, a file of the form heapdump<n> gets created in /tmp (e.g. /tmp/heapdump12.txt). If you look at the file, it contains the following:
    a) a line containing summary of the heap usage, and
    b) stack traces of all the threads
    Thanks!

    Wrong Forum?

  • Stack Traces and Heap Dumps on JDK 1.4.2_12

    Hi all,
    Right now I have the following arguments for my JVM container
    BROKER_JVM_ARGS="-Xms512m -Xmx1024m -XX:+HeapDumpOnOutOfMemoryError -XX:+HeapDumpOnCtrlBreak"
    So on a kill -3 I get the heap dumps. I also want to get stack traces. The process runs as a daemon; is there any way I can do this?
    PS this is on a RHEL 3 box running Sun Java 1.4.2_12
    Thank you.

    kill -3 triggers a thread dump to the console.
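    In other words, for a daemon the "console" is wherever stdout/stderr were redirected when the process was started, so capture them in a file (a hedged sketch; broker.jar and the log/pid file names are hypothetical):

    # Hedged sketch: start the daemon with stdout/stderr captured, then kill -3 writes the
    # thread dump into that file. broker.jar and the paths below are placeholders.
    nohup java $BROKER_JVM_ARGS -jar broker.jar > /var/log/broker_console.log 2>&1 &
    echo $! > /var/run/broker.pid
    kill -3 "$(cat /var/run/broker.pid)"                   # thread dump appended to broker_console.log
    grep -n "Full thread dump" /var/log/broker_console.log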

  • Heap dump file size vs heap size

    Hi,
    I'd like to clarify my doubts.
    At the moment we're analyzing Sun JVM heap dumps from a Solaris platform.
    Our observation is that the heap dump file is around 1.1 GB, while after loading it into SAP Memory Analyzer the statistics show "Heap: 193,656,968", which as I understand it is the size of the heap.
    After I run:
    jmap -heap <PID>
    I get the following information:
    using thread-local object allocation
    Parallel GC with 8 thread(s)
    Heap Configuration:
       MinHeapFreeRatio = 40
       MaxHeapFreeRatio = 70
       MaxHeapSize      = 3221225472 (3072.0MB)
       NewSize          = 2228224 (2.125MB)
       MaxNewSize       = 4294901760 (4095.9375MB)
       OldSize          = 1441792 (1.375MB)
       NewRatio         = 2
       SurvivorRatio    = 32
       PermSize         = 16777216 (16.0MB)
       MaxPermSize      = 67108864 (64.0MB)
    Heap Usage:
    PS Young Generation
    Eden Space:
       capacity = 288620544 (275.25MB)
       used     = 26593352 (25.36139678955078MB)
       free     = 262027192 (249.88860321044922MB)
       9.213949787302736% used
    From Space:
       capacity = 2555904 (2.4375MB)
       used     = 467176 (0.44553375244140625MB)
       free     = 2088728 (1.9919662475585938MB)
       18.27830779246795% used
    To Space:
       capacity = 2490368 (2.375MB)
       used     = 0 (0.0MB)
       free     = 2490368 (2.375MB)
       0.0% used
    PS Old Generation
       capacity = 1568669696 (1496.0MB)
       used     = 1101274224 (1050.2569427490234MB)
       free     = 467395472 (445.74305725097656MB)
       70.20434109284916% used
    PS Perm Generation
       capacity = 67108864 (64.0MB)
       used     = 40103200 (38.245391845703125MB)
       free     = 27005664 (25.754608154296875MB)
       59.75842475891113% used
    So I'm just wondering what this "Heap" value in the Statistic Information field of SAP Memory Analyzer actually is.
    When I go to the Dominator Tree view and look at the Retained Heap column, the values roughly sum up to 193,656,968.
    Could someone shed some more light on this?
    thanks
    Michal

    Hi Michal,
    that looks indeed very odd. First, let me ask: which version are you using? We had a problem in the past where classes loaded by the system class loader were not marked as garbage collection roots and hence were removed. This problem is fixed in the current version (1.1). If it is version 1.1, then I would love to have a look at the heap dump and find out whether it is our fault.
    Having said that, this is what we do: after parsing the heap dump, we remove objects which are not reachable from garbage collection roots. This is necessary because the heap dump can contain garbage. For example, the mark-sweep-compact of the old/perm generation leaves some dead space in the form of int arrays or java.lang.Object instances to save time during the compacting phase: by leaving dead objects behind, not every live object has to be moved, which means not every object needs a new address. This is the kind of garbage we remove.
    Of course, we do not remove objects kept alive only by weak or soft references. To see what memory is kept alive only through weak or soft references, one can run the "Soft Reference Statistics" from the menu.
    Kind regards,
       - Andreas.

  • JVMDG315: JVM Requesting Heap dump file

    Hi experts,
    We are using a 10.2.04 DB with 11.5.10.2 on AIX 64-bit.
    While running an audit report that generates PDF output, we got the following error:
    JVMDG217: Dump Handler is Processing OutOfMemory - Please Wait.
    JVMDG315: JVM Requesting Heap dump file
    ..............................JVMDG318: Heap dump file written to /d04_testdbx/appltop/testcomn/admin/log/TEST_tajorn3/heapdump7545250.1301289300.phd
    JVMDG303: JVM Requesting Java core file
    JVMDG304: Java core file written to /d04_testdbx/appltop/testcomn/admin/log/TEST_tajorn3/javacore7545250.1301289344.txt
    JVMDG274: Dump Handler has Processed OutOfMemory.
    JVMST109: Insufficient space in Javaheap to satisfy allocation request
    Exception in thread "main" java.lang.OutOfMemoryError
         at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java(Compiled Code))
         at java.io.OutputStream.write(OutputStream.java(Compiled Code))
         at oracle.apps.xdo.common.log.LogOutputStream.write(LogOutputStream.java(Compiled Code))
         at oracle.apps.xdo.generator.pdf.PDFStream.output(PDFStream.java(Compiled Code))
         at oracle.apps.xdo.generator.pdf.PDFGenerator$ObjectManager.write(PDFGenerator.java(Compiled Code))
         at oracle.apps.xdo.generator.pdf.PDFGenerator$ObjectManager.writeWritable(PDFGenerator.java(Inlined Compiled Code))
         at oracle.apps.xdo.generator.pdf.PDFGenerator.closePage(PDFGenerator.java(Compiled Code))
         at oracle.apps.xdo.generator.pdf.PDFGenerator.newPage(PDFGenerator.java(Compiled Code))
         at oracle.apps.xdo.generator.ProxyGenerator.genNewPageWithSize(ProxyGenerator.java(Compiled Code))
         at oracle.apps.xdo.generator.ProxyGenerator.processCommand(ProxyGenerator.java(Compiled Code))
         at oracle.apps.xdo.generator.ProxyGenerator.processCommandsFromDataStream(ProxyGenerator.java(Compiled Code))
         at oracle.apps.xdo.generator.ProxyGenerator.close(ProxyGenerator.java:1230)
         at oracle.apps.xdo.template.fo.FOProcessingEngine.process(FOProcessingEngine.java:336)
         at oracle.apps.xdo.template.FOProcessor.generate(FOProcessor.java:1043)
         at oracle.apps.xdo.oa.schema.server.TemplateHelper.runProcessTemplate(TemplateHelper.java:5888)
         at oracle.apps.xdo.oa.schema.server.TemplateHelper.processTemplate(TemplateHelper.java:3593)
         at oracle.apps.pay.pdfgen.server.PayPDFGen.processForm(PayPDFGen.java:243)
         at oracle.apps.pay.pdfgen.server.PayPDFGen.runProgram(PayPDFGen.java:795)
         at oracle.apps.fnd.cp.request.Run.main(Run.java:161)
    oracle.apps.pay.pdfgen.server.PayPDFGen
    Program exited with status 1
    There are no errors in the DB alert log.
    Please suggest where the problem is.
    Thanks in advance.

    Hello,
    * JVMST109: Insufficient space in Javaheap to satisfy allocation request
    Exception in thread "main" java.lang.OutOfMemoryError
    This is your problem. There is not enough memory. Increase the Java heap settings.
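    As a generic illustration only (where these options are set for an EBS concurrent Java program depends on the program definition, and the sizes below are placeholders, not recommendations), increasing the heap means passing larger -Xms/-Xmx values to the JVM that runs the report:

    # Hedged sketch: generic JVM heap options; the main class is taken from the stack trace above,
    # REPORT_ARGS and the sizes are placeholders.
    java -Xms512m -Xmx1024m oracle.apps.fnd.cp.request.Run $REPORT_ARGS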
    Regards,
    Shomoos

  • Frequent heap dumps and system load average

    Dear Experts,
    Performance of our development system has become very poor. There have been frequent heap dumps and paging, and the system load average ('/Kernel/SAPJVM/SystemProblemReporting/System Load Average') is over 133%.
    We are using NW731 with EP core.
    The JVM memory parameters are set according to SAP recommendations:
    -Xms4096m
    -Xmx4096m
    -XX:NewSize=1365m
    -XX:MaxNewSize=1365m
    -XX:PermSize=1024m
    -XX:MaxPermSize=1024m
    -XX:NewRatio=6
    SCS, application and database are on a single host.
    Number of nodes = 2.
    Physical memory size is 15 GB.
    Appreciate your assistance in analyzing and fixing the performance issue.
    Thanks,
    Kumar

    Hi Divyanshu,
    I was away on another project, configuring a change control tool.
    As mentioned earlier I don't have OS access.
    I was able to download thread dumps via NWA as mentioned by David, but I couldn't open them.
    I opened the OOM.hprof file in MAT to check the leak suspects etc. I could hardly understand it, as this is my first Java AS project; here are bits and pieces from the dump analyzer tool.
    Leak Suspects:
    Problem Suspect 1:
    The thread com.sap.engine.core.thread.execution.CentralExecutor$SingleThread @ 0x58dcd248 HTTP Worker [@662359980] keeps local variables with total size 3,668,454,240 (89.70%) bytes.
    The memory is accumulated in one instance of "java.lang.Object[]" loaded by "<system class loader>".
    The stacktrace of this Thread is available. See stacktrace.
    And here is the stacktrace...
    HTTP Worker [@662359980]
      at java.lang.OutOfMemoryError.<init>()V (OutOfMemoryError.java:25)
      at java.lang.reflect.Array.newInstance(Ljava/lang/Class;I)Ljava/lang/Object; (Array.java:52)
      at oracle.jdbc.driver.BufferCache.get(Ljava/lang/Class;I)Ljava/lang/Object; (BufferCache.java:226)
      at oracle.jdbc.driver.PhysicalConnection.getByteBuffer(I)[B (PhysicalConnection.java:7665)
      at oracle.jdbc.driver.T4C8TTIClob.read([BJJZ[CI)J (T4C8TTIClob.java:226)
      at oracle.jdbc.driver.T4CConnection.getChars(Loracle/sql/CLOB;JI[C)I (T4CConnection.java:3211)
      at oracle.sql.CLOB.getChars(JI[C)I (CLOB.java:459)
      at oracle.jdbc.driver.OracleClobReader.needChars(I)Z (OracleClobReader.java:187)
      at oracle.jdbc.driver.OracleClobReader.read([CII)I (OracleClobReader.java:142)
      at java.io.Reader.read([C)I (Reader.java:123)
      at com.sap.sql.jdbc.common.CommonWrappedReader.read([C)I (CommonWrappedReader.java:108)
      at com.technidata.core.persistency.sql.QueryResult.getString(Ljava/lang/String;)Ljava/lang/String; (QueryResult.java:190)
      at com.technidata.em.facility.controller.PC_EmissionRes.getEmissionResBo(Lde/technidata/ecs/persist/sql/persistency/iface/IQueryResult;)Lcom/technidata/em/facility/detail/EmissionResBo; (PC_EmissionRes.java:2235)
      at com.technidata.em.facility.controller.PC_EmissionRes.readData(JJJJLjava/sql/Date;Ljava/sql/Date;Ljava/lang/String;ZJI)Ljava/util/List; (PC_EmissionRes.java:682)
      at com.technidata.em.facility.controller.PC_EmissionRes.readData(JJJLjava/sql/Date;Ljava/sql/Date;ZI)Ljava/util/List; (PC_EmissionRes.java:599)
      at com.technidata.em.facility.controller.PC_EmissionRes.readData(Lcom/technidata/em/facility/TypeCapArgument;)Ljava/util/List; (PC_EmissionRes.java:573)
      at com.technidata.em.facility.ESourceBo.getTypeCapData(Ljava/lang/String;Lcom/technidata/em/facility/TypeCapArgument;Ljava/lang/String;Ljava/lang/String;)Ljava/util/List; (ESourceBo.java:707)
      at com.technidata.em.facility.mgmt.EmissionResTable.initialize(Z)V (EmissionResTable.java:1793)
      at com.technidata.em.facility.mgmt.EmissionResTable.init(Lcom/sap/tc/webdynpro/progmodel/api/IWDCustomEvent;Ljava/lang/String;)V (EmissionResTable.java:1454)
      at com.technidata.em.facility.mgmt.wdp.InternalEmissionResTable.wdInvokeEventHandler(Ljava/lang/String;Lcom/sap/tc/webdynpro/progmodel/api/IWDCustomEvent;)Ljava/lang/Object; (InternalEmissionResTable.java:723)
      at com.sap.tc.webdynpro.progmodel.generation.DelegatingView.invokeEventHandler(Ljava/lang/String;Lcom/sap/tc/webdynpro/progmodel/api/IWDCustomEvent;)Ljava/lang/Object; (DelegatingView.java:142)
      at com.sap.tc.webdynpro.progmodel.components.Component.fireEvent(Ljava/lang/String;Ljava/lang/String;Ljava/util/Map;)V (Component.java:492)
      at com.technidata.em.facility.mgmt.wdp.InternalTabsInterface.wdFireEventInitTab(Ljava/lang/String;)V (InternalTabsInterface.java:398)
      at com.technidata.em.facility.mgmt.EmissionResWinInterfaceView.onPlugIn(Lcom/sap/tc/webdynpro/progmodel/api/IWDCustomEvent;)V (EmissionResWinInterfaceView.java:97)
      at com.technidata.em.facility.mgmt.wdp.InternalEmissionResWinInterfaceView.wdInvokeEventHandler(Ljava/lang/String;Lcom/sap/tc/webdynpro/progmodel/api/IWDCustomEvent;)Ljava/lang/Object; (InternalEmissionResWinInterfaceView.java:95)
      at com.sap.tc.webdynpro.progmodel.generation.DelegatingWindow.invokeEventHandler(Ljava/lang/String;Lcom/sap/tc/webdynpro/progmodel/api/IWDCustomEvent;)Ljava/lang/Object; (DelegatingWindow.java:121)
    Please help me fix the performance issue, and let me know if I can provide more details to help you analyze further.
    Thanks,
    Kumar

  • Compare Heap Dumps not working

    Dear All,
    I am trying to compare two heap dumps to find the differences. I have read the tutorial. I have two heap dumps open and click the Delta button, but all I get is a message saying something like there are not two dumps open on the desktop. Can anybody help? Thanks a lot, Anthony

    Hi Scott,
    thanks for finding this bug... It looks like we do not pick up dumps that have been opened via "Open File...". I will have a look at it.
    The context menu does not work on purpose. The problem is this: in the heap dump we only have the object address. But as the garbage collector is moving objects around, this address changes even as the object stays the same. We cannot know if the object was moved or if a new one was created.
    So one has to compare the heap dumps structurally. We do this currently for the histogram on class level and on class loader level (by the extracted class loader name, if a name resolver is present). For objects we do nothing at the moment. I haven't really given much thought to that, but it should be possible to compare the graph/dominator tree once the user has picked two comparable objects (which he can, because he knows the semantics).
    Regarding open sourcing: I just spent the last 3 hours hunting a tiny little bug that was introduced when removing some SAP specifics which we do not open source. Soon, maybe by the end of next week, we should have something there.
    New features include automatic leak detection, e.g. we create an HTML report with the suspected leaks. The handling of local variables, i.e. stuff kept alive by a thread, is also much easier. Ah, and we have some pie charts now.
    Andreas.

  • JMap heap dump against gcore file

    Hi,
    We are running Java 1.5.0_06 on Solaris 9 on Sun v245, and are trying to analyze some memory issues we have been experiencing.
    We have used the Solaris gcore utility to take a complete core dump of the process. This dump file is ~1.8 GB.
    We are now trying to create a heap dump from the core file using JMAP, so that we can use tools such as the Eclipse Memory Analyser (http://www.eclipse.org/mat) to examine our heap for memory issues.
    However, we are having issues creating the heap dump, as JMAP appears to be running out of system memory (swap). The command we are running is "jmap -heap:format=b /usr/bin/java ms04Core.29918". We can run some other JMAP commands, such as "jmap -histo" against the core without encountering these issues. The server we are running on has 8Gb physical memory, but JMAP seems to crash when swap size reaches ~3.8Gb (according to Solaris PRSTAT command). The error returned by JMAP is -
    Exception in thread "main" java.lang.OutOfMemoryError: requested 8192 bytes for jbyte in /BUILD_AREA/jdk1.5.0_06/hotspot/src/share/vm/prims/jni.cpp. Out of swap space?
    Would anyone have any ideas as to why we are seeing these issues?
    Also, can anyone comment on if this approach makes sense as a way to analyze memory issues? Other suggestions would be welcome!
    Thanks very much,
    Adrian

    Hi, we have solved this issue now - apparently this is a memory bug with the 1.5.0_06 version of JMAP. We installed a higher version JRE (5u15) and were able to complete the heap dump without issues.
    Cheers,
    Adrian
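    For reference, the end-to-end workflow that worked in the end looks roughly like this (a hedged sketch; the PID and core file name are taken from the post above, and the exact output file name depends on the jmap version):

    # Hedged sketch: OS core dump -> binary heap dump -> Eclipse MAT, using a 5u15-or-later jmap.
    gcore -o ms04Core 29918                              # Solaris core of the running JVM (PID as in the core file name above)
    jmap -heap:format=b /usr/bin/java ms04Core.29918     # JDK 5 jmap syntax; writes a binary heap dump (typically heap.bin) in the current directory
    # then open the resulting heap dump file in the Eclipse Memory Analyzer (http://www.eclipse.org/mat)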

  • Heap dump file - Generate to a different folder

    Hello,
    When the AS Java is generating the heap dump file, is it possible to generate it to a different folder rather than the standard one: /usr/sap// ?
    Best regards,
    Gonçalo  Mouro Vaz

    Hello Gonçalo
    I don't think this is possible.
    As per SAP Note 1004255: on the first occurrence (only) of an OutOfMemoryError, the JVM will write a heap dump in the /usr/sap/ directory.
    Can I ask why you would like it in a different folder?
    Is it a space issue?
    Thanks
    Kenny

  • Problems with creation of an HEAP Dump

    Dear all,
    I have tried to create a heap dump (CtrlBreak) with the Java-based SAPMC console (Process Table => server<no.> => DumpStack), but I get no dump on the operating system (/usr/sap/<SID>/JC01/j2ee/cluster/server0/). I have set the parameters -XX:HeapDumpOnOutOfMemoryError -XX:HeapDumpOnCtrlBreak in the VM. Our Java version is 1.4.12, the OS is Sun Solaris.
    Does anyone know why it does not work in our installation?
    Many thanks in advance.
    Patrick

    Hi,
    The Java-based MC should provide the same functionality as the MMC. I tried today to trigger a heap dump as you described, and it worked.
    As the "Dump Stack" action is a protected one, you should be asked for a user and password. Is that the case for you as well?
    And I have found several notes describing problems with the authentication of the sapstartsrv user on different unix platforms. As there is no action at all logged in your case, I guess that this could be the problem.
    Here are the notes I found:
    [Note 927637 - Web service authentication in sapstartsrv as of Release 7.00|https://service.sap.com/sap/support/notes/927637]
    [Note 992907 - sapstartsrv user authentication on Solaris|https://service.sap.com/sap/support/notes/992907]
    I hope this helps.
    Have you already tried to perform the action from the MMC?
    Regards,
    Krum

  • How can I get a heap dump for 1.4.2_11 when OutOfMemory occurred

    Hi guys,
    How can I get a heap dump for 1.4.2_11 when an OutOfMemoryError occurs, since it has no options like -XX:+HeapDumpOnOutOfMemoryError and -XX:+HeapDumpOnCtrlBreak?
    We are running WebLogic 8.1 SP3 applications on this Sun 1.4.2_11 JVM and it is throwing OutOfMemoryError, but we cannot find a heap dump. The application runs as a service on Windows Server 2003. How can I do some more analysis of this issue?
    Thanks.

    The HeapDumpOnOutOfMemoryError option was added to 1.4.2 in update 12. Further work to support all collectors was done in update 15.
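    Once the JVM is at 1.4.2_12 or later, the flag is passed like any other JVM option (a hedged sketch; how the option reaches a Windows service depends on the service wrapper used to install WebLogic, and the heap sizes are examples only):

    # Hedged sketch: generic form of enabling a heap dump on OOM for a WebLogic server JVM.
    java -Xms512m -Xmx1024m -XX:+HeapDumpOnOutOfMemoryError weblogic.Server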

  • JVMPI_GC_ROOT_MONITOR_USED - what does this mean in a heap dump?

    I'm having some OutOfMemory errors in my application, so I turned on a profiler, and took a heap dump before and after an operation that is blowing up the memory.
    What changes after the operation is that I get an enormous amount of data that is reported under the node JVMPI_GC_ROOT_MONITOR_USED. This includes some Oracle PreparedStatements which are holding a lot of data.
    I tried researching the meaning of JVMPI_GC_ROOT_MONITOR_USED, but found little help. Should this be objects that are ready for garbage collection? If so, they are not being garbage collected, but I'm getting OutOfMemoryError instead (I thought the JVM was supposed to guarantee GC would be run before OutOfMemory occurred).
    Any help on how to interpret what it means for objects to be reported under JVMPI_GC_ROOT_MONITOR_USED, and any ways to eliminate those objects, will be greatly appreciated!
    Thanks

    I tried researching the meaning of JVMPI_GC_ROOT_MONITOR_USED, but found little help. Should this be objects that are ready for garbage collection?
    Disclaimer: I haven't written code to use JVMPI, so anything here is speculation.
    However, after reading this: http://java.sun.com/j2se/1.4.2/docs/guide/jvmpi/jvmpi.html
    It appears that the "ROOT" flags in a level-2 dump are used with objects that are considered a "root reference" for GC (those references that are undeniably alive). Most descriptions of "roots" are static class members and variables in a stack frame. My interpretation of this doc is that objects used in a synchronized statement are also considered roots, at least for the life of the synchronized block (which makes a lot of sense when you think about it).
