JVM heap dump options
We have a memory leak occurring on our appserver (glassfish) and are having trouble pinpointing its cause, so I want to turn on some of the JVM heap dump options. I have turned on the following two:
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/heap_dump.hprof
Are there any other options I can use to help resolve this? Also, what will the dump output contain? Plain text?
Thanks
A profiler comes to mind. But how do you know it is an actual memory leak? Are you getting out-of-memory exceptions?
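In addition to the startup flags, on HotSpot VMs (roughly Java 6 and later) a heap dump can be triggered programmatically at any moment through the HotSpotDiagnosticMXBean, which is handy when you want to capture before/after snapshots around a suspect operation. The sketch below is illustrative; the class name and output path are made up:

```java
import java.io.File;
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {
    // Trigger a heap dump at a moment of your choosing, without waiting for an OOME.
    public static void dump(String path, boolean liveOnly) throws Exception {
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // liveOnly=true runs a full GC first, so only reachable objects are dumped
        bean.dumpHeap(path, liveOnly);
    }

    public static void main(String[] args) throws Exception {
        File out = new File("manual_dump.hprof");
        out.delete();                // dumpHeap fails if the file already exists
        dump(out.getPath(), true);
        System.out.println("dump written: " + (out.length() > 0));
    }
}
```

As to the output format: the .hprof file is a binary dump, not plain text; you open it with a tool such as jhat, VisualVM, or Eclipse MAT.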
Similar Messages
-
Heap dump file size vs heap size
Hi,
I'd like to clarify my doubts.
At the moment we're analyzing Sun JVM heap dumps from Solaris platform.
The observation is that the heap dump file is around 1.1 GB, while after loading it into SAP Memory Analyzer it displays the statistic "Heap: 193,656,968", which, as I understand it, is the size of the heap.
After I run:
jmap -heap <PID>
I get the following information:
using thread-local object allocation
Parallel GC with 8 thread(s)
Heap Configuration:
MinHeapFreeRatio = 40
MaxHeapFreeRatio = 70
MaxHeapSize = 3221225472 (3072.0MB)
NewSize = 2228224 (2.125MB)
MaxNewSize = 4294901760 (4095.9375MB)
OldSize = 1441792 (1.375MB)
NewRatio = 2
SurvivorRatio = 32
PermSize = 16777216 (16.0MB)
MaxPermSize = 67108864 (64.0MB)
Heap Usage:
PS Young Generation
Eden Space:
capacity = 288620544 (275.25MB)
used = 26593352 (25.36139678955078MB)
free = 262027192 (249.88860321044922MB)
9.213949787302736% used
From Space:
capacity = 2555904 (2.4375MB)
used = 467176 (0.44553375244140625MB)
free = 2088728 (1.9919662475585938MB)
18.27830779246795% used
To Space:
capacity = 2490368 (2.375MB)
used = 0 (0.0MB)
free = 2490368 (2.375MB)
0.0% used
PS Old Generation
capacity = 1568669696 (1496.0MB)
used = 1101274224 (1050.2569427490234MB)
free = 467395472 (445.74305725097656MB)
70.20434109284916% used
PS Perm Generation
capacity = 67108864 (64.0MB)
used = 40103200 (38.245391845703125MB)
free = 27005664 (25.754608154296875MB)
59.75842475891113% used
So I'm just wondering what this "Heap" in the Statistic Information field visible in SAP Memory Analyzer actually is.
When I go to Dominator Tree view, I look at Retained Heap column and I see that they roughly sum up to 193,656,968.
Could someone put some more light on it?
thanks
Michal
Hi Michal,
that looks indeed very odd. First, let me ask which version do you use? We had a problem in the past where classes loaded by the system class loader were not marked as garbage collection roots and hence were removed. This problem is fixed in the current version (1.1). If it is version 1.1, then I would love to have a look at the heap dump and find out if it is us.
Having said that, this is what we do: After parsing the heap dump, we remove objects which are not reachable from garbage collection roots. This is necessary, because the heap dump can contain garbage. For example, the mark-sweep-compact of the old/perm generation leaves some dead space in the form of int arrays or java.lang.Object to win time during the compacting phase: by leaving behind dead objects, not every live object has to be moved which means not every object needs a new address. This is the kind of garbage we remove.
Of course, we do not remove objects kept alive only by weak or soft references. To see what memory is kept alive only through weak or soft references, one can run the "Soft Reference Statistics" from the menu.
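A toy sketch of that weak-reference behaviour (class name illustrative; note that System.gc() is only a hint, so clearing is likely but not guaranteed by the spec):

```java
import java.lang.ref.WeakReference;

public class WeakDemo {
    public static void main(String[] args) {
        WeakReference<Object> ref = new WeakReference<>(new Object());
        // Nothing but the weak reference points at the object any more,
        // so a collection is free to clear it (it usually does).
        System.gc();
        System.out.println("cleared: " + (ref.get() == null));
    }
}
```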
Kind regards,
- Andreas.
Edited by: Andreas Buchen on Feb 14, 2008 6:23 PM -
JVMDG315: JVM Requesting Heap dump file
Hi expert,
Using 10.2.04 Db with 11.5.10.2 on AIX 64 bit.
While running an audit report that generates PDF output, it failed with an error:
JVMDG217: Dump Handler is Processing OutOfMemory - Please Wait.
JVMDG315: JVM Requesting Heap dump file
..............................JVMDG318: Heap dump file written to /d04_testdbx/appltop/testcomn/admin/log/TEST_tajorn3/heapdump7545250.1301289300.phd
JVMDG303: JVM Requesting Java core file
JVMDG304: Java core file written to /d04_testdbx/appltop/testcomn/admin/log/TEST_tajorn3/javacore7545250.1301289344.txt
JVMDG274: Dump Handler has Processed OutOfMemory.
JVMST109: Insufficient space in Javaheap to satisfy allocation request
Exception in thread "main" java.lang.OutOfMemoryError
at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java(Compiled Code))
at java.io.OutputStream.write(OutputStream.java(Compiled Code))
at oracle.apps.xdo.common.log.LogOutputStream.write(LogOutputStream.java(Compiled Code))
at oracle.apps.xdo.generator.pdf.PDFStream.output(PDFStream.java(Compiled Code))
at oracle.apps.xdo.generator.pdf.PDFGenerator$ObjectManager.write(PDFGenerator.java(Compiled Code))
at oracle.apps.xdo.generator.pdf.PDFGenerator$ObjectManager.writeWritable(PDFGenerator.java(Inlined Compiled Code))
at oracle.apps.xdo.generator.pdf.PDFGenerator.closePage(PDFGenerator.java(Compiled Code))
at oracle.apps.xdo.generator.pdf.PDFGenerator.newPage(PDFGenerator.java(Compiled Code))
at oracle.apps.xdo.generator.ProxyGenerator.genNewPageWithSize(ProxyGenerator.java(Compiled Code))
at oracle.apps.xdo.generator.ProxyGenerator.processCommand(ProxyGenerator.java(Compiled Code))
at oracle.apps.xdo.generator.ProxyGenerator.processCommandsFromDataStream(ProxyGenerator.java(Compiled Code))
at oracle.apps.xdo.generator.ProxyGenerator.close(ProxyGenerator.java:1230)
at oracle.apps.xdo.template.fo.FOProcessingEngine.process(FOProcessingEngine.java:336)
at oracle.apps.xdo.template.FOProcessor.generate(FOProcessor.java:1043)
at oracle.apps.xdo.oa.schema.server.TemplateHelper.runProcessTemplate(TemplateHelper.java:5888)
at oracle.apps.xdo.oa.schema.server.TemplateHelper.processTemplate(TemplateHelper.java:3593)
at oracle.apps.pay.pdfgen.server.PayPDFGen.processForm(PayPDFGen.java:243)
at oracle.apps.pay.pdfgen.server.PayPDFGen.runProgram(PayPDFGen.java:795)
at oracle.apps.fnd.cp.request.Run.main(Run.java:161)
oracle.apps.pay.pdfgen.server.PayPDFGen
Program exited with status 1
No errors in the DB alert log...
Please suggest where the problem is.
Thanks in advance.
Hello,
* JVMST109: Insufficient space in Javaheap to satisfy allocation request
Exception in thread "main" java.lang.OutOfMemoryError
This is your problem. There is not enough memory. Change the Java heap settings.
Regards,
Shomoos
Edited by: shomoos.aldujaily on Mar 29, 2011 12:39 AM -
Heap dumps on 32bit and 64 bit JVM
We are experiencing memory leak issues with one of our applications deployed on JBoss (Sun JVM 1.5, Win32 OS). The application is memory intensive and already consumes the maximum heap (1.5 GB) allowed for a 32-bit JVM on Win32.
This leaves very little memory for the heap dump, and the JVM crashes with a "malloc error" whenever we try adding the heap dump flag (-agentlib:).
Has anyone faced a scenario like this?
Alternatively, for investigation purposes, we are trying to deploy it on Windows x64, but the vendor advises running only on a 32-bit JVM. Here are my questions:
1) Can we run a 32-bit JVM on Windows x64? Even if we can, can I allocate more than 2 GB of heap memory?
2) I don't see the rationale for why we cannot run on a 64-bit JVM: Java programs are supposed to be platform-independent, and the application in bytecode form should run no matter whether the JVM is 32-bit or 64-bit, shouldn't it?
3) Do we have any better tools (other than HPROF heap dumps) to analyze memory leaks? We tried profiling tools, but they too fail because of the very little memory available.
Any help is really appreciated! :-)
Anush
-
Originally Posted by gleach1
I've done this a few time with ZCM 11.2
Take special note that for each action in a bundle you can set a requirement; this is where you should be specifying the appropriate architecture for the action. It takes a bit longer to do, but it saves on the number of bundles and stops clogging up event logs with junk.
Modifying the existing actions in your bundles would probably be easier and make things cleaner to manage in the long term. This is especially the case when you have software that might differ only in the launch action, depending on which Program Files folder it ends up in.
Sorry to dig up this old topic, but I've created a bundle with an action that needs to be executed on 32- and 64-bit machines, each by a specific executable. That is:
(32 bits) ipxfer -s officescan.server -m 1 -p 8080 -c 63016
(64 bits) ipxfer_x64 -s officescan.server -m 1 -p 8080 -c 63016
So I added these two actions and marked each with the correct architecture requirement ("Architecture / = / 32" or "Architecture / = / 64"), with no fail on prerequisite.
But now, every time one 32 bit machine executes the bundle, it gets an error:
Error starting "ipXfer_x64 -s officescan.server -m 1 -p 8080 -c 63016". Windows Error: This version of %1 is not compatible with the windows version in use. Please verify in your computer's system information if you need a x86 (32 bit) or x64 (64 bit) version of the program and contact software vendor
All the machines' CPU are 64 bits, but some of them have 32-bit Windows 7. These are the ones that are raising these errors.
I have other bundles restricted to 32/64 architectures, working correctly, but the restriction was created on the bundle level, not on the action level.
Sorry for the bad English. -
How can I get a heap dump for 1.4.2_11 when an OutOfMemoryError occurs
Hi guys,
How can I get a heap dump for 1.4.2_11 when an OutOfMemoryError occurs, since it has no options like -XX:+HeapDumpOnOutOfMemoryError and -XX:+HeapDumpOnCtrlBreak?
We are running WebLogic 8.1 SP3 applications on this Sun 1.4.2_11 JVM and it's throwing OutOfMemoryError, but we cannot find a heap dump. The application runs as a service on Windows Server 2003. How can I do more analysis on this issue?
Thanks.
The HeapDumpOnOutOfMemoryError option was added to 1.4.2 in update 12. Further work to support all collectors was done in update 15.
-
Hi,
I have to analyse a large heap dump file (3.6 GB) from a production environment. However, when I open it in Eclipse MAT, it gives an OutOfMemoryError. I tried increasing the Eclipse workbench Java heap size as well, but it doesn't help. I also tried VisualVM. Can we split the heap dump file into smaller pieces? Or is there any way to set a maximum heap dump file size in the JVM options, so that we collect heap dumps of a reasonable size?
Thanks,
Prasad
Hi Prasad,
Have you tried opening it in a 64-bit MAT on a 64-bit platform, with a large heap size and the CMS GC policy set in the MemoryAnalyzer.ini file? MAT is a good toolkit for analysing Java heap dump files. If that doesn't work, you can try Memory Dump Diagnostic for Java (MDD4J) in the 64-bit IBM Support Assistant with a large heap size. -
Why does hprof=heap=dump have so much overhead?
I understand why the HPROF option heap=sites incurs a massive performance overhead; it has to intercept every allocation and record the current call stack.
However, I don't understand why the HPROF option heap=dump incurs so much of a performance overhead. Presumably it could do nothing until invoked, and only then trace from the system roots the entire heap.
Can anyone speak to why it doesn't work that way?
- Gordon @ IA
Traditionally, agents like hprof had to be loaded into the virtual machine at startup, and this was the only way to capture these object allocations. The new hprof in the JDK 5.0 release (Tiger) was written using the newer VM interface JVM TI, and this new hprof was mostly meant to reproduce the functionality of the old hprof from JDK 1.4.2 that used JVMPI. (Just FYI: run 'java -Xrunhprof:help' for help on hprof.)
The JDK 5.0 hprof will at startup, instrument java.lang.Object.<init>() and all classes and methods that use the newarray bytecodes. This instrumentation doesn't take long and is just an initial startup cost, it's the run time and what happens then that is the performance bottleneck. At run time, as any object is allocated, the instrumented methods trigger an extra call into a Java tracker class which in turn makes a JNI call into the hprof agent and native code. At that point, hprof needs to track all the objects that are live (the JVM TI free event tells it when an object is freed), which takes a table inside the hprof agent and memory space. So if the machine you are using is low on RAM, using hprof will cause drastic slowdowns, you might try heap=sites which uses less memory but just tracks allocations based on site of allocation not individual objects.
The more likely run-time performance issue is that at each allocation, hprof wants to get the stack trace; this can be expensive, depending on how many objects are allocated. You could try using depth=0 and see if the stack trace samples are a serious issue for your situation. If you don't need stack traces, then you would be better off looking at the jmap command, which gets you an hprof binary dump on the fly with no overhead; then you can use jhat (or HAT) to browse the heap. This may require the JDK 6 (Mustang) release; see http://mustang.dev.java.net for the free downloads of JDK 6 (Mustang).
There is an RFE for hprof to allow the tracking of allocations to be turned on/off in the Java tracker methods that were injected, at the Java source level. But this would require adding some Java APIs to control sun/tools/hprof/Tracker which is in rt.jar. This is very possible and more with the JVM TI interfaces.
If you haven't tried the NetBeans Profiler (http://www.netbeans.org) you may want to look at it. It does take an incremental approach to instrumentation and tries to focus in on the areas of interest and allows you to limit the overhead of the profiler. It works with the latest JDK 5 (Tiger) update release, see http://java.sun.com/j2se.
Oh yes, also look at some of the JVM TI demos that come with the JDK 5 download. Look in the demo/jvmti directory and try the small agents HeapTracker and HeapViewer, they have much lower overhead and the binaries and all the source is right there for you to just use or modify and customize for yourself.
Hope this helps.
-kto -
Heap dump file - Generate to a different folder
Hello,
When the AS Java is generating the heap dump file, is it possible to generate it to a different folder rather than the standard one: /usr/sap// ?
Best regards,
Gonçalo Mouro Vaz
Hello Gonçalo,
I don't think this is possible.
As per SAP Note 1004255;
On the first occurrence (only) of an OutOfMemoryError the JVM
will write a heap dump in the
/usr/sap/ directory
Can I ask why you would like it in a different folder?
Is it a space issue?
Thanks
Kenny -
Can you alter the JVM heap space size for all Windows users?
Hi,
I have used the "-Xmx" option to increase the Java heap space for my Java application. This works fine, but all other users on my system still have the default heap space setting, and I want them to use an increased heap space as well. I cannot alter the JVM heap space at the command line, since our Java application is started via an ActiveX bridge from a Windows application.
So far I have found one potential, but not really good solution; I have figured out that I can copy the deployment.properties file containing the altered JVM heap setting from the "Application Data\Sun\Java\Deployment" folder in my own Windows "Documents and Settings" folder to the same folder of another user. However, this is not a really good solution since we are running a system with about 60 users and often new user accounts are created and sometimes people forget to copy the deployment.properties file.
Does anyone know a better solution? I've searched the Windows registry and the JRE files for a default JVM heap space setting, but I can't find it anywhere. Still, on some systems the default is 64 MB and on others 96 MB, so I guess there must be a setting somewhere?
The following is my eclipse.ini:
-vmargs
-Xms256m
-Xmx512m
-Dfuego.fstudio=true
-DprodMode=preProduction
-Dbea.home=C:\bea\albpm6.0\studio\..
-Djava.endorsed.dirs=""
-XX:PermSize=256M
-XX:MaxNewSize=256M
-XX:MaxPermSize=1024M
But I think this configuration only affects Eclipse, not the embedded BPM engine. I also changed the engine preference to increase the heap size, but the engine crashes as before.
I want to change the JVM from JDK to JRockit; I would welcome any suggestions.
Heap Dump file generation problem
Hi,
I've configured configtool to have these 2 parameters:
-XX:+HeapDumpOnOutOfMemoryError
-XX:+HeapDumpOnCtrlBreak
In my understanding, with these 2 parameters the heap dump files will only be generated in 2 situations, i.e. when an out-of-memory error occurs, or when a user manually sends CTRL+BREAK in the MMC.
1) Unfortunately, there are many heap dump files (9 in total) generated when neither of the above situations occurred. I couldn't find "OutOfMemoryError" in the default trace, nor is the shallow heap size of those heap dump files anywhere near the memory limit. As a consequence, our server ran out of disk space.
My question is, what are the other possibilities that heap dump file will be generated?
2) In the Memory Consumption graph (NWA (http://host:port/nwa) -> System Management -> Monitoring -> Java Systems Reports), out of memory error occurred when the memory usage is reaching about 80% of the allocated memory. What are the remaining 20% or so reserved for ?
Any help would be much appreciated.
Thanks.
Hi,
Having the -XX:+HeapDumpOnCtrlBreak option makes the VM trigger a heap dump, whenever a CTRL_BREAK event appears. The same event is used also to trigger a thread dump, an action you can do manually from the SAP Management Console, I think it is called "Dump stacks". So if there was someone triggering thread dumps for analysis of other types of problems, this has the side effect of writing also a heap dump.
Additionally, the server itself may trigger a thread dump (and by this also a heap dump if the option is present). It does this for example when a timeout appears during the start or stop of the server. A thread dump from such a moment allows us to see for example which service is unable to start.
Therefore, I would recommend that you leave only the -XX:+HeapDumpOnOutOfMemoryError, as long as you don't plan to trigger any heap dumps on your own. The latter will cause the VM to write a heap dump only once - on the first appearance of an OutOfMemoryError.
In case you need to trigger the heap dumps manually, leave the -XX:+HeapDumpOnCtrlBreak option for the moment of troubleshooting, but consider if you want to keep it afterwards.
If heap dumps were written because of an OutOfMemoryError you should be able to see this in the dev_server file in /usr/sap/<SID>/<inst>/work/ . Also there you should be able to see if indeed thread dumps were triggered (just search for "Full Thread ").
I hope this helps.
Regards,
Krum -
Capturing the JVM heap usage information to a log
When using WebLogic 6.1 SP3, the console under Monitoring/Performance displays a graph with historical JVM heap usage information. Is there any way to capture this information to a log?
For heap size before and after each GC, you could pass the -verbose:gc option to the JVM
on startup:
WLS C:\alex>java -verbose:gc weblogic.Admin PING 10 10
[GC 512K->154K(1984K), 0.0068905 secs]
[GC 666K->164K(1984K), 0.0069037 secs]
[GC 676K->329K(1984K), 0.0029822 secs]
[GC 841K->451K(1984K), 0.0038960 secs]
[GC 963K->500K(1984K), 0.0015452 secs]
[GC 1012K->598K(1984K), 0.0027509 secs]
[GC 1110K->608K(1984K), 0.0029370 secs]
[GC 1120K->754K(1984K), 0.0027361 secs]
[GC 1266K->791K(1984K), 0.0019639 secs]
[GC 1303K->869K(1984K), 0.0028314 secs]
[GC 1381K->859K(1984K), 0.0012957 secs]
[GC 1367K->867K(1984K), 0.0012504 secs]
[GC 1379K->879K(1984K), 0.0018592 secs]
[GC 1391K->941K(1984K), 0.0036871 secs]
[GC 1453K->988K(1984K), 0.0027143 secs]
Sending 10 pings of 10 bytes.
RTT = ~47 milliseconds, or ~4 milliseconds/packet
Looks like it might be too much info though...
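If -verbose:gc is indeed too much, the same heap figures can be polled programmatically on a 5.0+ VM via MemoryMXBean and written to whatever log you like, on whatever interval you like. A sketch (class name illustrative; wire it to your own logger or timer):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapLogger {
    // One log line with current heap figures; call this from a timer thread
    // and redirect System.out (or use your logging framework) to get a log file.
    public static String sample() {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        // Note: getMax() can be -1 if no maximum is defined for the pool
        return String.format("heap used=%dK committed=%dK max=%dK",
                heap.getUsed() / 1024, heap.getCommitted() / 1024, heap.getMax() / 1024);
    }

    public static void main(String[] args) {
        System.out.println(sample());
    }
}
```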
Cheerio,
-alex
Fazle Khan wrote:
When using weblogic 6.1sp3 the console under monitoring/performance a graph is
displayed with the historical JVM heap usage information. Is there any way to
capture this information to a log? -
JVMPI_GC_ROOT_MONITOR_USED - what does this mean in a heap dump?
I'm having some OutOfMemory errors in my application, so I turned on a profiler, and took a heap dump before and after an operation that is blowing up the memory.
What changes after the operation is that I get an enormous amount of data that is reported under the node JVMPI_GC_ROOT_MONITOR_USED. This includes some Oracle PreparedStatements which are holding a lot of data.
I tried researching the meaning of JVMPI_GC_ROOT_MONITOR_USED, but found little help. Should this be objects that are ready for garbage collection? If so, they are not being garbage collected, but I'm getting OutOfMemoryError instead (I thought the JVM was supposed to guarantee GC would be run before OutOfMemory occurred).
Any help on how to interpret what it means for objects to be reported under JVMPI_GC_ROOT_MONITOR_USED and any ways to eliminate those objects, will be greatly appreciated!
Thanks
I tried researching the meaning of JVMPI_GC_ROOT_MONITOR_USED, but found little help. Should this be objects that are ready for garbage collection?
Disclaimer: I haven't written code to use JVMPI, so anything here is speculation.
However, after reading this: http://java.sun.com/j2se/1.4.2/docs/guide/jvmpi/jvmpi.html
It appears that the "ROOT" flags in a level-2 dump are used with objects that are considered a "root reference" for GC (those references that are undeniably alive). Most descriptions of "roots" are static class members and variables in a stack frame. My interpretation of this doc is that objects used in a synchronized statement are also considered roots, at least for the life of the synchronized block (makes a lot of sense when you think about it). -
Hi all,
Our Environment
===============
OS - Windows XP Service Pack 3
Oracle Developer Suite - 10.1.2.3.0
Oracle Forms & Reports Service 10.1.2.3.0
Oracle Database 10.2.0.1.0
JDK 1.5
Jinitiator 1.3.1.30
Apache POI 3.5
From Forms we are writing to Excel files, after copying an XL template, using Apache POI 3.5 and JDK 1.5. This XL template file has got a lot of macros.
We have imported the Java class files into the form as a PL/SQL library. We are able to write XL files of up to 7 MB. Beyond that size it fails with the error Ora-105101.
We tried to increase the JVM heap size to 640 MB by setting -Xmx640M everywhere in OC4J_BI_FORMS/Server Properties/Java Options and Home/Server Properties/Java Options through the Enterprise Manager console. We also manually set the values in OPMN.XML and reloaded it, set -Xmx640M in the Jinitiator 1.3.1.30 Java Runtime Parameters, and set it in the Java console. None of the settings have any effect.
We have written a small program to display the runtime memory from Forms, which always displays a maximum memory of only 63 MB.
PACKAGE BODY HeapSize IS
-- DO NOT EDIT THIS FILE - it is machine generated!
args JNI.ARGLIST;
-- Constructor for signature ()V
FUNCTION new RETURN ORA_JAVA.JOBJECT IS
BEGIN
args := NULL;
RETURN (JNI.NEW_OBJECT('HeapSize', '()V', args));
END;
-- Method: getTotalMemory ()D
FUNCTION getTotalMemory(
obj ORA_JAVA.JOBJECT) RETURN NUMBER IS
BEGIN
args := NULL;
RETURN JNI.CALL_DOUBLE_METHOD(FALSE, obj, 'HeapSize', 'getTotalMemory', '()D', args);
END;
-- Method: getMaxMemory ()D
FUNCTION getMaxMemory(
obj ORA_JAVA.JOBJECT) RETURN NUMBER IS
BEGIN
args := NULL;
RETURN JNI.CALL_DOUBLE_METHOD(FALSE, obj, 'HeapSize', 'getMaxMemory', '()D', args);
END;
BEGIN
NULL;
END;
declare
obj ORA_JAVA.JOBJECT;
BEGIN
obj:=HeapSize.new;
message('Total memory '||HeapSize.getTotalMemory(obj));
message('Max memory '||HeapSize.getMaxMemory(obj));
END;
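For reference, the HeapSize class called by the wrapper above is presumably along these lines; the actual Java source isn't shown in the post, so this is a guess reconstructed from the method names and their ()D signatures:

```java
// A sketch of what the HeapSize class imported into Forms presumably looks like;
// the original source is not shown, so this is a guess based on the PL/SQL
// wrapper's method names and ()D (double-returning) signatures.
public class HeapSize {
    public double getTotalMemory() {
        // Heap currently committed to this JVM, in bytes
        return Runtime.getRuntime().totalMemory();
    }

    public double getMaxMemory() {
        // Upper bound the JVM will try to use (-Xmx), in bytes
        return Runtime.getRuntime().maxMemory();
    }

    public static void main(String[] args) {
        HeapSize hs = new HeapSize();
        System.out.println("max >= total: " + (hs.getMaxMemory() >= hs.getTotalMemory()));
    }
}
```

Since these methods report the heap of whichever JVM executes them, a constant 63 MB reading suggests the -Xmx setting is not reaching the JVM that actually runs the imported class.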
The procedure below is for writing to the Excel file.
============================================
PACKAGE BODY UWWriteExcel IS
-- DO NOT EDIT THIS FILE - it is machine generated!
args JNI.ARGLIST;
-- Constructor for signature ()V
FUNCTION new RETURN ORA_JAVA.JOBJECT IS
BEGIN
args := NULL;
RETURN (JNI.NEW_OBJECT('UWWriteExcel', '()V', args));
END;
-- Method: copyExcel (Ljava/lang/String;Ljava/lang/String;)V
PROCEDURE copyExcel(
obj ORA_JAVA.JOBJECT,
a0 VARCHAR2,
a1 VARCHAR2) IS
BEGIN
args := JNI.CREATE_ARG_LIST(2);
JNI.ADD_STRING_ARG(args, a0);
JNI.ADD_STRING_ARG(args, a1);
JNI.CALL_VOID_METHOD(FALSE, obj, 'UWWriteExcel', 'copyExcel', '(Ljava/lang/String;Ljava/lang/String;)V', args);
END;
-- Method: getSpreadSheetPara (Ljava/lang/String;)V
PROCEDURE getSpreadSheetPara(
obj ORA_JAVA.JOBJECT,
a0 VARCHAR2) IS
BEGIN
args := JNI.CREATE_ARG_LIST(1);
JNI.ADD_STRING_ARG(args, a0);
JNI.CALL_VOID_METHOD(FALSE, obj, 'UWWriteExcel', 'getSpreadSheetPara', '(Ljava/lang/String;)V', args);
END;
-- Method: openSheet (I)V
PROCEDURE openSheet(
obj ORA_JAVA.JOBJECT,
a0 NUMBER) IS
BEGIN
args := JNI.CREATE_ARG_LIST(1);
JNI.ADD_INT_ARG(args, a0);
JNI.CALL_VOID_METHOD(FALSE, obj, 'UWWriteExcel', 'openSheet', '(I)V', args);
END;
-- Method: getCellValues (IID)V
PROCEDURE getCellValues(
obj ORA_JAVA.JOBJECT,
a0 NUMBER,
a1 NUMBER,
a2 NUMBER) IS
BEGIN
args := JNI.CREATE_ARG_LIST(3);
JNI.ADD_INT_ARG(args, a0);
JNI.ADD_INT_ARG(args, a1);
JNI.ADD_DOUBLE_ARG(args, a2);
JNI.CALL_VOID_METHOD(FALSE, obj, 'UWWriteExcel', 'getCellValues', '(IID)V', args);
END;
-- Method: getCellValues (IILjava/lang/String;)V
PROCEDURE getCellValues(
obj ORA_JAVA.JOBJECT,
a0 NUMBER,
a1 NUMBER,
a2 VARCHAR2) IS
BEGIN
args := JNI.CREATE_ARG_LIST(3);
JNI.ADD_INT_ARG(args, a0);
JNI.ADD_INT_ARG(args, a1);
JNI.ADD_STRING_ARG(args, a2);
JNI.CALL_VOID_METHOD(FALSE, obj, 'UWWriteExcel', 'getCellValues', '(IILjava/lang/String;)V', args);
END;
-- Method: exportExcel ()V
PROCEDURE exportExcel(
obj ORA_JAVA.JOBJECT) IS
BEGIN
args := NULL;
JNI.CALL_VOID_METHOD(FALSE, obj, 'UWWriteExcel', 'exportExcel', '()V', args);
END;
-- Method: copy (Ljava/lang/String;Ljava/lang/String;)V
PROCEDURE copy(
a0 VARCHAR2,
a1 VARCHAR2) IS
BEGIN
args := JNI.CREATE_ARG_LIST(2);
JNI.ADD_STRING_ARG(args, a0);
JNI.ADD_STRING_ARG(args, a1);
JNI.CALL_VOID_METHOD(TRUE, NULL, 'UWWriteExcel', 'copy', '(Ljava/lang/String;Ljava/lang/String;)V', args);
END;
BEGIN
NULL;
END;
declare
obj ORA_JAVA.JOBJECT;
BEGIN
message('-1');pause;
obj:=UWWriteExcel.new;
message('0');pause;
UWWriteExcel.copyExcel(obj,'C:\\excel\\CAT2009WS.XLS','C:\\excel\\CAT2009WS.XLS');
message('1');pause;
UWWriteExcel.openSheet(obj,0);
message('2');pause;
UWWriteExcel.getCellValues(obj,6,2,900);
message('3');pause;
UWWriteExcel.getCellValues(obj,7,2,911);
message('4');pause;
UWWriteExcel.exportExcel(obj);
END;
When the XL file is larger than 7 MB, an Oracle error is displayed after message('0').
From the command prompt, if we run the same Java class file passing the -Xmx256m parameter, we are able to write the big XL file.
Can anyone tell me where I am wrong... Can we increase the JVM heap size from Forms?
I have a similar problem.
Via Forms I call a Java class (import Java class -> PL/SQL class java method).
For this specific process I need to set the -Xmx Java option...
How do I do this?
Changing the Java option for the Forms OC4J in the EM doesn't help.
Is there another level where I can modify this?
Is the Forms Java process the only one that exists? How does it handle such Java calls?
Are they separate Java processes? Threads? ...
Thanks !!! -
IBM Heap Dump command line utilities
Hello,
I am looking for the command-line parameter for the IBM heap dump location in configtool. I know we can set it using the environment variable IBM_HEAPDUMPDIR, but I would like a command-line way to set it in configtool.
Thanks in advance!
Hi,
The JVM checks each of the following locations for existence and write-permission, then stores the Heapdump in the first one that is available.
1. The location specified using the file suboption on the triggered -Xdump:heap agent.
2. The location specified by the IBM_HEAPDUMPDIR environment variable, if set (_CEE_DMPTARG on z/OS(R)).
3. The current working directory of the JVM process.
4. The location specified by the TMPDIR environment variable, if set.
5. The /tmp directory (on Windows(R), C:\temp).
Details : http://publib.boulder.ibm.com/infocenter/javasdk/v1r4m2/index.jsp?topic=/com.ibm.java.doc.diagnostics.142j9/html/contents.html
Regards,
Sandeep -
EM Agent get restarted on heap dump generation with jmap command
Hi all,
To find the cause of an OutOfMemory error, we are generating a heap dump of a particular JVM process using the command "jmap -dump:format=b,file=heap.bin <pid>".
But when we execute this command, all our other JVM processes crash, and it also crashes the emagent process.
I would like to know the reason why these processes, and especially the emagent process,
get crashed.
Regards,
$