Heap Allocation Parameters
In previous Java versions, -mx was used to set the maximum heap size; the newer -Xmx parameter replaced it. What happens if -mx is passed to a newer version of Java? We are debugging an issue where -mx was passed to a newer java process, and we suspect it may have been ignored. Is this correct?
Java 1.4 (at least) still seems to honor the -ms and -mx options, although since they are deprecated that may change in any release.
Simple proof: run "java -ms" and/or "java -mx" and note the error messages.
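One way to confirm programmatically whether the option was honored (rather than silently ignored) is to print the limit the running JVM actually adopted; start this once with -mx256m and once with -Xmx256m and compare the output. The class name here is just for illustration:

```java
// Reports the maximum heap size the JVM has actually adopted,
// so the effect of -mx vs -Xmx can be compared directly.
public class MaxHeapCheck {
    static long maxHeapMb() {
        return Runtime.getRuntime().maxMemory() / (1024L * 1024L);
    }

    public static void main(String[] args) {
        System.out.println("Max heap: " + maxHeapMb() + "MB");
    }
}
```

If -mx was honored, both runs report roughly the same figure; if it was ignored, the -mx run falls back to the default heap size.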
Chuck
Similar Messages
-
Solaris 10 - Zones - Java Heap Allocation
I have a SUN T5240 running Solaris 10 with 2 zones configured.
We have 64GB of RAM on board....
I am unable to start any of my JAVA applications/methods with more than 1280mb of java heap allocated.
ulimit -a shows:
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) unlimited
coredump(blocks) 0
nofiles(descriptors) 256
vmemory(kbytes) unlimited
Can anyone tell me why I can't get to the RAM/memory that I know is there?
Thanks
soularis wrote:
We're only asking for Xmx2048 currently and we are still being denied... I need to run in 32-bit mode for my application.
What are you running in 32 bits? Solaris? Java? Or are you locked in a 32-bit zone on a 64-bit server? I'm not even sure SPARC is supported in 32-bit mode anymore (for Solaris, anyway).
example:
java -Xmx2g -version
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
Seems like I should be able to get 2GB in 32-bit Solaris 10.
Right?
One would think. Though you should make sure that whatever you are using is allocated enough swap to run. It's possible that you are running in a constrained zone, so even if the machine has 64G, your zone is only allocated a small portion of that.
Here's the 64-bit attempt:
# java -d64 -Xmx2048m -version
java version "1.5.0_14"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_14-b03)
Java HotSpot(TM) 64-Bit Server VM (build 1.5.0_14-b03, mixed mode)
Is this under the same user? Odd that 64 bits starts up, but 32 bits doesn't. You also might try upgrading to 1.5.0_18 - it might be a bug that was introduced and has since been fixed. -
Java.lang.OutOfMemoryError: heap allocation failed
Hi,
I am facing a strange issue which I cannot debug.
Could anyone let me know what could be the exact reason for this kind of error...
Exception in thread "172.24.36.74:class=SnmpAdaptorServer_172.24.36.74,protocol=snmp,port=161" java.lang.OutOfMemoryError: heap allocation failed
at java.net.PlainDatagramSocketImpl.receive0(Native Method)
at java.net.PlainDatagramSocketImpl.receive(PlainDatagramSocketImpl.java:136)
at java.net.DatagramSocket.receive(DatagramSocket.java:712)
at com.sun.management.comm.SnmpAdaptorServer.doReceive(SnmpAdaptorServer.java:1367)
at com.sun.management.comm.CommunicatorServer.run(CommunicatorServer.java:617)
at java.lang.Thread.run(Thread.java:595)
terminate called after throwing an instance of 'std::bad_alloc'
what(): St9bad_alloc
Your help on this is very much appreciated...
Srinivasan.
You are trying to receive a datagram with an enormous byte array. The maximum size of a UDP datagram is 65535 - 28 = 65507 bytes, and the maximum practical size is 534 bytes.
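The silent-truncation behaviour is easy to demonstrate on the loopback interface: bytes that do not fit the receive buffer are simply discarded. A minimal sketch (class and method names are made up for this example):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class UdpTruncationDemo {
    // Sends a 600-byte datagram to ourselves and receives it into a
    // buffer one byte larger than the 534-byte "practical" size.
    // Returns true if the truncation was detected.
    static boolean demo() throws Exception {
        byte[] big = new byte[600];
        byte[] buf = new byte[535];
        try (DatagramSocket receiver =
                     new DatagramSocket(0, InetAddress.getLoopbackAddress());
             DatagramSocket sender = new DatagramSocket()) {
            receiver.setSoTimeout(2000);
            sender.send(new DatagramPacket(big, big.length,
                    receiver.getLocalAddress(), receiver.getLocalPort()));
            DatagramPacket p = new DatagramPacket(buf, buf.length);
            receiver.receive(p);
            // Excess bytes were silently discarded, so a completely
            // full buffer signals a probably-truncated message.
            return p.getLength() == buf.length;
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("truncated: " + demo());
    }
}
```

Because the excess is dropped without any exception, only the buffer-completely-full test reveals that something was lost.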
Best practice with UDP is to use a byte array one larger than the largest expected datagram, so you can detect truncations: if the size of the received datagram = the size of the byte array, you received an unexpectedly large message and it has probably been truncated. -
Win SRV 2012 R2 - Desktop heap allocation failed
Every 4-5 days, I get this warning message:
Warning Win32k (Win32k ) A desktop heap allocation failed.
20 minutes later, this DistributedCOM error pops up every 10 minutes until I restart the server.
The server {73E709EA-5D93-4B2E-BBB0-99B7938DA9E4} did not register with DCOM within the required timeout
Please help!
Hello babajeeb69,
This may help to fix the issue on the server:
Open regedit,and navigate to the following key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\SubSystems
Then find the following value: "Windows SharedSection=X,Y,Z",
where X, Y and Z are the values you'll find there.
The Desktop heap is the "Y" value.
Double this value, and then reboot the box.
Hopefully the issue will be gone for good.
It turns out that too many interactive services were running on the machine; since these services each allocate some of the Desktop heap, once it becomes depleted Win32k will starve for memory even with plenty of RAM and disk space available.
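For orientation, the value in question typically looks something like the line below; the defaults differ between Windows versions, so treat these numbers as illustrative only. The second SharedSection number is the interactive desktop heap size in KB, i.e. the "Y" the advice above says to double:

```
Windows SharedSection=1024,20480,768
```

After doubling the middle value it would read SharedSection=1024,40960,768; the first and third numbers stay as they were.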
Att,
Felipe Cobu -
Help needed on abap heap area parameters
Hi Gurus ,
I need your help regarding the abap heap area parameters; we are facing a lot of "no roll memory" system issues on one of our instances.
As for my understanding :
1) abap/heap_area_dialog: heap memory limit for dialog processes.
2) abap/heap_area_nondialog: heap memory limit for non-dialog processes.
3) abap/heap_area_total: what does this imply?
In my production instance the parameters are as follows :
Actuals
abap/heap_area_dialog --- 2GB (Recommended: 2GB for an application server with max 50 users)
abap/heap_area_nondialog --- 2GB (Recommended: 2GB for an application server with max 50 users)
abap/heap_area_total --- 1GB (Recommended and default value: 2GB)
My doubt is that "abap/heap_area_total of 1GB" might be creating problems.
What is the recommended value for abap/heap_area_total on Solaris 10 ?
Your valuable inputs are required!
Hello Sandy,
Have a look at this link for UNIX
http://help.sap.com/saphelp_nw04/helpdata/en/02/96257b538111d1891b0000e8322f96/content.htm
For more in memory management in below link
http://help.sap.com/saphelp_nw04/helpdata/en/02/96253c538111d1891b0000e8322f96/frameset.htm
Regards
Vivek -
Hi all,
I have a strange heap allocation problem with my application. I will try to explain what happens:
My AWT-application reads data every second from a serial port and displays it. Max. heap size is set to -Xmx32m because the application has to run in an environment with little memory. JVM is 1.4.2_18.
After some hours I get an OutOfMemoryError. My first idea of course was: ok a memory leak, use my profiler and see what happens. But I can not find any memory leaks. The following is what I can see in my profiler:
The first 2 hours everything seems to be OK. The available heap size is stable at 15 MB and the used heap goes up and down as normal. But after 2 hours the used heap size has some peaks. The used heap seems to build up and the peaks get higher but still the used heap size goes down due to GC until a peak is so high that it is over 32MB and my application dies. The recorded objects graph has the same trend as the heap graph. The VM seems to "forget" garbage collection...
The only idea I have is that heap fragmentation occurs, e.g. because of array allocations. But until now all attempts to track down the problem have failed.
Any ideas what can be the cause of the problem or what I can try? Any help is welcome.
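One cheap way to narrow this down without a profiler attached is to log heap figures from inside the application and watch the post-GC floor: a floor that rises over hours suggests a leak, while a stable floor with growing peaks points at bursty large allocations (which would also fit the fragmentation theory). A minimal sketch, assuming nothing about the application itself:

```java
// Periodically samples heap figures from inside the application.
// Logged once a minute over hours, the shape of the "used" floor
// after GC distinguishes a leak from bursty temporary allocation.
public class HeapLogger {
    static String sample() {
        Runtime rt = Runtime.getRuntime();
        long usedKb = (rt.totalMemory() - rt.freeMemory()) / 1024;
        long committedKb = rt.totalMemory() / 1024;
        long maxKb = rt.maxMemory() / 1024;
        return "used=" + usedKb + "KB committed=" + committedKb
                + "KB max=" + maxKb + "KB";
    }

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 3; i++) {
            System.out.println(sample());
            Thread.sleep(100);
        }
    }
}
```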
Edited by: JBrain on Aug 21, 2008 12:40 AM
Edited by: JBrain on Aug 21, 2008 4:27 AM
Yes, the same limitation in 32-bit Linux applies to JRockit as well. On
a 64-bit architecture this limit does not exist.
Regards,
/Staffan
Andrew Kelly wrote:
Sun JDK bug 4435069 states that there is a limitation to the Heap size for a Linux
JDK of 2G.
Does this restriction also apply to JRockit? Has this been tested? -
Hello,
I am working on a memory-hungry Swing-based application that analyzes several GB of data.
We are running with Java 1.4 on Windows XP. To keep responsiveness up, we want to allocate as much memory to our application as possible and cache data to disk when necessary.
The maximum amount of heap space that we can allocate with JDK1.4 on XP is approx 1.6 GB ( -mx1638m ). The target machine itself has 2GB RAM.
MY QUESTION IS: are there any potential problems associated with allocating the full 1638m that XP will allow? (We haven't seen any yet.)
However one of my colleagues thinks there might be potential problems and feels that we should only allocate 80% of the max (approx 1.3 GB).
His thinking is that as long as we remain in 'pure Java' there should not be any problems, but JNI calls might lead to problems.
Just wondering if anyone else has an opinion on this.
Thanks,
Ted Hill
C/C++ code uses the application heap space.
The java heap comes from the application heap space.
The application itself has a maximum amount of heap space. This is fixed by the OS and by how the application starts up (usually due to the way it was built in the first place).
As a consequence, if your C/C++ code grabs a large chunk of memory and keeps it, then the maximum Java heap will be decreased by that amount. Likewise, if the Java heap grabs all the space, then the C/C++ code won't have any space left.
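That split can be observed directly: a direct (native) NIO buffer consumes process memory but barely moves the Java-heap counters. A small sketch (the 4 MB size is arbitrary):

```java
import java.nio.ByteBuffer;

// Allocates native memory via a direct buffer and shows that the
// Java heap's "used" figure barely changes, because the buffer's
// storage lives in the C heap, outside -Xmx.
public class NativeVsHeap {
    static long heapUsedDeltaAfterDirectAlloc(int bytes) {
        Runtime rt = Runtime.getRuntime();
        long before = rt.totalMemory() - rt.freeMemory();
        ByteBuffer direct = ByteBuffer.allocateDirect(bytes);
        direct.put(0, (byte) 1); // touch the buffer so it is really backed
        long after = rt.totalMemory() - rt.freeMemory();
        return after - before;   // only the tiny wrapper object is on-heap
    }

    public static void main(String[] args) {
        long delta = heapUsedDeltaAfterDirectAlloc(4 * 1024 * 1024);
        System.out.println("Java-heap delta after 4MB direct alloc: "
                + delta + " bytes");
    }
}
```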
This applies only to the C/C++ heap though. When jni code uses java objects then those objects (ignoring some trivial usage) exist on the java heap. -
Hi everybody,
It seems that our jrockit is creating huge heaps even for the simplest programs. Check out the following data.
OS: RedHat EL 4.0 x86_64
Java: JRockit R27.3.1 x86_64
Say we run the following program:
public class TestHeap {
    public static void main(String[] args) throws InterruptedException {
        System.out.println("Sleeping");
        Thread.sleep(600000);
        System.out.println("Finished");
    }
}
I ran jrcmd print_memusage on that process and this is the weird output:
[JRockit] *** 0th memory utilization report
(all numbers are in kbytes)
Total mapped ;;;;;;;3304980
; Total in-use ;;;;;; 141076
;; executable ;;;;; 5436
;;; java code ;;;; 384; 7.1%
;;;; used ;;; 235; 61.4%
;; shared modules (exec+ro+rw) ;;;;; 5031
;; guards ;;;;; 144
;; readonly ;;;;; 47448
;; rw-memory ;;;;; 88576
;;; Java-heap ;;;; 65536; 74.0%
;;; Stacks ;;;; 4552; 5.1%
;;; Native-memory ;;;; 18487; 20.9%
;;;; java-heap-overhead ;;; 2057
;;;; codegen memory ;;; 768
;;;; classes ;;; 2816; 15.2%
;;;;; method bytecode ;; 196
;;;;; method structs ;; 424 (#5438)
;;;;; constantpool ;; 1101
;;;;; classblock ;; 128
;;;;; class ;; 231 (#439)
;;;;; other classdata ;; 349
;;;;; overhead ;; 189
;;;; threads ;;; 16; 0.1%
;;;; malloc:ed memory ;;; 1545; 8.4%
;;;;; codeinfo ;; 75
;;;;; codeinfotrees ;; 48
;;;;; exceptiontables ;; 10
;;;;; metainfo/livemaptable ;; 222
;;;;; codeblock structs ;; 0
;;;;; constants ;; 0
;;;;; livemap global tables ;; 226
;;;;; callprof cache ;; 0
;;;;; paraminfo ;; 32 (#467)
;;;;; strings ;; 460 (#7653)
;;;;; strings(jstring) ;; 0
;;;;; typegraph ;; 54
;;;;; interface implementor list ;; 10
;;;;; thread contexts ;; 14
;;;;; jar/zip memory ;; 425
;;;;; native handle memory ;; 12
;;;; unaccounted for memory ;;; 11300; 61.1%;7.31
So that's 3GB mapped for... nothing. Why is that memory being allocated? Is there a way to change this weird behaviour?
Thanks,
Martin
Some more info on this: if I run that hello world program with -Xmx64M I get the following output (btw, the machine has 4GB).
[JRockit] *** 0th memory utilization report
(all numbers are in kbytes)
Total mapped ;;;;;;;1201940
; Total in-use ;;;;;; 133140
;; executable ;;;;; 5436
;;; java code ;;;; 384; 7.1%
;;;; used ;;; 235; 61.4%
;; shared modules (exec+ro+rw) ;;;;; 5031
;; guards ;;;;; 144
;; readonly ;;;;; 47448
;; rw-memory ;;;;; 80640
;;; Java-heap ;;;; 65536; 81.3%
;;; Stacks ;;;; 4552; 5.6%
;;; Native-memory ;;;; 10551; 13.1%
;;;; java-heap-overhead ;;; 2057
;;;; codegen memory ;;; 768
;;;; classes ;;; 2816; 26.7%
;;;;; method bytecode ;; 196
;;;;; method structs ;; 424 (#5438)
;;;;; constantpool ;; 1101
;;;;; classblock ;; 128
;;;;; class ;; 231 (#439)
;;;;; other classdata ;; 349
;;;;; overhead ;; 189
;;;; threads ;;; 16; 0.2%
;;;; malloc:ed memory ;;; 1545; 14.6%
;;;;; codeinfo ;; 75
;;;;; codeinfotrees ;; 48
;;;;; exceptiontables ;; 10
;;;;; metainfo/livemaptable ;; 222
;;;;; codeblock structs ;; 0
;;;;; constants ;; 0
;;;;; livemap global tables ;; 226
;;;;; callprof cache ;; 0
;;;;; paraminfo ;; 32 (#467)
;;;;; strings ;; 460 (#7653)
;;;;; strings(jstring) ;; 0
;;;;; typegraph ;; 54
;;;;; interface implementor list ;; 10
;;;;; thread contexts ;; 14
;;;;; jar/zip memory ;; 425
;;;;; native handle memory ;; 12
;;;; unaccounted for memory ;;; 3364; 31.9%;2.18
-
Java heap size parameters: -Xmx Vs -mx
I am trying to increase the heap size of the JVM and tried the -mx option to set the maximum heap size. It worked for me. As per the Java docs the option is -Xmx.
Could any one tell the difference between these two options?
Thanks in advance,
amar
Hello amarrecherla,
the -X options are not standard JVM options, but Sun-specific ones. For a complete list type
java -X
With kind regards
Ben -
Hi all,
I am running a SUNONE App Server on HP-UX 11i with JDK 1.4.0_02 (shipped with the SunOne app server). The hardware has 2 CPUs with 4 GB of RAM. This is the only application running on this server.
I am trying to allocate a minimum heap size of 1024mb and a max of 2048mb for this application server by setting the JVM options -Xms1024m -Xmx2048m. When I restart the app server with these changes, I get the following error.
[02/Aug/2004:18:36:25] WARNING (29579): CORE3283: stderr: Error occurred during initialization of VM
[02/Aug/2004:18:36:25] FATAL (29579): CORE4005: Internal error: unable to create JVM
[02/Aug/2004:18:36:25] WARNING (29579): CORE3283: stderr: Could not reserve enough space for gen1 generation heap
I tried various combinations; the app server runs fine with a min of 128mb and a max of 512mb heap size.
Following are the kernel settings on this server:
max_thread_proc 3000
maxdsiz 0x80000000
maxdsiz_64bit 0x80000000
maxfiles 8192
nfile 100000
maxuprc 2048
maxusers 800
Does anyone know what the issue could be, or what other areas I should look at to tune the system?
Hi,
Please make sure you got enough swap space.
I would reserve at least 4GB in your case.
ak -
Allocated heap memory goes up even when there is enough free memory
Hi,
Our Java application's memory usage keeps growing. Further analysis of the heap memory using JProbe shows that the allocated heap memory goes up even when there is enough free memory available in the heap.
When the process started, the allocated heap memory was around 50MB and the memory used was around 8MB. After a few hours, the in-use memory remains at 8MB (with a slight increase in KBs), but the allocated memory has gone up to 70MB.
We are using JVM 1.5_10. What could be the reason for the heap allocation going up even when there is enough free memory available in the heap?
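For what it's worth, a committed heap that grows while live data stays flat is normal HotSpot behaviour: after each collection the VM tries to keep a target fraction of the heap free, committing more memory (up to -Xmx) when the free fraction drops below that target. If footprint matters, the real HotSpot flags -XX:MinHeapFreeRatio and -XX:MaxHeapFreeRatio rein this in; the percentages and class name below are only an example:

```
java -Xmx256m -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=30 MyApp
```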
-Rajesh.
Hi Eric,
Please check whether there are any errors or warnings in the Event Viewer around that date and time.
If there is any error, please post the event ID to help us to troubleshoot.
Best Regards,
Anna -
Gurus,
Could somebody help me to understand the difference between the heap size parameters -Xms and -Xmx? I usually change both of them. One other question: is the maximum heap size we can use total RAM / 2?
Thanks,
Text from the above link
==================
As part of the Best Practices, we know that we should be setting -Xms & -Xmx Java command line parameters. What are these settings and why is it required to set these.
As JAVA starts, it creates within the system's memory a Java Virtual Machine (JVM). The JVM is where the complete processing of any Java program takes place. All JAVA applications (including IQv6) by default allocate & reserve up to 64 MB of memory from the system on which they are running.
The Xms is the initial/minimum Java memory (heap) size within the JVM. Setting the initial memory (heap) size higher can help in a couple of ways. First, it allows garbage collection (GC) to run less often, which is more efficient. Second, a higher initial value means the heap does not have to grow as often as with a lower initial size, saving the overhead of the JVM asking the OS for more memory.
The Xmx is the maximum Java memory (heap) size within the Java Virtual Machine (JVM). As the JVM gets closer to fully utilizing the initial memory, it checks the Xmx setting to find out if it can draw more memory from system resources. If it can, it does so. Allocating contiguous memory to itself is a very expensive operation for the JVM, so as it approaches the initial memory limit it uses aggressive garbage collection (to clean the memory and, where possible, avoid allocation), increasing the load on the system.
If the JVM needs memory beyond the value set in Xmx, it cannot draw more from system resources (even if available) and runs out of memory. Hence, the -Xms and -Xmx parameters should be increased depending upon the demand estimation for the system. Ideally both should be set to the same value (the maximum possible per the demand estimation). This ensures that the maximum memory is allocated right at start-up, eliminating the need for extra memory allocation during program execution. We recommend an aggressive maximum memory (heap) size of between 1/2 and 3/4 of physical memory.
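Following the advice above, a typical server start line pins both bounds to the same value so the heap is fully committed at start-up (the jar name is hypothetical):

```
java -Xms1024m -Xmx1024m -jar server.jar
```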
Edited by: oracleSQR on Oct 7, 2009 10:38 AM -
We are kind of struggling with the good old "out of memory"
issue of the JVM while running our app on the Oracle 9i J2EE container
(Solaris 2.6). I do not see much discussion about this
particular problem in this forum. We are playing with heap
allocation, the garbage collector, max number of instances etc. to
overcome this out of memory issue. If you are running or about
to run OC4J in a production environment, please let me know what
measures and settings you are using to tune the JVM.
We start with 128m, and use 512m for the maximum, and have no
troubles. That is:
java -jar -Xms128m -Xmx512m orion.jar
This is on Linux. There are memory leaks with various JDBC
drivers, PostgreSQL for example.
Also, if you do not put a maximum on your instances of entity beans, they will just pile up. You can adjust this in the orion-ejb-jar.xml file. The bean problem occurs if you have not normalized your bean mapping for performance.
regards,
the elephantwalker
www.elephantwalker.com -
Hello, I have done some tests to check the heap size of an application.
This is my test code:
public class Main {
    public static void main(String[] args) {
        Runtime runtime = Runtime.getRuntime();
        // Max limit for heap allocation, in bytes.
        long heapLimitBytes = runtime.maxMemory();
        // Currently allocated heap, in bytes.
        long allocatedHeapBytes = runtime.totalMemory();
        // Unused memory from the allocated heap.
        // It's an approximation, in bytes.
        long unusedAllocatedHeapBytes = runtime.freeMemory();
        // Used memory from the allocated heap.
        // It's an approximation, in bytes.
        long usedAllocatedHeapBytes =
            allocatedHeapBytes - unusedAllocatedHeapBytes;
        System.out.println("Max limit for heap allocation: " +
            getMBytes(heapLimitBytes) + "MB");
        System.out.println("Currently allocated heap: " +
            getMBytes(allocatedHeapBytes) + "MB");
        System.out.println("Used allocated heap: " +
            getMBytes(usedAllocatedHeapBytes) + "MB");
    }

    private static long getMBytes(long bytes) {
        return (bytes / 1024L) / 1024L;
    }
}
Then I ran this program with the option -Xmx1024m, and the result was:
On windows: Max limit for heap allocation: 1016MB
On HP-UX: Max limit for heap allocation: 983MB
Does anyone know why the max limit is not 1024MB as I requested?
And why does it show a different value in Windows than in HP-UX?
Thanks
Edited by: JoseLuis on Oct 5, 2008 11:29 AM
Thank you for the reply
I have checked and the page size in windows and HP-UX is 4KB.
Also, in the documentation for the -Xmx flag it says that the size must be multiple of 1024 bytes and bigger than 2 MB.
I can understand that the allocated size might be rounded to the nearest page (4 KB block), which would give a difference of less than 1 MB between the requested size and the real allocated size, but in Windows the difference is 8 MB and in HP-UX the difference is 41 MB. That's a big difference.
Am I missing something? -
Hi
I am trying to change a heap allocation from first fit to best fit. I understand that for best fit I will have to traverse the whole memory block and find the block which is closest in size to the one asked for. But how do I keep track of the differences?
Here is the chunk of code for first fit
for (x = free, back = -1; block[x] != -1; back = x, x = block[x + 1])
    if (block[x] > size) break; // size is the size needed by the running program
Now for best fit I am assuming that I 'let' the first block be the best case and then traverse through the list.
If someone could give me a few hints... I would really appreciate it
-bhaarat
Thx, I came up with another algorithm though... it seems about right to me. Please have a look at it and tell me what you think.
best = free;
for (x = free, back = -1; (x != -1) && (block[x] != -1); back = x, x = block[x + 1]) {
    // the && guard is there because when x == -1 I get a runtime error
    if (block[best] == size) break;
    else if (block[best] > size) {
        diff = block[best] - size; // calculate the difference
        // if the next location of the heap holds -1 there is nothing to
        // compare against; we have no choice but to split a block
        if (block[best + 1] != -1) {
            if ((block[block[best + 1]] - size > 0)
                    && (block[block[best + 1]] - size < 10)
                    && (block[block[best + 1]] - size < diff))
                best = block[best + 1];
        }
    }
    if (x == 0) break;
}
-bhaarat
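For comparison, here is one way the best-fit scan can be written so the running difference is tracked in a single variable. It assumes the same array layout as the snippets above (block[x] holds the size of the free block at x, block[x + 1] holds the index of the next free block, with -1 as the terminator), so it is a sketch to adapt, not a drop-in replacement:

```java
// Best-fit selection over a free list embedded in an int array.
// Returns the index of the smallest free block that still fits
// the request, or -1 if no block is large enough.
public class BestFit {
    static int bestFit(int[] block, int free, int size) {
        int best = -1;
        int bestDiff = Integer.MAX_VALUE;
        for (int x = free; x != -1 && block[x] != -1; x = block[x + 1]) {
            int diff = block[x] - size;
            if (diff == 0) {
                return x;        // exact fit: cannot do better, stop early
            }
            if (diff > 0 && diff < bestDiff) {
                bestDiff = diff; // smallest sufficient block so far
                best = x;
            }
        }
        return best;
    }
}
```

Tracking only the best index and its difference avoids re-deriving block[block[best + 1]] chains on every step, which is where the original version loses track.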