Weird heap allocation

Hi everybody,
It seems that our JRockit is creating huge heaps even for the simplest programs. Check out the following data.
OS: RedHat EL 4.0 x86_64
Java: JRockit R27.3.1 x86_64
Say we run the following program:
public class TestHeap {
  public static void main(String[] args) throws InterruptedException {
    System.out.println("Sleeping");
    Thread.sleep(600000);
    System.out.println("Finished");
  }
}
I run jrcmd print_memusage on that process and this is the weird output:
[JRockit] *** 0th memory utilization report
(all numbers are in kbytes)
Total mapped ;;;;;;;3304980
; Total in-use ;;;;;; 141076
;; executable ;;;;; 5436
;;; java code ;;;; 384; 7.1%
;;;; used ;;; 235; 61.4%
;; shared modules (exec+ro+rw) ;;;;; 5031
;; guards ;;;;; 144
;; readonly ;;;;; 47448
;; rw-memory ;;;;; 88576
;;; Java-heap ;;;; 65536; 74.0%
;;; Stacks ;;;; 4552; 5.1%
;;; Native-memory ;;;; 18487; 20.9%
;;;; java-heap-overhead ;;; 2057
;;;; codegen memory ;;; 768
;;;; classes ;;; 2816; 15.2%
;;;;; method bytecode ;; 196
;;;;; method structs ;; 424 (#5438)
;;;;; constantpool ;; 1101
;;;;; classblock ;; 128
;;;;; class ;; 231 (#439)
;;;;; other classdata ;; 349
;;;;; overhead ;; 189
;;;; threads ;;; 16; 0.1%
;;;; malloc:ed memory ;;; 1545; 8.4%
;;;;; codeinfo ;; 75
;;;;; codeinfotrees ;; 48
;;;;; exceptiontables ;; 10
;;;;; metainfo/livemaptable ;; 222
;;;;; codeblock structs ;; 0
;;;;; constants ;; 0
;;;;; livemap global tables ;; 226
;;;;; callprof cache ;; 0
;;;;; paraminfo ;; 32 (#467)
;;;;; strings ;; 460 (#7653)
;;;;; strings(jstring) ;; 0
;;;;; typegraph ;; 54
;;;;; interface implementor list ;; 10
;;;;; thread contexts ;; 14
;;;;; jar/zip memory ;; 425
;;;;; native handle memory ;; 12
;;;; unaccounted for memory ;;; 11300; 61.1%;7.31
So this makes 3 GB of... nothing. Why is that memory being allocated? Is there a way to change this weird behaviour?
Thanks,
Martin

Some more info on this. If I run that hello-world program with -Xmx64M I get the following output (by the way, the machine has 4 GB).
[JRockit] *** 0th memory utilization report
(all numbers are in kbytes)
Total mapped ;;;;;;;1201940
; Total in-use ;;;;;; 133140
;; executable ;;;;; 5436
;;; java code ;;;; 384; 7.1%
;;;; used ;;; 235; 61.4%
;; shared modules (exec+ro+rw) ;;;;; 5031
;; guards ;;;;; 144
;; readonly ;;;;; 47448
;; rw-memory ;;;;; 80640
;;; Java-heap ;;;; 65536; 81.3%
;;; Stacks ;;;; 4552; 5.6%
;;; Native-memory ;;;; 10551; 13.1%
;;;; java-heap-overhead ;;; 2057
;;;; codegen memory ;;; 768
;;;; classes ;;; 2816; 26.7%
;;;;; method bytecode ;; 196
;;;;; method structs ;; 424 (#5438)
;;;;; constantpool ;; 1101
;;;;; classblock ;; 128
;;;;; class ;; 231 (#439)
;;;;; other classdata ;; 349
;;;;; overhead ;; 189
;;;; threads ;;; 16; 0.2%
;;;; malloc:ed memory ;;; 1545; 14.6%
;;;;; codeinfo ;; 75
;;;;; codeinfotrees ;; 48
;;;;; exceptiontables ;; 10
;;;;; metainfo/livemaptable ;; 222
;;;;; codeblock structs ;; 0
;;;;; constants ;; 0
;;;;; livemap global tables ;; 226
;;;;; callprof cache ;; 0
;;;;; paraminfo ;; 32 (#467)
;;;;; strings ;; 460 (#7653)
;;;;; strings(jstring) ;; 0
;;;;; typegraph ;; 54
;;;;; interface implementor list ;; 10
;;;;; thread contexts ;; 14
;;;;; jar/zip memory ;; 425
;;;;; native handle memory ;; 12
;;;; unaccounted for memory ;;; 3364; 31.9%;2.18
---------------------!!!
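For comparison (a minimal sketch, not part of the original post): the same idle program can also print what the JVM itself thinks the heap looks like. The committed and used figures below would be expected to line up with the Java-heap line of the report above, while Total mapped also counts address space that is merely reserved.
public class TestHeapWithReport {
  public static void main(String[] args) throws InterruptedException {
    Runtime rt = Runtime.getRuntime();
    System.out.println("Sleeping");
    for (int i = 0; i < 10; i++) {
      // max = ceiling the heap may grow to, committed = what the JVM has claimed so far
      System.out.println("heap max=" + rt.maxMemory() / 1024 + " KB"
          + ", committed=" + rt.totalMemory() / 1024 + " KB"
          + ", used=" + (rt.totalMemory() - rt.freeMemory()) / 1024 + " KB");
      Thread.sleep(60000);
    }
    System.out.println("Finished");
  }
}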

Similar Messages

  • Solaris 10 - Zones - Java Heap Allocation

    I have a Sun T5240 running Solaris 10 with 2 zones configured.
    We have 64 GB of RAM on board....
    I am unable to start any of my Java applications/methods with more than 1280 MB of Java heap allocated.
    ulimit -a shows:
    time(seconds) unlimited
    file(blocks) unlimited
    data(kbytes) unlimited
    stack(kbytes) unlimited
    coredump(blocks) 0
    nofiles(descriptors) 256
    vmemory(kbytes) unlimited
    Can anyone tell me why I can't get to the RAM/memory that I know is there?
    Thanks

    soularis wrote:
    We're only asking for -Xmx2048 currently and we are still being denied... I need to run in 32-bit mode for my application.
    What are you running in 32 bits? Solaris? Java? Or are you locked into a 32-bit zone on a 64-bit server? I'm not even sure SPARC is supported in 32-bit mode anymore (for Solaris, anyway).
    example:
    java -Xmx2g -version
    Error occurred during initialization of VM
    Could not reserve enough space for object heap
    Could not create the Java virtual machine.
    Seems like I should be able to get 2 GB in 32-bit Solaris 10, right?
    One would think. Though you should make sure that whatever you are using is allocated enough swap to run. It's possible that you are running in a constrained zone, so even if the machine has 64 GB, your zone is only allocated a small portion of that.
    Here's the 64-bit attempt:
    # java -d64 -Xmx2048m -version
    java version "1.5.0_14"
    Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_14-b03)
    Java HotSpot(TM) 64-Bit Server VM (build 1.5.0_14-b03, mixed mode)
    This is under the same user? Odd that 64-bit starts up but 32-bit doesn't. You also might try upgrading to 1.5.0_18 - it might be a bug that was introduced and has since been fixed.

  • Java.lang.OutOfMemoryError: heap allocation failed

    Hi,
    I am facing a strange error which I cannot debug.
    Could anyone let me know what the exact reason for this kind of error could be...
    Exception in thread "172.24.36.74:class=SnmpAdaptorServer_172.24.36.74,protocol=snmp,port=161" java.lang.OutOfMemoryError: heap allocation failed
         at java.net.PlainDatagramSocketImpl.receive0(Native Method)
         at java.net.PlainDatagramSocketImpl.receive(PlainDatagramSocketImpl.java:136)
         at java.net.DatagramSocket.receive(DatagramSocket.java:712)
         at com.sun.management.comm.SnmpAdaptorServer.doReceive(SnmpAdaptorServer.java:1367)
         at com.sun.management.comm.CommunicatorServer.run(CommunicatorServer.java:617)
         at java.lang.Thread.run(Thread.java:595)
    terminate called after throwing an instance of 'std::bad_alloc'
    what(): St9bad_alloc
    Your help on this is very much appreciated...
    Srinivasan.

    You are trying to receive a Datagram with an enormous byte array. The maximum size of a UDP datagram is 65535-28=65507 bytes, and the maximum practical size is 534 bytes.
    Best practice with UDP is to use a byte array one larger than the largest expected datagram, so you can detect truncations: if the size of the received datagram = the size of the byte array, you received an unexpectedly large message and it has probably been truncated.
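    A minimal sketch of that truncation check (the port number and the expected-size limit below are placeholders, not values from this thread):
    import java.net.DatagramPacket;
    import java.net.DatagramSocket;

    public class UdpReceive {
        public static void main(String[] args) throws Exception {
            int largestExpected = 512;                  // assumed application-level maximum
            byte[] buf = new byte[largestExpected + 1]; // one byte larger, to detect truncation
            DatagramSocket socket = new DatagramSocket(16161);
            try {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet);
                if (packet.getLength() == buf.length) {
                    // The datagram filled the whole buffer: it was larger than expected
                    // and has probably been truncated.
                    System.err.println("Oversized datagram - probably truncated");
                } else {
                    System.out.println("Received " + packet.getLength() + " bytes");
                }
            } finally {
                socket.close();
            }
        }
    }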

  • Win SRV 2012 R2 - Desktop heap allocation failed

    Win SRV 2012 R2 - Desktop heap allocation failed
    Every 4-5 days, I get this warning message:
    Warning Win32k (Win32k) A desktop heap allocation failed.
    20 minutes later, this DCOM error pops up every 10 minutes until I restart the server:
    The server {73E709EA-5D93-4B2E-BBB0-99B7938DA9E4} did not register with DCOM within the required timeout
    Please help!

    Hello babajeeb69,
    This may help fix the issue on the server:
    Open regedit and navigate to the following key:
    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\SubSystems
    Then find the following value: Windows SharedSection=X,Y,Z
    where X, Y and Z are the values you'll find there.
    The desktop heap is the "Y" value.
    Double this value, and then reboot the box.
    Hopefully the issue will go away for good.
    It turns out that too many interactive services are running on the machine; since these services each allocate some of the desktop heap, once it is depleted Win32k will starve for memory even with plenty of RAM and disk space free.
    Att,
    Felipe Cobu
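    For illustration (the numbers below are common defaults, not values taken from this thread), the value typically looks something like
        Windows SharedSection=1024,20480,768
    where the second number is the desktop heap size in KB for interactive window stations; doubling it as suggested would give
        Windows SharedSection=1024,40960,768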

  • Heap allocation problem

    Hi all,
    I have a strange heap allocation problem with my application. I will try to explain what happens:
    My AWT-application reads data every second from a serial port and displays it. Max. heap size is set to -Xmx32m because the application has to run in an environment with little memory. JVM is 1.4.2_18.
    After some hours I get an OutOfMemoryError. My first idea of course was: ok a memory leak, use my profiler and see what happens. But I can not find any memory leaks. The following is what I can see in my profiler:
    The first 2 hours everything seems to be OK. The available heap size is stable at 15 MB and the used heap goes up and down as normal. But after 2 hours the used heap size has some peaks. The used heap seems to build up and the peaks get higher but still the used heap size goes down due to GC until a peak is so high that it is over 32MB and my application dies. The recorded objects graph has the same trend as the heap graph. The VM seems to "forget" garbage collection...
    The only idea I have is that heap fragmentation occurs, e.g. because of array allocations. But until now all attempts to track down the problem have failed.
    Any ideas what can be the cause of the problem or what I can try? Any help is welcome.
    Edited by: JBrain on Aug 21, 2008 12:40 AM
    Edited by: JBrain on Aug 21, 2008 4:27 AM
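    One low-overhead first step (a suggestion, not something from the original thread; the class name is a placeholder) would be to run the application with verbose GC logging and see what the collector reports around the peaks:
        java -Xmx32m -verbose:gc -XX:+PrintGCDetails MyAwtApp
    The log shows heap occupancy before and after every collection, so the build-up over the first two hours becomes visible without a profiler attached.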

    Yes, the same limitation in 32-bit Linux applies to JRockit as well. On
    a 64-bit architecture this limit does not exist.
    Regards,
    /Staffan
    Andrew Kelly wrote:
    Sun JDK bug 4435069 states that there is a limitation to the Heap size for a Linux
    JDK of 2G.
    Does this restriction also apply to JRockit? Has this been tested?

  • Max Heap Allocation

    Hello,
    I am working on a memory-hungry Swing-based application that analyzes several GB of data.
    We are running with Java 1.4 on Windows XP. To keep responsiveness up, we want to allocate as much memory to our application as possible and cache data to disk when necessary.
    The maximum amount of heap space that we can allocate with JDK1.4 on XP is approx 1.6 GB ( -mx1638m ). The target machine itself has 2GB RAM.
    MY QUESTION IS: are there any potential problems associated with allocating the full 1638m that XP will allow? (We haven't seen any yet.)
    However one of my colleagues thinks there might be potential problems and feels that we should only allocate 80% of the max (approx 1.3 GB).
    His thinking is that as long as we remain in 'pure Java' there should not be any problems, but JNI calls might lead to problems.
    Just wondering if anyone else has an opinion on this.
    Thanks,
    Ted Hill

    C/C++ code uses the application heap space.
    The Java heap comes from the application heap space.
    The application itself has a maximum amount of heap space. This is fixed by the OS and by how the application starts up (usually due to the way it was built in the first place).
    Consequently, if your C/C++ code grabs a large chunk of memory and keeps it, then the maximum Java heap is decreased by that amount. Likewise, if the Java heap grabs all the space, then the C/C++ code won't have any space left. For example, a 32-bit process on Windows XP normally gets about 2 GB of user address space, and the roughly 1.6 GB practical -Xmx ceiling mentioned above is what is left once the JVM's own code, DLLs, thread stacks, and native allocations are subtracted.
    This applies only to the C/C++ heap, though. When JNI code uses Java objects, those objects (ignoring some trivial usage) exist on the Java heap.

  • Heap Allocation Parameters

    In previous Java versions, -mx was used to set the maximum heap size. As we know, the new -Xmx parameter replaced it. What would occur if -mx were passed to a newer version of Java? We are debugging an issue where -mx was passed to a newer Java process, and our suspicion is that it may have been ignored. Is this correct?

    Java 1.4 (at least) still seems to honor the -ms and -mx options although since they are deprecated that may change in any release.
    Simple proof, run "java -ms" and/or "java -mx" and note the error messages.
    Chuck
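    For illustration (the class name is just a placeholder), the two spellings take the same values:
        java -ms64m -mx512m MyApp
        java -Xms64m -Xmx512m MyApp
    Whether the old form was honoured in a given process can also be confirmed after the fact: Runtime.getRuntime().maxMemory() reports the effective maximum heap, so if -mx had been ignored the JVM's default maximum would show up instead.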

  • JVM heap allocation error

    Hi all,
    I am running a Sun ONE App Server on HP-UX 11i with JDK 1.4.0_02 (shipped with the Sun ONE app server). The hardware has 2 CPUs with 4 GB of RAM. This is the only application that is running on this server.
    I am trying to allocate a minimum heap size of 1024 MB and a maximum of 2048 MB for this application server by setting the JVM options -Xms1024m -Xmx2048m. If I restart the app server with these changes, I get the following error.
    [02/Aug/2004:18:36:25] WARNING (29579): CORE3283: stderr: Error occurred during initialization of VM
    [02/Aug/2004:18:36:25] FATAL (29579): CORE4005: Internal error: unable to create JVM
    [02/Aug/2004:18:36:25] WARNING (29579): CORE3283: stderr: Could not reserve enough space for gen1 generation heap
    I tried various combinations; the app server runs fine with a minimum of 128 MB and a maximum of 512 MB heap size.
    Following are the kernel settings on this server:
    max_thread_proc 3000
    maxdsiz 0x80000000
    maxdsiz_64bit 0x80000000
    maxfiles 8192
    nfile 100000
    maxuprc 2048
    maxusers 800
    Does anyone know what the issue could be, or what other areas I should look at to tune the system?

    Hi,
    Please make sure you have enough swap space.
    I would reserve at least 4GB in your case.
    ak
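    A related observation (an inference from the kernel settings shown above, not something stated in the thread): maxdsiz 0x80000000 works out to exactly 2 GB (0x80000000 bytes = 2048 MB), the same as the requested -Xmx2048m, so the heap plus the JVM's own native allocations cannot fit if the heap has to come out of the process data segment. HP-UX Java tuning notes list maxdsiz/maxdsiz_64bit among the limits to raise for large heaps.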

  • Allocated heap memory goes up even when there is enough free memory

    Hi,
    Our Java application's memory usage keeps growing. Further analysis of the heap memory using JProbe shows that the allocated heap memory goes up even when there is enough free memory available in the heap.
    When the process started, the allocated heap memory was around 50 MB and the memory used was around 8 MB. After a few hours, the in-use memory remains at 8 MB (a slight increase of a few KB), but the allocated memory went up to 70 MB.
    We are using JVM 1.5_10. What could be the reason for the heap allocation going up even when there is enough free memory available in the heap?
    -Rajesh.
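    A minimal way to watch this from inside the process (a sketch, assuming Java 5 or later; the sampling interval is arbitrary) is to poll the heap's used and committed sizes through the standard MemoryMXBean - the committed figure is the "allocated heap" that JProbe is showing:
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;

    public class HeapSampler {
        public static void main(String[] args) throws InterruptedException {
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
            while (true) {
                MemoryUsage heap = memory.getHeapMemoryUsage();
                // committed = memory the JVM has claimed for the heap; used = live objects + uncollected garbage
                System.out.println("used=" + (heap.getUsed() >> 20) + " MB"
                        + ", committed=" + (heap.getCommitted() >> 20) + " MB"
                        + ", max=" + (heap.getMax() >> 20) + " MB");
                Thread.sleep(60000);
            }
        }
    }
    The JVM is allowed to grow the committed size toward -Xmx for its own reasons (GC ergonomics) even while the used size stays flat; starting with -Xms equal to -Xmx makes the committed size constant from the start.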

    Hi Eric,
    Please check if there is any error or warning in the Event Viewer around that date and time.
    If there is any error, please post the event ID to help us to troubleshoot.
    Best Regards,
    Anna

  • JVM heap size and performance

    We are kind of struggling with the good old "out of memory"
    issue of JVM while running our app on oracle 9i j2ee container
    (Solaris 2.6). I do not see much discussion about this
    particular problem in this forum. We are playing with heap
    allocation, garbage collector, max num of instances etc to
    overcome this out of memory issue. If you are running or about
    to run OC4J in a production environment, please let me know what
    measures and settings you are using to tune the JVM.

    We start with 128m, and use 512m for the maximum, and have no
    troubles. That is:
    java -jar -Xms128m -Xmx512m orion.jar
    This is on Linux. There are memory leaks with various JDBC
    drivers, PostgreSQL for example.
    Also, if you do not put a maximum on your instances of entity
    beans, they will just pile up. You can adjust this in the
    orion-ejb-jar.xml file. The bean problem occurs if you have not
    normalized your bean mapping for performance.
    regards,
    the elephantwalker
    www.elephantwalker.com

  • Heap max limit

    Hello, I have done some tests to check the heap size of an application.
    This is my test code:
    public class Main {
        public static void main(String[] args) {
            Runtime runtime = Runtime.getRuntime();
            // Max limit for heap allocation, in bytes.
            long heapLimitBytes = runtime.maxMemory();
            // Currently allocated heap, in bytes.
            long allocatedHeapBytes = runtime.totalMemory();
            // Unused memory from the allocated heap (an approximation), in bytes.
            long unusedAllocatedHeapBytes = runtime.freeMemory();
            // Used memory from the allocated heap (an approximation), in bytes.
            long usedAllocatedHeapBytes =
                    allocatedHeapBytes - unusedAllocatedHeapBytes;
            System.out.println("Max limit for heap allocation: " +
                    getMBytes(heapLimitBytes) + "MB");
            System.out.println("Currently allocated heap: " +
                    getMBytes(allocatedHeapBytes) + "MB");
            System.out.println("Used allocated heap: " +
                    getMBytes(usedAllocatedHeapBytes) + "MB");
        }

        private static long getMBytes(long bytes) {
            return (bytes / 1024L) / 1024L;
        }
    }
    Then I run this program with the option -Xmx1024m, and the result was:
    On Windows: Max limit for heap allocation: 1016MB
    On HP-UX: Max limit for heap allocation: 983MB
    Does anyone know why the max limit is not 1024 MB as I requested?
    And why does it show a different value on Windows than on HP-UX?
    Thanks
    Edited by: JoseLuis on Oct 5, 2008 11:29 AM

    Thank you for the reply
    I have checked and the page size in windows and HP-UX is 4KB.
    Also, the documentation for the -Xmx flag says that the size must be a multiple of 1024 bytes and greater than 2 MB.
    I can understand that the allocated size might be rounded to the nearest page (a 4 KB block), which would give a difference of less than 1 MB between the requested size and the real allocated size, but on Windows the difference is 8 MB and on HP-UX the difference is 41 MB. That's a big difference.
    Am I missing something?

  • Best-fit Heap

    Hi
    I am trying to change a heap allocation from first fit to best fit. I understand that for best fit I will have to traverse the whole memory block and find the block which is closest in size to the one asked for. But how do I keep track of the differences?
    Here is the chunk of code for first fit:
    for (x = free, back = -1; block[x] != -1; back = x, x = block[x + 1])
        if (block[x] > size) break; // size is the size needed by the running program
    Now for best fit I am assuming that I 'let' the first block be the best case and then traverse through the list.
    If someone could give me a few hints... I would really appreciate it.
    -bhaarat

    Thanks. I came up with another algorithm though... it seems about right to me. Please have a look at it and tell me what you think.
    best = free;
    // The && clause is there because sometimes when x == -1 I get a runtime error.
    for (x = free, back = -1; ((x != -1) && (block[x] != -1)); back = x, x = block[x + 1]) {
        if (block[best] == size) break;
        else if (block[best] > size) {
            diff = block[best] - size; // calculate the difference
            // If the next slot of the heap holds -1 there is no need to compare,
            // as we have no choice but to split a block.
            if (block[best + 1] != -1)
                if (((block[block[best + 1]] - size > 0) && (block[block[best + 1]] - size < 10)) && (block[block[best + 1]] - size < diff))
                    best = block[best + 1];
        }
        if (x == 0) break;
    }
    -bhaarat
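    For comparison, here is a minimal best-fit sketch (an illustration, assuming the same free-list layout as the first-fit loop above: block[x] holds the size of the free block at x and block[x+1] the index of the next free block, with -1 terminating the list). The "difference" only needs to be tracked as the size of the smallest sufficient block seen so far:
    static int bestFit(int[] block, int free, int size) {
        int best = -1;                    // index of the best (smallest sufficient) block so far
        int bestSize = Integer.MAX_VALUE; // its size
        for (int x = free; x != -1 && block[x] != -1; x = block[x + 1]) {
            if (block[x] >= size && block[x] < bestSize) {
                best = x;
                bestSize = block[x];
                if (bestSize == size) break; // exact fit - cannot do better
            }
        }
        return best; // -1 means no free block is large enough
    }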

  • Off-heap backing maps seem to generate lots of garbage at insert..!?

    I have been doing a lot of benchmarks of distributed caches with different backing maps. The results were partly positive (I hoped that a partitioned (splitting) off-heap backing map would be almost as fast as a non-splitting on-heap backing map). For reads and various types of queries this turned out to be mostly true (some queries were slightly slower - probably because they were performed per partition).
    For inserts it sadly seems to be another story - already when using a non-splitting NIO backing map, inserts seemed to generate a lot of garbage, slowing the benchmark down significantly, and when switching to a splitting NIO backing map this effect became so extreme that full GC occurred more or less constantly on the cache nodes, slowing execution down to almost a standstill :-(
    Has anybody else tried this and seen the same results or do any of the Coherence developers have some theory?
    To me it would seem like network I/O to off-heap storage (using storage buffers allocated with NIO just like the communication buffers!) should be at least as easy to perform without generating excessive garbage as I/O to heap objects, but since I don't know the internals of Coherence I can't say for sure if there is something that breaks this theory.
    For me the main expected advantage of using off-heap rather than on-heap would have been REDUCED GC activity and shorter pauses, but instead it seems like the result is the opposite - at least when doing inserts...
    My example does not use (or need!) any secondary indexes (it only performs get/put/lock/unlock), but each entry is locked before it is inserted and unlocked after (this is needed for the algorithm I am using as a benchmark) - as I have pointed out in another thread it is a pity that no "lockAll" / "unlockAll" method calls exist (my benchmark is suffering a lot from all the lock/unlock remote calls) - the overhead for this is however nothing compared to the performance hit that comes from all the GC...
    I have tried to tune the GC in several ways but this has only to a very limited extent reduced the GC pause length or the frequency of full GC - it just seems like a LOT of garbage is generated for some reason...
    The settings that have so far resulted in the least GC overhead (still awfully bad though!) are -XX:+UseParallelGC -XX:+UseAdaptiveSizePolicy. I am using Coherence 3.5 GE and Sun JRE 1.6.0_14.
    /Magnus
    Edited by: MagnusE on Aug 10, 2009 3:01 PM

    Thanks for the info - I was indeed using different initial and max sizes in this experiment, and setting them the same eased the problem (now I mostly get incremental rather than full GC messages). Inserts do however still generate more GC activity than reads (which seem to be more or less totally free from Java heap allocation / deallocation, which is VERY good since reads are so common!). Perhaps there is some more tweaking of the heap allocation/deallocation that can be done at the same time as you work on that bug you mentioned - it would really be nice to have a NIO backing map with close to zero Java heap usage for all primitive operations (read, insert, delete)!
    /Magnus
    Edited by: MagnusE on Aug 11, 2009 7:33 AM
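    For reference (the sizes below are placeholders, not values from the thread), the change that eased the problem amounts to starting the cache nodes with matching initial and maximum heap sizes, e.g. -Xms2g -Xmx2g together with the GC flags mentioned above, so the whole heap is committed up front and never has to be resized under load.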

  • HP-UX Kernel Params for WLS 6.0SP2 Cluster to Avoid java.lang.outofmemory and/or thread death

    I'm running a WLS 6.0 SP2 clustered application on HP-UX 11i. I'm seeing heap and thread issues on start-up or invocation of my application as I deploy EJBs and create DB connection pools. These are fairly trivial tasks that don't give me any issues when starting the first node. It's only when I invoke the second node.
    I'm pretty sure that my issue is tied to the following kernel areas:
    1. Thread allocation
    2. Heap allocation
    3. Max Processes per user
    Can anyone make some kernel recommendations that might be beneficial to my deployment? My app runs on NT/2000 and Sun as well, and I haven't seen these issues. I typically allocate 50 to 100 threads per node (3 nodes on a 4 CPU machine) and allocate about 1 GB of RAM per node on a 4 GB machine...
    Regards,
    Steve

    Steve Feldman wrote:
    > I'm running a WLS 6.0 SP2 clustered application on HP-UX 11i. I'm seeing heap and
    > thread issues on start-up or invocation of my application as I deploy EJBs and
    > create DB connection pools.
    HP has some kernel tuning guidelines for Java server apps on their web site,
    and BEA has some notes as well in their platform support page.
    What issues specifically are you seeing?
    Peace,
    Cameron Purdy
    Tangosol, Inc.
    Clustering Weblogic? You're either using Coherence, or you should be!
    Download a Tangosol Coherence eval today at http://www.tangosol.com/

  • CFImage - Out of memory

    Seeking help and advice...
    My project has a high traffic load where image uploading is a key feature for our users. We process images that the user has either resized already, or that came directly off a camera in the camera's image format and size.
    I have been running into an issue where I receive the following error message:
    Java heap space null
    and through CF FusionReactor have found in the logs:
    # An unexpected error has been detected by Java Runtime Environment:
    # java.lang.OutOfMemoryError: requested 23970816 bytes for jbyte in C:\JAVA\jdk6_04\hotspot\src\share\vm\prims\jni.cpp. Out of swap space?
    # Internal Error (allocation.inline.hpp:42), pid=7284, tid=5880
    # Error: jbyte in C:\JAVA\jdk6_04\hotspot\src\share\vm\prims\jni.cpp.
    This occurs at random intervals, the size of the image does not seem to matter, and occasionally the server will restart due to this error.
    In trying to narrow down the source of this error I have used <CFLOG statements all around the steps I implement when uploading an image and further when resizing the image.
    Summary as follows:
    1. User input FORM for image file upload
    2. Process the file via <cffile upload> to a TEMP dir on the server
    3. Perform validation of the file type (JPE and JPEG are seen as OCTETSTREAM **another issue, but this step allows us to accept OCTETSTREAM mimetypes and then check the file extension before proceeding)
    4. If the FILE passes, then COPY the file to the desired location with <CFFILE> *this step also updates the file name to remove spaces and special characters, as well as creates a unique name (as the copy or move action does not seem to allow for 'MAKEUNIQUE')*
    5. Update the database with the final filename / location
    6. Create an object for the resize CFC to check the image size (call imageresize.cfc)
    7. The image CFC performs a <CFIMAGE to load the image into the cfimage object
    8. Check the height and width to determine if a resize is needed
    9. IF resize is true, call ImageScaleToFit
    10. Re-write the image file with <cfimage action="write"> to overwrite the original file.
    11. DONE - show the user a confirm page with the resized image.
    Each step above has a custom <CFLOG for START step ???? -> <CFLOG for END step ???? (also meaning SUCCESS of that step).
    From the log messages I am seeing that the process starts into step 7 and fails with the above messages. The END STEP log is not reached. Loading the image into the CFIMAGE object fails.
    The size of the image does NOT appear to matter; the server simply gets bogged down, probably from a high number of other requests running and other memory loads. Some images will be processed, others fail, and every 12 hours or so the server will crash with no explanation other than the log messages above.
    Can anyone advise me on how to better handle this process? Manage memory? Dump memory? Manage threads? Create a loop to retry the resize process?
    The server I am using is a new WIN2008 server, CF8 (latest hotfix applied), 4 GB of RAM. The Java heap is set to 512 MB MIN - 1300 MB MAX. MaxPermSize=192m
    Thank You

    Looks like a memory leak.
    http://www.performanceengineer.com/blog/java-memory-leaks/
    http://docs.sun.com/app/docs/doc/819-4673/gezax?l=fr&a=view
    "Error Log for Web Server or Application Server
    Exception in thread "service-j2ee" java.lang.OutOfMemoryError: requested 53515 bytes for jbyte in /BUILD_AREA/jdk1.5.0_10/hotspot/src/share/vm/prims/jni.cpp. Out of swap space?
    Description:
    The native heap allocation failed and the native heap may be close to exhaustion.
    Cause:
    A native code leak, for example C or C++ code that continuously requires memory without releasing it to the operating system. There could be indirect causes like an insufficient amount of swap space or another process that is consuming all memory or leaking it.
    Solution:
    For further diagnosis of a native code memory leak, see the Java 5.0 Troubleshooting and Diagnostic Guide at http://java.sun.com/j2se/1.5/pdf/jdk50_ts_guide.pdf, in the section "Diagnosing Leaks in Native Code." See the information about tools for different operating systems. The tools include mdb and dbx (runtime trace) for Solaris 9 U3 or later, mtrace and libnjamd for Linux, and windbg or userdump for Windows."
    http://www.adobe.com/cfusion/webforums/forum/messageview.cfm?forumid=1&catid=143&threadid=1305386&enterthread=y
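    One piece of arithmetic worth noting about the configuration described above (an inference, not something stated in the thread): the failing request is a native (jbyte) allocation outside the Java heap, and if the CF8 JVM is 32-bit - which was common at the time - the whole process only has roughly 2 GB of user address space. With -Xmx at 1300 MB and MaxPermSize at 192 MB, only about 550 MB is left for thread stacks, the JVM's own code and DLLs, and the native buffers the image codecs use, so a ~23 MB native request can fail even though the machine has 4 GB of RAM. Lowering -Xmx (or moving to a 64-bit JVM) gives the native side more headroom.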
