OutOfMemoryError on W2K, not on NT 4

Hi,
I have a program that reads text files and sends them through JMS.
This works fine on a machine with Windows NT 4, but on another machine with Windows 2000 I get OutOfMemoryErrors very quickly.
I have tried running it with the -verbosegc option, and there is something I do not understand:
[GC 4497K->3983K(4568K), 0.0010714 secs]
[GC 4498K->3983K(4568K), 0.0011437 secs]
[GC 4496K->3983K(4568K), 0.0010714 secs]
[GC 4496K->3984K(4568K), 0.0012275 secs]
[GC 4497K->3984K(4568K), 0.0010948 secs]
[GC 4498K->3995K(4568K), 0.0011479 secs]
[Full GC 63551K->4222K(65280K), 0.1820189 secs]
[GC 7604K->5328K(65280K), 0.0067830 secs]
[GC 7377K->5329K(65280K), 0.0010540 secs]
[GC 7379K->5335K(65280K), 0.0011856 secs]
[GC 7384K->5336K(65280K), 0.0011800 secs]
[GC 7385K->5337K(65280K), 0.0012083 secs]
[GC 7386K->5337K(65280K), 0.0012331 secs]
(this is on my W2K box)
I don't understand the Full GC 63551K, while it was just 4498K on the line above. And when I get this line, I also get an OutOfMemoryError. Can someone help me?
I am running my program with JDK1.3.1_09.
Thanks !
Julien

Hi Julien,
We had a similar problem on Unix, where garbage collection happened every minute and took 17 seconds...
We went through the url :
http://java.sun.com/docs/hotspot/gc/index.html
to understand what happens...
We identified that we had been caching one of our RMI Remote instances in our session, which was causing this garbage collection. This might not be what you are facing, but you can have a look at this link and see what is wrong.
Moreover, I heard that the JDK 1.3.1_11 version has resolved most of the JVM-related issues.
Thanks and regards,
Pazhanikanthan. P

Similar Messages

  • Java.lang.OutOfMemoryError  solved (was NOT a leak)

    Alright I'm just posting this to hopefully save someone some time in the future.
    This was a bug that dragged out for over a year. An application would be deployed to a production app server and, lo and behold, OutOfMemoryErrors would start to be thrown; naturally the new application would look like the most likely culprit. So now the unlucky developer gets to spend hours running the memory profiler and looking in vain for a memory leak that does not exist.
    The actual problem was not the heap running out of memory, as in most memory leaks. No, the problem was that the app server runs out of Perm Generation space (this is where class loaders and reflection data are stored). By default the max perm space is 64MB, which is fine for most apps, but on an application server (OC4J in our case), when you have 10 apps with tons of JSPs, this space can fill up.
    You can check your perm gen usage by running the jmap tool with the -heap option and passing it the pid of the OC4J instance; this will tell you your max perm gen space and your current usage.
    Anyway, to raise your max perm gen space, start the java process for your app server with the parameter -XX:MaxPermSize=128m, for example as sketched below.
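    For instance (the pid is a placeholder and the exact startup line depends on how your app server is launched; jmap -heap needs JDK 5+ and is not available on every platform):
        jmap -heap <oc4j-pid>                        # prints max perm gen size and current usage
        java -XX:MaxPermSize=128m -jar oc4j.jar ...  # raise the perm gen ceiling at startup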
    Problem solved.

    Good Luck,
    If 10.1.3 uses 1.5.x of the JVM, then I believe they added an extra string on the end of the OutOfMemoryError that says Heap or PermGen, so you have that hint to go on.
    You should take a look at the jmap tool in 1.5 though; you can make it print a dump of what's in the heap, then open it up in the jhat tool to take a graphical look at what's hanging out in memory. Very useful.
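    For example, on JDK 6 and later the typical commands are roughly as follows (the flags differ slightly on older JDKs, and the dump file name is arbitrary):
        jmap -dump:format=b,file=heap.hprof <pid>
        jhat heap.hprof     # then browse the heap at http://localhost:7000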
    By the way, thanks very much for sharing your experience.
    I will surely encounter your problem since I have a big app.
    I begin deployment to OAS 10.1.3 at the end of the month; I waited for the latest version since it is much easier to maintain than previous versions.
    Regards
    Fred

  • Java.lang.OutOfMemoryError: PermGen space Not able to open applications

    Hi ,
    I am facing an issue running applications after I deploy 10 applications successfully to the integrated WebLogic server [in JDev].
    Then, after some time, I am not able to open any of the applications.
    Thanks and Regards,
    Vivek Pemawat

    Set MaxPermSize to a higher value.
    export JVM_ARGS="-Xmx1024m -XX:MaxPermSize=256m"
    https://forums.oracle.com/thread/1140785

  • Encounter java.lang.OutOfMemoryError when notifyDestroyed

    Hi...
    I am developing a J2ME application using the WTK... when my code reaches notifyDestroyed(), it throws java.lang.OutOfMemoryError and does not exit the MIDlet. Any help?

    There's a bug in the code you didn't think to post here.
    db

  • SAP R/3 Authentication with Active Directory on Win2k server.

    Hello list ,
    We are running SAP R/3 4.7 with WebAS 6.2 on Solaris and a Windows 2000 Active Directory domain. Our users access SAP in 3 ways
    1) SAP GUI .
    2) SAP BW
    3) Travel & Expense - a Java application that records users' travel details and posts a transaction to SAP using the SAP userid and password.
    We wish to implement SSO for all our users.
    Some research we have done suggests
    1) Using Kerberos for authentication. While it appears that the Microsoft Kerberos 5 implementation will work only on Windows servers, it is not clear how well other Kerberos implementations are supported by SAP. OSS note # 150380 and link http://help.sap.com/saphelp_nw2004s/helpdata/en/44/0ebf6c9b2b0d1ae10000000a114a6b/content.htm
    2) OSS note # 352295 suggests there could be some issues using the Kerberos 5 shipped with the various Unixes.
    "All of the major Unix vendors seem to be shipping a version of Kerberos 5 these days. These implementations should be wire-interoperable with each other and with Microsoft W2K (not necessarily W2K3!), however they may not be interoperable with SAP's shared library interface to GSS-API v2 mechanisms."
    3) There are some commercial solutions, like CyberSafe, that provide Kerberos-based SSO for a fee. Has anyone tried this software?
    I have created an OSS ticket, but we have already been in the queue for 5 days.
    Has anyone from the list implemented a similar solution? What are the best practices and the way to go for a robust solution?
    4) Another option we have is to start with user synchronization, wherein users created in Active Directory get synchronized with SAP.
    What is mandatory for us is that users marked disabled in Active Directory should be blocked in SAP by synchronizing user information at regular intervals. If anyone has implemented this solution, I would appreciate some pointers.
    Thanks in advance.
    Harsh Busa

    Tim,
    you are perfectly right: that Vintela product is not certified (as an SNC solution).
    But you are not quite right regarding the separate treatment. The major difference between that product and the SNC-certified products (such as CyberSafe, Entrust, ...) is: Vintela uses different SNC libraries on the client side (=> our Windows SSPI wrappers, see SAP note 352295, http://service.sap.com/~iron/fm/011000358700000431401997E/352295) and the server side (=> their own SNC library, not certified). And that is actually also one reason why that solution cannot be certified ...
    Well, those Windows SSPI wrappers provided by SAP (=> gsskrb5.dll, for example) are also not "SNC certified", but SAP provides support (being in contact with Microsoft). Well, as some people might know, there are also some interoperability issues between different Microsoft OS versions ... - resulting in reactive patches of our SSPI wrappers.
    I really do not want to promote any product - neither the one of Quest Software Inc., nor the one of CyberSafe Ltd (http://www.cybersafe.ltd.uk/), nor Entrust Inc. (http://www.entrust.com), nor SECUDE IT Security GmbH (http://www.secude.com/), nor ...
    I do not even want to discourage anyone from implementing his own Kerberos-based solution (or any other solution which provides a GSS-API), provided that this person is able to help himself. Reason: if products of different vendors are used and interoperability problems occur, the usual finger-pointing will start. In the end you'll not get support from anyone ... - as long as you are aware of this (and capable of helping yourself) you can go ahead. Some (known) universities belong to that group ... - but it might not be appropriate for the vast majority of customers.

  • Issue relating to posting period

    Hi Gurus,
    we have an issue where normal users are also able to post to the special periods.
    Periods From per. 1 to To per. 1 belong to authorization group AAA (special users),
    and periods From per. 2 to To per. 2 belong to authorization group BBB, which are the normal users. But these users are able to post to the special periods as well.
    Can we specify or map anywhere the interval (From per. 1 to To per. 1) to the authorization field, i.e. make that particular interval valid only for the authorization specified?
    thanks in advance.

  • Reference.clear method

    I think that I do not understand the Reference.clear method. I am trying to use SoftReferences to store objects (actually Sets) that can be retrieved from filesystem files. And I am trying to use (override) the Reference.clear method to handle writing said file when one of them is about to go "byebye"...
    But what is happening is that my clear method never gets called AND the SoftReference.get method returns null. (Which means that the referent is gone without clear having been called...which means I don't understand the clear method!)
    SOOOOOO, can someone please either explain how I can be alerted to handle something just before something gets gc-ed, or else enlighten me to a better way to implement this?
    Here's the longer story:
    I have many "records", far too many to even consider keeping them all in memory at once. So I started storing them in "minisets" which I can serialize to files and retrieve as needed. The "minisets" are values in a HashMap with string keys.
    When one of these Sets gets "big enough", I start storing a SoftReference to it instead of "it". Then when I need a particular set, I do a hashmap.get(key), and then either (a) I get the Set, or (b) I get the softref from which I can get the Set, or (c) I get the softref from which I cannot get the Set, in which case I go reconstruct the Set by reading a file.
    The problem is writing the file... if I write a file every time I edit a (potentially large) Set then this program will take 5 days to run. So I think I want to write the file only when absolutely necessary... right before the gc eats it.
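    For what it's worth, here is a minimal sketch of the retrieval side described above (raw types to match the era; loadSetFromFile is a hypothetical stand-in for the real file deserialization):
        import java.lang.ref.SoftReference;
        import java.util.*;

        public class MiniSetCache {
            private final Map cache = new HashMap();    // String key -> SoftReference to a Set

            public Set getSet(String key) {
                SoftReference ref = (SoftReference) cache.get(key);
                Set set = (ref == null) ? null : (Set) ref.get();
                if (set == null) {                      // never cached, or already collected
                    set = loadSetFromFile(key);         // hypothetical: rebuild the Set from its file
                    cache.put(key, new SoftReference(set));
                }
                return set;
            }

            private Set loadSetFromFile(String key) {   // placeholder for the real deserialization
                return new HashSet();
            }
        }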
    Thank you for help,
    /Mel

    Okay, here it is.
    MyRef: My extension of PhRef. Also holds referent data that we need for cleanup, since we can't get to referent once Ref is enQ'ed.
    Dummy: thie is the class that you're storing in the Sets or whatever, and that gets put in a reference.
    Dummy's id_ member: This holds the place of whatever data you need for pre-bye-bye cleanup, e.g. a filename and the internal data of Dummy. MyRef grabs this when it's created.
    (Note: what you may want to create a new class, say DummyWrapper that holds a Dummy and provides the only strongly reachable path to it, pass DummyWrapper to the Ref's constructor, and then have the Ref store the wrapped Dummy (e.g. wrapper.getDummy()) as its member var in place of the id_ that I'm storing--that is, in place of storing a bunch of internal data individually. Like I said, this approach seems overly complex, but I've not found a simpler way that works. I originally tried having the Ref store the Dummy directly as a member var, but then the Dummy was always strongly reachable.)
    Poller: Thread that removes Refs from queue. You can probably just do this at the point in your code where you decide the set is "big enough", rather than in a separate thread.
    dummies_: List that holds Dummies and keeps them strongly reachable until cleanup time. I think you said your stuff was in a Set or Sets. This is the parallel to your Set(s).
    refs_: List of PhRefs to keep them reachable until they get enqueued.
    When I don't use Refs properly to clean things up, I get OutOfMemoryError after about 60 or 120 iterations. (I tried a couple different sizes for Dummy, so I forget where I am now.) With the Refs in place, I'm well over 100,000 iterations and still going.
    import java.lang.ref.*;
    import java.util.*;

    public class Ref {
        public static ReferenceQueue rq_ = new ReferenceQueue();
        // one of these lists was giving ConcurrentModificationException
        // so I sync'ed both cuz I'm lazy
        public static List dummies_ = Collections.synchronizedList(new ArrayList());
        public static List refs_ = Collections.synchronizedList(new ArrayList());
        public static Poller po_ = new Poller();

        public static void main(String[] args) {
            new Thread(po_).start();
            new Thread(new Creator()).start();
        }

        public static class Poller implements Runnable {
            public void run() {
                // block until something can be removed, then remove
                // until nothing left, at which point we block again
                while (true) {
                    try {
                        MyRef ref = ((MyRef) rq_.remove());
                        // This println is your file write.
                        // id_ is your wrapped Dummy or the data you need to write.
                        // You probably want a getter instead of a public field.
                        System.err.println("==================== removed "
                                + ref.id_ + "             ======  ===  ==");
                        ref.clear();       // PhRefs aren't cleared automatically
                        refs_.remove(ref); // need to make Ref unreachable, or referent won't get gc'ed
                    } catch (InterruptedException exc) {
                        exc.printStackTrace();
                    }
                }
            }
        }

        public static class Creator implements Runnable {
            public void run() {
                int count = 0;
                while (true) {
                    // every so often, clear list, making Dummies reachable only thru Ref
                    if (count++ % 50 == 0) {
                        System.err.println("                 :::: CLEAR");
                        dummies_.clear();
                    }
                    // give Poller enough chances to run
                    if (count % 16 == 0) {
                        System.gc();
                        Thread.yield();
                    }
                    // Create a Dummy, add to the List
                    Dummy dd = new Dummy();
                    System.err.println("++ created " + dd.id_);
                    dummies_.add(dd);
                    // Create Ref, add it to List of Refs so it stays
                    // reachable and gets enqueued
                    Reference ref = new MyRef(dd, rq_);
                    refs_.add(ref);
                }
            }
        }

        public static class MyRef extends PhantomReference {
            private long id_;  // data you need to write to file

            public MyRef(Dummy referent, ReferenceQueue queue) {
                super(referent, queue);
                id_ = referent.id_;
            }

            public void clear() {
                System.err.println("-------CLEARED " + get());
                super.clear();  // actually clear the referent
            }

            public String toString() {
                return "" + id_;
            }
        }

        public static class Dummy {
            // just a dummy byte array that takes up a bunch of mem,
            // so I can see OutOfMemoryError quickly when not using
            // Refs properly
            private final byte[] bb = new byte[1024 * 512];
            private static long nextId_;
            public final long id_ = nextId_++;

            public String toString() {
                return String.valueOf(id_);
            }
        }
    }

  • Issue relating to Active  Sync

    Hi All,
    I am facing an issue while trying to run Active Sync. When I try opening the Active Sync Wizard, I am getting the below error:
    Can't call method getObjectNames on class com.waveset.ui.FormUtil ==> java.lang.OutOfMemoryError
    Please suggest where the problem might be...
    Thanks & Regards,

    Hi,
    I have increased my heap size to 8GB, but I am still getting the same error.
    The error is: Can't call method getObjectNames on class com.waveset.ui.FormUtil ==> java.lang.OutOfMemoryError
    I am not able to find FormUtil in my WAR file. Somehow I found that the file is an HTML file from my local WAR, but even if I try making changes in the file, I can't change the method getObjectNames as it is a default method...
    Please suggest a solution.
    Thanks

  • IMS 5.2 windows 2k (no choice) ims_dssetup.pl

    We have to upgrade all servers to Windows 2k server (since MS announced the end of lifetime for NT4 a while ago). We're in a bind here because we can't get IMS5.2 working.
    Now, I understand that it's not officially supported, but it seems like it should work. We can't get the ims_dssetup.pl script to work. We want to use IDS5.1 on the same machine as the ldap server - since the IMS installation says not to use the included DS for new installs.
    First we install iDS5.1 using all the defaults. Then we run ims_dssetup.pl, but get an error about a ns-schema.conf file (which doesn't exist anywhere on this clean install machine). We've also tried copying over the msg/config directory from the ims5.2 install cd to the c:\iplanet\servers directory (where the ids is installed). Still no luck.
    We've posted on the netscape.server.mail and netscape.server.directory newsgroups on this, but haven't gotten anywhere. See thread ims_dssetup in the directory newsgroup and "Installation Issue: Messaging Server 5.2 on Win2k Server" in mail newsgroup.
    What needs to be changed with the iDS5.1 to allow a iMS5.2 install to work? We get the error:
    A serious problem occured while installing the iPlanet Messaging Server
    Domain Component Tree (msg.ugldap.dctree.inf). It reported the problem:
    Server configuration for the domain component tree
    (msg.ugldap.dctree.inf) can not be created.
    After clicking OK, I get the message:
    Due to serious problems, the iPlanet Messaging Server is unable to
    continue. Please examine the log file
    (mytempdir)/iplanet-msg-install.log for more information.
    The log doesn't contain anything that I can figure out (but it is posted on the newsgroup if you think this might help).
    Any ideas?? We're running out of time and I'm afraid that they're going to make us switch to exchange if I can't get this working!!

    <rant on>You're right, W2K is not supported. Since you're using W2K, your user base cannot be that large, in which case I'd go to eBay and get yourself an old Sun machine and put iMS on that.
    Seriously, you're upgrading NT4 to W2K because Microsoft has said they won't support it anymore, yet you are putting iMS on a platform we won't support. Does not make much sense to me.
    </rant off>
    The iMS install is not working because ims_dssetup.pl did not work. Don't waste your time trying to install iMS if ims_dssetup.pl did not work.
    If your user and group LDAP already exists, then install iMS 5.1 with everything local. Post-install, go back and change the configuration to be what you really want. I do this sort of thing all the time. The installer is very picky, and rather than waste time I install local and then change things post-install.
    If your U/G tree is not established already, set it up separately from the iMS install. Then do what I said above.

  • Java program structure !

    hi,
    I'm developing a program and I have some memory problems at runtime.
    My program is something like this:
    public class Service extends JFrame {
        JPanel panel = new JPanel();

        public Service() {
            ListServices();
        }

        public void ListServices() {
            DBServices dbservices = new DBServices(panel);
        }
    }

    public class DBServices {
        JPanel panel = new JPanel();

        public DBServices(JPanel iPanel) {
            panel = iPanel;
        }

        public void fetchDatabase() {
            // do a select statement
            while (rowset != null) {
                JButton detail = new JButton("detail");
                detail.addActionListener( ...   // on click: open the detail dialog, then refetch
                    Detail det = new Detail();
                    fetchDatabase();
                ... );
                JLabel label = new JLabel("name");
                panel.add(detail);
                panel.add(label);
            }
        }
    }

    public class Detail extends JDialog {
        public Detail() {
            ...
        }
    }
    OK, this program makes a list of services; when we press a button (the text could be the service nº), a dialog opens and shows its detail. When I close the detail window, it should retrieve the list from the database again.
    This example works, but every time I press the button to see the detail, the memory taken by the Java process increases, and it doesn't release any memory.
    What can I do to make my program work?
    tks

    Never mind... you're later on clearing the panel, so the button (and its listener) should garbage collect at some point. I don't know... you haven't really shown all the code (not that I'm promising to look at it all), and I don't know if you really have a memory leak. From other topics you opened where you were concerned about a leak, others told you you might not have one - it depends on when the VM decides to garbage collect. As long as it doesn't grow uncontrollably to where it could eventually throw an OutOfMemoryError, there might not be a problem - just a perceived one.
    But after some operations it throws an OutOfMemoryError and the program hangs. I have already tried calling the GC in many places in the code, but it doesn't work.
    The code is very extensive, so I just reduced it to the logical part; the rest is just variables and calculations...
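    For what it's worth, a minimal sketch of the clear-and-rebuild idea mentioned above (class and method names are made up; the point is that removing the old components lets the old buttons and their listeners become collectable):
        import javax.swing.*;

        public class ServiceList {
            private final JPanel panel = new JPanel();

            // Hypothetical refresh: drop the old buttons (and their listeners)
            // before rebuilding, so the previous components can be garbage collected.
            public void refresh() {
                panel.removeAll();       // old JButtons/JLabels become unreachable
                // ... re-run the select statement and re-add fresh components here ...
                panel.revalidate();      // re-lay-out the container
                panel.repaint();
            }
        }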

  • WTK Memory capacity

    What is the current memory capacity of WTK 2.2? How can we handle an out-of-memory error?

    I don't remember the exact value, but you can change the amount of memory for the emulator from WTK -> Preferences -> Storage -> Heap size.
    This will emulate a device with the amount of memory you just specified.
    How to handle out of memory:
    1 - the keyword "new" is not that simple to use (keep that in mind)
    2 - surround the most suspected memory-consuming code with try and catch, to catch OutOfMemoryError (please note this is not an Exception, this is an Error) - see the sketch below
    3 - after catching the error, run gc() to clean up some resources
    4 - gc() is not that cheap to call, so call it when you have just released a heavy object and are about to create an even heavier one.
    Finally,
    good code will be out-of-memory safe, but bad code is not only an OutOfMemoryError generator, it is also a main cause of the phone restart issue.
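    A minimal sketch of point 2 (the byte array is just a hypothetical stand-in for the real memory consumer):
        public class SafeAlloc {
            static byte[] allocate() {
                try {
                    return new byte[64 * 1024];       // the suspected allocation
                } catch (OutOfMemoryError oome) {     // an Error, not an Exception
                    System.gc();                      // point 3: try to reclaim memory first
                    return new byte[16 * 1024];       // then retry with something smaller
                }
            }
        }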
    amr

  • BufferedImage causing OutOfMemoryError not getting GC'd

    I'm writing a photo library application just for practice. I'm trying to display all the jpegs in an album by displaying thumbnails of them as ImageIcons in JLabels on a JFrame.
    To get the images I use ImageIO.read(File) into a BufferedImage. Then using Image.getScaledInstance I pass the resized image to a new ImageIcon which is added to the JLabel. This all happens in a final class ThumbDisplay, method showPic which returns the JLabel.
    I call ThumbDisplay.showPic(File) in a loop, and it can read about 5 files before throwing an OutOfMemoryError. I have been able to successfully display no more than 4 images. Most of my images were taken on my digital camera and are around 2560X1920.
    I read a bit about the Java heap space and I understand how 5 BufferedImages of that size open at once can easily eat up memory. But it looks to me like any instantiated BufferedImages should be GC'd as soon as the showPic method returns the JLabel. JHAT proved otherwise: there were still 5 instances of BufferedImage when I got the OutOfMemoryError.
    So, the following is my example code block and the StackTrace. No extra stuff required. Just throw about 10 extra large jpegs in whatever path you choose to put in 'File directory' on line 9, compile and run. I'd really appreciate some help on this. I've searched countless message boards for that error and found many similar topics but ZERO answers. Can someone tell me why these aren't getting GC'd?
    code:
    1. import javax.swing.*; 
       2. import java.awt.image.*; 
       3. import javax.imageio.*; 
       4. import java.awt.*; 
       5. import java.io.*; 
       6.  
       7. public class ThumbTest{ 
       8.     public static void main(String args[]){ 
       9.         File directory = new File("c:\\pictemp\\"); 
      10.         File[] files = directory.listFiles(); 
      11.         JFrame jf = new JFrame(); 
      12.         jf.setSize(1000,1000); 
      13.         jf.setLayout(new GridLayout(10,10,15,15)); 
      14.         for(File file : files) 
      15.             jf.add(ThumbDisplay.showPic(file)); 
      16.          
      17.         jf.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); 
      18.         jf.setVisible(true); 
      19.  
      20.  
      21.     } 
      22. } 
      23.  
      24. final class ThumbDisplay{ 
      25.      
      26.     public static JLabel showPic(File f){ 
      27.         BufferedImage img = null; 
      28.         try{ 
      29.             img = ImageIO.read(f); 
      30.         }catch (IOException e){ 
      31.             e.printStackTrace(); 
      32.         } 
      33.         if(img != null){ 
      34.             float ratio = 100 / (float) img.getHeight(); 
      35.             int w = Math.round((float)img.getWidth() * ratio); 
      36.             int h = Math.round((float)img.getHeight() * ratio); 
      37.             return new JLabel(new ImageIcon(img.getScaledInstance(w,h,Image.SCALE_DEFAULT))); 
      38.         } else 
      39.             return new JLabel("no image"); 
      40.     } 
      41. }
    exception:
    D:\java\Projects\PhotoLibrary>java ThumbTest
    Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
            at java.awt.image.DataBufferByte.<init>(Unknown Source)
            at java.awt.image.ComponentSampleModel.createDataBuffer(Unknown Source)
            at java.awt.image.Raster.createWritableRaster(Unknown Source)
            at javax.imageio.ImageTypeSpecifier.createBufferedImage(Unknown Source)
            at javax.imageio.ImageReader.getDestination(Unknown Source)
            at com.sun.imageio.plugins.jpeg.JPEGImageReader.readInternal(Unknown Source)
            at com.sun.imageio.plugins.jpeg.JPEGImageReader.read(Unknown Source)
            at javax.imageio.ImageIO.read(Unknown Source)
            at javax.imageio.ImageIO.read(Unknown Source)
            at ThumbDisplay.showPic(ThumbTest.java:29)
            at ThumbTest.main(ThumbTest.java:15)

    sjasja wrote:
    ImageIO.read() does not cache images. Run ImageIO.read() in a loop for a large image. No OOME.
    Run the OP's original program under hprof. See in hprof's output how the original image data are retained. Gives an OOME, and hprof's output shows where the memory goes.
    After creating a resized image with Image.getScaledInstance(), make a copy of the resized image and discard the first resized image. (Create a BufferedImage, get a Graphics2D, and blit with g.drawImage()). Discarding the resized image will also make the pointer to the original large image go away, allowing GC. No OOME.
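    For reference, here is one sketch of that copy step (the class and method names are made up; this version scales directly with drawImage instead of going through getScaledInstance, in line with the FAQ mentioned further down):
        import java.awt.*;
        import java.awt.image.BufferedImage;
        import javax.swing.*;

        final class Thumbs {
            // The returned thumbnail holds no reference back to the full-size
            // BufferedImage, so the original can be garbage collected.
            static JLabel thumbLabel(BufferedImage img, int w, int h) {
                BufferedImage thumb = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
                Graphics2D g = thumb.createGraphics();
                g.drawImage(img, 0, 0, w, h, null);  // scale while drawing; synchronous for a BufferedImage source
                g.dispose();
                return new JLabel(new ImageIcon(thumb));
            }
        }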
    In the OP's program, edit like so:
    // return new JLabel(new ImageIcon(img.getScaledInstance(w,h,Image.SCALE_DEFAULT))); 
    new JLabel(new ImageIcon(img.getScaledInstance(w,h,Image.SCALE_DEFAULT)));
    return new JLabel("yes image");
    You are now doing all the image loading and resizing the original program does. But because the JLabel is discarded, the pointer to the resized image is discarded, thus the pointer to the original image is discarded, and everything can be GC'd. No OOME.
    If you want to see how the scaled image retains a pointer to the original image, see the interaction of Image.getScaledInstance(), BufferedImage.getSource(), java.awt.image.FilteredImageSource, sun.awt.image.OffScreenImageSource, and sun.awt.image.ToolkitImage.
    By these experiments, in a reply above, I guesstimated: "As far as I can figure out, Image.getScaledInstance() keeps a pointer to the original image."
    Yes, getScaledInstance() is somewhat slow. Here is the FAQ entry for a better way to resize an image: http://java.sun.com/products/java-media/2D/reference/faqs/index.html#Q_How_do_I_create_a_resized_copy
    I have a problem with this because it should mean that the code fragment I posted in reply #5 should make no difference but I can load over a thousand images using it whereas the original code allowed me to load only 30 or so.
    I can see how creating a buffered image from the scaled image might help since discarding the scaled image will discard any reference to the original image stored in the scaled image.

  • OutOfMemoryError in 1.4.1, not in 1.3.1

    I'm having trouble diagnosing an OutOfMemoryError in my application and I would like to double-check my assumptions here, if I may. I apologize in advance for the length of this post... I figured the more information the better.
    The situation is this:
    When running the application, it will eventually generate an OutOfMemoryError.
    The OutOfMemoryError always happens during garbage collection and there is plenty of collectable garbage. What I mean to suggest is that I did not actually run out of memory.
    If I specify the minimum heap size to be larger than default (equal to the max heap size at the extreme), I effectively increase the space allowed for allocation before a garbage collection which makes for a longer garbage collection which causes the error to occur much more quickly - usually within the first 3 garbage collections.
    I am currently testing with the Windows JDK1.4.1 on Windows 2000/SP3.
    My expectation (I have not verified this yet, perhaps someone here can confirm this) is that the JVM is supposed to guarantee that any time an allocation is attempted that, if there isn't enough memory available on the heap, the GC will run and attempt to give me my memory before it throws an error. I do not believe that this is what is happening.
    The application uses several threads and my belief is that this must be some sort of race condition, incorrect or incomplete synchronization. When the OutOfMemoryError occurs during the collection, I will get anywhere from 1 to 11 (so far) threads reporting the error.
    The reason I really don't think I'm running out of memory is based on observing the output of a profiler (OptimizeIt). I start with "-Xms192m -Xmx192m". Heap usage starts out at a nominal value of 15m. Application runs under load, eventually usage gets to within 1K of the limit and a "Full GC" begins. The first couple of times, memory usage drops back to the starting value of 15m. Then it fails. Either I "sprung" a really big leak or there is some other problem.
    I have observed that, when running under 1.3.1, the GC runs while there is a LOT more room before reaching the heap limit. For example, in this case it will run when heap usage reaches around 180m. I have also noticed that, when running "-verbose:gc -XX:+PrintGCDetails", it appears that there is never a non-Full GC.
    If anyone has any suggestions on where to look for this problem, I'm all ears. I've used a profiler to determine where most of the allocations are occurring, but that just doesn't help. I'm not really running out of memory; this could be a single-byte allocation for all I know, happening when it shouldn't. I can't believe that it's a VM bug or I would have heard more about it.

    Well, as I mentioned, I am starting at only 15m of heap usage. I then climb to 192m and drop back down to 15m after the collection. Even under load, my baseline memory usage is only about 32m. There is a lot of temporary creation (I've done a lot to reduce it, which has helped, but still quite a bit).
    The duration of the GC appears to be part of the equation. If the GC lasts a long time, it is more likely to happen.
    Maybe I should also point out this: If, rather than specify "-Xms192m -Xmx192m", I use "-Xms128m -Xmx192m", I may get an OutOfMemoryError before heap size even grows to 192m.

  • Oracle 8i Enterprise will not install under w2k sp3

    Hi!
    My problem is that the setup program from the CD does not start when opened.
    My PC is a P4-M, 1 GB RAM, with W2K SP3.
    Do you have any hints?
    Regards
    Heiko

    No, not really!
    I have found some info about trouble with installation on a P4 machine.
    The workaround is described as:
    - copy your CD to the hard drive
    - search for symcjit.dll and rename it
    - start the installation
    It worked, but there were some failures. When I tried to deinstall, I had the same problem again. But this time the workaround didn't work.
    Regards
    Heiko

  • J2EE 70 does not start - OutOfMemoryError rc=666

    We have NW04s ABAP+J2EE (usage types EP, BI) on 32-bit Windows, 2048 MB RAM installed.
    The J2SDK is 1.4.2_17.
    We applied the parameters of notes 723909 and 1044330 for Win32 (-Xms and -Xmx = 1024m, Perm and MaxPerm = 256m, New and MaxNew = 160m); anyway, the J2EE server0 does not come up.
    It starts and arrives at state 'Starting Apps' in the MMC, and in the log std_server0.out we see 'Framework started', but after that we see these entries:
    730.927: [GC 730.927: [ParNew: 239922K->70400K(245760K), 0.3252431 secs] 293939K->128044K(966656K), 0.3253927 secs]
    FATAL: Caught OutOfMemoryError! Node will exit with exit code 666
    ===========================================
    getThreadDump : Tue Jun 10 15:42:50 2008
    FATAL: Caught OutOfMemoryError! Node will exit with exit code 666
    ===========================================
    java.lang.OutOfMemoryError: unable to create new native thread ......."
    As note 1044330 says the usage type BI Java requires more heap in the Perm area, and we suspect there is not enough free heap memory to start the J2EE applications, we tried to enlarge it to 512m (Perm and MaxPerm), but then the J2EE server0 does not come up at all.
    In std_server0.out we see these entries:
    "....Reserved 1610612736 (0x60000000) bytes before loading DLLs.
    [Thr 4876] MtxInit: -2 0 0
    Error occurred during initialization of VM
    Could not reserve enough space for object heap"
    We also tried with Perm and MaxPerm equal to 320m, but the result is the same.
    We reduced the ABAP instance to a minimum in terms of the number of work processes, buffers and Phys_mem_size. We also reduced the Oracle SGA to a minimum.
    Any advice?
    bye

    Thanks for the feedback, but...
    We read these notes; anyway, any attempt to increase the heap size produces the error 'Could not allocate ....'.
    This is a 32-bit system.
    On the other hand, if we leave Xmx=Xms=1024M and the other parameters suggested in the OSS notes for the 32-bit platform, the server0 starts, but after a while it gives the error "Out of memory...."
    It seems the heap is not enough to start all the applications.
    We installed the SAP Address Viewer and identified the JLAUNCH process responsible for the server0.
    If we do "List/Rebase", we see 8 DLLs: 7 from the J2SDK 1.4.2_17 in use and one from Windows.
    Is it possible that the J2SDK 1.4.2_17 competes with JLAUNCH for the heap memory?
