Out of Memory with 64MB?

I've made a decent-sized application (a 771 KB jar) and it runs just
fine using JRE 1.1.6 on my machine with 128 MB of memory.
When I run it on any 64 MB machine (everything else being the same),
it gets really slow, has trouble loading the last third of the
classes, and then the database connection times out. It looks
like it's running out of memory, but what can I do to fix this?
Surely it shouldn't be running out with 64 MB; it's not that big
an application and I don't keep a ton of data in memory or
anything.
Any ideas?
Thanks,
Kevin Williams

I have the same problem with this version on Mac OS 10.9.1 with 16 GB RAM and a 2 TB hard drive (1.7 TB available). I have to restart InDesign after a while; it happens even with single-page INDD files without any images. I have informed someone from the InDesign team, but I don't have enough hard facts to submit a bug report. It doesn't happen all the time, but once it starts it is annoying.
Which other programs are running at the very same time? In the background? I'd like to know, to work out why it is happening here and why it is happening for you, so that I can gather some hard facts to submit a bug report.

Similar Messages

  • Out of memory with no swap causes disk activity

    Can someone explain what exactly is being read/written from/to disk in this situation?
    I have 2 GB of RAM and no swap partitions. Occasionally I'll forget how inefficient gwenview is at displaying very large images and accidentally double-click one. The entire system freezes; even alt+sysrq keystrokes are ineffective (and yes I do have them enabled).
    For about 5 minutes, the system is locked up and the hard drive light is flickering. That scares me a bit, because with no swap, what could it possibly be doing for 5 straight minutes? I used to think it was syncing before doing the OOM-killing, but there's no way a sync could take that long. Judging from the sound of the hard drive, it's hda (the drive that / and all the other system partitions are on).
    A few times after recovering from this I've run extensive data verification and never found any evidence of corruption, but I'd like to know for sure that the kernel isn't randomly deciding to use some filesystem as swap space.
    In the meantime, I'm playing with disabling overcommit -- setting vm.overcommit_memory = 2 in /etc/sysctl.conf. That enforces a hard memory commit limit of swap size + overcommit_ratio * RAM size (so I've read) -- and I've also read that the default overcommit_ratio is only 50%. What the bloody hell? It's almost like someone thinks swap is more important than RAM -- hell-llo, I have 2 GB of RAM so that I can get *away* from swap!
    Anyway, I've set the ratio to 97% and so far things seem happy -- if I deliberately run out of memory, the process that did it always gets killed instantly and the system doesn't freeze up on OOM anymore.
    Another thing -- in all my out of memory situations so far, VMWare has been running. I suppose it's possible that VMWare is the one doing the swappage; I'll have to investigate that further.
    ~Felix.

    I think I've finally figured this out. It's a kernel bug -- I'm guessing that under normal circumstances, the "cached" column in the free command "doesn't count" towards how much memory the system thinks it's using. After all, it's just cached copies of stuff that should be elsewhere, and if you run out of memory, you can safely dump that, right? Unfortunately, /dev/shm is counted under cached rather than used memory (as I discovered in an earlier post).
    So if I've got 500 MB of stuff in /dev/shm * (which is where I mount my /tmp), there's now 500 MB of stuff in the "cached" column that really does count. The system fills up all its RAM, decides it needs to dump the cache, and suddenly finds that the 500 MB it thought it could reclaim isn't reclaimable. For some reason it takes about 5 minutes of hard-drive thrashing (probably because it has already chucked all of the system libraries, etc. out of the cache and needs to re-read them from disk every time) before something finally figures out that it really is out of memory, that that 500 MB isn't letting go, and invokes the OOM killer.
    *: VMWare does this; it creates a 512MB file (the amount of RAM in my virtual machine) then hides it by keeping the file open and deleting it, so the inode's still there, but you can't see it and it makes the df command really perplexing... but that's another story.
    I haven't had a chance to try this with a newer kernel (maybe they've fixed it now?); I'm still running 2.6.23-ARCH here. (pacman -Syu upgrades are a major production for me because I have lots of RAID arrays and things, and an nvidia graphics card, and I use gnucash which sometimes needs manual recompiling, and so on...)

  • Out of memory with lots of memory

    this is the log output:
    05/08/27 19:18:05 Runtime.getRuntime().freeMemory()=856 M
    05/08/27 19:18:05 Runtime.getRuntime().totalMemory()=1012 M
    05/08/27 19:18:05 In use=156 M
    05/08/27 19:18:05 Runtime.getRuntime().maxMemory()=1012 M
    05/08/27 19:18:09 java.lang.OutOfMemoryError
    As you can see, there is enough memory, and yet the out-of-memory error still comes.
    I run a standalone server with 1.5 GB of RAM. The heap size is set to 1024 MB, and Xss and MaxPermSize are also set to appropriate sizes.
    What could be the source of the error?

    That must be puzzling. I do not have the answer, but a little more data from you would be helpful.
    If you are using Windows, can you go to task manager to verify how much memory your java process is using? On unix, the command "top". Set your MaxPerSize to 256M just for sure. If the process hangs when the OutOfMemoryError happens, do a full stack dump by "ctrl+break" on Windows or "ctrl+^" on unix/linux to see what is happening. If a stackTrace of the thread that causes the OutOfMemoryError can be obtained, that will be the most helpful.

  • Out of Memory with LR 1.1 - yes, just point me to the FAQ or something

    Yes, I know this topic has been discussed ad nauseum, but I've yet to see an actual description of the problem and possible solutions that actually work.
    I run LR1.1 on Windows XP with 2gb ram.
    1.1 is still slow, it's a super pig when you have >10k images. I just can't take the out of memory issues any longer. Starting and restarting LR every 10 minutes is really boring.
    anyone, anyone....? Come on, blaming it on XP is lame, UNLESS you can describe to me exactly why this is happening.
    Or is this one of those bugs that the LR team punted on because it really is an OS issue, one they could fix, but fixing it would be too expensive?

    Hooray!
    After struggling with issues similar to yours on a fast computer running Windows XP Professional, I found the solution!!
    According to someone's post, Lightroom 1.1 has virtual memory problems. This is solved very simply by turning off (deselecting) the "automatically write changes to XMP" checkbox in the catalog settings. Then, after you have developed or made changes to an image, simply select the image and, under the Photo menu, press "Save Metadata to File" (Control/Command S). Apparently Lightroom tries to read/write the metadata on every image when "automatically write changes to XMP" is selected. This is why CPU usage goes so high and why your images will not load quickly in the Develop module.
    Try this, I think this is the answer to many problems not associated with faulty database conversions.
    Good Luck with this tip.

  • Out of Memory with InDesign CC

    I have a new computer (ASUS, Windows 8 (I hate it), Intel Core i7-4700HQ CPU, 16 GB DDR3, 64-bit OS, x64-based processor, 1024 GB 5400 rpm hard drive, 256 GB solid-state drive). While working in InDesign CC on a 24-page newsletter, I get an "out of memory" popup that forces me to leave InDesign. I have done this newsletter for the past ten years, so the parameters have never changed. I had no problems until using CC. I have a Creative Cloud membership (I love it). This also happened on my old computer. I'd done this same project on my old computer for three years using CS4 and CS6. When I downloaded CC, I got the "out of memory" popup. This prompted me to buy a new computer (which I figured it was time for anyway). But now I have the same problem. I bought the best computer I could afford and I don't think I'm lacking in any area.
    Any thoughts?

    There's something about that newsletter that is corrupt. Have you been using the same file for the last ten years? Or the same art, logos, etc.? You may want to trash it and rebuild from scratch, or at least export INX/IDML and reopen & resave.
    My clients mostly ignored my advice to not open old INDDs in CC and resave, but instead to export INX or IDML and open that in CC. So I've seen quite a few buggy documents over the last two months.
    I'm pretty sure that 100% of the "out of memory" errors I've encountered using InDesign since CS1 have boiled down to bugs or corrupt documents or memory leaks, not actual problems with the size of my publications and the amount of memory installed.

  • Out of memory with 16gb ram?

    I am running CS4 on a 3.0 GHz 8-core Mac Pro with 16 GB of RAM and a RAID 0 scratch drive made from six 15k drives. I had my memory usage set to 92% with 10.5.6. I recently upgraded to Snow Leopard and was working on a 3 MB file, with no other applications running besides Firefox. I was trying to use the quick selection tool, and when I tried subtracting from the selection I got the "out of memory (RAM)" error. So I dropped the RAM usage for PS down to 85% and that seemed to work. Is this normal for Snow Leopard? In the past, people had recommended that if you had over 10 GB of RAM you should set your RAM usage to 100%, but I see that must no longer be the case?
    Thanks,
    Steve

    Now I just tried opening up one page of a 9-page PDF file in CS4, and I got the "not enough memory" error again. I have 10.5.6 on another drive, so I went back to that operating system and tried opening the same file in CS4 with the memory usage set at 92%, and it opened up fine.

  • "Out of memory" - with silly proportions

    Hello everyone,
    Even though I'm not the most frequent user of Encore, I do DVD productions in my studies from time to time. So I was surprised to get an error on my new computer configuration, and I cannot help thinking that I might be doing something wrong that I do not know of.
    I run a Wintel machine with:
    * Windows 7, 64-bit Ultimate (updated, genuine build - as in no beta/RC)
    * 8 GB RAM
    * HDDs with at least 20 GB left on the system drive and 60 GB left on the drive with projects
    * CS4, Master edition (updated, student licence)
    and I am trying to render a project with:
    * ONE menu, 6 chapters
    * a 3-second transition effect
    * a pre-rendered (Premiere) "DVD-MPEG2" file that is 1.1 GB in total
    and no matter how I do it (making new timelines, re-rendering the main file, burning directly vs. ISO file vs. folder), I get an "Out of memory" failure at the end of the creation.
    So how can it go wrong? Is it that Encore CS4 is not "compatible" with Win7 64-bit, or what? Win7 and getting 8 GB of RAM are the only things that are "new" since I last rendered a DVD.
    Thank you for your quick replies, I need to get this sorted for a presentation today... preferably.
    Best regards

    >Windows 7, 64-bit Ultimate
    Scan the forum for titles, or do a search... LOTS of problems with Win7 64bit and Encore
    Try this http://www.microsoft.com/windows/virtual-pc/download.aspx
    I have not used it (building a Win7 computer next year) but it MAY help
    Some other ideas...
    Win7 Help http://social.technet.microsoft.com/Forums/en-US/category/w7itpro/
    Compatibility http://www.microsoft.com/windows/compatibility/windows-7/en-us/Default.aspx
    Adobe Notes http://kb2.adobe.com/cps/508/cpsid_50853.html#tech
    Optimizing http://www.blackviper.com/Windows_7/servicecfg.htm

  • What causes my iPhone 5s to keep running out of memory with any new download? I recently updated to iOS 8.2

    What causes my iPhone 5s to keep running out of memory without any new download? I recently updated to iOS 8.2.

    I meant to say without downloading or receiving anything

  • Running out of memory with Tomcat !!!!!

    Hello gurus and good folk:
    How can I ensure that a JSP page that builds a ResultSet doesn't run out of memory? I have set the -Xmx flag of the JVM to 1024 MB and it still runs out of memory! The size of the data being queried is only 30 MB. One would think the JDBC driver would be optimized for large ResultSets. Any pointers will be very helpful.
    Many thanks
    Murthy

    Hi
    As far as I can tell, 30 MB of data is pretty big for an online app. If you have too many rows in your ResultSet, you could (or should) consider implementing paging and fetching x records at a time. Or you could just set a maximum limit on the number of records to be fetched (typically useful for 'search and list' type apps) using Statement.setMaxRows(). This should ensure that out-of-memory errors do not happen; a rough sketch follows below.
    If the data chunk per row is large, consider displaying only a summary in the result and fetching the 'BIG' data column only when required (e.g. fetch the column value for a particular row only when that row is clicked).
    Hope this helps!
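    (Not from the original thread.) A minimal sketch of the setMaxRows()/paging idea described above, written against a modern JDK for brevity. The table name, column names, and page size are assumptions made purely for illustration:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import javax.sql.DataSource;

        public class OrderPager {
            // Print at most 'pageSize' rows so the ResultSet never has to buffer the whole table.
            public static void printFirstPage(DataSource ds, int pageSize) throws Exception {
                try (Connection con = ds.getConnection();
                     PreparedStatement ps = con.prepareStatement(
                             "SELECT order_id, customer FROM orders ORDER BY order_id")) {
                    ps.setMaxRows(pageSize);     // hard cap on the number of rows returned
                    ps.setFetchSize(pageSize);   // hint to the driver to stream rows in small batches
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            System.out.println(rs.getInt("order_id") + " " + rs.getString("customer"));
                        }
                    }
                }
            }
        }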

  • General error and out of memory with Final Cut Pro 6

    I just upgraded to a Mac mini i7. What setting should I use to avoid these errors?

    So you were successful in getting FCS 2 apps installed in Mountain Lion?
    Are these messages you're getting now or ones that you have gotten in the past using FCP?
    Out of memory warnings are often caused by still images that are not RGB, or by trying to use unsupported media.
    Russ

  • Out Of Memory with tons of free mem

    I need help diagnosing a problem I'm having with a server application.
    The problem:
    I'm getting general flakeyness and sometimes "OutOfMemory: cannot create new native thread" after the application has been running several hours (with 150 or so logins and 300 or so signs rendered).
    The environment:
    We run Tomcat 3.2.4, Xalan2, JAXP1.1, mysql w/ mm.sql JDBC driver v1.2.0.13, JDK 1.3.1_03
    on a SunFire v880 w/ 4 processors, 8Gig RAM running Solaris 8. JVM args set to -server -Xms3g -Xmx3g -XX:NewSize=1024m -XX:MaxNewSize=1024m -XX:SurvivorRatio=16 -XX:+DisableExplicitGC
    What I've tried:
    I've run under OptimizeIt and found only small amounts of memory being consumed on an ongoing basis, something the GC ought to be able to handle. I've played with the GC parameters in an effort to get this problem under control. I monitor the free memory and can see it go up and down throughout the day. The lowest the free memory has come down to is 900 MB; then it bounces back up to 2.7 GB or so. I suspected fragmentation, but from what I've read, HotSpot shouldn't allow that to happen.
    That exception is thrown from native code in "jvm.cpp" in the JVM_StartThread() method. Comments say it could happen when memory is low (duh).
    Our App:
    We render in-store signs for a major US department store. Our app provides simplified sign composition. We render the signs for printing (generating PostScript for the printers). Sometimes we deal with buffers as large as 112 MB. We use XML for sign templates and jobs. We spit out XML from servlets, using XSLT to transform it to HTML for the browser. Our sign templates use some XSLT to add common things into the templates as they are loaded. We support image upload to a repository that the sign creators can use in their signs. So we deal with lots of image data and XML throughout the app. We hit the database throughout the workflow.
    HELP!

    Hi,
    I had the same problem under Linux, but found a solution:
    There are 2 different factors for this problem.
    There is the "Xss" option of Java and the "stack size"
    as reported/set by ulimit:
    ulimit -a
    core file size (blocks) 0
    data seg size (kbytes) unlimited
    file size (blocks) unlimited
    max locked memory (kbytes) unlimited
    max memory size (kbytes) unlimited
    open files 1024
    pipe size (512 bytes) 8
    stack size (kbytes) 8192
    cpu time (seconds) unlimited
    max user processes 16896
    virtual memory (kbytes) unlimited
    When using Java 1.3.1, the "Xss" option is ignored, so
    only the "stack size" of ulimit counts. Setting the
    stack size using ulimit is done with the -s option:
    ulimit -s 8192
    This sets the stack size per Java thread to 8 MB.
    The default stack size in Redhat Enterprise Server 3.1
    is 10MB. When a Java thread is started, this 10MB is not taken
    immediately, but the Linux kernel does guarantee this thread 10MB
    if it needs it. Suppose you have 2 GB of memory in your machine; then
    after starting 200 Java threads, the light goes out even though
    "top" reports 1.5GB of free memory. The memory is free, but the
    kernel has already promised all of it to the 200 executing Java
    threads.
    Under java 1.4.2, the Xss option is not ignored. It has the same
    effect as the "stack size" set by ulimit; specifying a large amount
    of memory for Xss will limit the amount of threads you can start.
    With Java 1.4.2 it seems both of these settings have to be tuned; a small experiment for comparing them is sketched below.
    Greetz,
    Chris Twigt
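    (Not from the original thread.) A tiny experiment, to be run on a test machine only, that makes the stack-size trade-off visible: it keeps starting threads that just sleep until the JVM refuses to create another native thread, so you can compare the count you reach under different -Xss and "ulimit -s" settings. The class name and the sleeping behaviour are illustrative only:

        // ThreadCeiling.java - run e.g. "java -Xss256k ThreadCeiling", then again with a
        // larger stack size, and compare how many threads could be created before failure.
        public class ThreadCeiling {
            public static void main(String[] args) {
                int count = 0;
                try {
                    while (true) {
                        Thread t = new Thread(new Runnable() {
                            public void run() {
                                try {
                                    Thread.sleep(Long.MAX_VALUE);   // park so the stack stays reserved
                                } catch (InterruptedException e) {
                                    // ignore: we only care about how many threads could be started
                                }
                            }
                        });
                        t.setDaemon(true);   // daemon threads let the JVM exit afterwards
                        t.start();
                        count++;
                    }
                } catch (OutOfMemoryError e) {
                    System.out.println("Created " + count + " threads before: " + e);
                }
            }
        }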

  • Generating large amounts of XML without running out of memory

    Hi there,
    I need some advice from the experienced XDB users around here. I'm trying to map large amounts of data inside the DB (Oracle 11.2.0.1.0), and by large I mean files up to several GB. I compared the "low level" mapping via PL/SQL in combination with ExtractValue/XMLQuery against the elegant XML view mapping, and the best performance came from the view mapping using the XMLTABLE XQuery PATH constructs. So now I have a view that sits on several BINARY XMLTYPE columns (where the XML files are stored) for the mapping, and another view above this mapping view that constructs the nested XML result document via XMLELEMENT(), XMLAGG(), etc. Example code for better understanding:
    CREATE OR REPLACE VIEW MAPPING AS
    SELECT  type, (...)  FROM XMLTYPE_BINARY,  XMLTABLE ('/ROOT/ITEM' passing xml
         COLUMNS
           type VARCHAR2(50) PATH 'for $x in .
                                   let $one := substring($x/b012,1,1)
                                   let $two := substring($x/b012,1,2)
                                   return
                                     if ($one eq "A")
                                       then "A"
                                     else if ($one eq "B" and not($two eq "BJ"))
                                       then "AA"
                                     else if (...)
    CREATE OR REPLACE VIEW RESULT AS
    select XMLELEMENT("RESULTDOC",
                     (SELECT XMLAGG(
                             XMLELEMENT("ITEM",
                                          XMLFOREST(
                                               type "ITEMTYPE",
    ) as RESULTDOC FROM MAPPING;
    Now all I want to do is materialize this document by inserting it into an XMLTYPE table/column.
    insert into bla select * from RESULT;
    Sounds pretty easy, but I can't get it to work; the DB seems to load a full DOM representation into RAM every time I perform a select, an insert into, or use the xmlgen tool. This representation takes more than 1 GB for a 200 MB XML file, and eventually I run out of memory with:
    ORA-19202: Error occurred in XML PROCESSING
    ORA-04030: out of process memory
    My question is: how can I get the result document into the table without memory exhaustion? I thought the DB would be smart enough to generate some kind of serialization/data stream to perform this task without loading everything into RAM.
    Best regards

    The file import is performed via JDBC; CLOB and binary storage are possible up to several GB, while the object-relational storage gives me ORA-22813 when loading files of more than 100 MB. I use a plain prepared statement:
        File f = new File( path );
        PreparedStatement pstmt = CON.prepareStatement( "insert into " + table + " values ('" + id + "', XMLTYPE(?) )" );
        pstmt.setClob( 1, new FileReader(f) , (int)f.length() );
        pstmt.executeUpdate();
        pstmt.close();
    The DB version is 11.2.0.1.0, as mentioned in the initial post.
    But this isn't my main problem; the one above is. I prefer using binary XMLType anyway, as it is much easier to index. Does anyone have an idea how to get the large document from the view into an XMLType table? (A self-contained variant of the import snippet is sketched below.)
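    (Not from the original thread, and not an answer to the DOM materialization issue.) A self-contained variant of the import snippet above that binds the id as a parameter and streams the file content as a character stream instead of concatenating values into the SQL string. The table layout (an id column plus an XMLTYPE column) is assumed to match the original example:

        import java.io.File;
        import java.io.FileReader;
        import java.io.Reader;
        import java.sql.Connection;
        import java.sql.PreparedStatement;

        public class XmlImport {
            // Insert one XML file into a table of the form (id VARCHAR2, doc XMLTYPE).
            public static void importFile(Connection con, String table, String id, File f) throws Exception {
                String sql = "insert into " + table + " values (?, XMLTYPE(?))";
                PreparedStatement pstmt = con.prepareStatement(sql);
                Reader reader = new FileReader(f);
                try {
                    pstmt.setString(1, id);                           // bind the id instead of concatenating it
                    pstmt.setCharacterStream(2, reader, f.length());  // stream the file content
                    pstmt.executeUpdate();
                } finally {
                    reader.close();
                    pstmt.close();
                }
            }
        }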

  • System running out of memory

    I have deployed Windows Embedded Standard 7 on an x64 machine. My answer file includes the File Based Write Filter (FBWF) and my system has 8 GB of RAM installed. I have excluded some working folders for a specific piece of software, and other than that no big changes happen in the system. I have set the overlay size of FBWF to 1 GB.
    Now my problem is that after the system works for some time, the amount of free memory starts to decline, and after around 7-8 hours the available memory reaches a critical level, the system becomes unusable, and I have to reset it manually. I have increased the size of the overlay to 2 GB, but this happens again.
    Is it possible that this problem is due to FBWF? If I set the overlay size to 2 GB, the system should not touch any more than that 2 GB, so I should never run out of memory with 8 GB of installed RAM. Am I right?

    Would you please take a look at my situation and give me a possible diagnosis:
    1- I have "File Based Write Filter" on Windows Embedded Standard 7 x64 SP1.
    2- The installed RAM is 8GB and size of overlay of FBWF is set to 2GB.
    3- When the system is giving the critical memory message the conditions are as follows:
    a) The consumed memory in task manager is somewhere around 4 to 4.5 GB out of 8GB
    b) A process, schedule.exe (from our software), is running more than a hundred times and is consuming memory, but its .exe file is located inside an unprotected folder.
    c) executing fbwfmgr.exe /overlaydetail is reporting that only 135MB of overlay volume is full!
    Memory consumed by directory structure: 35.6 MB
    Memory consumed by file data: 135 MB
    d) The CPU usage is normal
    I don't know what exactly is full. The memory has free space and the FBWF overlay volume has free space, so which memory is full?
    p.s.: I checked my answer file and paging file is disabled as required.

  • Problem with out of memory and reservation of memory

    Hi,
    we are running a very simple Java program on HP-UX that does some text substitution, replacing special characters with other characters.
    The files that are converted are sometimes very large, and now we have come to the point where the Java server doing the work crashes with an "Out of memory" message (no stack trace) when it processes one single 500 MB file.
    I have encountered this error before (with smaller files) and then I made the maximum heap larger, but now when I try to set it to 4000M
    I get the message:
    "Error occurred during initialization of VM
    Could not reserve enough space for old generation heap"
    When it crashes with this message, my settings are:
    -XX:NewSize=500m -XX:MaxNewSize=1000m -XX:SurvivorRatio=8 -Xms1000m -Xmx4000m
    If I run with -Xmx3000m instead, the Java program starts but I get an out-of-memory error like:
    java.lang.OutOfMemoryError
    <<no stack trace available>>
    The GC log file created when it crashes looks like:
    <GC: -1 31.547669 1 218103808 32 219735744 0 419430400 0 945040 52428800 0 109051904 524288000 877008 877008 1048576 0.934021 >
    <GC: -1 62.579563 2 436207616 32 218103808 0 419430400 945040 944592 52428800 109051904 327155712 524288000 877008 877008 1048576 2.517598 >
    <GC: 1 65.097909 1 436207616 32 0 0 419430400 944592 0 52428800 327155712 219048400 524288000 877008 877008 1048576 2.061976 >
    <GC: 1 67.160178 2 436207616 32 0 0 419430400 0 0 52428800 219048400 219048400 524288000 877008 877008 1048576 0.041408 >
    <GC: -1 128.133097 3 872415232 32 0 0 419430400 0 0 52428800 655256016 655256016 960495616 877008 877008 1048576 0.029950 >
    <GC: 1 128.163584 3 872415232 32 0 0 419430400 0 0 52428800 655256016 437152208 960495616 877008 877008 1048576 3.971305 >
    <GC: 1 132.135106 4 872415232 32 0 0 419430400 0 0 52428800 437152208 437152208 960495616 877008 876656 1048576 0.064635 >
    <GC: -1 256.378152 4 1744830464 32 0 0 419430400 0 0 52428800 1309567440 1309567440 1832910848 876656 876656 1048576 0.058970 >
    <GC: 1 256.437652 5 1744830464 32 0 0 733282304 0 0 91619328 1309567440 873359824 1832910848 876656 876656 1048576 8.255321 >
    <GC: 1 264.693275 6 1744830464 32 0 0 733282304 0 0 91619328 873359824 873359824 1832910848 876656 876656 1048576 0.103764 >
    We are running:
    java version "1.3.1.02"
    Java(TM) 2 Runtime Environment, Standard Edition (build 1.3.1.02-011206-02:17)
    Java HotSpot(TM) Server VM (build 1.3.1 1.3.1.02-JPSE_1.3.1.02_20011206 PA2.0, mixed mode)
    We have 132 GB of physical memory and a lot of unused swap space, so I can't imagine we have a problem there.
    Can anyone please suggest how to proceed with troubleshooting, or which settings to change? I'm not really into Java, so I really need some help.
    Usually the Java program handles thousands of smaller files (around 500 KB - 1 MB in size).
    Thanks!

    You have a one-to-one mapping, where one character is replaced with another?
    And all you do is read the file, replace, and then write?
    Then there is no reason to have the entire file in memory (see the sketch below).
    Other than that you need to determine if the VM (which is not a Sun VM) has an upper memory bound. That would be the limit that the VM will not go beyond regardless of memory in the system.
    >We have 132 GB of physical memory and a lot of unused swap space
    One would wonder why you have swap space at all.
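    (Not from the original thread.) A minimal sketch of that streaming approach, assuming a simple one-to-one character substitution; the file names on the command line, the buffer size, and the example replacement rule (';' becomes ',') are illustrative only:

        import java.io.BufferedReader;
        import java.io.BufferedWriter;
        import java.io.FileReader;
        import java.io.FileWriter;

        public class StreamReplace {
            public static void main(String[] args) throws Exception {
                BufferedReader in = new BufferedReader(new FileReader(args[0]), 64 * 1024);
                BufferedWriter out = new BufferedWriter(new FileWriter(args[1]), 64 * 1024);
                try {
                    int c;
                    while ((c = in.read()) != -1) {
                        // Example rule: replace ';' with ','. Only one character is held at a time,
                        // so heap usage stays flat no matter how large the input file is.
                        out.write(c == ';' ? ',' : c);
                    }
                } finally {
                    in.close();
                    out.close();
                }
            }
        }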

  • I've a few issues: my old computer was out of memory and I changed to a new one which was already on another iTunes account.  When I upgraded to iOS 5 my library has gone as it has synced with the new computer - how do I restore the old iPad information?  ple

    I have a few issues: my old computer was out of memory, so I changed to a new one which already had another iTunes account (for my business). When I upgraded to iOS 5, my library disappeared as it synced with the new computer. Is there any way I can restore the information that was previously on my iPad? I'd really appreciate any advice; thanks in advance.

    Lynda-
    If you still have the old computer, you should be able to connect the iPad, run iTunes and restore from a backup from a previous sync.  Now that you have it updated, you will not need to do that again.
    Fred
