Time-based memory leak?

Greetings all.
    Some strangeness seen in the last few days. I have a machine that is pretty much dedicated to editing images (Mac Pro, 8 GB RAM, 8 cores, RAID, etc.). With previous versions of LR I would edit over several days, not bothering to exit the program. So I would have the program up, work on it randomly during the day, leave it up overnight, and hit it in the morning.
  LR3 seems to have a problem doing that. I left it up for a couple of days (out shooting), and when I started editing again, it took between 3 and 5 seconds to add a keyword, move to the next picture, etc.
  I exited LR, came back in, and everything was fast again.
No other software was running at the time, and I poked at it for a while wondering what was going on.
Any ideas?

I'm having similar issues too.
Activity Monitor shows my page-ins and page-outs quickly creeping into the gigabytes.

Similar Messages

  • Yosemite Time Machine memory leak chewing through memory?

    I installed Yosemite 10.10 on my 27" iMac (Intel Core i5) with 12 GB of RAM. After plugging in my external drive to make a backup with Time Machine, I noticed the indicator in my menu bar from Memory Clean was red and showing less than 15 MB of free memory available! I ran Memory Clean to recover some memory while the backup was in progress and it made roughly 3 GB available. Then the rapid drop in available memory began again as Time Machine continued to back up 40+ GB of data.
    When finished, the available memory was very slowly released and crept up to around 220 MB. After 5 minutes without seeing a major release of memory I then ran Memory Clean again and it made 1.96 GB free for the system to use.
    Before the Yosemite upgrade I didn't have Memory Clean installed so I don't think I ever checked to see if system memory was being chewed through in such a massive way under Mavericks. Is this common or is this yet another memory leak associated with Yosemite?
    EDIT: In case this might be helpful, the drive is a Seagate Backup Plus 2 TB USB 3.0.

    Similar machine, though with 16 GB of RAM. The leak seems to be related to a long sleep (I used to never turn it off) and then Time Machine, so again, similar. Time Machine (to a LaCie 2TB via USB) wanted to re-archive the whole hard drive, and via Memory Clean I could watch the RAM get eaten up. Interesting tidbit: Activity Monitor is oblivious to the leak... completely oblivious.
    Today, I begin the fun part of copying the 800 GB of data and reinstalling Yosemite. (Sarcasm.) That should only set me back an hour or two... sheesh.
    Of course, I cannot make any edits - NONE - in the new notification center. I can only get local area weather, the stocks Apple provides, and the time in Cupertino, two time zones away.
    BTW, should you read this, we're out in the cold on Continuity, though every now and again I can see my iMac on my 4th-gen iPad. Weird.

  • Firefox 4 slows down after some time - possibly memory leak related

    After a couple of minutes of intensive Firefox 4 usage (multiple tabs opened and closed, but no more than 15 tabs open simultaneously), the browser becomes unresponsive and periodically generates high CPU load (100% of one of two cores for about 4 seconds); memory consumption climbs to around 450 MB.

    It would be helpful if Mozilla could provide a memory profiling tool that would help us civilians track down memory problems either in the core products or in extensions. The suggestions to create new profiles, re-add extensions, and do A/B (C/D/E) testing along the way, while accurate, aren't practical when we discover that FF 4 on Win 7 is using north of 1GB of memory.

  • Memory leak issues persist in Safari 6.0.5 (it's about time that Apple actually fixed this)

    Safari has been notorious for its memory leaks for years, as I'm sure many of you know. I stopped using Safari as my main browser in 2010 or so, and I instead began to use Chrome. However, I recently had to use Safari to access some webpages, and I neglected to quit out of it. A few hours later, my computer slowed to a complete crawl. I was confused, because my computer never slows down to such a crawl, so I went into Activity Monitor to find the 'Safari Web Content' process using all of my available RAM, which was nearly 6 GB. Needless to say, Safari was force quit after that. I've heard that some plugins will cause this, but Flash was my only active plugin. Flash is, indeed, a heaping pile of crap that will readily eat resources as it sees fit, but this never happens to me in Chrome, and I use Flash all of the time in Chrome. This was after it had been sitting idle for quite some time, and I imagine that it would have used more RAM if it could.

    Using 6.0.5 here without any of those issues ...
    Regardless of whether Flash is being used, from your Safari menu bar click Help > Installed Plug-ins.
    Try troubleshooting extensions and third party plug-ins.
    From your Safari menu bar click Safari > Preferences then select the Extensions tab. Turn extensions off if any are installed. Quit and relaunch Safari to test. If that helped, turn extensions back on, then uninstall them one at a time to test.
    If it's not an extensions issue, try troubleshooting third party plug-ins.
    Back to Safari > Preferences. This time select the Security tab. Deselect:  Allow all other plug-ins. Quit and relaunch Safari to test.
    If that made a difference, instructions for troubleshooting plugins here.
    my computer slowed to a complete crawl.
    Have you checked exactly how much disk space is available?
    Click your Apple menu icon top left in your screen. From the drop down menu click About This Mac > More Info > Storage
    Make sure there's at least 15% free disk space.
    Checked the startup disk lately?
    Launch Disk Utility located in HD > Applications > Utilities
    Select the startup disk on the left then select the First Aid tab.
    Click:  Verify Disk  (not Verify Disk Permissions)
    If the disk needs repairing, restart your Mac while holding down the Command + R keys. From there you can access the built in utilities in OS X Recovery to repair the startup disk.

  • Memory leak in Real-Time caused by VISA Read and Timed Loop data nodes? Doesn't make sense.

    Working with LV 8.2.1 Real-Time to develop a host of applications that monitor or emulate computers on RS-422 busses. The following screenshots were taken from an application that monitors a 200 Hz transmission. After a few hours, the PXI station would crash with an awesome array of angry messages... most implying something about a loss of memory. After much hair pulling and passing of the buck, my associate, while watching the available memory on the controller, was able to discover that memory loss was occurring with every loop containing a VISA read and error propagation using the data nodes (see Memory Leak.jpg). He found that if he switched the error propagation to regular old-fashioned shift registers, the available memory was rock-solid (à la No Memory Leak.jpg).
    Any ideas what could be causing this? Do you see any problems with the way we code these sorts of loops? We are always attempting to optimize the way we use memory in our time-critical applications, and VISA reads and DAQmx reads give us the most heartache, as we are never able to preallocate memory for these VIs. Any tips?
    Dan Marlow
    GDLS
    Attachments:
    Memory Leak.JPG (136 KB)
    No Memory Leak.JPG (137 KB)

    Hi thisisnotadream,
    This problem has been reported, and you seem to be reproducing exactly the conditions required to see it. It was reported to R&D (# 134314) for further investigation. There are multiple possible workarounds, one of which you have already found: wiring the error directly into the loop. Other situations that result in no memory leak are (a text-form sketch of the polling pattern in 1 and 2 follows below):
    1.  If the Bytes at Port property node is not there, a read simply happens every iteration, and the resulting timeouts are ignored.
    2.  If the case structure is gone and the code just blindly checks Bytes at Port and reads every iteration.
    3.  If the Timed Loop is turned into a While Loop.
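    As promised, a rough text-form sketch of the pattern in workarounds 1 and 2: read on every iteration with a short timeout and ignore the timeouts, rather than gating the read on a bytes-at-port check. This is Java with a hypothetical SerialPort interface purely for illustration; it is not the VISA API and not how the LabVIEW diagram is actually built:

    interface SerialPort {
        // Hypothetical interface, for illustration only (not VISA).
        // Returns the number of bytes read, or 0 on timeout.
        int read(byte[] buffer, int timeoutMs) throws java.io.IOException;
    }

    class PollingReader {
        static void pollLoop(SerialPort port) throws java.io.IOException {
            byte[] buffer = new byte[256];
            while (true) {
                // Read every iteration with a short timeout...
                int n = port.read(buffer, 5);
                if (n > 0) {
                    // ...process buffer[0..n) here...
                }
                // ...and simply ignore timeouts (n == 0) and loop again.
            }
        }
    }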
    Thanks for the feedback!
    Regards, Stephen S.
    National Instruments
    Applications Engineering

  • Memory Leak..Big Time

    I've recently purchased El Gato's EyeTV HDHomeRun and associated software to record and then export content to AppleTV 2. The system can record 2 shows simultaneously and also automatically export to iTunes for streaming to AppleTV. That's where the fun begins. The recording works just fine, but when the export to iTunes process begins, memory leaks, no, gushes out of my system, turning my 12 GB of RAM into 12 MB within an hour. Paging begins, finally leading up to a kernel panic and shutting down my system (lost recordings). A system restart will clear out the RAM, but starting the export process again will just wipe out all of my memory again. I've also tried repairing permissions (with the SL software install disk) but the problem persists days afterward. I've also purchased and installed new RAM from OWC, so I know that's not it, and even reinstalled the EyeTV software after contacting their tech support. No other software applications are running. Can anyone shed light on what is going on here? I'm thinking about a fresh install of 10.6.8 or going to Lion but don't know if this will even make any difference. Please HELP!

    It is entirely possible that Elgato's software writes to the memory controller in such a way that, if the memory is marginal in certain bits of the board, it would panic. The fact that the issue remanifests itself with new RAM may even indicate the problem is not the RAM, but the RAM controller itself! From my old C programming days: malloc is the memory allocation call. They should be digging into their malloc calls, making sure they aren't asking for pointers to memory they don't know exists, or creating debug routines around every malloc call until it trips on your machine.

  • Memory leak in Waveform Graph?

    Either there is a huge memory leak in the waveform graph or I am really doing something wrong.
    I created an example app with a waveform graph and a button; the constructor looks as follows:
    Form1(void)
    {
        InitializeComponent();
        vals = gcnew array<double>(60000);
        for (int i = 0; i < 60000; i++)
            vals[i] = Math::Sin(Math::PI * 2 * 60 / 6000.0 * i);
    }
    and the click event looks like this:
    System::Void button1_Click(System::Object^ sender, System::EventArgs^ e)
    {
        this->waveformGraph1->Plots->Clear();
        NationalInstruments::UI::WaveformPlot^ plot = gcnew NationalInstruments::UI::WaveformPlot(xAxis1, yAxis1);
        plot->PlotY(vals);
        plot->LineColor = Color::Red;
        this->waveformGraph1->Plots->Add(plot);
    }
    Every time I click the button, the memory used on my system goes up by about 10 MB. I tried this also using this->waveformGraph1->PlotY(vals), and the memory usage stays solid as a rock.
    Am I doing something wrong, or what is causing the leak so I can work around it? My program plots 4 arrays of this size on one graph per test result.

    The plot uses some unmanaged resources (GDI objects and other handles), which is why it implements IDisposable. Because of the GC, resource cleanup is not deterministic; cleanup occurs when the GC deems it necessary. Calling delete (which is C++/CLI syntax) ultimately ends up calling Dispose, and this forces the object to release any handles it might have immediately. See the documentation for the .NET Dispose pattern for more information.
    If you don't call delete, what would end up happening is that eventually, at some point in the application, the GC would fire and clean up all the objects and handles, and you would see a drop in the application's memory footprint, but you would need to run the application for a while before that might happen. In a long application run, things would end up stabilizing.
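    For readers more at home in Java than .NET, the same principle - deterministic release of native handles instead of waiting for the collector - looks roughly like this. A minimal sketch only: NativePlot and its handle functions are invented for illustration, and the WaveformPlot API itself is .NET.

    class NativePlot implements AutoCloseable {
        private long nativeHandle;                    // stand-in for a GDI-style handle

        NativePlot() {
            nativeHandle = acquireHandle();           // hypothetical native acquisition
        }

        @Override
        public void close() {                         // the Java counterpart of Dispose()
            if (nativeHandle != 0) {
                releaseHandle(nativeHandle);          // release now, not at GC time
                nativeHandle = 0;
            }
        }

        private static long acquireHandle() { return 1L; }   // placeholder
        private static void releaseHandle(long h) { }        // placeholder
    }

    // Usage: try-with-resources guarantees close() runs deterministically.
    // try (NativePlot plot = new NativePlot()) { /* plot something */ }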
    Bilal Durrani
    NI

  • Memory leak in DIADEM 8.1

    Hello,
    I'm using DIAdem 8.1.1292 on a 3 GHz WinXP Pro SP2 PC with 2 GBytes of RAM.
    I'm doing some extensive data manipulation with DIAdem VBScript and discovered (?) a memory leak which seriously disturbs my work:
    After finishing the script, DIAdem doesn't free the used memory (RAM). I checked this behaviour using the XP Task Manager. After starting, DIAdem uses about 300 MBytes. During my script the memory consumption goes up to over 1 GByte. After successfully finishing my script, DIAdem doesn't free the Windows memory. The next time I run my script, DIAdem stops with an error at about 1.5 GBytes of memory consumption.
    If I restart DIAdem, everything works fine until the memory is full again.
    Any hints? How can I tell DIAdem to free the memory?
    Thanks in advance
    Mathias

    Hi mstadler,
    My guess is that if you are talking about the DIAdem thread using over a gigabyte of memory, your VBScript must be creating new DATA channels. If that's the case, then let me ask: when you re-run your VBScript, does it first delete the new channels it created the first time? Would you be willing to send over the VBScript you are running for us to take a look at it? You may well have discovered a memory leak, but based on your description my suspicion is that we can avoid this behavior by changing the details of the VBScript and perhaps other memory management properties. Does your VBScript load data from a file? Could it perhaps register that data instead (File>>Register file...)? Your operating system will only give DIAdem (or any other Windows application) up to but not exceeding 2 GBytes of virtual memory. DIAdem has the ability to manage its own virtual memory far in excess of this limit, but in DIAdem 8.1 this still has to be manually preconfigured. In DIAdem 9.1 the DIAdem-managed virtual memory automatically kicks in when needed and configures itself.
    Please send over your VBScript if that's feasible,
    Brad Turpin
    DIAdem Product Support Engineer
    National Instruments

  • Memory Leak With Spatial queries

    We are using 8.1.6 on NT (4.0) for spatial data queries. We are facing memory leak problems. At the start our job runs very fast, and after some time it starts slipping. I'm monitoring PGA size from v$sesstat/v$sysstat and it is steadily increasing. The same is the case for memory on the NT machine when I'm monitoring through Performance Monitor. I have already applied the Spatial patch available for 8.1.6, but no improvement.
    Please let me know if there is any workaround. When I submit my job in parts and shut down the database in between, it releases all the memory and works fine. Without shutting down the database, it does not release the memory even when I stop my spatial data batch job.

    Hi,
    Thanks for your responses.
    This is the query:
    SELECT a.geo_id, mdsys.sdo_geom.sdo_length(
        mdsys.sdo_cs.transform(
            mdsys.sdo_geometry(2002, 8307, null,
                mdsys.sdo_elem_info_array(1,2,1),
                mdsys.sdo_ordinate_array(' || longi || ', ' || lati || ',
                    a.geo_geometry.sdo_point.x,
                    a.geo_geometry.sdo_point.y )),
            mdsys.sdo_dim_array(
                mdsys.sdo_dim_element(' || '''' || 'X' || '''' || ',-180,180, .00000005),
                mdsys.sdo_dim_element(' || '''' || 'Y' || '''' || ',-90,90, .00000005)), 41004),
        .00000005) * 6.213712e-04 distance_in_miles
    FROM ' || t_name || ' a
    WHERE mdsys.sdo_nn(a.geo_geometry,
        mdsys.sdo_geometry(1, 8307,
            mdsys.sdo_point_type(' || longi || ', ' || lati || ', null),
            null, null), ' || '''' || 'SDO_NUM_RES=5' || '''' || ') = ' || '''' || 'TRUE' || '''' || '
    AND a.geo_id ' || filter || '
    ORDER BY 2';
    Here we are passing t_name and filter dynamically based on certain conditions, and the memory leak is almost 100K to 200K per query.
    First I tried closing just the session, but that didn't work. Only a database shutdown is able to release the memory. I'm monitoring v$sysstat/v$sesstat and the size of oracle.exe in the NT Performance Monitor. Please let me know if something else needs to be monitored.
    Thanks.
    Sandeep

  • Memory Leak with cloneModelFromCastMember()?

    Hello Experts!
    I have been experiencing an apparent memory leak within Director 11 when using cloneModelFromCastMember().
    I was making the assumption that calling resetWorld() on a w3D member onBeginSprite() would garbage collect any models previously cloned into that member when I previously ran the movie.
    However, if I repeatedly start and stop the movie, Director gobbles roughly 10 MB more memory each time. The memory usage does not reduce upon calling resetWorld().
    A good way to replicate this is to use cloneModelFromCastMember() on a largeish model in a repeat with i = 1 to 50 loop in the on beginSprite handler. Start and stop the movie over and over to see Director's memory usage hike up.
    Anybody have any advice why this is happening? Do I need to explicitly delete all models cloned into a member on stopMovie?
    Cheers
    Richard Smith

    Hi Zzzorro,
    Thanks for the advice!
    Why does cloning from external w3D members help? Does it avoid the memory leak? It never used to happen in Director 8.5, so it has to be a new version 10/11 bug, right?
    I need to import several weightmapped, boned characters into a 3D member, and due to export issues each character has to have its own w3D file. So I have to perform cloning at runtime to build the world. I also need to clone these characters based on the level, so I can't use just one single 3D member, for both these reasons.
    Thanks for any further ideas.
    Richard Smith
    "zzzorro" <[email protected]> wrote in
    message
    news:gd4sn2$2l8$[email protected]..
    > As a rule of thumb: whenever possible avoid cloneModelFromCastMember in the first place.
    > It is highly unrecommended, and the Intel engineers always recommended using loadFile() with an external w3d file, which is much better than having the w3d file in the castlib and using cmfcm. Each cmfcm rebuilds the whole scene and takes more time the bigger the scene is, apart from glitches like the leaks you found just now, and other things.
    >
    > I work very much with sw3d and I barely have more than one shockwave3d member in any of my movies. In very rare cases I use 2 sw3d members. Other than that, I use one member where I build and load everything into from external w3d files with loadFile(), which is much more appropriate. The only downside is that I can't change the model name, but there are ways to deal with it.
    >

  • Finding Process Memory Leak?

    I have been tracking down memory leaks in a Java server-based application. I recently ran the application for several hours, and monitored memory usage with both JConsole and the Unix 'top' command. I encountered some behaviors that I don't understand.
    With JConsole, I observed that the heap memory usage was fairly constant over time. The Code Cache space, however, gradually increased in size over the life of the test. I expected some increase as additional methods in the application were executed, but it seems that eventually the rate of increase would level out. It did not. Any ideas why that might happen?
    The resident memory (as reported by top) increased significantly over the life of the test. Using pmap and jmap, I was able to determine that the heap space was increasing (even though JConsole reported that the heap was constant.) So, it seems as though the heap usage of the application is constant, but the heap usage of the process is still growing? I am using Java HotSpot VM for Solaris (1.6.0-b105).
    Any suggestions or insight would be appreciated.

    I'm not sure exactly what you're saying, but it sounds like this:
    A tool that's monitoring the VM says the memory used by your objects is not increasing, but other tools say the memory the VM has taken from the OS is increasing.
    Is that the case? If not, never mind--I've misunderstood.
    If that is the case, then it's totally expected. The VM doesn't usually give memory back to the OS. If it hasn't used up the amount specified by -Xmx (or the default if you didn't specify -Xmx), it's free to just keep grabbing memory from the OS, rather than running a full GC to reclaim memory that it has used for objects but that is no longer reachable.
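    A small sketch of how to watch this from inside the VM, using the standard java.lang.management API (pool names such as "Code Cache" vary by VM and version, so treat the output layout as an assumption). "Committed" is what the VM has actually taken from the OS, roughly what top/pmap report; "used" is what live objects occupy, which is what the JConsole heap graph shows.

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;
    import java.lang.management.MemoryUsage;

    public class PoolWatcher {
        public static void main(String[] args) throws InterruptedException {
            while (true) {
                // Print used vs committed for every pool (heap pools, Code Cache, ...).
                for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                    MemoryUsage u = pool.getUsage();
                    System.out.printf("%-25s used=%,d committed=%,d%n",
                            pool.getName(), u.getUsed(), u.getCommitted());
                }
                System.out.println("----");
                Thread.sleep(10000);   // sample every 10 seconds
            }
        }
    }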

  • Tuxedo Memory Leak Issue (Tuxedo 8.1 - Windows Server 2003)

    Hi
    We are running Tuxedo 8.1, 32-bit, with patch level 258 in our Windows Server 2003 based production environment. We are currently facing an issue where the memory usage of the machine slowly climbs higher and higher, eventually resulting in "Memory Allocation Failure" in the Tuxedo servers. We then have to do a complete restart of Tuxedo, which stabilizes the system for another few days.
    We have been analyzing our source code in the development/test environment using different tools, like a customized Alzheimer tool and IBM Purify, but both tools reported no memory leaks. We then developed a test Tuxedo server exposing a Tuxedo service which simply allocates memory for a response buffer and then returns the response buffer. I then configured a Tuxedo queue with the same name, "MEMTEST3", and configured a TMQFORWARD server to call this "MEMTEST3" service every time a message is enqueued to the MEMTEST3 queue.
    unsigned long _LIBENTRY ulTPAlloc(FBFR32 **ppc, long size)
    {
        unsigned long ulRes = MSG_SUCCESS_c;
        *ppc = (FBFR32 *) tpalloc("FML32", (char *) 0, size);
        if (*ppc == (FBFR32 *) 0) {
            vLogMessage(hGetLogHandle(), MSG_MEM_ALLOC_ERR_c, (char *) 0, (Event_t *) 0,
                        BM_NOSUPPRESS_c, size);
            ulRes = MSG_MEM_ALLOC_ERR_c;
        }
        return (ulRes);
    }
    /*==============================================================================
    Service MEMTEST3
    ==============================================================================*/
    void MEMTEST3(TPSVCINFO *pRequest)
    {
        FBFR32 *pFmlResponse = NULL;
        FBFR32 *pFml = NULL;
        unsigned long ulRes = MSG_SUCCESS_c;
        unsigned long ulActionCode = 0;
        int iExitValue = 0;
        long lTpurcode = 0;
        FBFR32 *pFmlNULL = NULL;
        userlog("Starting MEMTEST3 service.");
        if (pRequest == NULL || pRequest->data == NULL)
        {
            vLogMessage(hGetLogHandle(), MSG_API_ARGS_ERR_c, NULL, NULL, BM_NOSUPPRESS_c);
            ulRes = MSG_API_ARGS_ERR_c;
        }
        else
        {
            pFml = (FBFR32 *) pRequest->data;
            userlog("MEMTEST3: GET THE MEM");
            ulRes = ulTPAlloc(&pFmlResponse, 1024);
        }
        userlog("Ending MEMTEST3 service.");
        tpreturn(iExitValue, lTpurcode, (char *) pFmlResponse, 0L, 0L);
    }
    While I was enqueuing the messages to the queue, I kept monitoring the memory usage of the server hosting the service. What I observed was an initial hike in the memory usage of the server, followed by small jumps in memory increase. I kept monitoring the server for a long time and the memory was never returned. What I suspect is that there is a memory leak in the Tuxedo TMQFORWARD process, as it never released the memory allocated in the service.
    Can anyone help how this situation can be avoided ?
    Kind Regards,
    Asim

    Hi Todd,
    Also as well as my previous question, I also found your reply to another user posting something similar at Re: Memory leaks in Tuxedo libraries
    You mention that:
    In general Tuxedo will free anything it allocates, although there are cases where memory is allocated and not freed because:
    1) it is a one-time (or fixed number of times) allocation that will not continue to grow, and freeing up the memory just before exiting isn't of any benefit.
    2) the memory is under Tuxedo's memory management functions, where we manage our own look-aside lists to provide better buffer allocation performance, and again freeing these before process termination is of little benefit.
    Our code only issues a tpalloc once and then a tpreturn - could point (1) of your comment above also be something of concern to us, where we would observe a continuous growth of memory usage?
    I know it may sound like a stupid question but do we need to run our code 20,000 times before memory gets freed?
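    (To illustrate Todd's point (2): a "look-aside list" is essentially a cache of previously freed buffers kept for reuse. A minimal sketch of the idea, in Java purely for illustration since Tuxedo itself is C. Freed buffers are parked on the list rather than returned to the OS, so the process footprint plateaus at its high-water mark and can look like a leak from the outside:)

    import java.util.ArrayDeque;
    import java.util.Deque;

    class BufferPool {
        private final Deque<byte[]> lookAside = new ArrayDeque<byte[]>();
        private final int bufferSize;

        BufferPool(int bufferSize) { this.bufferSize = bufferSize; }

        byte[] acquire() {
            byte[] b = lookAside.poll();
            return (b != null) ? b : new byte[bufferSize];  // allocate only on a miss
        }

        void release(byte[] b) {
            lookAside.push(b);  // kept for reuse; never handed back to the OS
        }
    }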
    Kind Regards,
    Asim

  • Memory Leak in NK.exe

    Hi All,
    OS: Windows Embedded Compact 7 with updates till Feb 2015.
    Hardware: AM335x based 
    Applications running: one serial port application, plus TCP client and TCP server apps; all are managed (C#) applications.
    I am facing a memory leak issue with our headless device.
    When I connect the device to the LAN network, memory usage keeps increasing, and after a few hours (sometimes < 1 hour, sometimes 4-5 hours) the device goes into a hang state due to low memory.
    I also tried running the resource leak detector and found:
           1. NK.exe heap is increasing
           2. On startup: APIHandle - Count: 4,118; DuplicatedHandle - Count: 4,082, Size: 4,082 bytes
    After a few minutes: APIHandle - Count: 49,172, Size: 49,172 bytes; DuplicatedHandle - Count: 48,810, Size: 48,810 bytes
    NK.exe heap increases as available RAM decreases.
    Our application's heap stays constant. Please find below memory snapshots taken by DevHealth.
     1. On device startup, after all apps started
     2. After 1 hour of device running - refer to the attachment
     Where exactly might this leak be? Any guess?
    Thank you...
    rakesh

    Hi tomleijen,
    Thanks for your suggestions.
    We also tried without any user apps, and we still found an ~1 MB increase in the NK.exe heap every 30-40 minutes.
    We have 2 images: 1. with all WEC7 updates (till Feb 2015), and 2. without any of the WEC7 updates.
    We face almost the same problem with both images.
    rakesh

  • Huge memory leak in the Java JVM after Update 2 for Snow Leopard

    Since I updated to Java Update 2 for Snow Leopard, my JVM suddenly grows massive (10 GB+ of real memory, with -Xmx3500m), consuming all memory and rendering my iMac unusable. This does not happen predictably, but it does now happen several times a day, requiring that I power off and on again.
    I had been living happily with update 1 with no such problem.
    I need to either go back to update 1 (should have it on Time Machine) or find a solution for this problem.

    Our application is a J2EE-based commercial application serving specified customers, with about 120 access requests an hour.
    We're doing a stress test on the test server. The strange memory leak occurred at 1:20 am this morning while we were out of the company, and no job was scheduled to run at that time. So I tend to imagine that something occurred inside OC4J.
    I have used OptimizeIt to monitor the heap status. However, as the memory leak problem occurs very occasionally, and that tool badly slows our server, we are currently using no profiling tools.

  • Memory leak under GNU/Linux when using exec()

    Hi,
    We detected that our application was taking all the free memory of the computer when we used the exec() method intensively and periodically to execute some OS commands. The OS of the computer is GNU/Linux based.
    So, in order to do some monitoring, we decided to write a simple program that called exec() an infinite number of times, and using the profiler tool of NetBeans we saw a memory leak in the program, because the number of surviving generations increased over the whole execution time. The classes that have the most surviving generations are java.lang.ref.Finalizer, java.io.FileDescriptor and byte[].
    We also decided to test this simple program on Windows, and on that OS we saw that the memory leak disappeared: the number of surviving generations was almost stable.
    I attach the code of the program.
    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.InputStreamReader;
    public class testExec {
        public static void main(String args[]) throws IOException, InterruptedException {
            Runtime runtime = Runtime.getRuntime();
            while (true) {
                Process process = null;
                InputStream is = null;
                InputStreamReader isr = null;
                BufferedReader br = null;
                try {
                    process = runtime.exec("ls");
                    //process = runtime.exec("cmd /c dir");
                    is = process.getInputStream();
                    isr = new InputStreamReader(is);
                    br = new BufferedReader(isr);
                    String line;
                    while ((line = br.readLine()) != null) {
                        System.out.println(line);
                    }
                } finally {
                    if (process != null) {
                        process.waitFor();
                    }
                    if (is != null)
                        is.close();
                    if (isr != null)
                        isr.close();
                    if (br != null)
                        br.close();
                    if (process != null)
                        process.destroy();
                }
            }
        }
    }
    Is anything wrong with the test program we wrote? (We know it is not usual to call ls/dir an infinite number of times, but it's just a test.)
    Why do we have a memory leak in Linux but not in Windows?
    I will appreciate any help or ideas. Thanks in advance.
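    One thing worth ruling out on the application side: a variant of the same test using try-with-resources (Java 7+), which closes the reader chain even when readLine() throws. A sketch of the tidier pattern only; it does not by itself fix a leak inside the VM, but it eliminates descriptor leaks caused by the test program itself:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;

    public class TestExecTwr {
        public static void main(String[] args) throws IOException, InterruptedException {
            Runtime runtime = Runtime.getRuntime();
            while (true) {
                Process process = runtime.exec("ls");
                // The reader (and the stream beneath it) is closed automatically,
                // even if readLine() throws.
                try (BufferedReader br = new BufferedReader(
                        new InputStreamReader(process.getInputStream()))) {
                    String line;
                    while ((line = br.readLine()) != null) {
                        System.out.println(line);
                    }
                } finally {
                    process.waitFor();
                    process.destroy();
                }
            }
        }
    }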

    Hi Joby,
    From our last profiling results, we haven't yet found a proper solution. We think the problem is probably caused by the byte[]s/FileInputStreams created by the class UNIXProcess that manage the stdin, stdout and stderr streams. It seems that these byte arrays cannot be collected correctly by the garbage collector and they become bigger and bigger, so in the end they take all the memory of the system.
    We downloaded the latest version of OpenJDK 6 (build b19) and modified UNIXProcess.java.linux so that when we call its destroy() method, we assign those streams to null. We did that because we wanted to indicate to the garbage collector that these objects could be removed, as we saw that the close() methods don't do anything in their implementation.
    public void destroy() {
         // There is a risk that pid will be recycled, causing us to
         // kill the wrong process!  So we only terminate processes
         // that appear to still be running.  Even with this check,
         // there is an unavoidable race condition here, but the window
         // is very small, and OSes try hard to not recycle pids too
         // soon, so this is quite safe.
         synchronized (this) {
             if (!hasExited)
                 destroyProcess(pid);
         }
         try {
             stdin_stream.close();
             stdout_stream.close();
             stderr_stream.close();
             // LINES WE ADDED
             stdin_stream = null;
             stdout_stream = null;
             stderr_stream = null;
         } catch (IOException e) {
             // ignore
             e.printStackTrace();
         }
    }
    But this didn't work at all. We saw that we were able to run our application for a long time and that the free memory of the system wasn't decreasing as before, but we did some profiling with this custom JVM and the test application and we still see more or less the same behaviour: lots of surviving generations, at some point an increase of the used heap to the maximum allowed, and finally the crash of the test app.
    So sadly, we still don't have a solution for that problem. You could try to compile OpenJDK 6, modify it, and try it with your program to see if the latest version works for you. Compiling OpenJDK 6 on Linux is quite easy: you just have to download the source and the binaries from here and configure your environment with something like this:
    export ANT_HOME=/opt/apache-ant-1.7.1/
    export ALT_BOOTDIR=/usr/lib/jvm/java-6-sun
    export ALT_OUTPUTDIR=/tmp/openjdk
    export ALT_BINARY_PLUGS_PATH=/opt/openjdk-binary-plugs/
    export ALT_JDK_IMPORT_PATH=/usr/lib/jvm/java-6-sun
    export LD_LIBRARY_PATH=
    export CLASSPATH=
    export JAVA_HOME=
    export LANG=C
    export CC=/usr/bin/gcc-4.3
    export CXX=/usr/bin/g++-4.3
    Hope it helps Joby :)
    Cheers.
