Huge reserved memory according to DebugDiag (memory leak?)

We are experiencing memory problems in production.
On a Windows Server 2008 machine there are many Web API services, and I see that most of them show the same (possible) problem: a very large amount of reserved memory.
Below is the information for one of the services, which I collected using DebugDiag.
The service uses LINQ to SQL, calls another Web API service, touches the file system only to write a log file, and sends email.
.NET GC Heap Information
GC Heap Size 84,77 MBytes  
Total Commit Size  153 MB
Total Reserved Size    17254 MB
Virtual Memory Summary
Size of largest free VM block   7,97 TBytes 
Free memory fragmentation   0,11% 
Free Memory   7,98 TBytes   (99,79% of Total Memory) 
Reserved Memory   17,03 GBytes   (0,21% of Total Memory) 
Committed Memory   384,59 MBytes   (0% of Total Memory) 
Total Memory   8 TBytes 
Largest free block at   0x00000005`3f380000 
Virtual Memory Details
Virtual Allocations  17,19 GBytes
Loaded Modules  179,97 MBytes
Threads  17,27 MBytes
System 4 KBytes
Page Heaps 0 Bytes
Native Heaps  28,76 MBytes
Virtual Allocation Summary
Reserved memory   17 GBytes 
Committed memory   185,82 MBytes 
Mapped memory   15,5 MBytes 
Reserved block count   94 blocks 
Committed block count   129 blocks 
Mapped block count   30 blocks 
In Resource Monitor I see the following:
Committed: 257 MB
Working: 394 MB
Private: 198 MB
Should I ignore this information about reserved memory, or does it tell me something really important?
I would be grateful for any hint.
   

Yep, you should ignore the information about reserved memory. It refers to virtual address space, not physical memory usage. On 64-bit OSes the address space is huge, so the GC can afford to reserve a large amount of address space up front.
What you usually care about is the GC heap size - 85 MB in your case, a very reasonable size.
Sometimes you may also want to look at the committed/working/private numbers. Reserved numbers are only interesting on 32-bit OSes, where the address space is very small.
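The same three-way distinction (memory used by live objects, memory committed by the process, and an upper bound the heap may grow into) exists in most managed runtimes. As a rough analogy only - this is a minimal Java sketch, not the .NET API that DebugDiag reads - the JVM reports it like this:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;

    // Sketch: used vs. committed vs. max, as reported by the JVM.
    // "max" plays a role similar to reserved address space: a ceiling, not memory in use.
    public class HeapFigures {
        public static void main(String[] args) {
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
            MemoryUsage heap = memory.getHeapMemoryUsage();

            System.out.printf("used:      %,d bytes (live objects, roughly the 'GC heap size')%n", heap.getUsed());
            System.out.printf("committed: %,d bytes (actually backed by RAM / page file)%n", heap.getCommitted());
            System.out.printf("max:       %,d bytes (address-space ceiling; large values are normal)%n", heap.getMax());
        }
    }

As with the DebugDiag report above, only the first two kinds of numbers say anything about real memory pressure.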

Similar Messages

  • How to fix huge iTunes memory leak in 64-bit Windows 7?

iTunes likes to allocate as much as 1.6 GB of memory on my dual quad-core Xeon, 8 GB, 64-bit Windows computer and then becomes unresponsive.
    This can happen several times a day and has been going on for as long as I can remember.  No other software that I use does this - only Apple's iTunes.  Each version I have installed of iTunes appears to have this same memory leak.  Currently I am running version 10.7.0.21.
    I love iTunes when it works.  But having to constantly kill and relaunch the app throughout the day is bringing me down.
    Searching for a fix for this on the internet just surfaces more and more complaints about this problem - but without a solution.
Having written shrink-wrapped software for end users as well as for large corporations and governments for more than 25 years, I know a thing or two about software.  A leak like this should take no more than a day or two to locate using modern software tools, and double that to fix.  So why does this problem persist with each new version of iTunes?  iTunes for Windows is the flagship software product Apple makes for non-Mac users, yet with each new release they continue to pass up the opportunity to fix this issue.  Why is this?
Either the software engineers are not that good, or they have been told NOT to spend time on this issue.  I personally believe that the engineers at Apple are very good, and therefore am left thinking that the latter is more likely.  Maybe this is to coax people into purchasing a Mac so that they can finally run iTunes without these egregious memory leaks.  I would like to offer another issue to consider.
Just as Amazon sold Kindles and Google sold Nexus tablets at low cost - counting not on hardware margin for profit but on saturating the marketplace with devices that make future content purchases almost trivial - Apple counts on this model with its pricier hardware, but it also has iTunes.  Instead of trying to get people to switch to a Mac by continuing to avoid fixing this glaring issue in iTunes for Windows, I would suggest that letting their engineers address it would help keep Windows users from jumping ship to another music app.  The profit to be made by keeping those Windows users happy and wedded to the iTunes Store is obvious.
Continuing to leave this leak in iTunes for Windows only lowers my esteem for the company and starts to make me wonder whether the software is just as buggy on Macs.

I have the same issue. It has been ongoing for more than a year, and I am currently running iTunes 11.3.
My PC is a Dell OptiPlex 990, i7 processor, 8 GB RAM, Windows 7 64-bit [I always keep things patched up to the latest OS updates etc.]
I use this iTunes install to stream music, videos etc. to multiple Apple TVs, iPads, iPhones etc. via Home Sharing.
I store all my media, including music, videos and apps, on a separate NAS, so the iTunes install running on the PC is only playing the traffic-cop role, streaming and serving files stored on the NAS. This creates lots of I/O across my network.
Previous troubleshooting suggests possible contributing causes include:
a) podcast updates .. until recently I had auto-updates on for multiple podcast subscriptions; presumably iTunes would flow these from the PC to the NAS across the network. If the memory leak is in the iTunes network communication layer (Bonjour?), this may be sensitive to I/O that would not normally occur if iTunes were saving files locally on the same PC
b) app updates .. I have 200+ apps in my library and there is always a batch of updates, some of them hundreds of MB in size; I routinely see 500 MB to 1 GB of updates in a single update run .. all my apps are
c) streaming music / movies .. it seems that when we ramp up streaming of music or movies, the memory leak grows faster, i.e. within hours of a clean start
d) large syncs of music or videos to iPads or iPhones .. I have noticed big problems when I rebuild an iPad; I typically have 60+ GB of data in terms of apps / music / videos to load and have to do the rebuild in phases due to periodic lockups

  • How to deal with Memory Leaks, that are caused by Binding

    Hi, I recently noticed (huge?) memory leaks in my application and suspect bindings to be the cause of all the evil.
    I made a little test case, which confirms my suspicion:
    import javafx.application.Application;
    import javafx.beans.property.SimpleStringProperty;
    import javafx.beans.property.StringProperty;
    import javafx.scene.Scene;
    import javafx.scene.control.Button;
    import javafx.scene.layout.VBox;
    import javafx.stage.Stage;
    public class TestAppMemoryLeak extends Application {
        public static void main(String[] args) {
            launch(args);
        }

        @Override
        public void start(Stage stage) throws Exception {
            VBox root = new VBox();
            Button button = null;
            for (int i = 0; i < 100000; i++) {
                button = new Button();
                button.textProperty().bind(text);
                button.textProperty().unbind(); // if you don't call this, you can notice the increased memory of the java process.
            }
            root.getChildren().add(button);
            Scene scene = new Scene(root);
            stage.setScene(scene);
            stage.show();
        }

        private StringProperty text = new SimpleStringProperty("test");
    }
    Now the problem is: how can I know when a variable is no longer needed or is overwritten by a new instance?
    Just an example:
    I have a ListView with a Cell Factory. In the updateItem method, I add a ContextMenu. The textProperty of each MenuItem is bound to a kind of global property, like in the example above. I have to do it in the updateItem method, since the ContextMenu differs depending on the item.
    So every time the updateItem method is called a new ContextMenu is created, which binds some properties, but the old context menus remain in memory.
    I guess there could be many more examples.
    How can I deal with it?

    I've dealt with this situation and created a Jira issue for it, but I also have a workaround that is a bit unwieldy but works. I'll share it with you.
    The bug that deals with this (in part at least): http://javafx-jira.kenai.com/browse/RT-20616
    The solution is to use weak invalidation listeners; however, they are a bit of a pain to use, as you cannot do it with something as simple as "bindWeakly"... and you need to keep a reference around to the wrapped listener, otherwise it will just get garbage collected immediately (as it is only weakly referenced). Some very odd bugs can surface if weak listeners disappear randomly because you forgot to reference them :)
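    To illustrate that last point, here is a minimal, self-contained sketch (generic example code, not taken from my application) of why the real listener must live in a strong field when you register it through a WeakInvalidationListener:
    import javafx.beans.InvalidationListener;
    import javafx.beans.Observable;
    import javafx.beans.WeakInvalidationListener;
    import javafx.beans.property.SimpleStringProperty;
    import javafx.beans.property.StringProperty;

    // Sketch: a weakly registered listener only works as long as something
    // else holds a strong reference to the wrapped listener.
    public class WeakListenerPitfall {
      private final StringProperty source = new SimpleStringProperty("initial");

      // Strong field: keeps the real listener alive for as long as this object lives.
      private final InvalidationListener listener = new InvalidationListener() {
        @Override
        public void invalidated(Observable observable) {
          System.out.println("source is now: " + source.getValue());
        }
      };

      public WeakListenerPitfall() {
        // The property only ever holds a weak reference to 'listener'...
        source.addListener(new WeakInvalidationListener(listener));
        // ...so if 'listener' were a local variable instead of a field, it could be
        // garbage collected at any moment and the notifications would silently stop.
      }

      public static void main(String[] args) {
        WeakListenerPitfall example = new WeakListenerPitfall();
        example.source.set("changed"); // prints, because 'listener' is still strongly referenced
      }
    }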
    Anyway, see the code below; it shows some code that is called from a TreeCell's updateItem method (I've wrapped it in some more layers in my program, but it is essentially the same as an updateItem method):
    public class EpisodeCell extends DuoLineCell implements MediaNodeCell {
      private final WeakBinder binder = new WeakBinder();

      @Override
      public void configureCell(MediaNode mediaNode) {
        MediaItem item = mediaNode.getMediaItem();
        StringBinding episodeRange = MapBindings.selectString(mediaNode.dataMapProperty(), Episode.class, "episodeRange");

        binder.unbindAll();
        binder.bind(titleProperty(), MapBindings.selectString(mediaNode.dataMapProperty(), Media.class, "title"));
        binder.bind(ratingProperty(), MapBindings.selectDouble(mediaNode.dataMapProperty(), Media.class, "rating").divide(10));
        binder.bind(extraInfoProperty(), Bindings.when(episodeRange.isNull()).then(new SimpleStringProperty("Special")).otherwise(episodeRange));
        binder.bind(viewedProperty(), item.viewedProperty());

        subtitleProperty().set("");
      }
    }
    This code makes use of a class called WeakBinder -- it is a helper class that can make weak bindings and keep track of them. When you call unbindAll() on it, all of the bindings it created before are released immediately (although they will also disappear when the Cell itself is garbage collected, which is possible because it only makes weak references).
    I've tested this extensively and it solves the problem of Cells keeping references to objects with much longer life cycles (in my case, the MediaNode passed in has a longer life cycle than the cells, so it is important to bind weakly to it). Before, this would create huge memory leaks (crashing my program within a minute if you kept refreshing the Tree); now it survives for hours at least, and the heap usage stays in a fixed range, which means it is correctly able to collect all garbage.
    The code for WeakBinder is below (you can consider it public domain, so use it as you see fit, or write your own):
    package hs.mediasystem.util;

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    import javafx.beans.InvalidationListener;
    import javafx.beans.Observable;
    import javafx.beans.WeakInvalidationListener;
    import javafx.beans.property.Property;
    import javafx.beans.value.ObservableValue;

    public class WeakBinder {
      private final List<Object> hardRefs = new ArrayList<>();
      private final Map<ObservableValue<?>, WeakInvalidationListener> listeners = new HashMap<>();

      public void unbindAll() {
        for(ObservableValue<?> observableValue : listeners.keySet()) {
          observableValue.removeListener(listeners.get(observableValue));
        }

        hardRefs.clear();
        listeners.clear();
      }

      public <T> void bind(final Property<T> property, final ObservableValue<? extends T> dest) {
        InvalidationListener invalidationListener = new InvalidationListener() {
          @Override
          public void invalidated(Observable observable) {
            property.setValue(dest.getValue());
          }
        };

        WeakInvalidationListener weakInvalidationListener = new WeakInvalidationListener(invalidationListener);

        listeners.put(dest, weakInvalidationListener);
        dest.addListener(weakInvalidationListener);
        property.setValue(dest.getValue());

        hardRefs.add(dest);
        hardRefs.add(invalidationListener);
      }
    }
    Let me know if this solves your problem.
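    For the ContextMenu scenario from the question, the same idea could look roughly like this (a sketch only - ItemCell, globalText and the single MenuItem are hypothetical placeholders, not code from my application):
    import hs.mediasystem.util.WeakBinder;
    import javafx.beans.property.StringProperty;
    import javafx.scene.control.ContextMenu;
    import javafx.scene.control.ListCell;
    import javafx.scene.control.MenuItem;

    // Sketch: rebuild the ContextMenu on every updateItem call, but route the bindings
    // through a WeakBinder so discarded menus don't keep the global property's listener
    // list (and themselves) alive forever.
    public class ItemCell extends ListCell<String> {
      private final WeakBinder binder = new WeakBinder();
      private final StringProperty globalText; // some long-lived, "global" property

      public ItemCell(StringProperty globalText) {
        this.globalText = globalText;
      }

      @Override
      protected void updateItem(String item, boolean empty) {
        super.updateItem(item, empty);

        binder.unbindAll(); // release the bindings made for the previous item

        if (empty || item == null) {
          setText(null);
          setContextMenu(null);
          return;
        }

        setText(item);

        MenuItem menuItem = new MenuItem();
        binder.bind(menuItem.textProperty(), globalText); // weak: old menus can be collected
        setContextMenu(new ContextMenu(menuItem));
      }
    }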

  • Huge memory leaks in using PL/SQL tables and collections

    I have faced a very interesting problem recently.
    I use PL/SQL tables ( Type TTab is table of ... index by binary_integer; ) and collections ( Type TTab is table of ...; ) in my packages very widely, and I have noticed a very strange thing Oracle does. It seems to me that there are memory leaks in the PGA when I use PL/SQL tables or collections. Let me give a little example.
    CREATE OR REPLACE PACKAGE rds_mdt_test IS
      TYPE TNumberList IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
      PROCEDURE test_plsql_table(cnt INTEGER);
    END rds_mdt_test;

    CREATE OR REPLACE PACKAGE BODY rds_mdt_test IS
      PROCEDURE test_plsql_table(cnt INTEGER) IS
        x TNumberList;
      BEGIN
        FOR indx IN 1 .. cnt LOOP
          x(indx) := indx;
        END LOOP;
      END;
    END rds_mdt_test;
    I run the following test code:
    BEGIN
    rds_mdt_test.test_plsql_table (1000000);
    END;
    and see that my session uses about 40M in PGA.
    If I repeat this example in the same session creating the PL/SQL table of smaller size, for instance:
    BEGIN
    rds_mdt_test.test_plsql_table (1);
    END;
    I see that the amount of PGA memory used by my session has not decreased and is still the same.
    I get the same result if I use collections or varrays instead of PL/SQL tables.
    I have tried some techniques to make Oracle free the memory, for instance rewriting my procedure in the following ways:
    PROCEDURE test_plsql_table(cnt INTEGER) IS
      x TNumberList;
    BEGIN
      FOR indx IN 1 .. cnt LOOP
        x(indx) := indx;
      END LOOP;
      x.DELETE;
    END;

    or

    PROCEDURE test_plsql_table(cnt INTEGER) IS
      x TNumberList;
    BEGIN
      FOR indx IN 1 .. cnt LOOP
        x(indx) := indx;
      END LOOP;
      FOR indx IN 1 .. cnt LOOP
        x.DELETE(indx);
      END LOOP;
    END;

    or

    PROCEDURE test_plsql_table(cnt INTEGER) IS
      x TNumberList;
      empty TNumberList;
    BEGIN
      FOR indx IN 1 .. cnt LOOP
        x(indx) := indx;
      END LOOP;
      x := empty;
    END;
    and so on, but the result was the same.
    This is a huge problem for me, as I have to manipulate very large collections and PL/SQL tables (from tens of thousands of rows to millions of rows), and just a few sessions running my procedure may bring the server down due to lack of memory.
    I cannot understand what Oracle reserves so much memory for (I use local variables) -- is it a bug or a feature?
    I would appreciate any help.
    I use an Oracle 9.2.0.1.0 server under Windows 2000.
    Thank you in advance.
    Dmitriy.

    Thank you, William!
    Your advice about using DBMS_SESSION.FREE_UNUSED_USER_MEMORY was very useful. Indeed it is the tool I was looking for.
    Now I write my code like this
    declare
    type TTab is table of ... index by binary_integer;
    res TTab;
    empty_tab TTab;
    begin
    res(1) := ...;
    res := empty_tab;
    DBMS_SESSION.FREE_UNUSED_USER_MEMORY;
    end;
    I use the construction "res := empty_tab;" to mark all memory allocated to the PL/SQL table as unused, following Tom Kyte's advice. And I could live a happy life if everything were that easy. Unfortunately, some tests I have done show that there is trouble cleaning up the complex nested PL/SQL tables indexed by VARCHAR2 which I use in my current project.
    Let me give another example.
    CREATE OR REPLACE PACKAGE rds_mdt_test IS
      TYPE TTab0 IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;

      TYPE TRec1 IS RECORD(
        NAME VARCHAR2(4000),
        rows TTab0);
      TYPE TTab1 IS TABLE OF TRec1 INDEX BY BINARY_INTEGER;

      TYPE TRec2 IS RECORD(
        NAME VARCHAR2(4000),
        rows TTab1);
      TYPE TTab2 IS TABLE OF TRec2 INDEX BY BINARY_INTEGER;

      TYPE TStrTab IS TABLE OF NUMBER INDEX BY VARCHAR2(256);

      PROCEDURE test_plsql_table(cnt INTEGER);
      PROCEDURE test_str_tab(cnt INTEGER);

      x TTab2;
      empty_tab2 TTab2;
      empty_tab1 TTab1;
      empty_tab0 TTab0;
      str_tab TStrTab;
      empty_str_tab TStrTab;
    END rds_mdt_test;

    CREATE OR REPLACE PACKAGE BODY rds_mdt_test IS
      PROCEDURE test_plsql_table(cnt INTEGER) IS
      BEGIN
        FOR indx1 IN 1 .. cnt LOOP
          FOR indx2 IN 1 .. cnt LOOP
            FOR indx3 IN 1 .. cnt LOOP
              x(indx1).rows(indx2).rows(indx3) := indx1;
            END LOOP;
          END LOOP;
        END LOOP;
        x := empty_tab2;
        dbms_session.free_unused_user_memory;
      END;

      PROCEDURE test_str_tab(cnt INTEGER) IS
      BEGIN
        FOR indx IN 1 .. cnt LOOP
          str_tab(indx) := indx;
        END LOOP;
        str_tab := empty_str_tab;
        dbms_session.free_unused_user_memory;
      END;
    END rds_mdt_test;
    1. Running the script
    BEGIN
    rds_mdt_test.test_plsql_table ( 100 );
    END;
    I see that the PGA memory usage of my session is close to zero. So I can judge that the nested PL/SQL table indexed by BINARY_INTEGER, and the memory allocated to it, were cleaned up successfully.
    2. Running the script
    BEGIN
    rds_mdt_test.test_str_tab ( 1000000 );
    END;
    I can see that the plain PL/SQL table indexed by VARCHAR2, and the memory allocated to it, were cleaned up as well.
    3. Changing the package's type
    TYPE TTab2 IS TABLE OF TRec2 INDEX BY VARCHAR2(256);
    and running the script
    BEGIN
    rds_mdt_test.test_plsql_table ( 100 );
    END;
    I see that my session uses about 62M in PGA. If I run this script twice, the memory usage doubles, and so on.
    I get the same result if I rewrite not the outermost but the middle PL/SQL type:
    TYPE TTab1 IS TABLE OF TRec1 INDEX BY VARCHAR2(256);
    Only if I change the third, most deeply nested type:
    TYPE TTab0 IS TABLE OF NUMBER INDEX BY VARCHAR2(256);
    do I get the desired result -- all the memory is returned to the OS.
    So, as far as I can judge, in some cases Oracle does not clean up complex PL/SQL tables indexed by VARCHAR2.
    Is that true or not? Perhaps there are some peculiarities in how tables indexed this way should be used?

  • Huge memory leaks after upgrading from kernel 2.6.38(?)

    My system began leaking memory some months ago and I have little to no idea what is causing it. The closest hint I can give is that I think these problems started around the upgrade from Linux 2.6.38 to 2.6.39 and have continued with 3.0 as well. Sometimes my whole system freezes due to extensive swapping (the HDD usage LED stays lit); sometimes it stops after consuming almost all of my 4 GB of RAM.
    I think the leak has to come from inside the kernel, because all system monitoring apps claim no application is eating too much RAM, yet system memory consumption is at 100% or even dipping into swap. Even after I kill Xorg the memory usage stays at least at 2.0 GB, when it should be less than half a gig.
    I'd like to put the blame on Nouveau, because it was giving me a hell of a time back with Linux 2.6.39. It had a habit of freezing my system when re-enabling suspended compositing (both with KWin and Mutter). Video playback was also corrupted sometimes. Those problems have since been fixed with Linux 3.0.
    I cannot name any specific app that might trigger this memory leaking. I don't play any games and mainly use Firefox, Chromium and VLC day in, day out.
    There seems to be absolutely no way of reclaiming the reserved memory other than rebooting the whole system. I have even tried unloading Nouveau.
    I am currently running the [testing] repo with ~daily upgrades.
    Last edited by Verge (2011-08-16 14:04:09)

    OK, this kind of surprised me. No wonder I couldn't find any relevant stuff while googling for kernel issues. I have had Strigi disabled for some time now because it is miserably broken, but I now disabled the whole Nepomuk too. Let's see if my problem now goes away or what...
    The release of KDE 4.7 Beta 1 actually lines up with the launch of Linux 2.6.39 so this really might be the case. I moved to KDE 4.7 already with the first beta.
    Last edited by Verge (2011-08-16 14:30:37)

  • Huge memory leak when closing PDF from Hyperlink

    I was wondering if anyone else has experienced this issue with Adobe Reader 11.0.10 on Windows 7 64bit:
    1. I have a list of hyperlinks in an Access Table to certain PDF files on a local network folder.
    2. Clicking the hyperlink opens the corresponding PDF.
    The PDF file opens just fine and renders normally. The issue is when I attempt to close the PDF. This results in an instant memory leak that grows to 4 GB in under 10 seconds, and the system crashes completely. I can reproduce the crash in Safe Mode as well. I am able to open/close the PDF from its source location normally without incident. A clean uninstall/reinstall produces the same results.
    Downgrading to Adobe Reader 10 fixes the problem completely. I can reproduce the problem on all computers on my network (all running Windows 7, 32-bit or 64-bit) by upgrading to Reader 11.0.10.
    I prefer to keep my software updated to prevent vulnerabilities, so any help would be appreciated.

    That is very strange because it is a 32-bit program and cannot (according to popular wisdom) grow over 2 GB. Also, if it were to reach 2 GB it would simply crash, not break the system.
    Do you have a screen shot showing the 4 GB? There might be clues there what is happening.

  • Huge Memory Leak - Need help.

    Hi,
    There is a huge memory leak in our application, and because of it there are frequent session timeouts. When we analysed the heap dump using the Memory Analyzer Tool, we got the following leak suspects:
    One instance of "org.apache.jasper.compiler.JspRuntimeContext" loaded by "org.apache.catalina.loader.StandardClassLoader @ 0x87ff8098" occupies 161,832,200 (23.30%) bytes. The memory is accumulated in one instance of "java.util.concurrent.ConcurrentHashMap$Segment[]" loaded by "<system class loader>".
    One instance of "org.apache.catalina.tribes.tipis.LazyReplicatedMap" loaded by "org.apache.catalina.loader.StandardClassLoader @ 0x87ff8098" occupies 133,578,920 (19.23%) bytes. The memory is accumulated in one instance of "java.util.concurrent.ConcurrentHashMap$Segment[]" loaded by "<system class loader>".
    185 instances of "org.apache.jasper.runtime.BodyContentImpl", loaded by "org.apache.catalina.loader.StandardClassLoader @ 0x87ff8098" occupy 133,502,776 (19.22%) bytes.
    What could be the root cause of this, and how can we resolve it?

    First, in general you have not provided enough details to get any help. OS? OS version? Hardware specifics?
    Second, are you running custom code? If so, post it so you can get help.
    Third, if you're not running custom code, the odds of you having a "huge memory leak" are pretty small.

  • Firefox 4.0 RC has huge memory leak + slow down problem

    Firefox 4.0 RC seems to have a huge memory leak issue. After leaving the program open for around half an hour, the RAM usage climbs to over 1 GB.
    However, I have 12 GB of physical RAM, so Firefox taking 1 GB for itself is not too big of a problem (and is probably a blessing, as it means it's caching more data), but the problem is that the UI begins to slow down and become clunky after about 30 minutes.
    Browsing, switching tabs, even typing becomes jittery and not very responsive.
    This happens regardless of how many tabs I have open.
    Now it's also crashing every once in a while.

    Even with Firefox 4.0 (Release) running in Safe Mode, with all add-ons and themes disabled, I'm still inclined to think there's something screwy going on here.
    I was watching Page Faults/sec, Page File Bytes and Working Set in Performance Monitor and tailing the Privoxy log for requests. Even with Firefox minimized and "doing nothing" (making no requests, anyway), over the space of a 10 minute period the Working Set grew from 244,375,552 bytes to 274,939,004 bytes (averaging 50,939 bytes/second). This behaviour doesn't seem consistent though - sometimes it doesn't seem to grow at all.
    Additionally, the Page Faults/sec went nuts, accompanied by a step in Page File Bytes and Working Set, whenever a request was made to http://safebrowsing.clients.google.com/safebrowsing/downloads, which seems to happen on a regular basis (approximately every 30 minutes).

  • Firefox 17 huge memory leak

    Since the last update today, Firefox seems to have developed a huge memory leak. After one hour idle on some pages it was up to 750 MB of memory usage, with a constant 100% load on one CPU core.
    It looks like something is not going well with the garbage collection if I visit http://www.robertsspaceindustries.com/forums/forum/forum-category-2/ for example.
    I had to disable graphics hardware acceleration because of the newly introduced ClearType font bug.
    Edit: It may also be related to mail.google.com, which I have running in an app tab.

    For what it's worth, I have had exactly the same problem (except on Linux) since the upgrade to 17.0. When I disabled Greasemonkey, the problem instantly disappeared, so it looks like compatibility with Greasemonkey got broken.
    Likewise, I'm very disappointed with the frequent bugs/annoyances that come with the rolling upgrade system; I find myself more and more using Chromium because of it. Mozilla really needs to give people the option to run a stable branch that receives security updates only.

  • HUGE memory leak when using MP3

    I downloaded the new FME 2.0 and noticed they added an option to stream in MP3 format. DO NOT USE that mode (!). If you start streaming in MP3 format you will get a huge memory leak on the client side. Your browser starts eating memory like crazy, and it will bury your machine within an hour or two by taking all the memory available. I have proved this on several machines and notified Adobe. They confirmed it as a bug and promised to fix it in future releases. Nice, huh? And what are people supposed to do with the current build?

    A new build, 2.0.1.1114, has been posted. Please try it and let us know if you still face this issue with that build.

  • Huge memory leak in java jvm after update 2 for Snow leopard

    Since I updated to Java Update 2 for Snow Leopard, my JVM suddenly grows massive (10 GB+ real memory, with -Xmx=3500m), consuming all memory and rendering my iMac unusable. This does not happen predictably, but it does happen several times a day now, requiring me to power off and on again.
    I had been living happily with update 1 with no such problem.
    I need to either go back to update 1 (should have it on Time Machine) or find a solution for this problem.

    Our application is a J2EE-based commercial application serving specific customers, with about 120 access requests an hour.
    We're doing a stress test on the test server. The strange memory leak occurred at 1:20 am this morning while we were out of the office, and no job was scheduled to run at that time. So I tend to think that something happened inside OC4J.
    I have used OptimizeIt to monitor the heap status. However, as the memory leak problem occurs only very occasionally, and that tool slows our server badly, we are currently using no profiling tools.

  • Suddenly, I'm getting a huge memory leak

    Since somewhere around Firefox version 36, and continuing to the present (version 37.0.1), I have been getting a major memory leak. I am using Mac OS 10.9.5. This is what I've done so far: reduced the number of tabs (now at about 4 or 5), and kept a minimal number of plugins and extensions (maybe 5). I've reset Firefox, but the leak still occurs.
    This happens when I leave Firefox open overnight. It has only been happening for about the last month. In the morning I find that all my apps have been paused for lack of memory. Looking at Activity Monitor I see that my kernel_task is using 9.5 GB of memory (when it should be about 1 GB). Quitting Firefox solves the problem. But this has never happened before.
    Also, sometimes Firefox will crash, saying the Shockwave plugin quit. I have updated to the latest Flash plugin, but it still happens.

    The three Add-ons you mentioned are extensions (Firefox/Tools > Add-ons > Extensions).
    Note that a problematic extension can cause issues like memory leaks.
    If it works in Safe Mode and in normal mode with all extensions (Firefox/Tools > Add-ons > Extensions) disabled then try to find which extension is causing it by enabling one extension at a time until the problem reappears.
    Close and restart Firefox after each change via "Firefox > Exit" (Windows: Firefox/File > Exit; Mac: Firefox > Quit Firefox; Linux: Firefox/File > Quit)
    *https://support.mozilla.org/kb/Troubleshooting+extensions+and+themes
    You can try to close the browsing area and only leave the menu bar visible to see if the leak still happens.
    What I meant above was plugins like Flash or Java (Firefox/Tools > Add-ons > Plugins) that might be used on tabs if you keep the browsing area open with some tabs with web pages.
    *https://support.mozilla.org/kb/Troubleshooting+plugins

  • Huge Memory Leak, CPU Usage

    We are noticing a very bad memory leak in several of our apps. See the code below. It's called every 100 ms to process any messages, as we use this like a queue system. Overnight, not a single message was sent, as the app was just sitting idle on my screen. My server logs confirm this; not a single post was made. However, memory usage jumped 300 MB, and by the end of today my browser is going to crash when it runs out of memory.
    The memory leak seems to be directly related to the setTimeout function, as that is the only thing being called here; there was nothing put on the queue all night.
    Specifically, I was at roughly 50 MB of memory usage yesterday afternoon. After a bunch of testing, the memory usage was up to 175 MB. Overnight we are now over 475 MB. This is the internal memory used by Flash, and my Activity Monitor confirms it.
    This issue happens on OS X and Windows XP, Flash 9 and Flash 10, in 3 different applications we have that do this sort of queue/setTimeout loop.
    Anyone else seeing this?
    How can I work around this?
    Thanks,
    Ben

    "Ben Spink" <[email protected]> wrote in
    message
    news:gonn46$afd$[email protected]..
    > How would either of those resolve the issue?
    AFAIK, there are no inbuilt memory leak problems with using
    an enterFrame
    event handler. LiveCycle Data Services will push the
    information to you, so
    you don't have to constantly try to pull, so to speak..
    > I'm just looking for a fast timer situation that is
    polling my queue for
    > any
    > requests that need to be sent.
    >
    > More importantly...why does this issue exist? Why hasn't
    this issue been
    > fixed...it seems its been around for quite some time.
    It's possible that it's not Flex, but you. Are you constantly
    adding event
    listeners without removing them? Are you removing objects
    from the stage
    without removing all references to them?
    > I can't leave my browser open in the background if I
    have a flex app open.
    > Its going to hang my browser and system...as its already
    done during a a
    > live
    > demo to a customer.
    >
    > There are other bugs, and I can excuse them, but
    something as basic as
    > this
    > creating a memory leak is really terrible.
    I wouldn't be so quick to blame Flex. You have to make sure
    you clean up
    after yourself, or you can cause these types of issues with
    your code.

  • Huge Memory Leak - Simple To Reproduce

    I have an application that updates the display every so often using a timer, and as I run the application it consumes more and more memory. After some extensive effort I narrowed it down to a very simple scenario:
    The code below creates a 40x30 matrix of objects that extend UIComponent (the same problem exists if I extend Sprite). A timer fires every 3 seconds and repaints each object with a random color on updateDisplayList(). Running this (whether from Flex IDE or swf) produces an infinitely increasing memory leak, even though no new memory should be allocated on each timer event. I see an increase of roughly .5MB to 1MB with each call. Calling gc() explicitly (which I know I shouldn't) doesn't help. I'm using Flex Builder 3.
    What am I missing? Is this a known bug? Any comment would be greatly appreciated.
    <?xml version="1.0"?>
    <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" applicationComplete="startUp();" horizontalAlign="left">
        <mx:Script>
            <![CDATA[
                private var _myUIObjects:Array = new Array();
                private var _timer:Timer = new Timer(3000);

                private function startUp():void {
                    var myUIObject:MyUIObject;
                    for (var i:int = 0; i < 1200; i++) {
                        myUIObject = new MyUIObject();
                        _myUIObjects.push(myUIObject);
                        canvas.addChild(myUIObject);
                    }
                    for (var iRow:int = 0; iRow < 30; iRow++) {
                        for (var iCol:int = 0; iCol < 40; iCol++) {
                            myUIObject = _myUIObjects[iRow * 40 + iCol];
                            myUIObject.x = iCol * 18;
                            myUIObject.y = iRow * 18;
                            myUIObject.width = 15;
                            myUIObject.height = 15;
                        }
                    }
                    _timer.addEventListener(TimerEvent.TIMER, handle_timer);
                    _timer.start();
                }

                private function handle_timer(event:Event):void {
                    for (var i:int = 0; i < 1200; i++) {
                        MyUIObject(_myUIObjects[i]).invalidateDisplayList();
                    }
                }
            ]]>
        </mx:Script>
        <mx:Canvas id="canvas" width="1000" height="1000"/>
    </mx:Application>
    package {
        import mx.core.UIComponent;

        public class MyUIObject extends UIComponent {
            protected override function updateDisplayList(unscaledWidth:Number, unscaledHeight:Number):void {
                this.graphics.beginFill(Math.random() * 500000);
                this.graphics.drawRect(0, 0, 15, 15);
            }
        }
    }

    You are missing a call to graphics.clear().  Your code is actually adding
    another fill on top of previously existing fills.

  • Memory leak on SunOne Web Server 6.1 on application reload

    Hi!
    I am pretty sure that I have found a memory management problem in Sun ONE Web Server 6.1.
    It started with an OutOfMemory error we got under heavy load. After some profiling with JProfiler I didn't find any memory leaks in the application. Even under heavy load (generated by myself) I can't find anything; what's more, I can't reproduce the error! The memory usage is about 20 MB and does not go up.
    However, it is pretty simple to see the following behavior:
    [1] Restart the server (to have a clear picture) and wait a little for memory usage to stabilize.
    [2] In the application directory, touch .reload or one of the classes: the memory usage goes up by another 50 MB (a huge amount, taking into account the fact that it used only 20 MB under any load before). Do this another time and another 20 MB is gone, etc.
    JProfiler marks this as memory used by classes, and it can be clearly seen that the GC can't release most of it.
    I AM sure it is not the application that takes all the memory.
    Another hint: after making the server reload the application, I can see that the number of threads goes up by ~10-20 threads ON EVERY RELOAD. The number of threads goes back down over time, but the memory usage does not.
    My system:
    Sparc Solaris 9, Java 1.4.2_04-b05, Sun ONE Web Server 6.1 SP5
    Evgeny

    > my guess is that - because of '.reload', the web container tries to
    > recompile all the classes that you use within your web application, and
    > hence the memory growth is spiking up.

    What do you mean by "tries to recompile"? The classes in WEB-INF are already compiled! And I have only ~5 JSPs (the largest part of the application is complicated business logic).
    If you are talking about reloading them, then yes, that's the purpose of .reload, isn't it? :) But it seems that the container uses the memory for its own classes: the memory used by my classes doesn't really grow that much (if at all) after a reload (according to the profiler).
    Also, the real problem is that the memory usage grows too much for too long (I have never seen it go down) and thus ends with OutOfMemory.

    > if you are seeing the memory growth to be flat in the stress environment,
    > then I am not sure why you think that there is a memory leak?

    There is no memory leak in the stress environment. There is a memory leak while reloading the application. It is a memory hog for sure (~20-30 MB for every reload). Memory leak? It seems that way, because I can't see the memory usage go down, and after a lot of reloads OutOfMemory is thrown.

    > also, what is the jvm heap that you use? did you try jvm tuning options
    > like -XX:+AggressiveHeap?

    256 MB. I can set it bigger, but how do I know that it will not just delay the problem?
    Thanks for the response.
    Evgeny
