Memory leak caused by de-serialization

I have a memory leak.
I am "pretty" sure it is related to serialization.
for(;;) {
  sok.receive(dgPak);
  ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(dgPak.getData()));
  MyInterface x = (MyInterface) ois.readObject();
  x.go();
  x = null;
  System.gc();
}
After the above code loops 100-200 times, I always get an OutOfMemoryError.
Important:
public class MyClass implements MyInterface {
  public void go() { /* ... */ }

  public void finalize() {
    System.out.println("object disposed. memory released. but, what might be leaking here??");
  }
}
In my testing, go() and finalize() execute the same number of times.
So... my guess is that allocating memory via "new" and via de-serialization behaves differently, right?
That must be related to the problem?
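One variable worth eliminating here (a sketch, not a claimed diagnosis; readOne and its payload handling are my own illustrative names, not from the post): the loop above never closes the ObjectInputStream. Over a ByteArrayInputStream it holds no OS resources, but closing each stream per iteration is cheap and removes one suspect from the experiment:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.ObjectInputStream;

// Hedged sketch: the same per-datagram deserialization as above, but the
// stream is closed every iteration via try-with-resources.
public class ReceiveLoop {

    // Deserialize one object from a datagram payload; the stream (and its
    // internal handle table) is released when the try block exits.
    static Object readOne(byte[] payload) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois =
                 new ObjectInputStream(new ByteArrayInputStream(payload))) {
            return ois.readObject();
        }
    }
}
```

In the original loop this would replace the two ois lines with something like MyInterface x = (MyInterface) readOne(dgPak.getData());.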

I forgot to include one line, but I didn't see why it made a difference:
ConcurrentMap<Connection, Boolean> connPool = this.createConnPool();
for(;;) {
  sok.receive(dgPak);
  ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(dgPak.getData()));
  MyInterface foo = (MyInterface) ois.readObject();
  foo.assignSharedMap(connPool);  // <-- forgot to include this one line.
  foo.doStuff();
  foo = null;
  System.gc();
}
The number of de-serializations still equals the number of finalize() invocations.
Note: I never add new connections to the pool. It "looks" like a ConcurrentMap does not allow objects that reference it to completely die. (If this behaviour is to be expected, my apologies; could you explain a safe way for multiple objects to share a common Collection?)
This code works fine:
for(;;) {
  sok.receive(dgPak);
  ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(dgPak.getData()));
  MyInterface foo = (MyInterface) ois.readObject();
  foo.doStuff();
  foo = null;
  System.gc();
}
At this stage of debugging I could have tested with just a regular java.util.Map<Connection, Boolean>, but I forgot. I am using a completely different approach now, so I'm not going back to test, though I bet it is related to java.util.concurrent.
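On the question of a safe way for multiple objects to share a common Collection: as far as I know, sharing a ConcurrentMap by reference is safe in itself, because a map only retains the entries put into it, not the objects that merely hold a reference to it. A minimal sketch (Worker, runWorkers, and the String key type are hypothetical stand-ins for the Connection pool in the post):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hedged sketch: many short-lived workers sharing one ConcurrentMap.
// Holding a reference *to* the map does not make the map hold a
// reference *back*; only entries put into the map are retained by it.
public class SharedMapDemo {

    static class Worker {
        private final ConcurrentMap<String, Boolean> pool;
        Worker(ConcurrentMap<String, Boolean> pool) { this.pool = pool; }
        int poolSize() { return pool.size(); }
    }

    // Create many workers against one shared map; the map never grows,
    // so nothing prevents each Worker from being garbage collected.
    static int runWorkers(int n) {
        ConcurrentMap<String, Boolean> shared = new ConcurrentHashMap<>();
        shared.put("conn-1", Boolean.TRUE);
        int size = 0;
        for (int i = 0; i < n; i++) {
            size = new Worker(shared).poolSize();  // worker becomes garbage here
        }
        return size;  // still 1: the map holds no reference to any Worker
    }
}
```

If the deserialized objects really were kept alive in the follow-up version, the usual suspect would be something inside assignSharedMap() or doStuff() registering the object into the shared map (or another long-lived structure), rather than the ConcurrentMap itself.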

Similar Messages

  • [LPX] Drummer/Drum Kit Designer memory leak causing crash

    After working with LPX for a couple of days after release with almost no problems save for a few system overload errors that always seemed to sort themselves out, I loaded up a project I had been working on to find that it crashed every time I tried to play the track. Other, smaller projects, still played in varying degrees, some having to wait for a long time seemingly for samplers to load up etc.
    I did some testing (painstaking, considering that I was having to force-quit the application for my computer to even run again; as an aside there were never any error reports for whatever reason) and found that it appears to be Drummer/Drum Kit Designer that's causing massive memory usage. Looking at the activity monitor when loading the project revealed that the memory usage would just go up and up until it maxed out my (rather modest) 4GB RAM, at which point there would be no free memory and the computer would slow almost to a standstill. Removing all instances of Drum Kit Designer and Drummer reduces the load to a point where it is once again manageable.
    This seems like a fault in Drummer, I'm thinking possibly a memory leak. Has anyone else been experiencing this problem/anyone have any thoughts as to how it could be fixed?
    Thanks in advance,
    Jasper

    This is not a memory bug. It's simply the nature of the new Drummer.
    Drummer uses a LOT of samples. It's a 13GB download for a reason. You will need more than 4GB to use drummer and not run into issues.
    The nature of the modern Logic engine - which seems unchanged from v9 - makes it very bad at telling when it's running out of memory. Logic will simply freeze or crash most of the time it runs out. The freeze would be down to it using literally every last MB of your RAM without overstepping the boundary, and the crash would be down to it claiming to need more real RAM than you have spare.
    Producer kits use 5.1 surround samples, and submix them to stereo in a way that positions them around the space. Whilst a doddle for a modern machine, your Mac is rather long in the tooth to be trying to do that whilst struggling to handle 1.2GB patches with 4GB of RAM total.

  • Help needed: Memory leak causing system crashing...

    Hello guys,
    As helped and suggested by Ben and Guenter, I am opening a new post in order to get help from more people here. A little background first...  
    We are doing LabVIEW DAQ using a cDAQ9714 module (with AI card 9203 and AO card 9265) at a customer site. We run the executable on an NI PC (PPC-2115), and a couple of times (3 so far) the PC simply froze (it was back to normal after a reboot). After monitoring the code running on my own PC for 2 days, I noticed there is a memory leak (memory usage increased 6% after one day of running). Now the question is: where is the leak?
    As a newbie in LabVIEW, I tried to figure it out by myself, but not very successfully so far. So I think it's probably better to post my code here so you experts can help me with some suggestions. (Ben, I also attached the block diagram in PDF for you.) Please forgive me that my code is not written in good manner - I'm not really a trained programmer but more of a self-educated user. I put all the sequence structures in flat as I think this might be easier to read, which makes it quite wide, really wide.
    This is the only VI for my program. Basically what I am doing is the following:
    1. Initialization of all parameters
    2. Read seven 4-20mA current inputs from the 9203 card
    3. Process the raw data and calculate the "corrected" values (I used a few formula nodes)
    4. Output 7 4-20mA current via 9265 card (then to customer's DCS)
    5. Data collection/calculation/outputting are done in a big while loop. I set the wait time to 5 secs to save the CPU some juice
    6. There is a configuration file I read/save every cycle in case system reboot. Also I do data logging to a file (every 10min by default).
    7. Some other small things like local display and stuff.
    Again, I know my code is probably a mess and hard for you guys to read, but I truly appreciate any comments you provide! Thanks in advance!
    Rgds,
    Harry
    Attachments:
    Debug-Harry_0921.vi ‏379 KB
    Debug-Harry_0921 BD.pdf ‏842 KB

    Well, I'll at least give you points for neatness. However, that is about it.
    I didn't really look through all of your logic but I would highly recommend that you check out the examples for implementing state machines. Your application suffers greatly in that once you start, you have basically jumped off the cliff. There is no way to alter your flow. Once in the sequence structure you MUST execute every frame. If you use a state machine architecture you can take advantage of shift registers and eliminate most of your local variables. You will also be able to stop execution if necessary, such as on a user abort or an error. Definitely look at using subVIs. Try to avoid implementing most of your program in formula nodes. You have basically written most of your processing there. While formula nodes are easier for very complex equations, most of what you have can easily be done in native LabVIEW code. Also, if you create subVIs you can iterate over the data sets. You don't need to duplicate the code for every data set.
    I tell this to new folks all the time. Take some time to get comfortable with data flow programming. It is a different paradigm than sequential text-based languages, but once you learn it, it is extremely powerful. Allow your data flow to control execution rather than relying on the sequence frame structure. A state machine will also help quite a bit.
    Mark Yedinak
    "Does anyone know where the love of God goes when the waves turn the minutes to hours?"
    Wreck of the Edmund Fitzgerald - Gordon Lightfoot

  • Massive Memory Leak Caused by USB Connection

    I have a Sansa Fuze MP3 player. I noticed when I connected it via USB to both an iMac 27" late 2009 and a MacBook Pro 15-inch early 2011, both machines would grind to a halt, get very hot, and run the CPUs at full blast. The Finder would quickly become unresponsive. There were huge (like 500 GB) virtual memory files. Disk Utility could not repair the drive. Reformatting the MP3 player using its System menu finally cured the problem. Seems like Lion should have a better way of handling misbehaving peripherals.

  • SystemUIServer and loginwindow process showing memory leak causing Lion to stop responding

    I have shut down almost all apps, except Activity Monitor, and both SystemUIServer and loginwindow process continue to grow over 1GB. 
    I could kill SystemUIServer and it will relaunch itself and the symptom resurfaces quickly.  loginwindow couldn't be killed.
    The virtual memory involved are even larger in terms of 10GB to 20GB until my hard disk runs in panic mode.
    Restarting my Lion doesn't fix the issue. Logging in with a different user profile does not show the same symptom.
    Is this a Lion bug?

    I reinstalled Lion OS X using the Command-R feature at startup. That solved everything.

  • Memory leaks with Third party classes

    Hello all,
    This is a fairly well-known problem. But I guess my problem gives a new dimension to it.
    I have an application which is developed by me. This application ideally needed to use third party classes ( obviously no source code is supplied ). These third party classes provide extra functionality required.
    The problem is, when I don't use third party classes in my application, every thing is fine. When I include third party classes, I am having memory leaks.
    Then I tried to investigate the memory leaks with the OptimizeIt tool. It is new to me. As of now, I understand we can identify where the memory leaks are occurring.
    Finally, the problem is: in order to solve this, I need some patches in the code. But I don't have source code for those classes. How do I solve this problem?
    For example,
    I use a third party classes in my code like this,
    ThirdPartyMemoryLeakClass obj = new ThirdPartyMemoryLeakClass();
    This 'obj' is made static, as it takes a lot of time to create this object. Obviously this object contains several references to other objects, which I can't control.
    In the process of reusing this object, I am getting memory leaks.
    Any ideas on how one should deal with this type of situation? What are the issues involved in this case? Are there similar problems that have already been solved? All input is most welcome.
    many thanks for your time.
    Madhav

    Decompile it using jad. Find leak.
    Yes, I too got the idea and tried to decompile those classes and recompile. I had some problems while recompiling. Is this the only way to get rid of this problem?
    I was referring to the powersoft.datawindow.DataStore class. Has anybody here worked with it?
    Can you suggest how to find the memory leak causes? If you needed to find the causes of a memory leak, what would be your approach?
    Madhav
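One pattern that sometimes helps when an expensive object must be reused but leaks when pinned by a static strong reference (a hedged sketch; SoftCache and its factory are my own names, not part of the third-party API): hold the object through a SoftReference, so the GC may reclaim it under memory pressure, and rebuild it on the next use.

```java
import java.lang.ref.SoftReference;
import java.util.function.Supplier;

// Hedged sketch: a softly-referenced cache for an expensive-to-build object.
// The GC may clear the reference under memory pressure; the factory then
// rebuilds the object on the next call to get().
public class SoftCache<T> {
    private final Supplier<T> factory;
    private SoftReference<T> ref = new SoftReference<>(null);

    public SoftCache(Supplier<T> factory) {
        this.factory = factory;
    }

    // Return the cached instance, recreating it if the GC cleared it.
    public synchronized T get() {
        T value = ref.get();
        if (value == null) {
            value = factory.get();
            ref = new SoftReference<>(value);
        }
        return value;
    }
}
```

Whether this applies depends on whether the third-party object can safely be discarded and recreated; if its internal references are the actual leak, only the vendor can fix that.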

  • Memory leak in a function registered using set_restore_function()

    I experience a problem with a memory leak caused by the following function:
    void RestorePhonemesSet(PhonemesSetStructType &phonemesSet, const void *src) {
      const char *p = (const char *) src;
      memcpy(&phonemesSet.len, p, sizeof (int));
      p += sizeof (int);
      memcpy(&phonemesSet.whichFile, p, sizeof (int));
      p += sizeof (int);
      memcpy(&phonemesSet.whichPosition, p, sizeof (int));
      p += sizeof (int);
      phonemesSet.phonemes = (int *)malloc(sizeof(int)*phonemesSet.len);  // <-- here the problematic code
      memcpy(phonemesSet.phonemes, p, sizeof (int) * phonemesSet.len);
    }
    This function is registered using the following call: DbstlElemTraits<PhonemesSetStructType>::instance()->set_restore_function(RestorePhonemesSet);
    The culprit is the malloc memory allocation. If I leave out the malloc the program crashes. If I free the phonemesSet.phonemes memory segment at the end of the restore function I lose data, and if I use the malloc there is a large memory leak while reading every record from the database.
    What should I do to prevent the memory leak?
    Regards,
    markur
    Edited by: 904259 on 2011-12-24 05:42

    The solution is to use the memory already allocated for p; there is no need to allocate new memory for the field. The problematic line should look like: phonemesSet.phonemes = (int *)p;
    The memcpy call on the following line is thus superfluous.
    Edited by: 904259 on 2011-12-24 05:43

  • SSDT Memory Leak

    The simple act of opening and closing packages in SSDT seems to lead to some sort of memory leak. I've tested this on my own solution and managed to run the memory usage of devenv.exe up to over 2GB. The memory required to open and display a package doesn't
    appear to ever get de-allocated when the package is later closed. When opening the same package over again, the memory usage of devenv.exe increases by the same amount as the first time it was opened as well, so this does not appear to be any sort of caching.
    This can also be easily re-created with the adventure works refresh sample package located here: http://msftisprodsamples.codeplex.com/. Simply open and close it over and over, and watch the memory
    usage climb. It doesn't get de-allocated until devenv.exe is closed. Example snapshots of the WS of devenv.exe when opening and closing a package a few times:
    process      state       WS bytes
    devenv.exe   start       188,579,840
    devenv.exe   open pkg    296,566,784
    devenv.exe   close pkg   300,191,744
    devenv.exe   open pkg    327,499,776
    devenv.exe   close pkg   327,434,240
    devenv.exe   open pkg    354,357,248
    devenv.exe   close pkg   349,966,336
    devenv.exe   open pkg    373,522,432
    devenv.exe   close pkg   377,413,632
    devenv.exe   open pkg    397,266,944
    devenv.exe   close pkg   396,820,480
    devenv.exe   open pkg    413,175,808
    devenv.exe   close pkg   413,765,632
    While opening and closing the same package over and over again is hardly a likely scenario to run across, editing my own solution (which contains over 90 packages) very easily leads to devenv.exe running out of memory and crashing.

    Hi Mike,
    For extensions I have SQL Connect from RedGate, and the Python Tools.
    For Add-ins I have SQL Prompt, BIDS Helper, and ANTS.
    I've uninstalled all of these extensions, and add-ins, upgraded to the September 2012 release, and I still have the same issue.
    By opening and closing a package I mean working with an SSIS solution, opening a dtsx package, working on it, closing it - the memory consumption of Data Tools while doing this just keeps rising and rising until it finally runs out of memory.
    Today for instance, I had to go in and modify several dtsx packages to turn off check-constraints on certain destinations. I would typically open a few packages, make the changes, then hit save-all, right-click the toolbar and select close-all-but-this, and continue on with more modifications. At the end of modifying my 90+ packages, I set it up to do a debug run-through - with only my master package open in the editor. It didn't get more than 2 packages in from the master package before it started throwing up an out-of-memory error every few seconds, with the dev environment saying (Not Responding) and the UI not updating. When I looked at how much memory devenv.exe was using, it was up over 2.3GB.
    I've since done several simple tests, like opening an SSIS solution, opening a few packages, closing them, re-opening them, closing them, etc. - the memory consumption of devenv.exe does not go down, even when closing the solution entirely. I've tried it on two different machines now, one that only has the SQL Prompt tool installed, and my workstation which had the above tools installed - and it's the same behaviour with or without those tools.
    I've had a suspicion that there was a memory leak causing issues for a while, as I also noticed it when trying to get to the root of another problem here:
    http://social.msdn.microsoft.com/Forums/en-US/sqlintegrationservices/thread/c578c327-9edd-4bba-bd4b-f1fa0d60cc4b/#0e3dfbfd-d525-4c90-a25f-758d9702ab05

  • JSF: partial page rendering is causing memory leak leading to outofmemory

    JDeveloper 10.1.3.2.0
    JDK: 1.6.0_06
    Operating System: Windows XP.
    I test my application for memory leaks. For that purpose, I use jconsole to monitor java heap space. I have an edit page that has two dependent list components. One displays all countries and the other displays cities of the selected country.
    I noticed java heap space keeps growing as I change country from country list.
    I run garbage collection and memory usage does not go down. If I keep changing the province for 5 minutes, then I hit a java heap space outofmemory exception.
    To narrow down the problem, I removed the second city component and the problem still exists.
    To narrow it down further, I removed autosubmit attribute from the country component and then memory usage stopped increasing as I change country.
    country/city partial page rendering is just an example. I am able to reproduce the same problem on every page where I use partial page rendering. My conclusion is that PPR, or at least the autoSubmit attribute, is causing a memory leak.
    This is really bad. Anyone out there experienced same issue. Any help/advice is highly appreciated !!
    Thanks
    <af:panelLabelAndMessage
    inlineStyle="font-weight:bold;"
    label="Country:"
    tip=" "
    showRequired="true"
    for="CountryId">
    <af:selectOneChoice id="CountryId"
                   valuePassThru="true"
                   value="#{bindings.CountryId.inputValue}"
                   autoSubmit="true"
                   inlineStyle="width:221px"
                   simple="true">
         <af:forEach var="item"
              items="#{bindings.CountriesListIterator.allRowsInRange}">
         <af:selectItem value="#{item.countryId}"
                   label="#{item.countryName}"/>
         </af:forEach>
    </af:selectOneChoice>
    </af:panelLabelAndMessage>
    <af:panelLabelAndMessage
    inlineStyle="font-weight:bold;"
    label="City:"
    tip=" "
    showRequired="true"
    for="CityId">
    <af:selectOneChoice id="CityId"
                   valuePassThru="true"
                   value="#{bindings.CityId.inputValue}"
                   partialTriggers="CountryId"
                   autoSubmit="true"
                   inlineStyle="width:221px"
                   unselectedLabel="--Select City--"
                   simple="true">
         <f:selectItems value="#{backing_CountryCityBean.citiesSelectItems}"/>
    </af:selectOneChoice>
    </af:panelLabelAndMessage>

    Samsam,
    I haven't seen this problem myself, no.
    To clarify - are you seeing this behaviour when running your app in JDeveloper, or when running in an application server? If in JDeveloper, a couple of suggestions:
    * (may not matter, but...) It's not supported to run JDev 10g with JDK 6
    * Have you tried the memory profiler? (http://www.oracle.com/technology/pub/articles/masterj2ee/j2ee_wk11.html)
    Best,
    John

  • How to deal with Memory Leaks, that are caused by Binding

    Hi, I recently noticed (huge?) memory leaks in my application and suspect bindings to be the cause of all the evil.
    I made a little test case, which confirms my suspicion:
    import javafx.application.Application;
    import javafx.beans.property.SimpleStringProperty;
    import javafx.beans.property.StringProperty;
    import javafx.scene.Scene;
    import javafx.scene.control.Button;
    import javafx.scene.layout.VBox;
    import javafx.stage.Stage;
    public class TestAppMemoryLeak extends Application {

        public static void main(String[] args) {
            launch(args);
        }

        @Override
        public void start(Stage stage) throws Exception {
            VBox root = new VBox();
            Button button = null;
            for (int i = 0; i < 100000; i++) {
                button = new Button();
                button.textProperty().bind(text);
                button.textProperty().unbind(); // if you don't call this, you can notice the increased memory of the java process.
            }
            root.getChildren().add(button);
            Scene scene = new Scene(root);
            stage.setScene(scene);
            stage.show();
        }

        private StringProperty text = new SimpleStringProperty("test");
    }
    Now the problem is: HOW can I know when a variable is no longer needed or is overwritten by a new instance?
    Just an example:
    I have a ListView with a Cell Factory. In the updateItem method, I add a ContextMenu. The textProperty of each MenuItem is bound to a kind of global property, like in the example above. I have to do it in the updateItem method, since the ContextMenu differs depending on the item.
    So every time the updateItem method is called a new ContextMenu is created, which binds some properties, but the old context menus remain in memory.
    I guess there could be many more example.
    How can I deal with it?

    I've dealt with this situation and created a Jira issue for it, but I also have a work-around that is a bit unwieldy but works. I'll share it with you.
    The bug that deals with this (at least in part): http://javafx-jira.kenai.com/browse/RT-20616
    The solution is to use weak invalidation listeners; however, they are a bit of a pain to use, as you cannot do it with something as simplistic as "bindWeakly"... and you need to keep a reference around to the wrapped listener, otherwise it will just get garbage collected immediately (as it is only weakly referenced). Some very odd bugs can surface if weak listeners disappear randomly because you forgot to reference them :)
    Anyway, see this code below, it shows you some code that is called from a TreeCell's updateItem method (I've wrapped it in some more layers in my program, but it is essentially the same as an updateItem method):
    public class EpisodeCell extends DuoLineCell implements MediaNodeCell {
      private final WeakBinder binder = new WeakBinder();

      @Override
      public void configureCell(MediaNode mediaNode) {
        MediaItem item = mediaNode.getMediaItem();
        StringBinding episodeRange = MapBindings.selectString(mediaNode.dataMapProperty(), Episode.class, "episodeRange");

        binder.unbindAll();
        binder.bind(titleProperty(), MapBindings.selectString(mediaNode.dataMapProperty(), Media.class, "title"));
        binder.bind(ratingProperty(), MapBindings.selectDouble(mediaNode.dataMapProperty(), Media.class, "rating").divide(10));
        binder.bind(extraInfoProperty(), Bindings.when(episodeRange.isNull()).then(new SimpleStringProperty("Special")).otherwise(episodeRange));
        binder.bind(viewedProperty(), item.viewedProperty());

        subtitleProperty().set("");
      }
    }
    This code makes use of a class called WeakBinder -- it is a helper class that can make weak bindings and keep track of them. When you call unbindAll() on it, all of the bindings it created before are released immediately (although they will also disappear when the Cell itself is garbage collected, which is possible because it only makes weak references).
    I've tested this extensively and it solves the problem of Cells keeping references to objects with much longer life cycles (in my case, the MediaNode passed in has a longer lifecycle than the cells, so it is important to bind weakly to it). Before, this would create huge memory leaks (crashing my program within a minute if you kept refreshing the Tree)... now it survives hours at least, and the heap usage stays in a fixed range, which means it is correctly able to collect all garbage.
    The code for WeakBinder is below (you can consider it public domain, so use it as you see fit, or write your own):
    package hs.mediasystem.util;
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import javafx.beans.InvalidationListener;
    import javafx.beans.Observable;
    import javafx.beans.WeakInvalidationListener;
    import javafx.beans.property.Property;
    import javafx.beans.value.ObservableValue;
    public class WeakBinder {
      private final List<Object> hardRefs = new ArrayList<>();
      private final Map<ObservableValue<?>, WeakInvalidationListener> listeners = new HashMap<>();

      public void unbindAll() {
        for (ObservableValue<?> observableValue : listeners.keySet()) {
          observableValue.removeListener(listeners.get(observableValue));
        }
        hardRefs.clear();
        listeners.clear();
      }

      public <T> void bind(final Property<T> property, final ObservableValue<? extends T> dest) {
        InvalidationListener invalidationListener = new InvalidationListener() {
          @Override
          public void invalidated(Observable observable) {
            property.setValue(dest.getValue());
          }
        };

        WeakInvalidationListener weakInvalidationListener = new WeakInvalidationListener(invalidationListener);
        listeners.put(dest, weakInvalidationListener);
        dest.addListener(weakInvalidationListener);
        property.setValue(dest.getValue());
        hardRefs.add(dest);
        hardRefs.add(invalidationListener);
      }
    }
    Let me know if this solves your problem.

  • Possible causes for memory leak in Java and Tomcat

    I would like to enquire about the typical mistakes programmers make that cause memory leaks in Java and Tomcat.

    Please refer to the site below. It gives more pointers about memory leaks and how to rectify them.
    http://www.experts-exchange.com/Software/Server_Software/Application_Servers/Q_20981562.html?cid=336
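To make that concrete, one of the most common leak patterns in long-running Java/Tomcat applications is an application-scoped cache that only ever grows. Bounding it, for example with LinkedHashMap's removeEldestEntry hook, is one simple fix (BoundedCache is an illustrative name, a sketch rather than a prescription):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hedged sketch: a simple LRU cache that evicts its eldest entry once a
// fixed size is exceeded, instead of growing without bound for the life
// of the application (a classic servlet-container leak).
public class BoundedCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public BoundedCache(int maxEntries) {
        super(16, 0.75f, true);  // access-order iteration gives LRU behaviour
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;  // evict instead of growing forever
    }
}
```

Other frequent culprits in this setting include listeners that are registered but never deregistered, and ThreadLocal values left on pooled container threads.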

  • Memory leak in Real-Time caused by VISA Read and Timed Loop data nodes? Doesn't make sense.

    Working with LV 8.2.1 Real-Time to develop a host of applications that monitor or emulate computers on RS-422 busses. The following screen shots were taken from an application that monitors a 200Hz transmission. After a few hours, the PXI station would crash with an awesome array of angry messages... most implying something about a loss of memory. After much hair pulling and passing of the buck, my associate, while watching the available memory on the controller, was able to discover that memory loss was occurring with every loop containing a VISA read and error propagation using the data nodes (see Memory Leak.jpg). He found that if he switched the error propagation to regular old-fashioned shift registers, the available memory was rock-solid (a la No Memory Leak.jpg).
    Any ideas what could be causing this?  Do you see any problems with the way we code these sorts of loops?  We are always attempting to optimize the way we use memory on our time-critical applications and VISA reads and DAQmx Reads give us the most heartache as we are never able to preallocate memory for these VIs.  Any tips?
    Dan Marlow
    GDLS
    Solved!
    Go to Solution.
    Attachments:
    Memory Leak.JPG ‏136 KB
    No Memory Leak.JPG ‏137 KB

    Hi thisisnotadream,
    This problem has been reported, and you seem to be exactly reproducing the conditions required to see this problem. This was reported to R&D (# 134314) for further investigation. There are multiple possible workarounds, one of which is the one that you have already found of wiring the error directly into the loop. Other situations that result in no memory leak are:
    1.  If the bytes at port property node is not there and a read just happens in every iteration and resulting timeouts are ignored.
    2.  If the case structure is gone and just blindly check the bytes at port and read every iteration.
    3.  If the Timed Loop is turned into a While loop.
    Thanks for the feedback!
    Regards,
    Stephen S.
    National Instruments
    Applications Engineering

  • Whether our application caused the memory leak

    Is there a command to find out whether our application caused the memory leak?
    We are using JDev10g + JHeadstart (Release 9.0.5.1, BC4J + Struts + JSP) to build our enterprise application,
    but when the application has been running for some days on our application server (RedHat AMD64, with 8GB RAM), memory consumption grows seriously in the production environment (one OC4J instance consumes almost 2GB of memory after running for 2 days), which forces us to restart the instance each day! We suspect and worry
    that our application suffers from a memory leak, so we want to know whether there is a command to find out whether our application caused the memory leak,
    and which program is the major murderer for memory.

    Ting,
    Unfortunately there is no 'command' that will show you the location of a memory leak. First I would scrutinize your code for any obvious 'leaks'. Then you should obtain some statistics about the usage of your system. One important aspect is how long an HttpSession usually lives. If you have thousands of users that stay online the entire day and never 'time out', and if you have users on the system 24 hours a day, then the sheer number of HttpSessions might be a problem.
    JHeadstart 9.0.x tends to have rather 'heavy' session objects. This can easily be solved by adding some actions to clear up DataObjects and DataObjectSets of previous 'Groups' when entering a new Group. A good place would be between the 'DynamicActionRouter' and the 'ActionRouters'. Just before entering the 'EmployeeActionRouter', you could remove all DataObjects and DataObjectSets that might be left on the Session by actions of the other ActionRouters.
    Also it would be interesting to see if the garbage collector can do its thing when the system is less busy. For instance, if your application has a smaller load during the weekend, what is the memory usage on Sunday at midnight compared to, say, Friday at noon. Has the memory load dropped consistently with the decreased number of online users, or does too much memory stay allocated? If so, then there's more going on than just HttpSession objects.
    If all this does not lead to a solution, I suggest using a profiling tool such as OptimizeIt to investigate the memory usage of the application in detail.
    Kind regards,
    Peter Ebell
    JHeadstart Team

  • The Security ID is not valid causes memory leak in Ldap

    Hi, all:
    We are using the Novell LDAP Provider to allow our server application
    to be configured in LDAP mode. One of our clients is experiencing a memory
    leak, and we believe the problem could be related to a "The Security
    ID is not valid" error. When he switches to native Active Directory mode
    the memory leak disappears (he still gets the "Security ID" error, but
    everything works fine). So we think the problem caused by the "Security
    ID" error is affecting the Novell LDAP Provider library. He is using a
    Windows 2008 R2 platform.
    My question is: do you know whether these kinds of errors are handled
    properly, so that resources are released?
    We are using the 2.1.10.1 version of the library.
    Many thanks for you help,
    Luis
    luixrodix
    luixrodix's Profile: http://forums.novell.com/member.php?userid=107647
    View this thread: http://forums.novell.com/showthread.php?t=435894

    ab;2091346 Wrote:
    > Have the exact error message? Is it safe to assume you are querying
    > MAD and not eDirectory? I make that assumption because you mentioned
    > SIDs.
    >
    > On 03/30/2011 08:36 AM, luixrodix wrote:
    > > I mean, every time a query is sent to the LDAP server an Invalid SID
    > > is reported, and some resources are not released. We think that the
    > > problem could be in the Novell LDAP library.
    >
    > Good luck.

    Yes, the message is: "The Security ID structure is invalid". It is
    Windows error 1337 and cannot be fixed manually, since SIDs cannot be
    modified by hand.

    When the client has configured our tool as Active Directory via LDAP
    (using the Novell LDAP library), the error is not logged anywhere (that's
    another reason for thinking that the library is not handling the error
    correctly) and the application leaks memory every time the AD server
    is queried. But when they use the same Active Directory in a pure AD
    configuration (using System.DirectoryServices and the Windows API
    directly), the error is logged and the application's memory remains
    stable.

    I asked the client to fix the Security ID problem by finding the
    user(s) whose SIDs are wrong and re-creating them, but if the Novell.Ldap
    library is not handling this error correctly, the potential problem is
    still there.
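    Whether or not the library cleans up internally, the calling code can protect itself by releasing the connection in a finally block, so the resource is freed even when the directory reports the SID error. A sketch of the pattern (note: `LdapConnection` here is a stand-in interface for illustration, not the real Novell class, whose API differs):

```java
public class SafeQuery {

    /** Stand-in for a directory connection; the real Novell class differs. */
    interface LdapConnection {
        void search(String filter) throws Exception;
        void disconnect();
    }

    /** Runs a query and guarantees disconnect() even if the search throws. */
    static void query(LdapConnection conn, String filter) throws Exception {
        try {
            conn.search(filter);
        } finally {
            conn.disconnect(); // released on success *and* on "invalid SID" errors
        }
    }
}
```

    The same try/finally shape applies in C# (or a `using` block, if the connection type implements IDisposable).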

  • Why does system exec cause a memory leak to occur?

    Sorry if this has been posted before, but I can't quite figure out why this VI is causing a gigantic memory leak (losing nearly 500 KB of memory per second). It's a simple System Exec call, and I've trivialized everything in order to isolate the problem.
    Does anyone see an obvious problem here? I've tried calling taskkill afterward as well, though that seemed to make little difference.
    If you don't trust my trivial program (and you probably shouldn't), I've also included the source files for the program: it's a one-line C++ program that returns immediately.
    Any insight is greatly appreciated.
    Thanks!
    Attachments:
    Dilution.vi.zip ‏11 KB

    Try putting a wait, even a 0 ms wait, in your loop and see if it makes a difference. Without a wait, LabVIEW will run that loop as fast as possible, possibly so fast that it never yields time to free resources.
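    The same effect shows up in text-based languages: a poll loop with no wait can starve the runtime of time to do housekeeping. A rough Java analogue of the suggested fix (illustration only, not LabVIEW code; the real VI would place a Wait (ms) primitive inside the loop):

```java
public class PollLoop {

    /** Runs n polling iterations, yielding the time slice between each one. */
    static int poll(int n) throws InterruptedException {
        int done = 0;
        for (int i = 0; i < n; i++) {
            // ... launch the external command and read its result here ...
            Thread.sleep(0); // even a zero-length wait gives the runtime a
                             // chance to schedule other work and free resources
            done++;
        }
        return done;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(poll(5) + " iterations completed"); // prints "5 iterations completed"
    }
}
```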
