How to free up a memory leak?

Hi Friends,
DB = Oracle 10g
Apps = EBS R12
OS = Linux RHEL 4
My server has 4 GB of physical RAM.
At a fresh reboot, memory consumption is only 80 MB.
When I start the DB it goes up to 800 MB of memory usage.
When I start Apps it goes up to 2 GB.
But when I shut down the DB and Apps I expect it to go back to 80 MB; instead, 2 GB is still used.
When I start the DB and Apps again, usage goes up to 4 GB.
How do I resolve this, please? :(
What command in Linux will list memory usage per module/app, sorted by memory size?
Thanks a lot

Have a look at this thread:
tuning Linux semaphores
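Since the thread never answers the command question directly, here is a sketch using standard procps tools (available on RHEL 4 and later). One caveat worth knowing: Linux intentionally keeps recently used pages in the kernel page cache, so high usage reported after a shutdown is often reclaimable cache rather than a leak.

```shell
# Overall memory: the "buffers" and "cached" figures are reclaimable
# kernel page cache, not memory lost to the stopped DB/Apps processes.
free -m

# All processes sorted by resident set size (RSS), largest first.
ps aux --sort=-rss | head -n 20

# Detailed per-mapping breakdown for a single process:
#   pmap -x <pid>
```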

Similar Messages

  • How to free up memory in MacBook Pro?

    How to free up memory in MacBook Pro? (How to avoid getting the revolving rainbow?)

    I compressed some programs and data. I now have 4.56 GB free.
    SMART UTILITY http://www.volitans-software.com/smart_utility.php
    CAPACITY 80.0 GB
    SMART STATUS: YELLOW and "FAILING"
    "Removed 29 bad sectors
    Total errors 1708
    Reallocated 308 bad sectors"
    Do I need to buy Smart Utility?
    Had 5 simultaneous errors at READ DMA EXT. (I don't know what that means). I don't see all the windows that the web site shows. I guess that is for purchased downloads.
    I'd like to reinstate MobileMe's Backup function. Apple says it will work until June so that I could back that up until I get the backup hard drive. I was in the store last week; iCloud will not work on my computer.
    Thanks, DOTro

  • How to define the memory leak in stability test

    Hi
    Usually we run our system at 70% CPU load for 72 hours as a stability test (on Solaris 10). Some plugins of our system use mtmalloc, and we use prstat to monitor the memory of each plugin. But because of our application's complex memory usage we don't know when the application will reach its maximum memory, and because of mtmalloc the memory shown by prstat is not released but continually increases.
    So from the test point of view there is a risk of a memory leak, but in fact there may be no leak at all. My question is: how do we define a memory leak under such conditions?

    kevin wrote:
    Thank you for the input and all the info.
    Isn't the Java heap the same as the memory allocated to the java process in my WebLogic startup script?

    The heap is sized by the -Xmx and -Xms parameters you pass on the java command line. The permanent generation is separate.

    Let me also download JProbe and try to run it and see what it gives me.

    I'd start by running with -verbose:gc. I'd want to know whether you're running out of heap or permgen space.
    -- Rob

    Rob Woollen <[email protected]> wrote:
    Unfortunately memory leaks are not fun to track down even with tools.
    I'd first suggest determining whether you're running out of space in the permanent area (where classes are loaded), or you've exhausted the Java heap space.
    I'd start by adding -verbose:gc. Look at the GC messages right before you hit the OutOfMemoryError. If there's plenty of space left, I'd suspect you're running out of perm space. Search these newsgroups and the web for MaxPermSize, and you should see plenty of info.
    If you're running out of Java heap, tools like JProbe and OptimizeIt are helpful. If you can tell me a little more about your application and how you're testing it, I can offer some more tips.
    -- Rob

    kevin wrote:
    I am new to Java and WebLogic. I have an application that runs out of memory time and again.
    Please let me know how to pinpoint this problem and, moreover, how to interpret or understand that there is a problem. I have downloaded the JPROFILE tool, but it is very confusing to understand what is going on in this tool.
    If somebody can let me know how to interpret and understand the memory leak, that will be great!!!
    Thank you.
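As a concrete illustration of Rob's advice, the flags could be combined on one java command line in the WebLogic startup script, for example (a sketch only; the jar name and sizes are placeholders, not taken from the thread):

```shell
# Hypothetical launch line for diagnosing the OutOfMemoryError:
# -verbose:gc prints a line per collection; -XX:MaxPermSize raises the
# permanent-generation limit if the GC log shows plenty of free heap.
java -verbose:gc -Xms256m -Xmx512m -XX:MaxPermSize=128m -jar myapp.jar
```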

  • How to root out memory leak with  Java JNI & Native BDB 11g ?

    We are testing a web application using the 32-bit compiled native 11g version of BDB (with replication) under a 32-bit IBM 1.5 JVM via JNI under 64-bit Red Hat Linux. We are experiencing what appears to be a memory leak without a commensurate increase in Java heap size. Basically the process size continues to grow until the maximum 32-bit process size (4 GB) is reached, and the process eventually stops running (no core). The Java heap is set to 2 GB min/max. GCs are nominal, so the leak appears to be native and outside Java bytecode.
    We need to determine whether there is a memory leak in BDB, or the IBM JVM or simply a mis-use of BDB in the Java code. What tools/instrumentation/db statistic should be used to help get to root cause? Do you recommend using System Tap (with some particular text command script)? What DB stats should we capture to get to the bottom of this memory leak? What troubleshooting steps can you recommend?
    Thanks ahead of time.
    JE.
    Edited by: 787930 on Aug 12, 2010 5:42 PM

    That's troublesome... DB itself doesn't have stats that track VM in any useful way. I am not familiar with SystemTap but a quick look at it seems to imply that it's better for kernel monitoring than user space. It's pretty hard to get DB to leak significant amounts of memory. The reason is that it mostly uses shared memory carved from the environment. Also if you are neglecting to close or delete some object DB generally complains about it somewhere.
    I don't see how pmap would help if it's a heap leak but maybe I'm missing something.
    One way to rule DB out is to replace its internal memory allocation functions with your own that are instrumented to track how much VM has been allocated (and freed). This is very easy to do using the interfaces:
    db_env_set_func_malloc()
    db_env_set_func_free()
    These are global to your process and your functions will be used where DB would otherwise call malloc() and free(). How you get usage information out of the system is an exercise left to the reader :-) If it turns out DB is the culprit then there is more thinking to do to isolate the problem.
    Other ideas that can provide information if not actual smoking guns:
    -- accelerate reproduction of the problem by allocating nearly all of the VM to the JVM and the DB cache (or otherwise limit the allowable VM in your process)
    -- change the VM allocated to the JVM in various ways
    Regards,
    George

  • How to free up memory while running?

    Hello,
    I'm using a custom operator UI (OPUI) written in LabVIEW.
    From there I start TestStand 3.5.
    The TestStand model then starts two parallel sequences (as "New Execution") that are loaded in Sequence-File-Load and loop all the time.
    One parallel sequence checks the buttons of the OPUI and the other parallel sequence controls a climatic chamber. When a specific temperature is reached, it executes "Test UUT" of the model sequence (also as "New Execution").
    Everything works fine, but something is eating memory. Every time a Test UUT is done the memory goes up, and after it has finished the memory is not released.
    So my question is: how can I free up unused memory (like a garbage collector) in TestStand, or in LabVIEW if a special action is needed there?
    Or perhaps the test-step results are not removed, so that I create hundreds of new executions and they all keep their results?
    I already clicked "Disable result recording for all steps" in the two parallel sequences and in all steps in the model, so I think that only the MainSequence Callback is creating results.
    Does anyone have ideas what I can do?
    Thanks for everything

    If you have result collection disabled for all of your steps, then the memory leak is likely due to something else. You might not be closing a reference that you should be. Try to narrow down which part(s) of your sequence are leaking memory, either by stepping through one step at a time and watching memory usage, or by cutting out parts of your sequence until the problem goes away. Also, if it's not your sequence, it might be your custom UI that is leaking the memory; to determine this, try running your sequence in the sequence editor and/or one of the UIs that ship with TestStand and see if the problem goes away.
    Hope this helps,
    -Doug

  • App performs inconsistently... how to check for memory leaks?

    I (was!) almost done with an app to submit to the app store but when I tested the final build on my iPad it started performing strangely.
    It had never done this before, but it's crashing almost CONSTANTLY. Every few seconds or so. I haven't changed much of anything since the last time I tested it and I can't really figure out what would cause it to crash so much.
    From the menu you can choose a button to go to other frames in the main timeline. Half of those work, half of them make the app crash. This is new and I think I might have a memory leak somewhere because it's consistently inconsistent.
    At first I thought I had a stack overflow somewhere; however, testing in mobile debug mode doesn't cause any errors or even warning output messages - nothing. It runs just fine.
    So is it possible that the problem is stemming from hardware? I can't imagine this app would be using up too much memory - it is full of vectors, but there's nothing more complicated than touch and drag because it's just a puzzle.
    How would I test something like this if debug mode isn't throwing any exceptions?

    You may want to try MonsterDebugger (free, open source). It also serves as a profiler. You might be able to get a better understanding on what your app is doing (and why it might be breaking on the iPad).

  • How to deal with Memory Leaks, that are caused by Binding

    Hi, I recently noticed (huge?) memory leaks in my application and suspect bindings to be the cause of all the evil.
    I made a little test case, which confirms my suspicion:
    import javafx.application.Application;
    import javafx.beans.property.SimpleStringProperty;
    import javafx.beans.property.StringProperty;
    import javafx.scene.Scene;
    import javafx.scene.control.Button;
    import javafx.scene.layout.VBox;
    import javafx.stage.Stage;

    public class TestAppMemoryLeak extends Application {

        private StringProperty text = new SimpleStringProperty("test");

        public static void main(String[] args) {
            launch(args);
        }

        @Override
        public void start(Stage stage) throws Exception {
            VBox root = new VBox();
            Button button = null;
            for (int i = 0; i < 100000; i++) {
                button = new Button();
                button.textProperty().bind(text);
                button.textProperty().unbind(); // if you don't call this, you can watch the memory of the java process grow
            }
            root.getChildren().add(button);
            Scene scene = new Scene(root);
            stage.setScene(scene);
            stage.show();
        }
    }

    Now the problem is: HOW can I know when a variable is no longer needed or is overwritten by a new instance?
    Just an example:
    I have a ListView with a Cell Factory. In the updateItem method, I add a ContextMenu. The textProperty of each MenuItem is bound to a kind of global property, like in the example above. I have to do it in the updateItem method, since the ContextMenu differs depending on the item.
    So every time the updateItem method is called a new ContextMenu is created, which binds some properties, but the old context menus remain in memory.
    I guess there could be many more examples.
    How can I deal with it?

    I've dealt with this situation and created a Jira issue for it, but I also have a workaround that is a bit unwieldy but works. I'll share it with you.
    The bug that deals with this (at least in part): http://javafx-jira.kenai.com/browse/RT-20616
    The solution is to use weak invalidation listeners. However, they are a bit of a pain to use, as you cannot do it with something as simplistic as "bindWeakly"... and you need to keep a reference to the wrapped listener around, otherwise it will just get garbage collected immediately (as it is only weakly referenced). Some very odd bugs can surface if weak listeners disappear randomly because you forgot to reference them :)
    Anyway, see this code below, it shows you some code that is called from a TreeCell's updateItem method (I've wrapped it in some more layers in my program, but it is essentially the same as an updateItem method):
    public class EpisodeCell extends DuoLineCell implements MediaNodeCell {
      private final WeakBinder binder = new WeakBinder();

      @Override
      public void configureCell(MediaNode mediaNode) {
        MediaItem item = mediaNode.getMediaItem();
        StringBinding episodeRange = MapBindings.selectString(mediaNode.dataMapProperty(), Episode.class, "episodeRange");

        binder.unbindAll();

        binder.bind(titleProperty(), MapBindings.selectString(mediaNode.dataMapProperty(), Media.class, "title"));
        binder.bind(ratingProperty(), MapBindings.selectDouble(mediaNode.dataMapProperty(), Media.class, "rating").divide(10));
        binder.bind(extraInfoProperty(), Bindings.when(episodeRange.isNull()).then(new SimpleStringProperty("Special")).otherwise(episodeRange));
        binder.bind(viewedProperty(), item.viewedProperty());

        subtitleProperty().set("");
      }
    }

    This code makes use of a class called WeakBinder -- a helper class that can make weak bindings and keep track of them. When you call unbindAll() on it, all of the bindings it created before are released immediately (although they will also disappear when the Cell itself is garbage collected, which is possible because it only makes weak references).
    I've tested this extensively and it solves the problem of Cells keeping references to objects with much longer life cycles (in my case, the MediaNode passed in has a longer life cycle than the cells, so it is important to bind weakly to it). Before, this would create huge memory leaks (crashing my program within a minute if you kept refreshing the tree); now it survives hours at least, and the heap usage stays in a fixed range, which means it is correctly able to collect all garbage.
    The code for WeakBinder is below (you can consider it public domain, so use it as you see fit, or write your own):
    package hs.mediasystem.util;

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    import javafx.beans.InvalidationListener;
    import javafx.beans.Observable;
    import javafx.beans.WeakInvalidationListener;
    import javafx.beans.property.Property;
    import javafx.beans.value.ObservableValue;

    public class WeakBinder {
      private final List<Object> hardRefs = new ArrayList<>();
      private final Map<ObservableValue<?>, WeakInvalidationListener> listeners = new HashMap<>();

      public void unbindAll() {
        for (ObservableValue<?> observableValue : listeners.keySet()) {
          observableValue.removeListener(listeners.get(observableValue));
        }
        hardRefs.clear();
        listeners.clear();
      }

      public <T> void bind(final Property<T> property, final ObservableValue<? extends T> dest) {
        InvalidationListener invalidationListener = new InvalidationListener() {
          @Override
          public void invalidated(Observable observable) {
            property.setValue(dest.getValue());
          }
        };

        WeakInvalidationListener weakInvalidationListener = new WeakInvalidationListener(invalidationListener);

        listeners.put(dest, weakInvalidationListener);
        dest.addListener(weakInvalidationListener);
        property.setValue(dest.getValue());

        hardRefs.add(dest);
        hardRefs.add(invalidationListener);
      }
    }

    Let me know if this solves your problem.

  • How to free the memory after closing JavaFX Stage?

    I am creating a JavaFX application which contains a button. When I click on that button, it opens a new stage containing a table with thousands of rows of data. It works fine, but the problem is that when I close the stage with the table, the memory is not freed by the application; i.e. every time I open the new stage for the table, memory usage increases. Is there an issue with JavaFX, or do I have to do something else?
    I have tried setting everything to null when closing that stage, but the memory is still not freed.
    My button click code is :
    btn.setOnAction(new EventHandler<ActionEvent>() {
        @Override
        public void handle(ActionEvent event) {
            Stage stage = new Stage();
            Table1Controller controller = (Table1Controller) Utility.replaceScene("/tablesample/Table1.fxml", stage);
            controller.init(stage);
            stage.setTitle("Sample");
            stage.setWidth(583.0);
            stage.setHeight(485.0);
            stage.initModality(Modality.APPLICATION_MODAL);
            InputStream in = TableSample.class.getResourceAsStream("icon_small.png");
            try {
                stage.getIcons().add(new Image(in));
            } finally {
                try {
                    in.close();
                } catch (IOException ex) {
                }
            }
            stage.show();
        }
    });
    The Utility.replaceScene method loads the scene from the given FXML file and sets it on the stage. Finally, it returns the controller object for that scene.
    public static Initializable replaceScene(String fXml, Stage mystage) {
        InputStream in = null;
        try {
            FXMLLoader loader = new FXMLLoader();
            in = Utility.class.getResourceAsStream(fXml);
            loader.setLocation(Utility.class.getResource(fXml));
            loader.setBuilderFactory(new JavaFXBuilderFactory());
            AnchorPane page;
            try {
                page = (AnchorPane) loader.load(in);
            } finally {
                in.close();
            }
            Scene scene = new Scene(page);
            mystage.setScene(scene);
            return loader.getController();
        } catch (Exception ex) {
            return null;
        }
    }

    Thanks

    Is there any issue with JavaFX? or I have to do something else?
    It's likely an issue with your application code (though it could be a bug in the JavaFX platform too).
    Either way, debugging most memory issues in JavaFX (except graphics-card texture-related ones) is the same as debugging them in Java, so just use standard Java profiling tools to try to track down any memory leaks you have.
    Such work is as much an art as a science and takes some experience to get right, so grab your detective cap, download and use the tools linked and try and track it down:
    http://stackoverflow.com/questions/6470651/creating-a-memory-leak-with-java
    http://resources.ej-technologies.com/jprofiler/help/doc/indexRedirect.html?http&&&resources.ej-technologies.com/jprofiler/help/doc/helptopics/memory/memoryLeak.html
    http://www.ej-technologies.com/products/jprofiler/overview.html (recommended tool).
    http://visualvm.java.net/
    http://netbeans.org/kb/articles/nb-profiler-uncoveringleaks_pt1.html
    http://www.ibm.com/developerworks/library/j-leaks/ (this link is really old and somewhat outdated but explains some of the underlying concepts better than some newer articles I found).

  • How can I avoid memory leak problem ?

    I use JDev 10.1.2. I have a memory leak problem with ADF.
    My application is very large. We have at least 30 application modules, each containing many view objects,
    and I have to support a lot of concurrent users.
    As far as I know, ADF stores view object data in the HTTP session,
    and the HTTP session lives quite long. When I use the application for a while it raises an Out of Memory error.
    I am new to ADF.
    I try to use clearCache() on a view object when I don't use it any more,
    and call resetState() when I don't use an application module any more.
    I don't know much about the behavior of clearCache() and resetState(),
    and I am not sure they can avoid the memory leak.
    Do you have suggestions to avoid this problem?

    ADF does not store data in the HTTP session.
    See Chapter 28 "Application Module State Management" in the ADF Developer's Guide for Forms/4GL Developers on the ADF Learning Center at http://download-uk.oracle.com/docs/html/B25947_01/toc.htm for more information.
    See Chapter 29 "Understanding Application Module Pooling" to learn how you can tune the pooling parameters to control how many modules are used and how many modules "hang around" for what periods of time.

  • How to free  up memory to play game ?

    I recently tried to install a game from a DVD, but got the message that I had "0 mb...and need to free up memory" to play the game,
    yet I have recently added memory and have over 500 MB available, according to the system.
    Do I need to drag the game icons to the hard drive? It seems like I have enough memory, but when the game file appears it says "0 mb" at the top of the file. Or do they mean RAM?
    I am a first-time Mac user, love the forum, and would appreciate any suggestions.
    thanks to all

    The game likely needs to be installed from the DVD onto your iBook's hard drive. Obviously, you will need to check the specs of the game to see how much space is recommended by the game maker for the game to run and run efficiently.
    500 megabytes likely isn't enough to run a game. This is the space on your iBook's hard drive, correct, and not the disc itself?
    -Ryan

  • How To Free Up Memory (RAM) on the N95

    Does anyone have a suggestion on how to free up some RAM, apart from closing running apps?

    N-Gage and the Music Store will only eat RAM; you can't expect them to make memory management more efficient. Features are always there to eat RAM :d
    Right now, I guess the demand-paging application will DEFINITELY be a good solution to increase free RAM. I hope existing N95-1 devices will get this.
    Besides, my little piece of advice: always clear the cache of the web browser, and use a task manager to monitor the apps currently running in the background.

  • How to free allocated memory

    As we all know, in Java the memory allocated for an object is freed automatically once no reference to that object exists, and there is no operator in Java to free memory explicitly like delete in C++.
    But I want to free the memory allocated for an object in Java for my project,
    and I don't know how to do this.
    Any ideas about this?

    ghanoz2480 wrote:
    emmmh,,, see the following documentation:
    [http://java.sun.com/javase/6/docs/api/java/lang/System.html#gc()|http://java.sun.com/javase/6/docs/api/java/lang/System.html#gc()]
    Edited by: ghanoz2480 on 10 May 09 6:58

    From your link:
    "Calling the gc method *_suggests_* that the Java Virtual Machine expend effort toward recycling unused objects in order to make the memory they currently occupy available for quick reuse. When control returns from the method call, the Java Virtual Machine has made a best effort to reclaim space from all discarded objects."
    That does not guarantee that the GC will be run.
    As the OP has already said, and as Jos has already highlighted - what the OP wants to do cannot be done in java.

  • How to detect a memory leak?

    Hi,
    we are running a web application with Tomcat 4.1.27 on HP-UX 11i under HP JVM 1.4.2_05.
    We have a major problem: the application suddenly starts to allocate the whole memory (6 GB) and the server stops responding because the JVM is busy doing GCs. We don't get an out-of-memory exception, but after 300 seconds or more of major GC only about 100 MB of memory is freed, which is allocated again very, very fast.
    I tried -verbosegc, and the application was profiled with JProbe in the HP-UX environment and with OptimizeIt in a Windows environment. With the profiling tools we didn't find any memory leaks. We even used verbosegc in the production environment: the server runs fine for days, spending about 0.09 seconds on minor GCs and allocating about 1 GB (256 MB for Eden) of 6 GB. But suddenly the server starts doing minor GCs every 2 to 4 seconds, and after a few minutes the whole memory is allocated and the VM is busy doing major GCs. In the log files, which are very detailed, we can't find anything correlated with the suddenly appearing GCs.
    Any approach to solve this problem?

    I've found it:
    I have to use -Xeprof with the following option:
    "time_on=sigusr1|sigusr2 (SDK 1.4.1 and later)
    Specifies which signal will cause profiling to begin (profile data collection). Please be aware that the application or the VM may already be using the sigusr signals for their own purposes; please check the documentation. Specifying a signal and a timeout (see above) at the same time is possible by repeating the time_on option. Only one of the two signals can be declared as the signal to start profiling. During the application's run, the specified signal can be delivered to the Java process multiple times."
    But I have no idea how to define a signal which causes the profiling to begin.
    What is a sigusr signal? And how do I specify this signal?
    Are these signals part of Java or of HP-UX?
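To answer those last questions: SIGUSR1 and SIGUSR2 are generic user-defined UNIX signals provided by the operating system (HP-UX here), not by Java; you deliver one to any running process, including a JVM, with kill(1). A minimal sketch (the JVM pid lookup is illustrative):

```shell
# Demo: a shell process traps SIGUSR1 and reacts, the way a JVM started
# with -Xeprof time_on=sigusr1 would begin collecting profile data.
sh -c 'trap "echo profiling started" USR1; kill -USR1 $$; sleep 1'
# prints: profiling started

# For the real JVM you would look up its pid and send the signal:
#   kill -USR1 <java_pid>
```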

  • How can i detect "Memory leak" with large LabVIEW projects.

    Hi,
    I have a huge LabVIEW application that runs out of memory after running continuously for some time. I am not able to find the VI that is hogging memory. Is there any tool that dynamically detects which VI is leaking memory?
    Or is there a tool or a way to identify the critical areas that could be potential memory-leaking culprits?
    Regards
    Bharath

    Bdev wrote:
    Thanks Dennis.
    I think the Desktop Execution toolkit should solve the problem.
    Wayne wrote:
    Have you tried Tools»Profile»Performance and Memory? http://zone.ni.com/reference/en-XX/help/371361F-01/lvdialog/profile/
    But this will just give me the amount of memory used by the VIs, not the amount of memory that is not getting released.

    And what is the problem with that? Just try to find which VIs keep increasing in memory size; those are the culprits. If you have real memory leaks, meaning there is memory that is not managed by LabVIEW directly but, for instance, by a DLL somewhere, and that DLL loses references to memory so it is really lost, then the only way to find it is to successively exclude functionality in your application until you find the culprit.
    There is no simpler way to find out who is losing memory references than debugging by exclusion until the problem disappears. The only way to speed this up, which quite often works for me, is an educated guess about which components are most likely to misbehave.
    Not knowing anything about your application, or whether you are talking about memory hogs (fairly easily identifiable with the mentioned Performance and Memory monitor) or actual memory leaks, it is hard to say how to go about it. Memory hogs are usually the first thing I suspect, especially with software I somehow inherit from people I'm not sure know all the ins and outs of LabVIEW programming.
    If a leak seems likely, the first culprits usually are custom DLLs (yes, even DLLs I have written myself), then NI DLLs such as DAQmx, etc., and last come leaks in LabVIEW itself. This last category is very seldom, but it has happened to me. However, before screaming about LabVIEW having a memory leak you really, really should make sure you have investigated all the other possibilities very intensively. The chance that you have run into a memory leak in LabVIEW, while not impossible, is so small compared to the other ways of causing either a memory hog or a leak in a component external to LabVIEW, that in 99.9% of the cases where someone screams about a LabVIEW memory leak, he is simply wrong.
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions

  • How to Free Up Memory Allocated by Kernel without reboot

    Hi
    On one of our production servers we are facing an issue: the server has been up for the last 365 days.
    While checking, we found that kernel memory utilization is itself 64%.
    I want a solution by which we can free up the memory used by the kernel, which should normally be 12-15%.
    I am aware that rebooting the server will resolve the issue.
    echo ::memstat | mdb -k

    Page Summary            Pages     MB   %Tot
    Kernel                 166378   1299    64%
    Anon                    38042    297    15%
    Exec and libs            1161      9     0%
    Page cache              39329    307    15%
    Free (cachelist)        12152     94     5%
    Free (freelist)          1326     10     1%
    Total                  258388   2018
    Physical               254514   1988

    Please suggest.
    Thanks
    Rajan

    hi
    see this URL; alternatively, you can add temporary swap:
    http://developers.sun.com/solaris/articles/solaris_memory.html
