Memory usage in Windows Task Manager vs. heap size

hello,
I have exactly the same problem as in the topic "Windows Task Manager vs. Heap", posted at Dec 10, 2004 5:17 AM, by jorgeHX.
Here are the symptoms:
1) My application starts at 20MB as seen in the Windows Task Manager.
2) I use a profiler to monitor the heap. The heap always looks very healthy - heap usage in the profiler grows by a minimum until the GC comes around, at which point the used heap size drops down again.
3) However, after a relatively memory-consuming operation (a loop of String indexing, pattern matching, etc.), the memory usage in the Windows Task Manager goes up a couple of MBs and never drops back down.
4) If I then free the heap manually (System.gc()), I can see the GC freeing heap. However, the memory in the Windows Task Manager remains unchanged, no matter how many times I force garbage collection.
This is a bad thing - if my application keeps doing that memory-consuming operation again and again, the memory shown in the Windows Task Manager will grow and grow to hundreds of MBs until Windows warns that it is "low on virtual memory".
I have tried everything already. I set every instance = null at the end, I delete every reference, but the memory just keeps increasing! WHY?????
Can anyone help me characterize the problem? I am so in the dark!!
Ryan-Chi

I guess my problem can also be interpreted as
"Why doesn't JVM return memory to OS?"
It does, depending on the setting of the -XX:MaxHeapFreeRatio option. "Normal" operation with the default setting does not usually cause memory to be returned to the OS.
Search for this option name for explanations.
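If you want to watch this from inside the JVM rather than from the Task Manager, here is a minimal sketch (my own illustration, not from the original thread) that prints the heap the JVM has reserved versus what is actually in use. Running it with shrink-friendly flags, e.g. java -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=20 HeapWatch, encourages the JVM to hand memory back after a full GC; whether it actually does depends on the collector.

    public class HeapWatch {
        public static void main(String[] args) throws Exception {
            Runtime rt = Runtime.getRuntime();
            byte[][] blocks = new byte[64][];
            for (int i = 0; i < blocks.length; i++)
                blocks[i] = new byte[1024 * 1024];   // allocate 64 MB
            print("after allocation", rt);
            blocks = null;                           // drop all references
            System.gc();                             // only a hint, not a guarantee
            Thread.sleep(1000);
            print("after GC", rt);
        }
        static void print(String when, Runtime rt) {
            // totalMemory = heap currently reserved by the JVM;
            // freeMemory  = the unused part of that reservation
            System.out.printf("%s: total=%dK used=%dK%n", when,
                    rt.totalMemory() / 1024,
                    (rt.totalMemory() - rt.freeMemory()) / 1024);
        }
    }

Note that even when totalMemory shrinks, the Task Manager figure can lag behind, since the OS does not always reclaim released pages immediately.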

Similar Messages

  • Does the jvm allocate complete max heap size initially?

    Does the JVM allocate the memory for the entire max heap size up front, or does it start with the specified minimum size and increase and grab more later if needed?
    The reason this is being posted is that we have a number of JBoss servers running. Some don't require large heap sizes, others do. If we use the same large max heap size for all of them, would all the memory get allocated up front, or only a smaller initial portion?

    I have tested this on Solaris, Linux and WinXP.
    Test with -Xms512M
    I wrote a simple Java program with the minimum heap size set to -Xms512m, then ran it on the Solaris and WinXP platforms. The memory usage of the Java process was 6 MB on WinXP and 9 MB on Solaris, rather than 512 MB. The JVM does not allocate the configured minimum of 512 MB at the start of process execution.
    Reason:
    If you ask the OS for 512 MB it'll say "here it is", but pages won't actually be allocated until your app actually touches them.
    If the allocation is not made initially at the start of the process, the concept of a minimum heap size would seem unnecessary - yet the garbage collection log shows the minimum heap size as configured with the -Xms option.
    Test with -Xms1024M
    The JVM arguments were set to -Xms1024m -Xmx1024m, but the used memory observed with Windows perfmon was 573M.
    6.524: [Full GC 6.524: [Tenured: 3081K->10565K(967936K), 0.1949291 secs] 52479K->10565K(1040512K), [Perm : 12287K->12287K(12288K)], 0.1950893 secs]
    Reason:
    This optimization is something that operating systems do. The JVM allocates the memory in its address space and initializes all data structures up to your -Xms. By any measure available to the JVM, the allocation from the OS is complete. But the OS doesn't physically assign a page to the app until the first store instruction. Almost all modern OSes do this.
    Hope this is helpful.
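    A quick way to see this distinction from inside the JVM (my own sketch, not part of the original reply) is the java.lang.management API, which reports the heap's init, committed, and used sizes separately:

        import java.lang.management.ManagementFactory;
        import java.lang.management.MemoryUsage;

        public class HeapSizes {
            public static void main(String[] args) {
                MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
                // init      = what -Xms requested (may be -1 if undefined)
                // committed = address space the JVM has claimed from the OS
                // used      = bytes occupied by live (and dead-but-uncollected) objects
                System.out.println("init      = " + heap.getInit() / (1024 * 1024) + " MB");
                System.out.println("committed = " + heap.getCommitted() / (1024 * 1024) + " MB");
                System.out.println("used      = " + heap.getUsed() / (1024 * 1024) + " MB");
            }
        }

    Run it with -Xms512m and init/committed report around 512 MB even while the OS-level resident size stays small, which matches the perfmon observation above.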

  • About the limitation of memory usage of Illustrator.

    Please tell us how Illustrator CS5/CS6 behaves when its memory usage becomes too high.
    As Illustrator's memory usage increases, I believe the application behaves as follows:
    - Displays a warning (e.g. "Illustrator cannot preview")
    - Illustrator stalls
    1. In such cases, is there a memory usage limit?
    2. If Illustrator's memory usage exceeds the maximum size, how does Illustrator behave?
      Is there a specification?
    3. Is there an automatic memory-release function in Illustrator? When does that function run?


  • How to reduce heap size in jvm 1.3.1 ??

    Is it possible to force GC to reduce the heap size?
    My application uses large maps, and after loading one, memory allocation looks like this:
    [GC 92088K->84997K(105656K)]
    When a map is no longer needed, I remove it and run a full GC.
    The memory is deallocated, but the heap size remains the same:
    [GC 10088K->8499K(105656K), 0.0026571 secs]
    Is there any way to reduce it? I tried incremental GC; it worked in the desired way, but it is too slow.

    Hi,
    Did you try setting the Java min and max heap to different sizes?
    You then still need a full GC to reduce the heap,
    because only a full GC compacts memory.
    For the IBM JRE you need this for compaction anyway.
    In general, applications perform better with heap min = max.
    -Xmx<size> sets the maximum Java heap size. Example: -Xmx64m
    -Xmn<size> sets the new (young) generation heap size. Example: -Xmn64m
    Different min and max sizes make the heap more compact but are more expensive (time).
    Do you run the full GC as System.gc()?
    I would never do that.
    Let the JRE take care of it.
    The number 1 GC rule is: avoid forcing it.
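    As a concrete illustration of the suggestion above (my addition; "MyApp" is a placeholder class name), a launch line such as

        java -Xms32m -Xmx128m MyApp

    gives a full GC room to shrink the committed heap back toward 32m once the large maps are released, whereas -Xms128m -Xmx128m pins the heap at 128m for the life of the process.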

  • Can I Control Memory Usage?

    I'm starting a new project. The project now has only three camera files of about 24 minutes (1920x1080i mpeg2), a few short "resource" files, and three sequences (the multi, the target, and a short 10-sec intro), yet when I restart PremierePro CS4 I get the warning "Adobe Premiere Pro is running low on memory..."
    The system has 12GB and uses Vista 64-bit. In the past I would see that warning only when the Project panel is filled with all sorts of stuff, but this project is just getting started.
    I notice that in the Task Manager > Processes tab the Commit Size is already 3,439,732K (it fluctuates slightly just sitting there while I type this), which is probably what triggered that warning.
    Is there anything I could be doing in Premiere - rearranging something, moving files around, or whatever - to control memory usage and keep that Commit Size from being so large?

    1. Commit Size is a selectable column - from Task Manager choose View > Select Columns, then check Memory - Commit Size.
    2. Commit Size is (I believe, but can't fully defend) the address space allocated by the OS to the application, not the actual amount of physical main memory currently tied up by the application. The number of processes is not relevant because the OS performs paging. Since Premiere Pro CS4 is still a 32-bit application (I don't know what it means when people say "optimized for 64-bit"), a Commit Size of 3.4+ GB is approaching the PC 32-bit limit (after you subtract out the fixed addresses).
    3. I read Eddie Lotter's first suggestion and it makes sense, but I haven't proved it yet.
    4. Just after my post PPro crashed; I sent the crash report to Adobe, reloaded the project, and this time the Commit Size wasn't even 1GB!!! Yes, this was before I could try Eddie's suggestion.
    Lee

  • Massive memory hemorrhage; heap size goes from about 64mb to 1.3gb usage

    **[SOLVED]**
    Note: I posted this on stackoverflow as well, but a solution was not found.
    Here's the problem:
    [1] http://i.stack.imgur.com/sqqtS.png
    As you can see, the memory usage balloons out of control! I've had to add arguments to the JVM to increase the heap size just to avoid out-of-memory errors while I figure out what's going on. Not good!
    ##Basic Application Summary (for context)
    This application is (eventually) going to be used for basic on screen CV and template matching type things for automation purposes. I want to achieve as high of a frame rate as possible for watching the screen, and handle all of the processing via a series of separate consumer threads.
    I quickly found out that the stock Robot class is really terrible speed wise, so I opened up the source, took out all of the duplicated effort and wasted overhead, and rebuilt it as my own class called FastRobot.
    ##The Class' Code:
        import java.awt.AWTException;
        import java.awt.GraphicsDevice;
        import java.awt.GraphicsEnvironment;
        import java.awt.HeadlessException;
        import java.awt.Point;
        import java.awt.Rectangle;
        import java.awt.Robot;
        import java.awt.Toolkit;
        import java.awt.image.BufferedImage;
        import java.awt.image.DataBufferInt;
        import java.awt.image.DirectColorModel;
        import java.awt.image.Raster;
        import java.awt.image.WritableRaster;
        import java.awt.peer.RobotPeer;   // internal peer interface (JDK 6/7 era)
        import sun.awt.ComponentFactory;  // internal API

        public class FastRobot {
            private Rectangle screenRect;
            private GraphicsDevice screen;
            private final Toolkit toolkit;
            private final Robot elRoboto;
            private final RobotPeer peer;
            private final Point gdloc;
            private final DirectColorModel screenCapCM;
            private final int[] bandmasks;

            public FastRobot() throws HeadlessException, AWTException {
                this.screenRect = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
                this.screen = GraphicsEnvironment.getLocalGraphicsEnvironment().getDefaultScreenDevice();
                toolkit = Toolkit.getDefaultToolkit();
                elRoboto = new Robot();
                peer = ((ComponentFactory) toolkit).createRobot(elRoboto, screen);
                gdloc = screen.getDefaultConfiguration().getBounds().getLocation();
                this.screenRect.translate(gdloc.x, gdloc.y);
                screenCapCM = new DirectColorModel(24,
                        /* red mask */    0x00FF0000,
                        /* green mask */  0x0000FF00,
                        /* blue mask */   0x000000FF);
                bandmasks = new int[3];
                bandmasks[0] = screenCapCM.getRedMask();
                bandmasks[1] = screenCapCM.getGreenMask();
                bandmasks[2] = screenCapCM.getBlueMask();
                Toolkit.getDefaultToolkit().sync();
            }

            public void autoResetGraphicsEnv() {
                this.screenRect = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
                this.screen = GraphicsEnvironment.getLocalGraphicsEnvironment().getDefaultScreenDevice();
            }

            public void manuallySetGraphicsEnv(Rectangle screenRect, GraphicsDevice screen) {
                this.screenRect = screenRect;
                this.screen = screen;
            }

            public BufferedImage createBufferedScreenCapture(int pixels[]) throws HeadlessException, AWTException {
                // Note: reassigning the 'pixels' parameter has no effect on the caller's array
                pixels = peer.getRGBPixels(screenRect);
                DataBufferInt buffer = new DataBufferInt(pixels, pixels.length);
                WritableRaster raster = Raster.createPackedRaster(buffer, screenRect.width, screenRect.height, screenRect.width, bandmasks, null);
                return new BufferedImage(screenCapCM, raster, false, null);
            }

            public int[] createArrayScreenCapture() throws HeadlessException, AWTException {
                return peer.getRGBPixels(screenRect);
            }

            public WritableRaster createRasterScreenCapture(int pixels[]) throws HeadlessException, AWTException {
                pixels = peer.getRGBPixels(screenRect);
                DataBufferInt buffer = new DataBufferInt(pixels, pixels.length);
                WritableRaster raster = Raster.createPackedRaster(buffer, screenRect.width, screenRect.height, screenRect.width, bandmasks, null);
                // SunWritableRaster.makeTrackable(buffer);
                return raster;
            }
        }

    In essence, all I've changed from the original is moving many of the allocations out of the function bodies and making them attributes of the class, so they're not re-created on every call. Doing this actually had a significant effect on frame rate. Even on my severely underpowered laptop, it went from ~4 fps with the stock Robot class to ~30 fps with my FastRobot class.
    ##First Test:
    When I started getting OutOfMemoryErrors in my main program, I set up this very simple test to keep an eye on FastRobot. Note: this is the code which produced the heap profile above.

        public class TestFBot {
            public static void main(String[] args) {
                try {
                    FastRobot fbot = new FastRobot();
                    double startTime = System.currentTimeMillis();
                    for (int i = 0; i < 1000; i++)
                        fbot.createArrayScreenCapture();
                    System.out.println("Time taken: " + (System.currentTimeMillis() - startTime) / 1000.);
                } catch (AWTException e) {
                    e.printStackTrace();
                }
            }
        }

    ##Examined:
    It doesn't do this every time, which is really strange (and frustrating!). In fact, it rarely does it at all with the above code. However, the memory issue becomes easily reproducible if I have multiple for loops back to back.
    #Test 2
        public class TestFBot {
            public static void main(String[] args) {
                try {
                    FastRobot fbot = new FastRobot();
                    double startTime = System.currentTimeMillis();
                    for (int i = 0; i < 1000; i++)
                        fbot.createArrayScreenCapture();
                    System.out.println("Time taken: " + (System.currentTimeMillis() - startTime) / 1000.);
                    startTime = System.currentTimeMillis();
                    for (int i = 0; i < 500; i++)
                        fbot.createArrayScreenCapture();
                    System.out.println("Time taken: " + (System.currentTimeMillis() - startTime) / 1000.);
                    startTime = System.currentTimeMillis();
                    for (int i = 0; i < 200; i++)
                        fbot.createArrayScreenCapture();
                    System.out.println("Time taken: " + (System.currentTimeMillis() - startTime) / 1000.);
                    startTime = System.currentTimeMillis();
                    for (int i = 0; i < 1500; i++)
                        fbot.createArrayScreenCapture();
                    System.out.println("Time taken: " + (System.currentTimeMillis() - startTime) / 1000.);
                } catch (AWTException e) {
                    e.printStackTrace();
                }
            }
        }

    ##Examined
    The out-of-control heap is now reproducible, I'd say, about 80% of the time. I've looked all through the profiler, and the thing most of note (I think) is that the garbage collector seemingly stops right as the fourth and final loop begins.
    The output from the above code gave the following times:
    Time taken: 24.282 //Loop1
    Time taken: 11.294 //Loop2
    Time taken: 7.1 //Loop3
    Time taken: 70.739 //Loop4
    Now, if you sum the first three loops, it adds up to 42.676, which suspiciously corresponds to the exact time that the garbage collector stops, and the memory spikes.
    [2] http://i.stack.imgur.com/fSTOs.png
    Now, this is my first rodeo with profiling, not to mention the first time I've ever even thought about garbage collection -- it was always something that just kind of worked magically in the background -- so, I'm unsure what, if anything, I've found out.
    ##Additional Profile Information
    [3] http://i.stack.imgur.com/ENocy.png
    Augusto suggested looking at the memory profile. There are 1500+ `int[]` that are listed as "unreachable, but not yet collected." These are surely the `int[]` arrays that `peer.getRGBPixels()` creates, but for some reason they're not being destroyed. This additional info, unfortunately, only adds to my confusion, as I'm not sure why the GC wouldn't be collecting them.
    ##Profile using small heap argument -Xmx256m:
    At irreputable's and Hot Licks' suggestion I set the max heap size to something significantly smaller. While this does prevent the 1gb jump in memory usage, it still doesn't explain why the program is ballooning to its max heap size upon entering the 4th iteration.
    [4] http://i.stack.imgur.com/bR3NP.png
    As you can see, the exact issue still exists; it's just been made smaller. ;) The problem with this workaround is that the program, for some reason, still eats through all of the memory it can - there is also a marked change in fps performance between the first iterations, which consume very little memory, and the final iteration, which consumes as much memory as it can.
    The question remains: why is it ballooning at all?
    ##Results after hitting "Force Garbage Collection" button:
    At jtahlborn's suggestion, I hit the Force Garbage Collection button. It worked beautifully. It goes from 1gb of memory usage down to the baseline of 60mb or so.
    [5] http://i.stack.imgur.com/x4282.png
    So, this seems to be the cure. The question now is, how do I programmatically force the GC to do this?
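    (A note of my own, not part of the original post: the only standard hook is System.gc(), and it is merely a request that the JVM may ignore.)

        // Request, but do not guarantee, a full collection. Whether committed
        // heap is then returned to the OS depends on the collector in use and
        // on flags such as -XX:MaxHeapFreeRatio.
        System.gc();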
    ##Results after adding local Peer to function's scope:
    At David Waters' suggestion, I modified the `createArrayCapture()` function so that it holds a local `Peer` object.
    Unfortunately no change in the memory usage pattern.
    [6] http://i.stack.imgur.com/Ky5vb.png
    Still gets huge on the 3rd or 4th iteration.
    #Memory Pool Analysis:
    ###ScreenShots from the different memory pools
    ##All pools:
    [7] http://i.stack.imgur.com/nXXeo.png
    ##Eden Pool:
    [8] http://i.stack.imgur.com/R4ZHG.png
    ##Old Gen:
    [9] http://i.stack.imgur.com/gmfe2.png
    Just about all of the memory usage seems to fall in this pool.
    Note: PS Survivor Space had (apparently) 0 usage
    ##I'm left with several questions:
    (a) does the Garbage Profiler graph mean what I think it means? Or am I confusing correlation with causation? As I said, I'm in an unknown area with these issues.
    (b) If it is the garbage collector... what do I do about it..? Why is it stopping altogether, and then running at a reduced rate for the remainder of the program?
    (c) How do I fix this?
    Does anyone have any idea what's going on here?

    SO came through.
    Turns out this issue was directly related to the garbage collector. The default one, for whatever reason, would get behind on its collections at points, so the memory would balloon out of control, and once allocated, that became the new normal for the GC to operate at.
    Manually setting the GC to ConcurrentMarkSweep solved this issue completely. After numerous tests, I have been unable to reproduce the memory issue. That garbage collector does an excellent job of keeping on top of these minor collections.
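    (My addition for readers: "ConcurrentMarkSweep" refers to the HotSpot collector enabled by the flag below; TestFBot is the poster's test class from above.)

        java -XX:+UseConcMarkSweepGC TestFBot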

  • UCCX 7 Heap Memory Usage Exceeded Error

    UCCX 7.0.(1) SR5
    Getting the following error when updating or adding new script applications:
    "It is not recommended to update the application as Engine heap memory usage exceeded configured threshold. Click OK to continue and Cancel to exit."
    Apparently this is an alert that was built into SR4 and is configurable under the System Parameters.
    Does anyone have information on what processes use the heap memory in UCCX or how to monitor the usage?

    As Tom can attest to by now, this is something of an iceberg with big sharp edges below the surface.
    The Java heap is fixed at 256MB on CCX. The Java heap is used by Tomcat as execution memory. In addition to this, applications, scripts, and other repository data are loaded into the heap at runtime. Depending on your environment, you may be approaching the limits of the heap, which cannot be changed. If the heap limit is reached, it will be dumped and calls will be impacted.
    What have you been doing as of late on your CCX server? How many applications and scripts do you have? Are any of these using XML files extensively?
    Note there is also a possible bug where the MIVR engine does not properly release all objects loaded into the heap at the end of a script execution, leading to a memory leak of sorts. The discussion [debate] over this behavior is continuing. As of this week, it may be represented under
    CSCte49231. If it is, this may qualify as the most poorly described defect ever.

  • DPS 6.x jvm memory heap size?

    I am setting up directory proxy server 6.3 and have the java setting for memory heap size in my notes from testing last year. Is it important to set this? Is the argument as stated ok? And is 500 ok? Doubt anything has changed since last year, but want to be sure. Thanks.
    dpadm set-flags /opt/dps jvm-args="-Xmx500M -Xms500M -XX:NewRatio=1"

    crosspost

  • Mapping set heap sizes to used memory

    Hi all,
    I've got a question about the parameters used to control your java process' heap sizes: "-Xms128m -Xmx256m" etc.
    Let's say I set my min and max to 2Gb, just for a simplistic example.
    If I then look at the linux server my process is running on, I may see a top screen like so:
    PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
    10647 javaprog 20   0 2180m 1.9g  18m S  1.3  3.7   1:57.02 java
    What I'm trying to understand is what relationship - if any - there is between these arguments and the figures I see within top. One thing in particular that I'm interested in is that I occasionally see a RES (or more commonly a VIRT) size higher than the maximum that I have provided to Java. Naively I would assume that there is therefore no relationship between the two... but I wouldn't mind someone clarifying this for me.
    Any resources on the matter would be appreciated, and I apologise if this question is outside the realms of this particular subforum.
    Dave.

    Peter Lawrey wrote:
    user5287726 wrote:
    Peter Lawrey wrote:
    It will always reserve this much virtual memory, plus some. In terms of resident memory, even the minimum is not guaranteed to be used. The minimum specifies the point below which the JVM makes little effort to recycle memory, i.e. it grows to the minimum size freely, but a "Hello World" program still won't use the minimum size.
    No, Linux does not reserve virtual memory. Just Google "Linux memory overcommit". Out of the box, every Linux distro I'm aware of will just keep returning virtual memory to processes until things fall apart and the kernel starts killing processes that are using lots of memory - like your database server, web server, or application-critical JVMs. You know - the very processes you built and deployed the machine to run. Just Google "Linux OOM killer".
    That's not the behaviour I see. When I start a process which busy-waits but doesn't create any objects, the virtual memory size used is based on the -mx option, not on how much is used. Given that virtual memory is largely free, why would an OS only give out virtual memory on an as-needed basis?
    A busy-looping process which does nothing; in each case the resident size is 16m:
    option       virtual size
    -mx100m      368m  = 100m  + 268m
    -mx250m      517m  = 250m  + 267m
    -mx500m      769m  = 500m  + 269m
    -mx1g        1294m = 1024m + 270m
    -mx2g        2321m = 2048m + 273m
    To me it appears that the maximum size you ask for is immediately added to the virtual memory size, even if it's not used (plus an overhead); i.e. the resident size is only 16m.
    Yes, it's only using 16m. And its virtual size may very well be what you see. But that doesn't mean the OS actually has enough RAM + swap to hold what it tells all running processes they can have.
    How much RAM + swap does your machine have? Say it's 4 GB. You can probably run 10 or 20 JVMs simultaneously with the "-mx2g" option. Imagine what happens, though, if they actually try to use that memory - memory the OS said they could have, but which doesn't all exist.
    What happens?
    The OOM killer fires up and starts killing processes. Which ones? Gee, it's a "standard election procedure". On a server that's actually doing something, those tend to be the processes actually doing something, like your DBMS or web server or JVM. Or maybe it's your backups that get whacked because they're "newly started" and got promised access to memory that doesn't exist.
    Memory overcommit on a server with availability and reliability requirements more stringent than risible is indefensible.
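    A small experiment (my own sketch, not from the thread) makes the reserve-versus-touch distinction visible. Run it with e.g. java -Xmx1g TouchPages, watch RES and VIRT in top at each pause, and note that VIRT reflects the reservation while RES only grows as pages are written. The 4096-byte page size is an assumption typical of x86 Linux, and depending on how the JVM zeroes fresh heap pages, the allocation itself may already touch some of them.

        public class TouchPages {
            public static void main(String[] args) throws Exception {
                final int SIZE = 512 * 1024 * 1024;      // 512 MB backing array
                byte[] block = new byte[SIZE];
                System.out.println("Allocated. Check VIRT/RES in top, then press Enter.");
                System.in.read();
                for (int i = 0; i < SIZE; i += 4096)     // write one byte per page
                    block[i] = 1;
                System.out.println("Touched every page. RES should now be much higher.");
                System.in.read();                        // keep the process alive for top
            }
        }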

  • Heap size memory error on WebI Reports

    Hello,
    I'm developing my own JDBC driver to report specific data in Business Objects. I currently have a universe built on this driver and a Web Intelligence report. When I run a query I get a heap size error due to the size of my JVM. My JDBC driver is deployed as a jar file. Normally I can increase the JVM size with the command-line argument -Xmx256M.
    I tried several things:
    - I modified the registry key HKEY_LOCAL_MACHINE/SYSTEM/CurrentControlSet/Control/Session Manager/Subsystems, setting SharedSection to 1024,3072,1024.
    - I raised Tomcat's memory to 1024; I can't go higher because I'm on 32-bit Windows Server 2003.
    - I tried to add MAX_HEAP_SIZE=1024000000 to webi.properties in c:\programfiles\Tomcat\webapps\businessobjects\enterprise115\desktoplaunch\WEB-INF\classes\, as explained in a post, but this directory doesn't exist. There are 4 different webi.properties files, one of which is in C:\Program Files\Business Objects\Tomcat55\webapps\PerformanceManagement\WEB-INF\lib, so I added the max heap size parameter to that one.
    - I improved my JDBC driver to avoid memory leaks, and my BO queries to limit results.
    But I still get a heap size error.
    Does anyone have other ideas to solve my problem?
    Best regards.       
    BO XI 3.1 Service Pack 2 HotFix 1
    Tomcat 5.5 : 1024Mo memory allocation
    Windows Server 2003 32bits

    Hi Hizam,
    1. Go to universe and click on View--> Structure
    2. Check objects and Conditions.
    3. If you found any changes then re-export your universe and run queries.
    Once everything goes fine then Schedule the report.
    Hope it helps you....!!
    Thank You!

  • How to increase Memory and Java Heap Size for Content Server

    Hi,
    My content server is processing requests very slowly. Overall performance is not good. I have 2 GB of RAM. Any idea what files I can modify to increase the Java heap size for the Content Server? If I increase the RAM to 4 or 6 GB, where do I need to make changes for the Java heap size, and what are the recommended values? I just have Content Server running on the Linux box. Also, how do I assign more memory to the user that owns the content server install?
    Thanks

    You might find these interesting:
    http://blogs.oracle.com/fusionecm/2008/10/how_to_-javatuning.html
    http://download.oracle.com/docs/cd/E10316_01/cs/cs_doc_10/documentation/admin/performance_tuning_10en.pdf
    Do you have access to metalink? This has about everything you could want:
    https://metalink2.oracle.com/metalink/plsql/f?p=130:14:9940589543422282072::::p14_database_id,p14_docid,p14_show_header,p14_show_help,p14_black_frame,p14_font:NOT,788210.1,1,1,1,helvetica
    Or search for "788210.1" in metalink knowledgebase if that link doesn't work and look for the FAQ on configuring Java for Content Servers

  • Setting memory heap size

    Hi,
    I would like to know how to set the default memory heap size to 128 MB while installing the Java Plug-in itself.
    Is there an environment variable available for setting this?
    I don't want to use the Control Panel option for setting it.
    Thanks for your help.

    Hey, I have the same issue - how did you solve it?
    Hi Gkumarc1,
    Plug-in 1.3.0 (and later) reads the property file in the .java directory of the user's home directory. Java runtime parameters are saved in that property file. Here is an example of it:
    # Java(TM) Plug-in Properties
    # DO NOT EDIT THIS FILE. It is machine generated.
    # Use the Activator Control Panel to edit properties.
    javaplugin.jre.params=-Xmx128m
    Hope this helps.
    Sun-DTS
    Is there a way to set the default heap size on the fly? Meaning, we want to avoid the need for our customers to go into their control panel and make this setting. Instead, we would like to have the applet load with our preferred default VM size. This would allow us to change the preferred size during some future performance enhancements without having to contact all of our 2,000+ external customers. Is there something that can be put in the PARAM tag within the HTML that specifies the preferred/recommended default size?
    Any help would be greatly appreciated.
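    (An aside of my own, not from the original thread: in the 1.3/1.4 plug-in era there was no supported PARAM for this, which is why the property file above was the usual answer. The next-generation plug-in introduced in Java SE 6 update 10 later added one - an applet tag can carry, for example, <param name="java_arguments" value="-Xmx128m"/>, which older plug-ins simply ignore.)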

  • Memory Usage:  Heap space vs System Monitor

    Hi guys
    I'm trying to wrap my head around something which seems pretty complicated. We have an enterprise app making use of Jasper Reports. When a report is generated, the app's memory usage in the Linux (Ubuntu 8.10) System Monitor shoots up by around 20MB (the more data on the report, the higher that number). After the report is closed, memory usage doesn't decrease. This is resulting in performance issues, and memory usage given by the System Monitor doesn't match up with the results given by the NetBeans profiler. There is a huge difference between the values indicated by the System Monitor, and the heap space given by the profiler. I've done some fairly detailed checking using the profiler, and according to that, there are 0 instances of our report generating classes in memory after closing a generated report. I've read a bunch of articles on the net, but I'm still stumped. My best guess is that something weird is happening somewhere in the Jasper code, but I can't really think of a way to confirm that. Does anyone have any idea what could be causing this and how to prevent it? Any help will be much appreciated; thanks.
    P.S. I've posted on the Jasper forums are well, but to no avail. [http://jasperforge.org/plugins/espforum/view.php?group_id=102&forumid=103&topicid=65335]
    Edit: We're using Java 6 (we've recreated the issue on a few releases of Java 6), and Jasper Reports 3.0.1. (Just in case that is important.)

    One thing about your test is that using -Xms110m will cause the JVM to NEVER give back any memory, because it won't ever go below 110M. I altered your test a bit:
    import java.util.LinkedList;

    public class MemTest {
        private static LinkedList<byte[]> mem = new LinkedList<byte[]>();

        public static void main(String[] args) {
            try {
                System.out.println("Initial:");
                print();
                useMem();
                System.out.println("Allocated:");
                print();
                clearMem();
                System.out.println("Deallocated:");
                print();
                for (;;) {
                    Thread.sleep(10000);
                    System.out.println("After 10 seconds: ");
                    print();
                }
            } catch (Exception ex) {
                ex.printStackTrace();
            }
        }

        private static void useMem() {
            for (;;) {
                try {
                    mem.add(new byte[1024 * 1024]);
                } catch (OutOfMemoryError e) {
                    // free up a few MB to make sure we don't get an OOME while trying to print, etc
                    mem.removeFirst();
                    mem.removeFirst();
                    mem.removeFirst();
                    break;
                }
            }
        }

        private static void clearMem() {
            mem = null;
            System.gc();
            System.gc();
            System.gc();
        }

        private static void print() {
            long free = Runtime.getRuntime().freeMemory();
            long total = Runtime.getRuntime().totalMemory();
            double freeFraction = (((double) free) / (total)) * 100;
            System.out.printf("\tAvailable: %d\tTotal: %d\tFree: %4.2f%%%n", free, total,
                    freeFraction);
        }
    }

    When I ran it with
    -Xms16m -Xmx1024m -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=30
    I got these results:
    Initial:
         Available: 16518016     Total: 16711680     Free: 98.84%
    Allocated:
         Available: 2070952     Total: 1065484288     Free: 0.19%
    Deallocated:
         Available: 582957376     Total: 583073792     Free: 99.98%
    After 10 seconds:
         Available: 582953224     Total: 583073792     Free: 99.98%
    As you can see, the JVM did return about half of the memory. ProcExp shows ~592MB used, and it peaked at ~1000MB. But right now it's still running, and still holding at 583MB.
    I've read bug reports like this, but honestly have no idea what the people at Sun are talking about. I don't see the footprint being reduced. I don't know, maybe if I wait a couple of days...

  • Memory Notification: Library Cache Object loaded in Heap size 2262K exceeds

    Dear all
    I am facing the following problem. I am using Oracle 10gR2 on Windows.
    Please help me.
    Memory Notification: Library Cache Object loaded into SGA
    Heap size 2262K exceeds notification threshold (2048K)
    KGL object name :XDB.XDbD/PLZ01TcHgNAgAIIegtw==
    Thanks

    This is a normal warning message displayed by release 10.2.0.1.0; it is just a bug in the default value of the _kgl_large_heap_warning_threshold instance parameter (the commonly suggested value for it is 8388608). The bug is harmless, but the problem is that you will see a lot of these messages in the alert.log file, which makes the file difficult to read and makes it hard to spot the real errors.
    Just declare a higher value for the _kgl_large_heap_warning_threshold undocumented instance parameter. This is meant to be corrected in 10.2.0.2.0, but you can manually increase this parameter to a value higher than the largest heap size reported.
    For further references take a look at this metalink note:
    Memory Notification: Library Cache Object Loaded Into Sga
         Doc ID:      Note:330239.1
    ~ Madrid
    http://hrivera99.blogspot.com/

  • How to monitor heap size used in OPP in order to prevent Out of Memory errors

    My objective is to know how much heap is currently in use, so I can handle it or do something before a user's report fails with an error.
    The following SQL statement only shows what heap size is currently configured per OPP process:
    select DEVELOPER_PARAMETERS from FND_CP_SERVICES
    where SERVICE_ID = (select MANAGER_TYPE from FND_CONCURRENT_QUEUES
    where CONCURRENT_QUEUE_NAME = 'FNDCPOPP');
    The OPP heap size is currently set to 1.5G.
    Thanks

    Hi Sam,
    Please review the following notes, as they might help you!!!
    Tuning Output Post Processor (OPP) to Improve Performance (Doc ID 1399454.1)
    R12: How to Configure the Account Analysis Report for Large Reports (Doc ID 737311.1)
    Thanks &
    Best Regards,
