If I consider heap size, should I follow component binding or value binding?

Hi,
I have a web application whose pages change rapidly at run time (components get disabled and hidden depending on user input). In that case, which binding approach should I follow if I consider runtime heap memory?

navaneeth.j wrote:
As I understand it, component binding gives me more grip on the component: in a ValueChange event I can get the updated value in the backing bean. With value binding, the updated values of the rest of the components are not available; I can only call evt.getNewValue() for the component that produced the event. For example, if there are 5 components on the page and the event is generated by the 3rd component, the valueChange method may also need the values of the first two components. With value binding I can't get those values, but with component binding I can get every component's updated value. That's why I mentioned AJAX-enabling the view in the previous post.
I can accept that with third-party libraries, value binding would make it very easy to detach one library and use another, but for basic components I am not able to see the need to use the other API. If my guess is right, for look-and-feel purposes we can just change the render kit.
Please suggest whether I am thinking in the right direction or not.

RichFaces, Tomahawk and many others have lots of components that solve common problems you are going to meet when using plain JSF components.
Yes, people try to stick to standard JSF as much as possible, but it is not always possible, nor always beneficial.
Like I said, it's not hard and fast. As long as you understand the advantages and disadvantages of each technique, you can choose the correct approach according to your requirements.
The memory differences are unlikely to have a major impact either way.
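For what it's worth, here is a minimal sketch of the two styles side by side (bean and component names are made up). The component-binding variant keeps a reference to the whole UIComponent in the bean, which is what costs the extra heap per view:

    import javax.faces.component.html.HtmlInputText;

    // Hypothetical request-scoped backing bean illustrating both styles.
    public class UserBean {

        // Value binding: the page uses value="#{userBean.name}".
        // Only the converted value lives on the bean; no component reference.
        private String name;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }

        // Component binding: the page uses binding="#{userBean.nameInput}".
        // The bean holds the whole component, so it can toggle
        // rendered/disabled at run time, at the cost of a larger footprint
        // and tighter coupling to the component API.
        private HtmlInputText nameInput;
        public HtmlInputText getNameInput() { return nameInput; }
        public void setNameInput(HtmlInputText nameInput) { this.nameInput = nameInput; }
    }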

Similar Messages

  • Increase Java Max Heap Size

    Hello,
    How can I increase the Java max heap size to more than 2 GB? I tried a bunch of different sizes, and anything under 2 GB works just fine, so this appears to be a hard limit.
    System info:
    Linux server with a 32-bit process data model, Java 1.5, 9 GB total physical memory, and available virtual and physical memory close to 4 GB.
    I would appreciate any guidance on how to resolve this.
    Regards,
    Omid

    warnerja wrote:
    Any application wanting to allocate that much heap should be considered a rogue application and a system hog of gigantic proportions, not desirable to have running on anyone's machine. Focus on designing it correctly so that it doesn't require so much memory in the first place.
    By virtue of the fact that the author has a test box with 9 GB of RAM (I'm inferring), it seems unlikely that this app will be distributed to just "anyone."
    However, if this is meant to be some server app that scales to an enormous memory capacity, you're probably better off not using the java launcher directly and instead using some enterprise container.
    I'm a complete newb when it comes to anything meant to be scalable though, so warnerja would probably have better advice than I would.

  • NT Service and Heap size

    Can someone tell me if there is a way to:
    (1) verify and modify the heap size of an NT service?
    (2) In multi-server mode there are two NT services for the WebLogic domains; do I need to change the heap size of the weblogicAdmin service when increasing the heap size? What will happen if PIA has a 1024 MB heap and weblogicAdmin 256 MB?
    Thanks in advance/T

    Nicolas, testing shows that once the NT service is installed, modifying the heap size in setenv.cmd and re-running setenv.cmd to reinstall the NT service does not change the heap size already stored in the registry. So the tests suggest I can modify the heap size directly in the registry and restart the service? Am I doing something wrong here? Is it correct to simply modify the existing heap size in the registry? Please let me know what else I should do, given that the old heap size is already in the registry and reinstalling the NT service does not replace it with the new one.
    Another question, along the lines of WebLogic tuning: I found the following on the BEA site. During peak times we may have 40,000 unique users per day. Should I also consider increasing MaxUserPort? The WebLogic domains are hosted on Windows 2003 servers.
    Windows Tuning Parameters
    For Windows platforms, the default settings are usually sufficient. However, under sufficiently heavy loads it may be necessary to adjust the MaxUserPort and TcpTimedWaitDelay. These parameters determine the availability of user ports requested by an application.
    By default, ephemeral (that is, short-lived) ports are allocated between the values of 1024 and 5000 inclusive using the MaxUserPort parameter. The TcpTimedWaitDelay parameter, which controls the amount of time the OS waits to reclaim a port after an application closes a TCP connection, has a default value of 4 minutes. Under heavy loads, these limits may be exceeded, resulting in an "address in use: connect" exception. If you experience "address in use: connect" exceptions, try setting the MaxUserPort and TcpTimedWaitDelay registry values under the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters key:
    MaxUserPort = dword:00004e20 (20,000 decimal)
    TcpTimedWaitDelay = dword:0000001e (30 decimal)
    Increase the value of the MaxUserPort parameter if the exception persists.
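    As a .reg file, that would look something like this (a sketch built from the values above; edit the registry at your own risk and back it up first):

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
        "MaxUserPort"=dword:00004e20
        "TcpTimedWaitDelay"=dword:0000001e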
    For more information about Windows 2000 tuning options, see:
    - The Microsoft Windows 2000 TCP/IP Implementation Details
    - The Windows 2000 Performance Tuning white paper
    thanks
    T

  • Please can you help me: how do I increase the heap size?

    Hi,
    I'm new to the 10g application server. I'm getting the following error while deploying an application:
    Failed to deploy web application "AGXI51". Failed to deploy web application "AGXI51". . The evaluate phase failed. The Adapter used in the evaluate may have thrown an exception.
    Resolution:
    Please call Oracle support.
    Base Exception:
    java.lang.OutOfMemoryError
    null. java.lang.OutOfMemoryError
    So what is the procedure to increase the heap size?
    Regards,
    Niranjan.D

    Locate opmn.xml on your instance. In the file, locate the section relevant to the container you use (e.g. "home"). In the next line you can see the tags where startup parameters are set. You should add the following parameters (this will set the heap size to 512 MB):
    -Xms512m -Xmx512m
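    In opmn.xml this typically ends up inside the start-parameters of the container, along these lines (a sketch; the exact structure can differ between releases, and "home" is the container mentioned above):

        <process-type id="home" module-id="OC4J">
          <module-data>
            <category id="start-parameters">
              <data id="java-options" value="-server -Xms512m -Xmx512m"/>
            </category>
          </module-data>
        </process-type>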
    For detailed explanation I would recommend you to check following link:
    http://download-uk.oracle.com/docs/cd/B25221_03/web.1013/b14431/troublesht.htm
    Hope this helps.

  • Too low initial heap size

    Hi,
    Any idea why the following line gives a "too low initial heap size" error during server startup?
    $JAVA_HOME/bin/java $JAVA_VM -XX:NewSize=256m -XX:MaxNewSize=256m -XX:SurvivorRatio=6 -Xms2048m -Xmx2048m -Xrs -verbose:gc
    This error appears when I add the -XX:NewSize=256m -XX:MaxNewSize=256m parameters.
    Thanks in advance.
    Regards,
    Chintan.

    Just a thought: maybe you should set both,
    initial-heap-size="128m" max-heap-size="256m"
    I think the client JVM defaults to a 64m maximum, so setting only the initial heap to 128m would make the initial size bigger than the maximum, which leads to an error.
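    You can reproduce that failure mode directly from the command line; an invocation along these lines (flag values are illustrative) is rejected at startup with an error about the initial heap exceeding the maximum:

        java -Xms256m -Xmx128m -version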
    hope that helps...
    Patric

  • How to define the limit of the max heap size?

    Hi All,
    I would like to know what should be the limit of the JVM max heap size.
    What will happen if we will not define it?
    What is the purpose of defining it from the technical point of view?
    Thanks
    Edited by: Anna78 on Jul 31, 2008 12:36 PM

    Defining a max heap size that is too large can have the following effect: if you create new objects, the VM may decide it is not worth getting rid of garbage-collectable ones, as there is still plenty of space between the current heap size and the maximum allowed. The result is that the application will run faster but consume more memory than it really needs.
    If the heap size is too small, but still sufficient, the application will do a lot of garbage collection and therefore run slower. On the other hand, it will stay inside the tight space it has been allowed to use.
    The speed difference may or may not be noticeable, just as the difference between 256 MB and 512 MB may or may not matter on today's computers.
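    One way to observe this trade-off yourself (a sketch; substitute your own main class) is to run the same workload under -verbose:gc with a small and a large ceiling and compare how often collections occur and how long they take:

        java -verbose:gc -Xmx64m MyApp
        java -verbose:gc -Xmx512m MyApp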

  • How to set the heap size in the Administration Console

    Hi All,
    Please let me know how to increase the heap size in the WebLogic Administration Console.
    Regards
    Madhu

    The answer to this depends on whether you're trying to set the heap size for a managed server (one managed by the NodeManager) or for the admin server.
    First click on "Lock and Edit".
    If the former, go into "Environment"->"Servers", then click on the server you want to configure. This should start you on the "Configuration"->"General" tab. Now click on the "Server Start" tab. Find the "Arguments" field. Assuming this is blank, put something like the following in the field value:
    -Xmx1536m
    Then click "Save", then "Activate Changes". When you restart the managed server the next time, it should use those new settings.
    If, however, you're trying to set this on the admin server, you can't do this in the admin console at all. In that case, go into $DOMAIN_HOME/bin and edit the "setDomainEnv.{sh,cmd}" script (".sh" if on Unix/Linux, ".cmd" on Windows). Find the line that sets the "MEM_ARGS" variable, like the following on Windows:
    set MEM_ARGS=-Xms256m -Xmx512m
    Change this line to whatever you want, then restart the admin server.
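    On Unix/Linux the corresponding line in setDomainEnv.sh would look something like the following (exact variable handling varies by WebLogic version):

        MEM_ARGS="-Xms256m -Xmx512m"
        export MEM_ARGS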

  • Strange: Mac screen menu-bar requires max-heap-size to be set.

    I planned to omit the max-heap-size attribute in this line of my jnlp file:
    <j2se version="1.6+" max-heap-size="256m" />
    The idea was that with Java 1.6 the heap size is set automatically according to the client's RAM.
    Unfortunately, the Macintosh screen menu-bar works if and only if the max-heap-size attribute is present.
    It is Mac OS X 10.4 with Java 1.5.
    Strange, since the Mac runs Java 1.5 and I am talking about settings for 1.6.
    The jnlp passed ValidateJNLP at http://mindprod.com/jgloss/jnlp.html#VALIDATION
    Here is another post stating that attributes in JNLP have side effects on Mac's screen menu-bar:
    http://lists.apple.com/archives/Java-dev/2008/Jul/msg00122.html
    Here is my jnlp:
    w3m -dump http://www.bioinformatics.org/strap/strap.jnlp
    Is there an explanation for this?
    Christoph

    user10289576 wrote:
    I would not blame Macintosh. The error might still be in Sun's code.
    Could be. But to fix the Mac VM it would require the following:
    1. Find it in the Sun VM.
    2. Fix it in the Sun VM.
    3. Move the changes to the Mac VM... somehow.
    If the JDK were smaller, less redundant and clearer, then OpenJDK could possibly be compiled on a Mac.
    Not sure what that means, since there are likely OS-level calls that must be implemented somewhere in the API, and those are specific to the Mac.
    Just as there are differences between Windows/Solaris/Linux which Sun accounted for.
    And that is what Apple did to make it work on the Mac, and what someone (someone who likes the Mac) will need to continue to do with the public release (expulsion?), which is the form that Java will have going forward.
    The main thing that should be improved on a Macintosh is to directly allow for Linux and Solaris executables, such that the Linux JDK could work directly on a Mac.
    The main thing, again, is that Apple is no longer supporting Java on the Mac. And Apple, not Sun and certainly not Oracle, was the one that created the Mac Java VM.
    So the main thing at this point is that ALL future directions that Java takes are dependent not on Apple but on the Macintosh community. That includes features as well as fixes.

  • Safely reducing JVM heap sizes

    I have just started to rewrite a physics multi-particle simulation program in Java. The program is supposed to have a light, essentially constant memory footprint throughout the execution of the simulation. While I am aware that switching from legacy procedural code to a clean OO design will cause a very significant increase in memory consumption (let alone any language implementation issues), it was a bit jarring to see that the process running my first working prototype consumed ~64 MB, while serializing the simulation framework object (which contains references to everything else in the program) results in a 115 KB file. I asked Google for help and it told me about the -Xmx switch, which sets the maximum heap size reserved by the JVM. A very brief test indicates it can really help, as -Xmx16m brought memory uptake down to a much saner ~18 MB. However, I have a few concerns about this kind of tuning; namely, stability over long execution times (potentially several days), performance issues due to garbage collection, and the need to accommodate the growth of the program (as it is only a very basic prototype right now). So I would like to ask for advice on good practices and, eventually, useful tools (profilers, etc.) to identify potential problems. Thanks in advance!
    P.S.: If any of you is thinking "just use C++, then!", that's my ultimate goal. But before that I need to become more fluent in that language in order to avoid unnecessary suffering with memory leaks and the like. I believe porting to C++ later on will be a lot easier if I have the design done in Java, and having a functional program with decent performance in the meantime would help a lot. Furthermore, I like Java, and it would be nice to see it doing scientific computing :-)

    @paulcw: Thanks for the useful advice, I will definitely consider these options. Right now my main problem is that I'm a very inexperienced programmer, and Java is the language I feel most able to produce a solid design with. On the other hand, now I have another reason to have a good look at Python... About the C++ issue, I think I will have to make the switch at some point, not necessarily due to performance issues, but because I would like to be able to run the program on systems where I won't have the luxury of, say, an available JVM. But you're probably right that I should start porting while the program is still small and simple, even if I need to learn a lot of C++ skills in the process...
    sjasja wrote:
    While I am aware that switching from legacy procedural code to a clean OO design will cause a very significant increase in memory consumption
    I'm not sure there is such a rule. An int is 32 bits in C and in Java. A pointer is 32 bits in both (or 64 on a 64-bit system). A Java object has an object header (Sun JVM: 8 bytes on a 32-bit system); when you malloc() a struct in C it typically has 4 or 8 bytes of malloc header, and ditto for new in C++.
    It is possible to be wasteful of memory in any language or any programming style. Just don't do that then.
    Indeed. It is shocking how much prejudice people have about such issues, as I demonstrated in my post. In fact, I found that my first Java build ran remarkably fast, considering I feared at first that it would be maybe one order of magnitude slower than our legacy code. (By the way, said legacy code is not aggressively optimized. There are some very fast packages available for the calculations we run, but more often than not we prefer sticking to our "amateur" in-house tools for simplicity of operation and for having full control over what the program does, or which features we might want to add. The problem is that such an approach does not work well when you spend half your productive time fighting with messy FORTRAN code.)
    my first working prototype consumed ~64MB
    How do you determine that?
    Tools like top on Unix or Task Manager on Windows are not reliable measures of memory use. Take this C program:
    main() { sleep(60); }
    According to Windows Task Manager that takes 1.2 megabytes of memory to execute. In reality it doesn't: most of that memory is the C runtime DLL that is mapped into memory. Most of the runtime is never touched by the program; most of it is never brought from disk to RAM. But because it is mapped into memory, ready to be used, Task Manager reports it. Even if the library is used, several processes share the same memory-mapped copy; memory use as reported by simple tools can far exceed available memory, as the same libraries are counted again and again for many processes.
    You get a similar effect with the Java runtime. There is lots of stuff in there, like graphics libraries, networking, etc., that lives in the runtime DLLs and jars. Those are memory-mapped but never touched, and tools that report memory usage are likely to mislead the careless observer.
    Newbie question: what would be a good way, then, to monitor actual RAM consumption, either using Java features or Linux commands?
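    From inside the JVM, the java.lang.management API gives a more honest picture of heap consumption than top or Task Manager; a minimal sketch (class name is made up):

        import java.lang.management.ManagementFactory;
        import java.lang.management.MemoryUsage;

        public class HeapMonitor {
            public static void main(String[] args) {
                // Heap as the JVM itself accounts for it: used vs committed vs max.
                MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
                System.out.printf("used=%dMB committed=%dMB max=%dMB%n",
                        heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
            }
        }

    The "committed" figure is what the process has actually reserved for the heap; OS tools additionally count memory-mapped libraries, as described above.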

  • RTSJ ver RTS-2.2u1 - Running with large heap size

    We installed the RTSJ version, which should support 64-bit, on Solaris S10X_u8 with 8 GB of memory.
    We expected to be able to define a heap size greater than 4 GB.
    But when we ran with the following flags: -D64 -Xms4g -Xmx4g
    we got the following message:
    Invalid maximum heapsize:-Xmx4g
    The specified size exceeds the maximum representable size
    Could not create the java virtual machine

    Gabi,
    That should be -d64 (small 'd'); the -D arguments define Java system properties.
    Also note that in JRTS we don't dynamically grow the heap, so you only need one of -Xms or -Xmx to set the heap size.
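    So the working invocation would be along these lines (the main class is a placeholder):

        java -d64 -Xmx4g MainClass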
    David

  • What is the best way to verify default heap size in Java

    Hi All,
    What is the best way to verify the default heap size in Java? Does it vary from JVM to JVM? I was reading this article http://javarevisited.blogspot.sg/2011/05/java-heap-space-memory-size-jvm.html , which says the default size is 128 MB, but when I run the following code:
    public static void main(String args[]) {
        int MB = 1024 * 1024;
        System.out.println(Runtime.getRuntime().totalMemory() / MB);
    }
    it prints "870", i.e. 870 MB.
    I am a bit confused; what is the best way to verify the default heap size in any JVM?
    Edited by: 938864 on Jun 5, 2012 11:16 PM

    938864 wrote:
    Hi Kayaman,
    Sorry, but I don't agree with you on the verification part. Why can't I verify it? To me, "default" means the value when I don't specify -Xms and -Xmx, and by the way, I was testing that program on a 32-bit JRE 1.6 on Windows. I am also curious about the significant difference between the 128 MB claim and the 870 MB I saw; do you see anything obviously wrong?
    That spec is outdated. Since Java 6 update 18 (Sun/Oracle implementation), the default maximum heap space is calculated based on total memory availability, but is never more than 1 GB on 32-bit/client VMs. On a 64-bit server VM the default can go as high as 32 GB.
    The best way to verify ANYTHING is to consult multiple sources of information, especially those produced by the source itself, not some page you found on the big bad internet. Even Wikipedia is a whole lot better than a random internet site, IMO. That's common sense; I can't believe you put much thought into it if you have to ask in a forum.
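    If you do want to verify it empirically, asking the JVM itself is more reliable than any article; a small sketch (Runtime.maxMemory() reports the effective -Xmx):

        public class HeapDefaults {
            public static void main(String[] args) {
                long mb = 1024 * 1024;
                Runtime rt = Runtime.getRuntime();
                System.out.println("max (-Xmx):        " + rt.maxMemory() / mb + " MB");
                System.out.println("total (committed): " + rt.totalMemory() / mb + " MB");
                System.out.println("free:              " + rt.freeMemory() / mb + " MB");
            }
        }

    On recent HotSpot builds you can also run java -XX:+PrintFlagsFinal -version and look for MaxHeapSize to see the computed default without writing any code.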

  • Massive memory hemorrhage; heap size goes from about 64 MB to 1.3 GB usage

    **[SOLVED]**
    Note: I posted this on stackoverflow as well, but a solution was not found.
    Here's the problem:
    [1] http://i.stack.imgur.com/sqqtS.png
    As you can see, the memory usage balloons out of control! I've had to add arguments to the JVM to increase the heap size just to avoid out-of-memory errors while I figure out what's going on. Not good!
    ##Basic Application Summary (for context)
    This application is (eventually) going to be used for basic on-screen CV and template-matching type things for automation purposes. I want to achieve as high a frame rate as possible for watching the screen, and handle all of the processing via a series of separate consumer threads.
    I quickly found out that the stock Robot class is really terrible speed-wise, so I opened up the source, took out all of the duplicated effort and wasted overhead, and rebuilt it as my own class called FastRobot.
    ##The Class' Code:
        import java.awt.AWTException;
        import java.awt.GraphicsDevice;
        import java.awt.GraphicsEnvironment;
        import java.awt.HeadlessException;
        import java.awt.Point;
        import java.awt.Rectangle;
        import java.awt.Robot;
        import java.awt.Toolkit;
        import java.awt.image.BufferedImage;
        import java.awt.image.DataBufferInt;
        import java.awt.image.DirectColorModel;
        import java.awt.image.Raster;
        import java.awt.image.WritableRaster;
        import java.awt.peer.RobotPeer;
        import sun.awt.ComponentFactory;

        public class FastRobot {
            private Rectangle screenRect;
            private GraphicsDevice screen;
            private final Toolkit toolkit;
            private final Robot elRoboto;
            private final RobotPeer peer;
            private final Point gdloc;
            private final DirectColorModel screenCapCM;
            private final int[] bandmasks;

            public FastRobot() throws HeadlessException, AWTException {
                this.screenRect = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
                this.screen = GraphicsEnvironment.getLocalGraphicsEnvironment().getDefaultScreenDevice();
                toolkit = Toolkit.getDefaultToolkit();
                elRoboto = new Robot();
                peer = ((ComponentFactory) toolkit).createRobot(elRoboto, screen);
                gdloc = screen.getDefaultConfiguration().getBounds().getLocation();
                this.screenRect.translate(gdloc.x, gdloc.y);
                screenCapCM = new DirectColorModel(24,
                        /* red mask */   0x00FF0000,
                        /* green mask */ 0x0000FF00,
                        /* blue mask */  0x000000FF);
                bandmasks = new int[3];
                bandmasks[0] = screenCapCM.getRedMask();
                bandmasks[1] = screenCapCM.getGreenMask();
                bandmasks[2] = screenCapCM.getBlueMask();
                Toolkit.getDefaultToolkit().sync();
            }

            public void autoResetGraphicsEnv() {
                this.screenRect = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
                this.screen = GraphicsEnvironment.getLocalGraphicsEnvironment().getDefaultScreenDevice();
            }

            public void manuallySetGraphicsEnv(Rectangle screenRect, GraphicsDevice screen) {
                this.screenRect = screenRect;
                this.screen = screen;
            }

            public BufferedImage createBufferedScreenCapture(int pixels[]) throws HeadlessException, AWTException {
                // Reuses the cached color model and band masks; only the pixel grab allocates.
                pixels = peer.getRGBPixels(screenRect);
                DataBufferInt buffer = new DataBufferInt(pixels, pixels.length);
                WritableRaster raster = Raster.createPackedRaster(buffer, screenRect.width, screenRect.height, screenRect.width, bandmasks, null);
                return new BufferedImage(screenCapCM, raster, false, null);
            }

            public int[] createArrayScreenCapture() throws HeadlessException, AWTException {
                return peer.getRGBPixels(screenRect);
            }

            public WritableRaster createRasterScreenCapture(int pixels[]) throws HeadlessException, AWTException {
                pixels = peer.getRGBPixels(screenRect);
                DataBufferInt buffer = new DataBufferInt(pixels, pixels.length);
                return Raster.createPackedRaster(buffer, screenRect.width, screenRect.height, screenRect.width, bandmasks, null);
            }
        }
    In essence, all I've changed from the original is moving many of the allocations out of the method bodies and making them fields of the class, so they're not re-created on every call. Doing this actually had a significant effect on frame rate. Even on my severely under-powered laptop, it went from ~4 fps with the stock Robot class to ~30 fps with my FastRobot class.
    ##First Test:
    When I started getting out-of-memory errors in my main program, I set up this very simple test to keep an eye on FastRobot. Note: this is the code that produced the heap profile above.
        import java.awt.AWTException;

        public class TestFBot {
            public static void main(String[] args) {
                try {
                    FastRobot fbot = new FastRobot();
                    double startTime = System.currentTimeMillis();
                    for (int i = 0; i < 1000; i++)
                        fbot.createArrayScreenCapture();
                    System.out.println("Time taken: " + (System.currentTimeMillis() - startTime) / 1000.);
                } catch (AWTException e) {
                    e.printStackTrace();
                }
            }
        }
    ##Examined:
    It doesn't do this every time, which is really strange (and frustrating!). In fact, it rarely does it at all with the above code. However, the memory issue becomes easily reproducible if I have multiple for loops back to back.
    #Test 2
        import java.awt.AWTException;

        public class TestFBot {
            public static void main(String[] args) {
                try {
                    FastRobot fbot = new FastRobot();
                    double startTime = System.currentTimeMillis();
                    for (int i = 0; i < 1000; i++)
                        fbot.createArrayScreenCapture();
                    System.out.println("Time taken: " + (System.currentTimeMillis() - startTime) / 1000.);
                    startTime = System.currentTimeMillis();
                    for (int i = 0; i < 500; i++)
                        fbot.createArrayScreenCapture();
                    System.out.println("Time taken: " + (System.currentTimeMillis() - startTime) / 1000.);
                    startTime = System.currentTimeMillis();
                    for (int i = 0; i < 200; i++)
                        fbot.createArrayScreenCapture();
                    System.out.println("Time taken: " + (System.currentTimeMillis() - startTime) / 1000.);
                    startTime = System.currentTimeMillis();
                    for (int i = 0; i < 1500; i++)
                        fbot.createArrayScreenCapture();
                    System.out.println("Time taken: " + (System.currentTimeMillis() - startTime) / 1000.);
                } catch (AWTException e) {
                    e.printStackTrace();
                }
            }
        }
    ##Examined
    The out-of-control heap is now reproducible, I'd say, about 80% of the time. I've looked all through the profiler, and the thing of most note (I think) is that the garbage collector seemingly stops right as the fourth and final loop begins.
    The output from the above code gave the following times:
    Time taken: 24.282 //Loop1
    Time taken: 11.294 //Loop2
    Time taken: 7.1 //Loop3
    Time taken: 70.739 //Loop4
    Now, if you sum the first three loops, it adds up to 42.676, which suspiciously corresponds to the exact time that the garbage collector stops, and the memory spikes.
    [2] http://i.stack.imgur.com/fSTOs.png
    Now, this is my first rodeo with profiling, not to mention the first time I've ever even thought about garbage collection (it was always something that just kind of worked magically in the background), so I'm unsure what, if anything, I've found out.
    ##Additional Profile Information
    [3] http://i.stack.imgur.com/ENocy.png
    Augusto suggested looking at the memory profile. There are 1500+ `int[]` that are listed as "unreachable, but not yet collected." These are surely the `int[]` arrays that `peer.getRGBPixels()` creates, but for some reason they're not being destroyed. This additional info unfortunately only adds to my confusion, as I'm not sure why the GC wouldn't be collecting them.
    ##Profile using small heap argument -Xmx256m:
    At irreputable's and Hot Licks's suggestion I set the max heap size to something significantly smaller. While this does prevent the 1 GB jump in memory usage, it still doesn't explain why the program balloons to its max heap size upon entering the 4th iteration.
    [4] http://i.stack.imgur.com/bR3NP.png
    As you can see, the exact issue still exists; it's just been made smaller. ;) The issue with this solution is that the program, for some reason, is still eating through all of the memory it can. There is also a marked change in fps performance between the first iterations, which consume very little memory, and the final iteration, which consumes as much memory as it can.
    The question remains: why is it ballooning at all?
    ##Results after hitting "Force Garbage Collection" button:
    At jtahlborn's suggestion, I hit the Force Garbage Collection button. It worked beautifully. It goes from 1 GB of memory usage down to the baseline of 60 MB or so.
    [5] http://i.stack.imgur.com/x4282.png
    So this seems to be the cure. The question now is, how do I programmatically force the GC to do this?
    ##Results after adding local Peer to function's scope:
    At David Waters' suggestion, I modified the `createArrayScreenCapture()` function so that it holds a local `Peer` object.
    Unfortunately no change in the memory usage pattern.
    [6] http://i.stack.imgur.com/Ky5vb.png
    Still gets huge on the 3rd or 4th iteration.
    #Memory Pool Analysis:
    ###ScreenShots from the different memory pools
    ##All pools:
    [7] http://i.stack.imgur.com/nXXeo.png
    ##Eden Pool:
    [8] http://i.stack.imgur.com/R4ZHG.png
    ##Old Gen:
    [9] http://i.stack.imgur.com/gmfe2.png
    Just about all of the memory usage seems to fall in this pool.
    Note: PS Survivor Space had (apparently) 0 usage
    ##I'm left with several questions:
    (a) does the Garbage Profiler graph mean what I think it means? Or am I confusing correlation with causation? As I said, I'm in an unknown area with these issues.
    (b) If it is the garbage collector... what do I do about it..? Why is it stopping altogether, and then running at a reduced rate for the remainder of the program?
    (c) How do I fix this?
    Does anyone have any idea what's going on here?
    Edited by: 991051 on Feb 28, 2013 11:30 AM
    Edited by: 991051 on Feb 28, 2013 11:35 AM
    Edited by: 991051 on Feb 28, 2013 11:36 AM
    Edited by: 991051 on Mar 1, 2013 9:44 AM

    SO came through.
    It turns out this issue was directly related to the garbage collector. The default one, for whatever reason, would fall behind on its collections at certain points, and the memory would balloon out of control, which then, once allocated, became the new normal for the GC to operate at.
    Manually setting the GC to the concurrent mark-sweep collector solved this issue completely. After numerous tests, I have been unable to reproduce the memory issue. That garbage collector does an excellent job of keeping on top of these minor collections.
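    For reference, the collector switch described above is selected with a startup flag along these lines (HotSpot 6/7 era; the jar name is a placeholder, and CMS has been removed in recent JDKs):

        java -XX:+UseConcMarkSweepGC -jar screencapture.jar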

  • Problem increasing heap size using Java Control Panel

    I am running a web-based bioinformatics package which uses a Java applet as the client application. It mostly works fine, but some large clustering processes don't complete; they just cycle endlessly. The user manual advises increasing heap space "by entering the following command in the Java Control Panel or Java Plug-in dialog: -Xms128M -Xmx256M". I can open the Java Control Panel fine and enter the necessary command in the Java Runtime Parameters box, but after clicking OK and then Apply, the Runtime Parameters box is empty again and there is no increase in the heap size allocation. In other words, the change doesn't take effect.
    I have already tried: adjusting one parameter at a time (max heap size first, then min); changing uppercase to lowercase M (-Xmx256m); restarting the browser after the change; rebooting my PC. None of these has helped.
    Technical details: it's Java 6 standard edition, build 1.6.0_10-rc-b28; the browser is Internet Explorer 7; the operating system is Vista (Service Pack 1); the PC has a 2.4 GHz processor and 2 GB of RAM (so I have plenty available to allocate to Java). I have local admin privileges on my PC.
    Help would be much appreciated, as I really need to complete these cluster analyses. I'm a biologist, not a developer, so ideally I need a solution that doesn't involve major programming, though I'm OK with registry edits and use of the command line. Thanks in advance.

    The failure to retain the parameters is a bug, per hdong in this Java.net thread (http://forums.java.net/jive/thread.jspa?threadID=44540&tstart=0) dated August 14:
    "I did a little more research and found out that the problem only occurs with "-rc" or "-beta" versions of the plugin. When 1.6.0_10 is finally released, the version string will be "1.6.0_10". The plugin works fine to set/get JVM parameters when the version string is "1.6.0_10".
    I am surprised that this issue was not discovered earlier. It was probably because the internal test binaries use "1.6.0_10" as the version string.
    Yeah, it is a bug that will affect many, many developers. It will be fixed as soon as possible."

  • JVM heap size limit under Windows

    Hi,
    I'm looking either for some help with a workaround, or confirmation that the information I've found still reflects the current state of Java.
    Development machine is Win XP Pro, 2 GB RAM.
    The biggest heap I can allocate is about 1.6 GB, and that is not large enough for this app.
    I have a Swing application that
    1) must run on Win XP, 32 bit
    2) must implement an editor (similar to Excel but with fewer features) to handle large csv files (up to about 800 MB).
    3) Strong preference for Java 5, though higher could conceivably be supported.
    Research so far tells me that this is the result of process memory limitations of Windows and the JVM, and that I might be able to squeeze a little more heap with Windows' rebase command, but probably not enough, and I would start running the risk of conflicts with other applications on my users' systems. Ugh.
    I also read about the Windows /3GB switch, but posts say that the available JDKs are not built to be able to use that feature. I haven't had a chance to add memory to test that yet. However, I'm also under the impression that I should be able to allocate a heap larger than physical RAM... except for that process size limit.
    So... my information is basically that I'm stuck with a limit of about 1.6 GB for heap size, regardless of the RAM in my computer.
    Can anyone confirm whether that is still correct, preferably with a pointer to some official reference?
    Or better yet, point me toward a workaround?
    Thanks!
    -tom

    Some bookmarks I have on this topic.
    http://sinewalker.wordpress.com/2007/03/04/32-bit-windows-and-jvm-virtual-memory-limit/
    http://stackoverflow.com/questions/171205/java-maximum-memory-on-windows-xp
    The first link pulled together what I found in lots of bits and pieces elsewhere; nice to have a coherent summary :)
    The second link offered a bit of insight into the JVM that I hadn't seen yet.
    Thanks!

  • Set the heap size

    Hi,
    I am wondering what the default heap size is if I don't add the -ms/-mx options. I set the two options and the following is the output from -verbose:gc. What does it mean?
    Thanks.
    [GC 2598K->2145K(2696K), 0.0035829 secs]
    [GC 2656K->2201K(2824K), 0.0036262 secs]
    [Full GC 2713K->2070K(4028K), 0.1933980 secs]
    [GC 2582K->2229K(4028K), 0.0046565 secs]
    [GC 2741K->2373K(4028K), 0.0061002 secs]
    [GC 2885K->2514K(4028K), 0.0064346 secs]
    [GC 3026K->2656K(4028K), 0.0061670 secs]
    [GC 3168K->2798K(4028K), 0.0054694 secs]
    [GC 3310K->2940K(4028K), 0.0054521 secs]
    [GC 3452K->3080K(4028K), 0.0058119 secs]
    [GC 3592K->3216K(4028K), 0.0059069 secs]
    [GC 3728K->3357K(4028K), 0.0057200 secs]
    [GC 3869K->3496K(4028K), 0.0055719 secs]
    [GC 4008K->3636K(4156K), 0.0057239 secs]
    [Full GC 4148K->3649K(6544K), 0.3064887 secs]
    [GC 4091K->3773K(6544K), 0.0412259 secs]
    [GC 4285K->3912K(6544K), 0.0052336 secs]
    [GC 4424K->4051K(6544K), 0.0055661 secs]
    [GC 4563K->4189K(6544K), 0.0055543 secs]
    [GC 4701K->4326K(6544K), 0.0055703 secs]
    [GC 4838K->4464K(6544K), 0.0055915 secs]
    [GC 4976K->4608K(6544K), 0.0059667 secs]
    [GC 5120K->4746K(6544K), 0.0053261 secs]
    [GC 5258K->4884K(6544K), 0.0053761 secs]
    [GC 5396K->5023K(6544K), 0.0059290 secs]
    [GC 5534K->5159K(6544K), 0.0054320 secs]
    [GC 5671K->5301K(6544K), 0.0114341 secs]
    [GC 5813K->5405K(6544K), 0.0103658 secs]
    [GC 5917K->5492K(6544K), 0.0053194 secs]
    [GC 6003K->5592K(6544K), 0.0107092 secs]
    [GC 6104K->5694K(6544K), 0.0096887 secs]
    [GC 6206K->5786K(6544K), 0.0037949 secs]
    [GC 6298K->5890K(6544K), 0.0101172 secs]
    [GC 6402K->5990K(6544K), 0.0041271 secs]
    [GC 6502K->6097K(6672K), 0.0040678 secs]
    [Full GC 6609K->6191K(10992K), 0.2664733 secs]
    Tue Dec 04 15:22:10 PST 2001:<I> <T3Services> CacheManagerImpl: EMAIL TEMPLATE
    C
    ACHE STARTING
    [GC 6849K->6361K(10992K), 0.0586387 secs]
    Tue Dec 04 15:22:10 PST 2001:<I> <T3Services> CacheManagerImpl: SCHEDULE CACHE
    S
    TARTING
    [GC 7129K->6530K(10992K), 0.0083019 secs]
    [GC 7298K->6678K(10992K), 0.0058533 secs]
    [GC 7446K->6807K(10992K), 0.0052940 secs]
    [GC 7575K->6920K(10992K), 0.0048598 secs]

    I think the default heap size is 16 MB. As for the GC output:
    Lines like [GC 2598K->2145K(2696K), 0.0035829 secs] show collections of objects within the eden area of the heap (short-lived objects): the occupied heap was 2598K before the GC and 2145K after it, the total heap size was 2696K, and the time taken was 0.0035829 secs.
    Lines like [Full GC 2713K->2070K(4028K), 0.1933980 secs] show the details of a full GC. These are the ones to watch out for: they take longer, and the JVM (no matter how many processors) blocks during a full GC, i.e. no server response at all.
    The smaller the heap, the more often a full GC will occur; however, the larger the heap, the longer each full GC will take.
    One of the new options in JDK 1.3.1 is -Xincgc, which enables incremental garbage collection. Overall it takes longer than normal collection, but each individual full GC takes less time, so the server is not hung for as long at any one time.
    Set -Xms (the minimum heap) to the same value as -Xmx (the max heap); this increases performance because the JVM does not have to repeatedly assign more memory to the heap.
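    Putting that together, a typical launch line would look something like this (sizes are illustrative; the server class is a placeholder):

        java -verbose:gc -Xms64m -Xmx64m MyServer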
    Gareth
    "Jen" <[email protected]> wrote:
    >
    Hi,
    I am wondering what's the default heap size if I don't add -ms -mx opation?
    I
    set the two options and the following is the output from -verbosegc.
    what's it
    means?
    Thanks.
    [GC 2598K->2145K(2696K), 0.0035829 secs]
    [GC 2656K->2201K(2824K), 0.0036262 secs]
    [Full GC 2713K->2070K(4028K), 0.1933980 secs]
    [GC 2582K->2229K(4028K), 0.0046565 secs]
    [GC 2741K->2373K(4028K), 0.0061002 secs]
    [GC 2885K->2514K(4028K), 0.0064346 secs]
    [GC 3026K->2656K(4028K), 0.0061670 secs]
    [GC 3168K->2798K(4028K), 0.0054694 secs]
    [GC 3310K->2940K(4028K), 0.0054521 secs]
    [GC 3452K->3080K(4028K), 0.0058119 secs]
    [GC 3592K->3216K(4028K), 0.0059069 secs]
    [GC 3728K->3357K(4028K), 0.0057200 secs]
    [GC 3869K->3496K(4028K), 0.0055719 secs]
    [GC 4008K->3636K(4156K), 0.0057239 secs]
    [Full GC 4148K->3649K(6544K), 0.3064887 secs]
    [GC 4091K->3773K(6544K), 0.0412259 secs]
    [GC 4285K->3912K(6544K), 0.0052336 secs]
    [GC 4424K->4051K(6544K), 0.0055661 secs]
    [GC 4563K->4189K(6544K), 0.0055543 secs]
    [GC 4701K->4326K(6544K), 0.0055703 secs]
    [GC 4838K->4464K(6544K), 0.0055915 secs]
    [GC 4976K->4608K(6544K), 0.0059667 secs]
    [GC 5120K->4746K(6544K), 0.0053261 secs]
    [GC 5258K->4884K(6544K), 0.0053761 secs]
    [GC 5396K->5023K(6544K), 0.0059290 secs]
    [GC 5534K->5159K(6544K), 0.0054320 secs]
    [GC 5671K->5301K(6544K), 0.0114341 secs]
    [GC 5813K->5405K(6544K), 0.0103658 secs]
    [GC 5917K->5492K(6544K), 0.0053194 secs]
    [GC 6003K->5592K(6544K), 0.0107092 secs]
    [GC 6104K->5694K(6544K), 0.0096887 secs]
    [GC 6206K->5786K(6544K), 0.0037949 secs]
    [GC 6298K->5890K(6544K), 0.0101172 secs]
    [GC 6402K->5990K(6544K), 0.0041271 secs]
    [GC 6502K->6097K(6672K), 0.0040678 secs]
    [Full GC 6609K->6191K(10992K), 0.2664733 secs]
    Tue Dec 04 15:22:10 PST 2001:<I> <T3Services> CacheManagerImpl: EMAIL
    TEMPLATE
    C
    ACHE STARTING
    [GC 6849K->6361K(10992K), 0.0586387 secs]
    Tue Dec 04 15:22:10 PST 2001:<I> <T3Services> CacheManagerImpl: SCHEDULE
    CACHE
    S
    TARTING
    [GC 7129K->6530K(10992K), 0.0083019 secs]
    [GC 7298K->6678K(10992K), 0.0058533 secs]
    [GC 7446K->6807K(10992K), 0.0052940 secs]
    [GC 7575K->6920K(10992K), 0.0048598 secs]
