Why is my NetWeaver heap size 3000M? I changed it to 256M, but the change did not take effect.

Dear all,
Here is the log from the "work" directory, file dev_jcontrol.b01:
Bootstrap nodes
-> [00] bootstrap            : D:\usr\sap\FZB\JC00\j2ee\cluster\instance.properties
-> [01] bootstrap_ID5486600  : D:\usr\sap\FZB\JC00\j2ee\cluster\instance.properties
-> [02] bootstrap_ID5486650  : D:\usr\sap\FZB\JC00\j2ee\cluster\instance.properties
Worker nodes
-> [00] ID5486600            : D:\usr\sap\FZB\JC00\j2ee\cluster\instance.properties
-> [01] ID5486650            : D:\usr\sap\FZB\JC00\j2ee\cluster\instance.properties
-> [02] sdm                  : D:\usr\sap\FZB\JC00\SDM\program\config\sdm_jstartup.properties
[Thr 5180] JControlExecuteBootstrap: execute bootstrap process [bootstrap]
[Thr 5180] [Node: bootstrap] java home is set by profile parameter
     Java Home: C:\j2sdk1.4.2_18
[Thr 5180] JStartupICheckFrameworkPackage: can't find framework package D:\usr\sap\FZB\JC00\exe\jvmx.jar
JStartupIReadSection: read node properties [bootstrap]
-> node name          : bootstrap
-> node type          : bootstrap
-> node execute       : yes
-> java path          : C:\j2sdk1.4.2_18
-> java parameters    : -Djco.jarm=1
-> java vm version    : 1.4.2_18-b06
-> java vm vendor     : Java HotSpot(TM) Server VM (Sun Microsystems Inc.)
-> java vm type       : server
-> java vm cpu        : x86
-> heap size          : 3000M
-> root path          : D:\usr\sap\FZB\JC00\j2ee\cluster
-> class path         : .\bootstrap\launcher.jar
-> OS libs path       : D:\usr\sap\FZB\JC00\j2ee\os_libs
-> main class         : com.sap.engine.offline.OfflineToolStart
-> framework class    : com.sap.bc.proj.jstartup.JStartupFramework
-> registr. class     : com.sap.bc.proj.jstartup.JStartupNatives
-> framework path     : D:\usr\sap\FZB\JC00\exe\jstartup.jar;D:\usr\sap\FZB\JC00\exe\jvmx.jar
-> parameters         : com.sap.engine.bootstrap.Bootstrap ./bootstrap ID0054866
-> debuggable         : yes
-> debug mode         : no
-> debug port         : 60000
-> shutdown timeout   : 120000
JControlStartJLaunch: program = D:\usr\sap\FZB\JC00\exe\jlaunch.exe
-> arg[00] = D:\usr\sap\FZB\JC00\exe\jlaunch.exe
-> arg[01] = pf=D:\usr\sap\FZB\SYS\profile\FZB_JC00_tzgl
-> arg[02] = -DSAPINFO=FZB_00_bootstrap
-> arg[03] = -nodeId=-1
-> arg[04] = -file=D:\usr\sap\FZB\JC00\j2ee\cluster\instance.properties
-> arg[05] = -syncSem=JSTARTUP_WAIT_ON_1660
-> arg[06] = -nodeName=bootstrap
-> arg[07] = -jvmOutFile=D:\usr\sap\FZB\JC00\work\jvm_bootstrap.out
-> arg[08] = -stdOutFile=D:\usr\sap\FZB\JC00\work\std_bootstrap.out
-> arg[09] = -locOutFile=D:\usr\sap\FZB\JC00\work\dev_bootstrap
-> arg[10] = -mode=BOOTSTRAP
-> arg[11] = pf=D:\usr\sap\FZB\SYS\profile\FZB_JC00_tzgl
-> lib path = PATH=C:\j2sdk1.4.2_18\jre\bin\server;C:\j2sdk1.4.2_18\jre\bin;D:\oracle\FZB\102\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;;D:\usr\sap\FZB\SYS\exe\uc\NTI386
-> exe path = PATH=C:\j2sdk1.4.2_18\bin;D:\usr\sap\FZB\JC00\j2ee\os_libs;D:\oracle\FZB\102\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;;D:\usr\sap\FZB\SYS\exe\uc\NTI386
[Thr 5180] Mon Oct 18 09:10:09 2010
[Thr 5180] *** ERROR => invalid return code of process [bootstrap] (exitcode = -2) [jstartxx.c   1642]
[Thr 5180] JControlExecuteBootstrap: error executing bootstrap node [bootstrap] (rc = -2)
[Thr 5180] JControlCloseProgram: started (exitcode = -2)
[Thr 5180] JControlCloseProgram: good bye... (exitcode = -2)
Can anyone help? Thanks a lot!

Hello,
It wouldn't matter even if you changed the heap size in the config tool to the desired value. The issue here is that your bootstrap process is failing, so the synchronization from the DB to the file system is not going to happen.
You need to find out what is causing the bootstrap to fail. Please check all the files with "bootstrap" in their names in the work directory.
Also, if the 3000M heap size itself is the cause (for example, the JVM cannot reserve that much contiguous address space), you will have to change the value in the instance.properties file as well as in the configtool. That way the engine can at least start and then pick up the value from the DB during synchronization.
Please paste the excerpt from the relevant bootstrap log so we can help further with what is causing the bootstrap failure.
Regards,
Snehal
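One quick check, independent of NetWeaver: the bootstrap node above runs a 32-bit (x86) JVM, and a 32-bit Windows process typically cannot reserve a 3000M contiguous heap at all. You can test whether the configured JVM accepts that heap by running it directly (the path is taken from the log above; -Xmx and -version are standard flags):

    C:\j2sdk1.4.2_18\bin\java -Xmx3000m -version

If this fails with "Could not reserve enough space for object heap", the value has to be lowered in instance.properties as well as in the configtool, as described above, so that the engine can start and resynchronize.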

Similar Messages

  • Why since I updated Mac OS X v10.7 Lion, I can not change my Principal Password

    I get to the recovery screen, but there is no option to reset the password, only Restore, Reinstall Lion, and Disk Utility. The password still works to start my computer, but I can't change it from Users & Groups.

  • Why does the JVM show a different heap size when Xms=Xmx?

    Hi everybody. I have two instances of OC4J on a cluster of two nodes. The configuration of parameters in opmn.xml is exactly the same for both instances.
    In particular I see strange behavior in the memory heap. I expected that if the Xmx and Xms parameters are both equal to 4096, then the heap size would be fixed at exactly 4 GB and there would be no resizing.
    But monitoring the heaps with JVisualVM I can see that the heap size of the first instance moves between 3.7 and 4 GB, and the heap size of the second node moves around 3.2 GB. The hosts each have 16 GB of RAM, they run Windows 2003 Server R2, and there is no overload on the OS.
    Possibly the behavior is normal and I just have the wrong expectation.
    Here are some parameteres extracted from opmn.xml file:
    -D64
    -Dcom.sun.management.jmxremote
    -Xmx4096M
    -Xms4096M
    -XX:+UseParallelGC
    -verbose:gc
    -XX:+PrintGCTimeStamps
    -XX:+PrintGCDetails
    -Xloggc:D:\oracle\product\10.1.3\soa\j2ee\OC4J_SOA\archivo_loggc.log
    -Dcom.sun.management.jmxremote.port=8004
    -Dcom.sun.management.jmxremote.authenticate=false
    -Dcom.sun.management.jmxremote.ssl=false
    -Djava.security.policy=D:\oracle\product\10.1.3\soa\j2ee\OC4J_SOA\config\java2.policy
    -Djava.awt.headless=true
    -Dhttp.webdir.enable=false
    -Doc4j.userThreads=true
    -Doracle.mdb.fastUndeploy=60
    -Doc4j.formauth.redirect=true
    -Djava.net.preferIPv4Stack=true
    -Dorabpel.home=D:\oracle\product\10.1.3\soa\bpel
    -Xbootclasspath/p:D:\oracle\product\10.1.3\soa\bpel\lib\orabpel-boot.jar
    -Dstdstream.filesize=8
    -Dstdstream.filenumber=30
    -Dhttp.proxySet=false
    -XX:MaxPermSize=768M
    -XX:PermSize=768M
    -Doraesb.home=D:\oracle\product\10.1.3\soa\integration\esb
    -Dorabpel.process.lock.timeout=300
    -Drmi.client.connection.timeout=100
    -Doracle.dms.hunter.sleeptime=600000
    -Doracle.dms.gatherers=300000,2:1800000:3
    -Djavax.net.ssl.keyStore=D:\oracle\product\10.1.3\soa\jdk\jre\lib\security\cacerts
    -Djavax.net.ssl.keyStorePassword=changeit
    -DopmnPingInterval=90
    -Xrs
    -DHTTPClient.socket.staleCheck=true
    -DHTTPClient.disableKeepAlives=true
    -XX:+HeapDumpOnOutOfMemoryError

    Hi Sebastien, thanks for responding.
    In fact I'm looking exactly where you say to determine the heap size.
    What I can't understand is why one instance takes a smaller heap size and fluctuates, while the other instance takes around 4 GB and stays static at that value. I can see no differences in the configuration of the two. I know the used heap will behave differently, but not the total size; I would expect the total size to be fixed and static with Xmx=Xms=4096, but this is not happening.
    I would like to show an image to illustrate what I'm describing.
    About the PermGen, I think there's no problem there: around 80% is in effective use, and that value is stable.
    Best regards
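    If it helps to see the same numbers from inside the JVM rather than in JVisualVM, a minimal sketch with the standard java.lang.management API (the class name HeapReport is only illustrative) prints the committed heap next to the configured maximum, which is exactly the pair that diverges here:

        import java.lang.management.ManagementFactory;
        import java.lang.management.MemoryUsage;

        // Print what the running JVM actually committed versus its configured maximum.
        // Run with -Xms4096M -Xmx4096M on each node to see whether the committed heap
        // really stays at the -Xms value or whether it has been shrunk.
        public class HeapReport {
            public static void main(String[] args) {
                MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
                long mb = 1024 * 1024;
                System.out.println("init      = " + heap.getInit() / mb + " MB");
                System.out.println("committed = " + heap.getCommitted() / mb + " MB");
                System.out.println("max       = " + heap.getMax() / mb + " MB");
            }
        }

    If the committed value printed here is 4096 MB on both nodes while JVisualVM shows less, the discrepancy is in how the figure is being reported or sampled rather than in the JVM settings.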

  • The initial heap size must be less than or equal to the maximum heap size.

    All,
    Please help!!
    I have tested my Application Client Project in WSAD on my PC and it works fine.
    I have 1 GB of RAM on my PC. When I deploy the same app on another XP PC (same as mine but with 512 MB of RAM) I get a heap size error. Here is the exact error:
    Incompatible initial and maximum heap sizes specified:
    initial size: 268435456 bytes, maximum heap size: 267380736 bytes
    The initial heap size must be less than or equal to the maximum heap size.
    The default initial and maximum heap sizes are 4194304 and 267380736 bytes.
    Usage: java [-options] class [args...]
    (to execute a class)
    or java -jar [-options] jarfile [args...]
    (to execute a jar file)
    where options include:
    -cp -classpath <directories and zip/jar files separated by ;>
    set search path for application classes and resources
    -D<name>=<value>
    set a system property
    -verbose[:class|gc|jni]
    enable verbose output
    -version print product version
    -showversion print product version and continue
    -? -help print this help message
    -X print help on non-standard options
    Could not create the Java virtual machine.
    Press any key to continue . . .
    Here is the batch file that runs my app:
    @echo off
    SET appClientEar=C:\corp\apps\mts\jars\MTSClientEAR.ear
    set JVM_ARGS=-Xms256M -Xmx256M
    set CLIENT_PROPS=C:\corp\apps\mts\jars\medicalclient.properties
    set APP_ARGS=
    call C:\bnsf\IBM\WebSphere\AppClient\bin\launchClientBNSF.bat "%JVM_ARGS%" %appClientEar% "-CCpropfile=%CLIENT_PROPS%" %APP_ARGS%
    @pause
    I have changed the Xms and Xmx values in JVM_ARGS to different sizes but I still get the error. Does anyone know what the problem is? Thanks.

    Don't know why, but the "maximum heap size: 267380736 bytes" value is just slightly less than 256*1024*1024, whereas the reported initial size is equal to that.
    Try setting the initial value to 255MB.
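    For what it's worth, the gap is easy to see if you write the two numbers from the error message out (a throwaway snippet, nothing WebSphere-specific):

        // -Xms256M asks for 268,435,456 bytes, which is just above the 267,380,736-byte
        // maximum the 512 MB machine granted, so an initial size below that (e.g. -Xms255M)
        // satisfies the "initial <= maximum" rule.
        public class HeapArithmetic {
            public static void main(String[] args) {
                long requestedInitial = 256L * 1024 * 1024;   // 268435456
                long reportedMax = 267380736L;                // from the error output above
                System.out.println(requestedInitial - reportedMax);  // prints 1054720: over by ~1 MB
            }
        }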

  • Java Heap Sizes  -Xms  -Xmx

    Gurus,
    Could somebody help me understand the difference between the heap size parameters -Xms and -Xmx? I usually change both of them. Another question: is the maximum heap size we can use total RAM / 2?
    Thanks,

    Text from the above link
    ==================
    As part of the Best Practices, we know that we should be setting the -Xms and -Xmx Java command line parameters. What are these settings, and why do they need to be set?
    As Java starts, it creates within the system's memory a Java Virtual Machine (JVM). The JVM is where the complete processing of any Java program takes place. All Java applications (including IQv6) by default allocate and reserve up to 64 MB of memory from the system on which they are running.
    Xms is the initial / minimum Java memory (heap) size within the JVM. Setting the initial memory (heap) size higher can help in a couple of ways. First, it allows garbage collection (GC) to work less, which is more efficient. A higher initial value also means the heap does not have to grow as often as it would from a lower initial size, saving the overhead of the JVM asking the OS for more memory.
    Xmx is the maximum Java memory (heap) size within the Java Virtual Machine (JVM). As the JVM gets closer to fully utilizing the initial memory, it checks the Xmx setting to find out whether it can draw more memory from system resources. If it can, it does so. Allocating contiguous memory to the JVM is a very expensive operation, so as the JVM approaches the initial size it uses aggressive garbage collection (to clean the memory and, if possible, avoid further allocation), increasing the load on the system.
    If the JVM needs memory beyond the value set in Xmx, it will not be able to draw more memory from system resources (even if available) and will run out of memory. Hence, the -Xms and -Xmx parameters should be increased depending on the demand estimate for the system. Ideally both should be set to the same value (the maximum possible per the demand estimate). This ensures that the maximum memory is allocated right at start-up, eliminating the need for extra memory allocation during program execution. We recommend an aggressive maximum memory (heap) size of between 1/2 and 3/4 of physical memory.
    Edited by: oracleSQR on Oct 7, 2009 10:38 AM
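    As a rough, hands-on illustration of the Xms-to-Xmx growth described in the text above (a plain demo class; the flag values are only examples):

        import java.util.ArrayList;
        import java.util.List;

        // Run with e.g. -Xms64m -Xmx256m and watch the committed heap (totalMemory)
        // grow from the initial size toward the maximum as live allocations pile up.
        public class HeapGrowth {
            public static void main(String[] args) {
                List<byte[]> hold = new ArrayList<byte[]>();
                long mb = 1024 * 1024;
                for (int i = 0; i < 100; i++) {
                    hold.add(new byte[(int) mb]);   // keep 1 MB chunks reachable so the heap must grow
                    System.out.println("committed: " + Runtime.getRuntime().totalMemory() / mb
                            + " MB, max: " + Runtime.getRuntime().maxMemory() / mb + " MB");
                }
            }
        }

    With -Xms set equal to -Xmx, the committed figure starts at the maximum, which is exactly the behavior the recommendation above is after.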

  • Max. heap size - possibility to set in jar file ?

    Hi everybody,
    my first question ;-)
    I used Java Web Start for a while to roll out my software to two other colleagues. There was a possibility to set the starting and maximum heap size via the JNLP file. Recently I switched my "deployment process" to a simple jar file. I looked through the jar docs and googled a bit until I found this in the [bug database|http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4682105]
    and especially:
    When jar file is read the maximum heap size is already set.
    It is not feasible to introduce this feature.
    So the only way to do this is to use a script?
    Thanks in advance and enjoy your day,
    Michael

    Thanks for your quick and economical response.
    Another way would be to use a tool like Launch4j, which builds an exe file. Until now I have not been a friend of exe files for Java programs, but since my application is only used in a Windows environment I will have to rethink this opinion.
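    Since the manifest can't do it, the usual workaround is indeed a launcher: a script, a native wrapper such as Launch4j, or a tiny Java main class that respawns the JVM with the desired heap. The sketch below shows the last option under stated assumptions (the class name Relauncher, the 512m figure, and the -Drelaunched guard are all made up for illustration):

        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.List;

        // A jar's Main-Class can relaunch itself with the wanted -Xmx and then hand
        // control to the real application logic. inheritIO() needs Java 7 or newer.
        public class Relauncher {
            private static final long WANTED_MAX = 512L * 1024 * 1024;  // heap we actually want

            public static void main(String[] args) throws Exception {
                if (Runtime.getRuntime().maxMemory() < WANTED_MAX && !Boolean.getBoolean("relaunched")) {
                    String java = System.getProperty("java.home") + "/bin/java";
                    List<String> cmd = new ArrayList<String>();
                    cmd.add(java);
                    cmd.add("-Xmx512m");
                    cmd.add("-Drelaunched=true");                 // guard against an endless respawn loop
                    cmd.add("-cp");
                    cmd.add(System.getProperty("java.class.path"));
                    cmd.add(Relauncher.class.getName());
                    cmd.addAll(Arrays.asList(args));
                    System.exit(new ProcessBuilder(cmd).inheritIO().start().waitFor());
                }
                runApplication(args);                             // placeholder for the real program
            }

            private static void runApplication(String[] args) {
                System.out.println("Max heap now: " + Runtime.getRuntime().maxMemory() / (1024 * 1024) + " MB");
            }
        }

    A native wrapper like Launch4j does essentially the same thing with an exe stub instead of a second JVM start.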

  • What is the best way to verify default heap size in Java

    Hi All,
    What is the best way to verify the default heap size in Java? Does it vary from JVM to JVM? I was reading this article http://javarevisited.blogspot.sg/2011/05/java-heap-space-memory-size-jvm.html , and it says the default size is 128 MB, but when I run the following code:
    public static void main(String args[]) {
        int MB = 1024 * 1024;
        System.out.println(Runtime.getRuntime().totalMemory() / MB);  // committed heap, in MB
    }
    it prints "870", i.e. 870 MB.
    I am a bit confused: what is the best way to verify the default heap size in any JVM?
    Edited by: 938864 on Jun 5, 2012 11:16 PM

    938864 wrote:
    Hi Kayaman,
    Sorry, but I don't agree with you on the verification part. Why can't I verify it? To me "default" means the value when I don't specify -Xms and -Xmx, and by the way I was testing that program on a 32-bit JRE 1.6 on Windows. I am also curious about the significant difference between 128 MB and the 870 MB I saw; do you see anything obviously wrong?
    That spec is outdated. Since Java 6 update 18 (Sun/Oracle implementation) the default maximum heap space is calculated based on total memory availability, but it is never more than 1 GB on 32-bit JVMs / client VMs. On a 64-bit server VM the default can go as high as 32 GB.
    The best way to verify ANYTHING is to consult multiple sources of information, especially those produced by the vendor, not some page you find on the big bad internet. Even Wikipedia is a whole lot better than any random internet site, IMO. That's common sense; I can't believe you put much thought into it if you have to ask in a forum.
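    If you just want a number you can trust, the distinction that trips people up is maxMemory() versus totalMemory(): the first reports the ceiling (the effective -Xmx, whether defaulted or set), the second only the currently committed heap. A minimal check (class name is illustrative):

        // Run without any -Xmx to see the default ceiling your JVM actually chose.
        public class DefaultHeap {
            public static void main(String[] args) {
                long mb = 1024 * 1024;
                System.out.println("max heap (effective -Xmx): " + Runtime.getRuntime().maxMemory() / mb + " MB");
                System.out.println("currently committed      : " + Runtime.getRuntime().totalMemory() / mb + " MB");
            }
        }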

  • Massive memory hemorrhage; heap size goes from about 64 MB to 1.3 GB usage

    **[SOLVED]**
    Note: I posted this on stackoverflow as well, but a solution was not found.
    Here's the problem:
    [1] http://i.stack.imgur.com/sqqtS.png
    As you can see, the memory usage balloons out of control! I've had to add arguments to the JVM to increase the heap size just to avoid out-of-memory errors while I figure out what's going on. Not good!
    ##Basic Application Summary (for context)
    This application is (eventually) going to be used for basic on-screen CV and template-matching type things for automation purposes. I want to achieve as high a frame rate as possible for watching the screen, and handle all of the processing via a series of separate consumer threads.
    I quickly found out that the stock Robot class is really terrible speed-wise, so I opened up the source, took out all of the duplicated effort and wasted overhead, and rebuilt it as my own class called FastRobot.
    ##The Class' Code:
        public class FastRobot {
             private Rectangle screenRect;
             private GraphicsDevice screen;
             private final Toolkit toolkit;
             private final Robot elRoboto;
             private final RobotPeer peer;
             private final Point gdloc;
             private final DirectColorModel screenCapCM;
             private final int[] bandmasks;
             public FastRobot() throws HeadlessException, AWTException {
                  this.screenRect = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
                  this.screen = GraphicsEnvironment.getLocalGraphicsEnvironment().getDefaultScreenDevice();
                  toolkit = Toolkit.getDefaultToolkit();
                  elRoboto = new Robot();
                  peer = ((ComponentFactory)toolkit).createRobot(elRoboto, screen);
                  gdloc = screen.getDefaultConfiguration().getBounds().getLocation();
                  this.screenRect.translate(gdloc.x, gdloc.y);
                  screenCapCM = new DirectColorModel(24,
                            /* red mask */    0x00FF0000,
                            /* green mask */  0x0000FF00,
                            /* blue mask */   0x000000FF);
                  bandmasks = new int[3];
                  bandmasks[0] = screenCapCM.getRedMask();
                  bandmasks[1] = screenCapCM.getGreenMask();
                  bandmasks[2] = screenCapCM.getBlueMask();
                  Toolkit.getDefaultToolkit().sync();
             }
             public void autoResetGraphicsEnv() {
                  this.screenRect = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
                  this.screen = GraphicsEnvironment.getLocalGraphicsEnvironment().getDefaultScreenDevice();
             }
             public void manuallySetGraphicsEnv(Rectangle screenRect, GraphicsDevice screen) {
                  this.screenRect = screenRect;
                  this.screen = screen;
             }
             public BufferedImage createBufferedScreenCapture(int pixels[]) throws HeadlessException, AWTException {
                  // BufferedImage image;
                  DataBufferInt buffer;
                  WritableRaster raster;
                  pixels = peer.getRGBPixels(screenRect);
                  buffer = new DataBufferInt(pixels, pixels.length);
                  raster = Raster.createPackedRaster(buffer, screenRect.width, screenRect.height, screenRect.width, bandmasks, null);
                  return new BufferedImage(screenCapCM, raster, false, null);
             }
             public int[] createArrayScreenCapture() throws HeadlessException, AWTException {
                  return peer.getRGBPixels(screenRect);
             }
             public WritableRaster createRasterScreenCapture(int pixels[]) throws HeadlessException, AWTException {
                  // BufferedImage image;
                  DataBufferInt buffer;
                  WritableRaster raster;
                  pixels = peer.getRGBPixels(screenRect);
                  buffer = new DataBufferInt(pixels, pixels.length);
                  raster = Raster.createPackedRaster(buffer, screenRect.width, screenRect.height, screenRect.width, bandmasks, null);
                  // SunWritableRaster.makeTrackable(buffer);
                  return raster;
             }
        }
    In essence, all I've changed from the original is moving many of the allocations out of the method bodies and making them attributes of the class so they're not created on every call. Doing this actually had a significant effect on frame rate. Even on my severely underpowered laptop, it went from ~4 fps with the stock Robot class to ~30 fps with my FastRobot class.
    ##First Test:
    When I started getting out-of-memory errors in my main program, I set up this very simple test to keep an eye on FastRobot. Note: this is the code which produced the heap profile above.
        public class TestFBot {
             public static void main(String[] args) {
                  try {
                       FastRobot fbot = new FastRobot();
                       double startTime = System.currentTimeMillis();
                       for (int i=0; i < 1000; i++)
                            fbot.createArrayScreenCapture();
                       System.out.println("Time taken: " + (System.currentTimeMillis() - startTime)/1000.);
                  } catch (AWTException e) {
                       e.printStackTrace();
              }
         }
    }
    ##Examined:
    It doesn't do this every time, which is really strange (and frustrating!). In fact, it rarely does it at all with the above code. However, the memory issue becomes easily reproducible if I have multiple for loops back to back.
    #Test 2
        public class TestFBot {
             public static void main(String[] args) {
                  try {
                       FastRobot fbot = new FastRobot();
                       double startTime = System.currentTimeMillis();
                       for (int i=0; i < 1000; i++)
                            fbot.createArrayScreenCapture();
                       System.out.println("Time taken: " + (System.currentTimeMillis() - startTime)/1000.);
                       startTime = System.currentTimeMillis();
                       for (int i=0; i < 500; i++)
                            fbot.createArrayScreenCapture();
                       System.out.println("Time taken: " + (System.currentTimeMillis() - startTime)/1000.);
                       startTime = System.currentTimeMillis();
                       for (int i=0; i < 200; i++)
                            fbot.createArrayScreenCapture();
                       System.out.println("Time taken: " + (System.currentTimeMillis() - startTime)/1000.);
                       startTime = System.currentTimeMillis();
                       for (int i=0; i < 1500; i++)
                            fbot.createArrayScreenCapture();
                       System.out.println("Time taken: " + (System.currentTimeMillis() - startTime)/1000.);
                  } catch (AWTException e) {
                       e.printStackTrace();
              }
         }
    }
    ##Examined
    The out-of-control heap is now reproducible, I'd say, about 80% of the time. I've looked all through the profiler, and the thing of most note (I think) is that the garbage collector seemingly stops right as the fourth and final loop begins.
    The output from the above code gave the following times:
    Time taken: 24.282 //Loop1
    Time taken: 11.294 //Loop2
    Time taken: 7.1 //Loop3
    Time taken: 70.739 //Loop4
    Now, if you sum the first three loops, it adds up to 42.676, which suspiciously corresponds to the exact time that the garbage collector stops, and the memory spikes.
    [2] http://i.stack.imgur.com/fSTOs.png
    Now, this is my first rodeo with profiling, not to mention the first time I've ever even thought about garbage collection -- it was always something that just kind of worked magically in the background -- so, I'm unsure what, if anything, I've found out.
    ##Additional Profile Information
    [3] http://i.stack.imgur.com/ENocy.png
    Augusto suggested looking at the memory profile. There are 1500+ `int[]` that are listed as "unreachable, but not yet collected." These are surely the `int[]` arrays that `peer.getRGBPixels()` creates, but for some reason they're not being destroyed. This additional info, unfortunately, only adds to my confusion, as I'm not sure why the GC wouldn't be collecting them.
    ##Profile using small heap argument -Xmx256m:
    At irreputable's and Hot Licks' suggestion I set the max heap size to something significantly smaller. While this does prevent the 1 GB jump in memory usage, it still doesn't explain why the program balloons to its max heap size upon entering the 4th iteration.
    [4] http://i.stack.imgur.com/bR3NP.png
    As you can see, the exact issue still exists, it's just been made smaller. ;) The issue with this solution is that the program, for some reason, is still eating through all of the memory it can -- there is also a marked change in fps performance between the first iterations, which consume very little memory, and the final iteration, which consumes as much memory as it can.
    The question remains why is it ballooning at all?
    ##Results after hitting "Force Garbage Collection" button:
    At jtahlborn's suggestion, I hit the Force Garbage Collection button. It worked beautifully. It goes from 1 GB of memory usage down to the baseline of 60 MB or so.
    [5] http://i.stack.imgur.com/x4282.png
    So, this seems to be the cure. The question now is, how do I programmatically force the GC to do this?
    ##Results after adding local Peer to function's scope:
    At David Waters' suggestion, I modified the `createArrayCapture()` function so that it holds a local `Peer` object.
    Unfortunately no change in the memory usage pattern.
    [6] http://i.stack.imgur.com/Ky5vb.png
    Still gets huge on the 3rd or 4th iteration.
    #Memory Pool Analysis:
    ###ScreenShots from the different memory pools
    ##All pools:
    [7] http://i.stack.imgur.com/nXXeo.png
    ##Eden Pool:
    [8] http://i.stack.imgur.com/R4ZHG.png
    ##Old Gen:
    [9] http://i.stack.imgur.com/gmfe2.png
    Just about all of the memory usage seems to fall in this pool.
    Note: PS Survivor Space had (apparently) 0 usage
    ##I'm left with several questions:
    (a) does the Garbage Profiler graph mean what I think it means? Or am I confusing correlation with causation? As I said, I'm in an unknown area with these issues.
    (b) If it is the garbage collector... what do I do about it..? Why is it stopping altogether, and then running at a reduced rate for the remainder of the program?
    (c) How do I fix this?
    Does anyone have any idea what's going on here?
    [1]: http://i.stack.imgur.com/sqqtS.png
    [2]: http://i.stack.imgur.com/fSTOs.png
    [3]: http://i.stack.imgur.com/ENocy.png
    [4]: http://i.stack.imgur.com/bR3NP.png
    [5]: http://i.stack.imgur.com/x4282.png
    [6]: http://i.stack.imgur.com/Ky5vb.png
    [7]: http://i.stack.imgur.com/nXXeo.png
    [8]: http://i.stack.imgur.com/R4ZHG.png
    [9]: http://i.stack.imgur.com/gmfe2.png
    Edited by: 991051 on Feb 28, 2013 11:30 AM
    Edited by: 991051 on Feb 28, 2013 11:35 AM
    Edited by: 991051 on Feb 28, 2013 11:36 AM
    Edited by: 991051 on Mar 1, 2013 9:44 AM

    SO came through.
    It turns out this issue was directly related to the garbage collector. The default one, for whatever reason, would fall behind on its collections at points, and the memory would balloon out of control; once allocated, that became the new normal for the GC to operate at.
    Manually setting the GC to ConcurrentMarkSweep solved this issue completely. After numerous tests, I have been unable to reproduce the memory issue. That garbage collector does an excellent job of keeping on top of these minor collections.
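    For anyone finding this later: the collector switch mentioned above is a launch flag, e.g. (the flag is the standard HotSpot option; TestFBot is the poster's own test class):

        java -XX:+UseConcMarkSweepGC -Xmx256m TestFBot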

  • Too low initial heap size

    Hi,
    Any idea why the following line gives a "too low initial heap size" error during server startup?
    $JAVA_HOME/bin/java $JAVA_VM -XX:NewSize=256m -XX:MaxNewSize=256m -XX:SurvivorRatio=6 -Xms2048m -Xmx2048m -Xrs -verbose:gc
    The error only appears when I add the -XX:NewSize=256m -XX:MaxNewSize=256m parameters.
    Thanks in advance.
    Regards,
    Chintan.

    Just a thought: maybe you should use both,
    initial-heap-size="128m" max-heap-size="256m"
    I think the client JVM defaults to a maximum of 64m, so setting the initial heap to 128m would make the initial size bigger than the maximum, which leads to an error.
    Hope that helps...
    Patric

  • Java heap size error Work manager

    Hi,
      We are working with Work Manager 6.0 running on SMP 2.3 SP03.
    The application was working fine, but after the data load in the UAT system, we are getting the error below:
    2014/08/08 00:48:56.142:
    setImportParameters::STORAGE_REF_KEY=ET_COMPLEX_TABLE
    2014/08/08 00:48:56.143:               + User=TS_MAINTMGR
    2014/08/08 00:48:56.143:                 execute::::TS_MAINTMGR::before
    BAPI execute: /SMERP/MM_CTMATPLANT_GET
    2014/08/08 00:55:20.905:             + BackEnd=Java-1
    2014/08/08 00:55:20.905:               Exception while updating complex
    table 'ctpart': JavaBackEndError: JAVA EXCEPTION CAUGHT:
    java.lang.OutOfMemoryError: Java heap space in
    AgentryJavaComplexTableIterator::hasNext at
    AgentryJavaComplexTableIterator.cpp:52
    2014/08/08 00:55:21.099: + Thread=6588
    2014/08/08 00:55:21.099:   + Thread Pool=Server
    2014/08/08 00:55:21.099:     + WorkFunction=00000000020DEF90
    2014/08/08 00:55:21.099:       + User=TS_MAINTMGR
    2014/08/08 00:55:21.099:         + User=TS_MAINTMGR
    2014/08/08 00:55:21.099:           Received Logout Request message 18
    status changed to 'In Progress'
    2014/08/08 00:55:21.105:         Logged out (but not yet cleaned up)
    We tried to increase the min and max heap size from 256 / 512 to 512 / 2048 respectively in Agentry.ini, but we are still getting the same error.
    Please let us know if there is any other parameter which can be used to fine-tune this and get it up and running.
    Thanks for the help.

    Raviraj,
    Are you working with Boopaln? The reason I ask is that he posted the same error (in the same complex table) with the same changes, just with a different user shown.
    If yes, please see this thread: JAVA Heap Size Error in SAP Work Manager

  • Monitor heap size

    How do we monitor Java heap size? Is there any way to check whether we are exceeding the heap size or not?
    Regards,
    N.S

    Hi Swamy
    Start the NWA tool (NetWeaver Administrator), which can be reached from the main page of your J2EE engine.
    Log on with an administrator user and select the "Monitoring" link.
    Then select the "Java System Reports" link.
    Here you should see a graph showing your resource consumption.
    Reward suitable points
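    Besides NWA, if you also want a check from inside the JVM itself, the standard java.lang.management beans report how full each heap pool is; a minimal, NetWeaver-agnostic sketch (class name illustrative):

        import java.lang.management.ManagementFactory;
        import java.lang.management.MemoryPoolMXBean;
        import java.lang.management.MemoryType;
        import java.lang.management.MemoryUsage;

        // Walk the heap memory pools and report how close each one is to its maximum.
        public class HeapWatch {
            public static void main(String[] args) {
                for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                    MemoryUsage u = pool.getUsage();
                    if (pool.getType() == MemoryType.HEAP && u.getMax() > 0) {
                        long pct = 100 * u.getUsed() / u.getMax();
                        System.out.println(pool.getName() + ": " + pct + "% of " + u.getMax() / (1024 * 1024) + " MB");
                    }
                }
            }
        }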

  • Memory usage in Windows Task Manager vs heap size

    hello,
    I have exactly the same problem as in the topic "Windows Task Manager vs. Heap", posted at Dec 10, 2004 5:17 AM, by jorgeHX.
    Here are the symptoms:
    1) My application starts at 20 MB as seen in the Windows Task Manager.
    2) I use a profiler to monitor the heap. The heap always behaves very healthily - the heap size in the profiler increases by a minimum until the GC comes around, so the used heap size drops down again.
    3) However, after doing a relatively memory-consuming operation (a loop of String indexing, pattern matching, etc.), the memory usage in the Windows Task Manager goes up a couple of MB but never drops down.
    4) Then when I manually free the heap (System.gc()), I can see the GC freeing heap memory. However, the memory in the Windows Task Manager does not change, no matter how many times I force garbage collection.
    This is a bad thing - if my application keeps doing that memory-consuming operation again and again, the memory seen in the Windows Task Manager will keep growing to hundreds of MB until Windows alerts "low on virtual memory".
    I have tried everything already. I set every instance to null at the end, I delete every reference, but the memory just keeps increasing! Why?
    Can anyone help me characterize the problem? I am so in the dark!
    Ryan-Chi

    I guess my problem can also be interpreted as:
    "Why doesn't the JVM return memory to the OS?"
    It does, depending on the setting of the -XX:MaxHeapFreeRatio option. "Normal" operation using the default setting does not usually cause memory to be returned to the OS.
    Search for this option term for explanations.
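    For completeness, the option mentioned above is a normal HotSpot launch flag; lowering it makes the JVM more willing to shrink the heap (the values below are only examples, YourApp is a placeholder for the main class, and the figure in Task Manager may still not drop immediately):

        java -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=30 YourApp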

  • How to increase Heap size at runtime

    Hi,
    In my application I sometimes need to parse large XML files, so I need to increase the heap size dynamically. Is there any way to do that?
    Thanks in advance
    Gurpreet Singh

    And unless there is no reason to be swapped out (plenty of physical memory), then it will be swapped out.
    I don't understand this. If there is no reason to swap a page back to the disk - e.g., there is plenty of available physical memory for some other process - then why would the OS bother to swap that page back out to the disk?
    OK, I don't understand the statement either :)
    So obviously I need to rephrase it.
    Page swapping only occurs when needed. If there is plenty of physical memory to hold everything, then swapping does not occur.
    On the other hand, if there is not enough physical memory then pages will be swapped out.
    An application can also tell the OS that it no longer wants to use some memory (pages). But what does this gain the application? After all, if it kept the memory and simply did not use it, it would not affect other applications. That is because the memory is swapped out to the hard drive until needed. So there is no need for an application to return memory to the OS.
    But since virtual memory is finite and there is a performance overhead if there is frequent swapping of pages into physical memory and out from the disk (thrashing), overall performance/throughput degrades.
    Somewhat correct. First, virtual memory has a finite limit, but that limit itself does not affect other applications.
    Virtual memory in and of itself does not cause swapping. If I allocate 1 gig for my application, when my application runs it will not have 1 gig of physical space. Only the memory that is 'touched' is relevant to the OS. So during a time slice it might be the case that only 16k of actual physical memory is used - say 8k for the small piece of code running and 8k for data (an oversimplification, but appropriate for this discussion).
    It is rather unlikely, perhaps even impossible, that the entire 1 gig of memory would be swapped into memory during a single time slice.
    If the process' consumption of memory cannot be streamlined, then that's typically when people go and buy more physical memory, no? Or in some cases, increasing the amount of virtual memory available to the OS can alleviate the problem.
    There is a difference between requesting memory and using memory. Memory that is used ('touched') is the only thing that has to be in physical memory. If you do indeed use a lot of memory in your application then more physical memory will help. But simply requesting more memory doesn't impact physical memory.

  • Mapping set heap sizes to used memory

    Hi all,
    I've got a question about the parameters used to control your java process' heap sizes: "-Xms128m -Xmx256m" etc.
    Let's say I set my min and max to 2Gb, just for a simplistic example.
    If I then look at the linux server my process is running on, I may see a top screen like so:
    PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
    10647 javaprog 20   0 2180m 1.9g  18m S  1.3  3.7   1:57.02 java
    What I'm trying to understand is what relationship - if any - there is between these arguments and the figures I see in top. One thing in particular that I'm interested in is the fact that I occasionally see a RES (or more commonly a VIRT) size higher than the maximum that I have provided to Java. Naively I would assume that therefore there isn't a relationship between the two... but I wouldn't mind someone clarifying this for me.
    Any resources on the matter would be appreciated, and I apologise if this question is outside the realms of this particular subforum.
    Dave.

    Peter Lawrey wrote:
    user5287726 wrote:
    Peter Lawrey wrote:
    It will always reserve this much virtual memory, plus overhead. In terms of resident memory, even the minimum is not guaranteed to be used. The minimum specifies at what point it will make little effort to recycle memory, i.e. it grows to the minimum size freely, but a "Hello World" program still won't use the minimum size.
    No, Linux does not reserve virtual memory. Just Google "Linux memory overcommit". Out of the box, every Linux distro I'm aware of will just keep returning virtual memory to processes until things fall apart and the kernel starts killing processes that are using lots of memory - like your database server, web server, or application-critical JVMs. You know - the very processes you built and deployed the machine to run. Just Google "Linux OOM killer".
    That's not the behaviour I see. When I start a process which busy-waits but doesn't create any objects, the virtual memory size used is based on the -mx option, not on how much is used. Given that virtual memory is largely free, why would an OS only give out virtual memory on an as-needed basis?
    Busy looping process which does nothing.
    In each case the resident size is 16m
    option       virtual size
    -mx100m      368m = 100m + 268m
    -mx250m      517m = 250m + 267m
    -mx500m      769m = 500m + 269m
    -mx1g        1294m = 1024m + 270m
    -mx2g        2321m = 2048m + 273m
    To me it appears that the maximum size you ask for is immediately added to the virtual memory size, even if it's not used (plus an overhead), i.e. the resident size is only 16m.
    Yes, it's only using 16m. And its virtual size may very well be what you see. But that doesn't mean the OS actually has enough RAM + swap to hold what it tells all running processes they can have.
    How much RAM + swap does your machine have? Say it's 4 GB. You can probably run 10 or 20 JVMs simultaneously with the "-mx2g" option. Imagine what happens, though, if they actually try and use that memory - that the OS said they could have, but which doesn't all exist.
    What happens?
    The OOM killer fires up and starts killing processes. Which ones? Gee, it's a "standard election procedure". On a server that's actually doing something, those tend to be the processes doing the real work, like your DBMS or web server or JVM. Or maybe it's your backups that get whacked because they're "newly started" and got promised access to memory that doesn't exist.
    Memory overcommit on a server with availability and reliability requirements more stringent than risible is indefensible.

  • Significance of max heap size mentioned in configtool

    Hi all,
    could anyone please tell me the exact significance of
    max heap size mentioned in configtool in SAP NetWeaver in
    1) Instance_ID
    -servers general
    -message servers and bootstrap
    2) Dispatcher_ID
    -general
    -bootstrap
    3) Server_ID
    -general
    -bootstrap
    Which of these do I change to improve the performance?
    I tried changing the max heap size specified in
    Server_ID
    -general
    but I got the following error while trying to start the server in std_server0.out:
    node name   : server0
    pid         : 3452
    system name : N02
    system nr.  : 01
    started at  : Tue Mar 20 21:53:37 2007
    Reserved 1610612736 (0x60000000) bytes before loading DLLs.
    [Thr 1912] MtxInit: -2 0 0
    Error occurred during initialization of VM
    Could not reserve enough space for object heap
    Regards,
    Namrata.

    Hi,
    The biggest impact on runtime performance comes from adjusting the heap size of the server JVM. This is done in Server_ID -> general. The JVM parameters entered here take precedence over the parameters in Instance_ID -> servers general. The server jobs by far do the most work in the Java engine, so it is very important that the JVM for the server node is tuned to handle the workload. Tuning the server JVM, or even adding additional server nodes, depends on the workload and the amount of work on the system.
    Adjusting the heap for the other JVMs will have much less of an impact than adjusting the heap of the server JVM.
    The dispatcher JVM heap settings may have a slight impact at runtime, but compared to the server jobs the dispatcher does relatively little work. Depending on your situation you may need to tune the dispatcher a little, but my experience has been that the default value for the dispatcher is usually sufficient.
    The values for all of the bootstrap jobs may have an impact on startup time, but they will have no impact at runtime since these jobs go away once the system is up. From what I have seen, the default values for the bootstrap jobs are sufficient.
    I never adjust anything under Instance_ID, I'm not sure what these parameters are used for except for maybe default values when adding server nodes.  Maybe someone out there knows.
    Hope this helps.
    Regards,
    Kolby
