Java memory consumption

Hi,
I want to know how much memory Java is using.
How can I check the memory consumption?
Regards,

Hi,
Please check the link below:
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/d0eaafd5-6ffd-2910-019c-9007a92b392f
Regards,
Yoganand.V
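
If you want the numbers programmatically rather than from a monitoring page, a minimal sketch using the standard java.lang.management API (plain JDK 5+, nothing SAP-specific; the class name is just for illustration) could look like this:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class MemoryReport {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        MemoryUsage nonHeap = memory.getNonHeapMemoryUsage();
        // used = live objects plus not-yet-collected garbage; committed = what the OS has actually given the JVM
        System.out.println("Heap     used=" + heap.getUsed() / 1024 + "K committed=" + heap.getCommitted() / 1024 + "K max=" + heap.getMax() / 1024 + "K");
        System.out.println("Non-heap used=" + nonHeap.getUsed() / 1024 + "K committed=" + nonHeap.getCommitted() / 1024 + "K max=" + nonHeap.getMax() / 1024 + "K");
    }
}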

Similar Messages

  • High Java memory consumption.

    Hello,
    We are developing a solution using Servoy, a database server framework built in Java. We now have two Servoy instances running on Debian Linux servers, one of them 64-bit, both running Java 1.6 update 26. The server crashes frequently with this error log:
    # There is insufficient memory for the Java Runtime Environment to continue.
    # Native memory allocation (malloc) failed to allocate 32776 bytes for Chunk::new
    # Possible reasons:
    # The system is out of physical RAM or swap space
    # In 32 bit mode, the process size limit was hit
    # Possible solutions:
    # Reduce memory load on the system
    # Increase physical memory or swap space
    # Check if swap backing store is full
    # Use 64 bit Java on a 64 bit OS
    # Decrease Java heap size (-Xmx/-Xms)
    # Decrease number of Java threads
    # Decrease Java thread stack sizes (-Xss)
    # Set larger code cache with -XX:ReservedCodeCacheSize=
    # This output file may be truncated or incomplete.
    # Out of Memory Error (allocation.cpp:317), pid=12359, tid=1777961872
    # JRE version: 6.0_26-b03
    # Java VM: Java HotSpot(TM) Server VM (20.1-b02 mixed mode linux-x86 )
    --------------- T H R E A D ---------------
    Current thread (0x092c8400): JavaThread "C2 CompilerThread1" daemon [_thread_in_native, id=12368, stack(0x69f18000,0x69f99000)]
    Stack: [0x69f18000,0x69f99000], sp=0x69f95fe0, free space=503k
    Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
    V [libjvm.so+0x7248b0]
    We checked both the heap memory and the total memory used by the java process, and we noticed a large increase in the total java memory while the heap memory stayed reasonable:
    JVM Information
    java.vm.name=Java HotSpot(TM) Server VM
    java.version=1.6.0_26
    java.vm.info=mixed mode
    java.vm.vendor=Sun Microsystems Inc.
    Operating System Information
    os.name=Linux
    os.version=2.6.29-xs5.5.0.17
    os.arch=i386
    System Information
    Heap memory: allocated=141440K, used=105555K, max=699072K
    None Heap memory: allocated=49312K, used=49018K, max=180224K
    root 16388 2.5 43.0 2829220 1811772 ? Sl 20:11 3:43 java -Djava.awt.headless=true -Xmx768m -Xms128m -XX:MaxPermSize=128m -classpath .:lib/ohj-jewt.jar:lib/MRJAdapter.jar:lib/compat141.ja
    Right now we are running it on the 64-bit machine with -Xmx256m -Xms64m, and the memory usage is the same as on the 32-bit one.
    We also tried different memory configurations, but the result is the same. Java goes up to more than 2GB of memory used, while the heap is about 100MB - 400MB. In the output above you can see 1.8GB used, but that is at startup; after a few hours it grows to more than 2GB - 2.5GB, and then it crashes within at most a few days, sometimes within a few hours or even less.

    Can the profiler see more than the heap? OK, I'll try to profile the server.
    Servoy uses Tomcat 5.
    The OS we run on is Debian. The 64bit machine is this:
    JVM Information
    java.vm.name=Java HotSpot(TM) 64-Bit Server VM
    java.version=1.6.0_26
    java.vm.info=mixed mode
    java.vm.vendor=Sun Microsystems Inc.
    Operating System Information
    os.name=Linux
    os.version=2.6.32-5-amd64
    os.arch=amd64
    and the current memory usage is this:
    System Information
    Heap memory: allocated=253440K, used=228114K, max=253440K
    None Heap memory: allocated=112512K, used=112223K, max=180224K
    , while the java process memory is this:
    root 11629 6.9 44.4 2105640 1829808 hvc0 Sl 11:42 14:18 java -Djava.awt.headless=true -Xmx256m -Xms64m -XX:MaxPermSize=128m -classpath .:lib/ohj-jewt.jar:lib/MRJAdapter.jar:lib/compat141.jar:lib/commons-codec.jar:lib/commons-httpclient.jar:lib/activation.jar:lib/antlr.jar:lib/commons-collections.jar:lib/commons-dbcp.jar:lib/commons-fileupload-1.2.1.jar:lib/commons-io-1.4.jar:lib/commons-logging.jar:lib/commons-pool.jar:lib/dom4j.jar:lib/help.jar:lib/jabsorb.jar:lib/hibernate3.jar:lib/j2db.jar:lib/j2dbdev.jar:lib/jdbc2_0-stdext.jar:lib/jmx.jar:lib/jndi.jar:lib/js.jar:lib/jta.jar:lib/BrowserLauncher2.jar:lib/jug.jar:lib/log4j.jar:lib/mail.jar:lib/ohj-jewt.jar:lib/oracle_ice.jar:lib/server-bootstrap.jar:lib/servlet-api.jar:lib/wicket-extentions.jar:lib/wicket.jar:lib/wicket-calendar.jar:lib/slf4j-api.jar:lib/slf4j-log4j.jar:lib/joda-time.jar:lib/rmitnl.jar:lib/networktnl.jar com.servoy.j2db.server.ApplicationServer
    About another app: I'm confused, because we have another java app that uses much less:
    root      1149  0.3  2.1 843312 86988 ?        Sl   10:16   0:58 java -Xmx512m -Xms128m -cp noaa_server.zip server.WebServer
    and for this one, the heap is this:
    Memory: total: 129892352, free: 114666408, maximum: 518979584
    So yes, this one looks better. It uses 15MB of heap and 80MB of RAM in total.
    But I still don't get it: how can the process memory grow so much while the heap stays so low?
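
    To see where memory outside the heap is going, one thing you could try (a minimal sketch using only the standard JDK management API; it lists JVM-managed pools such as the permanent generation and code cache, but it cannot show native allocations made by the JIT compiler itself, which is what the Chunk::new failure above points to) is to dump the individual memory pools:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;
    import java.lang.management.MemoryUsage;

    public class PoolReport {
        public static void main(String[] args) {
            // One line per pool, e.g. Eden Space, Survivor Space, Tenured Gen, Perm Gen, Code Cache
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                MemoryUsage usage = pool.getUsage();
                if (usage == null) continue;   // pool no longer valid
                System.out.println(pool.getType() + " " + pool.getName()
                        + ": used=" + usage.getUsed() / 1024 + "K"
                        + ", committed=" + usage.getCommitted() / 1024 + "K");
            }
        }
    }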

  • Integration Builder Memory Consumption

    Hello,
    we are experiencing very high memory consumption in the Java IR designer (not the directory), especially when loading normal graphical IDoc to EDI mappings, but also for normal IDoc to IDoc mappings. Examples (RAM on the client side):
    - open normal idoc to idoc mapping: + 40 MB
    - idoc to edi orders d93a: + 70 MB
    - a second idoc to edi orders d93a: + 70 MB
    - Execute those mappings: no additional consumption
    - third edi to edi orders d93a: + 100 MB
    (all mappings in the same namespace)
    After three more mappings, RAM on the client side reaches 580 MB and then a Java heap error occurs. Sometimes it is also an OutOfMemory error, and then you have to terminate the application.
    Obviously the mapping editor is not quite well optimized for RAM usage. It seems not to cache the in/out message structures, or it loads a great deal of dedicated functionality for every mapping.
    So we cannot really call that fun; working is very slow.
    Do you have similar experiences? Are there workarounds? I know the JNLP memory setting parameters, but the problem is the high load of each mapping, not only the overall maximum memory.
    And we are using only graphical mappings, no XSLT!
    We are on XI 3.0 SP 21
    CSY

    Hi,
    Apart from raising the tablespace, see:
    Note 425207 - SAP memory management, current parameter ranges
    You can also configure operation modes to change work processes dynamically using RZ03 and RZ04.
    Please see the link below:
    http://help.sap.com/saphelp_nw04s/helpdata/en/c4/3a7f53505211d189550000e829fbbd/frameset.htm
    You can contact your Basis administrator for the necessary action.

  • How to determine the Java memory consumption

    Hi.
    In our system, NetWeaver 7.1 (on Windows),
    I want to know the Java heap memory consumption.
    We can see the memory consumption in the Windows task manager, but AS Java reserves the heap memory during startup,
    so that figure isn't correct.
    In NWA there are many performance monitors, but I don't know which tool is useful.
    I want to size the memory with the following logic:
    8:00~9:00, 50% load
    The Java memory consumed is 3GB.
    11:00~12:00, 100% load
    The Java memory consumed "may" be 6GB.
    Regards,
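
    For sizing against load over time, one simple option (a minimal sketch using only the plain JDK API; the one-minute interval and console output are arbitrary choices, and the NWA monitors would normally be preferred) is to log the used heap at a fixed interval and correlate it with the load profile:

    import java.lang.management.ManagementFactory;
    import java.util.Date;

    public class HeapLogger {
        public static void main(String[] args) throws InterruptedException {
            while (true) {
                long used = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage().getUsed();
                // Print a timestamped sample once per minute; redirect to a file to build the sizing curve
                System.out.println(new Date() + " heap used: " + used / (1024 * 1024) + " MB");
                Thread.sleep(60 * 1000);
            }
        }
    }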

    I found the directory with java.exe on my XP client. After updating my Path and then typing 'java -versions' I still see a 'java not found' message. No problem though - a README.TXT says that I have JRE 1.1.7B.
    One final question - a co-worker who also has XP just started seeing a pop-up window saying 'Runtime error' when running a Java applet. His java.exe is in a path that includes the sub-directory 'JRE'. On my XP client, java.exe is in a path which includes a 'JRE11' sub-directory. We therefore seem to have different versions of the JRE. Since I don't see the Runtime error when running the same applet, should my co-worker try upgrading his JRE?
    Thank you.

  • Query on Memory consumption of an object

    Hi,
    I am able to get information on the number of instances loaded and the memory occupied by those instances using a heap histogram.
    Class      Instance Count      Total Size
    class [C      10965      557404
    class [B      2690      379634
    class [S      3780      220838
    class java.lang.String      10807      172912
    Is there a way to get more detailed info, such as which class's String objects consume the most memory?
    In other words,
    the memory consumption of String is 172912. Can I have a split-up like:
    String Objects of Class A - 10%
    String Objects of Class B - 90%
    Thanks

    I don't know what profiler you are using but many memory profilers can tell you where the strings are allocated.
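
    As a concrete example, assuming a Sun/Oracle HotSpot JDK 6 (the histogram above looks like jmap output), you can take the histogram and a full heap dump from the command line and then open the dump in a profiler or a tool such as Eclipse Memory Analyzer to see which objects actually hold the String references:

    jmap -histo:live <pid>
    jmap -dump:live,format=b,file=heap.hprof <pid>

    The first command prints the class histogram of live objects; the second writes a binary heap dump for offline analysis.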

  • Memory Consumption: Start A Petition!

    I am using SQL Developer 4.0.0.13 Build MAIN 13.80.  I was praying that SQL Developer 4.0 would no longer use so much memory and, when doing so, slow to a crawl.  But that is not the case.
    Is there a way to start a "petition" to have the SQL Developer team focus on the product's memory usage? This problem has been there for years now, with many posts and no real answer.
    If there isn't a place to start a "petition", let's do something here that Oracle will respond to.
    Thank you

    Yes, at this point (after restarting) SQL Developer is functioning fine.  Windows reports 1+ GB of free memory.  I have 3 worksheets open, all connected to two different DB connections.  Each worksheet has 1 to 3 pinned query results.  My problem is that after working in SQL Developer for a day or so, with perhaps 10 worksheets open across 3 database connections, and having queried large data sets and performed large exports, it becomes unresponsive even after closing worksheets.  It appears to me that it does not clean up after itself.
    I will use Java VisualVM to compare memory consumption and see whether it reports that SQL Developer is releasing memory, but in the end I don't care about that.  I just need a responsive SQL Developer; if I need to close some worksheets at times I can understand doing so, but at this time that does not help.

  • Portal Session Memory Consumption

    Dear All,
    I want to see the memory consumption of user sessions in Portal 7.0, i.e. if a Portal user opens a session, how much memory is consumed by him/her. How can I check this? Is there any default value associated with this?
    Also, will the backend system memory load get added to the portal consumption, or to that specific backend system's memory consumption?
    Thanks in Advance......
    Vinayak

    I'm seeing the exact same thing with our setup (it's essentially the same
    as yours). The WLS5.1 documentation indicates that java objects that
    aren't serializable aren't supported with in-memory replication. My
    testing has indicated that the <web_context>._SERVLET_AUTHENTICATION_
    session value (which is of class type
    weblogic.servlet.security.ServletAuthentication) is not being
    replicated. From what I can tell in the WLS5.1 API Javadocs, this class
    is a subclass of java.lang.Object (it doesn't mention Serializable) as of
    SP9.
    When <web_context>._SERVLET_AUTHENTICATION_ doesn't come up in the
    SECONDARY cluster instance, the <web_context>.SERVICEMANAGER.LOGGED.IN
    gets set to false.
    I'm wondering if WLCS3.2 can only use file or JDBC for failover.
    Either way, if you learn anything more about this, will you keep me
    informed? I'd really appreciate it.
    >
    Hi,
    We have clustered two instances of WLCS in our development environment with
    the properties file configured for "in memory replication" of session data. Both
    instances come up properly and join the cluster properly. But the problem is
    with the in-memory replication: it looks like the session data of the portal is
    not getting replicated.
    We tried with the simplesession.jsp in this cluster and its session data is properly
    replicated.
    So the problem seems to be with the session data put by the Portal
    (and that is the reason why I am posting it here). Every time, the "logged in"
    check fails when the instance serving the request is removed. Is
    there a known bug/patch for the session data serialization of WLCS? We are using
    3.2 with Apache as the proxy.
    Your help is very much appreciated. --
    Greg
    GREGORY K. CRIDER, Emerging Digital Concepts
    Systems Integration/Enterprise Solutions/Web & Telephony Integration
    (e-mail) gcrider@[NO_SPAM]EmergingDigital.com
    (web) http://www.EmergingDigital.com
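
    The usual rule for in-memory session replication is that everything placed in the HttpSession must implement java.io.Serializable (and so must everything it references). A minimal sketch of a replication-friendly session attribute follows; the class, field and attribute names are made up for illustration and are not from the WLCS/Portal code discussed above:

    import java.io.Serializable;

    // Every field must itself be serializable, or be marked transient,
    // for in-memory session replication to copy the attribute to the secondary.
    public class LoginInfo implements Serializable {
        private static final long serialVersionUID = 1L;

        private final String userName;
        private final long loginTime;

        public LoginInfo(String userName, long loginTime) {
            this.userName = userName;
            this.loginTime = loginTime;
        }

        public String getUserName() { return userName; }
        public long getLoginTime()  { return loginTime; }
    }

    // Usage inside a servlet or JSP (javax.servlet API):
    //   request.getSession().setAttribute("loginInfo", new LoginInfo(user, System.currentTimeMillis()));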

  • Memory Consumption in Multidimensional Arrays

    Hi,
    I've noticed that the memory consumption of multidimensional arrays in Java is sometimes far above what one could expect for the amount of data being stored. For example, here is a simple program which stores a table containing only integers and reports the memory consumption after it is filled:
    import java.util.Random;

    public class ArrayMemory {
        public static void main(String[] args) {
            int tableSize = 1000000;
            int noFields = 10;
            Random rnd3 = new Random();
            int arr[][] = new int[tableSize][noFields];
            for (int i = 0; i < tableSize; i++) {
                for (int j = 0; j < noFields; j++) {
                    arr[i][j] = rnd3.nextInt(100);
                }
            }
            Runtime.getRuntime().gc();
            Runtime.getRuntime().gc();
            Runtime.getRuntime().gc();
            // Ensures the table's data is still referenced
            System.out.println(arr[rnd3.nextInt(arr.length)]);
            long totalMemory = Runtime.getRuntime().totalMemory();
            long usedMemory = totalMemory - Runtime.getRuntime().freeMemory();
            System.out.println("Total Memory: " + totalMemory / (1024.0 * 1024) + " MB.");
            System.out.println("Used Memory: " + usedMemory / (1024.0 * 1024) + " MB.");
        }
    }
    Output:
    Total Memory: 866.1875 MB.
    Used Memory: 62.124053955078125 MB.
    In this case the memory consumption was around 20MB above the expected 38MB required for storing 10M integers. The interesting thing is that the memory consumption varies when the numbers of rows and columns are changed, even though the total amount of items is kept fixed (see below):
    Rows:100; Cols:100000 -> Used Memory: 43,05 MB
    Rows:1000; Cols:10000 -> Used Memory: 43,07 MB
    Rows:10000; Cols:1000 -> Used Memory: 43,24 MB
    Rows:100000; Cols:100 -> Used Memory: 44,96 MB
    Rows:1000000; Cols:10 -> Used Memory: 62,15 MB
    Rows:10000000; Cols:1 -> Used Memory: 192,15 MB
    Any ideas about the reasons for that behavior?
    Thanks,
    Marcelo

    mrnm wrote:
    In this case the memory consumption was around 20MB above the expected 38MB required for storing 10M integers.
    That's only the expected value if you assume that a 2D array of ints is nothing more than a bunch of ints lined up end to end. This is not the case. A "2D" array in Java is really just a plain ol' array whose component type is "reference to array".
    The interesting thing is that the memory consumption varies when the numbers of rows and columns are changed, even though the total amount of items is kept fixed (see below):
    That's because, e.g., new int[200][100] creates 200 array objects (and references to each of them), each of which holds 100 ints, while new int[100][200] creates 100 array objects (and references to each of them), each of which holds 200 ints.
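
    If the per-row overhead matters, one common workaround (a minimal sketch, not from the original posts) is to store the table in a single one-dimensional array and compute the index yourself, so there is only one array object regardless of how the rows and columns are split:

    public class FlatTable {
        private final int[] data;
        private final int cols;

        public FlatTable(int rows, int cols) {
            // One array object instead of 'rows' separate row arrays plus the outer array of references
            this.data = new int[rows * cols];
            this.cols = cols;
        }

        public int get(int row, int col)             { return data[row * cols + col]; }
        public void set(int row, int col, int value) { data[row * cols + col] = value; }
    }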

  • Memory consumption is more with oracle database compared to Sybase

    Hi,
    We are executing the same Java source code against a Sybase backend and against an Oracle database, but with the Oracle database it consumes more memory than with Sybase.
    We are currently using 11g R2 with the ojdbc6.jar driver.
    Can you please provide information on how to optimize the memory consumption when using Oracle?
    Thanks,
    Nagaraj

    user12569889 wrote:
    We are executing the same Java source code against a Sybase backend and against an Oracle database, but with the Oracle database it consumes more memory than with Sybase.
    That is not saying anything at all.
    What memory in Oracle? Shared pool? Buffer cache? PGA? UGA? Library cache? Something else?
    What memory in Sybase did you compare it to?
    What type of client-server connection to Oracle was made? Dedicated or shared?
    What commands/method were used to determine the memory consumption in Oracle and then Sybase?
    What serves as the baseline for comparison?
    Comparing product A with product B is a COMPLEX thing to do. And IMO, beyond the abilities of the majority of developers - as they lack the technical expertise to extract usable metrics and correctly compare these between products that can work VERY differently.
    And unless you can provide technical details to back up your claim that you are observing that "+Memory consumption is more with oracle database compared to Sybase+", I would say that you have no idea what you are actually observing and are in no position to deduce that Oracle consumes more memory.

  • Idle server 1 meg/second memory consumption

    WL 6.1 sp2
    Solaris 2.8
    JDK 1.3.1_02 -server, 1 gig heap
    I noticed lately through the console that an idle WL server, with our
    application deployed but no client/sockets connecting to it other than the
    web console, is consuming memory at about 1 meg/second. Is this the norm?
    Seems a bit voracious to me....
    Gene

    Damn it, I added one too many zeros! I'm looking at the performance graph
    in /console and thought I was seeing 300 megs, instead of 30 megs! So in
    actuality my idle server is consuming .1 meg/sec, which seems a bit more
    like it... Can I make a feature request, have the console show
    comma-separators for those big numbers? :-)
    Actually this is a lead-in to my real question: on production we have a
    couple of servers that are true memory hogs; they go through 1 gig GC's in
    20 seconds! This is causing a lot of issues, obviously: GC occurs every 20
    seconds, with 3-5 second GC time. Hence we have an inordinate amount of
    "downtime", even if we cluster 2-3 servers, each experiencing this kind of
    memory consumption. Here's what I want:
    1) I'd like to capture a daily and weekly graph of GC frequency and duration.
    The java applet console does not record such history, so I'm wondering if
    there is an MBean I can use, or has someone written one that does this?
    2) How can I profile my 50 SLSB ejbs to find which one(s) are the memory
    hogs? They aren't leaking, because GC always brings them back down to
    baseline; they just suck up a lot of memory! I've tried using -Xrunhprof
    and JProbe, but both slow the server down to a point where it's unbearable to
    run (on dev; I don't do this on production :-)). Do you guys have other tricks
    to find memory consumers?
    Thanks,
    Gene
    "Rob Woollen" <[email protected]> wrote in message
    news:[email protected]..
    1 MB/s does seem like a lot for an idle server. You might try taking
    some thread dumps when it's supposedly idle and see what it's up to.
    -- Rob
    Gene Chuang wrote:
    WL 6.1 sp2
    Solaris 2.8
    JDK 1.3.1_02 -server, 1 gig heap
    I noticed lately through the console that an idle WL server, with our
    application deployed but no client/sockets connecting to it other than
    the
    web console, is consuming memory at about 1 meg/second. Is this the norm?
    Seems a bit voracious to me....
    Gene
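
    On question 1), newer JVMs expose GC counters through JMX that you can poll and record yourself (a minimal sketch using the standard java.lang.management API; note this API only appeared in Java 5, so it would not apply to the JDK 1.3.1 setup described above and is mentioned only as the general approach):

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    public class GcStats {
        public static void main(String[] args) {
            // One bean per collector (e.g. young and old generation); counts and times are cumulative
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.println(gc.getName()
                        + ": collections=" + gc.getCollectionCount()
                        + ", total time=" + gc.getCollectionTime() + " ms");
            }
        }
    }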

  • Measure thread's memory consumption

    Hello.
    Nice to see you here.
    Please tell me, is there any way to measure a thread's memory consumption?
    I'm trying to tune an application server.
    In total, the physical server, running AIX 5.3 on Power, has 8GB of memory.
    For example, I allocate 1408m for the Application Server Java heap (-Xms1408m -Xmx1408m).
    Then I tune the Application Server thread pools (web threads, EJB threads, EJB alarm threads, etc.).
    As I understand it, Java threads live in native memory, not in the Java heap.
    I would like to know how to measure the size of a thread in native memory.
    After that I can set the sizes of the thread pools (to avoid OutOfMemory in native memory or in the heap).

    holod wrote:
    As I understand it, Java threads live in native memory, not in the Java heap.
    The data the JVM uses to manage threads may live in the JVM's own memory outside of the Java heap. However, that data will be a very tiny fraction of what the JVM is consuming (unless you have a huge number of threads, which are all using very, very little memory).
    I would like to know how to measure the size of a thread in native memory.
    It will almost certainly be so small as to not matter.
    After that I can set the sizes of the thread pools (to avoid OutOfMemory in native memory or in the heap).
    No, that will almost certainly not help at all.
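
    For what it's worth, the main per-thread native cost is usually the thread stack, which you can influence globally with -Xss or per thread via the Thread constructor that takes a stack size hint (a minimal sketch; the JVM is free to ignore or round the requested size, so treat the result as an estimate, not a guarantee):

    public class StackSizeDemo {
        public static void main(String[] args) throws InterruptedException {
            Runnable work = new Runnable() {
                public void run() {
                    // Placeholder for the real work done by a pool thread
                }
            };
            // Request a 256 KB stack for this thread; the rough native footprint of a pool
            // is approximately (number of threads) x (stack size), plus JVM bookkeeping.
            Thread t = new Thread(null, work, "worker-1", 256 * 1024);
            t.start();
            t.join();
        }
    }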

  • Measuring Swing GUI memory consumption

    Hi community,
    First, I apologize for posting this here and not in the Swing forum; I think this is not purely a GUI question, but a more generic one.
    I have a GUI that seems to consume a lot of memory (i.e. opening several screens causes an OutOfMemoryError, and the data received over the network amounts to a very small fraction (<5%) of the total memory shown by Task Manager).
    I would like to take these screens apart to discover which of their parts are heavy so that I can concentrate my efforts on optimizing them. I cannot use ObjectOutputStream serialization since it seems there are lots of classes that do not implement Serializable, so I need some other way of determining component memory sizes. I believe that XMLEncoder serialization will lead nowhere because it contains a lot of text describing properties that cannot be accurately measured.
    What can I use to measure the memory consumption for a given component tree?
    Thank you for your time
    Mike

    Swing related questions should be...
    Umm, have you considered the use of a profiler? I kind of doubt it is the GUI per se; more likely a model or something else is leaking memory. I mean, there are lots of fairly complex GUIs out there that don't have these issues, so I suspect there is a very deep or complex structure somewhere that is eating all of the memory.
    But in short, I think running it in some sort of profiler would be good, because I also suspect this may be one of those issues where looking at each component in isolation does not help; it's only when the whole thing is used at runtime that the complexity comes into play and it dies.
    Try http://www.manageability.org/blog/stuff/open-source-profilers-for-java for a start.
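
    If you still want a rough per-screen number before reaching for a profiler, a crude approach (a minimal sketch, with the usual caveat that gc() is only a request and the figures are approximate) is to compare the used heap before and after building a component tree:

    import javax.swing.JFrame;
    import javax.swing.JTable;

    public class ScreenFootprint {
        private static long usedHeap() {
            Runtime rt = Runtime.getRuntime();
            rt.gc();                       // best effort only; results are approximate
            return rt.totalMemory() - rt.freeMemory();
        }

        public static void main(String[] args) {
            long before = usedHeap();

            // Build the screen under test (here just a frame with a large table as a stand-in)
            JFrame frame = new JFrame("Screen under test");
            frame.getContentPane().add(new JTable(1000, 20));
            frame.pack();

            long after = usedHeap();
            System.out.println("Approximate screen footprint: " + (after - before) / 1024 + " KB");
            frame.dispose();
        }
    }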

  • SetTransform() memory consumption

    Hi,
    I'm currently working on an application which needs to move a sphere very quickly. The position is calculated every 40 ms and set via the TransformGroup.setTransform() method. This raises a problem, as this statement rapidly consumes huge amounts of memory, especially when called at short time intervals.
    I also tested it with the Java 3D example program "AWTInteraction" by simply putting the statement in a for loop and watching the memory climb:
    for (int i = 0; i < 1e+6; i++)
         objTrans.setTransform(trans);
    The result is a java.lang.OutOfMemoryError.
    Is there a solution or workaround for this kind of problem?
    Any hints appreciated. (Project has to be finished on Monday. It's really urgent.)
    TIA
    Erich

    Erich,
    I've never had any memory problems when dealing with Transforms. Is it perhaps possible that the leakage results from another instruction? So far I have only seen working with textures be responsible for high memory consumption. In order to find the problem, I would check three things in your code:
    1. Are you working with textures, and if yes, how does the program behave if the textures are omitted?
    2. Are there any "new" instructions inside your loop? If yes, try to reuse objects and eliminate all "new" commands inside the loop (see the sketch after this post).
    3. Did you consider the mantra of doing all changes to a live scene graph within a behavior (and from the behavior scheduler)? It seems unusual to me to change transforms inside a loop.
    Good luck,
    Oliver
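
    On point 2, a minimal sketch of what object reuse might look like here (assuming the usual Java 3D classes Transform3D and Vector3d; the class, field and method names are made up for illustration):

    import javax.media.j3d.Transform3D;
    import javax.media.j3d.TransformGroup;
    import javax.vecmath.Vector3d;

    public class SphereMover {
        private final TransformGroup objTrans;
        // Allocated once and reused every frame, so the 40 ms update creates no garbage
        private final Transform3D trans = new Transform3D();
        private final Vector3d position = new Vector3d();

        public SphereMover(TransformGroup objTrans) {
            this.objTrans = objTrans;
        }

        // Called every 40 ms with the newly calculated position
        public void moveTo(double x, double y, double z) {
            position.set(x, y, z);
            trans.setTranslation(position);
            objTrans.setTransform(trans);
        }
    }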

  • J2EE Engine memory consumption (Usage)

    Dear experts,
    We have a J2EE Engine (a Java stack). When I run routine monitoring via the browser and look at the memory consumption, I am met with a chart that shows a sawtooth-like graph. Every hour from 19:00 to 02:00 the memory consumption rises by approx. 200 MB; after 7 hours, all of a sudden, the memory consumption drops back down to the normal idle level and starts over again. I can confirm that at that time there are no users on the system.
    My question is: what is the J2EE Engine doing, since there is no user activity? Is the J2EE Engine running some system applications? Is it filling up the log files and then emptying (storing) them?
    I hope some of the experts can answer.
    I just want to understand what's going on in the system. If there is some documentation/white paper on how to interpret/read the J2EE monitor, I would be grateful if you could drop the information or a link here.
    Mike

    Hi Mike
    To understand what exactly is being executed in Java engine, I'd suggest you perform Thread dump analysis as per:
    http://help.sap.com/saphelp_smehp1/helpdata/en/10/3ca29d9ace4b68ac324d217ba7833f/frameset.htm
    Generally, 4-5 thread dumps are triggered at intervals of 20-25 seconds for better analysis.
    Here are some useful SAP notes related to thread dump analysis:
    710154 - How to create a thread dump for the J2EE Engine 6.40/7.0
    1020246 - Thread Dump Viewer for SAP Java Engine
    742395 - Analyzing High CPU usage by the J2EE Engine
    Kind regards,
    Ved
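
    As a complement to the SAP tools above, on a Java 5+ VM you can also capture a quick in-process snapshot of all thread stacks yourself (a minimal sketch; for the J2EE Engine the SAP-documented thread dump procedure remains the authoritative method):

    import java.util.Map;

    public class ThreadDump {
        public static void main(String[] args) {
            // Snapshot of every live thread and its current stack trace
            Map<Thread, StackTraceElement[]> dump = Thread.getAllStackTraces();
            for (Map.Entry<Thread, StackTraceElement[]> entry : dump.entrySet()) {
                Thread t = entry.getKey();
                System.out.println("\"" + t.getName() + "\" state=" + t.getState());
                for (StackTraceElement frame : entry.getValue()) {
                    System.out.println("    at " + frame);
                }
            }
        }
    }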

  • How to measure memory consumption during unit tests?

    Hello,
    I'm looking for simple tools to automate measurement of overall memory consumption during some memory-sensitive unit tests.
    I would like to apply this when running a batch of some test suite targeting tests that exercise memory-sensitive operations.
    The intent is to verify that a modification of the code in this area does not introduce a regression (an increase) in memory consumption.
    I would include it in the nightly build and monitor the evolution of a summary figure (a-ha, the "userAccount" test suite consumed 615Mb last night, compared to 500Mb the night before... what did we check in yesterday?).
    Running on Win32, the system-level info on memory consumed is known not to be accurate.
    Using perfmon is more accurate but seems like overkill - plus it's difficult to automate; you have to attach it to an existing process...
    I've looked at the hprof agent included in Sun's JDK, but it seems to be targeted at investigating problems rather than discovering them. In particular, there isn't a "summary line" of the total memory consumed...
    What tools do you use/suggest?

    However this requires manual code in my unit test
    classes themselves, e.g. in my setUp/tearDown
    methods.
    I was expecting something more orthogonal to the
    tests, that I could activate or not depending on the
    purpose of the test.
    Some IDEs display memory usage and execution time for each test/group of tests.
    If I don't have another option, OK, I'll wire my own
    pre/post memory counting, maybe using AOP, and will
    activate memory measurement only when needed.
    If you need to check the memory used, I would do this.
    You can do the same thing with AOP. Unless you are already using an AOP library, I doubt it is worth the additional effort.
    Have you actually used your suggestion to automate
    memory consumption measurement as part of daily builds?
    Yes, but I have less than a dozen tests which fail if the memory consumption is significantly different.
    I have more tests which fail if the execution time is significantly different.
    Rather than use the setUp()/tearDown() approach, I use the testMethod() as a wrapper for the real test and add the check inside it. This is useful as different tests will use different amounts of memory.
    Plus, I did not understand your suggestion, can you elaborate?
    - I first assumed you meant freeMemory(), which, as
    you suggest, is not accurate, since it returns "an
    approximation of [available memory]"
    freeMemory gives the free memory out of the total. The total can change, so you need to take total - free as the memory used.
    - I re-read it and now assume you do mean
    totalMemory(), which unfortunately will grow only
    when more memory than the initial heap setting is
    needed.
    More memory is needed when more memory is used. Unless your test uses a significant amount of memory, there is no way to measure it reliably; i.e. if a GC is performed during a test, the test can appear to use less memory than it consumes.
    - Eventually, I may need to include calls to
    System.gc(), but I seem to remember it is best-effort
    only (endless discussion) and may not help accuracy.
    If you do a System.gc() followed by a Thread.yield() at the start, it can improve things marginally.
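
    A minimal sketch of the wrapper approach described above, using the JUnit 3 TestCase style of that era (the threshold and the tested operation are made-up placeholders; as discussed, the figure is only meaningful for tests that allocate a significant amount of memory):

    import junit.framework.TestCase;

    public class UserAccountMemoryTest extends TestCase {
        private static long usedMemory() {
            Runtime rt = Runtime.getRuntime();
            rt.gc();
            Thread.yield();                       // marginally improves the best-effort gc() figure
            return rt.totalMemory() - rt.freeMemory();
        }

        public void testMemoryConsumption() {
            long before = usedMemory();

            byte[][] data = exerciseMemorySensitiveOperation();   // placeholder for the real test body

            long after = usedMemory();
            long usedMb = (after - before) / (1024 * 1024);
            // Fail the nightly build if consumption regresses past an agreed threshold
            assertTrue("Used " + usedMb + " MB, expected at most 500 MB", usedMb <= 500);
            assertNotNull(data);                  // keep the allocation referenced until after measuring
        }

        private byte[][] exerciseMemorySensitiveOperation() {
            return new byte[100][1024 * 1024];    // stand-in allocation of roughly 100 MB
        }
    }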
