Java memory usage

Dear All,
We have a fairly large Java application, and we tried the code below towards the end of a run. It keeps showing around 2 MB. Is that considered high, and is this the right way to measure it? We have closed all the Statements and ResultSets immediately after using them. How can we tell whether there is a memory leak if the value goes above 2 MB?
Runtime runtime = Runtime.getRuntime();
// Request a garbage collection (gc() is only a hint; the JVM may ignore it)
runtime.gc();
// Calculate the used memory
long memory = runtime.totalMemory() - runtime.freeMemory();
System.out.println("Used memory in bytes: " + memory);
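For the leak question specifically, a single snapshot says very little; what matters is whether used memory keeps climbing across many samples. A minimal sketch of that idea using the same Runtime API (the sample count and one-minute interval are arbitrary choices for illustration):

public class MemoryTrend {
    public static void main(String[] args) throws InterruptedException {
        Runtime rt = Runtime.getRuntime();
        for (int i = 0; i < 10; i++) {
            rt.gc(); // only a request; the JVM is free to ignore it
            long used = rt.totalMemory() - rt.freeMemory();
            System.out.println("Sample " + i + ": " + used + " bytes used");
            Thread.sleep(60000); // arbitrary one-minute sampling interval
        }
    }
}

If the post-gc() readings trend steadily upward while the workload stays flat, that is a leak signal worth chasing with a profiler.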

935486 wrote:
I have googled and found many profiling tools, but then again you said there are no hard rules, so they won't really be much help. In my case, now that we have closed all the ResultSets and Statements properly, that shouldn't be a big worry any more, I guess, right? Previously the usage kept growing, which I suspect was because resources were not being closed properly.

At least you won't have to worry about leaks being introduced through database stuff, no.

935486 wrote:
Anything else to be done to avoid an OutOfMemoryError? Thank you.

Write proper code. Which means you have to write code with care. And have it reviewed by other people; that's something people don't do enough anymore - let other people sniff through your stuff. They're bound to find things you just overlook.
@EJP, morgalr and Dr. Clap (and all other regulars who happen to read this but have not replied yet) - I'm not alone in thinking that about code reviewing, am I?

Similar Messages

  • Analysing SAP Java Memory Usage in Unix/Linux

    Hi,
    I need to analyze the SAP Java memory usage on a Unix/Linux machine (NW 7.0).
    Please guide me through the commands and steps - the complete procedure.
    Based on that I need to decide whether to create a new server node or increase the heap size.
    Thanks in advance

    Hi,
    Do you have performance problems?
    How many CPU's are in the server?
    Did you check Log Configuration for delays or errors?
    Did you tune any existing parameters?
    You should add nodes only if there are performance problems. You may think of adding one node to start with.
    Proper number of server nodes within an instance:
    #ServerNodes = availableMemory / (JavaHeap + PermSpace + Stack)
    You can calculate the server nodes based on below formula
    No. of server Node = (RAM you want to assign or available RAM in GB)/2.5 ============> for 64-bit system
    No. of server Node = (RAM you want to assign or available RAM in GB)/1.5 ============> for 32-bit system
    Hence, as per the above discussion, if we want to go with 5 server nodes:
    5 = RAM / 2.5 (assuming you are on a 64-bit platform)
    i.e. RAM = 12.5 GB
    2) Configure the JVM heap according to Note 723909 and Note 1008311 - Recommended Settings for NW 7.0 >= SR2 for the AIX JVM (J9)

  • Diagnostics Workload Analysis - Java Memory Usage gives BI query input

    Dears
    I have set up diagnostics (aka Root Cause Analysis) at a customer site, and I'm bumping into a problem: on the Java Memory Usage tab in Workload Analysis, only the BI query input screen is shown.
    Sol Man 7.0 EHP1 SPS20 (ST component SP19)
    Wily Introscope 8.2.3.5
    Introscope Agent 8.2.3.5
    Diagnostics Agent 7.20
    When I click on the check button there I get the following:
    Value "JAVA MEMORY USAGE" for variable "E2E Metric Type Variable" is invalid
    I already checked multiple SAP Notes, e.g. implementing the latest EWA/EA/WA XML file for this Solution Manager stack version.
    I already reactivated BI content using report CCMS_BI_SETUP_E2E and it gave no errors.
    The content is getting filled in Wily Introscope, extractors on Solution Manager are running and capturing records (>0).
    Did anyone come across this issue already?
    ERROR MESSAGE:
    Diagnosis
    Characteristic value "JAVA MEMORY USAGE" is not valid for variable E2E Metric Type Variable.
    Procedure
    Enter a valid value for the characteristic. The value help, for example, provides you with suggestions. If no information is available here, then perhaps no characteristic values exist for the characteristic.
    If the variable for 0DATE or 0CALDAY has been created and is being used as a key date for a hierarchy, check whether the hierarchies used are valid for this characteristic. The same is valid for variables that refer to the hierarchy version.
      Notification Number BRAIN 643 
    Kind regards
    Tom

    Hello Paul
    I checked the guide earlier on today. I also asked someone with more BI knowledge to take a look with me, but it seems the Root Cause Analysis data fetching isn't really the same as what is normally done in BI with cubes, so it's hard to determine why the data fetch is not working properly.
    The extractors are running fine, I couldn't find any more errors in the diagnostics agent log files (in debug mode) and I don't find other errors for the SAP system.
    I tried reactivating the BI content but it seems to be fine (no errors). I reran the managed system setup which also works.
    One of the problems I did notice is that the managed SAP systems are half virtualized. They aren't completely virtualized (no separate IP address), but they do use virtual hostnames, which also causes issues with Root Cause Analysis: I cannot install only one agent, because I cannot assign it to the managed systems, and when I install one agent per SAP system I get the message that there are already agents on the same host reporting to the Enterprise Manager. I don't know if this could influence the data extractor; I doubt it, because in Wily the data is being fetched fine.
    The only thing that is not working at the moment is the Workload Analysis - Java Memory Analysis tab. It holds the key performance indicators for the J2EE engine (garbage collection %). I can see them in Wily Introscope, where they are available and fine.
    When I looked at the InfoCubes together with a BI team member, it seemed the InfoCube for daily performance stats was being filled properly (checked through RSA1), but the InfoCube for hourly stats wasn't. This is also visible in Workload Analysis: data from yesterday displays fine in the overview, for example, but data from an hour ago doesn't.
    I do have to state that the Solution Manager doesn't meet the prerequisites (post-processing notes are not present after the SP-stack update, SLD content is not up to date), but I could not push those changes through on short notice, as the Solution Manager is also used for other scenarios and it would be too disruptive at the moment.
    If I can't fix it I will have to explain to the customer why some parts are not working and request them to handle the missing items so the prerequisites are met.
    One of the notes I found described a similar issue and noted it could be caused by an old XML file structure, so I updated the XML file to the latest version.
    Strangely enough, SAPOscol also threw errors in the beginning. I had the Host Agent installed and updated, and the SAPOscol service was running properly under the Host Agent. The diagnostics agent tries to start SAPOscol in /usr/sap/<SID>/SMDA<instance number>/exe, which does not hold the SAPOscol executable. I suppose that's a bug on SAP's side? After copying SAPOscol from the Host Agent to the SMD Agent's location, the error disappeared. The agent still tries to start SAPOscol, but then notices it is already running and writes in the log that SAPOscol is running properly and a startup is not necessary.
    To me it comes down to the point where I have little faith in the scenario if the Solution Manager and the managed SAP systems are not maintained and 100% up to date. I could open a customer message, but the first advice will be to patch the Solution Manager and meet the prerequisites.
    Another pain point is that if the managed SAP systems are not 100% correct in transaction SMSY, it causes heaps of issues. Changing an SAP system there isn't a fast operation, as it can be included in numerous logical components, projects and scenarios (ChaRM), and changing it disrupts daily work.
    All in all I have mixed feelings about the implementation. I want to deliver a fully working scenario, but that's near impossible when the prerequisites are not met. I hope the customer will still be happy with what is delivered.
    I sure do hope some of these issues are handled in Solution Manager 7.1. I will certainly mail my concerns to the development team and hope they can handle some or all of them.
    Kind regards
    Tom

  • High Eden Java Memory Usage/Garbage Collection

    Hi,
    I am trying to make sure that my ColdFusion server is optimized to the max, and to find out what the normal limits are.
    Basically it looks like at times my servers can run slow, but it is possible that this is caused by a very old, bloated code base.
    JRun can sometimes show very high CPU usage, so I purchased Fusion Reactor to see what is going on under the hood.
    Here are my current Java settings (running v6u24):
    java.args=-server -Xmx4096m -Xms4096m -XX:MaxPermSize=256m -XX:PermSize=256m -Dsun.rmi.dgc.client.gcInterval=600000 -Dsun.rmi.dgc.server.gcInterval=600000 -Dsun.io.useCanonCaches=false -XX:+UseParallelGC -Xbatch ........
    With regards Memory, the only memory that seems to be running a lot of Garbage Collection is the Eden Memory Space. It climbs to nearly 1.2GB in total just under every minute at which time it looks like GC kicks in and the usage drops to about 100MB.
    Survivor memory grows to about 80-100MB over the space of 10 minutes but drops to 0 after the scheduled full GC runs. Old Gen memory fluctuates between 225MB and 350MB with small steps (~50MB) up or down when full GC runs every 10 minutes.
    I had the heap set to 2GB in total initially, giving about 600MB to the Eden space. When I looked at the graphs from Fusion Reactor I could see there was (minor) garbage collection about 2-3 times a minute whenever memory usage maxed out the entire 600MB, which seemed a high frequency to my untrained eye. I then upped the memory to 4GB in total (~1.2GB automatically given to the Eden space) to see the difference, and saw that GC happened 1-2 times per minute.
    Is it normal in Coldfusion that the Eden memory would grow so quickly and have garbage collection run so often? i.e do these graphs look normal?
    Also should I somehow redistribute the memory available to give the Eden memory more since it seems to be where all the action is?
    Any other advice for performance improvements would be much appreciated.
    Note: These graphs are not from a period where jrun had high CPU.
    Here are the graphs:
    PS Eden Space Graph
    PS Survivor Space Graph
    PS Old Gen Graph
    PS Perm Gen Graph
    Heap Memory Graph
    Heap/Non Heap Memory Graph
    CPU Graph
    Request Average Execution Time Graph
    Request Activity Graph
    Code Cache Graph

    Hi,
    >Is it normal in Coldfusion that the Eden memory would grow so quickly and have garbage collection run so often?
    Yes, it is normal to garbage collect Eden often. That is a minor garbage collection.
    >Also should I somehow redistribute the memory available to give the Eden memory more since it seems to be where all the action is?
    Sometimes it is good to set Eden (Eden and its two Survivor Spaces combined make up the New or Young Generation part of the JVM heap) to a smaller size. I know what you're thinking - why make it smaller when I want to make it bigger? Give less a try (sometimes less = more, and bigger != better) and monitor the situation. I like to use the -Xmn switch; some sources say to use other methods. Perhaps you could try java.args=-server -Xmx4096m -Xms4096m -Xmn172m etc. I had better mention: make a backup copy of jvm.config before applying changes. Having said that, you now also know how to set the size bigger if you want.
    I think the JVM is perhaps making some poor decisions in sizing the heap. With Eden growing to 1GB and then being evacuated, not many objects are surviving, and therefore few are being promoted to the Old Generation. This ultimately means objects will need to be loaded into Eden again later rather than being referenced in the Old Generation part of the heap. That adds up to poor performance.
    >Any other advice for performance improvements would be much appreciated.
    You are using the Parallel garbage collector. Perhaps you could enable it to run multi-threaded, reducing the duration of the garbage collections, with JVM args such as -XX:+UseParallelGC -XX:ParallelGCThreads=N, where N = CPU cores (e.g. quad core = 4).
    HTH, Carl.
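    Putting the two suggestions above together, a jvm.config line might look like the sketch below; the -Xmn and thread-count values are illustrative starting points taken from this discussion, not recommendations, and jvm.config should be backed up first:
    java.args=-server -Xmx4096m -Xms4096m -Xmn172m -XX:MaxPermSize=256m -XX:PermSize=256m -XX:+UseParallelGC -XX:ParallelGCThreads=4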

  • How to monitor java memory usage in enterprise manager

    I am running sqlplus to execute a SQL package, which generates XML.
    When processing 2000+ rows, it gives an out-of-memory error.
    Where in Enterprise Manager can I see this memory usage?
    Thanks.

    Hello,
    it depends a little on what you want to do. If you use pure CCMS monitoring with the table ALTRAMONI, you get average response time per instance, and you only get new measurements once the status changes from green to yellow or red.
    In order to get continuous measurements you should look into Business Process Monitoring and the different documentation under https://service.sap.com/bpm --> Media Library --> Technical Information. E.g. the PDF Setup Guide for Application Monitoring describes this "newer" dialog performance monitor. You probably have to click on the calendar sheet in the Media Library to also see older documents. As Business Process Monitoring integrates with BW (there is also a BI Setup Guide in the Media Library), you can get trend lines there. This BW integration also integrates back with SL Reporting.
    Some guidance for SL Reporting is probably given under https://service.sap.com/rkt-solman, but I am not 100% sure.
    Best Regards
    Volker

  • Java memory usage/management

    Hi,
    I am trying to give my program as much memory as possible. I have a machine with over 6GB of RAM. However, when I try
    java -Xmx4096M
    which is significantly less than what's available, I get this error:
    Invalid maximum heap size: -Xmx4096M
    Could not create the Java virtual machine.
    How come?
    Secondly, let's say I try a smaller number, like 3.8 GB:
    java -Xmx3800M
    and things work perfectly.
    Now, if I try 3.9 GB:
    java -Xmx3900M
    I get this error:
    Exception in thread "main" java.lang.OutOfMemoryError
            at java.util.zip.ZipFile.open(Native Method)
            at java.util.zip.ZipFile.<init>(ZipFile.java:112)
            at java.util.jar.JarFile.<init>(JarFile.java:127)
            at java.util.jar.JarFile.<init>(JarFile.java:65)
            at sun.misc.URLClassPath$JarLoader.getJarFile(URLClassPath.java:575)
            at sun.misc.URLClassPath$JarLoader.<init>(URLClassPath.java:542)
            at sun.misc.URLClassPath$3.run(URLClassPath.java:320)
            at java.security.AccessController.doPrivileged(Native Method)
            at sun.misc.URLClassPath.getLoader(URLClassPath.java:309)
            at sun.misc.URLClassPath.getLoader(URLClassPath.java:286)
            at sun.misc.URLClassPath.getResource(URLClassPath.java:156)
            at java.net.URLClassLoader$1.run(URLClassLoader.java:191)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:187)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:289)
            at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:274)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:235)
            at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:302)
    How come?
    I don't mind the fact that Java can't give me 4096M. I can live with that. But what I would like to know is why I get this last error and also what is the MAXIMUM that I can use for the -Xmx option? I have some serious testing to do and I can't just write that "-Xmx3900M didn't seem to work and so I went with -Xmx3800." People will not like that sentence.
    Thanks,
    Jeff

    >My goal right now is to make sure that I let Java have as much memory as the JVM can handle. It seems like giving it 3800M is ok, but I would like to know if there is a good reason that 3900M doesn't work.
    Being able to set the heap size to 3800M does not mean that your JVM actually uses all of 3800M.
    On 32-bit processor machines, the largest contiguous memory address space the operating system can allocate to a process is about 1.8GB. Because of this, the maximum heap size can only be set up to roughly that figure. On 64-bit processor machines the limit does not apply, as they have a much larger memory address space. So you need to check which processor you have: if it is a 32-bit processor, then no matter whether you set 3800M or 7800M, the effective limit stays around 1.8GB.
    Thanks guys for the help so far.
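    One way to turn "-Xmx3900M didn't seem to work" into a number is to ask the running JVM what it actually accepted; a minimal sketch using the standard Runtime API:
    public class MaxHeap {
        public static void main(String[] args) {
            // maxMemory() reports the largest heap this JVM will attempt to use,
            // which may be less than the -Xmx value that was requested.
            long max = Runtime.getRuntime().maxMemory();
            System.out.println("Max heap: " + (max / (1024 * 1024)) + " MB");
        }
    }
    Running this under each candidate -Xmx setting documents the effective limit instead of guessing from failures.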

  • Java Memory Monitoring in Web Application

    Hi All,
    Request you to please review the below mentioned suggestion and provide inputs:
    Over the years, I have been involved in several projects involving web development in J2EE. Java memory usage is an issue common to all of them.
    Following are some of the questions that a developer comes across regarding Java memory:
    Memory Usage Statistics.
    Trending of Memory statistics.
    Memory Leak.
    Performance optimization in case memory leaks occur.
    When it comes to answering the above, the most common suggestion is to enable heap dumps and analyze them using a heap analyzer tool. However, there are times and projects where these options are not approved, and the developer is asked to review the code again and again. This is frustrating for someone who has just joined a maintenance project, where reading through all the code is not feasible. It happened to me, and I did the following to solve some of my problems, and eventually all of them.
    Instead of analyzing heap dumps, I decided to do the following:
    Add a request filter to my J2EE application.
    Add the following log statements in the filter:
    URL fired.
    Runtime.getRuntime().freeMemory()
    Runtime.getRuntime().totalMemory()
    Runtime.getRuntime().maxMemory()
    Gather data from daily app usage and build some trending statistics.
    Not only were we able to decide on an optimum memory setting for our server, we were able to detect leaks as well. However, I agree that detecting leaks wasn't as simple as it is with dedicated tools, considering the debugging effort involved. It is not a conventional approach, but it comes in handy when projects don't want to incur extra costs and need to keep production systems undisturbed. A sketch of such a filter follows.
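    A minimal sketch of the filter idea above, assuming the javax.servlet API; the class name is illustrative and the log call is stubbed with System.out:
    import java.io.IOException;
    import javax.servlet.*;
    import javax.servlet.http.HttpServletRequest;

    // Logs the request URL plus JVM memory figures for every request,
    // so memory trends can later be charted straight from the logs.
    public class MemoryLogFilter implements Filter {
        public void init(FilterConfig config) {}
        public void destroy() {}
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            Runtime rt = Runtime.getRuntime();
            String url = ((HttpServletRequest) req).getRequestURI();
            System.out.println(url
                    + " free=" + rt.freeMemory()
                    + " total=" + rt.totalMemory()
                    + " max=" + rt.maxMemory());
            chain.doFilter(req, res);
        }
    }
    Registered in web.xml like any other filter, it adds one log line per request at negligible cost.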

    Hi,
    Few questions!
    1> Have you tweaked your jvm?
    2> What are the values given for -Xms and -Xmx?
    3> What is the size of -XX:MaxPermSize?
    4> How much RAM is available on the system where you have deployed your app?
    5> Are you using pre-complied JSPs for faster response?
    6> Which JDK are you using?
    7> Have you tried using latest version of Tomcat?
    8> If these don't help, use a profiler to find the leak (JProfiler, a JVMTI-based agent, YourKit profiler, etc.)
    I hope answering these questions would help you :)
    njoy!

  • How to specify maximum memory usage for Java VM in Tomcat?

    Does anyone know how to set the memory usage of the Java VM, such as the "-Xmx256m" parameter, in Tomcat?
    I'm using Tomcat 3.x with the Apache web server on the Sun Solaris platform. I already tried adding the following line to tomcat.properties:
    wrapper.bin.parameters=-Xmx512m
    However, this doesn't seem to work. So what do I do if my servlet consumes a large amount of memory that exceeds the default 64MB memory boundary of the Java VM?
    Any idea will be appreciated.
    Haohua

    With some help we found the fix. You have to set -Xms and -Xmx at installation time when you install Tomcat 4.x as a service, since services do not read system variables. Go to the command prompt in Windows and, in the directory where tomcat.exe resides, type "tomcat.exe /?". You will see jvm_options as part of the installation options. Put the -Xms and -Xmx variables in the proper place during the install and it will work.
    If you can't uninstall and reinstall, you can apply this registry hack that dfortae sent to me on another thread.
    =-=-=-=-=-=
    You can change the parameters in the Windows registry. If your service name is "Apache Tomcat" The location is:
    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Apache Tomcat\Parameters
    Change the JVM Option Count value to the new value with the number of parameters it will now have. In my case, I added two parameters -Xms100m and -Xmx256m and it was 3 before so I bumped it to 5.
    Then I created two more String values. I called the first one I added 'JVM Option Number 4' and the second 'JVM Option Number 5'. Then I set the value inside each. The first one I set to '-Xms100m' and the second I set to '-Xmx256m'. Then I restarted Tomcat and observed when I did big processing the memory limit was now 256 MB, so it worked. Hope this helps!
    =-=-=-=-=
    I tried this and it worked. I did not want to have to go through the whole reinstallation process, so this was best for me.
    Thanks to all who helped on this.
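    For concreteness, a sketch of how the registry values described above end up looking, using the example service name and parameters from this thread:
    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Apache Tomcat\Parameters
        JVM Option Count    = 5
        JVM Option Number 4 = -Xms100m
        JVM Option Number 5 = -Xmx256m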

  • MS Minimize and memory usage

    Hi,
    I noticed that if you minimize your DOS window, the memory usage (as shown in Windows Task Manager) drops a lot. E.g. I have Tomcat running (15 MB usage); when I minimize the Tomcat DOS window, the memory usage shows something like 160 kB. Any ideas? What is the real usage?
    Regards,
    Dieter Janssen

    When you minimize the window, Java will run a garbage collection. But also be aware that memory usage in Windows Task Manager is shown in both the Mem Usage column and the VM Size column (you may need to add the latter to your Task Manager).
    I hope this will help you!
    /Michael

  • Get CPU and memory usage

    Hi!
    I would like to know if there is any way of getting system CPU and memory usage using Java code.

    >I want to get the system CPU and memory usage using the performance monitor DLL, perfctrs.dll, but access this data from the Java language.
    Then you should create a wrapper DLL between your Java code and perfctrs.dll, and convert the data from the DLL's format into a format your Java code can use.
    So, that is the next question - how do I create such a wrapper DLL, and how does perfctrs.dll work?
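    As an aside, some of these figures are reachable without a wrapper DLL on Sun/Oracle JVMs; a minimal sketch using the com.sun.management extension of the standard management API (an assumption - this extension is not available on every JVM):
    import java.lang.management.ManagementFactory;

    public class SystemStats {
        public static void main(String[] args) {
            // Cast to the Sun-specific extension of the standard OperatingSystemMXBean;
            // this cast fails on JVMs that do not ship com.sun.management.
            com.sun.management.OperatingSystemMXBean os =
                    (com.sun.management.OperatingSystemMXBean)
                            ManagementFactory.getOperatingSystemMXBean();
            System.out.println("Total physical memory: " + os.getTotalPhysicalMemorySize());
            System.out.println("Free physical memory:  " + os.getFreePhysicalMemorySize());
            System.out.println("Process CPU time (ns): " + os.getProcessCpuTime());
        }
    }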

  • Very high memory usage..possible memory leak?  Solaris 10 8/07 x64

    Hi,
    I noticed yesterday that my machine was becoming increasingly slow, where once it was pretty snappy. It's a Compaq SR5250NX with 1GB of RAM. Upon checking vmstat, I noticed that the "Free" column was ~191MB. Now, the only applications I had open were FireFox 2.0.11, GAIM, and StarOffice. I closed all of them, and the number reported in the "Free" column became approximately 195MB. "Pagefile" was about 5.5x that size. There were no other applications running and it's a single user machine, so I was the only one logged in. System uptime: 9 days.
    I logged out and back in to see if that had an effect. It did not. Rebooting, obviously, fixed it. Now with only FireFox, GAIM, and a terminal open, vmstat reports "Free" as ~450MB. I've noticed that if I run vmstat every few seconds, the "Free" total keeps going down. Example:
    unknown% vmstat
    kthr      memory            page            disk          faults      cpu
    r b w   swap  free  re  mf pi po fr de sr cd s0 s1 s2   in   sy   cs us sy id
    0 0 0 870888 450220  9  27 10  0  1  0  8  2 -0 -0 -0  595 1193  569 72  1 28
    unknown% vmstat
    kthr      memory            page            disk          faults      cpu
    r b w   swap  free  re  mf pi po fr de sr cd s0 s1 s2   in   sy   cs us sy id
    0 0 0 870880 450204  9  27 10  0  1  0  8  2 -0 -0 -0  596 1193  569 72  1 28
    unknown% vmstat
    kthr      memory            page            disk          faults      cpu
    r b w   swap  free  re  mf pi po fr de sr cd s0 s1 s2   in   sy   cs us sy id
    0 0 0 870828 450092  9  27 10  0  1  0  8  2 -0 -0 -0  596 1193  570 71  1 28
    unknown%
    Output of prstat -u Kendall (my username) is as follows:
       PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
      2026 Kendall   124M   70M sleep   59    0   0:01:47 1.4% firefox-bin/7
      1093 Kendall    85M   77M sleep   59    0   0:07:15 1.1% Xsun/1
      1802 Kendall    60M   15M sleep   59    0   0:00:08 0.1% gnome-terminal/2
      1301 Kendall    93M   23M sleep   49    0   0:00:30 0.1% java/14
      1259 Kendall    53M   15M sleep   49    0   0:00:32 0.1% gaim/1
      2133 Kendall  3312K 2740K cpu1    59    0   0:00:00 0.0% prstat/1
      1276 Kendall    51M   12M sleep   59    0   0:00:11 0.0% gnome-netstatus/1
      1247 Kendall    46M   10M sleep   59    0   0:00:06 0.0% metacity/1
      1274 Kendall    51M   13M sleep   59    0   0:00:05 0.0% wnck-applet/1
      1249 Kendall    56M   17M sleep   59    0   0:00:07 0.0% gnome-panel/1
      1278 Kendall    48M 9240K sleep   59    0   0:00:05 0.0% mixer_applet2/1
      1245 Kendall  9092K 3844K sleep   59    0   0:00:00 0.0% gnome-smproxy/1
      1227 Kendall  8244K 4444K sleep   59    0   0:00:01 0.0% xscreensaver/1
      1201 Kendall  4252K 1664K sleep   59    0   0:00:00 0.0% sdt_shell/1
      1217 Kendall    55M   16M sleep   59    0   0:00:00 0.0% gnome-session/1
       779 Kendall    47M 2208K sleep   59    0   0:00:00 0.0% gnome-volcheck/1
       746 Kendall  5660K 3660K sleep   59    0   0:00:00 0.0% bonobo-activati/1
      1270 Kendall    49M   10M sleep   49    0   0:00:00 0.0% clock-applet/1
      1280 Kendall    47M 8904K sleep   59    0   0:00:00 0.0% notification-ar/1
      1199 Kendall  2928K  884K sleep   59    0   0:00:00 0.0% dsdm/1
      1262 Kendall    47M 2268K sleep   59    0   0:00:00 0.0% gnome-volcheck/1
    Total: 37 processes, 62 lwps, load averages: 0.11, 0.98, 1.63
    System uptime is 9 hours, 48 minutes. I'm just wondering why the memory usage seems so high while doing... nothing. It's obviously a real problem, as the machine became very slow when vmstat was showing 195MB free.
    Any tips, tricks, advice, on which way to go with this?
    Thanks!

    Apologies for the delayed reply. School has been keeping me nice and busy.
    Anyway, here is the output of prstat -Z:
       PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
      2040 Kendall      144M   76M sleep   59    0   0:04:26 2.0% firefox-bin/10
    28809 Kendall     201M  193M sleep   59    0   0:42:30 1.9% Xsun/1
      2083 Kendall      186M   89M sleep   49    0   0:02:31 1.2% java/58
      2260 Kendall       59M   14M sleep   59    0   0:00:00 1.0% gnome-terminal/2
      2050 Kendall       63M   21M sleep   49    0   0:01:35 0.6% realplay.bin/4
      2265 Kendall     3344K 2780K cpu1    59    0   0:00:00 0.2% prstat/1
    29513 Kendall     71M   33M sleep   39    0   0:07:25 0.2% gaim/1
    28967 Kendall     56M   18M sleep   59    0   0:00:24 0.1% gnome-panel/1
    29060 Kendall     93M   24M sleep   49    0   0:02:58 0.1% java/14
    28994 Kendall     51M   13M sleep   59    0   0:00:23 0.1% wnck-applet/1
    28965 Kendall     49M   14M sleep   59    0   0:00:33 0.0% metacity/1
       649 noaccess   164M   46M sleep   59    0   0:09:54 0.0% java/23
    28996 Kendall     51M   12M sleep   59    0   0:00:50 0.0% gnome-netstatus/1
      2264 Kendall    1352K  972K sleep   59    0   0:00:00 0.0% csh/1
    28963 Kendall  9100K 3792K sleep   59    0   0:00:03 0.0% gnome-smproxy/1
    ZONEID    NPROC  SWAP   RSS MEMORY      TIME  CPU ZONE
         0           80          655M  738M    73%       1:18:40 7.7% global
    Total: 80 processes, 322 lwps, load averages: 0.27, 0.27, 0.22
    Sorry about the bad formatting, it's copied from the terminal.
    In any event, we can see that FireFox is sucking up 145MB (??!?!!? crazy...), XSun 200MB, and java 190MB. I'm running Java Desktop System (Release 3), so I assume that accounts for the high memory usage of the java process. But XSun, 200MB?
    Is this normal and I just need to toss another gig in, or what?
    Thanks

  • Excessively high memory usage by Tomcat in NT

    We are facing a problem of excessive memory usage by our servlets (which call JNI functions). Memory usage touches 50+ MB on Windows 2000 Server but soon drops back to around 30 MB. This does not happen on NT: there, the memory keeps increasing until, at some point, Tomcat occupies all the available memory.
    We take care to garbage collect objects frequently (by calling Runtime.gc()). Is there any other way we can get some control over the memory usage?
    Do we have to install a patch for NT (if something like this is available)?
    NT Server:
    PIII 500 MHz, 256MB RAM, 40GB HDD
    2000 Server:
    PIII 700 MHz, 128MB RAM, 20GB HDD
    Tomcat:
    Version 3.2.1
    JDK:
    Sun's JDK 1.3
    Any suggestions, pointers are appreciated.
    Thanks
    Manish
    [email protected]

    Hi, Manish:
    I've solved a memory leak problem in JNI once. Memory leaks may result from inappropriate memory access in the C or C++ library.
    My previous problem was: I obtained a C byte array from a Java byte array via GetByteArrayElements, but never called ReleaseByteArrayElements before the C function returned. That prevented the Java byte array from ever being garbage collected. After inserting ReleaseByteArrayElements before the return, it didn't run out of memory again. Similar problems can happen when a useless Java object's reference count stays non-zero because NewGlobalRef was called without a matching DeleteGlobalRef.
    However, I've faced another problem in JNI (see http://forum.java.sun.com/thread.jsp?forum=52&thread=212275). I wonder whether the JVM releases or resets memory allocated in a C library. I hope that is not the case.
    Regards,
    David Wu

  • Extremely high memory usage after upgrading to Firefox 12

    After I upgraded to Firefox 12, I began frequently experiencing Firefox memory usage ballooning extremely high (2-3GB after a few minutes of light browsing). Sometimes it will drop back down to a more reasonable level (a few hundred MB), sometimes it hangs (presumably while trying to garbage collect everything), and sometimes it crashes. Usually the crashing thread cannot be determined, but when it can be, it is in the garbage collection code ( [https://crash-stats.mozilla.com/report/list?signature=js%3A%3Agc%3A%3AMarkChildren%28JSTracer*%2C+js%3A%3Atypes%3A%3ATypeObject*%29] ).
    I was able to capture an about:memory report when Firefox had gotten to about 1.5 GB and have attached an image.
    A couple of things I've tried. I have lots of tabs open (though the Don't load tabs until selected option is enabled), so I copied my profile, kept all my extensions enabled, but closed all my tabs. I then left a page open to http://news.google.com/ and it ran fine for several days, whereas my original profile crashes multiple times a day.
    I also tried disabling most of my extensions, leaving the following extensions that I refuse to browse without:
    Adblock Plus
    BetterPrivacy
    NoScript
    PasswordMaker
    Perspectives
    Priv3
    However, the problem still happened in that case.
    Don't know if any of this helps or not. I'm looking forward to trying Firefox 13 when it comes out.

    Hello, thanks for reporting back with detailed information.
    From a brief look at your extensions I don't recognize any known (to me, at least) memory-leaking ones. In recent weeks there have also been reports about the Java plugin causing high memory consumption in combination with Firefox 12 - in case you have it installed (Firefox > Add-ons > Plugins), try disabling it for a few days and test how Firefox behaves with many tabs.
    And although probably not related to the memory problems, you could update your graphics driver to get better results with hardware acceleration in Firefox - this is the latest driver by Intel for your model and OS:
    [http://downloadcenter.intel.com/Detail_Desc.aspx?agr=Y&DwnldID=21135&lang=eng&OSVersion=Windows%207%20%2864-bit%29*&DownloadType=Drivers]

  • High memory usage on JDBC 10.2.0.1.0 driver on Prepared/Callable Statements

    We are observing high memory usage for each callable/prepared statement when using the 10.2.0.1.0 JDBC driver. The char[] in oracle/jdbc/driver/T4CVarcharAccessor is allotted 64K to 320K and grows with usage. The problem is worse with the 10.1.0.2 driver, which allotted 720K of memory for each statement right at the start.
    We found this by doing a JVM heap dump and analyzing the heap dump using IBM's heap analyser. Here is a snapshot of the heap dump for this object:
    321,240 [216] 11 oracle/jdbc/driver/T4CVarcharAccessor 0x72752968
    - 320,616 [320,616] 0 char[] 0x72761028
    - 216 [216] 0 short[] 0x727527d8
    - 72 [32] 1 java/lang/String 0x727530a0
    - 24 [24] 0 int[] 0x72752938
    - 24 [24] 0 int[] 0x72752948
    - 24 [24] 0 int[] 0x72752958
    - 16 [16] 0 bool[] 0x72752928
    - 16 [16] 0 byte[] 0x727528b0
    - 16 [16] 0 bool[] 0x72752918
    - 10,336 [88] 15 oracle/jdbc/driver/T4CMAREngine 0x712e7128
    - 1,544 [1,032] 79 oracle/jdbc/driver/T4CPreparedStatement 0x72754c58
    It is repeated many times for each prepared/callable stmt call.
    Details of our platform is:
    Database - Oracle Database 10g Release 10.2.0.1.0 - 64bit Production
    JDBC Driver - Oracle Database 10g Release 2 (10.2.0.1.0) JDBC Drivers
    JDK - [Classic VM, Version 1.4.2] from [IBM Corporation]
    Our callable statements are not using any of the Oracle caching facilities. Each is a simple call statement with OUT parameters, and the statement is closed after each execution. However, we implement our own connection pooling and do not close the connection after each statement.
    Is there a workaround to this? Would appreciate any feedback.

    What is happening is that each new CallableStatement you create allocates a new char[]. I would strongly encourage you to use the implicit statement cache if at all possible. That way instead of creating a new statement each time with a new char[] you will get an already existing statement and reuse the existing char[]. Closing a statement releases the char[] so if you really are closing the statements the char[]s should be GC'd.
    Douglas
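    A minimal sketch of the implicit statement cache suggestion, assuming the connection is (or unwraps to) an oracle.jdbc.OracleConnection; the cache size and procedure name are illustrative only:
    import java.sql.CallableStatement;
    import java.sql.SQLException;
    import java.sql.Types;
    import oracle.jdbc.OracleConnection;

    public class StatementCacheExample {
        // With implicit caching on, close() returns the statement (and its
        // char[] buffers) to the cache instead of discarding them.
        static void configure(OracleConnection conn) throws SQLException {
            conn.setImplicitCachingEnabled(true);
            conn.setStatementCacheSize(20); // illustrative size
        }

        static void call(OracleConnection conn) throws SQLException {
            // "some_proc" is a hypothetical stored procedure for illustration.
            CallableStatement cs = conn.prepareCall("{call some_proc(?)}");
            try {
                cs.registerOutParameter(1, Types.VARCHAR);
                cs.execute();
            } finally {
                cs.close(); // returns to the cache rather than being destroyed
            }
        }
    }
    A later prepareCall with the same SQL then reuses the cached statement and its existing buffers instead of allocating new char[]s.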

  • Unnaturally high cpu and memory usage

    Hello.
    I have installed WL 6.1 and WL Portal 4.0 on a W2K machine. It has an
    800 MHz CPU (I think) and 512 MB RAM.
    What happens is: After server startup, everything is low and nice. But
    after a few jsp compilations, the cpu jumps to 100% and stays there,
    even after the page has been returned and the browser says "done".
    Actually, memory usage isn't that high; the java process is using
    about 50 megs of memory. But it has exceeded this a couple of times,
    and used 200+ MB.
    The database is also running on another machine.
    I tried deploying the same application on a locally installed
    WL/Portal, and the same thing happened, only with much more memory
    usage, about 200 - 250 megs. My machine became useless, and I had to
    shut down the server.
    What is causing this? Is the server's configuration totally screwed,
    or can some code be doing this? Btw, I know I am the only user on this
    server...
    On other threads here, I have seen people supplying server dumps of
    processes etc. How do I see this dump, or what processes within the
    server are running?
    I am very grateful for any help with this.
    Christer

    Take a thread dump of the server. You should at least be able to see
    what it's doing.
    On UNIX, you can send a SIGQUIT (i.e. kill -3 the process).
    On Windows, you can CTRL-BREAK in the window.
    If you search for thread dump on edocs.bea.com, you should see a full
    explanation.
    Also, these groups can be searched on groups.google.com.
    -- Rob
