Withdraw permission allowing you to monitor memory usage

I now often have to force quit or reboot, and I have started having kernel "panics" since I agreed to allow Firefox to monitor CPU usage. I am still running OS X 10.6.8 on my Macs, and I have had no problems with anything else since installing that version of OS X.
However, from time to time with the latest install of Firefox 7.0.1 I find that the program and/or my computer "locks up" and appears unresponsive. I left Firefox active and resident on my computer last night and again everything went non-responsive. I opened Activity Monitor and noticed that Firefox was consuming 283 MB of real memory (at present 298.6 MB), with only this and one other tab, for the Seattle Times, open. I withdraw my permission for Firefox to monitor my memory usage, and I need to know how to remove the cookie (or whatever governs that).

Come on Firefox! Since your upgrades starting with FF4, Firefox has been acting like garbage. It locks up constantly, and I find myself having to use FF 3.6.23 just to get my work done.
The most frustrating thing is we never hear from you. We don't know if you hear us, if you're working on it, or if you just plain don't care! I've been reacquainting myself with Safari and Google Chrome just in case I get so fed up I can't take it any more.
I believe you're monitoring mine also.
I'm on a March 2007 MacBook Pro using OS X 10.6.8.

Similar Messages

  • How to monitor memory usage of "Memory-based Snapshot" executable (MRCNSP)  in Linux?

    We have noticed in the past that the MRCNSP/Memory-based Snapshot program executable consumes around 3.8 GB of memory on the Linux VM. I understand that value chain planning is a 32-bit executable, so 4 GB is the limit. I want to monitor the memory usage of the executable while the program runs. The program usually runs overnight. I wanted to check with you experts if you have any MRP executable memory usage monitoring script that I could use.
    I found the Metalink note OS Environment and Compile Settings for Value Chain and MRP/Supply Chain Planning Applications (Doc ID 1085614.1), which talks about "top -d60 >>$HOME/top.txt". Please share your ideas for monitoring this process (a rough sketch of what I have in mind follows at the end of this post).
    We do not use Demand Planning, Demantra, or Advanced Supply Chain Planning, which are 64-bit applications; that is our future direction.
    Environment:
    EBS: R12.1.2 on Linux. The concurrent manager is on a 64-bit Linux VM, web services on 32-bit VMs.
    DB: Oracle DB 11.2.0.3 on HP-UX Itanium 11.31. Single database instance.
    Thanks
    Cherrish Vaidiyan
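    P.S. To make the request concrete, below is the kind of minimal poller I have in mind. This is only a sketch: it assumes a Linux /proc filesystem, that the snapshot pid has already been found (e.g. with pgrep), and the log file name is made up for illustration.

    import java.io.FileWriter;
    import java.io.PrintWriter;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    // Append the VmRSS (resident set size) line for one pid to a log file
    // every 60 seconds, the same cadence as "top -d60".
    public class MemPoll {
        public static void main(String[] args) throws Exception {
            String pid = args[0]; // pid of the MRCNSP process, found beforehand
            Path status = Paths.get("/proc", pid, "status");
            try (PrintWriter log = new PrintWriter(new FileWriter("mrcnsp_mem.log", true))) {
                while (Files.exists(status)) { // stop when the process exits
                    for (String line : Files.readAllLines(status)) {
                        if (line.startsWith("VmRSS")) {
                            log.println(System.currentTimeMillis() + " " + line);
                            log.flush();
                        }
                    }
                    Thread.sleep(60000);
                }
            }
        }
    }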


  • Activity monitor memory usage pie chart gone?

    What happened to the pie chart that sits in the dock that shows memory usage?  I found this very useful in the past, and Mavericks seems to have removed it from the dock options.

    Uhhh, I know it's a bit late for this but...
    darlingdearie wrote:
    I don't have a dock icon called "memory pressure graph".
    I have
    CPU Usage
    CPU History
    Network Usage
    Disk Activity
    so it's hard to know what you're talking about when referring to the "memory pressure graph".
    Please clarify. Thanks!
    ... it sounds like you have a previous version of the Activity Monitor.  The one that comes with 10.9 has
    CPU
    Memory
    Energy
    Disk
    Network
    The memory pressure is at the bottom of the Memory tab.

  • Resource monitoring - memory usage

    I typically use the system monitor on our SLES11 SP2 server (OES11 SP1). The server has 48 GB of memory, as our backup software, Arkeia, loves memory. When monitoring resource usage during an Arkeia backup job, I notice that overall memory usage remains around 9.4 GB. However, if I look at the process table, my Arkeia job alone is using well over 16 GB.
    So the questions are: Why the difference? And is this a bug?
    The one thing that comes to mind is that the resource monitor is only looking at OS memory usage and is not adding "user process" memory. Any thoughts on this? Chris.

    What exact tool(s) are you using to monitor Arkeia's memory usage, and the overall memory usage? Can we see their output please? I'd guess they're really looking at different things.

  • How to monitor memory usage in a cRIO controller?

    I have an application where I suspect the customer is operating in a hotter environment than they are claiming. I prepared the RT host VI to log a simple text file directly onto the RT's C: drive. In this case I'm only logging the transitions when the chassis temperature exceeds 70C and then drops back below 60C, along with a date and time stamp. (It's not continually logging data.) However, I'm concerned that over time (i.e., months from now, after I've long forgotten the project) the file will get overly large and affect system operation. There are articles that deal with memory management, but I'm not sure how to interpret them; they talk about RAM. Is RAM the same as the C: drive on the controller? How do I determine the available memory on the C: drive, like using "dir" in the old DOS days? What is the best Knowledge Base article that deals with this issue? This system is a stand-alone application. (The host PC is not normally connected.) I am using a cRIO-9012 controller and LV v8.6.
    Dave

    RAM on the controller is not the same as the C: drive. With respect to the controller, you can think of it in the same terms as your computer: RAM is volatile memory and your C: drive is non-volatile flash memory.
    Depending on the frequency of the temperature excursions above and below your 70C threshold, the service life of the controller, and the method you use to append to the file, there are a number of issues that may creep up over time.
    The first, and the one you brought up, is the size of the file over time. Left unchecked, this file could grow continuously until the system literally runs out of flash memory space and chokes. Depending on how you are appending data to this file, you could also use more than a trivial slice of processor time to read and write this big file on disk. While I have not personally ever run one of the RT controllers out of "disk space", I can't imagine that any good could come of it.
    One thought is to keep a rolling history of, say, the last 3 months: each month, start a new file and append your data to it during the course of the month, and each time a new file is created, delete the data file from 3 months ago (see the sketch below). This ensures that you always have the last 3 months of history on the system, while the monthly deletion of the oldest data file limits you to 3 files at whatever size they happen to be. Unless there are hundreds of thousands of transitions above and below your threshold, this should keep you in good shape.
    I also alluded to the method you use to write to this file. I would ensure that you are appending data using the actual file functions, and not first reading in the file, appending your data as a string, then writing the entire file contents back to disk. In addition to causing the highest load on the file system, this method also has the largest system RAM requirements.
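    To make the rolling-history idea concrete, here it is as a rough Java sketch (the real implementation would of course use the LabVIEW file functions on the RT target, and the monthly file naming scheme here is just an assumption):

    import java.io.File;
    import java.io.FileWriter;
    import java.io.IOException;
    import java.time.YearMonth;

    // Keep a rolling 3-month history: one log file per month, oldest deleted.
    public class RollingLog {
        private static final int MONTHS_KEPT = 3;

        // Append one event line to this month's file, then drop the stale file.
        static void logEvent(String event) throws IOException {
            YearMonth now = YearMonth.now();
            try (FileWriter w = new FileWriter("temp_" + now + ".log", true)) {
                w.write(event + System.lineSeparator()); // append only, never read-modify-write
            }
            // Delete the file from MONTHS_KEPT months ago, if it still exists.
            new File("temp_" + now.minusMonths(MONTHS_KEPT) + ".log").delete();
        }

        public static void main(String[] args) throws IOException {
            logEvent("chassis temperature exceeded 70C");
        }
    }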

  • Is SL giving maximum priority on memory usage to disk cache???

    Pretty much every time I need to work with large files - be it video files, archiving or unarchiving large files, or working with VMWare images - the system quickly runs out of memory and starts swapping profusely. The swap file grows to 2-4 gigabytes pretty quickly, the system becomes unresponsive etc...
    I have the latest MacBook Pro 2009 with 4G of RAM, and right after system reboot it shows about 2.8G of free memory, and 0 in swap.
    Then I start VMWare, for example, which only uses about 500-600 megabytes of RAM. After some time working with it, the amount of free memory in Mac OS X steadily decreases, and then it starts swapping, and the swap keeps growing... The system becomes slower and slower, but yet, Activity Monitor is still showing that VMWare is using about 500-600 megabytes of memory. At the same time it shows about 2 gigabytes of Active memory, and about 1G of inactive memory, and about 800 megabytes of "wired" memory...
    Where the **** did all the memory go??? Did the system use it all for disk cache, trying to keep all the disk images that VMWare is working with in it, at the expense of physical RAM available to other processes? If so, that's VERY silly: while it may give an appearance of "snappiness" with a small number of simple applications that don't do much, it becomes a MAJOR problem for applications that work with large files (iMovie, VMWare, any other media processing application). It causes Mac OS X to start swapping and beachballing and become unresponsive after a certain amount of work, while the swap file keeps growing, yet the amount of memory used by those applications as shown by Activity Monitor stays around 100-500 megabytes.
    I was working with JES Deinterlacer the other day, trying to convert 1080i/30fps video into 720p/60fps. The file was about 23 gigabytes in size. Besides that I only had Mail and Safari open. Before I started the deinterlacing process, there was about 1.5G of free memory. After it started, the amount went down steadily, until the system started swapping and became unusable after a while. The deinterlacing process also slowed down significantly (my guess is, again, that the system tried to give memory needed by JES Deinterlacer to the disk cache for the file it's been processing). There's no point in caching that file at all: there's zero chance the application will ever need to jump back to the parts of the file it's already processed, or to read any parts of the new file it's created!
    What's other people's experience working with large files in Snow Leopard? Any Final Cut users? Or iMovie? Does your system exhibit the same behavior? Have you tried monitoring memory usage while you work with any of the applications that work with large files?

    Having exactly the same issue. However, I cannot even get VMware to start. It gives me a "could not open paging file" message and never opens. I have also had strange random crashes recently where it seems to run out of memory and then the force quit dialog opens and usually Safari, Entourage, or Aperture are frozen or paused. Really getting to be a big issue.
    I contacted VMware and they looked at all the logs, had me try one change to an internal VM file, but without success. They now are suggesting that based on the logs, it is an OSX paging issue and that I need to do an archive and install!!! No way do I want to do that after just migrating to this new machine and getting everything set up so well!
    Need some alternative ideas. Actually hoping 10.6.2 comes out and solves it!

  • Tracking Memory usage on iOS using the Stats class

    I've been checking memory usage on an app I'm developing for iOS using the Stats class https://github.com/mrdoob/Hi-ReS-Stats (http://help.adobe.com/en_US/as3/mobile/WS4bebcd66a74275c3-315cd077124319488fd-7fff.html#WS948100b6829bd5a61637f0a412623fd0543-8000).
    I added the Stats class to my project and redeployed and, yikes, memory usage reported in Stats creeps up (pretty slowly) even when there's nothing happening in the app (just showing a loaded bitmap).
    To try and track down the issue I created a project with a test class that extends Sprite with just this single call in the constructor :-
    addChild( new Stats() );
    I deployed it to the device to check that it didn't gobble any memory.
    But I was surprised to watch the memory usage creep up and up (to approx 5 MB) before some garbage collection kicked in and took memory back down. I left it running and then it crept up again, this time to over 7.5 MB, before being kicked back down to just below 3 MB.
    So 2 related questions that I'd appreciate any feedback/observations/thoughts on :-
    1 - Is this normal (i.e. memory creeping up when there's nothing other than Stats in the project) ?
    2 - What is the best way to monitor memory usage within an app ? Is Stats good enough - is Stats itself causing the memory usage ?
    All the best guys !

    Also see thread (http://forums.adobe.com/message/4280020#4280020)
    My conclusions are :-
    - If you run an app and leave it idle, memory usage gradually creeps up (presumably as memory is being used to perform calcs/refresh the display etc)
    - Periodically the garbage collection kicks in and memory is brought back down
    - This cycle could be in excess of 5 mins
    - Run with your real app and memory will increase and be released much more rapidly/regularly.
    - It's probably worth performing initial checks by running on your desktop to iron out any initial problems

  • High RAM memory usage compared to other systems

    I'm suffering from memory usage problems: just opening 2 tabs of Facebook (only an example; any 'heavy' page causes the freeze too) in Firefox* (also just an example; this occurs in Chromium as well) freezes the computer.
    I have 1 GB of memory; it isn't big, but it also isn't very little.
    I'm using XFCE with Compiz (yes, Compiz, since using a very lightweight Openbox-based session doesn't help, so why not Compiz...). When the system starts (after login), it is consuming approximately 130 MB of RAM.
    After 1 hour, if I close all open apps and check memory usage again, consumption is around 350 MB; the "only solution" is to reboot.
    All the applications I've tried for monitoring memory usage (htop, free -m, XFCE's system monitor...) show the above scenario, except the 'ps_mem' script from the AUR.
    In the 'ps_mem' script the result is the following (with the Opera browser open, a less memory-hungry browser, though I really prefer Firefox):
    Private + Shared = RAM used Program
    88.0 KiB + 10.0 KiB = 98.0 KiB agetty
    380.0 KiB + 34.5 KiB = 414.5 KiB sshd
    408.0 KiB + 93.0 KiB = 501.0 KiB gpg-agent (2)
    372.0 KiB + 142.0 KiB = 514.0 KiB avahi-daemon (2)
    456.0 KiB + 60.0 KiB = 516.0 KiB systemd-logind
    280.0 KiB + 261.0 KiB = 541.0 KiB sh
    476.0 KiB + 103.5 KiB = 579.5 KiB xfconfd
    476.0 KiB + 153.5 KiB = 629.5 KiB gvfsd
    604.0 KiB + 34.5 KiB = 638.5 KiB systemd-udevd
    588.0 KiB + 102.5 KiB = 690.5 KiB dbus-launch (3)
    576.0 KiB + 119.5 KiB = 695.5 KiB sudo
    668.0 KiB + 77.5 KiB = 745.5 KiB cups-browsed
    620.0 KiB + 179.5 KiB = 799.5 KiB at-spi2-registryd
    804.0 KiB + 67.0 KiB = 871.0 KiB htop
    756.0 KiB + 131.0 KiB = 887.0 KiB gconfd-2
    800.0 KiB + 126.0 KiB = 926.0 KiB upowerd
    752.0 KiB + 183.5 KiB = 935.5 KiB xscreensaver
    920.0 KiB + 85.0 KiB = 1.0 MiB cupsd
    876.0 KiB + 241.0 KiB = 1.1 MiB gvfsd-fuse
    692.0 KiB + 429.0 KiB = 1.1 MiB systemd-journald
    880.0 KiB + 273.5 KiB = 1.1 MiB at-spi-bus-launcher
    1.2 MiB + 125.0 KiB = 1.3 MiB udisksd
    1.4 MiB + 486.5 KiB = 1.9 MiB dbus-daemon (5)
    1.4 MiB + 533.5 KiB = 1.9 MiB panel-6-systray
    1.5 MiB + 430.0 KiB = 2.0 MiB lightdm (2)
    1.6 MiB + 572.0 KiB = 2.1 MiB xfce4-session
    1.5 MiB + 683.0 KiB = 2.2 MiB panel-5-datetim
    1.5 MiB + 706.5 KiB = 2.2 MiB panel-2-actions
    1.6 MiB + 723.0 KiB = 2.3 MiB panel-4-systeml
    2.0 MiB + 543.5 KiB = 2.5 MiB xfsettingsd
    2.0 MiB + 579.5 KiB = 2.6 MiB systemd (3)
    2.3 MiB + 815.0 KiB = 3.1 MiB emerald
    2.8 MiB + 578.5 KiB = 3.3 MiB gnome-keyring-daemon (3)
    3.2 MiB + 946.5 KiB = 4.1 MiB zsh (2)
    16.1 MiB + -12144.0 KiB = 4.2 MiB polkitd
    4.1 MiB + 465.0 KiB = 4.6 MiB notify-osd
    4.5 MiB + 1.6 MiB = 6.2 MiB xfce4-panel
    5.1 MiB + 1.2 MiB = 6.3 MiB panel-7-mixer
    7.7 MiB + 1.4 MiB = 9.0 MiB xterm (2)
    12.0 MiB + 636.0 KiB = 12.6 MiB opera:libflashp
    24.3 MiB + -10774.5 KiB = 13.8 MiB Xorg
    18.6 MiB + 1.1 MiB = 19.7 MiB gnome-do (2)
    23.2 MiB + 2.4 MiB = 25.6 MiB compiz
    168.4 MiB + 3.2 MiB = 171.7 MiB opera
    320.1 MiB
    =================================
    As you can see in 'ps_mem', memory usage isn't anything absurd; it's OK.
    Another example: this problem stops me from doing certain things; for instance, it's impossible to browse the web while programming in Eclipse, which is very uncomfortable.
    Also, this does not occur in Windows XP: I can have more than 10 tabs open in Firefox, plus Eclipse, etc...
    If you need any more information, tell me!
    firefox*: with memory cache turned off in about:config
    Last edited by hotvic (2013-08-12 17:42:21)

    hotvic, could you please change the title to something more descriptive? 'memory problems' sounds like you have problems e.g. remembering / recalling things, which is not an Arch issue ;P
    kenny3794 wrote:
    I've had some performance limits with browsers lately (Firefox and Chromium). I found they had considerable cache sizes using lots of fragmented space in my home directory. I cleared the caches and have since set a limit of 30 MB on Firefox's cache size. Between multiple users across multiple browser sessions, I had at least 2 GB of cache data on my 10 GB home partition! So far (1 day), this has been helpful. Perhaps this could help you also?
    Ken
    OP is talking about RAM, not HDD space.

  • How to monitor memory on Cisco ACE Appliance 4710?

    I'm trying to monitor memory usage on Cisco ACE Appliance 4710 load balancers with version A3(2.2), but the OIDs cpmCPUMemoryUsed (.1.3.6.1.4.1.9.9.109.1.1.1.1.12) and cpmCPUMemoryFree (.1.3.6.1.4.1.9.9.109.1.1.1.1.13) do not work.
    What is the right OID to monitor memory usage on the Cisco ACE 4710 Appliance?

    Hi,
    You need to use the CISCO-PROCESS-MIB (the cpm* objects live there).
    cpmProcExtMemAllocatedRev .1.3.6.1.4.1.9.9.109.1.2.3.1.1 (this gives the memory allocated to each process)
    You can also read up on the MIB.
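    For example, the table can be walked from any SNMP manager; with Net-SNMP (the host name and community string below are placeholders) it would look something like
    snmpwalk -v2c -c public ace4710.example.com .1.3.6.1.4.1.9.9.109.1.2.3.1.1
    and you can sum the per-process values to estimate total memory in use.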
    Hope this helps
    Venky

  • Does J2ME Wireless Toolkit Check Memory Usage?

    I just want to confirm: does the J2ME Wireless Toolkit check the memory usage of a MIDlet and only allow it the 128 KB of memory specified by the MIDP 2.0 specification???
    Or do we have to calculate and check the memory usage of a MIDlet manually???

    Hi,
    The WTK doesn't set heap memory unless you specify it in the preferences. Most devices allow up to 200 KB of heap or more, the smallest being the Nokia 7250, which allows only up to 200 KB. So if you are developing with a particular device in mind, specify its heap size in the preferences. You can also get the memory utilized at runtime through the
    Runtime.getRuntime().* functions, e.g.
    Runtime.getRuntime().freeMemory();
    will give you the free memory.
    In newer emulators, such as the Nokia 7210 emulator, you can see the memory in real time in the details section.
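    For instance, a minimal sketch (the class name is arbitrary, and in a real MIDlet you would show the numbers in a Form or on screen rather than System.out):

    public class HeapCheck {
        public static void main(String[] args) {
            // freeMemory()/totalMemory() are part of CLDC's java.lang.Runtime,
            // so the same calls work inside a MIDlet.
            long total = Runtime.getRuntime().totalMemory(); // current heap size, bytes
            long free = Runtime.getRuntime().freeMemory();   // unused part of that heap
            System.out.println("Used: " + (total - free) + " of " + total + " bytes");
        }
    }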
    Hope this helps.

  • Monitor memory percentage

    Hello,
    I want to monitor memory % usage on some of the servers where I work. I tried looking online for answers to my questions, but couldn't find any for most of them, so I'll try here.
    1) I see the "percentage of committed memory" monitor, which is what I want to use, is by default disabled for all classes except the Windows Server 2000 OS. Is that normal?
    2) When I search in the monitors pane for the above monitor, I see it's separately targeted at each OS, for example at "Windows Server 2003 OS". Why is that? I mean, why isn't it just targeted at the Windows Computer class?
    3) If the servers I want to monitor are of different OS types, do I need to go over each of the monitors I see and enable it for its own OS class?
    4) I see the defaults are a 120 sec interval and an 80% threshold. Let's say I want to override that for my group of servers with 85%: do I need to override the monitor under each class?
    thanks a lot in advance,
    Uri

    1) Yes, it's normal and by design.
    2) Most of the monitors in the OS MPs are like that; logical disk free space is another example. This is also by design: these monitors come with the different OS MPs that we import into SCOM, which makes it possible to set overrides and threshold values per OS.
    3) Yes, go over each one and enable the monitor.
    4) You can create the override under the targeted class of the monitor and it will apply to everything in that class.
    Hope this helps.
    Thanks, S K Agrawal

  • How to monitor Java memory usage in Enterprise Manager

    I am running sqlplus to execute a SQL package which generates XML.
    When processing 2000+ rows, it gives an out-of-memory error.
    Where in Enterprise Manager can I see this memory usage?
    Thanks.

    Hello,
    it depends a little on what you want to do. If you use the pure CCMS monitoring with the table ALTRAMONI you get average response time per instance and you only get new measurements once the status changes from green to yellow or red.
    In order to get continuous measurements you should look into Business Process Monitoring and the different documentation under https://service.sap.com/bpm --> Media Library --> Technical Information. E.g. the PDF Setup Guide for Application Monitoring describes this "newer" dialog performance monitor. You probably have to click on the calendar sheet in the Media Library to also see older documents. As Business Process Monitoring integrates with BW (there is also a BI Setup Guide in the Media Library) you can get trendlines there. This BW integration also integrates back with SL Reporting.
    Some guidance for SL Reporting is probably given under https://service.sap.com/rkt-solman but I am not 100% sure.
    Best Regards
    Volker
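    P.S. If what you actually want is to watch the Java heap directly while the job runs (assuming the XML generation runs inside a JVM rather than in the database), the standard java.lang.management API can do that from the code itself, independent of any monitoring product. A minimal sketch:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;

    // Log JVM heap usage every 5 seconds while the job runs.
    public class HeapWatch {
        public static void main(String[] args) throws InterruptedException {
            MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
            while (true) {
                MemoryUsage heap = mem.getHeapMemoryUsage();
                System.out.printf("heap used=%dMB committed=%dMB max=%dMB%n",
                        heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
                Thread.sleep(5000);
            }
        }
    }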

  • What are Microsoft's or others' best-practice recommendations for disk, CPU, and memory usage monitoring to prevent system trouble?

    We use Win2003, Win2008, and Win2012 servers.
    I heard somewhere that Microsoft's recommended threshold for disk usage monitoring is free disk space >= 15-20%, if I remember correctly, but how about CPU usage and memory usage monitoring?
    What are Microsoft's (or others') best practices or recommendations for disk usage, CPU usage, and memory usage monitoring to prevent system trouble and improve availability?

    Hi,
    You can refer the following Performance Tuning Guidelines,
    Performance Tuning Guidelines for Windows Server 2003
    http://download.microsoft.com/download/2/8/0/2800a518-7ac6-4aac-bd85-74d2c52e1ec6/tuning.doc
    Performance Tuning Guidelines for Windows Server 2008 R2
    http://blogs.technet.com/b/josebda/archive/2010/08/27/performance-tuning-guidelines-for-windows-server-2008-r2.aspx
    WINDOWS SERVER 2012 - PERFORMANCE TUNING GUIDELINES
    http://blogs.technet.com/b/itprocol/archive/2012/11/27/windows-server-2012-performance-tuning-guidelines.aspx
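    As a quick sanity check against the 15-20% free-disk guideline mentioned above, the free percentage is also easy to compute from code. A minimal Java sketch (the volume path is only an example):

    import java.io.File;

    // Warn when free space on a volume drops below 15%.
    public class DiskCheck {
        public static void main(String[] args) {
            File vol = new File("C:\\"); // example volume
            double freePct = 100.0 * vol.getUsableSpace() / vol.getTotalSpace();
            System.out.printf("C: free %.1f%%%n", freePct);
            if (freePct < 15.0) {
                System.out.println("WARNING: below the 15-20% free-space guideline");
            }
        }
    }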
    Hope this helps.

  • Memory Usage:  Heap space vs System Monitor

    Hi guys
    I'm trying to wrap my head around something which seems pretty complicated. We have an enterprise app making use of Jasper Reports. When a report is generated, the app's memory usage in the Linux (Ubuntu 8.10) System Monitor shoots up by around 20MB (the more data on the report, the higher that number). After the report is closed, memory usage doesn't decrease. This is resulting in performance issues, and memory usage given by the System Monitor doesn't match up with the results given by the NetBeans profiler. There is a huge difference between the values indicated by the System Monitor, and the heap space given by the profiler. I've done some fairly detailed checking using the profiler, and according to that, there are 0 instances of our report generating classes in memory after closing a generated report. I've read a bunch of articles on the net, but I'm still stumped. My best guess is that something weird is happening somewhere in the Jasper code, but I can't really think of a way to confirm that. Does anyone have any idea what could be causing this and how to prevent it? Any help will be much appreciated; thanks.
    P.S. I've posted on the Jasper forums as well, but to no avail. [http://jasperforge.org/plugins/espforum/view.php?group_id=102&forumid=103&topicid=65335]
    Edit: We're using Java 6 (we've recreated the issue on a few release of Java 6), and Jasper Reports 3.0.1. (Just in case that is important)

    One thing about your test is that using -Xms110m will cause the JVM to NEVER give back any memory, because it won't ever shrink the heap below 110M. I altered your test a bit:
    import java.util.LinkedList;

    public class MemTest {
        private static LinkedList<byte[]> mem = new LinkedList<byte[]>();

        public static void main(String[] args) {
            try {
                System.out.println("Initial:");
                print();
                useMem();
                System.out.println("Allocated:");
                print();
                clearMem();
                System.out.println("Deallocated:");
                print();
                for (;;) {
                    Thread.sleep(10000);
                    System.out.println("After 10 seconds: ");
                    print();
                }
            } catch (Exception ex) {
                ex.printStackTrace();
            }
        }

        // Fill the heap with 1 MB blocks until an OutOfMemoryError, then back off.
        private static void useMem() {
            for (;;) {
                try {
                    mem.add(new byte[1024 * 1024]);
                } catch (OutOfMemoryError e) {
                    // free up a few MB to make sure we don't get an OOME while trying to print, etc.
                    mem.removeFirst();
                    mem.removeFirst();
                    mem.removeFirst();
                    break;
                }
            }
        }

        // Drop the list and ask the collector (repeatedly) to run.
        private static void clearMem() {
            mem = null;
            System.gc();
            System.gc();
            System.gc();
        }

        private static void print() {
            long free = Runtime.getRuntime().freeMemory();
            long total = Runtime.getRuntime().totalMemory();
            double freeFraction = (((double) free) / (total)) * 100;
            System.out.printf("\tAvailable: %d\tTotal: %d\tFree: %4.2f%%%n", free, total,
                    freeFraction);
        }
    }

    When I ran it with

    -Xms16m -Xmx1024m -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=30

    I got these results:
    Initial:
         Available: 16518016     Total: 16711680     Free: 98.84%
    Allocated:
         Available: 2070952     Total: 1065484288     Free: 0.19%
    Deallocated:
         Available: 582957376     Total: 583073792     Free: 99.98%
    After 10 seconds:
         Available: 582953224     Total: 583073792     Free: 99.98%

    As you can see, the JVM did return about half of the memory back. ProcExp shows ~592MB used, and peaked ~1000MB. But right now it's still running, and still holding at 583MB.
    I've read bug reports like this, but honestly have no idea what the people at Sun are talking about. I don't see the footprint being reduced. I don't know, maybe if I wait a couple of days...

  • Is there a VI to monitor CPU and memory usage on a Windows 2000 system?

    I want to monitor CPU usage and available memory on a Windows 2000 computer. Is there a VI that can call the Task Manager via a DLL to provide this information in real time?

    There was a nice example using .NET technology, but it seems the link has changed and I cannot find it anymore. See this older thread for some clues.
    Does anyone know what happened to the target of the original link in my old post?
    Edit: Found it here.
    Message Edited by altenbach on 04-29-2005 12:42 PM
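    Edit 2: If calling the OS through a DLL proves awkward, the counters can also be read by a small helper program. For illustration only, here is a Java sketch using com.sun.management.OperatingSystemMXBean; note that this is a HotSpot-specific extension and needs a far newer JVM than a Windows 2000-era setup would have, so treat it purely as a sketch of the idea:

    import com.sun.management.OperatingSystemMXBean;
    import java.lang.management.ManagementFactory;

    // Poll system-wide CPU load and free physical memory once per second.
    public class SysMon {
        public static void main(String[] args) throws InterruptedException {
            OperatingSystemMXBean os = (OperatingSystemMXBean)
                    ManagementFactory.getOperatingSystemMXBean();
            while (true) {
                System.out.printf("CPU: %.0f%%  Free RAM: %d MB%n",
                        os.getSystemCpuLoad() * 100,          // may be -1.0 until the first sample
                        os.getFreePhysicalMemorySize() >> 20);
                Thread.sleep(1000);
            }
        }
    }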
    LabVIEW Champion. Do more with less code and in less time.
