Need advice concerning memory usage
Hi all,
I have a class that represents a hexagon map (it extends JPanel). This map is usually put in a scroll pane.
In the paintComponent method of the class the hexagons of the map are painted. For good scrolling and painting performance I use the clipping region to paint only the hexagons that are currently visible.
For that purpose I call "hexBounds = hex.getBounds();" to get the bounds of the hexagon, then check whether the hexagon's bounds intersect the clipping region to decide whether the hexagon needs to be painted (see the code below).
The "problem" is that lots of Rectangle objects are created (and memory allocated). So when I render a large map and do some excessive scrolling (which causes lots of repaints) for, say, 1 minute, several hundred thousand Rectangle objects have been allocated (but are no longer referenced) and are occupying up to 20-40 MB of memory. I used the profiler with the following parameters:
-Xint -Xrunhprof:heap=sites,depth=10
My question is: do I need to worry about this, or will the garbage collector take care of it when it needs to? I could call the garbage collector manually (System.gc()) at the beginning of the paintComponent method so that all old, unreferenced Rectangle objects are collected, but I've read in several articles that calling the garbage collector manually is generally a bad idea.
What is the best practice in such a case?
Here is the painting code:
super.paintComponent(g);
Graphics2D g2d = (Graphics2D) g;
Rectangle clipBounds = g2d.getClipBounds();
Rectangle hexBounds;
Iterator<Entry<HexagonCoordinates, Hexagon>> iter = hexagons.entrySet().iterator();
while (iter.hasNext()) {
    Entry<HexagonCoordinates, Hexagon> entry = iter.next();
    Hexagon hex = entry.getValue();
    hexBounds = hex.getBounds(); // <-- problem
    // Paint only if necessary
    if (clipBounds.intersects(hexBounds)) {
        g2d.drawPolygon(hex);
    }
}
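One common way to sidestep the per-paint allocation (rather than worrying about GC) is to let each hexagon write its bounds into a caller-supplied scratch Rectangle. The sketch below is an assumption-laden reduction: this `Hexagon` is a hypothetical stand-in for the poster's class (just the bounds bookkeeping, no Polygon), and the `getBounds(Rectangle out)` overload is an API the poster would have to add; it is not claimed to exist in their code.

```java
import java.awt.Rectangle;

// Hypothetical stand-in for the poster's Hexagon class, reduced to the
// bounding-box bookkeeping needed to show the allocation-free pattern.
class Hexagon {
    final int x, y, width, height; // precomputed bounding box

    Hexagon(int x, int y, int width, int height) {
        this.x = x; this.y = y; this.width = width; this.height = height;
    }

    // Fills 'out' instead of returning a freshly allocated Rectangle.
    void getBounds(Rectangle out) {
        out.setBounds(x, y, width, height);
    }
}

class PaintLoopSketch {
    // Mirrors the paint loop: one scratch Rectangle for the whole pass
    // instead of one Rectangle per hexagon per repaint.
    static int countVisible(Iterable<Hexagon> hexagons, Rectangle clipBounds) {
        Rectangle scratch = new Rectangle();
        int visible = 0;
        for (Hexagon hex : hexagons) {
            hex.getBounds(scratch);
            if (clipBounds.intersects(scratch)) {
                visible++; // in paintComponent this would be g2d.drawPolygon(hex)
            }
        }
        return visible;
    }
}
```

With this shape the paint loop allocates one Rectangle per repaint regardless of map size, so the hprof site for Rectangle essentially disappears.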
I've run several tests now and it seems the garbage collector takes care of the problem as expected. But what if someone scrolls excessively for many minutes (hey, you know users do strange things sometimes)? That would mean more and more memory being allocated in a very short time period. My guess is that the garbage collector would free the occupied memory before the VM runs out of memory, right?
Then everything would be fine.
Any other suggestions on improving my painting code are greatly appreciated.
Similar Messages
-
Need Advice, Concerns 12v connector on MSI K Diamond
I didn't have a 12v power connector that goes into the mobo, so I used the 4-pin P4 connector instead (I heard this would run), and it has worked; I am currently waiting for the converter. From what I hear, though, only one wire is providing 12 volts compared to the two that the P4 connector gives. Is there anything I should be concerned about? I don't plan on overclocking so I doubt the CPU will need the extra juice. Also, could this harm my computer?
The mobo works fine with the 4-pin connector instead of the 8-pin connector. It doesn't matter if you OC or not; that connector should be plugged into the mobo anyway.
-
LV 2013, LVRT 2013 (PXI), Win 7
I'm an old hand at LabVIEW, but I'm new to using Project Libraries (.lvlib) and Packed Project Libraries (.lvlibp).
I'm hoping someone with more experience can offer some insight and guidance.
At first blush, it looks like PPLs can solve a problem of mine.
My project is large: 500 channels from 20+ different domains (plugin boards, networked devices, etc.), plus lots of data analysis and reporting. There are 30+ cells with a copy of this, and it has to be EXE and RTEXE files, not just LabVIEW DevSys.
It's been working for 10+ years, we are now doing a rewrite, so that DAQ is happening all the time, rather than on demand.
My client wants to have "addons", so that when a new Gizmotron 9000 comes along, we don't have to add to the main program, we write an "addon" to handle the Gizmotron. This "addon" would have a UI, and be able to share VIs from the main host, and data, but not require any changes to the host code. Eventually, these should be capable of being written by someone other than me, an engineer (in some other field) with fewer LabVIEW skills than me.
Using a VI for this would work, but is kind of fragile. If this VI uses a VI in the host, then it's path-dependent. Moving the VI to a different folder would at best, cause a SEARCH box to come up, and at worst, break things. I'm not even sure if the SEARCH box would come up, in an EXE. And if I want a given host VI to be available to these addons, I have to make sure that it's not stripped by the BUILD process.
So, I looked at making a DLL. Yuck. Double Yuck. I'm not about to make the client have to edit C header files, just making the "wizard" import a DLL that it just made is a nightmare.
So, I found PPLs. It looks like they're the answer.
I've already implemented a system where a given PPL has a "Description" VI and a "Work" VI. If you choose a given PPL, it runs the "Description" VI and obtains a text description (that the maker writes) of what the package does. If you choose this one, it takes the "Work" VI, creates a new window with a subPanel to fit, and installs the "Work" VI in it. A string notifier sends messages to it. The host provides preferences save/restore functions. The window can be hidden and shown. It can do anything as far as UI that it needs.
I can move the PPL from the ADDONS folder to the ADDONS\GoodStuff\2015\May folder. My menu system will offer it, and if you choose it, it still works. It is self-contained, it just works.
But being self-contained has a "small" downside. I have a fair number of FGVs: Functional Global Variables. I call them "managers" - the idea is that they keep info in Shift Registers, and a caller calls them with some function to perform using that info.
For example, one might keep a TCP Connection ID in such a ShiftReg and have CONNECT, DISCONNECT, and various SEND and RECEIVE functions which use that ShiftReg.
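LabVIEW FGVs are graphical, so no text snippet is the real thing, but the pattern the post describes has a rough analog in a text language. The sketch below is that analog only: a Java class (my stand-in, not anything from the thread) where the "shift register" becomes private static state and the CONNECT/DISCONNECT actions become methods. The connection ID is a plain int standing in for a TCP Connection ID.

```java
// Rough Java analog of the FGV "manager" pattern: state lives in one
// place (the static field plays the role of the shift register) and is
// only touched through named actions.
final class ConnectionManager {
    private static int connectionId = -1; // -1 = not connected

    static synchronized void connect(int id) { connectionId = id; }
    static synchronized void disconnect()    { connectionId = -1; }
    static synchronized int  currentId()     { return connectionId; }
}
```

The "two dataspaces" problem described next maps onto this analog directly: if the class is statically linked into both the host and the addon, each gets its own independent `connectionId`, just like the two FGV instances.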
Without attention, that pattern doesn't survive the move to PPLs. Because of the "self-contained" aspect, there are actually TWO instances of the manager, with TWO separate dataspaces. So, if the host calls it, and the PPL then calls it, they don't share the same info.
Here's where my understanding of this gets dicey. If I create a separate PPL just for this manager, then that's an answer. If everybody calls the PPL version, then there will only be one instance, and all is well.
But what's the best way to do that?
I can put the LVLIB into the master project and then add a Build Spec to make the LVLIBP, but I found out I have to actually put the PPL itself into the project to get it to work. That means there are TWO copies of the VI in the project. The only way I can get it to work is to specify to use the copy from the PPL, and NOT the one that's in the host already.
So, to avoid temptation, should I move all these manager things into one separate project? Should I move them into individual separate projects? Should I just remove the components from the main project after building the PPL? (no, I can't do that - that would make it impossible to update the PPL). What's the best strategy, and why?
Another thing I don't understand is the physical handling of the PPL. Does it have to actually exist?
Apparently not - I did a test:
--- Create a LVLIB for one of these managers. It lives in the ADDONS folder.
--- My HOST code (in an EXE) initializes it, and stores a value.
--- One of the ADDONS from a PPL reads that value and displays it.
--- It works. The data is shared.
--- I moved the PPL file to a different folder, inside the ADDONS folder. It works.
--- I moved the PPL file to my desktop. IT STILL WORKS.
--- I changed the PPL's file name. IT STILL WORKS.
--- I deleted the PPL file. IT STILL WORKS.
So, apparently the PPL was incorporated into the EXE. But I didn't specifically include it - the only SOURCE code I specified was the host main VI.
Is that what happened? Am I understanding it right?
But if that's what happened, then how does a stand-alone PPL know that it's there? Because my standalone PPL accessed it just fine, even though it's in an EXE.
Do I have to deploy the PPL file alongside the EXE? or just ignore the thing. I just don't quite understand how it works.
Steve Bird
Culverson Software - Elegant software that is a pleasure to use.
Culverson.com
Blog for (mostly LabVIEW) programmers: Tips And Tricks
CoastalMaineBird wrote:
But if the path is important, then this is not solving one of my problems, the portability of the addons.
If your problem is "how do I find plugins not included in my EXE in a plugin architecture", then finding them by relative path is the only simple method. Whether the plugin architecture uses VIs, classes, or PPLs, you will always need to enumerate the plugins not included in the EXE. The easiest way is to enumerate a folder of files and say each one is a plugin.
Alternatively you could have something in the middle, like a text file that says where to find the plugins, or a registry entry that gives their location as an absolute path. But then you have another place where the fragility of the system can show up: in addition to the plugins not being where they should be, your text file could be broken or missing. At least with a PPL, or LLB, or Class you have a single file that is seen as the main one and references the others.
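The enumerate-a-folder approach can be sketched in a few lines. This is a Java illustration of the idea only (the thread is about LabVIEW); the ADDONS folder name and the .lvlibp extension are taken from the thread, while the class and file names are hypothetical.

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Sketch: treat every .lvlibp file found in the addons folder as a plugin.
class PluginScanner {
    static List<String> findPlugins(File addonsDir) {
        List<String> plugins = new ArrayList<>();
        File[] entries = addonsDir.listFiles();
        if (entries == null) return plugins; // missing folder -> no plugins
        for (File f : entries) {
            if (f.isFile() && f.getName().toLowerCase().endsWith(".lvlibp")) {
                plugins.add(f.getName());
            }
        }
        return plugins;
    }
}
```

Everything else (descriptions, menus, loading) hangs off this list, which is why a single folder convention is less fragile than a separate index file that can itself go missing.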
As much as I resist classes complicating things, this is a prime example of where they should be used. I don't want to change your design if it is too late, but you should at least take a look at the Hardware Abstraction Layer.
http://www.ni.com/example/31307/en/
https://decibel.ni.com/content/docs/DOC-15014
Unofficial Forum Rules and Guidelines - Hooovahh - LabVIEW Overlord
If 10 out of 10 experts in any field say something is bad, you should probably take their opinion seriously. -
Need advice concerning a heatsink
hey all,
I plan to upgrade my CPU cooling because the stock heatsink of my P4 doesn't seem to be enough (plus I want to overclock). So anyhoo, I've been checking out some websites for some good copper heatsinks, and I saw this
http://www.newegg.com/app/ViewProductDesc.asp?description=35-106-024&depa=0
The Thermaltake P4 Spark 5 (in case the link didn't work). So, is this heatsink good? If any of you have had any experience (good or bad) with this product, please do tell. Also note that I'm on a bit of a budget. Thanks in advance for the advice.
Originally posted by kt6user:
"but the guy has a P4, so an Aero 7 won't work for him in the first place"
Don't sweat it, coolg.
scooter did this just the other day. Hope you don't mind, scooter.
Hey, c'mon, it was like 2am my time.
I don't think the towers cool well either.
-
Syncing iPod to new user account-need advice
Hey all! I set up a new user account for my friend on my iMac, so he will want to sync his iPod Touch to iTunes, iPhoto, iCal, etc. in his new user account. I know there are plenty of freeware and shareware apps to get his music from the iPod into iTunes, but I need advice concerning everything else on his iPod.
I know I'll first get the pop-up window that says his iPod is synced to another computer, and that I should use "cancel" rather than "sync and erase". After doing that, I guess I use one of the shareware apps to get the music transferred over.
Now, after that, if I do a regular sync, will iTunes transfer his apps that are on the iPod?
Do I need to open iPhoto and download the photos on his iPod into iPhoto before doing a sync?
What about text messages? Will they remain intact?
I don't think he uses the calendar on his iPod so no worries there....
What about Playlists that he has on the iPod? Will just the music transfer with the shareware apps or playlists also?
Will Notes be saved?
Thanks for your help. Don't wanna screw up his iPod by setting up this user account for him.
Apps are tied to the ID. I assume the new account is just on the iMac and not a new iTunes account?
Yes, you can import photos into iPhoto the same as you would off any camera.
You would need to transfer purchased content (like apps). You can't do that via sync as it's a new computer and all content will be erased as you are aware.
http://support.apple.com/kb/ht1848
What happened to your friend's computer? Was there no backup? -
Can someone please explain the terms that are listed in Activity Monitor? I have
Free - 169MB - I understand this one
Wired - 1.13GB - ??
Active- 557MB - ??
Inactive-157MB - ??
Used - 1.83GB - I understand this one.
I am running VM Fusion when I took this snapshot and I just ordered more memory to upgrade to 4GB to help my windows sessions.
Thanks, and by the way, loving this new MBP: going on 2 weeks now and can't put it down.
About OS X Memory Management and Usage
Reading system memory usage in Activity Monitor
Memory Management in Mac OS X
Performance Guidelines- Memory Management in Mac OS X
A detailed look at memory usage in OS X
Understanding top output in the Terminal
The amount of available RAM for applications is the sum of Free RAM and Inactive RAM. This will change as applications are opened and closed or change from active to inactive status. The Swap figure represents an estimate of the total amount of swap space required for VM if used, but does not necessarily indicate the actual size of the existing swap file. If you are really in need of more RAM that would be indicated by how frequently the system uses VM. If you open the Terminal and run the top command at the prompt you will find information reported on Pageins () and Pageouts (). Pageouts () is the important figure. If the value in the parentheses is 0 (zero) then OS X is not making instantaneous use of VM which means you have adequate physical RAM for the system with the applications you have loaded. If the figure in parentheses is running positive and your hard drive is constantly being used (thrashing) then you need more physical RAM. -
Very high memory usage..possible memory leak? Solaris 10 8/07 x64
Hi,
I noticed yesterday that my machine was becoming increasingly slow, where once it was pretty snappy. It's a Compaq SR5250NX with 1GB of RAM. Upon checking vmstat, I noticed that the "Free" column was ~191MB. Now, the only applications I had open were FireFox 2.0.11, GAIM, and StarOffice. I closed all of them, and the number reported in the "Free" column became approximately 195MB. "Pagefile" was about 5.5x that size. There were no other applications running and it's a single user machine, so I was the only one logged in. System uptime: 9 days.
I logged out and back in to see if that had an effect. It did not. Rebooting, obviously, fixed it. Now with only FireFox, GAIM, and a terminal open, vmstat reports "Free" as ~450MB. I've noticed that if I run vmstat every few seconds, the "Free" total keeps going down. Example:
unknown% vmstat
kthr memory page disk faults cpu
r b w swap free re mf pi po fr de sr cd s0 s1 s2 in sy cs us sy id
0 0 0 870888 450220 9 27 10 0 1 0 8 2 -0 -0 -0 595 1193 569 72 1 28
unknown% vmstat
kthr memory page disk faults cpu
r b w swap free re mf pi po fr de sr cd s0 s1 s2 in sy cs us sy id
0 0 0 870880 450204 9 27 10 0 1 0 8 2 -0 -0 -0 596 1193 569 72 1 28
unknown% vmstat
kthr memory page disk faults cpu
r b w swap free re mf pi po fr de sr cd s0 s1 s2 in sy cs us sy id
0 0 0 870828 450092 9 27 10 0 1 0 8 2 -0 -0 -0 596 1193 570 71 1 28
unknown%
Output of prstat -u Kendall (my username) is as follows:
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
2026 Kendall 124M 70M sleep 59 0 0:01:47 1.4% firefox-bin/7
1093 Kendall 85M 77M sleep 59 0 0:07:15 1.1% Xsun/1
1802 Kendall 60M 15M sleep 59 0 0:00:08 0.1% gnome-terminal/2
1301 Kendall 93M 23M sleep 49 0 0:00:30 0.1% java/14
1259 Kendall 53M 15M sleep 49 0 0:00:32 0.1% gaim/1
2133 Kendall 3312K 2740K cpu1 59 0 0:00:00 0.0% prstat/1
1276 Kendall 51M 12M sleep 59 0 0:00:11 0.0% gnome-netstatus/1
1247 Kendall 46M 10M sleep 59 0 0:00:06 0.0% metacity/1
1274 Kendall 51M 13M sleep 59 0 0:00:05 0.0% wnck-applet/1
1249 Kendall 56M 17M sleep 59 0 0:00:07 0.0% gnome-panel/1
1278 Kendall 48M 9240K sleep 59 0 0:00:05 0.0% mixer_applet2/1
1245 Kendall 9092K 3844K sleep 59 0 0:00:00 0.0% gnome-smproxy/1
1227 Kendall 8244K 4444K sleep 59 0 0:00:01 0.0% xscreensaver/1
1201 Kendall 4252K 1664K sleep 59 0 0:00:00 0.0% sdt_shell/1
1217 Kendall 55M 16M sleep 59 0 0:00:00 0.0% gnome-session/1
779 Kendall 47M 2208K sleep 59 0 0:00:00 0.0% gnome-volcheck/1
746 Kendall 5660K 3660K sleep 59 0 0:00:00 0.0% bonobo-activati/1
1270 Kendall 49M 10M sleep 49 0 0:00:00 0.0% clock-applet/1
1280 Kendall 47M 8904K sleep 59 0 0:00:00 0.0% notification-ar/1
1199 Kendall 2928K 884K sleep 59 0 0:00:00 0.0% dsdm/1
1262 Kendall 47M 2268K sleep 59 0 0:00:00 0.0% gnome-volcheck/1
Total: 37 processes, 62 lwps, load averages: 0.11, 0.98, 1.63
System uptime is 9 hours, 48 minutes. I'm just wondering why the memory usage seems so high to do... nothing. It's obviously a real problem, as the machine turned very slow when vmstat was showing 195MB free.
Any tips, tricks, advice, on which way to go with this?
Thanks!
Apologies for the delayed reply. School has been keeping me nice and busy.
Anyway, here is the output of prstat -Z:
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
2040 Kendall 144M 76M sleep 59 0 0:04:26 2.0% firefox-bin/10
28809 Kendall 201M 193M sleep 59 0 0:42:30 1.9% Xsun/1
2083 Kendall 186M 89M sleep 49 0 0:02:31 1.2% java/58
2260 Kendall 59M 14M sleep 59 0 0:00:00 1.0% gnome-terminal/2
2050 Kendall 63M 21M sleep 49 0 0:01:35 0.6% realplay.bin/4
2265 Kendall 3344K 2780K cpu1 59 0 0:00:00 0.2% prstat/1
29513 Kendall 71M 33M sleep 39 0 0:07:25 0.2% gaim/1
28967 Kendall 56M 18M sleep 59 0 0:00:24 0.1% gnome-panel/1
29060 Kendall 93M 24M sleep 49 0 0:02:58 0.1% java/14
28994 Kendall 51M 13M sleep 59 0 0:00:23 0.1% wnck-applet/1
28965 Kendall 49M 14M sleep 59 0 0:00:33 0.0% metacity/1
649 noaccess 164M 46M sleep 59 0 0:09:54 0.0% java/23
28996 Kendall 51M 12M sleep 59 0 0:00:50 0.0% gnome-netstatus/1
2264 Kendall 1352K 972K sleep 59 0 0:00:00 0.0% csh/1
28963 Kendall 9100K 3792K sleep 59 0 0:00:03 0.0% gnome-smproxy/1
ZONEID NPROC SWAP RSS MEMORY TIME CPU ZONE
0 80 655M 738M 73% 1:18:40 7.7% global
Total: 80 processes, 322 lwps, load averages: 0.27, 0.27, 0.22
Sorry about the bad formatting; it's copied from the terminal.
In any event, we can see that FireFox is sucking up 145MB (crazy...), Xsun 200MB, and java 190MB. I'm running Java Desktop System (Release 3), so I assume that accounts for the high memory usage of the java process. But Xsun, 200MB?
Is this normal and I just need to toss another gig in, or what?
Thanks -
Memory problems: computer freezes on heavy memory usage
Ever since I changed the internal hard drive of my MacBook Pro (I installed a Seagate Barracuda 1Tb disk), I have the following problem:
Whenever an application needs a lot of memory (for example Parallels installing a new Windows system, or Acrobat Pro optimizing a big PDF file), the computer progressively freezes. What I mean by freezing is that I can still move the mouse cursor, but clicking has no effect, and there is no interaction whatsoever with the GUI. Even the clock on the menu bar stops.
I never had the problem before. I guess that under normal conditions the Finder will always keep a few MB of RAM for its own use. After all, I have 4 GB of RAM installed.
Is there some way I can prevent the computer from freezing? Has this something to do with VM? Ever since I changed the hard disk, this problem happens once or twice every day, and the only solution is to force a restart, losing all my work…
Thanks in advance for any advice…
First, do not assign more than half your available RAM to a VM. It's best to let the VM software's configuration select the amount used by its defaults.
Second, watch how many concurrent applications you attempt to use. Monitor your memory usage in Activity Monitor. If the available RAM (Inactive RAM plus Free RAM) is low and Free RAM is near zero, then you have too many concurrent applications running. This forces the system to start using the disk-based virtual memory file. With too many apps swapping into the VM file, disk thrashing will occur that can ultimately lead to the computer appearing to freeze up, or actually freezing up.
Of course your alternative is to install more RAM if in fact you are over-taxing what you have now. You may find the following helpful reading:
About OS X Memory Management and Usage
Reading system memory usage in Activity Monitor
Memory Management in Mac OS X
Performance Guidelines- Memory Management in Mac OS X
A detailed look at memory usage in OS X
Understanding top output in the Terminal
The amount of available RAM for applications is the sum of Free RAM and Inactive RAM. This will change as applications are opened and closed or change from active to inactive status. The Swap figure represents an estimate of the total amount of swap space required for VM if used, but does not necessarily indicate the actual size of the existing swap file. If you are really in need of more RAM that would be indicated by how frequently the system uses VM. If you open the Terminal and run the top command at the prompt you will find information reported on Pageins () and Pageouts (). Pageouts () is the important figure. If the value in the parentheses is 0 (zero) then OS X is not making instantaneous use of VM which means you have adequate physical RAM for the system with the applications you have loaded. If the figure in parentheses is running positive and your hard drive is constantly being used (thrashing) then you need more physical RAM. -
High Eden Java Memory Usage/Garbage Collection
Hi,
I am trying to make sure that my ColdFusion Server is optimised to the max, and to find out what the normal limits are.
Basically, it looks like my servers can run slow at times, but it is possible this is caused by a very old, bloated code base.
JRun can sometimes have very high CPU usage, so I purchased FusionReactor to see what is going on under the hood.
Here are my current Java settings (running v6u24):
java.args=-server -Xmx4096m -Xms4096m -XX:MaxPermSize=256m -XX:PermSize=256m -Dsun.rmi.dgc.client.gcInterval=600000 -Dsun.rmi.dgc.server.gcInterval=600000 -Dsun.io.useCanonCaches=false -XX:+UseParallelGC -Xbatch ........
With regard to memory, the only space that seems to be running a lot of garbage collection is the Eden space. It climbs to nearly 1.2GB roughly every minute, at which point GC kicks in and the usage drops to about 100MB.
Survivor memory grows to about 80-100MB over the space of 10 minutes but drops to 0 after the scheduled full GC runs. Old Gen memory fluctuates between 225MB and 350MB with small steps (~50MB) up or down when full GC runs every 10 minutes.
I initially had the heap set to 2GB in total, giving about 600MB to the Eden space. When I looked at the graphs from FusionReactor I could see there was (minor) garbage collection about 2-3 times a minute, whenever memory usage maxed out the entire 600MB, which seemed a high frequency to my untrained eye. I then upped the memory to 4GB in total (~1.2GB automatically given to the Eden space) to see the difference, and saw that GC happened 1-2 times per minute.
Is it normal in Coldfusion that the Eden memory would grow so quickly and have garbage collection run so often? i.e do these graphs look normal?
Also should I somehow redistribute the memory available to give the Eden memory more since it seems to be where all the action is?
Any other advice for performance improvements would be much appreciated.
Note: These graphs are not from a period where jrun had high CPU.
Here are the graphs:
PS Eden Space Graph
PS Survivor Space Graph
PS Old Gen Graph
PS Perm Gen Graph
Heap Memory Graph
Heap/Non Heap Memory Graph
CPU Graph
Request Average Execution Time Graph
Request Activity Graph
Code Cache Graph
Hi,
>Is it normal in Coldfusion that the Eden memory would grow so quickly and have garbage collection run so often?
Yes normal to garbage collect Eden often. That is a minor garbage collection.
>Also should I somehow redistribute the memory available to give the Eden memory more since it seems to be where all the action is?
Sometimes it is good to set Eden (Eden and its two Survivor Spaces combined make up the New or Young Generation part of the JVM heap) to a smaller size. I know what you're thinking: why make it smaller when you want to make it bigger? Give less a try (sometimes less = more, and bigger is not always better) and monitor the situation. I like to use the -Xmn switch; some sources say to use other methods. Perhaps you could try java.args=-server -Xmx4096m -Xms4096m -Xmn172m etc. I'd better mention: make a backup copy of jvm.config before applying changes. Having said that, now you know how you can set the size bigger if you want.
I think the JVM is perhaps making some poor decisions in sizing the heap. With Eden growing to 1GB and then being evacuated, not many objects are surviving, and therefore few are being promoted to the Old Generation. This ultimately means an object will need to be loaded into Eden again later rather than being referenced in the Old Generation part of the heap, which adds up to poor performance.
>Any other advice for performance improvements would be much appreciated.
You are using the Parallel garbage collector. Perhaps you could tune the number of GC threads to reduce the duration of the collections: jvm args ...-XX:+UseParallelGC -XX:ParallelGCThreads=N etc., where N = CPU cores (e.g. quad core = 4).
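If you want to sanity-check the sizes the JVM actually chose after changing -Xmn, the same per-generation numbers FusionReactor graphs ("PS Eden Space", "PS Old Gen", etc.) are exposed in-process through the standard MemoryPoolMXBean API. A minimal sketch (my illustration, not part of the thread; pool names vary by collector, so don't hard-code them):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

// Prints the current used/max size of every JVM memory pool
// (e.g. "PS Eden Space", "PS Survivor Space", "PS Old Gen" under
// the parallel collector).
class HeapPoolDump {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getUsage() == null) continue; // pool may be invalid
            long used = pool.getUsage().getUsed();
            long max  = pool.getUsage().getMax(); // -1 if undefined
            System.out.printf("%-25s used=%,d max=%,d%n", pool.getName(), used, max);
        }
    }
}
```

Running this before and after a settings change gives a quick, graph-free way to confirm what -Xmn actually did to the Young Generation.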
HTH, Carl. -
Please make the memory usage built-up a top priority to problemsolve.
Hi, Firefox is my favorite browser (on OS X), but the memory usage is really getting annoying. It's been 2 years, and I have to restart Firefox every couple of hours to keep the memory usage in check.
I think this is a serious issue with your browser. None of the other browsers (Safari, Chrome)I use have this problem, however they're not that user-friendly in my opinion.
Like I said, it has been 2 years and the forums are filled with this topic, without any real solutions. Are there actually people on this problem? It's a pretty big problem, and honestly I don't understand why there's no correspondence from Firefox acknowledging the problem and saying they're working on it or something.
Before those 2 yrs, never a hitch. The problem must've been introduced since FF21 or 22, not sure.
I'm experiencing exactly the same as "Cyberpawz":
https://support.mozilla.org/en-US/questions/958108?page=6
I'm an avid Firefox user and just want the voices of the users experiencing this problem to be heard. And of course the problem to be resolved.
Thanks for listening!
The memory issues will be real problems; just look at the viewing stats on the thread you linked to:
'' 104 replies, 94 have this problem, 7788 views ''
The easy problems will have been identified and fixed. The difficult or least common ones need users able to help troubleshoot, and most just want to complain, not file bugs. (Please note such bugs will likely only be followed up if it is a Firefox bug or a problem with a single popular addon.)
If you follow the advice from ''Cor-el'' and try with a new profile with all plugins disabled, you should find the problem goes away. -
Solaris 10 Kernel memory usage
We have a 32 GB RAM server running about 14 zones. There are multiple databases, application servers, web servers, and ftp servers running in the various zones.
I understand that using ZFS will increase kernel memory usage, however I am a bit concerned at this point.
root@servername:~/zonecfg #mdb -k
Loading modules: [ unix krtld genunix specfs dtrace uppc pcplusmp ufs md mpt ip indmux ptm nfs ]
::memstat
Page Summary Pages MB %Tot
Kernel 4108442 16048 49%
Anon 3769634 14725 45%
Exec and libs 9098 35 0%
Page cache 29612 115 0%
Free (cachelist) 99437 388 1%
Free (freelist) 369040 1441 4%
Total 8385263 32754
Physical 8176401 31939
Out of 32GB of RAM, 16GB is being used by the kernel. Is there a way to find out how much of that kernel memory is due to ZFS?
It just seems an excessively high amount of our memory is going to the kernel, even with ZFS being used on the server.
root@servername:~ #mdb -k
Loading modules: [ unix krtld genunix specfs dtrace uppc pcplusmp ufs md mpt ip hook neti sctp arp usba uhci fcp fctl qlc nca lofs zfs random fcip crypto logindmux ptm nfs ]
::memstat
Page Summary Pages MB %Tot
Kernel 4314678 16854 51%
Anon 3538066 13820 42%
Exec and libs 9249 36 0%
Page cache 29347 114 0%
Free (cachelist) 89647 350 1%
Free (freelist) 404276 1579 5%
Total 8385263 32754
Physical 8176401 31939
:quit
root@servername:~ #kstat -m zfs
module: zfs instance: 0
name: arcstats class: misc
c 12451650535
c_max 33272295424
c_min 1073313664
crtime 175.759605187
deleted 26773228
demand_data_hits 89284658
demand_data_misses 1995438
demand_metadata_hits 1139759543
demand_metadata_misses 5671445
evict_skip 5105167
hash_chain_max 15
hash_chains 296214
hash_collisions 75773190
hash_elements 995458
hash_elements_max 1576353
hits 1552496231
mfu_ghost_hits 4321964
mfu_hits 1263340670
misses 11984648
mru_ghost_hits 474500
mru_hits 57043004
mutex_miss 106728
p 9304845931
prefetch_data_hits 10792085
prefetch_data_misses 3571943
prefetch_metadata_hits 312659945
prefetch_metadata_misses 745822
recycle_miss 2775287
size 12451397120
snaptime 2410363.20494097
So it looks like our kernel is using 16GB, and ZFS is using ~12GB for its ARC cache. Is 4GB of kernel memory for everything else normal? It still seems like a lot of memory to me, but I don't know how all the zones affect the amount of memory the kernel needs. -
VMWare ESX 4.1 - Guest memory usage
Hi,
We are running an ESX Server 4.1 with 36 GB RAM. We have installed 4 VM (Windows Server 2008 R2 and MS SQL Server 2008 R2), each defined with 8 GB RAM (1 ERP 6.0 EHP3 ABAPJAVA, 1 ERP 6.0 EHP4 ABAPJAVA, 1 Enterprise Portal 7.02 and 1 Solution Manager 7.0 EHP1). Each SAP system is up and running, performances are good, but only a few users are working on each of them. I see a warning in Vcenter concerning the RAM usage (94%), but the guest memory usage is low (from 5 to 12% in average during the last month).
How can I optimize?
Kind regards,
Christophe
Hi Christophe,
first of all, the warning you see is not critical. It is just a standard warning that comes up when a standard memory-usage threshold is reached. The 4 GB of RAM you have left on the host should be sufficient to cover the memory allocation overheads of the VMs and the memory needed by the ESX hypervisor.
I assume you reserved 100 % of the assigned memory (mandatory for SAP virtualization). Reserved memory cannot be used by other VMs, therefore it is "not assignable" from the perspective of the hypervisor. This could be the reason why the threshold of the warning is reached although the utilization inside the guest ("active" memory) is low.
Kind regards,
Matthias -
Diagnostics Workload Analysis - Java Memory Usage gives BI query input
Dears
I have set up diagnostics (aka root cause analysis) at a customer site, and I'm bumping into a problem: on the Java Memory Usage tab in Workload Analysis, only the BI query input overview is shown.
Sol Man 7.0 EHP1 SPS20 (ST component SP19)
Wily Introscope 8.2.3.5
Introscope Agent 8.2.3.5
Diagnostics Agent 7.20
When I click on the check button there I get the following:
Value "JAVA MEMORY USAGE" for variable "E2E Metric Type Variable" is invalid
I already checked multiple SAP Notes, including the implementation of the latest EWA EA WA xml file for the Sol Man stack version.
I already reactivated BI content using report CCMS_BI_SETUP_E2E and it gave no errors.
The content is getting filled in Wily Introscope, extractors on Solution Manager are running and capturing records (>0).
Did anyone come across this issue already?
ERROR MESSAGE:
Diagnosis
Characteristic value "JAVA MEMORY USAGE" is not valid for variable E2E Metric Type Variable.
Procedure
Enter a valid value for the characteristic. The value help, for example, provides you with suggestions. If no information is available here, then perhaps no characteristic values exist for the characteristic.
If the variable for 0DATE or 0CALDAY has been created and is being used as a key date for a hierarchy, check whether the hierarchies used are valid for this characteristic. The same is valid for variables that refer to the hierarchy version.
Notification Number BRAIN 643
Kind regards
Tom
Edited by: Tom Cenens on Mar 10, 2011 2:30 PM

Hello Paul
I checked the guide earlier on today. I also asked someone with more BI knowledge to take a look with me, but it seems the root cause analysis data fetching isn't really the same as what is normally done in BI with BI cubes, so it's hard to determine why the data fetch is not working properly.
The extractors are running fine, I couldn't find any more errors in the diagnostics agent log files (in debug mode) and I don't find other errors for the SAP system.
I tried reactivating the BI content but it seems to be fine (no errors). I reran the managed system setup which also works.
One of the problems I did notice is that the managed SAP systems are only half virtualized. They aren't completely virtualized (no separate IP address) but they are using virtual hostnames, which also causes issues with Root Cause Analysis: I cannot install only one agent because I cannot assign it to the managed systems, and when I install one agent per SAP system I get the message that there are already agents reporting to the Enterprise Manager residing on the same host. I don't know if this could influence the data extractor; I doubt it, because in Wily the data is being fetched fine.
The only thing that is not working at the moment is the Workload Analysis - Java Memory Analysis tab. It holds the key performance indicators for the J2EE engine (garbage collection %). I can see them in Wily Introscope, where they are available and fine.
When I looked at the infocubes together with a BI team member, it seemed the infocube for daily stats on performance was getting filled properly (checked through RSA1), but the infocube for hourly stats wasn't. This is also visible in Workload Analysis: data from yesterday displays fine in the overview, for example, but data from an hour ago doesn't.
I do have to state that the Solution Manager doesn't meet the prerequisites (post-processing notes are not present after the SP-stack update, SLD content is not up to date), but I could not push through those changes within a short timeframe, as the Solution Manager is also used for other scenarios and it would be too disruptive at this moment.
If I can't fix it I will have to explain to the customer why some parts are not working and request them to handle the missing items so the prerequisites are met.
One of the notes I found described a similar issue and noted it could be caused due to an old XML file structure so I updated the XML file to the latest version.
Strangely enough, SAPOscol also threw errors in the beginning. I had the Host Agent installed and updated, and the SAPOscol service was running properly through the Host Agent as a service. The diagnostics agent tries to start SAPOscol in /usr/sap/<SID>/SMDA<instance number>/exe, which does not hold the SAPOscol executable. I suppose it's a bug from SAP? After copying the SAPOscol executable from the Host Agent to the location of the SMD Agent, the error disappeared. Instead, the agent tries to start SAPOscol, notices it is already running, and writes in the log that SAPOscol is already running properly and a startup is not necessary.
To me it comes down to the point where I have little faith in the scenario if the Solution Manager and the managed SAP systems are not 100% maintained and up to date. I could open a customer message, but the first advice will be to patch the Solution Manager and meet the prerequisites.
Another pain point is that if the managed SAP systems are not 100% correct in transaction SMSY, it also causes heaps of issues. Changing the SAP system there isn't a fast operation, as it can be included in numerous logical components, projects and scenarios (ChaRM), and it causes disruption to daily work.
All in all I have mixed feelings about the implementation. I want to deliver a fully working scenario, but that is near impossible because the prerequisites are not met. I hope the customer will still be happy with what is delivered.
I sure do hope some of these issues are handled in Solution Manager 7.1. I will certainly mail my concerns to the development team and hope they can handle some or all of them.
Kind regards
Tom -
JMX programmatically control memory usage
Hi,
I would like a server to programmatically receive updates concerning the memory usage of its clients. JMX seems to be able to do this. I have not found an example on how to do this programmatically.
Is there an easier way to do this other than using JMX? If not, where can I start and learn how to do this? I have very little time and there is a sea of JMX tutorials, articles and code samples - unfortunately not relevant to what I need to do.
Can somebody at least guide me to some resources specific to my purpose?
Thanks

// Connect to the client JVM's platform MBean server over RMI
String jmxUrl = String.format("service:jmx:rmi:///jndi/rmi://%s:%s/jmxrmi", server, port);
MBeanServerConnection con = JMXConnectorFactory.connect(new JMXServiceURL(jmxUrl), null).getMBeanServerConnection();
List<MemoryPoolMXBean> pools = new ArrayList<MemoryPoolMXBean>();
Set<ObjectInstance> beans = null;
try {
    beans = con.queryMBeans(new ObjectName("java.lang:type=MemoryPool,name=*"), null);
} catch (MalformedObjectNameException e) {
    e.printStackTrace();
}
// Build a local proxy for each remote memory pool MXBean
for (ObjectInstance bean : beans) {
    pools.add(ManagementFactory.newPlatformMXBeanProxy(con, bean.getObjectName().toString(), MemoryPoolMXBean.class));
}

Message was edited by:
octoberdaniel -
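To make the reply above concrete, here is a minimal, self-contained sketch of the same idea against the local JVM. The class name MemoryUpdates is mine, not from the thread; it also registers a listener for usage-threshold notifications, which is the push-style "updates" the question asks about. For the remote case, the beans would instead come from the MBeanServerConnection proxies shown in the reply.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;
import javax.management.NotificationEmitter;
import javax.management.NotificationListener;

public class MemoryUpdates {
    public static void main(String[] args) {
        // Pull-style: read current heap usage on demand.
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        System.out.println("heap used=" + heap.getUsed() + " committed=" + heap.getCommitted());

        // Push-style: the platform MemoryMXBean is a NotificationEmitter and
        // sends a notification when a pool crosses its usage threshold.
        NotificationListener listener = (notification, handback) ->
                System.out.println("memory notification: " + notification.getType());
        ((NotificationEmitter) mem).addNotificationListener(listener, null, null);

        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.isUsageThresholdSupported()) {
                long max = pool.getUsage().getMax();
                if (max > 0) {
                    // Notify once the pool passes roughly 80% of its maximum size.
                    pool.setUsageThreshold((long) (max * 0.8));
                }
            }
        }
        System.out.println("listener registered, pools=" + ManagementFactory.getMemoryPoolMXBeans().size());
    }
}
```

On a remote connection the same reads go through the newPlatformMXBeanProxy proxies, and the listener can be registered with MBeanServerConnection.addNotificationListener on the java.lang:type=Memory ObjectName.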
Problem with Firefox and very heavy memory usage
For several releases now, Firefox has been particularly heavy on memory usage. With its most recent version, with a single browser instance and only one tab, Firefox consumes more memory than any other application running on my Windows PC. The memory footprint grows significantly as I open additional tabs, reaching almost 1 GB when there are 7 or 8 tabs open. This is just as true with no extensions or plugins as with the usual set (Firebug, Firecookie, FireShot, Xmarks). Right now, with 2 tabs, the memory is at 217,128 K and climbing; CPU is between 0.2 and 1.5%.
I have read dozens of threads offering "helpful" suggestions and tried any that seemed reasonable. But as most others who experience Firefox's memory problems have found, none address the issue.
Firefox is an excellent tool for web developers, and I rely on it heavily, but have now resorted to using Chrome as the default and only open Firefox when I must, in order to test or debug a page.
Is there no hope of resolving this problem? So far, from responses to other similar threads, the response has been to deny any responsibility and blame extensions and/or plugins. This is not helpful and not accurate. Will Firefox accept ownership of this problem and try to address it properly, or must we continue to suffer for your failings?

55%, it's still 1.6 GB... there shouldn't be a problem scanning something that it says will take up 300 MB, then actually only takes up 70 MB.
And I'm not wrong: it obviously isn't releasing the memory when other applications need it, because I have to close PS before it will release it. Yes, it probably is supposed to release it, but it isn't.
Thank you for your answer (even if it did appear to me to be a bit rude/shouty, perhaps something more polite than "Wrong!" next time) but I'm sitting at my computer, and I can see what is using how much memory and when, you can't.