Solaris 8 memory usage
Is there a tool like McDougal's prtmem that will show accurate (or more accurate) memory usage than vmstat's freemem will show?
tzzhc4 wrote:
prtmem was part of the MEMTOOLS package you just listed; I believe it relies on a kernel module in that package and doesn't work on any of the newer kernel revisions.

But it certainly works on 8, right? And that's the OS you were referring to, so I assumed you were thinking of something else.
From that page:
System Requirements: SPARC/Solaris 2.6
SPARC/Solaris 7
SPARC/Solaris 8
SPARC/Solaris 9
x86 /Solaris 8
x86 /Solaris 9
So if that's what you want to use, go for it!
I thought freemem didn't include pages that had an identity, so there could be more memory free than was actually listed in freemem.

What do you mean by 'identity'? Most pages are either allocated/reserved by a process (in use) or used by the disk cache. Under Solaris 7 and earlier, both reduced the 'freemem' number. Under 8 and later, only the first one does.
Darren
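On Solaris 8 and later the two pools can be inspected directly. The commands below are Solaris-specific, so treat this as a sketch rather than something portable:

```shell
# freemem as the kernel reports it, in pages (unix:0:system_pages kstat)
kstat -p unix:0:system_pages:freemem

# ::memstat separates "Free (cachelist)" (pages that still have an identity
# but can be reclaimed instantly) from "Free (freelist)"
echo "::memstat" | mdb -k
```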
Similar Messages
-
Shared memory: apache memory usage in solaris 10
Hi people, I have setup a project for the apache userID and set the new equivalent of shmmax for the user via projadd. In apache I crank up StartServers to 100 but the RAM is soon exhausted - apache appears not to use shared memory under solaris 10. Under the same version of apache in solaris 9 I can fire up 100 apache startservers with little RAM usage. Any ideas what can cause this / what else I need to do? Thanks!
a) How or why does solaris choose to share memory between processes from the same program invoked multiple times if that program has not been specifically coded to use shared memory?

Take a look at 'pmap -x' output for a process.
Basically it depend on where the memory comes from. If it's a page loaded from disk (executable, shared library) then the page begins life shared among all programs using the same page. So a small program with lots of shared libraries mapped may have a large memory footprint but have most of it shared.
If the page is written to, then a new copy is created that is no longer shared. If the program requests memory (malloc()), then the heap is grown and it gathers more private (non-shared) page mappings.
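The shared/private split described above can be totalled from 'pmap -x'-style output. The sample mapping lines below are invented for illustration (real Solaris `pmap -x` columns are Address, Kbytes, RSS, Anon, Locked, Mode, Mapped File); Anon is a rough proxy for private pages, so RSS minus Anon approximates the shared portion:

```shell
# Invented sample of 'pmap -x' mapping lines (illustrative only)
pmap_sample='00010000     64     64      -      - r-x--  httpd
00034000      8      8      8      - rwx--  httpd
FF280000    688    688      -      - r-x--  libc.so.1'

# Sum RSS (column 3) and Anon (column 4); Anon approximates private pages
echo "$pmap_sample" | awk '
    {rss += $3; anon += ($4 == "-" ? 0 : $4)}
    END {printf "RSS %d KB, private %d KB, shared approx %d KB\n",
         rss, anon, rss - anon}'
# prints: RSS 760 KB, private 8 KB, shared approx 752 KB
```

Applied to one of the real apache processes, the same arithmetic would show how much of its footprint is actually shared executable text and libraries.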
Simply: if we run pmap / ipcs we can see a shared memory reference for our oracle database and ldap server. There is no entry for apache. But the total memory usage is far, far less than all the apache procs' individual memory totted up (all 100 of them, in prstat). So there is some hidden sharing going on somewhere that solaris(2.9) is doing, but not showing in pmap or ipcs. (Virtually no swap is being used.)

pmap -x should be showing you exactly which pages are shared and which are not.
b) Under solaris 10, each apache process takes up precisely the memory reported in prstat - add up the 100 apache memory details and you get the total RAM in use. Crank up the number of procs any more and you get out-of-memory errors, so it looks like prstat is pretty good here. The question is - why on solaris 10 is apache not 'shared' but it is on solaris 9? We set up all the usual project details for this user (in /etc/projects), but I'm guessing now that these project tweaks where you explicitly set the shared memory for a user only take effect for programs explicitly coded to use shared memory, e.g. the oracle database, which correctly shows up a shared memory reference in ipcs.

We can fire up thousands of apaches on the 2.9 system without running out of memory - both machines have the same RAM! But the binary versions of apache are exactly the same, and the config directives are identical. Please tell me that there is something really simple we have missed!

On Solaris 10, do all the pages for one of the apache processes appear private? That would be really, really unusual.
Darren -
Solaris 10 - Restrict memory usage
Bonjour,
I use Solaris 10 Release 6/06 on SPARC system.
I need to restrict the memory usage for users.
Unfortunately, for the moment, we can't increase the amount of RAM.
So, as a first step, I decided to use projects and max-shm-memory. But this applies only to shared memory segments, not real memory usage.
rcapd uses swap instead of RAM, so it does not solve my issue.
How can I limit the memory usage of a user? Is it possible on Solaris 10 (without zones)?
Thx.
Guillaume

Does ulimit help you? This applies to the shell rather than a user...
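If capping resident memory per user is acceptable, the resource-cap facility can also be driven through the project database. The project name and cap value below are hypothetical, and the commands are Solaris-specific, so this is a sketch rather than a recipe:

```shell
# Hypothetical: cap the project "user.guillaume" at 512 MB resident memory.
# rcap.max-rss is enforced by rcapd, which pages a project out above the cap
# (so it limits RSS; the paged-out memory goes to swap, as noted above).
projmod -s -K "rcap.max-rss=536870912" user.guillaume

# Start the resource-capping daemon and watch its activity
svcadm enable system/rcap
rcapstat 5
```

Note that this has the same limitation the poster mentions: rcapd enforces the cap by paging out, so it bounds resident memory rather than total allocation.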
[ulimit (1)|http://docs.sun.com/app/docs/doc/816-5165/ulimit-1?l=en&a=view&q=ulimit] -
Solaris process memory usage increase but not forever
On Solaris 10 I have a multithreaded process with a strange behaviour. It manages complicated C++ structures (RWTVal or RWPtr). These structures are built from data stored in a database (using Pro*C). Each hour the process looks for new information in the database, builds new structures in memory, and frees the older data. But each time it repeats this procedure, the process memory usage increases by several MB (12-16MB). The process's memory usage grows from 100MB until near 1.4GB. Up to this point, it seems the process has memory leaks. But the strange behaviour is that after this point, the process stops growing. When I try to look for memory leaks (using the Purify tool) the process doesn't grow and no significant leaks were shown. Did anyone find a similar behaviour, or can you explain what could be happening?
markza wrote:
Hi, thanks for responding
Ja, I guess that's possible, but to do it all row by row seems ridiculous, and it'll be so time consuming and sluggish surely. I mean, for a month's worth of data (which is realistic) that's 44640 individual queries. If push comes to shove, then I'll have to try that for sure.
You can see by the example that I'm saving it to a text file, in csv format. So it needs to be a string array; a cluster won't be of much help, I don't think.
The only other way I can think of is to break it up into more manageable chunks...maybe pull each column separately in a for loop and build up a 2D array like that until the spreadsheet storing.
You only do 1 query, but instead of Fetching All (as the Select does) you'll use the cursor to step through the data.
You can use Format Into String or Write To Spreadsheet File with doubles.
You can break it down to get the data day by day instead of a full month at once.
/Y
LabVIEW 8.2 - 2014
"Only dead fish swim downstream" - "My life for Kudos!" - "Dumb people repeat old mistakes - smart ones create new ones."
G# - Free award winning reference based OOP for LV -
Very high memory usage..possible memory leak? Solaris 10 8/07 x64
Hi,
I noticed yesterday that my machine was becoming increasingly slow, where once it was pretty snappy. It's a Compaq SR5250NX with 1GB of RAM. Upon checking vmstat, I noticed that the "Free" column was ~191MB. Now, the only applications I had open were FireFox 2.0.11, GAIM, and StarOffice. I closed all of them, and the number reported in the "Free" column became approximately 195MB. "Pagefile" was about 5.5x that size. There were no other applications running and it's a single user machine, so I was the only one logged in. System uptime: 9 days.
I logged out and logged back in to see if that had an effect. It did not. Rebooted, and obviously that fixed it. Now with only FireFox, GAIM, and a terminal open, vmstat reports "Free" as ~450MB. I've noticed that if I run vmstat every few seconds, the "Free" total keeps going down. Example:
unknown% vmstat
kthr memory page disk faults cpu
r b w swap free re mf pi po fr de sr cd s0 s1 s2 in sy cs us sy id
0 0 0 870888 450220 9 27 10 0 1 0 8 2 -0 -0 -0 595 1193 569 72 1 28
unknown% vmstat
kthr memory page disk faults cpu
r b w swap free re mf pi po fr de sr cd s0 s1 s2 in sy cs us sy id
0 0 0 870880 450204 9 27 10 0 1 0 8 2 -0 -0 -0 596 1193 569 72 1 28
unknown% vmstat
kthr memory page disk faults cpu
r b w swap free re mf pi po fr de sr cd s0 s1 s2 in sy cs us sy id
0 0 0 870828 450092 9 27 10 0 1 0 8 2 -0 -0 -0 596 1193 570 71 1 28
unknown%

Output of prstat -u Kendall (my username) is as follows:
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
2026 Kendall 124M 70M sleep 59 0 0:01:47 1.4% firefox-bin/7
1093 Kendall 85M 77M sleep 59 0 0:07:15 1.1% Xsun/1
1802 Kendall 60M 15M sleep 59 0 0:00:08 0.1% gnome-terminal/2
1301 Kendall 93M 23M sleep 49 0 0:00:30 0.1% java/14
1259 Kendall 53M 15M sleep 49 0 0:00:32 0.1% gaim/1
2133 Kendall 3312K 2740K cpu1 59 0 0:00:00 0.0% prstat/1
1276 Kendall 51M 12M sleep 59 0 0:00:11 0.0% gnome-netstatus/1
1247 Kendall 46M 10M sleep 59 0 0:00:06 0.0% metacity/1
1274 Kendall 51M 13M sleep 59 0 0:00:05 0.0% wnck-applet/1
1249 Kendall 56M 17M sleep 59 0 0:00:07 0.0% gnome-panel/1
1278 Kendall 48M 9240K sleep 59 0 0:00:05 0.0% mixer_applet2/1
1245 Kendall 9092K 3844K sleep 59 0 0:00:00 0.0% gnome-smproxy/1
1227 Kendall 8244K 4444K sleep 59 0 0:00:01 0.0% xscreensaver/1
1201 Kendall 4252K 1664K sleep 59 0 0:00:00 0.0% sdt_shell/1
1217 Kendall 55M 16M sleep 59 0 0:00:00 0.0% gnome-session/1
779 Kendall 47M 2208K sleep 59 0 0:00:00 0.0% gnome-volcheck/1
746 Kendall 5660K 3660K sleep 59 0 0:00:00 0.0% bonobo-activati/1
1270 Kendall 49M 10M sleep 49 0 0:00:00 0.0% clock-applet/1
1280 Kendall 47M 8904K sleep 59 0 0:00:00 0.0% notification-ar/1
1199 Kendall 2928K 884K sleep 59 0 0:00:00 0.0% dsdm/1
1262 Kendall 47M 2268K sleep 59 0 0:00:00 0.0% gnome-volcheck/1
Total: 37 processes, 62 lwps, load averages: 0.11, 0.98, 1.63

System uptime is 9 hours, 48 minutes. I'm just wondering why the memory usage seems so high to do... nothing. It's obviously a real problem, as the machine turned very slow when vmstat was showing 195MB free.
Any tips, tricks, advice, on which way to go with this?
Thanks!

Apologies for the delayed reply. School has been keeping me nice and busy.
Anyway, here is the output of prstat -Z:
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
2040 Kendall 144M 76M sleep 59 0 0:04:26 2.0% firefox-bin/10
28809 Kendall 201M 193M sleep 59 0 0:42:30 1.9% Xsun/1
2083 Kendall 186M 89M sleep 49 0 0:02:31 1.2% java/58
2260 Kendall 59M 14M sleep 59 0 0:00:00 1.0% gnome-terminal/2
2050 Kendall 63M 21M sleep 49 0 0:01:35 0.6% realplay.bin/4
2265 Kendall 3344K 2780K cpu1 59 0 0:00:00 0.2% prstat/1
29513 Kendall 71M 33M sleep 39 0 0:07:25 0.2% gaim/1
28967 Kendall 56M 18M sleep 59 0 0:00:24 0.1% gnome-panel/1
29060 Kendall 93M 24M sleep 49 0 0:02:58 0.1% java/14
28994 Kendall 51M 13M sleep 59 0 0:00:23 0.1% wnck-applet/1
28965 Kendall 49M 14M sleep 59 0 0:00:33 0.0% metacity/1
649 noaccess 164M 46M sleep 59 0 0:09:54 0.0% java/23
28996 Kendall 51M 12M sleep 59 0 0:00:50 0.0% gnome-netstatus/1
2264 Kendall 1352K 972K sleep 59 0 0:00:00 0.0% csh/1
28963 Kendall 9100K 3792K sleep 59 0 0:00:03 0.0% gnome-smproxy/1
ZONEID NPROC SWAP RSS MEMORY TIME CPU ZONE
0 80 655M 738M 73% 1:18:40 7.7% global
Total: 80 processes, 322 lwps, load averages: 0.27, 0.27, 0.22

Sorry about the bad formatting, it's copied from the terminal.
In any event, we can see that Firefox is sucking up 145MB (??!?!!? crazy...), Xsun 200MB, and java 190MB. I'm running Java Desktop System (Release 3), so I assume that is what accounts for the high memory usage re: the java process. But Xsun, 200MB?
Is this normal and I just need to toss another gig in, or what?
Thanks -
Solaris 10 Kernel memory usage
We have a 32 GB RAM server running about 14 zones. There are multiple databases, application servers, web servers, and ftp servers running in the various zones.
I understand that using ZFS will increase kernel memory usage, however I am a bit concerned at this point.
root@servername:~/zonecfg #mdb -k
Loading modules: [ unix krtld genunix specfs dtrace uppc pcplusmp ufs md mpt ip indmux ptm nfs ]
::memstat
Page Summary Pages MB %Tot
Kernel 4108442 16048 49%
Anon 3769634 14725 45%
Exec and libs 9098 35 0%
Page cache 29612 115 0%
Free (cachelist) 99437 388 1%
Free (freelist) 369040 1441 4%
Total 8385263 32754
Physical 8176401 31939
Out of 32GB of RAM, 16GB is being used by the kernel. Is there a way to find out how much of that kernel memory is due to ZFS?
It just seems an excessively high amount of our memory is going to the kernel, even with ZFS being used on the server.

root@servername:~ #mdb -k
Loading modules: [ unix krtld genunix specfs dtrace uppc pcplusmp ufs md mpt ip hook neti sctp arp usba uhci fcp fctl qlc nca lofs zfs random fcip crypto logindmux ptm nfs ]
::memstat
Page Summary Pages MB %Tot
Kernel 4314678 16854 51%
Anon 3538066 13820 42%
Exec and libs 9249 36 0%
Page cache 29347 114 0%
Free (cachelist) 89647 350 1%
Free (freelist) 404276 1579 5%
Total 8385263 32754
Physical 8176401 31939
:quit
root@servername:~ #kstat -m zfs
module: zfs instance: 0
name: arcstats class: misc
c 12451650535
c_max 33272295424
c_min 1073313664
crtime 175.759605187
deleted 26773228
demand_data_hits 89284658
demand_data_misses 1995438
demand_metadata_hits 1139759543
demand_metadata_misses 5671445
evict_skip 5105167
hash_chain_max 15
hash_chains 296214
hash_collisions 75773190
hash_elements 995458
hash_elements_max 1576353
hits 1552496231
mfu_ghost_hits 4321964
mfu_hits 1263340670
misses 11984648
mru_ghost_hits 474500
mru_hits 57043004
mutex_miss 106728
p 9304845931
prefetch_data_hits 10792085
prefetch_data_misses 3571943
prefetch_metadata_hits 312659945
prefetch_metadata_misses 745822
recycle_miss 2775287
size 12451397120
snaptime 2410363.20494097
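The ARC's share of the kernel figure can be computed directly from the output above: arcstats `size` is in bytes, while `::memstat` reports MB. A small portable sketch of the subtraction, with the constants copied from the output above:

```shell
# arcstats 'size' is in bytes; the ::memstat 'Kernel' row is in MB
arc_bytes=12451397120
kernel_mb=16854
arc_mb=$((arc_bytes / 1024 / 1024))
echo "ARC: ${arc_mb} MB, kernel excluding ARC: $((kernel_mb - arc_mb)) MB"
# prints: ARC: 11874 MB, kernel excluding ARC: 4980 MB
```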
So it looks like our kernel is using 16GB and ZFS is using ~12GB for its ARC cache. Is 4GB of kernel memory for everything else normal? It still seems like a lot of memory to me, but I don't know how all the zones affect the amount of memory the kernel needs. -
How to specify maximum memory usage for Java VM in Tomcat?
Does any one know how to setup memory usage for Java VM, such as "-Xmx256m" parameter, in Tomcat?
I'm using Tomcat 3.x in Apache web server on Sun Solaris platform. I already tried to add the following line into tomcat.properties, like:
wrapper.bin.parameters=-Xmx512m
However, it seems to me that this doesn't work. So what if my servlet consumes a large amount of memory that exceeds the default 64MB memory boundary of the Java VM?
Any idea will be appreciated.
Haohua

With some help we found the fix. You have to set the -Xms and -Xmx at installation time when you install Tomcat 4.x as a service. Services do not read system variables. Go to the command prompt in Windows, and in the directory where tomcat.exe resides, type "tomcat.exe /?". You will see jvm_options as part of the installation. Put the -Xms and -Xmx variables in the proper place during the install and it will work.
If you can't uninstall and reinstall, you can apply this registry hack that dfortae sent to me on another thread.
=-=-=-=-=-=
You can change the parameters in the Windows registry. If your service name is "Apache Tomcat" The location is:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Apache Tomcat\Parameters
Change the JVM Option Count value to the new value with the number of parameters it will now have. In my case, I added two parameters -Xms100m and -Xmx256m and it was 3 before so I bumped it to 5.
Then I created two more String values. I called the first one I added 'JVM Option Number 4' and the second 'JVM Option Number 5'. Then I set the value inside each. The first one I set to '-Xms100m' and the second I set to '-Xmx256m'. Then I restarted Tomcat and observed when I did big processing the memory limit was now 256 MB, so it worked. Hope this helps!
=-=-=-=-=
I tried this and it worked. I did not want to have to go through the whole reinstallation process, so this was best for me.
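For completeness: when Tomcat is started from the shell scripts rather than installed as a Windows service, heap limits are conventionally passed through an environment variable. The variable name below is the standard Tomcat 4.x convention, not something taken from this thread:

```shell
# Script-based startup only; a Windows service install ignores these
# environment variables, which is why the registry edit above is needed there.
export CATALINA_OPTS="-Xms100m -Xmx256m"
$CATALINA_HOME/bin/catalina.sh start
```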
Thanks to all who helped on this. -
How to calculate memory usage base on graphic utilization
Dear All ,
We have a t2000 server with solaris 10 and 15 zones inside, and installed the SMC server including modules. Hardware configuration is 16Gb memory, 4 x 72Gb hard disks, and 4Gb swap. From the menu Manage Container Manager we select the host of the server, then click Utilization, but I see memory usage of 19759Mb. How is memory calculated in this graph? Because the maximum real RAM is only 16Gb in our server.
Regards
Hadi

PL/SQL collections are stored in the PGA. So you can monitor the PGA utilization of the session(s) to see how much PGA they use.
SELECT sid, name, value
FROM v$statname name
JOIN v$sesstat using (statistic#)
WHERE name.name in ('session pga memory', 'session pga memory max')

That will show you, for each session, the current PGA consumed by the session and the high water mark of PGA consumption by that session. You can join to V$SESSION and add additional predicates to narrow things down to the particular sessions you are interested in.
Justin -
Best GC options for minimum memory usage?
It seems that the more -Xmx memory I set, the more memory my app is using. Alas, the garbage collector is controlling the memory by itself and even System.gc() does not garbage collect all unused objects.
There are lots of parameters at http://java.sun.com/docs/hotspot/VMOptions.html, but what's the most aggressive one(s) to free unused memory as soon as possible?

Hi Martin,
please, let me tell you a few basic things first:
- under good circumstances the GC runs only when some heap space is full and more
memory is needed.
- even if the GC frees memory it will usually not be returned to the OS immediately but only
when some other process needs it. This means you will not see processes shrink
immediately when the JVM reduces its heap sizes. This is certainly true for Solaris.
- it is bad practice to call System.gc() in your Java code.
- the smaller the configured heap the more time (in percent) the JVM will use for GC. This is
because the memory in use has to be searched each time the GC runs
- the heap actually used by your JVM adds to the real memory size of the process
- the max values (-Xmx + -XX:MaxPermSize) add to the virtual memory size of the process
even if these max values are not actually used
Now, knowing these, to minimize your actual heap usage, I recommend you the following settings:
-Xms20m # set this to the minimum you will need, omitting it only slows down JVM startup
-Xmx200m # set this to the maximum you might need or can spare
-XX:MinHeapFreeRatio=20 # minimum heap percentage free after a Full GC, default=40
-XX:MaxHeapFreeRatio=40 # maximum heap percentage free after a Full GC, default=70
-XX:NewSize=10m # don't make it too small because this will make GC less efficient, the
# actual value depends on the nature of the app
-XX:MaxNewSize=10m # ditto, you can only make it larger than NewSize
-XX:SurvivorRatio=6 # try to compensate for small NewSize
-XX:TargetSurvivorRatio=80 # don't waste survivor space memory
I hope this helps. Please, keep in mind that a GC-based system is not really meant to
minimize memory usage! -
Problem with Firefox and very heavy memory usage
For several releases now, Firefox has been particularly heavy on memory usage. With its most recent version, with a single browser instance and only one tab, Firefox consumes more memory than any other application running on my Windows PC. The memory footprint grows significantly as I open additional tabs, getting to almost 1GB when there are 7 or 8 tabs open. This is just as true with no extensions or plugins as with the usual set (firebug, fire cookie, fireshot, XMarks). Right now, with 2 tabs, the memory is at 217,128K and climbing; CPU is between 0.2 and 1.5%.
I have read dozens of threads providing "helpful" suggestions, and tried any that seemed reasonable. But like most others who experience Firefox's memory problems, none of them address the issue.
Firefox is an excellent tool for web developers, and I rely on it heavily, but have now resorted to using Chrome as the default and only open Firefox when I must, in order to test or debug a page.
Is there no hope of resolving this problem? So far, from responses to other similar threads, the response has been to deny any responsibility and blame extensions and/or plugins. This is not helpful and not accurate. Will Firefox accept ownership of this problem and try to address it properly, or must we continue to suffer for your failings?
Problem with scanning and memory usage
I'm running PS CS3 on Vista Home Premium, 1.86Ghz Intel core 2 processor, and 4GB RAM.
I realise Vista only sees 3.3GB of this RAM, and I know Vista uses about 1GB all the time.
Question:
While running PS, and only PS, with no files open, I have 2GB of RAM free; why will PS not let me scan a file that it says will take up 300Mb?
200Mb is about the limit that it will let me scan, but even then the actual end product ends up being less than 100Mb (around 70Mb in most cases).

I'm using a Dell AIO A920, latest drivers etc, and PS is set to use all available RAM.
Not only will it not let me scan: once a file I've opened has used up "x" amount of RAM, even if I then close that file, "x" amount of RAM will STILL be unavailable. This means if I scan something, I have to save it, close PS, then open it again before I can scan anything else.
Surely this isn't normal. Or am I being stupid and missing something obvious?
I've also monitored the memory usage during scanning using task manager and various other things, it hardly goes up at all, then shoots up to 70-80% once the 70ishMb file is loaded. Something is up because if that were true, I'd actually only have 1Gb of RAM, and running Vista would be nearly impossible.
It's not a Vista thing either as I had this problem when I had XP. In fact it was worse then, I could hardly scan anything, had to be very low resolution.
Thanks in advance for any help

55%, it's still 1.6Gb... there shouldn't be a problem scanning something that it says will take up 300Mb, then actually only takes up 70Mb.
And not wrong, it obviously isn't releasing the memory when other applications need it because it doesn't, I have to close PS before it will release it. Yes, it probably is supposed to release it, but it isn't.
Thank you for your answer (even if it did appear to me to be a bit rude/shouty, perhaps something more polite than "Wrong!" next time) but I'm sitting at my computer, and I can see what is using how much memory and when, you can't. -
Problem with JTree and memory usage
I have a problem with the JTree when memory usage goes over the physical memory (I have 512MB).
I use JTree to display very large data about the organizational structure of a big company. It works fine until memory usage goes over the physical memory - then some of the nodes are not visible.
I hope somebody has an idea about this problem.
High memory usage when many tabs are open (and closed)
When I open Firefox the memory usage is about 70-100 MB RAM. When I'm working with Firefox I often open 15-20 tabs at once, then I close them and open others. Memory usage increases up to 450-500 MB RAM. After closing the tabs the usage usually decreases, but only after some time. It starts decreasing very slowly and never comes back to the level from the beginning. After a few hours of work Firefox uses about 400 MB RAM even if only one tab is open. First I thought it was because of my plugins (Firebug, Speed Dial, Adblock Plus) but I've checked it and it's not. I reinstalled Firefox but the problem occurs as well. I'm not sure if it's normal. Could you help me, please?
Hi,
Not an exact answer, but please [http://kb.mozillazine.org/Reducing_memory_usage_(Firefox) see this.] The KB article walks through various general situations which may or may not be applicable to specific instances, but nevertheless could be helpful.
Useful links:
[https://support.mozilla.com/en-US/kb/Options%20window All about Tools > Options]
[http://kb.mozillazine.org/About:config Going beyond Tools > Options - about:config]
[http://kb.mozillazine.org/About:config_entries about:config Entries]
[https://addons.mozilla.org/en-US/firefox/addon/whats-that-preference/ What's That Preference? add-on] - Quickly decode about:config entries - After installation, go inside about:config, right-click any preference, enable (tick) MozillaZine Results to Panel and again right-click a pref and choose MozillaZine Reference to begin with.
[https://support.mozilla.com/en-US/kb/Keyboard%20shortcuts Keyboard Shortcuts]
[http://kb.mozillazine.org/Profile_folder_-_Firefox Firefox Profile Folder & Files]
[https://support.mozilla.com/en-US/kb/Safe%20Mode Safe Mode]
[http://kb.mozillazine.org/Problematic_extensions Problematic Extensions/Add-ons]
[https://support.mozilla.com/en-US/kb/Troubleshooting%20extensions%20and%20themes Troubleshooting Extensions and Themes] -
Memory usage of excel stays high after Macro is executed and excel crashes after trying to close it
Hi,
I'm trying to resolve an issue with an Excel-based tool. The macros retrieve data from an Oracle database and do calculations with the data. They also open and write into files in the same directory. The macros all run and finish the calculations. I can continue to use and modify the sheet. I can also close the workbook; however, the Excel memory usage I see in the Windows Task Manager stays elevated. If I close Excel it says "Excel stopped working" and then it tries to recover information...
I assume something in the macro did not finish properly and memory was not released. I would like to check what is still open (connection, stream or any other object) when I close the workbook; I would like to have a list of all still-used memory. Is there a possibility to do so?
Here is the code I'm using; it's reduced to the functions which open something. The functions get_v_tools() and get_change_tools() are the same as get_client_positions().
Public conODBC As New ADODB.Connection
Public myPath As String
Sub get_positions()
Dim Src As range, dst As range
Dim lastRow As Integer
Dim myPath As String
lastRow = Sheets("SQL_DATA").Cells(Sheets("SQL_DATA").rows.Count, "A").End(xlUp).Row
Sheets("SQL_DATA").range("A2:AD" & lastRow + 1).ClearContents
Sheets("SQL_DATA").range("AG2:BE" & lastRow + 2).ClearContents
Sheets("SQL_DATA").range("AE3:AF" & lastRow + 2).ClearContents
k = Sheets("ToolsList").Cells(Sheets("ToolsList").rows.Count, "A").End(xlUp).Row + 1
Sheets("ToolsList").range("A2:M" & k).ClearContents
'open connection
Call open_connection
lastRow = Sheets("SQL_DATA").Cells(Sheets("SQL_DATA").rows.Count, "A").End(xlUp).Row
If lastRow < 2 Then GoTo ErrorHandling
'copy bs price check multiplications
Set Src = Sheets("SQL_DATA").range("AE2:AF2")
Set dst = Worksheets("SQL_DATA").range("AE2").Resize(lastRow - 1, Src.columns.Count)
dst.Formula = Src.Formula
On Error GoTo ErrorHandling
'new prices are calculated
newPrice_calculate (lastRow)
Calculate
myPath = ThisWorkbook.Path
'Refresh pivot table in Position Manager
Sheets("Position Manager").PivotTables("PivotTable3").ChangePivotCache ActiveWorkbook. _
PivotCaches.Create(SourceType:=xlDatabase, SourceData:= _
myPath & "\[Position_Manager_v1.0.xlsm]SQL_DATA!R1C2:R" & lastRow & "C31" _
, Version:=xlPivotTableVersion14)
ErrorHandling:
Set Src = Nothing
Set dst = Nothing
If conODBC.State <> 0 Then
conODBC.Close
End If
End Sub
Sub open_connection()
Dim sql_data, sql_data_change, sql_data_v As Variant
Dim wdth, TotalColumns, startRow As Integer
Dim rst As New ADODB.Recordset
Errorcode = 0
On Error GoTo ErrorHandling
Errorcode = 1
With conODBC
.Provider = "OraOLEDB.Oracle.1"
.ConnectionString = "Password=" & pswrd & "; Persist Security Info=True;User ID= " & UserName & "; Data Source=" & DataSource
.CursorLocation = adUseClient
.Open
.CommandTimeout = 300
End With
startRow = Sheets("SQL_DATA").Cells(Sheets("SQL_DATA").rows.Count, "A").End(xlUp).Row + 1
sql_data = get_client_positions(conODBC, rst)
wdth = UBound(sql_data, 1)
Sheets("SQL_DATA").range("A" & startRow & ":AA" & wdth + startRow - 1).Value = sql_data
'Run change tools instruments
startRow = Sheets("ToolsList").Cells(Sheets("ToolsList").rows.Count, "A").End(xlUp).Row + 1
sql_data_change = get_change_tools(conODBC, rst)
wdth = UBound(sql_data_change, 1)
Sheets("ToolsList").range("A" & startRow & ":M" & wdth + startRow - 1).Value _
= sql_data_change
'open SQL for V tools instruments
startRow = Sheets("ToolsList").Cells(Sheets("ToolsList").rows.Count, "A").End(xlUp).Row + 1
sql_data_v = get_v_tools(conODBC, rst)
wdth = UBound(sql_data_v, 1)
Sheets("ToolsList").range("A" & startRow & ":L" & startRow + wdth - 1).Value = sql_data_v
conODBC.Close
ErrorHandling:
If rst.State <> 0 Then
rst.Close
End If
Set rst = Nothing
End Sub
Private Function get_client_positions(conODBC As ADODB.Connection, rst_posi As ADODB.Recordset) As Variant
Dim sql_data As Variant
Dim objCommand As ADODB.Command
Dim sql As String
Dim records, TotalColumns As Integer
On Error GoTo ErrorHandling
Set objCommand = New ADODB.Command
sql = read_sql()
With objCommand
.ActiveConnection = conODBC 'connection for the commands
.CommandType = adCmdText
.CommandText = sql 'Sql statement from the function
.Prepared = True
.CommandTimeout = 600
End With
Set rst_posi = objCommand.Execute
TotalColumns = rst_posi.Fields.Count
records = rst_posi.RecordCount
ReDim sql_data(1 To records, 1 To TotalColumns)
If TotalColumns = 0 Or records = 0 Then GoTo ErrorHandling
If TotalColumns <> 27 Then GoTo ErrorHandling
If rst_posi.EOF Then GoTo ErrorHandling
l = 1
Do While Not rst_posi.EOF
For i = 0 To TotalColumns - 1
sql_data(l, i + 1) = rst_posi.Fields(i)
Next i
l = l + 1
rst_posi.MoveNext
Loop
ErrorHandling:
rst_posi.Close
Set rst_posi = Nothing
Set objCommand = Nothing
get_client_positions = sql_data
End Function
Private Function read_sql() As String
Dim sqlFile As String, sqlQuery, Line As String
Dim query_dt As String, client As String, account As String
Dim GRP_ID, GRP_SPLIT_ID As String
Dim fso, stream As Object
Set fso = CreateObject("Scripting.FileSystemObject")
client = Worksheets("Cover").range("C9").Value
query_dt = Sheets("Cover").range("C7").Value
GRP_ID = Sheets("Cover").range("C3").Value
GRP_SPLIT_ID = Sheets("Cover").range("C5").Value
account = Sheets("Cover").range("C11").Value
sqlFile = Sheets("Cover").range("C15").Value
Open sqlFile For Input As #1
Do Until EOF(1)
Line Input #1, Line
sqlQuery = sqlQuery & vbCrLf & Line
Loop
Close
' Replace placeholders in the SQL
sqlQuery = Replace(sqlQuery, "myClent", client)
sqlQuery = Replace(sqlQuery, "01/01/9999", query_dt)
sqlQuery = Replace(sqlQuery, "54747743", GRP_ID)
If GRP_SPLIT_ID <> "" Then
sqlQuery = Replace(sqlQuery, "7754843", GRP_SPLIT_ID)
Else
sqlQuery = Replace(sqlQuery, "AND POS.GRP_SPLIT_ID = 7754843", "")
End If
If account = "ZZ" Then
sqlQuery = Replace(sqlQuery, "AND AC.ACCNT_NAME = 'ZZ'", "")
Else
sqlQuery = Replace(sqlQuery, "ZZ", account)
End If
' Create a TextStream to check SQL Query
sql = sqlQuery
myPath = ThisWorkbook.Path
Set stream = fso.CreateTextFile(myPath & "\SQL\LastQuery.txt", True)
stream.Write sql
stream.Close
Set fso = Nothing
Set stream = Nothing
read_sql = sqlQuery
End Function

Thanks Starain,
That's what I did over the last few days, and I found that the problem is in the
newPrice_calculate (lastRow)
function. This function retrieves data (set as arrays) which was correctly pasted into the sheet, loops through all rows, and does math/calendar calculations with cell values using an Add-In ("QuantLib").
Public errorMessage as String
Sub newPrice_calculate(lastRow)
Dim posType() As Variant   ' "Type" is a reserved word in VBA, so the array is renamed here
Dim Id() As Variant
Dim Price() As Variant
Dim daysTo() As Variant
Dim fx() As Variant
Dim interest() As Variant
Dim ObjCalend As Variant
Dim newPrice As Variant
On Error GoTo Catch
interest = Sheets("SQL_DATA").Range("V2:V" & lastRow).Value
posType = Sheets("SQL_DATA").Range("L2:L" & lastRow).Value
Id = Sheets("SQL_DATA").Range("M2:M" & lastRow).Value
Price = Sheets("SQL_DATA").Range("T2:T" & lastRow).Value
daysTo = Sheets("SQL_DATA").Range("K2:K" & lastRow).Value
fx = Sheets("SQL_DATA").Range("U2:U" & lastRow).Value
' The arrays above start at sheet row 2, so they hold 1 To lastRow - 1 rows;
' indexing them with i up to lastRow would overrun by one
ReDim ObjCalend(1 To lastRow - 1, 1 To 1)
ReDim newPrice(1 To lastRow - 1, 1 To 1)
qlError = 1
For i = 1 To lastRow - 1
    If posType(i, 1) = "LG" Then
        'set something - nothing spectacular like
        interest(i, 1) = 0
        daysTo(i, 1) = 0
    Else
        adjTime = Sqr(daysTo(i, 1) / 365)
        ObjCalend(i, 1) = Application.Run("qlCalendarHolidaysList", _
            "CalObj", ... , .... other input parameters)
        If IsError(ObjCalend(i, 1)) Then GoTo Catch
        'other calendar calcs
        newPrice(i, 1) = Application.Run( 'quantLib calcs)
    End If
Catch:
    ' note: without a Resume, the handler stays active and a second error is not trapped
    Select Case qlError
        Case 1
            errorMessage = errorMessage & " QuantLibXL Cal Error at: " & i & " " & vbNewLine & Err.Description
            ObjCalend(i, 1) = "N/A"
    End Select
Next i
Sheets("SQL_DATA").Range("AB2:AB" & lastRow).Value = newPrice
'Sheets("SQL_DATA").Range("AA2:AA" & lastRow).Value = daysTo
' erase all arrays when done
Erase posType
Erase Id
Erase Price
Erase newPrice   ' newPrice holds an array, so Erase it rather than Set ... = Nothing
End Sub
Is there a possibility to clean everything in:
Private Sub Workbook_BeforeClose(Cancel As Boolean)
End Sub
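As for cleaning everything up in Workbook_BeforeClose: yes, provided the arrays and objects are declared at module level so the event handler can reach them. A minimal sketch under that assumption (the variable names below are illustrative, not from the original code):

```vb
' In the ThisWorkbook code module. Assumes module-level declarations such as:
'   Public sql_data As Variant
'   Public rst_posi As Object
Private Sub Workbook_BeforeClose(Cancel As Boolean)
    On Error Resume Next              ' best-effort cleanup; ignore already-released objects
    Erase sql_data                    ' free the array's memory
    If Not rst_posi Is Nothing Then
        If rst_posi.State <> 0 Then rst_posi.Close   ' 0 = adStateClosed
        Set rst_posi = Nothing
    End If
End Sub
```

Strictly speaking, VBA frees all of this when the workbook closes anyway; an explicit handler mainly helps when the same workbook stays open across many repeated runs.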
Thanks in advance
Mark -
TOP - Pageouts and memory usage.
Looks like I have plenty of memory for what I am doing. But what do the cumulative numbers next to pageins mean?
Processes: 80 total, 7 running, 3 stuck, 70 sleeping... 523 threads 15:20:14
Load Avg: 1.99, 1.53, 1.37 CPU usage: 10.33% user, 9.00% sys, 80.67% idle
SharedLibs: num = 5, resident = 13M code, 0 data, 1468K linkedit.
MemRegions: num = 32848, resident = 2116M + 18M private, 877M shared.
PhysMem: 1853M wired, 2116M active, 3466M inactive, 7439M used, 753M free.
VM: 19G + 284M 332461(0) pageins, 47(0) pageouts
PID COMMAND %CPU TIME #TH #PRTS #MREGS RPRVT RSHRD RSIZE VSIZE
12256 top 11.1% 0:14.05 1 18 40 684K 820K 1392K 18M
12238 bash 0.0% 0:00.01 1 14 19 276K 184K 916K 18M
12237 login 0.0% 0:00.02 1 17 136 908K 8724K 3164K 32M
12236 Terminal 1.7% 0:01.45 3 99 433 5264K+ 49M+ 21M+ 419M+
12225 SyncServer 0.0% 0:00.15 2 50 290 3512K 41M 12M 88M
11248 WinAppHelp 0.0% 0:00.40 1 51 289 3208K 40M 11M 339M
11027 Skype 0.0% 0:19.47 31 313 865 46M 76M 88M 557M
10315 PrinterPro 0.0% 0:00.83 2 85 476 12M 51M 29M 423M
10313 launchd 0.0% 0:01.25 3 24 25 120K 296K 464K 18M
8626 QuickTime 0.0% 17:21.09 5 187 655 13M 67M 43M 486M
8556 DashboardC 0.0% 0:06.31 4 124 538 16M 58M 38M 436M
8555 DashboardC 0.0% 0:10.95 4 149 545 23M 58M 49M 444M
8534 Livestatio 16.8% 59:00.13 12 198 771 30M- 94M 62M 584M
8220 Mail 0.0% 0:43.91 27 323 968 24M 94M 80M 554M
2806 Python 0.8% 5:20.05 2 33 281 13M- 248K 14M- 36M-
2761 Transmissi 0.9% 2:49.17 21 270 773 20M 90M 56M 506M

About OS X Memory Management and Usage
Reading system memory usage in Activity Monitor
Memory Management in Mac OS X
Performance Guidelines- Memory Management in Mac OS X
A detailed look at memory usage in OS X
Understanding top output in the Terminal
The amount of RAM available to applications is the sum of free and inactive RAM. This changes as applications are opened and closed, or move between active and inactive status. The Swap figure is an estimate of the total swap space VM would require if used; it does not necessarily reflect the actual size of the existing swap file. Whether you really need more RAM is indicated by how frequently the system uses VM. If you open the Terminal and run the top command, you will find figures for pageins and pageouts; pageouts is the important one. If the value in parentheses is 0 (zero), OS X is not making instantaneous use of VM, which means you have adequate physical RAM for the system and the applications you have loaded. If the figure in parentheses stays positive and your hard drive is constantly busy (thrashing), you need more physical RAM.