Mem usage vs Virtual Memory Size
Hi Gents!
I have a Java server application, and it shows strange memory-management behaviour after one day of uptime. Windows Task Manager shows Mem Usage = 136 MB, Peak Mem Usage = 342 MB, VM Size = 557 MB, while Java VisualVM shows Heap Size = 160 MB and PermGen = 32 MB.
Does anyone have ideas on how to deallocate virtual memory to decrease its size?
JVM parameters:
-Xms32m
-Xmx192m
-Xss128k
-Xincgc
-agentlib:hprof=cpu=samples
-XX:+AggressiveOpts
-XX:+DisableExplicitGC
-XX:ParallelGCThreads=4
-XX:+UseConcMarkSweepGC
-XX:+UseParNewGC
-XX:SurvivorRatio=16
-XX:TargetSurvivorRatio=90
-XX:MaxTenuringThreshold=31
-XX:MinHeapFreeRatio=20
-XX:MaxHeapFreeRatio=25
-Djava.net.preferIPv4Stack=true
-Dsun.rmi.dgc.client.gcInterval=990000
-Dsun.rmi.dgc.server.gcInterval=990000
-javaagent:./lib/sizeofag.jar
Is it a multi-threaded application? How many user threads?
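The gap between the heap VisualVM reports and the VM Size Task Manager reports is normal: VM Size counts reserved address space, including the full -Xmx reservation, PermGen, the JIT code cache, one -Xss stack per thread, and native allocations by agents such as hprof, and HotSpot rarely returns reserved space to the OS. A back-of-the-envelope sketch (every number below is an assumption for illustration, not a measurement from this server):

```python
# Rough, illustrative arithmetic: where a JVM's virtual size can go
# beyond the Java heap. All figures are assumptions for this example,
# not measurements from the original poster's server.

def jvm_virtual_size_estimate(heap_reserved_mb, permgen_mb, code_cache_mb,
                              thread_count, stack_kb, native_overhead_mb):
    """Sum the major reserved regions of a HotSpot process (approximate)."""
    stacks_mb = thread_count * stack_kb / 1024.0
    return heap_reserved_mb + permgen_mb + code_cache_mb + stacks_mb + native_overhead_mb

# -Xmx192m reserves the full heap up front; -Xss128k applies per thread.
estimate = jvm_virtual_size_estimate(
    heap_reserved_mb=192,   # -Xmx192m (reserved, not necessarily committed)
    permgen_mb=64,          # a common MaxPermSize default (assumed)
    code_cache_mb=48,       # JIT code cache (assumed)
    thread_count=200,       # hypothetical thread count
    stack_kb=128,           # -Xss128k
    native_overhead_mb=150, # DLLs, malloc arenas, hprof agent buffers (assumed)
)
print(f"rough virtual size: {estimate:.0f} MB")
```

With a plausible thread count, reserved-but-uncommitted space alone can account for most of the 557 MB, so the practical lever is lowering -Xmx, the thread count, or removing the hprof agent, not forcing the JVM to "deallocate" virtual memory.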
Similar Messages
-
Endeca Server: RuntimeException: Unable to load physical or total virtual memory size
Hey!
I've got a problem with Endeca Server. After I had some problems with Studio, Endeca Server kept throwing a RuntimeException. When I try to create a Data Domain I get this error:
The Endeca Server returned an error: OES-000075: Caught exception when starting Endeca Server: java.lang.RuntimeException: Unable to load total physical memoery size or total virtual memory size
and in the Server console it says:
<23.01.2014 10:59 Uhr MEZ> <Error> <com.endeca.opmodel.ws.ManagePortImpl> <OES-000066> <OES-000066: Service error: com.endeca.endeca_server.manage._2.ManageFault: OES-000075: Caught exception when starting Endeca Server: java.lang.RuntimeException: Unable to load total physical memoery size or total virtual memory size
com.endeca.endeca_server.manage._2.ManageFault: OES-000075: Caught exception when starting Endeca Server: java.lang.RuntimeException: Unable to load total physical memoery size or total virtual memory size
    at com.endeca.opmodel.ws.ManagePortImpl.defaultManageException(ManagePortImpl.java:116)
    at com.endeca.opmodel.ws.ManagePortImpl.checkCluster(ManagePortImpl.java:124)
    at com.endeca.opmodel.ws.ManagePortImpl.createDataDomain(ManagePortImpl.java:198)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at oracle.j2ee.ws.server.jaxws.ServiceEndpointRuntime.processMessage(ServiceEndpointRuntime.java:370)
    at oracle.j2ee.ws.server.jaxws.ServiceEndpointRuntime.processMessage(ServiceEndpointRuntime.java:202)
    at oracle.j2ee.ws.server.jaxws.JAXWSRuntimeDelegate.processMessage(JAXWSRuntimeDelegate.java:474)
    at oracle.j2ee.ws.server.provider.ProviderProcessor.doEndpointProcessing(ProviderProcessor.java:1187)
    at oracle.j2ee.ws.server.WebServiceProcessor.invokeEndpointImplementation(WebServiceProcessor.java:1112)
    at oracle.j2ee.ws.server.provider.ProviderProcessor.doRequestProcessing(ProviderProcessor.java:581)
    at oracle.j2ee.ws.server.WebServiceProcessor.processRequest(WebServiceProcessor.java:233)
    at oracle.j2ee.ws.server.WebServiceProcessor.doService(WebServiceProcessor.java:193)
    at oracle.j2ee.ws.server.WebServiceServlet.doPost(WebServiceServlet.java:485)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
    at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:301)
    at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
    at com.endeca.util.ChangeHeaderFilter.doFilter(ChangeHeaderFilter.java:50)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
    at com.endeca.util.TimingFilter.doFilter(TimingFilter.java:72)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
    at com.endeca.router.RoutingServlet.doFilter(RoutingServlet.java:227)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
    at oracle.security.jps.ee.http.JpsAbsFilter$1.run(JpsAbsFilter.java:119)
    at java.security.AccessController.doPrivileged(Native Method)
    at oracle.security.jps.util.JpsSubject.doAsPrivileged(JpsSubject.java:315)
    at oracle.security.jps.ee.util.JpsPlatformUtil.runJaasMode(JpsPlatformUtil.java:442)
    at oracle.security.jps.ee.http.JpsAbsFilter.runJaasMode(JpsAbsFilter.java:103)
    at oracle.security.jps.ee.http.JpsAbsFilter.doFilter(JpsAbsFilter.java:171)
    at oracle.security.jps.ee.http.JpsFilter.doFilter(JpsFilter.java:71)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
    at oracle.dms.servlet.DMSServletFilter.doFilter(DMSServletFilter.java:139)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3730)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3696)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:120)
    at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2273)
    at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2179)
    at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1490)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:256)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:221)
Caused by: java.lang.Exception: OES-000075: Caught exception when starting Endeca Server: java.lang.RuntimeException: Unable to load total physical memoery size or total virtual memory size
    ... 47 more
every time I start the Server, and every time I try to create or check a Data Domain.
Can someone help me?
regards,
Alex
I reinstalled the VM and now it works again. It was on a Win2008 Server with 8 GB of RAM. It worked at first, but then I somehow crashed Studio, and after that the Server never worked again, even after reinstalling everything.
-
Activity Monitor showing incorrect amount of virtual memory size
Hi,
I noticed in Activity Monitor that the Virtual Memory size was incorrect.
Does anybody else have the same issue?
Ignore that. The method by which it's calculated works as follows:
1. You have the Finder and Safari open.
2. Both programs, and no others, use the contents of a single block of data. This block takes up 2 MB of RAM or swapfile space.
3. The block is counted as 4 MB in the VM size entry because it's being used twice.
Expanding this scenario over many processes and blocks adds up to a VM size figure far larger than the actual amount in use, or even the combined total of RAM and swapfile space.
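The double-counting described in the steps above can be sketched numerically; the process names and sizes below are hypothetical:

```python
# Sketch of the double-counting described above: a block shared by
# several processes is counted once per attached process in the
# per-process VM figures, but occupies RAM/swap only once.
# All names and sizes are hypothetical.

def per_process_vm_total(processes, shared_blocks):
    """Sum VM across processes, counting each shared block once per user."""
    total = sum(p["private_mb"] for p in processes)
    for block in shared_blocks:
        total += block["size_mb"] * len(block["used_by"])
    return total

def actual_footprint(processes, shared_blocks):
    """RAM/swap actually occupied: each shared block counted once."""
    return (sum(p["private_mb"] for p in processes)
            + sum(b["size_mb"] for b in shared_blocks))

procs = [{"name": "Finder", "private_mb": 30},
         {"name": "Safari", "private_mb": 120}]
shared = [{"size_mb": 2, "used_by": ["Finder", "Safari"]}]

print(per_process_vm_total(procs, shared))  # 154: the 2 MB block counted twice
print(actual_footprint(procs, shared))      # 152: counted once
```

The gap grows with every additional process and every additional shared block, which is why the summed VM figure can dwarf RAM plus swap.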
-
Activity Monitor inconsistency - what is the real virtual memory size?
When I look at my Activity Monitor system memory screen, I see real and virtual memory in MB (with the largest at about 500 MB). As in my previous post, I see a VM size of >240 GB. When I go to the File menu, hit Save..., and then look at the resulting processes.txt file, I see that the virtual memory is now listed in GB. The real memory is consistent with what I see in the Activity Monitor window. See below. What's up? Which virtual memory is real and which is "virtual"?
Active Memory: 1.47 GB
Free Memory: 846.1 MB
Wired Memory: 770.8 MB
Used Memory: 3.17 GB
Inactive Memory: 972.1 MB
Total VM: 231.35 GB
Number of processes: 100
PID Process Name User CPU Real Mem Virtual Mem
0 kernel_task root 4.9 384.3 MB 4.45 GB
1 launchd root 0.0 2.2 MB 2.34 GB
11 UserEventAgent root 0.0 4.4 MB 2.35 GB
12 kextd root 0.0 3.4 MB 2.33 GB
14 notifyd root 0.0 1.5 MB 2.34 GB
15 securityd root 0.0 8.9 MB 2.36 GB
16 powerd root 0.0 1.8 MB 2.35 GB
17 configd root 0.0 4.5 MB 2.34 GB
18 syslogd root 0.0 1.1 MB 2.34 GB
19 diskarbitrationd root 0.0 1.5 MB 2.33 GB
20 distnoted root 0.0 2.1 MB 2.35 GB
21 cfprefsd root 0.0 1.8 MB 2.33 GB
22 opendirectoryd root 0.0 9.8 MB 2.36 GB
25 warmd nobody 0.0 5.5 MB 2.34 GB
26 usbmuxd _usbmuxd 0.0 2.3 MB 2.34 GB
29 stackshot root 0.0 1.2 MB 2.33 GB
30 SleepServicesD root 0.0 1.5 MB 2.33 GB
32 revisiond root 0.0 2.6 MB 2.35 GB
37 mds root 0.0 124.2 MB 3.04 GB
38 mDNSResponder _mdnsrespo 0.0 3.8 MB 2.34 GB
41 loginwindow prdwyer 0.0 27.8 MB 2.47 GB
42 locationd _locationd 0.0 7.3 MB 2.36 GB
Ignore that. The method by which it's calculated works as follows:
1. You have the Finder and Safari open.
2. Both programs, and no others, use the contents of a single block of data. This block takes up 2 MB of RAM or swapfile space.
3. The block is counted as 4 MB in the VM size entry because it's being used twice.
Expanding this scenario over many processes and blocks adds up to a VM size figure far larger than the actual amount in use, or even the combined total of RAM and swapfile space.
-
Hi
Does anyone know what's happening with Virtual Memory in Logic 9.1.3 in 32-bit mode?
Looking at Activity Monitor on a freshly booted empty template, Logic shows 215 MB real memory and 210 MB virtual memory.
However, if you double-click the Logic process and look at the Memory tab, it says Virtual Memory Size: 2.1 GB.
Now, I know 2.1 GB to be the true figure, because as soon as this reaches 3.9+ GB, Logic runs out of memory (and the number increases correctly as I load samples).
Why on earth is Logic booting with 2.1 GB of memory already used? To compound the 32-bit limits, this now means I can only load 2 GB of plug-ins/samples instead of the expected approx. 3.5 GB.
Unfortunately I'm working on a couple of old projects that won't run in 64-bit because of unstable plugs in the bridge.
If there's no explanation, can anyone (running 32-bit) confirm that their Logic is using 2GB before you've loaded anything?
Thanks
Steve
You're ignoring that memory can be dynamically re-allocated.
Logic might be spending 800 MB on FlexTime routines that it's no longer using actively, as all FlexTime calculations and edits have already been made. When you load an instance of Space Designer, the FlexTime routines still in memory can be replaced by Space Designer processes.
You only run into trouble when everything you're ACTIVELY using within an application hits 4GB.
Obviously, this can happen rather easily these days, what with multi-gigabyte sample instruments and assorted plug-ins, and the general tendency to insert effects directly into the channel strip, rather than using bus sends. -
After upgrading to Lion, the Finder will increase the virtual memory while using external HD
After upgrading to Lion, the Finder will increase its "virtual memory size" while using an external HD. I checked it using Activity Monitor and found that Finder will increase its "Virtual Memory Size" and "Private Memory Size", and then, after a while, the system tells me my HD is full and hangs there.
My external HD is 1 TB, and I have installed the new version of Paragon NTFS for Mac, v9.01. I am not sure whether it is related or not,
but when I attach my external HD, Finder starts to increase its "Virtual Memory Size".
Please help with this case and solve this issue.
Thank you!!!
The one has nothing to do with the other. Virtual Memory Size and Private Memory Size are used by developers to analyze their software; they have nothing to do with anything related to the user.
-
High Virtual memory usage when using Pages 2.0.2
Hey there,
I was just wondering whether there have been any other reports of unusually high memory usage when using Pages 2.0.2, specifically virtual memory. I am running iWork 06 on the Mac listed below, and Pages has been running really slowly recently. I checked Activity Monitor and Pages is using hardly any physical memory but loads of virtual memory, so much so that the page outs are almost as high as the page ins (roughly 51500 page ins / 51000 page outs).
Any known problems, solutions or comments for this problem? Thanks in advance
I don't know if this is specifically what you're seeing, but all Cocoa applications, such as Pages, have an effectively infinite Undo. If you have any document that you've been working on for a long time without closing, that could be responsible for a large amount of memory usage.
While it's good practice to save on a regular basis, if you're making large numbers of changes it's also a good idea to close and reopen your document every once in a while, simply to clear the undo history. I've heard of some people seeing sluggish behavior after working on a document for several days, which cleared up when the document was closed and reopened.
Titanium PowerBook Mac OS X (10.4.8) -
Mystery Virtual memory/disk usage
I have installed SL on a Mac Mini and MBP with no problem. But now I have installed it on a Macbook and something is eating the disk and/or virtual memory.
After a fresh reboot I have 120G free disk space. This steadily reduces until the OS starts popping up the force quit dialog saying that I am out of application memory. Quitting applications has minimal effect and soon I am forced to reboot. At which point I am back to 120G of free disk space.
I have had a look at it with Activity Monitor as well as top, and there are no real standouts in terms of real memory or virtual memory. Perhaps it is something crashing over and over again, but there is nothing in /cores.
Any suggestions on how I can track this down?
Hi Andrew, and welcome to the forums.
Any suggestions on how I can track this down?
In the Finder, press Command-F, and when the search window opens, set it to search your Mac and set filters (add/+) to show all visible and hidden files over a certain size, say 1 or 2 GB or larger. It may take a couple of tries, depending on the maximum file sizes that legitimately sit on your drive. See what shows up and go from there. Good luck in any case. -
Hi gurus
In resource-based throttling, what's the recommended setting for "Process memory usage" ("process virtual" in the resource-based throttling tab of the UI) for a 64-bit host on 64-bit Windows?
According to MS (http://msdn.microsoft.com/en-us/library/ee308808(v=bts.10).aspx):
"By default, the
Process memory usage throttling threshold is set to 25. If this value is exceeded and the BizTalk process memory usage is more than 300 MB, a throttling condition may occur. On a 32-bit
server, you can increase the Process memory usage value to 50. On a 64-bit server, you can increase this value to 100. This allows for more memory consumption by the BizTalk process before throttling
occurs."
Does this mean that 100 is the recommended setting for a 64-bit host on a 64-bit Windows?
Thanks
Michael Brandt Lassen
Hi Michael,
The recommended setting is the default setting, which is 25.
If your situation is abnormal and you see the message delivery throttling state go to "4" when the following performance counters are high, or if you expect any of your integration processes to have an impact on these counters, then you can consider Microsoft's suggestion. Otherwise, don't change the default setting.
High process memory
Process memory usage (MB)
Process memory usage threshold (MB)
You can see these counters under "BizTalk:MessageAgent".
You can gauge these performance counters and their maximum values if you have done any regression/performance testing on your test servers. If you have seen these counters reach high values and cause throttling, then you can update the Process memory usage.
Or, if you unexpectedly process high-throughput messages in production, causing these counters to go high and trigger throttling, then you can also update the Process memory usage.
Those are the two cases in which I would change the default throttling setting: where I know my expected process usage (from performance testing), or where production processing has suddenly gone high due to an unexpected business spike (or any other reason) and caused throttling.
Just changing the default setting without an actual reason can have an adverse effect: you end up allocating more processing capacity while actual message processing usage stays low, which means investing in underutilised resources.
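As a sanity check, the rule quoted from MSDN earlier in the thread can be read as two conditions that must both hold before memory throttling kicks in. The sketch below encodes that reading; the percentage base (memory available to the host process) is my assumption, so treat this as an illustration, not BizTalk's exact algorithm:

```python
# Sketch of the throttling rule quoted above, as I read it: throttling
# can occur when process memory exceeds the configured threshold (a
# percentage) AND exceeds an absolute floor of 300 MB. The percentage
# base (memory available to the host process) is an assumption here.

def may_throttle(process_mb, available_mb, threshold_pct=25, floor_mb=300):
    """Return True if both parts of the quoted condition hold."""
    over_pct = process_mb > available_mb * threshold_pct / 100.0
    over_floor = process_mb > floor_mb
    return over_pct and over_floor

# 32-bit host, ~2048 MB addressable: 600 MB exceeds 25% (512 MB) and 300 MB.
print(may_throttle(600, 2048))                     # True
# Raising the threshold to 100 (the 64-bit guidance) removes the % trigger.
print(may_throttle(600, 2048, threshold_pct=100))  # False
```

This is why raising the value to 100 on a 64-bit host only widens the ceiling; it does not change the default recommendation of 25 for normal workloads.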
Regards,
M.R.Ashwin Prabhu
If this answers your question please mark it accordingly. If this post is helpful, please vote as helpful by clicking the upward arrow mark next to my reply. -
Activity Monitor shows virtual memory usage is way too high
35 GB of virtual memory for Safari, Mail, iTunes, Activity Monitor and all running processes!
Why are these numbers so high compared to Tiger? Can someone please explain? Is it a bug, or is it the new way introduced with Leopard?
RAM and CPU usage look fine, but this VM usage seems too high to me. I will run out of disk space after a couple of days of runtime. And the Adobe Creative Suite is not even running. Jeez...
Message was edited by: flec65
Thanks for the hint, Niel.
The /private/var/vm folder is actually only 64 MB. I can calm myself now...
(via Go menu in the Finder, select Go to Folder and type /private/var/vm to access it)
But why is Activity Monitor behaving like this in Leopard? -
How to reduce page faults (virtual memory) usage in LabVIEW
I am running LabVIEW 2010 on Windows Embedded Standard system 4G (3G usable) of RAM.
The program (executable running on the Run-Time Engine) continuously samples from a USB-6251 at 1MS/sec (2 channels @ 500K) and scans the data stream for anomalous events, which are recorded to file. The system is intended to run 24/7 for at least 2 months, and although everything is working fine (6% CPU, no memory leaks), it generates 6000 Page Faults / second. I am concerned that I will kill the hard-drive at this rate, over long periods of time. There is plenty of RAM available (LabVIEW is only using 200K, and there is over 2G free), but the program is choosing to rely on Virtual Memory instead.
Is there a way to force (coerce) a LabVIEW application to consume more RAM and less VM?
The code is heavy (2 independent routines collecting and processing the data stream through a shared double buffer, lots of non-obvious logic...), but I will post it if it would help to answer the question.
Craig Akers wrote:
I am running LabVIEW 2010 on Windows Embedded Standard system 4G (3G usable) of RAM.
The program (executable running on the Run-Time Engine) continuously samples from a USB-6251 at 1MS/sec (2 channels @ 500K) and scans the data stream for anomalous events, which are recorded to file. The system is intended to run 24/7 for at least 2 months, and although everything is working fine (6% CPU, no memory leaks), it generates 6000 Page Faults / second. I am concerned that I will kill the hard-drive at this rate, over long periods of time. There is plenty of RAM available (LabVIEW is only using 200K, and there is over 2G free), but the program is choosing to rely on Virtual Memory instead.
Is there a way to force (coerce) a LabVIEW application to consume more RAM and less VM?
The code is heavy (2 independent routines to collect and process the data stream through a shared double buffer, lots of in-obvious logic...), but I will post if it would help to answer the question.
IIRC, the decision to move a page of memory from physical memory to disk is made at the OS level. There probably isn't any setting you can change in LabVIEW to change this behavior.
Keep in mind that not every page fault results in a page being loaded from disk. If your program (or the LabVIEW run-time) is frequently allocating and freeing memory, you could get a lot of soft page faults as the physical memory pages are repeatedly allocated to your process and returned to the OS. If you're only running at 6% CPU, this wouldn't be a problem.
You could try disabling the page file altogether, if the machine has enough RAM, but I wouldn't do this unless you actually have a performance (or hard-disk durability) problem. Having a page file to back up the physical memory is the difference between your program suffering from degraded performance vs. simply crashing if the machine runs out of physical memory.
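To see the soft/hard distinction described above on a Unix machine, you can watch your own process's minor and major fault counters; this is a small sketch (the resource module is Unix-only, and on Windows the analogous counters live in Task Manager/perfmon):

```python
# Observing soft (minor) vs hard (major) page faults for the current
# process, to illustrate the distinction above. Unix-only (the resource
# module); Windows exposes equivalent counters via perfmon.
import resource

def fault_counts():
    ru = resource.getrusage(resource.RUSAGE_SELF)
    return ru.ru_minflt, ru.ru_majflt  # (soft, hard)

before_soft, before_hard = fault_counts()
# Allocating and touching a chunk of memory typically raises the soft
# (minor) fault count without needing any disk I/O.
data = bytearray(50 * 1024 * 1024)
after_soft, after_hard = fault_counts()

print(f"soft faults during allocation: {after_soft - before_soft}")
print(f"hard faults during allocation: {after_hard - before_hard}")
```

A steady stream of minor faults with a near-zero major fault count, as in the LabVIEW case above, means the disk is not actually being touched, so hard-drive wear is not a concern.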
Mark Moss
Electrical Validation Engineer
GHSP -
REP-0065: Virtual Memory System Error REP-0200: Cannot allocate enough mem
Hi, I have been facing this error for a while,
and I don't know how to rectify it.
For my report I am using about 7 ref cursors from the database to output the data. It's been a while that I have been facing this error.
I have tried many ways of modifying my code, but I am still getting this error. When I place the whole code in the RDF itself, it runs fine without any problems.
I am just stuck with this report. I have totally run out of ideas for resolving this error.
If you have any idea about this error, please let me know.
I need to do it from the database side, and I don't know what to do.
Please post a solution ASAP; I have to submit, and I am going mad.
I am using Oracle Reports Builder 10g (10.1.2.0).
No, actually I tried to download and install one of the patches, p4505133_10105_WINNT, but I am unable to install the other. My DBA assisted in installing the patches, but he got an error while installing the second one.
Do I need to install it on the server or on my local machine?
For some dates it works, as there are around 48 rows; for other dates, with 400-plus rows, it does not work. When executed from Reports it runs fine.
Date is the parameter my report fetches by.
It displays the data, but when I try to go to the last page it throws these 2 errors:
Re: REP-0065: Virtual Memory System Error REP-0200: Cannot allocate enough. In the data model there are 7 PL/SQL blocks, in each of which I am calling a procedure.
When I tried calling them one by one with the main detail PL/SQL, it worked; when I try to call them all together, the problem arises.
I have checked all the data types; everything looks good.
Please help. I am in a crucial situation.
Thanks in anticipation,
Erat
Oracle 10.2.0.4g Windows Server 2003 EE (16GB Physical RAM)
Virtual Memory:
Initial Size:16.4GB
Maximum Size: 20GB
I created an ADDM report and got:
FINDING 1: 100% impact (50400 seconds)
Significant virtual memory paging was detected on the host operating system.
RECOMMENDATION 1: Host Configuration, 100% benefit (50400 seconds)
ACTION: Host operating system was experiencing significant paging but no
particular root cause could be detected. Investigate processes that
do not belong to this instance running on the host that are consuming
significant amount of virtual memory. Also consider adding more
physical memory to the host.
When users don't work in Task Manager I see:
Oracle.exe
Mem usage: 1.3GB
Page Fault: 2.4GB
user10921739 wrote:
Oracle 10.2.0.4g Windows Server 2003 EE (16GB Physical RAM)
Virtual Memory:
Initial Size:16.4GB
Maximum Size: 20GB
What is the size of the SGA? I'm not sure whether Windows supports pinning the SGA in memory, but for performance it's quite important that the SGA not be swapped to disk. If you can pin the SGA in memory, then I suggest you try it.
What else is running on the box? If only Oracle then there should not be that much swapping - unless there are conflicts between the memory requirements for SGA and PGA, resulting in an increase in page swapping. 16GB RAM provides a fair amount of memory flexibility and swapping should be minimal.
If the box is used for other stuff and Oracle - not the best of ideas. For performance, Oracle is best serviced by a dedicated server. (also true of most other server software - mix these at own risk)
When users don't work in Task Manager I see:
Oracle.exe
Mem usage: 1.3GB
Page Fault: 2.4GB
Are those soft or hard faults? Hard faults are a problem, as they mean having to swap a page in from swap space to satisfy the request. A soft fault means a page has moved in memory; this does not require an expensive disk I/O to bring the page in from swap space. -
Shared memory (System V-style) - High usage of phys memory and page outs
Hi!
I get a much higher usage of physical memory when I use shared memory than I would expect. Please, I would really appreciate someone confirming my conclusions, so that I can revise my ignorance of this subject.
In my experiments I create a shared memory segment of 200 MB and I have 7 processes attaching to it. I have a ws of 1 GB.
I expect to see what I see when I attach to the shared memory segment in terms of virtual size, i.e. SIZE in prstat. After attaching (mapping it) to the process all 7 processes are about ~203 MB each and this makes sense. RSS, in prstat, is about 3 MB for each process and this is ok to me too.
It is when each of the 7 processes starts writing a simple string like 'Hello!' in parallel to each page of its shared memory segment that I get surprised. I run out of memory on my ws after a while, and the system starts wildly paging physical memory out to disk so that the next 'Hello!' can be written to the next page of the shared memory segment. It seems that each page written to in the shared memory chunk is mapped into each process's private address space. This means the memory is not physically shared, just virtually shared. Is this correct?
Can memory be physically shared, so that my 7 processes only use 200 MB? ISM? DISM?
I create a shared memory segment in a C-program with the following calls:
shmid = shmget(key, SHM_SIZE, 0644 | IPC_CREAT);
data = shmat(shmid, (void *)0, 0);
Thanks in advance
/Sune
Your problem seemed reasonable. What were you doing wrong?
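On the question itself: pages mapped via System V shared memory are normally backed by the same physical pages in every attaching process. One quick way to convince yourself that a block really is shared is the sketch below, which uses Python's multiprocessing.shared_memory (a wrapper over POSIX shared memory, a close cousin of the shmget/shmat calls above); the sizes and strings are arbitrary:

```python
# Two processes write into one shared block and the parent sees both
# writes, because the pages are physically shared, not copied per
# process. Requires Python 3.8+ on a Unix-like OS; sizes are arbitrary.
from multiprocessing import Process
from multiprocessing.shared_memory import SharedMemory

def writer(name, offset, payload):
    shm = SharedMemory(name=name)            # attach, like shmat()
    shm.buf[offset:offset + len(payload)] = payload
    shm.close()                              # detach, like shmdt()

if __name__ == "__main__":
    shm = SharedMemory(create=True, size=1024 * 1024)  # like shmget(IPC_CREAT)
    try:
        p1 = Process(target=writer, args=(shm.name, 0, b"Hello!"))
        p2 = Process(target=writer, args=(shm.name, 100, b"World!"))
        p1.start(); p2.start(); p1.join(); p2.join()
        print(bytes(shm.buf[0:6]), bytes(shm.buf[100:106]))
    finally:
        shm.close()
        shm.unlink()                         # remove, like shmctl(IPC_RMID)
```

If per-process RSS still balloons when many processes touch a large segment, that is usually page-table overhead counted per process rather than duplicated data, which is exactly what Solaris ISM/DISM exist to reduce.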
Darren -
What's up with my Virtual Memory?
I've got a MacBook Pro 15" 2Ghz Core i7 running the 10.8.4 with all updates and 16gb of ram. I notice a couple of odd things over the course of a multiday session:
- Over a few days of usage, launching and quitting most apps but with Mail, Terminal, LaunchBar and Safari running constantly, memory and virtual memory usage grow to the point where I have only a few dozen MB of RAM left and over 12 GB of virtual memory consumed.
- Odder still: never, in any session, regardless of length, do my virtual memory statistics show more than 14 hits (not 14%, but 14 total, all during boot time), despite hundreds of thousands of page-ins and lookups, and millions (22 million+ this session) of page faults.
I've noticed that Safari is the main culprit in memory consumption, as it only frees up the bulk of what it uses if quit and relaunched. Even closing all tabs and windows has little to no effect on memory or virtual memory usage, but I have no idea why lookups, once the machine is booted, never result in a hit despite a constantly growing number and size of swapfiles. It's almost as if Lion is encrypting the swaps but then doesn't have access to read them, though I don't know how that might come about.
Anyone care to offer suggestions as to cause or debugging/troubleshooting tips?
Other notes:
This has been happening at least since 10.7.x.
Aside from apps listed above, other frequent apps are, Adobe Photoshop 5.5.x, Lightroom 5, LaunchBar, and ocassional MS Office 2011 with all updates.
Thanks,
David
About OS X Memory Management and Usage
Using Activity Monitor to read System Memory & determine how much RAM is used
Memory Management in Mac OS X
Performance Guidelines- Memory Management in Mac OS X
A detailed look at memory usage in OS X
Memory Usage Performance Guidelines- About the Virtual Memory System
Understanding top output in the Terminal
The amount of available RAM for applications is the sum of Free RAM and Inactive RAM. This will change as applications are opened and closed, or change from active to inactive status. The Swap figure represents an estimate of the total amount of swap space required for VM if used, but does not necessarily indicate the actual size of the existing swap file. If you are really in need of more RAM, that would be indicated by how frequently the system uses VM. If you open the Terminal and run the top command at the prompt, you will find information reported on Pageins and Pageouts. Pageouts is the important figure. If the value in the parentheses is 0 (zero), then OS X is not making instantaneous use of VM, which means you have adequate physical RAM for the system with the applications you have loaded. If the figure in parentheses is running positive and your hard drive is constantly being used (thrashing), then you need more physical RAM.
Adding RAM only makes it possible to run more programs concurrently. It doesn't speed up the computer nor make games run faster. What it can do is prevent the system from having to use disk-based VM when it runs out of RAM because you are trying to run too many applications concurrently or using applications that are extremely RAM dependent. It will improve the performance of applications that run mostly in RAM or when loading programs.
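The rules of thumb above condense into a few lines of arithmetic; the helper names are mine and the numbers come from the Activity Monitor listing earlier in this digest, so treat this purely as an illustration:

```python
# The rules of thumb above, restated as code. The helper names are
# illustrative only; they encode the advice, not any Apple API.

def available_ram_mb(free_mb, inactive_mb):
    """RAM available to applications: Free + Inactive."""
    return free_mb + inactive_mb

def needs_more_ram(pageouts_delta):
    """A growing Pageouts figure (the parenthesised value in top)
    means the system is actively swapping to disk-based VM."""
    return pageouts_delta > 0

# Figures from the Activity Monitor listing earlier in the thread.
print(available_ram_mb(846.1, 972.1))  # roughly 1818 MB usable by new apps
print(needs_more_ram(0))               # False: no instantaneous VM use
```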