Freeing Memory
Is it better to free memory with the Terminal command sudo purge, or with a "clean memory" app?
Can the Terminal command cause damage?
Thanks.
SSR,
Thank you for the feedback. My issue, it seems, was VLC. I say was, because I have since removed it.
I would check my free memory and it would be around 1.02 GB.
I would then watch a couple of movies or TV shows on VLC.
After finishing and shutting down VLC, my memory would be at 16MB and my MBP would be running poorly.
I used sudo purge and memory went back up to 1.75GB.
After reading your post, I removed all traces of VLC and installed MPlayer.
No more free memory problem.
Thank you.
Similar Messages
-
Hi,
I know this topic has been discussed many times before, but....
Is there any way to force the GC in 1.5 to release memory back to the operating system?
I have an application that loads many images into a custom page. Let's say it has allocated 5MB of heap (Runtime.totalMemory) before I load the page of images. I then load the page, and heap goes up to, say, 100MB. I then leave the page, and heap goes back down to about 5MB, but the operating system memory allocated to the process (via Task Manager or top) still remains at 100MB+.
It seems the JVM is very reluctant to release this memory. This is with the default GC and only -Xmx200m set as JVM params.
I can forcibly get it to release the memory, but only after FOUR System.gc() calls.
Arrggghhh.
Does anyone have any advice?
TIA,
James Bray

I suppose someday someone will explain why unused memory (which will be paged to the hard drive) needs to be released.

OK, so you are saying that the JVM does not release the memory back to Windows because Windows will automatically page the memory to disk, therefore freeing the memory anyway?

No.
What I am saying is that one of the following will be true...
- The memory is in use
- The memory is not in use and in physical memory
- The memory is not in use and is paged to the hard drive.
In the first case none of this discussion is relevant.
In the second case it doesn't matter because if something needs the physical memory then the OS will page it.
In the third case it takes up hard drive space so why would it matter?
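To make the three cases concrete: from inside the JVM, Runtime distinguishes heap claimed from the OS from heap actually in use. A minimal sketch (figures vary by JVM and GC settings):

```java
public class HeapReport {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long total = rt.totalMemory();   // heap currently claimed from the OS
        long free  = rt.freeMemory();    // claimed but not in use (cases 2 and 3 above)
        long used  = total - free;      // actually in use (case 1 above)
        System.out.printf("claimed=%dMB used=%dMB free=%dMB%n",
                total >> 20, used >> 20, free >> 20);
    }
}
```

Task Manager and top report the claimed figure, which is why the process can look large even when most of the heap is free.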
I understand that perfectly, and it probably makes more sense from a performance point of view than releasing the memory back to Windows (where it will probably just need reallocating at some point).
However, try telling a customer why a Java application sits at 100MB+ in TaskManager.
Try telling them that TaskManager is not an optimal tool for investigating what the computer is doing.
They don't know or care about the semantics of how virtual memory works; they just equate large processes to memory loss.
Err...either they should understand how memory on modern computers works or it shouldn't be their concern as to how much memory is being used. -
ByteBuffer.allocateDirect vs. NewDirectByteBuffer and freeing memory
Hi all, I have a very basic question that I'm afraid I couldn't find in any documentation. My question is: when a ByteBuffer in my JVM wraps a direct memory buffer, does it try to free that native memory when it gets GC'd? It seems like it would have to do this when I (the user) create it via ByteBuffer.allocateDirect; but if I use a JNI call into NewDirectByteBuffer, I understand that I'm taking responsibility for freeing that memory when I'm finished with it. Both will return true for isDirect(), but in the case where I created the buffer myself, passed it to NewDirectByteBuffer, and then freed that memory myself, will the ByteBuffer object know that it's not supposed to free that memory?
Thanks in advance.

koverton wrote:
Hi all, I have a very basic question that I'm afraid I couldn't find in any documentation. My question is, when a ByteBuffer in my JVM wraps a direct memory buffer does it try to free that native memory when it gets GC'd?
Yes, but I have found it does this very slowly. I.e. when the JVM runs low on heap space it will GC before it throws an OutOfMemoryError, but for non-heap space this doesn't happen so reliably.
It seems like it would have to do this when I (the user) create it via ByteBuffer.allocateDirect; but if I use a JNI call into NewDirectByteBuffer, I understand that I'm taking responsibility for freeing that memory when I'm finished with it. Both will return true for isDirect(), but in the case where I created the buffer myself and passed it to NewDirectByteBuffer, then freed that memory myself, will the ByteBuffer object know that it's not supposed to free that memory?
Your best bet is to check the finalize() code. You may be able to breakpoint it to see what it does in your case.
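For the allocateDirect side, the behaviour is easy to observe. A minimal sketch (the cleanup hook attached to allocateDirect buffers is JVM-implementation-specific; a buffer from JNI NewDirectByteBuffer carries no such hook, so the native code that allocated the memory must free it itself, and only once the buffer is unreachable):

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {
    public static void main(String[] args) {
        // Memory allocated here is owned by the JVM: the runtime releases the
        // native block some time after the buffer object is GC'd.
        ByteBuffer jvmOwned = ByteBuffer.allocateDirect(1 << 20);
        System.out.println(jvmOwned.isDirect());   // true
        System.out.println(jvmOwned.capacity());   // 1048576

        // A buffer created via JNI NewDirectByteBuffer would also report
        // isDirect() == true, but the JVM will never free its backing memory;
        // that responsibility stays with the native caller.
    }
}
```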
If you are not sure, why not call ByteBuffer.allocateDirect(). -
Garbage Collecting and freeing memory in JTree
Suppose you have a JTree whose model takes a lot of memory. After you are finished using the tree you want to free this memory, so you call the
tree.setModel(null) method, hoping everything will be fine. To be extra sure you even set the root to null.
What happens is that the tree view is cleared but the memory isn't.
My conclusion is that somewhere in the tree package there are references to the tree root, model or whatever that keep it from being garbage collected. To make sure my code doesn't keep any references to the tree model I am using a very thin class that only instantiates a tree and fills it with data. This class keeps no members, listeners, references and so on.
Calling System.gc() does nothing.
Is there a simple way to clear memory in a JTree?
without using weak references and other unwanted complexities.

Hi, thanks for the response. The C API version is 6.5.3 or 6.5.4.2, depending on the environment. I'll paste the code in here, but it is completely based on the sample programs so I'm not sure where else we could free up memory (any insights appreciated!):

ESS_MEMBERINFO_T *pChildMemberInfo = NULL;
sts = ESS_Init();
if (sts == ESS_STS_NOERR) {
    sts = ESS_Login(srvrName, adminUserName, password);
    if (sts == ESS_STS_NOERR) {
        // set the active db
        sts = EssSetActive(hCtx, appName, dbName, &Access);
        if (sts == ESS_STS_NOERR) {
            memset(&Object, '\0', sizeof(Object));
            // open the outline for use in subsequent calls
            Object.hCtx = hCtx;
            Object.ObjType = ESS_OBJTYPE_OUTLINE;
            Object.AppName = appName;
            Object.DbName = dbName;
            Object.FileName = dbName;
            sts = EssOtlOpenOutline(hCtx, &Object, ESS_FALSE, ESS_FALSE, &hOutline);
            if (sts == ESS_STS_NOERR) {
                // get member names from outline, so this section includes a number of
                //   sts = EssGetMemberInfo(hCtx, category, &pChildMemberInfo);
                // calls to query member names.
                // Then some calls are made to free these resources:
                if (pChildMbrInfo) {
                    EssOtlFreeStructure(hOutline, 1, ESS_DT_STRUCT_MBRINFO, pChildMbrInfo);
                }
                if (pChildMemberInfo) {
                    EssFree(hInst, pChildMemberInfo);
                }
                EssOtlCloseOutline(hOutline);
            }
        }
        ESS_Logout();
    }
    ESS_Term();
}
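As a workaround for the JTree question above, a common approach is to detach listeners and swap in a tiny empty model rather than null before dropping the tree reference. A minimal sketch; whether it helps depends on which Swing internals actually hold the reference:

```java
import javax.swing.JTree;
import javax.swing.event.TreeSelectionListener;
import javax.swing.tree.DefaultMutableTreeNode;
import javax.swing.tree.DefaultTreeModel;

public class TreeCleanup {
    // Detach everything that might pin the old model before the tree itself
    // is released (the tree is assumed to be discarded right afterwards).
    static void release(JTree tree) {
        for (TreeSelectionListener l : tree.getTreeSelectionListeners()) {
            tree.removeTreeSelectionListener(l);
        }
        // Swap in a tiny empty model instead of null: parts of the UI
        // delegate may keep a reference to the last model they painted.
        tree.setModel(new DefaultTreeModel(new DefaultMutableTreeNode()));
        tree.clearSelection();
    }
}
```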
-
I need to free up memory space to load windows on boot camp.. but my computer is saying I have small amount of memory remaining. I only really use my computer for work files, which although there are a large number are not significant enough to take up this space.
How can I find out what is taking up all the room?
FYI, memory = RAM, not hard disk storage space, which is what you are actually referring to.
Use Disk Inventory X:
http://www.derlien.com -
Freeing memory for Internal Frames when they are closed
Does anyone have any advice for a way to completely free all references to InternalFrames and their contents when they are closed? It appears that many references to the frame are held deep within Swing. Our application holds references to large memory structures in the InternalFrame, which means that we have a large memory leak whenever a frame is closed.
try to remove all listeners added to components which are part of the JIF before you dispose of the JIF.
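That advice can be sketched as a recursive listener sweep before dispose(). This is illustrative only, shown for mouse listeners; the same pattern applies to other listener types:

```java
import java.awt.Component;
import java.awt.Container;
import java.awt.event.MouseListener;
import javax.swing.JInternalFrame;

public class InternalFrameCleanup {
    // Recursively strip mouse listeners from the component tree so that no
    // listener keeps the frame's heavy content reachable after closing.
    static void stripListeners(Component c) {
        for (MouseListener l : c.getMouseListeners()) {
            c.removeMouseListener(l);
        }
        if (c instanceof Container) {
            for (Component child : ((Container) c).getComponents()) {
                stripListeners(child);
            }
        }
    }

    static void close(JInternalFrame frame) {
        stripListeners(frame);
        frame.getContentPane().removeAll();  // drop references to heavy content
        frame.dispose();
    }
}
```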
;o)
V.V. -
GC stops freeing memory after a while
Hi,
We have four managed servers running on WebLogic 8.1 with HotSpot, and are using the following GC settings for our application:
GC_PARAMS="-XX:ParallelGCThreads=4 -XX:+CMSParallelRemarkEnabled -XX:MaxTenuringThreshold=10 -XX:PermSize=128m "
JAVA_OPTIONS="-D${SERVER_NAME} -D${WL_DOMAIN_NAME} -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Ddefault.client.encoding=UTF-8 -Dweblogic.s
ecurity.SSL.ignoreHostnameVerify=false -Dwlw.iterativeDev=false -Dwlw.testConsole=false -Dwlw.logErrorsToConsole=false"
MEM_ARGS="-Xms865m -Xmx865m -XX:CMSInitiatingOccupancyFraction=60 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:PermSize=128m -XX:MaxPermSize=1
28m -XX:SurvivorRatio=3 -XX:MaxPermSize=128m -XX:SurvivorRatio=3 -Xloggc:${LOG_HOME}/${SERVER_NAME}/gc_pipe ${GC_PARAMS} "
JAVA_VM="-server"
The following is the output from the GC analyser Perl script:
Processing gc_pipe ...
Call rate = 55 cps ...
Active call-setup duration = 32000 ms
Number of CPUs = 1
DEBUG timestr: bt=60.884 et=7831.278 to=7770394
---- GC Analyzer Summary : gc_pipe ----
Application info:
Application run time = 7770394.00 ms
Heap space = 868 MB
Eden space = 130944 KB
Semispace = 64 KB
Tenured space = 757760 KB
Permanent space = 131072 KB
Young GC --------- (Copy GC + Promoted GC ) ---
Copy gc info:
Total # of copy gcs = 180
Avg. size copied = 4469555 bytes
Periodicity of copy gc = 43086.2519483333 ms
Copy time = 82 ms
Percent of pause vs run time = 0%
Promoted gc info:
Total number# of promoted GCs = 1365
Average size promoted = 2509130 bytes
Periodicity of promoted GC = 5637.85 ms
Promotion time = 54.74 ms
Percent of pause vs run time = 0.96 %
Young GC info:
Total number# of young GCs = 1545
Average GC pause = 57.99 ms
Copy/Promotion time = 57.99 ms
Overhead(suspend,restart threads) time = -10.68 ms
Periodicity of GCs = 4971.39 ms
Percent of pause vs run time = 1.15 %
Avg. size directly created old gen = 0.43 KB
Old concurrent GC info :
Heap size = 757760 KB
Avg. initial-mark threshold = 67.78 %
Avg. remark threshold = 0.00 %
Avg. Resize size = 0.00 KB
Total GC time (stop-the-world) = 133.22 ms
Concurrent processing time = 2338.00 ms
Total number# of GCs = 1
Average pause = 133.22 ms
Periodicity of GC = 7770260.78 ms
Percent of pause vs run time = 0.00 %
Percent of concurrent processing vs run time = 0.03 %
Permanent Generation GC info:
Total GC time = 0;
Total number# of GCs = 72;
Average pause = 0.00;
Periodicity = 107922.14;
Percent of pause vs run time = 0.00;
Total Old GC info (Concurrent + MS + Perm Gen):
Total GC time (stop-the-world) = 133.22 ms
Total Concurrent processing time = 2338.00 ms
Total number# of GCs = 1
Average pause = 133.22 ms
Periodicity of GC = 7770260.78 ms
Percent of stop-the-world pause vs run time = 0.00 %
Percent of concurrent processing vs run time = 0.03 %
Total (young and old) GC info:
Total count = 1546
Total GC time = 89725.68 ms
Average pause = 58.04 ms
Percent of pause vs run time = 1.15 %
Call control info:
Call-setups per second (CPS) = 55
Call rate, 1 call every = 18 ms
Number# call-setups / young GC = 273.426591796764
Total call throughput = 422444.08
Total size of short lived data / call-setup = 481217 bytes
Total size of long live data / call-setup = 9176 bytes
Average size of data / call = 490393
Total size of data created per young gen GC = 134086656 bytes
Execution efficiency of application:
GC Serial portion of application = 1.15%
Actual CPUs = 1
CPUs used for concurrent processing = 1.00
Application Speedup = 1.00
Application Execution efficiency = 1.00
Application CPU Utilization = 99.97 %
Concurrent GC CPU Utilization = 0.03 %
--- GC Analyzer End Summary ----------------
#--- Detailed and confusing calculations; dig into this if you need more info about what is happening above ----
---- GC Log stats ...
---- Young generation calcs ...
Average young gen dead objects size / GC = 131577525.92 bytes
Average young gen live objects size / GC cycle = 2509130.08 bytes
Ratio of short lived / long lived for young GC = 52.44
Average young gen size promoted = 0.00 bytes
Average number# of Objects promoted = 0
Total promoted times = 74723.82 ms
Average object promoted times = 0.00 ms
Total promoted GCs = 1365
Periodicity of promoted GCs = 5637.85 ms
Total copy times = 14868.65 ms
Total copy GCs = 180
Average copy GC time = 82.60 ms
Periodicity of copy GCs = 43086.25 ms
Total number# of young GCs = 1545
Total time of young GC = 89592.47 ms
Average young GC pause = 57.99 ms
Periodicity of young GCs = 4971.39 ms
--- Old generation concgc calcs ....
Total concurrent old gen times = 133.22 ms
Total number# of old gen GCs = 1
Average old gen pauses = 133.22 ms
Periodicity of old gen GC = 7770260.78 ms
--- Traditional MS calcs ...
Total number# mark sweep old GCs = 0
Total mark sweep old gen time = 0.00 ms
Average mark sweep pauses = 0.00 ms
Average free threshold = 0.00 %
Total mark sweep old gen application time = 0.00 ms
Average mark sweep apps time = 0.00 ms
--- Mark-Compact calcs ...
Total time taken by MC gc = 133.22 ms
Total number# of old gen GCs = 1
Total number# of old gen pauses with 0 ms = 133
Periodicity of MC gc = 0.00 ms
---- GC as a whole ...
Total GC time = 89725.68 ms
Average GC pause = 58.04 ms
Total # of gcs = 1546.00 ms
--- Heap calcs ...
Eden = 134086656 Bytes
Semispace = 65536 Bytes
Old gen heap = 775946240 Bytes
Perm gen heap = 134217728 Bytes
Total heap = 910098432.00 Bytes
## for concgc
Live objects per old GC = 0.00 KB
Dead objects per old GC = 0.00 KB
Ratio of (short/long) lived objects per old GC = 0.00
--- Memory leak verification ...
Total size of data promoted = 3344690.00 KB
Total size of data directly created in
old generation = 666.00 KB
Total size of data in old gen = 3345356.00 KB
Total size of data collected throughout app. run = 0.00 KB
--- Active duration calcs ...
Active duration of each call = 32000 ms
Number# number of calls in active duration = 1759
Number# of promotions in active duration = 5
Long lived objects(promoted objects) / active duration = 14241618.90 bytes
Short lived objects (tenured or not promoted) / active duration = 846941930.91 bytes
Total objects created / active duration = 861183549.81 bytes
Percent% long lived in active duration = 1.65 %
Percent% short lived in active duration = 98.35 %
Number# of active durations freed by old GC = 0.00
Ratio of live to freed data = 0.00
Average resized memory size = 0.00
Time when init GC might take place = 0.00 ms
Time when remark GC might take place = 0.00 ms
Periodicity of old GC = 7770260.78 ms
--- Application run times calcs ...
Total application run times during young GC = 5572109.73 ms
Total application run times during old GC = 0.00 ms
Total application run time = 5511433.32 ms
Calculated or specified app run time = 7770394.00 ms
Ratio of young (gc_time/gc_app_time) = 0.02
Ratio of young (gc_time/app_run_time) = 0.01
Ratio of (old gc_time/total gc_app_time) = 0.00
Ratio of (old gc_time/app_run_time) = 0.00
Ratio of total (gc_time/gc_app_time) = 0.02
Ratio of total (gc_time/app_run_time) = 0.01
weloadm@vwrpa41s:/var/applogs/weblogic/live/managed2/gc_128
What happens is that after about 45-50 minutes (sometimes 65-70 minutes) under a load of 800 users, major GC is unable to free up any memory at all and memory usage keeps increasing. The above output is for the managed server that dies due to lack of memory.
-
I just noticed something strange about Metro apps. If I close them they still show up in the Task Manager as running and using memory. Is this a known glitch, and is MS planning on fixing it? TIA
Hi,
This is the Store app memory management mechanism introduced in Windows 8. It lets a Store app start quickly the next time it is opened after being closed. If system memory runs low, Windows releases the suspended app's memory.
You can refer to the content of the link below for more details:
http://blogs.msdn.com/b/b8/archive/2012/04/17/reclaiming-memory-from-metro-style-apps.aspx
Roger Lu
TechNet Community Support -
Freeing memory space by cleaning up iPhoto library
I would like to free some space on my MacBook, so I plan to transfer the photo files generated by iPhoto to an external drive. How do I do it? Will removing the photos from the iPhoto library affect its performance?
Are you running a Managed or a Referenced Library?
A Managed Library is the default setting: iPhoto copies files into the iPhoto Library when importing, and the files are then stored in the Library package.
A Referenced Library is when iPhoto does NOT copy the files into the iPhoto Library when importing, because you made a change at iPhoto -> Preferences -> Advanced. The files are then stored wherever you put them, not in the Library package. In this scenario you are responsible for the file management.
If you're running a Managed Library:
Make sure the drive is formatted Mac OS Extended (Journaled)
1. Quit iPhoto
2. Copy the iPhoto Library from your Pictures Folder to the External Disk.
3. Hold down the option (or alt) key while launching iPhoto. From the resulting menu select 'Choose Library' and navigate to the new location. From that point on this will be the default location of your library.
4. Test the library and when you're sure all is well, trash the one on your internal HD to free up space.
Regards
TD -
Ejecting disk issues and freeing memory
Ok, so sometimes the disk gets stuck and I can't find a way to get it to eject. What should I do?
Also, something is taking up space on my MacBook Pro and I don't know what it is. I deleted unwanted programs, videos, and movies, but I still want to free more space. When I click "About This Mac" it says 34.22GB free out of 211GB. There is this huge yellow portion that says 61.35; what is that? Thanks.

In Mac Help:
If you’re trying to eject a CD or DVD:
Choose Apple menu > Log Out, and then log in again. Try to eject the disc again.
If you still can’t eject the CD or DVD, choose Apple menu > Restart. While your computer restarts, hold down the mouse button or trackpad button until the disc is ejected.
The yellow portion is a catch-all category for items that do not fall into the other ones. The best way to find what you don't want is to download OmniDiskSweeper (free) from the Internet and open it. It will show you, in a sequential format, all your files and the space they are taking.
Ciao. -
Follow up on an old thread about memory utilization
This thread was active a few months ago; unfortunately, it's taken me until now to have enough spare time to craft a response.
From: SMTP%"[email protected]" 3-SEP-1996 16:52:00.72
To: [email protected]
CC:
Subj: Re: memory utilization
As a general rule, I would agree that memory utilization problems tend to be developer-induced. I believe that is generally true for most development environments. However, this developer was having a little trouble finding out how NOT to induce them. After scouring the documentation for any references to object destructors, or clearing memory, or garbage collection, or freeing objects, or anything else we could think of, all we found was how to clear the rows from an Array object. We did find some reference to setting the object to NIL, but no indication that this was necessary for the memory to be freed.
I believe the documentation, and probably some Tech-Notes, address the issue of freeing memory.
Automatic memory management frees a memory object when no references to the memory object exist. Since references are the reason a memory object lives, removing the references is the only way that memory objects can be freed. This is why the manuals and Tech-Notes talk about setting references to NIL (i.e. freeing memory in an automatic system is done by NILing references, not by calling freeing routines). This is not an absolute requirement (as you have probably noticed, most things are freed even without setting references to NIL), but it accelerates the freeing of 'dead' objects and reduces memory utilization, because the system tends to carry around fewer 'dead' objects.
It is my understanding that in this environment, the development tool (Forte) claims to handle memory utilization and garbage collection for you. If that is the case, then it is my opinion that it should be nearly impossible for the developer to create memory-leakage problems without going outside the tool and allocating the memory directly. If that is not the case, then we should have destructor methods available to us so that we can handle them correctly. I know when I am finished with an object, and I would have no problem calling a "destroy" or "cleanup" method. In fact, I would prefer that to just wondering if Forte will take care of it for me.
It is actually quite easy to create memory leaks. Here are some examples:
o Have a heap attribute in a service object. Keep inserting things into the heap and never take them out (i.e. forgot to take them out). Since service objects are always live, everything in the heap is also live.
o Have an exception handler that catches exceptions and doesn't do anything with the error manager stack (i.e. it doesn't call task.ErrMgr.Clear). If the handler is activated repeatedly in the same task, the stack of exceptions will grow until you run out of memory or the task terminates (task termination empties the error manager stack).
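The first leak pattern has a direct analogue in any garbage-collected language: a collection reachable from a long-lived object keeps its contents alive forever. A minimal Java sketch (names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class ServiceObjectLeak {
    // Long-lived "service object" state: reachable for the whole program run,
    // so the GC can never reclaim anything stored in it.
    static final List<byte[]> heapAttribute = new ArrayList<>();

    static void handleRequest() {
        byte[] workBuffer = new byte[1024];
        heapAttribute.add(workBuffer);   // inserted...
        // ...but never removed: each call grows live memory by 1 KB.
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) handleRequest();
        System.out.println(heapAttribute.size());  // 1000 buffers still live
    }
}
```

No amount of garbage collection helps here: the objects are not garbage, because the live service object still references them.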
It seems to me that this is a weakness in the tool that should be addressed.
Does anyone else have any opinions on this subject?
Actually, the implementation of the advanced features supported by the Forte product results in some complications in areas that can be hard to explain. Memory management happens to be one of the areas most affected. A precise explanation of a non-deterministic process is not possible, but the following attempts to explain the source of the non-determinism.
o The ability to call from compiled C++ to interpreted TOOL and back to compiled C++.
This single ability causes most of the strange effects mentioned in this thread. For C++ code the location of all variables local to a method is not known (i.e. C++ compilers can't tell you at run-time what is a variable and what isn't). We use the pessimistic assumption that anything that looks like a reference to a memory object is a reference to a memory object. For interpreted TOOL code the interpreter has exact knowledge of what is a reference and what isn't. But the TOOL interpreter is itself a C++ method. This means that any memory objects referenced by the interpreter during the execution of TOOL code could be stored in local variables in the interpreter. The TOOL interpreter runs until the TOOL code returns or the TOOL code calls into C++. This means that many levels of nested TOOL code can be the source of values assigned to local variables in the TOOL interpreter.
This is the complicated reason that answers the question: why doesn't a variable that is created and only used in a TOOL method that has returned get freed? It is likely that the variable is referenced by local variables in the TOOL interpreter method. This is also why setting the variable to NIL before returning doesn't seem to help. If the variable in question is an Array, then invoking Clear() on the Array seems to help, because even though the Array is still live, the objects referenced by the Array have fewer references. The other common occurrence of this effect is in a TextData that contains a large string. In this case, invoking SetAllocatedSize(0) can be used to NIL the reference to the memory object that actually holds the sequence of characters. Compositions of Arrays and TextDatas (i.e. an Array of TextDatas that all have large TextDatas) can lead to even more problems.
When the TOOL code is turned into a compiled partition this effect is not noticed, because the TOOL interpreter doesn't come into play and things execute the way most people expect. This is one area we try to improve upon, but it is complicated by the 15 different platforms, and thus C++ compilers, that we support. Changes that work on some machines behave differently on other machines. At this point in time, it occasionally still requires that a TOOL programmer actively address problems. Obviously we try to reduce this need over time.
o Automatic memory management for C++ with support for multi-processor threads.
Supporting automatic memory management for C++ is not a very common feature. It requires a coding standard that defines what is acceptable and what isn't. Additionally, supporting multi-processor threads adds its own set of complications. Luckily TOOL users are insulated from this because the TOOL to C++ code generator knows the coding standard. In the end you are impacted by the C++ compiler and possibly the differences that occur between different compilers and/or different processors (i.e. Intel x86 versus Alpha). We have seen applications that had memory utilization differences of up to 2:1.
There are two primary sources of differences.
The first source is how compilers deal with dead assignments. A typical TOOL fragment that is trying to be memory-manager friendly might perform the following:

    temp : SomeObject = new;
    ... // Use temp
    temp = NIL;
    return;
When this is translated to C++ it looks very similar, in that temp will be assigned the value NULL. Most compilers are smart enough to notice that 'temp' is never used again because the method is going to return immediately, so they skip setting 'temp' to NULL. In this case it should be harmless that the statement was ignored (see the next example for a different variation). In more complicated examples that involve loops (especially long-lived event loops), a missed NIL assignment can lead to leaking the memory object whose reference didn't get set to NIL (incidentally, this is the type of problem that causes the TOOL interpreter to leak references).
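The loop case can be sketched in C++ (again with reference counting standing in for the collector, and made-up names): dropping the per-iteration reference is what keeps the previous event's payload from staying live across the wait, whereas a NIL just before a return really is dead code.

```cpp
#include <memory>
#include <vector>

// Simulated event loop. With reset_each_iteration the handler's
// reference is dropped (TOOL: temp = NIL) before the "wait", so the
// payload is never live at that point; without the reset, the current
// iteration's payload is still reachable while waiting.
int payloads_live_during_wait(bool reset_each_iteration) {
    int live = 0;
    for (int i = 0; i < 3; ++i) {
        auto payload = std::make_shared<std::vector<char>>(1000);
        std::weak_ptr<std::vector<char>> probe = payload;
        // ... handle the event using payload ...
        if (reset_each_iteration) payload.reset();  // temp = NIL
        if (!probe.expired()) ++live;               // "during the wait"
    }
    return live;
}
```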
The second source is a complicated interaction caused by the history of method invocations. Consider the following:

Method A() invokes method B(), which invokes method C().
Method C() allocates a temporary TextData, invokes SetAllocatedSize(1000000), does some more work, and then returns.
Method B() returns.
Method A() now invokes method D().
Method D() allocates something that causes the memory manager to look for memory objects to free.

Now, even though we have returned out of method C(), we have started invoking methods again. This causes us to re-use portions of the C++ stack used to maintain the history of method invocation and space for local variables. There is some probability that the reference to the 'temporary' TextData will still be visible to the memory manager because it was not overwritten by the invocation of D() or anything invoked by method D().
This example answers questions of the form: Why does setting a local variable to NIL, returning, and then invoking task.Part.Os.RecoverMemory not cause the object referenced by the local variable to be freed?
In most cases these effects cause memory utilization to be slightly higher than expected (in well-behaved cases it's less than 5%). This is a small price to pay for the advantages of automatic memory management.
An object-oriented programming style supported by automatic memory management makes it easy to extend existing objects or sets of objects by composition. For example:

Method A() calls method B() to get the next record from the database. Method B() is used because we always get records (objects) of a certain type from method B(), so that we can reuse code.
Method A() enters each record into a hash table so that it can implement a cache of the last N records seen.
Method A() returns the record to its caller.
With manual memory management there would have to be some interface that allows method A() and/or the caller of A() to free the record. This requires that the programmer have a lot more knowledge about the various projects and classes that make up the application. If freeing doesn't happen you have a memory leak; if you free something while it's still being used, the results are unpredictable and most often fatal.
With automatic memory management, method A() can 'free' its reference by removing the reference from the hash table. The caller can 'free' its reference either by setting the reference to NIL or by getting another record and referring to the new record instead of the old one.
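As an illustration only (not Forte code), here is a minimal C++ cache along the lines of method A() above; evicting an entry from the table is the 'free'. The class and member names are made up, and duplicate keys are not handled, to keep the sketch short.

```cpp
#include <cstddef>
#include <deque>
#include <memory>
#include <unordered_map>

struct Record { int key; /* ... other fields ... */ };

// Bounded cache of the last N records. Under automatic management,
// erasing the map entry is how method A() "frees" its reference --
// there is no explicit delete anywhere.
class RecordCache {
public:
    explicit RecordCache(std::size_t n) : limit_(n) {}

    void put(std::shared_ptr<Record> r) {
        int key = r->key;
        order_.push_back(key);
        table_[key] = std::move(r);
        if (order_.size() > limit_) {        // evict the oldest entry:
            table_.erase(order_.front());    // this is the "free"
            order_.pop_front();
        }
    }

    std::shared_ptr<Record> get(int key) const {
        auto it = table_.find(key);
        return it == table_.end() ? nullptr : it->second;
    }

private:
    std::size_t limit_;
    std::deque<int> order_;                               // insertion order
    std::unordered_map<int, std::shared_ptr<Record>> table_;
};
```

The caller can likewise drop its reference by letting its shared_ptr go out of scope or reassigning it to the next record.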
Unfortunately, this convenience and power doesn't come for free. Consider the following, which comes from the Forte run-time system:
A Window-class object is a very complex beast. It is composed of two primary parts: the UserWindow object, which contains the variables declared by the user, and the Window object, which contains the object representation of the window created in the window workshop. The UserWindow and the Window reference each other. The Window references the Menu and each Widget placed on the Window directly. A compound Window object, like a Panel, can also have objects placed in itself; these are typically called the children. Each of the children also has to know the identity of its Mom, so each refers to its parent object. It should be reasonably obvious that, starting from any object that makes up the window, any other object can be found.
This means that if the memory manager finds a reference to any object in the Window, it can also find all other objects in the window. Now, if a reference to any object in the Window can be found on the program stack, all objects in the window can also be found. Since there are so many objects, and the work involved in displaying a window can be very complicated (e.g. the automatic geometry management that lays out the window when it is first opened or resized), there are potentially many different references that would cause the same problem. This leads to a higher than normal probability that a reference exists that can cause the whole set of Window objects to not be freed.
We solved this problem in the following fashion:

Added a new method called RecycleMemory() on UserWindow.
Documented that when a window is not going to be used again, it is preferable that RecycleMemory() is invoked instead of Close().
The RecycleMemory() method basically sets all references from parent to child to NIL and all references from child to parent to NIL. Thus all objects are isolated from the other objects that make up the window.
Changed a few methods on UserWindow, like Open(), to check if the caller is trying to open a recycled window and throw an exception.

This was feasible because the code to traverse the parent/child relationship already existed and was being used at close time to perform other bookkeeping operations on each of the Widgets.
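A toy C++ model of the RecycleMemory() idea (illustrative names only; reference counting stands in for the real collector): NIL every parent-to-child and child-to-parent link so the objects are isolated, and a single stray reference can no longer keep the whole window graph alive.

```cpp
#include <memory>
#include <vector>

// Toy parent/child window graph mirroring UserWindow/Widget: parents
// reference children and every child references its Mom, so any one
// reference reaches the whole set.
struct Widget {
    std::shared_ptr<Widget> parent;                 // child -> Mom
    std::vector<std::shared_ptr<Widget>> children;  // parent -> child
    bool recycled = false;
};

// RecycleMemory(): sever every parent<->child link, isolating each
// object from the rest of the window.
void recycle(const std::shared_ptr<Widget>& node) {
    for (auto& child : node->children) {
        child->parent.reset();   // child -> parent := NIL
        recycle(child);
    }
    node->children.clear();      // parent -> child := NIL
    node->recycled = true;       // Open() could now refuse this window
}
```

After recycle(), each object is freed as soon as its own last reference goes away, instead of the whole graph surviving together.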
To summarize:
Automatic memory management is less error prone and more productive, but doesn't come totally for free. There are things that the programmer can do that assist the memory manager:

o Set object references to NIL when known to be correct (this is the way memory is deallocated in an automatic system).
o Use methods like Clear() on Array and SetAllocatedSize() on TextData that allow these objects to set their internal references to NIL when known to be correct.
o Use the RecycleMemory() method on windows, especially very complicated windows.
o Build similar types of methods into your own objects when needed.
o If you build highly connected structures that are very large in the number of objects involved, think about how the structure might be broken apart gracefully (it defeats some of the purpose of automatic management to go to great lengths to deal with the problem).
o Since program stacks are the source of the 'noise' references, try to do things with fewer tasks (this was one of the reasons that we implemented event handlers, so that a single task can control many different windows).
Even after doing all this, it's easy to still have a problem. Internally we have access to special tools that can help point at the problem so that it can be solved. We are attempting to give users UNSUPPORTED access to these tools for Release 3. This should allow users to more easily diagnose problems. It also tends to enlighten one about how things are structured and/or point out inconsistencies that are the source of known/unknown bugs.
Derek
Derek Frankforth [email protected]
Forte Software Inc. [email protected]
1800 Harrison St. +510.869.3407
Oakland CA, 94612
-
I believe he means to reformat it like a floppy disk.
Go into My Computer and locate the drive letter associated with your iPod (it normally says iPod in it, and shows under removable storage).
Right click on it and choose Format - make sure the "quick format" option is not checked. Then let it format.
If that doesn't work, there are steps somewhere in the 5th gen forum (don't have the link off hand) to use usbstor.sys to update the USB drivers for the Nano/5th gen. -
Is the Memory Suite thread safe?
Hi all,
Is the memory suite thread safe (at least when used from the Exporter context)?
I ask because I have many threads getting and freeing memory, and I've found that I sometimes get back NULL. This, I suspect, is the problem behind all the talk in the user forum about CS6 crashing with CUDA enabled. I'm starting to suspect that there is a memory management problem when there is also a lot of memory allocation and freeing going on by the CUDA driver. It seems that the faster the nVidia card, the more likely it is to crash. That would suggest the CUDA driver (i.e. the code that manages the scheduling of the CUDA kernels) is in some way coupled to the memory used by Adobe, or by Windows alloc/free, too.
I replaced the memory functions with _aligned_malloc/_aligned_free and it seems far more reliable. Maybe it's because the OS malloc/free are thread safe, or maybe it's because it's pulling from a different pool of memory (vs the Memory Suite's pool or the CUDA pool).
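For what it's worth, a minimal sketch of that swap, assuming plain OS allocation is acceptable for the frame buffers; frame_alloc/frame_free are made-up names (not Premiere SDK calls), and std::aligned_alloc is used here as the portable C++17 stand-in for the Windows-only _aligned_malloc:

```cpp
#include <cstdint>
#include <cstdlib>

// Drop-in replacement for a suite NewPtr/DisposePtr pair using the OS
// allocator, which is thread-safe per the C/C++ runtime.
void* frame_alloc(std::size_t byteCount) {
    // std::aligned_alloc requires the size to be a multiple of the
    // alignment, so round up; 16 bytes suits 4x32f and 4x8u pixels.
    std::size_t rounded = (byteCount + 15) & ~static_cast<std::size_t>(15);
    return std::aligned_alloc(16, rounded);
}

void frame_free(void* p) { std::free(p); }
```

On Windows with MSVC you would keep _aligned_malloc/_aligned_free instead, since std::aligned_alloc is not available there.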
comments?
Edward
Zac Lam wrote:
The Memory Suite does pull from a specific memory pool that is set based on the user-specified Memory settings in the Preferences. If you use standard OS calls, then you could end up allocating memory beyond the user-specified settings, whereas using the Memory Suite will help you stick to the Memory settings in the Preferences.
When you get back NULL when allocating memory, are you hitting the upper boundaries of your memory usage? Are you getting any error code returned from the function calls themselves?
I am not hitting the upper memory bounds - I have several customers that have tens of GB free.
There is no error return code from the ->NewPtr() call.
PrMemoryPtr (*NewPtr)(csSDK_uint32 byteCount);
A NULL pointer is how you detect a problem.
Note that changing the size of the ->ReserveMemory() doesn't seem to make any difference as to whether you'll get a memory ptr or NULL back.
btw my NewPtr size is either
W x H x sizeof(PrPixelFormat_YUVA4444_32f) or
W x H x sizeof(PrPixelFormat_YUVA4444_8u)
and allocation happens concurrently on #cpus threads (e.g. 16 to 32 instances at once is pretty common).
The more processing power the nVidia card has, the faster it seems to fall over; e.g. I don't see it at all on a GTS 250, but do on a GTX 480, Quadro 4000 & 5000, and GTX 660.
I think there is a threading issue, and an issue with how the Memory Suite's pool interacts with the CUDA memory pool - note that CUDA sets RESERVED (aka locked) memory, which can easily cause a fragmentation problem if you're not using the OS memory handler. -
Hello, I was just watching a presentation where the spokesperson stated that Windows VMs (8.1, I assume) on a Hyper-V host with Dynamic Memory enabled can possibly use as little as 300 MB of RAM when idle.
My 8.1 Enterprise VMs with Dynamic Memory enabled still use about 2600 MB of RAM when sitting idle. Is this normal? It would be great to get this usage down. I hear rumors of 'MinWin' but don't think this really relates to a VDI deployment.
Also, I have configured 8 GB for startup RAM if that matters.
Thanks
- Mr. Sid
The amount of memory used is very dependent upon the workload on the system. If nothing else on the Hyper-V host is asking for memory, the VM will not simply give it up. If you have plenty of RAM on your host, and the total amount of memory needed by all the VMs and the host is less than the amount of RAM on the host, you will not likely see any decrease in the size of the VMs. With dynamic memory, if there is less physical memory than the sum of all the running VMs, one VM may need memory, so it asks for it. This might cause the memory manager to request to take memory away from another VM. That VM looks at what it has allocated and determines what it can give up. Without this 'external' pressure, there is no need for the VM to give up memory. In concept, this is no different than what happens among processes on a stand-alone system, except that when a process exits, it releases its memory. So, if the VM is sitting idle, that means processes are not exiting, so no memory is being freed. If there were other machines needing the memory, it is possible that the idle VM could give up memory to those VMs.
8 GB for startup memory seems really excessive for a Windows 8.1 installation. I can't think of any sort of typical workstation application that would require that much memory to start up. But that makes no difference on freeing memory.
.:|:.:|:. tim -
Hi!
My Nokia N97 failed yesterday so I had to hard reset it and then reinstall the operating software. I am now re-programming all the settings on my phone, but for some reason, in SMS, I can not set the memory to E: Mass Memory; it will only stay on C: Phone Memory or F: Card. However, I had a bunch of important SMSs on my phone on the E: Memory, and now I can't access them. Any idea on how to solve this? Do I need to reinstall the phone software AGAIN? I press the E: Mass Memory link and the phone just acts as if nothing happened.
Can someone please help me?
Thanks
@hadimassa: Thanks very much for the tip. There are indeed a lot of active processes. Most of them seem to be there with nothing you can do about them. Some of them were due to applications I installed but did not think of as being loaded (e.g. Truphone).
But none of these could explain the out of memory problems I was having. I really had only a few applications running that I could influence
I have fixed the problem, at least temporarily: I have removed the battery from the phone, reinserted the battery and the N97 feels as snappy as immediately after upgrading to the v20 firmware. And the memory issues have gone. I hope the memory issue does not come back. But it feels like some applications are not freeing memory and the N97 doesn't interfere.
Regards, Jean-Marc -
Clearing memory in ActionScript 3.0
Does anyone know how to clear memory in AS3.0? My app loads a lot of SWFs, and the memory usage keeps accumulating and does not go down until the app crashes. I've tried flash.system.System.gc(), setting movie clips to null, and removeChild(), but it seems no luck for me.
This is how I trace the memory usage:
var mem:String = Number( System.totalMemory / 1024 / 1024 ).toFixed( 2 );
trace( "RAM Usage: " + mem + "MB" ); // e.g. traces "24.94MB"
Any help means a lot to me. Appreciate it in advance.
Amir
This article should give you more insight:
http://www.rgbeffects.com/blog/uncategorized/flash-optimization-freeing-memory/
It's also worth noting that manually calling the garbage collector only works in AIR and the debug version of the Flash Player.