Lightroom's large memory footprint

After massaging many pictures in "develop" mode, the system became very slow (locking up for 30 seconds at a time). I opened Process Explorer and found Lightroom was consuming 1.8 GB of virtual memory with a working set of about 1.2 GB. This seems quite excessive for general photo editing; I'm really only performing simple adjustments like color and contrast.
I closed Lightroom and restarted it, and it worked fine again for another 50 or 60 pictures, at which point the slowness returned and the memory footprint had climbed back up. Now that I know what to expect, I'm shutting LR down every 30 pictures or so to avoid the excessive memory consumption.
I suspect there is a memory leak or creep in LR.
I have a machine with 4 GB of RAM, running Vista Ultimate.

EricP,
LR does "accumulated background work" when nothing else is going on, especially if you have the Library set to All Photos. It also appears that LR is very sensitive to where the pagefile(s) are located and how big they are. I can only speak to XP Pro, though; Vista is a different animal. You might try setting a controlled pagefile size [1.5x RAM for both the Min and Max values] on each of the hard drives you have. Also set the system to keep as much of the kernel in RAM as possible, and set it to optimize for Applications (a sketch of the registry side is below). Those changes helped me. If they can be accomplished in Vista, they may help there too.
Good luck and keep us informed if you get any fixes working.
Mel
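
For reference, a minimal sketch of the registry side of those XP tweaks (assuming an elevated command prompt; the pagefile Min/Max sizes themselves are set under System Properties > Advanced > Performance > Virtual Memory, and both registry changes need a reboot):

rem Keep as much of the kernel in RAM as possible instead of paging it out
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v DisablePagingExecutive /t REG_DWORD /d 1 /f
rem Memory usage optimized for Programs rather than the system cache
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v LargeSystemCache /t REG_DWORD /d 0 /f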

Similar Messages

  • ADF UIX WebStart - large memory footprint

    Hi everyone,
    I am running a three-tier model JClient app with Java Web Start, Java SDK 5.0 with JVM 5.0. It creates a large footprint and then reports that there is no more memory left to allocate to the app.
    I looked at whether the JVM was the cause, but I am using the Java HotSpot virtual machine. So now I am wondering where the problem is. Is it the model, etc.?
    Any help would be appreciated. Thanks!

    Hi there,
    This observation may be coming a little late to be of use to you, but we thought we'd post it here for others' benefit.
    We encountered a similar situation with our ADF application. In the end, the following tweaking helped reduce the heap size and brought back our app's GUI performance.
    1. Added the following options to Sun JDK.
    -XX:SoftRefLRUPolicyMSPerMB=100 -XX:+ParallelRefProcEnabled
    This did the trick (see the demo after this post). Over and above this, we also tried the following option settings to tune ADF Security, but they didn't seem to give any further improvement.
    -DUSE_JAAS=false -Djps.policystore.hybrid.mode=false -Djps.combiner.optimize.lazyeval=true -Djps.combiner.optimize=true -Djps.authz=ACC -Djbo.debugoutput=silent
    2. Alternatively, we also tried the JRockit JVM, and interestingly enough, JRockit handled the clearing of soft references very well out of the box. No tweaking was required there.
    We suspect this could be an issue with the configuration of security in our app. As of now, we are not sure yet, but we have a temporary workaround.
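
    For background on why -XX:SoftRefLRUPolicyMSPerMB helps here: HotSpot clears soft references lazily, keeping them for roughly that many milliseconds per MB of free heap, so a SoftReference-based cache can hold the heap high for a long time. A minimal, hypothetical demo of the effect (not code from the ADF app):

    import java.lang.ref.SoftReference;
    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical demo: softly-referenced buffers pile up and are only
    // cleared when the JVM decides they have idled long enough.
    // Try: java -Xmx256m -XX:SoftRefLRUPolicyMSPerMB=100 SoftRefDemo
    public class SoftRefDemo {
        public static void main(String[] args) {
            List<SoftReference<byte[]>> cache = new ArrayList<SoftReference<byte[]>>();
            for (int i = 0; i < 10000; i++) {
                cache.add(new SoftReference<byte[]>(new byte[1024 * 1024])); // 1 MB each
                if (i % 100 == 0) {
                    long used = Runtime.getRuntime().totalMemory()
                              - Runtime.getRuntime().freeMemory();
                    System.out.println("entries=" + i + " usedHeapMB=" + (used >> 20));
                }
            }
        }
    }

    Lowering SoftRefLRUPolicyMSPerMB (default 1000) makes the collector clear such references sooner, which matches the improvement described above.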

  • Extremely Large Memory Footprint under Linux

    I have not experienced this problem myself, as I don't use Linux regularly, but a friend of mine has dismissed Java as too slow because of it. He says that after he launches Forte or JBuilder (both Java apps), they take about 500 MB of RAM. I know Forte is a memory hog, but something is very wrong here. He says he's using IBM's JRE 1.3.1 and some Debian distro of Linux.
    He also said he found info about Java object "headers" each taking up a huge amount of memory. Plain Java objects take only 8 bytes of memory, and I've never heard of this header business before. This is the main reason I posted in the advanced forum.
    Has anyone seen this type of problem before? I have no problem running Java apps on Windows (the performance is very good), and I'm assuming many people are running them successfully on Linux as well. Any info on this is much appreciated.
    Thanks.

    Your friend is incorrect. The typical JVM footprint in Linux is not what he thinks it is...
    Part of the "problem" is the way Linux reports threads in CLI programs like ps and top. If you look at the output from those programs, you'd think the JVM is eating you alive - but it's not. I have 512 MB of memory and I constantly run one JVM for a small home-control client, not to mention firing off an IDE to work and test in - with no memory problems whatsoever.
    For example, I just started NetBeans 3.3.1 up while I was typing this and "top" reports this (sorted by memory usage):
      PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
    5722 crackers  20   0 98772  96M 41968 S     0.0 19.2   0:00 java
    5723 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:00 java
    5724 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:03 java
    5725 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:00 java
    5726 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:00 java
    5727 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:00 java
    5728 crackers  20   0 98772  96M 41968 S     0.0 19.2   0:00 java
    5729 crackers  20   0 98772  96M 41968 S     0.0 19.2   0:00 java
    5730 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:03 java
    5732 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:00 java
    5733 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:00 java
    5735 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:00 java
    5736 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:10 java
    5737 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:00 java
    5738 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:00 java
    5739 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:00 java
    5740 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:02 java
    5741 crackers  16   0 98772  96M 41968 S     0.0 19.2   0:00 java
    5742 crackers  16   0 98772  96M 41968 S     0.0 19.2   0:00 java
    5743 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:00 java
    5744 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:00 java
    5746 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:00 java
    5747 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:00 java
    5749 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:00 java
    5750 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:00 java
    5751 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:00 java
    5752 crackers  16   0 98772  96M 41968 S     0.0 19.2   0:00 java
    5754 crackers  20   0 98772  96M 41968 S     0.0 19.2   0:00 java
    5755 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:00 java
    5756 crackers  15   0 98772  96M 41968 S     0.0 19.2   0:00 java
    5757 crackers  16   0 98772  96M 41968 S     0.0 19.2   0:00 java
    Amazing how all those java "processes" each take 19.2% of the memory - which would mean that I'm using up 614% of memory. The actual footprint is really what one of those threads reports.
    Java is no slower (and actually a trifle faster) on Linux than it is on a Windows machine.
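
    If you want to check the real footprint rather than eyeballing top, you can ask the kernel directly; a quick sketch using the first PID from the listing above (on LinuxThreads-era systems each Java thread shows up as its own "process", but they all share one address space):

    $ grep -E 'VmSize|VmRSS' /proc/5722/status
    VmSize:    98772 kB
    VmRSS:     98304 kB

    Those numbers match the SIZE and RSS columns of a single top row, not the sum over all the java rows.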

  • Doesn't Lightroom support large images?

    I recently migrated my photos over to Lightroom, but after the export I was given a notice that some images had not been imported because they were too large. These were panoramas around 10,000 x 2,000 px and somewhat above, though not exceeding 15,000 px. Why doesn't Lightroom support these sizes? Are there any hidden settings to play around with?

    "It's not the number of GHz that counts, but the tightness of the code. "
    No, actually it's both. You can optimize code for certain operations, but operations outside of that window won't benefit. Camera Raw/Lightroom is a case in point. It is VERY well optimized for dealing with raw images from digital cameras. In all cases (except for scanning backs), single captures from DSLRs fall within the 10K-pixel limit. Camera Raw/Lightroom has been expanded to also work with camera JPEGs/TIFFs, and now, with Lightroom and Camera Raw 4 in Photoshop CS3, with general TIFFs & PSDs. But large TIFFs and PSDs are NOT what the original Camera Raw/Lightroom code was optimized for.
    Larger memory buffers and faster processors CAN expand what is possible beyond code optimizations alone - which by themselves are not sufficient to improve applications. Even if you could optimize the code, the sheer number of processor cycles required could not make ACR/LR run on old, slow processors.
    If you compare the sheer number of images digitally captured each year to the number of photographers doing panos, panos are an edge case. ACR/LR is designed for the mass of images, not edge-case use, at this time. Arguably, improved pano software (such as in Photoshop CS3) may increase the number of photographers engaging in pano creation, which would be a good argument for raising the upper limit that ACR/LR can handle... but it ain't gonna happen immediately.

  • Memory footprint is HUGE

    I just wanted to see if anyone else has a concern about the memory footprint and when/if this will be addressed. We have an ADF web app, and now when we try to run it under JDeveloper 11g, the combination of the JDeveloper and WebLogic Java processes is over 900 MB and grows whenever you click around. Under the previous TP4 release this was less than half.
    I have Windows XP with Firefox, Oracle XE, and JDeveloper/WebLogic running, and the memory footprint is at 2 GB. We already had to upgrade our systems; do we need to upgrade yet again???

    It seems that the command line for starting the embedded WebLogic has two instances of the -Xmx and -Xms parameters. I think the last one is the one that is used, and it is set to 1024M, which is larger than a large portion of development projects need.
    The parameters are in setDomainEnv.sh/cmd, which is located in <JDEV_HOME?>/system11.1.1.0.31.51.56/DefaultDomain/bin.
    I've seen this directory show up in funny places, so search for it if you can't find it (see the grep sketch below).
    I've set the second set of parameters to the same as the first ones: -Xms256m -Xmx512m.
    Trygve
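
    To locate the duplicated settings quickly, something like this works (a sketch; the exact path and file layout can vary between releases):

    $ cd <JDEV_HOME>/system11.1.1.0.31.51.56/DefaultDomain/bin
    $ grep -n 'Xm[sx]' setDomainEnv.sh      # lists every -Xms/-Xmx occurrence

    The HotSpot JVM honors the last -Xms/-Xmx it sees on the command line, which is why the second pair is the one that takes effect.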

  • Query is allocating too large memory error in OBIEE 11g

    Hi ,
    We have one pivot table (A) in our dashboard displaying revenue against an Entity hierarchy (8 levels under the hierarchy), and another pivot table (B) displaying revenue against a Customer hierarchy (3 levels under it).
    Both tables run fine in our OBIEE 11.1.1.6 environment (Windows).
    After deploying the same code (RPD & catalog) on a Unix OBIEE 11.1.1.6 server, it throws the error below while populating pivot table A:
    Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
    *State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 96002] Essbase Error: Internal error: Query is allocating too large memory ( > 4GB) and cannot be executed. Query allocation exceeds allocation limits. (HY000)*
    But pivot table B runs fine. Help please!
    data source used : essbase 11.1.2.1
    Thanks
    sayak

    Hi Dpka,
    Yes! We are hitting a separate Essbase server from the Linux OBIEE environment.
    I'll execute the query in Essbase and get back to you!
    Thanks
    sayak

  • I'm on iTunes 11.0.0.63 for Windows. iTunes used to display the total number of songs in my library, along with total playtime and memory footprint, at the bottom of the window. How can I make it display this information like it used to?

    Hello! Thanks in advance for reading my question.
    So, I updated iTunes to 11.0.0.63 on Windows, and the new layout looks nice, but iTunes no longer shows my total song count, memory footprint, and total play time at the bottom of the window the way it used to. I just want to know how to get that info back. Thanks so much!

    Ctrl+/ or View > Show Status Bar should do it.
    Weirdly, having turned mine on, the menu item isn't there to hide it again and the shortcut doesn't work.
    tt2

  • Low Memory Footprint JVM needed, Please suggest.

    Hi guys,
    I want a lightweight (low memory footprint) Java virtual machine compatible with Java 1.5. It should be open source. Can anybody make a suggestion, please? I've googled and tried some JVMs like Kaffe and SableVM, but I want a reliable VM.
    Thanks,
    Dhaval.

    Dhaval.Yoganandi wrote:
    Can you shed more light on it?
    Dhaval.Yoganandi wrote: I'm currently using the Sun JVM with Java 1.5; it's taking 35-40 MB of RAM and 333 MB of virtual memory. I need to make that very low. How can I do that? I've tried many options when starting the Sun JVM, but no luck; virtual memory only came down to 280 MB. I want it to consume only 64 MB of virtual memory and only 30 MB of RAM.
    You need to tell us what tool you are using to arrive at those numbers.
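
    One unambiguous baseline is to ask the JVM itself and compare that with what the OS tool shows; a minimal sketch:

    // Minimal sketch: the JVM's own view of its heap. OS tools (top, Task
    // Manager) also count JVM code, thread stacks, and reserved-but-unused
    // virtual memory, so their numbers will always read higher than this.
    public class HeapReport {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            long usedMB = (rt.totalMemory() - rt.freeMemory()) >> 20;
            System.out.println("used=" + usedMB + "MB"
                    + " committed=" + (rt.totalMemory() >> 20) + "MB"
                    + " max=" + (rt.maxMemory() >> 20) + "MB");
        }
    }

    Starting the VM with, e.g., java -Xms16m -Xmx32m caps the heap, but the process's virtual size will still include the JVM's own overhead on top of that.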

  • Query is allocating too large memory

    I’m building an Analysis in OBIEE against an ASO cube and am seeing the following error:
    Query is allocating too large memory ( > 4GB) and cannot be executed. Query allocation exceeds allocation limits
    The report we’re trying to build is intended to show information from eight dimensions. However, when I try to add just a few of the dimensions we get the “Query is allocating too large memory” error. Even if I filter down the information so that I only have 1 or 2 rows in the Analysis I get the error. It seems like there is something wrong that is causing our queries to become so bloated. We're using OBIEE 11.1.1.6.0.
    Any help would be appreciated.

    Hi,
    This sounds like a known Bug 13331507 : RFA - DEBUGGING 'QUERY IS ALLOCATING TOO LARGE MEMORY ( > 4GB)' FROM ESSBASE.
    Cause:
    A filter has been added on several lines in the 'Data Filters' tab of the 'Users Permissions' screen in the Administration Tool (click on the Manage and then Identity menu items). This caused the MDX filter statement to be added several times to the MDX issued to the underlying database, which in turn caused too much memory to be used in processing the request.
    Refer to Doc ID: 1389873.1 for more information on My Oracle Support.

  • Query is allocating too large memory Error ( 4GB) in Essbase 11.1.2

    Hi All,
    Currently we are preparing dashboards in OBIEE from Hyperion Essbase ASO (11.1.2) cubes. When we try to retrieve data with more attributes, we get the error below:
    "Odbc driver returned an error (SQLExecDirectW).
    Error Details
    Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 96002] Essbase Error: Internal error: Query is allocating too large memory ( > 4GB) and cannot be executed. Query allocation exceeds allocation limits. (HY000)"
    Currently our data file size is less than 2 GB, so we are using "Pending Cache Size=64MB".
    Please let me know which memory setting I have to increase to resolve this issue.
    Thanks,
    SatyaB

    Hi,
    Do you have any dynamic hierarchies? What is the size of the data set?
    Thanks,
    Nathan

  • ORA-00385: cannot enable Very Large Memory with new buffer cache 11.2.0.2

    [oracle@bnl11237dat01][DWH11]$ sqlplus / as sysdba
    SQL*Plus: Release 11.2.0.2.0 Production on Mon Jun 20 09:19:49 2011
    Copyright (c) 1982, 2010, Oracle. All rights reserved.
    Connected to an idle instance.
    SQL> startup mount pfile=/u01/app/oracle/product/11.2.0/dbhome_1/dbs//initDWH11.ora
    ORA-00385: cannot enable Very Large Memory with new buffer cache parameters
    DWH12.__large_pool_size=16777216
    DWH11.__large_pool_size=16777216
    DWH11.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
    DWH12.__pga_aggregate_target=2902458368
    DWH11.__pga_aggregate_target=2902458368
    DWH12.__sga_target=4328521728
    DWH11.__sga_target=4328521728
    DWH12.__shared_io_pool_size=0
    DWH11.__shared_io_pool_size=0
    DWH12.__shared_pool_size=956301312
    DWH11.__shared_pool_size=956301312
    DWH12.__streams_pool_size=0
    DWH11.__streams_pool_size=134217728
    #*._realfree_heap_pagesize_hint=262144
    #*._use_realfree_heap=TRUE
    *.audit_file_dest='/u01/app/oracle/admin/DWH/adump'
    *.audit_trail='db'
    *.cluster_database=true
    *.compatible='11.2.0.0.0'
    *.control_files='/dborafiles/mdm_bn/dwh/oradata01/DWH/control01.ctl','/dborafiles/mdm_bn/dwh/orareco/DWH/control02.ctl'
    *.db_block_size=8192
    *.db_domain=''
    *.db_name='DWH'
    *.db_recovery_file_dest='/dborafiles/mdm_bn/dwh/orareco'
    *.db_recovery_file_dest_size=7373586432
    *.diagnostic_dest='/u01/app/oracle'
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=DWH1XDB)'
    DWH12.instance_number=2
    DWH11.instance_number=1
    DWH11.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=bnl11237dat01-vip)(PORT=1521))))'
    DWH12.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=bnl11237dat02-vip)(PORT=1521))))'
    *.log_archive_dest_1='LOCATION=/dborafiles/mdm_bn/dwh/oraarch'
    *.log_archive_format='DWH_%t_%s_%r.arc'
    #*.memory_max_target=7226785792
    *.memory_target=7226785792
    *.open_cursors=1000
    *.processes=500
    *.remote_listener='LISTENERS_SCAN'
    *.remote_login_passwordfile='exclusive'
    *.sessions=555
    DWH12.thread=2
    DWH11.thread=1
    DWH12.undo_tablespace='UNDOTBS2'
    DWH11.undo_tablespace='UNDOTBS1'
    SPFILE='/dborafiles/mdm_bn/dwh/oradata01/DWH/spfileDWH1.ora' # line added by Agent
    [oracle@bnl11237dat01][DWH11]$ cat /etc/sysctl.conf
    # Kernel sysctl configuration file for Red Hat Linux
    # For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
    # sysctl.conf(5) for more details.
    # Controls IP packet forwarding
    net.ipv4.ip_forward = 0
    # Controls source route verification
    net.ipv4.conf.default.rp_filter = 1
    # Do not accept source routing
    net.ipv4.conf.default.accept_source_route = 0
    # Controls the System Request debugging functionality of the kernel
    kernel.sysrq = 0
    # Controls whether core dumps will append the PID to the core filename
    # Useful for debugging multi-threaded applications
    kernel.core_uses_pid = 1
    # Controls the use of TCP syncookies
    net.ipv4.tcp_syncookies = 1
    # Controls the maximum size of a message, in bytes
    kernel.msgmnb = 65536
    # Controls the default maxmimum size of a mesage queue
    kernel.msgmax = 65536
    # Controls the maximum shared segment size, in bytes
    kernel.shmmax = 68719476736
    # Controls the maximum number of shared memory segments, in pages
    #kernel.shmall = 4294967296
    kernel.shmall = 8250344
    # Oracle kernel parameters
    fs.aio-max-nr = 1048576
    fs.file-max = 6815744
    kernel.shmmni = 4096
    kernel.sem = 250 32000 100 128
    kernel.shmmax = 536870912
    net.ipv4.ip_local_port_range = 9000 65500
    net.core.rmem_default = 262144
    net.core.rmem_max = 4194304
    net.core.wmem_default = 262144
    net.core.wmem_max = 1048586
    net.ipv4.tcp_wmem = 262144 262144 262144
    net.ipv4.tcp_rmem = 4194304 4194304 4194304
    Please, how can I resolve this error?

    CAUSE: User specified one or more of { db_cache_size , db_recycle_cache_size, db_keep_cache_size, db_nk_cache_size (where n is one of 2,4,8,16,32) } AND use_indirect_data_buffers is set to TRUE. This is illegal.
    ACTION: Very Large Memory can only be enabled with the old (pre-Oracle_8.2) parameters.
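
    In pfile terms, the documented cause/action boils down to something like this (a hedged sketch; the parameter values are illustrative, not taken from the pfile above):

    # Illegal combination: any new-style cache parameter together with VLM
    # use_indirect_data_buffers=TRUE
    # db_cache_size=2G                <-- triggers ORA-00385

    # Legal: with use_indirect_data_buffers=TRUE, size the cache the
    # pre-8.2 way, in blocks rather than bytes
    use_indirect_data_buffers=TRUE
    db_block_buffers=262144           # 262144 x 8K blocks = 2G

    The fix, per the ACTION text, is to drop the new-style cache parameters and size the buffer cache with the old-style parameter instead.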

  • SQL Developer Memory Footprint

    We're looking at replacing around 200 TOAD licenses with SQL Developer. The only technical concern is the memory footprint, as in many cases it would be run from a terminal server with dozens of people logging on. A VM size of 150 MB seems to be not unusual for SQL Developer, and that all adds up, of course.
    Are there any recommendations for reducing the memory footprint, or at least not letting it get much higher than 150 MB? Features that can be turned off by default, versions of the JDK, etc.?

    Hi,
    The memory consumption is quite worrying.
    However, changing the code to VB / Delphi would lose Java's write-once-run-anywhere portability. :-)
    You wouldn't be able to use this tool on Solaris, Linux, and Mac without changing the code and compiler, which would make it less acceptable.
    I wonder if limiting SQL Dev's initial class loading would have an impact on memory consumption.
    And why does it seem that Java's garbage collector doesn't do any collecting, since the memory gets higher and higher over time?
    Or maybe the code doesn't allow its objects to become collectable?
    I once saw memory reach up to 500 MB after a cancelled Export Wizard for a USER.
    But... the memory never came down.
    Regards,
    Buntoro
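
    One concrete knob for the terminal-server scenario in the original question: SQL Developer reads extra JVM options from its .conf file, so the heap can be capped per instance (a sketch; the exact path varies by version):

    # in <sqldeveloper_home>/sqldeveloper/bin/sqldeveloper.conf
    AddVMOption -Xms64M     # start small
    AddVMOption -Xmx256M    # hard ceiling on the Java heap

    A capped heap forces the garbage collector to work within the limit instead of letting the footprint drift upward, at the cost of more frequent collections.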

  • Firefox memory footprint

    greetings,
    I write regarding the memory footprint of the mozilla-firefox package for Arch. I downloaded the gtk2/xft binary of Firefox 0.9 from the mozilla.org website and used it in anticipation of the Arch package being released. Previously on my Arch box I had used the mozilla-fire* package rather than the mozilla.org binary, but now I have noticed a discrepancy in memory footprint between the two. 'ps v' gives, for the Arch package and mozilla.org package respectively:
    1798 pts/1    S      0:04      0    66 37829 23280 18.2 /opt/mozilla-firefox/l
    1979 pts/1    S      0:06      9  9190 27345 19564 15.3 /tmp/firefox/firefox
    Both were taken immediately after Firefox startup. This seems like a pretty significant difference. It's enough for me to prefer the mozilla.org package on my obsolete box with 128 MB of RAM, anyway.
    P.S. I've been using Arch for some time now. I would just like to take this opportunity to thank those who created and maintain Arch Linux. It is an enjoyable distribution.

    I have mozilla-firefox using memory this way:
    13600 pts/32 S 0:00 0 47 3604 2140 0.2 /opt/gnome/libexec/gconfd-2 11
    13651 pts/32 S+ 0:00 0 577 1666 1096 0.1 /bin/sh /opt/mozilla-firefox/bin/firefox
    13669 pts/32 S+ 0:00 0 577 1702 1108 0.1 /bin/sh /opt/mozilla-firefox/lib/firefox-0.9/run-mozilla.sh /opt
    13674 pts/32 S+ 0:02 0 66 49069 27464 3.5 /opt/mozilla-firefox/lib/firefox-0.9/firefox-bin
    13675 pts/32 S+ 0:00 0 66 49069 27464 3.5 /opt/mozilla-firefox/lib/firefox-0.9/firefox-bin
    13676 pts/32 S+ 0:00 0 66 49069 27464 3.5 /opt/mozilla-firefox/lib/firefox-0.9/firefox-bin
    13678 pts/33 Ss 0:00 0 577 2854 2456 0.3 -bash
    13691 pts/32 S+ 0:00 0 66 49069 27464 3.5 /opt/mozilla-firefox/lib/firefox-0.9/firefox-bin
    13692 pts/32 S+ 0:00 0 66 49069 27464 3.5 /opt/mozilla-firefox/lib/firefox-0.9/firefox-bin
    13693 pts/32 S+ 0:00 0 66 49069 27464 3.5 /opt/mozilla-firefox/lib/firefox-0.9/firefox-bin
    13694 pts/32 S+ 0:00 0 66 49069 27464 3.5 /opt/mozilla-firefox/lib/firefox-0.9/firefox-bin
    and I cannot see any problem with that,
    but I found out something strange while running it:
    [damir@Asteraceae /]$ mozilla-firefox
    LoadPlugin: failed to initialize shared library /opt/mozilla-plugins/Blender3DPlugin.so [/opt/mozilla-plugins/Blender3DPlugin.so: undefined symbol: _ZTV16nsQueryInterface]
    libxpt: bad magic header in input file; found 'XPCOM\nTypeLib\r\n–@', expected 'XPCOM\nTypeLib\r\n\032'
    *** loading the extensions datasource
    [damir@Asteraceae /]$
    The Blender plugin is broken - funny enough, this is the first time I've heard that such a thing exists, so it would be nice if someone else could confirm this ;-)

  • Memory footprint

    hi,
    I have seen a lot of mobile database vendors advertise their product's memory footprint; however, I have found no corresponding information about Oracle Lite besides the official "Oracle Database Lite is a small footprint, Java enabled...".
    So the million-dollar question is: what is the memory footprint of Oracle Lite 9i and 10g?

    We use Pocket PC devices, and the Oracle Lite footprint itself has not been an issue.
    By default the databases will be created in the following locations
    1) SD or CF card
    2) built in storage (eg: DELL's)
    3) main memory
    Main memory is fastest but can be more fragile (try dropping an iPAQ); provided you use a fast SD card like a Kingston 45x, the differential is minimal, and the size of the card and its relative cost provide a lot of advantages (we tend to go for a minimum of 256 MB, which should be plenty for normal applications).

  • Safari on Windows has huge memory footprint

    Running Gmail app:
    Safari - 85,304K
    IE7 - 33,884K
    Firefox - 28,572K
    I know it's a beta, but I've noticed that Firefox still seems to be the best: it has the smallest memory footprint, is as fast as Safari (as far as I can humanly tell), and is the most standards compliant.
    There are still some web pages using heavy AJAX controls and other JavaScript stuff that Safari doesn't do well with. Hopefully those will get ironed out during the beta and the memory footprint brought under better control.

    Hmmm... Firefox conservative with memory? That's a good joke!
    The other day I was surfing the web, and doing little else, when I noticed a marked slowdown in performance (I have an E6700 and 2 GB of RAM), so I was annoyed that my system could be faltering just from running Firefox alone.
    One look at Task Manager made my jaw hit the floor! That crafty fox had hogged almost 1.35 GB of RAM! With the optional extras I load at startup, I was left with 34 MB to play with! Now that's a ridiculous memory footprint!
    Alas, this problem has been around since the stone age; the dev guys at Mozilla seem unwilling or unable to sort it out.
