2.6.38 speed improvements

Well, all my programs start faster. And no, they aren't cached in memory at that point; this is when you start them for the very first time, or after you drop caches.
It could be because of THP (according to J. Corbet: link) or some changes in the VFS. My DE loads quicker too.
Is it just me, or...?

Since 2.6.38 is still in testing, I am moving this to the [testing] repo forum.

Similar Messages

  • Massive Disk Speed Improvement Plan

    I am moving forward with a disk storage speed improvement plan using my Dell Precision T5400 workstation as the test bed.
    Specifically, my goal is to create a super fast 2 TB drive C: from four OCZ Vertex 3 480GB SATA3 SSD drives in RAID 0 configuration.  This will replace an already fast RAID 0 array made from two Western Digital 1TB RE4 drives.
    So far I have ordered two of these fast SSD drives, along with what is touted to be a very good value in high performance SATA3 RAID controllers, a Highpoint 2420SGL.  I'll get started with this combination and get to know it first as a data drive before trying to make it bootable.
    Getting any kind of hard information online about putting SSDs into RAID is a bit like pulling teeth, so I'm not 100% confident that these parts will work perfectly together, but I think the choice of SSD drives is the right one.  I had briefly considered a PCIe RevoDrive SSD card made by OCZ, but it was just too esoteric...  I'm actually getting double the storage this way for the same price, I can swap to a different RAID controller if need be, and these drives can easily be ported to any new workstation I may get in the future.
    Notably, some early concerns with using SSD in RAID configurations (and things like TRIM commands) have already been alleviated, as the drives are now quite intelligent in their internal "garbage collection" processes.  I've verified this with the engineers at OCZ.  They have said that with these modern SSD drives you really don't have to worry about them being special - just use them as you would a normal drive.
    Once I get the first two SSDs set up in RAID 0 I'll specifically do some comparisons with saving large files and also using the array as the Photoshop scratch drive, vs. the spinning 1 TB drive I have in that role now.
    Assuming all goes well, I'll then add the additional two SSDs to complete the four drive array.  After a quick test of that, I'll see if I can restore a Windows System Image backup made from my 2 TB C: (spinning drive) array, which (if it works) will let me hit the ground running using the same exact Windows setup, just faster.
    My current C: drive, made from two Western Digital 1 TB RE4 drives, delivers about 210 MB/sec throughput with very large files, with 400 MB/sec bursts with small files (these drives have big caches).  Where they fall down dismally (by comparison to SSD) is operations involving seeking...  The PassMark advanced "Workstation" benchmark, which generates random small accesses such as you might see during real work (I can hear the drives seeking like crazy), yields a meager 4 MB/sec result.
    My current D: drive, a single Hitachi 1 TB spinning drive, clocks in at about 100 MB/sec for large reads/writes.
    The SSD array should push the throughput up at least 5x as compared to my current drive C: array, to over 1 GB/sec, but the biggest gain should be with random small accesses (no seek time in an SSD), where I'm hoping to see at least a 25x improvement to over 100 MB/second.  That last part is what's going to speed things up from an every day usage perspective.
    I imagine that when the dust settles on this build-up, I'll end up pointing virtually everything at drive C:, including the Photoshop scratch file, since it will have such a massively fast access capability.  It will be interesting to experiment.  I suppose I'll have to come up with some gargantuan panoramas to stitch in order to force Photoshop to go heavily to the scratch drive for testing.
    I'll let you all know how it works out, and I'll be sure to do before/after comparisons of real use scenarios (big files in Photoshop, and various other things).  Hopefully my "real world" results can help others looking to get more Photoshop performance out of their systems understand what SSD can and can't do for them.
    I welcome your thoughts and experiences.
    -Noel

    Not sure who might be following this thread, but I have executed the final phase of this plan, restoring a system backup from my spinning drive array onto the new 4 drive SSD array.
    All went off without a hitch, I have my same system configuration including all apps and everything just as it was, except everything is now MUCH faster.
    The 4 drive array achieves a staggering 1.74 gigabytes/second sustained throughput rate.
    Windows 7 WEI score is 7.9 for the Primary hard disk category.
    Windows boots up quickly, everything starts immediately, nothing bogs the system down, and just overall everything feels very fluid and snappy.  And there is no seeking noise from the drives.
    Regarding what this has done for Photoshop...  I've only tested on Photoshop CS6 beta so far today, but everything is incrementally improved.  Startup time is faster, things seem more smooth and fluid while editing overall, and a benchmark I created using an action to run a lot of image adjustment operations on a big, multi-layer image ran this long to completion:
    When the file is opened from (and the Photoshop scratch file is on) a single spinning disk: 
    4 minutes 26 seconds (266 seconds)
    When the file is opened from (and the scratch file is on) a fast array of spinning drives: 
    3 minutes 45 seconds (225 seconds)
    When the entire system is run from the SSD array: 
    2 minutes 31 seconds (151 seconds)
    During the action, because so many steps are performed on the big file, Photoshop writes a 30+ gigabyte scratch file on the scratch drive.
    Summary
    Clearly the very fast disk access markedly improves Photoshop's speed when it uses scratch space. 
    Plus copying big image files around is virtually instantaneous. 
    I don't use Bridge myself, but I have noticed that all the image thumbnails (via FastPictureViewer Codec Pack) just show up immediately in Explorer windows and Photoshop File Open/Save dialogs.  We can only assume this kind of drive speed would really make Bridge blaze through its operations as well.
    Following in my footsteps would be expensive, but it really works.
    -Noel

  • DRI (declarative referential integrity) and speed improvements.

    EDITED: See my second post. In my testing, the relevant consideration is whether the parent table has a compound primary key or a single primary key.  If the parent has a simple primary key, and there is a trusted (checked) DRI relation with the child, and a query requests only records from the child on an inner join with the parent, then SQL Server (correctly) skips performing the join (shown in the execution plan).  However, if the parent has a compound primary key, then SQL Server performs a useless join between parent and child.  Tested on SQL 2008 R2 and Denali.  If anyone can get SQL Server NOT to perform the join with compound primary keys on the parent, let me know.
    ORIGINAL POST: I'm not seeing the join behavior in the execution plan given in the link provided (namely, that the optimizer does not bother performing a join to the parent table when a query needs information from the child side only AND trusted DRI exists between the tables AND the columns are defined as NOT NULL).  The foreign key relation "is trusted" by SQL Server ("is not trusted" is false), but the plan always picks both tables for the join although only one is needed.
    If anyone has comments on whether declarative referential integrity does produce speed improvements on certain joins, please post.  Thanks.
    http://dinesql.blogspot.com/2011/04/does-referential-integrity-improve.html

    I'm running SQL Denali CTP3 x64 and SQL 2008 R2 x64, on Windows 7 SP1. I've tested it on dozens of tables, and I defy anyone to provide a counter-example (you can create ANY parent table with two ints as a composite primary key, then a child table using that compound key as a foreign key, create a trusted DRI link between them, and use the queries I posted above). Any table with a compound foreign key relation as the basis for the DRI apparently does not benefit from referential integrity between those tables (in terms of performance). Or, to be more precise, the execution plan reveals that SQL Server performs a costly and unnecessary join in these cases, but not when the trusted DRI relation between them is a single primary key. If anyone has seen a different result, please let me know, since it does influence my design decisions.
    fwiw, a similar behavior is true of SQL Server's date correlation optimization: it doesn't work if the tables are joined by a composite key, only if they are joined by a single column:
    "There must be a single-column foreign key relationship between the tables."
    So I speculate, knowing absolutely nothing, that there must be something deep in the bowels of the engine that doesn't optimize compound key relations as well as single column ones.
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    CREATE TABLE [dbo].[parent](
    [pId1] [int] NOT NULL,
    [pId2] [int] NOT NULL,
    CONSTRAINT [PK_parent] PRIMARY KEY CLUSTERED
    (
    [pId1] ASC,
    [pId2] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    CREATE TABLE [dbo].[Children](
    [cId] [int] IDENTITY(1,1) NOT NULL,
    [pid1] [int] NOT NULL,
    [pid2] [int] NOT NULL,
    CONSTRAINT [PK_Children] PRIMARY KEY CLUSTERED
    (
    [cId] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    ALTER TABLE [dbo].[Children] WITH CHECK ADD CONSTRAINT [FK_Children_TO_parent] FOREIGN KEY([pid1], [pid2])
    REFERENCES [dbo].[parent] ([pId1], [pId2])
    ON UPDATE CASCADE
    ON DELETE CASCADE
    GO
    /* the DRI MUST be trusted for the optimization to work, but it doesn't work here anyway */
    ALTER TABLE [dbo].[Children] CHECK CONSTRAINT [FK_Children_TO_parent]
    GO
    /* Enter data in parent and children */
    select c.cId FROM dbo.Children c INNER JOIN Parent p
    ON p.pId1 = c.pId1 AND p.pId2 = c.pId2;
    /* Execution plan will be blind to the trusted DRI--performs the join!*/
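    For contrast, here is a minimal single-column-PK variant (hypothetical table names `parentSimple`/`ChildrenSimple`, not from the original post) where, per the behavior described above, the optimizer should perform join elimination:

    ```sql
    -- Hypothetical single-column-PK version of the schema above.
    CREATE TABLE dbo.parentSimple(
        pId int NOT NULL,
        CONSTRAINT PK_parentSimple PRIMARY KEY CLUSTERED (pId)
    );
    GO
    CREATE TABLE dbo.ChildrenSimple(
        cId int IDENTITY(1,1) NOT NULL,
        pId int NOT NULL,
        CONSTRAINT PK_ChildrenSimple PRIMARY KEY CLUSTERED (cId),
        -- Created WITH CHECK by default, so the constraint is trusted.
        CONSTRAINT FK_ChildrenSimple_parentSimple
            FOREIGN KEY (pId) REFERENCES dbo.parentSimple (pId)
    );
    GO
    -- The FK is trusted, pId is NOT NULL, and the query needs only child
    -- columns, so the execution plan should show a scan of ChildrenSimple
    -- with no join operator at all.
    SELECT c.cId
    FROM dbo.ChildrenSimple c
    INNER JOIN dbo.parentSimple p ON p.pId = c.pId;
    ```

    Comparing this plan side by side with the compound-key plan above is the quickest way to see the difference the poster describes.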

  • Have speed improvements been made?

    I just want to say that my posts on the forums today have appeared in the topic list nearly instantaneously — which wasn't the case just a few days ago. And if the forum administrators have made speed improvements recently, but are disappointed no-one has noticed, then perhaps this will bring a modicum of satisfaction.
    ...I wonder if someone "in the know" (Eric?) can indicate whether this is a permanent improvement. If it's merely due to draining the "internet tubes", it may not be...

    Thanks for your reply, but I'm not sure it explains the particular slow-down I'm seeing...
    Displaying forums is as fast as ever, as is the system's acceptance of new posts/replies. The problem is, as I said, that "posts take a long time to appear". ...I list the main forum page and expect to see my just-accepted post at the top of the list, but it doesn't appear for between one and a few minutes after my post has been accepted. Yes, there's a warning to expect a delay, but I'm pointing out that these forums go through periods of near-instantaneous appearance of new posts (as was the case a week ago) to a now sluggish appearance of new posts.
    The idea that there is a geographic aspect to the problem is perhaps nullified by the comment by MGW (in New Hampshire?) above:
    "Actually, the speedup has been noticeable for the past couple of days, having complained bitterly about the clog..."
    Also during "clogged" periods, duplicate posts from all over the world tend to appear as members mistakenly think their post, although accepted by the system, didn't "take" and re-enter it — because it doesn't show up in the forum's main list for a couple of minutes or so.
    ...We seem to regularly go through these alternating multi-week periods of members reporting speed and sluggishness but, so far, with no acknowledgement or explanation from the hosts.
    By the way, this post itself took a minute to appear in the main forum list — a week ago, it would have appeared almost instantaneously.

  • Speed improvements in JDeveloper 3.0

    Will the overall response time/speed improve in JDeveloper 3.0?
    Thanks
    Mike

    Hi,
    JDeveloper 3.0 is in beta right now, and it has improvements in response time/speed.
    For example, the deployment wizard is way faster.
    regards
    raghu
    Michael Maculsay (guest) wrote:
    : Will the overall response time/speed improve in JDeveloper 3.0?
    : Thanks
    : Mike

  • On an iMac, will a 4GB graphics card give a noticeable speed improvement over a 2GB card?

    On an iMac, will a 4GB graphics card give a noticeable speed improvement over a 2GB card?

    In terms of Lightroom I'm pretty sure the answer would be "no".

  • Site Speed Improvements

    To All:
    I am looking for ways to improve my website loading speed. I have several albums on the site (www.lens-perspective.com) and I am wondering...
    1.) if the site would load faster if I limited my albums to 10-15 pictures.
    2.) Should I optimize the pictures for the web myself instead of letting iWeb do it?
    3.) Also, instead of using the ALBUM template page in iWeb, am I better off using a blank page, placing one picture on it for each album, and then hyperlinking to each album from that page?
    4.) Lastly, there should be 2 pictures on my "About Me" page and they don't seem to load after my site is uploaded. Do you think that this is a browser issue? I host the site at GoDaddy and upload it using Fetch.
    Any other "speed" recommendations are welcome.
    Thank you,
    Randy

    1. Yes. Large albums take much longer to load, and keeping albums under 15 images is a good idea.
    2. Probably not worth the time and effort. Your current files average about 100 KB (a good size for web use). 15 on a page would be 1.5 MB that needs to download.
    4. I see three images on the About Me page. One large and two smaller (lower part of the page).

  • Will URLS (Unified Rendering Light Speed) improve current app performance as well?

    Hi,
    We are using an ABAP Web Dynpro application that encapsulates an Interactive PDF as well, but its performance is not good.
    Normally it takes 30-50 seconds when the user opens the work item from UWL, which in turn calls the Web Dynpro application and shows the PDF to the user.
    My question: if we go for EHP1 for NW, which brings Unified Rendering Light Speed in Web Dynpro ABAP, will this technology help improve the performance of the current Web Dynpro application, or only of new applications built on it?
    Please advise, or suggest another way to improve the performance of the Web Dynpro application.
    Thanks,
    Rahul

    Hi Rahul,
    The new Light Speed rendering engine is the rendering framework used to render Web Dynpro applications. Therefore there is nothing whereby you specify that a particular application is developed using the Light Speed rendering engine.
    In other words, to answer your question: EHP1 will improve the performance of all applications, whether developed on EHP1 or prior to EHP1.
    Regards
    Rohit Chowdhary

  • Bravo on great speed improvement in LR5!

    I'm currently in the process of editing large sets of close to 300 photos each.  My workflow needs me to go through each single shot, calibrate them, and eventually make a selection of around 100 good photos per set.  I often navigate between the Library and Develop modules, and I use pretty much all tools (as necessary) when calibrating/editing the shots.  My files are D800 raw files. Note also that my library contains 13 years of photography work, with around 250,000 photos (of various sources evidently).
    I just wanted to take a few minutes off my editing process to comment on the huge improvement on speed I am experiencing in LR5 over LR4.
    In LR4, my workflow with my D800 files was very painful, with all the sluggishness I would get every time I moved from the Develop module to the Library module, and every time I used tools in the Develop module.  There was lag virtually everywhere.  I'm on a very up-to-date system, with SSD drives both for the photos I work on and for my system disk. 
    Since LR5, I can honestly say it feels pleasant once again to edit my photos.  Whereas I dreaded opening LR4 to do my job, I now look forward to visiting my shots again. 
    I read on the forum about the sharpness issue in lower-res exports, and I'm sure there are a few more things to fix here and there.  But the HUGE improvement in responsiveness and speed in the application makes LR5 a real winner to me.  Speed is definitely a main issue that should always be addressed first with each update, always trying to improve on it and make the workflow as pleasant as possible so that we, as photographers, have to focus only on one thing: creativity. 
    I'm glad this issue was addressed with LR5, and my only wish would have been to see it addressed sooner in LR4.
    Back to editing I go...
    cheers!


  • MacBook Pro vs iMac - speed improvement for Parallels Windows 8.1

    Will there be a noticeable increase in speed and performance in switching from an iMac (2.9GHz quad-core Intel Core i5, 1TB hard drive) to a 13-inch Retina MacBook Pro (2.6GHz dual-core Intel Core i5, 512GB SSD)? Both brand new.
    I am not sure if the SSD will make a noticeable difference in this scenario. I am trying to work out whether the switch will make using Parallels with Windows 8.1 better; the iMac seems to drag a bit while running Windows through Parallels.
    Thanks,

    I would be looking at the quad-core MBP for Parallels. Quad-core vs dual-core is not a fair comparison.
    For the same CPU, the two bottlenecks for Parallels are RAM and disk speed.
    You will see an improvement if you max out the RAM and the disk speed. The SSD in the MBP will be much faster than the HDD in the iMac.

  • LabVIEW MathScript computation speed improvement

    I am using a MathScript node to make calculations on an sbRIO FPGA module and the speed of these computations is critical.  What are some ways to improve the speed of calculations and is there a faster way to do matrix calculations than MathScript?  If I make the MathScript portion into a subVI will it improve the speed of calculations?
    Thanks for any ideas
    Solved!
    Go to Solution.

    Please look at the attached VI. It has your original .m code, my modifications to your .m code, and the G code equivalent to the modified .m code. First, let me describe the numbers I saw on a cRIO 9012 for each of the three approaches.
    I ran each of the three approaches for a hundred iterations, ignored the first 30 iterations to allow for memory allocations (which caused a huge spike in run-time performance on RT), and then took the average run time per loop iteration for the remaining iterations:
    Original M: 485 msec/iteration
    Modified M: 276 msec/iteration
    G: 166 msec/iteration
    The modifications I made to your .m code are the following:
    (1) Added ; to the end of each line to suppress output (used for debugging)
    (2) Moved the random code generation out - used whitenoise (seems like that's what you were doing)
    (3) Switched on the data type highlighting feature and noticed that the majority of the data was cast to complex, although it didn't seem like you needed the complex domain. The source was the sqrt function; I modified it to use real(sqrt(...))
    This improved performance by over 40%. I believe more can be squeezed out if you follow the documentation - Writing MathScript for Real-Time Applications. 
    Then, I took the MathScript you had and wrote equivalent G, leaving the algorithm as is. This gave a performance improvement of another 40% over the modified M. It is a known issue that on slow controllers MathScript adds a 2x penalty over equivalent G. We are currently investigating this issue and may be able to fix it in a future release.
    If you profile the G code, you will notice that most of the time is spent in matrix multiplication. Unless you rethink your algorithm, I doubt this can improve further.
    Let me know if you have questions
    Regards,
    Rishi Gosalia
    Attachments:
    Mathcript_efficiencyProblem Modified.vi ‏255 KB
    MathScript_efficiencyProblem_G.vi ‏62 KB

  • Would upgrading my internet speed improve Netflix performanc​e?

    I know there is a lot of controversy surrounding Netflix and ISPs, which I'm not really interested in. I just know my Netflix streaming is terrible right now, and I would be willing to upgrade my internet speed if it meant that would improve. Any idea if it would? Thank you.

    NO! Don't do it.
    You'll just be throwing more money at Verizon with no change in performance. Realistically you could probably downgrade your speed, save yourself money, and see no change in your Netflix performance.

  • Temporary speed improvement

    Having complained for some time to BT about poor broadband speed (max 500kbps), eventually an engineer came last week.  He found some problem with the line, explained that it was capped at about 600 while the fault was active and that it would improve ... which indeed it did, up to about 1200kbps later that day.  But it's now fallen back down to 500.
    BT Broadband Checker says I should get "Between 0.75Mb and 2.5Mb (Estimated speed: 1.0Mb)" - I guess that's just a sales pitch !
    BT's SpeedTester (when I can get it to run at all, which is very infrequent) says :
    Download speed achieved during the test was - 451 Kbps
     For your connection, the acceptable range of speeds is 50-500 Kbps.
     Additional Information:
     Your DSL Connection Rate :896 Kbps(DOWN-STREAM), 448 Kbps(UP-STREAM)
     IP Profile for your line is - 500 Kbps
    So how was the engineer able to get it up to 1.2Mbps last week, and more importantly, how do I get that speed back again ?

    Beejay
    I'm not sure how much you know - so apologies if this reply is a bit basic.
    Your download speed of 451 kbps is about 90% of your IP profile, which is about the maximum you will get unless your IP profile is raised. The acceptable range is again based on your IP profile, so it can be raised. Your DSL connection rate of 896 can also go up and down, but is unlikely to change unless you reset your router. If you have a Home Hub you can just press the reset switch, but do not turn off a Home Hub, as this may be detected as a fault at your local exchange and result in a cut in your IP profile. Your DSL connection rate could also go down, but may go up a lot depending on your connection. 448 kbps up-stream is probably the maximum and unlikely to change.
    To get your IP profile raised ring the BT helpline on 0800 800 150 and if you are phoning from your home phone press 1 and then 4 for broadband help. Then press 2. Ignore any messages until you get an option to get through to someone. Tell them you are being limited to 500kps and want this raised. If you get the right person your line can be reset in 2 hours but unfortunately the ability of the person on the other end does vary so be patient and call again if necessary.
    It might help to run a "ping test" to check the quality of your line - you can do this at www.pingtest.net. The results vary, but you will get a good idea of your line quality over time. To give you an idea of what to expect, my line runs a ping of about 40 and jitter (interference) of about 3, which is A rated. This would effectively prove the problem is not at your end. Irrespective of this, your download speed will not improve unless your IP profile is raised.

  • DB Access Speed improvement

    When I try accessing and acquiring huge amounts of data and the speed is too slow, what should I consider in order to improve this poor performance?
    There can be server-side improvements and client-side ones (for example, ADO.NET code).
    I want to know everything I should consider.

    Hi dy0803,
    For a SELECT statement in SQL Server 2012, to improve speed, you may reference the suggestions below.
    Create proper indexes on the columns that appear after WHERE, GROUP BY, and JOIN ON.
    Use sargable queries to take advantage of the indexes.
    Try different approaches and compare the execution plans to select the most efficient statement.
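    To illustrate the sargability point, a minimal sketch (the `Orders` table and its index are hypothetical, not from the original post): wrapping an indexed column in a function prevents an index seek, while an equivalent range predicate allows one.

    ```sql
    -- Assumes a hypothetical dbo.Orders table with an index on OrderDate.
    -- Non-sargable: the function on the column forces a scan of every row.
    SELECT OrderId
    FROM dbo.Orders
    WHERE YEAR(OrderDate) = 2012;

    -- Sargable rewrite: a range predicate on the bare column lets the
    -- optimizer use an index seek on OrderDate instead.
    SELECT OrderId
    FROM dbo.Orders
    WHERE OrderDate >= '20120101' AND OrderDate < '20130101';
    ```

    Comparing the two execution plans (scan vs. seek) is exactly the kind of check the suggestion above recommends.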
    If you have any question, feel free to let me know.
    Eric Zhang
    TechNet Community Support

  • Speed improvements with 4.1

    While waiting patiently (well, sort of) for the new update, I found this really helpful review and tests of 4.1 on the 3G phone. Three pages of good info:
    http://www.anandtech.com/show/3893/caring-for-the-elderly-ios-41-speed-boost-on- iphone-3g

    dbk9999 wrote:
    http://www.anandtech.com/show/3893/caring-for-the-elderly-ios-41-speed-boost-on- iphone-3g
    I have absolutely no idea what that article is talking about with regards to typing speed. On my 3G with 3.1.3 there is no lag in my typing, no matter how fast I type. I have just tried tapping as fast as I can using two hands, on just two letters: no missed letters or lag. It's instant. Typing is perfection for me.
    I have 6 pages of apps, I'm forever buying more, and apps come and go off my phone all the time. I have never restored my phone or done any other sort of maintenance to it in two years of ownership.
    I just don't understand how folks' experiences with their iPhones can vary so much! Perhaps the moral of the story is that we can't always judge what our phones will be like from articles, forum posts, etc. We just have to try it for ourselves and cross fingers firmly.
    Having said that, I'm sticking to v3.1.3 on the 3G. Too many horror stories for me, and besides, the iPhone 4 is calling me to buy it.

  • Qt 4.6 Speed improvements

    http://qt.nokia.com/about/news/nokia-releases-qt-4.6
    I'm referring to the part that says "More horsepower".
    Has anyone tried Qt 4.6 yet and felt the speed improvement?

    mcsaba77 wrote:
    Adriano ML wrote:
    KDE 4.3 is butter smooth on my 1GB Athlon 64 939, GeForce 7600GS. Be it with or without KWin compositing. My Atom with GMA 950 fares really well too.
    Can't wait for QT 4.6 and KDE 4.4
    I second that, I dunno how can anyone describe KDE 4.3.x as slow, unless something is seriously broken in the installation, or if there are driver problems...
    As to RAM usage, mine eats 14% of total RAM (I have 2 GB, so that's <300 MB) after start-up (and yes, I wait for all services to load; in fact System Load Viewer shows only 12% at first), which is ridiculously low considering all the services it provides. I mean, KDE is not Fluxbox, and yes, I have everything that's needed for a rich desktop experience running in the background (wicd, hal, cups, pulseaudio, kdm, etc.).  I also have 2 desktops with different wallpapers and widgets on each, plus I have desktop effects enabled.
    That really depends upon the system as well.
    For example, I can run KDE (vanilla) on my old P4 1.6 GHz computer with an NVIDIA GeForce 2 AGP video card and 1 GB RAM. It runs, and as long as I am not trying to do too much, its CPU usage stays around 25%. I can even use Kopete on it and do some light browsing. But eventually it starts to slow down as CPU usage races up, amongst other things. I 'can' enable compositing effects, but that slows things down even more, until eventually it freezes or KDE disables compositing automatically. Thus I am using Fluxbox on that computer.
    On my wife's Pentium dual-core 1.6 GHz laptop with 2 GB RAM, KDE(mod) runs perfectly fine; it is fast and able to handle XP in virtual mode with no real problems, with compositing effects on. Occasionally it does swap when running VirtualBox XP and other programs. But other than that, it runs as fast (after the initial load) as Fluxbox on my much older desktop.
    I do suspect that once I get my new machine built in March, I will be able to enjoy KDE with much greater speed. But it just goes to show you that on older computers KDE does not feel fast and does feel heavy. Also, there is something to be said for not having your DE automatically use up 300-400 MB, and for it loading its initial settings within a mere second, leaving much more free memory to load other programs faster. But I do also agree that on the latest and greatest machines out there now, unless you are really watching memory usage like a hawk and timing everything with a nanosecond stopwatch, you probably wouldn't notice the differences in app loading speeds between KDE and a lighter environment like Fluxbox. In fact, you may notice that KDE loads KDE apps faster than Fluxbox loads KDE apps (especially true if you use preloading).

Maybe you are looking for

  • How about Verizon updating their outdated information on their tech support pages?

    I'll start. On this page: http://my.verizon.com/services/speedoptimizer/fios/ remove this verbiage: "MAC computers may optimize using the Apple Broadband Tuner." There is no Apple Broadband Tuner.  It went away years ago...somewhere around 2005. 

  • Setting Acrobat Reader as default

    when I open a PDF file it opens in Preview. How do I change it so that by default the Acrobat Reader opens the PDf files. I went to the Acrobat Reader website but their instructions didn't work for me. Thank you for responding.

  • ITunes not opening after downloading latest update (7.1)

    Hi, ever since updating my iTunes earlier this week, it has stopped working. It says (or so I believe, mine is in Danish so I'm not sure this is translated properly!): "Not enough diskspace available", which is clearly a mistake. I've tried deleting

  • Finding DB details from Admin console

    Hi All, We have SOA11g server. Now i want to find the database used for that server from Admin Console. please help me in this. TIA, Bob

  • Where is  mobile stock management customizing guide ?

    Hello For CRM 2007 mobile stock management in Mobile service. Netweaver 7.1 mobile is mandatory. However, I could not find the IMG guide for this area. Such as connectivity setup between that and CRM or backend ECC. Has anyone done that part? Thanks