SSD Performance Questions

I've just ordered a 2.4 GHz MBP with a 7,200 RPM 750 GB drive. I will be adding either a single 8 GB memory stick (for a total of 10 GB) or a 16 GB kit. I'm looking to use my first SSD as a boot drive for OS X and potentially Win 7 in Boot Camp, and I hope I can get some counsel on model selection and configuration.
I read that SSDs suffer a performance drop (sometimes significant) as the drive fills up.  Also there are both 256GB and 240GB drives.  Ideally I'd like to avoid the frustration associated with seeing performance drop and maximize my formatted capacity.
1)  Are there models that maintain performance over time?  Is there a specification that indicates how performance drops as the drive fills up?
2)  Is there a performance difference (current / long-term) between 256 and 240GB drives?
3)  Does an abundance of RAM improve performance and / or longevity? 
4)  How much space (if any) should be kept free for swap files in OSX / Windows?
5)  With 240 / 256GB SSDs, how much usable space is available after formatting?
6)  Is there a difference in performance based on file format NTFS vs. HFS+?
7)  Do I need to be concerned about major name brands (Intel, Samsung, OCZ, Kingston, etc.) being incompatible with MBPs?
8)  Are some SSDs easier to install / configure / maintain on a MBP than others?
9)  Are there any issues I should be aware of regarding the installation or use of a SSD that would impact my MBP's warranty?
Based on performance, reliability and 5-year warranty, I've been attracted to the Intel 520.  I've also read good reports on the Samsung 830.  One review indicated that the 830 maintained performance over time while the 520 experienced a significantly greater drop.  True?
I can buy either drive in the $330 range.  Here on the forum I've read many recommendations for the OWC drives and support.  For comparable performance and a 5-year warranty, it looks like the Extreme Pro would be the model to buy, however, at $460 it is 50% more expensive.  Thoughts?
Thanks in advance!  This is the kind of purchase that I can only make once every 5 years, so I really appreciate any help.
JD

kayakjunkie wrote:
I've just ordered a MBP 2.4Ghz with 7200rpm 750GB drive, I will be adding either a single 8GB memory stick (total of 10GB) or a 16GB kit.
Your performance with the large 16 GB of RAM should outweigh any drawback of the 7,200 RPM drive, or even a 5,400 RPM one, unless you start swapping; then the 7,200 should be fine enough.
The SSD is good for transferring large data sets off the machine to another SSD, but there is little benefit on the same machine with most files, as they are small, so you don't really see any benefit in most day-to-day operations. If you had low RAM, then the SSD would help with faster memory swapping. As you know, SSDs wear out faster than hard drives.
I'm looking for my first SSD use as a boot drive for OSX and potentially Win 7 in BootCamp and hope I can get some counsel on model selection and configuration.
Here's the speed-demon chart; note the fastest ones are smaller in capacity:
http://www.harddrivebenchmark.net/high_end_drives.html
Again, unless you're transferring large data to an external SSD via Thunderbolt on a constant basis (and can afford to replace the worn-out SSDs), an SSD as a boot drive really isn't worth it for most computers if you have a large amount of RAM.
It used to be, with 32-bit processors/OSes and their 3.5 GB RAM limit, that having a fast boot drive mattered day to day because of the faster memory swapping, but not anymore. My 4 GB with the stock 5,400 RPM drive is fast enough, but I will be getting 16 GB soon for my virtual machine OSes.
I read that SSDs suffer a performance drop (sometimes significant) as the drive fills up.  Also there are both 256GB and 240GB drives.  Ideally I'd like to avoid the frustration associated with seeing performance drop and maximize my formatted capacity.
Hard drives do this too, because files have to be broken up more to fit into the small free spaces that remain.
Hard drives also suffer a bit past 50% filled, as writes move to the slower inner tracks of the platters.
I use my 750 GB drive partitioned 50/50, with A cloned to B so I can Option-boot either; I won't use the second 50% of the drive day to day, as it's too slow for my tastes.
IMO, 250 GB is too small for a drive holding Windows too; the fastest 500 GB SSD would be better money spent, a better balance of speed and onboard storage.
1)  Are there models that maintain performance over time?  Is there a specification that indicates how performance drops as the drive fills up?
Not that I know of.
2)  Is there a performance difference (current / long-term) between 256 and 240GB drives?
Not that I know of.
3)  Does an abundance of RAM improve performance and / or longevity?
Yes: more RAM = less swapping to the SSD, which means it will last longer and run faster.
4)  How much space (if any) should be kept free for swap files in OSX / Windows?
I would suggest keeping 25% of an SSD as free space. For a hard drive, 50% filled is ideal, but up to 75% max is likely more realistic for most people.
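On question 5, most of the "missing" formatted capacity is just decimal-vs-binary bookkeeping, which can be sketched like this (the class and helper names are made up for illustration; filesystem metadata takes a little more on top):

```java
// Rough sketch of why a "240 GB" or "256 GB" SSD shows less after formatting:
// vendors count decimal gigabytes (10^9 bytes), while operating systems have
// traditionally reported binary gibibytes (2^30 bytes).
public class Capacity {
    // Convert a vendor byte count to binary gibibytes (GiB)
    static double toGiB(long bytes) {
        return bytes / Math.pow(2, 30);
    }

    public static void main(String[] args) {
        System.out.printf("240 GB = %.1f GiB%n", toGiB(240_000_000_000L)); // ~223.5
        System.out.printf("256 GB = %.1f GiB%n", toGiB(256_000_000_000L)); // ~238.4
    }
}
```

Note that a 240 GB and a 256 GB drive often carry the same amount of raw flash; the 240 GB model typically reserves the difference as over-provisioning, which is one reason some of them hold up better as they fill.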
6)  Is there a difference in performance based on file format NTFS vs. HFS+?
You will have little choice of format for OS X or Windows: OS X needs HFS+ and Windows needs NTFS.
If you do a third partition (e.g. on the hard drive), then exFAT would likely be the best choice for both OSes to access.
7)  Do I need to be concerned about major name brands (Intel, Samsung, OCZ, Kingston, etc.) being incompatible with MBPs?
8)  Are some SSDs easier to install / configure / maintain on a MBP than others?
Not that I know of.
9)  Are there any issues I should be aware of regarding the installation or use of a SSD that would impact my MBP's warranty?
Just don't break anything doing so; one is allowed to replace the RAM/storage. However, the warranty/AppleCare doesn't cover the newly added items, of course.
http://eshop.macsales.com/installvideos/
Based on performance, reliability and 5-year warranty, I've been attracted to the Intel 520.  I've also read good reports on the Samsung 830.  One review indicated that the 830 maintained performance over time while the 520 experienced a significantly greater drop.  True?
Performance isn't going to matter unless you're dealing with large amounts of data on a constant basis; a long warranty is always good. But SSDs have no moving parts, so it's easy to give a 5-year warranty, IMO.
Look here
http://www.harddrivebenchmark.net/
I can buy either drive in the $330 range.  Here on the forum I've read many recommendations for the OWC drives and support.  For comparable performance and a 5-year warranty, it looks like the Extreme Pro would be the model to buy, however, at $460 it is 50% more expensive.  Thoughts?
OWC is good, but you're basically doing all the work anyway, so you can install whatever you want if you find a faster/larger SSD someplace else.
You need to learn how the Lion Recovery Partition works. There are no OS X install disks anymore; it's all on a partition you boot from to install Lion. If you remove the drive, you need a way to install Lion again, right?
Carbon Copy Cloner clones your entire Lion install and Lion Recovery Partition to an external drive; you can Option-boot from it and it's the same thing. Then reverse-clone onto the new SSD.
Other info you will need.
https://support.apple.com/kb/HT4718
https://support.apple.com/kb/dl1433
http://osxdaily.com/2011/08/08/lion-recovery-disk-assistant-tool-makes-external-lion-boot-recovery-drives/
Another option: after you've installed Windows in Boot Camp on the SSD (the machine won't boot a Windows disk from an external optical drive) and used WinClone to clone the Boot Camp partition for backup, you can replace the SuperDrive with a kit and place the hard drive there for partitioning and storage.
This way the SSD stays unchanged and fast, while the hard drive takes all the work of the user's files, changes, etc., putting the wear and tear on it instead.
The SuperDrive goes into an enclosure (sold with the kit) and becomes an external optical drive.
This modification will of course void your warranty/AppleCare.
For more information, see Bmer (Dave Merten) over at MacOwnersSupportGroup as he has done this and knows all the tricks.

Similar Messages

  • SATA II or III on late 2010 MBP - Also SATA III and SSD performance

    Hello:
    I would like to confirm that only the new 2011 MBPs have 6 Gb/s SATA interfaces. My MBP reports Series 5 and 3 Gb/s on both the optical and HD bays. I have an SSD in the optical bay and a WD Scorpio Black in the HD slot. Both have a negotiated link speed of 3 Gb/s. I put the SSD in the optical bracket and the spinning drive in the real drive slot to take advantage of the physical protection in the HD bay.
    If the new MBP's have one SATA III channel, I would assume it's important to put the SSD in that slot.
    My SSD has a SATA II (3 Gb/s) interface. Some new ones have 6 Gb/s. Do SSD drives even approach that throughput, or 3 Gb/s throughput? Which leads to the question: would a SATA III SSD perform better in a SATA III MBP?
    I know the benchmarks may not even be noticeable in real-life use for most of us.

    I have just put an SSD in my 2010 MBP. From all the info I could find, both the HDD bay and the optibay in the late 2010 are SATA II; this is the reason I put my SSD in the optibay, as the speed would be the same.
    I did think about getting a SATA III SSD for the future, when I upgrade, but at the time they were very expensive and I had read reports about problems connecting to a SATA II port.
    The new MBPs only have SATA III in the HDD spot, so this would be the best place for any SATA III SSD.
    Personally, on a 2010 MBP I'm happy; I still get great speeds vs. an HDD, and by keeping the HDD in its original spot I get to keep spin-down and the sudden motion sensor.

  • Simple performance question

    Simple performance question, put the simplest way possible: assume
    I have an int[][][][][] matrix and a boolean add. The array is several dimensions deep.
    When add is true, I must add a constant value to each element in the array.
    When add is false, I must subtract a constant value from each element in the array.
    Assume this is very hot code, i.e. it is called very often. How expensive is the condition checking? I present the two scenarios.
    private void process() {
        for (int i = 0; i < dimension1; i++)
            for (int ii = 0; ii < dimension1; ii++)
                for (int iii = 0; iii < dimension1; iii++)
                    for (int iiii = 0; iiii < dimension1; iiii++)
                        if (add)
                            matrix[i][ii][iii][...] += constant;
                        else
                            matrix[i][ii][iii][...] -= constant;
    }
    private void process() {
        if (add)
            for (int i = 0; i < dimension1; i++)
                for (int ii = 0; ii < dimension1; ii++)
                    for (int iii = 0; iii < dimension1; iii++)
                        for (int iiii = 0; iiii < dimension1; iiii++)
                            matrix[i][ii][iii][...] += constant;
        else
            for (int i = 0; i < dimension1; i++)
                for (int ii = 0; ii < dimension1; ii++)
                    for (int iii = 0; iii < dimension1; iii++)
                        for (int iiii = 0; iiii < dimension1; iiii++)
                            matrix[i][ii][iii][...] -= constant;
    }
    Is the second scenario worth a significant performance boost? Without understanding how the compiler generates executable code, it seems that in the first case, n^d conditions are checked, whereas in the second, only 1. It is, however, less elegant, but I am willing to do it for a significant improvement.

    erjoalgo wrote:
    I guess my real question is, will the compiler optimize the condition check out when it realizes the boolean value will not change through these iterations, and if it does not, is it worth doing that micro optimization?
    Almost certainly not; the main reason being that
        matrix[i][ii][iii][...] +/-= constant
    is liable to take many times longer than the condition check, and you can't avoid it. That said, Mel's suggestion is probably the best.
    but I will follow amickr advice and not worry about it.
    Good idea. Saves you getting flamed with all the quotes about premature optimization.
    Winston
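A minimal runnable sketch of the two scenarios from this thread, using a 2-D array for brevity (the class name, SIZE, and CONSTANT are made up for illustration). Both variants produce identical results; hoisting only changes how many times the branch is evaluated:

```java
public class HoistDemo {
    static final int SIZE = 4;
    static final int CONSTANT = 5;

    // Branch inside the innermost loop: the condition is checked n^d times.
    static void processBranchInside(int[][] m, boolean add) {
        for (int i = 0; i < SIZE; i++)
            for (int j = 0; j < SIZE; j++)
                if (add) m[i][j] += CONSTANT;
                else     m[i][j] -= CONSTANT;
    }

    // Branch hoisted out of the loops: the condition is checked once.
    static void processBranchHoisted(int[][] m, boolean add) {
        if (add) {
            for (int i = 0; i < SIZE; i++)
                for (int j = 0; j < SIZE; j++)
                    m[i][j] += CONSTANT;
        } else {
            for (int i = 0; i < SIZE; i++)
                for (int j = 0; j < SIZE; j++)
                    m[i][j] -= CONSTANT;
        }
    }

    public static void main(String[] args) {
        int[][] a = new int[SIZE][SIZE];
        int[][] b = new int[SIZE][SIZE];
        processBranchInside(a, true);
        processBranchHoisted(b, true);
        // Both variants must agree element by element.
        if (!java.util.Arrays.deepEquals(a, b)) throw new AssertionError("mismatch");
        System.out.println(a[0][0]); // 0 + CONSTANT = 5
    }
}
```

In practice a JIT may already hoist a loop-invariant branch out of hot loops, so it is worth measuring before committing to the less elegant version.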

  • BPM performance question

    Guys,
    I do understand that ccBPM is very resource-hungry, but what I was wondering is this:
    Once you use BPM, does an extra step decrease the performance significantly? Or does it just need slightly more resources?
    More specifically, we have quite complex mapping in 2 BPM steps. Combining them would make the mapping less clear, but would it be worth doing from a performance point of view?
    Your opinion is appreciated.
    Thanks a lot,
    Viktor Varga

    Hi,
    In SXMB_ADM you can set the timeout higher for the sync processing.
    Go to Integration Processing in SXMB_ADM and set the parameter SA_COMM CHECK_FOR_ASYNC_RESPONSE_TIMEOUT to 120 (seconds). You can also increase the number of parallel processes if you have more waiting now: SA_COMM CHECK_FOR_MAX_SYNC_CALLS from 20 to XX. It all depends on your hardware, but this helped me go from the standard 60 seconds to maybe 70 in some cases.
    Make sure that your calling system does not have a timeout below the one you set in XI; otherwise yours will go on and finish, and your partner may end up sending the message twice.
    When you go for BPM, the whole workflow has to come into action. So, for example, when your mapping lasts < 1 sec without BPM, doing it in a BPM means the transformation step can last 2 seconds + one second of mapping... (that's just an example). The workflow gives you many design possibilities (bridge, error handling), but it can slow down the process, and if you have thousands of messages the performance can be much worse than the same scenario without BPM.
    See the links below:
    http://help.sap.com/bp_bpmv130/Documentation/Operation/TuningGuide.pdf
    http://help.sap.com/saphelp_nw04/helpdata/en/43/d92e428819da2ce10000000a1550b0/content.htm
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/xi/3.0/sap%20exchange%20infrastructure%20tuning%20guide%20xi%203.0.pdf
    BPM Performance tuning
    BPM Performance issue
    BPM performance question
    BPM performance- data aggregation persistance
    Regards
    Chilla..

  • Does Linux do anything About SSD Performance Degradation?

    I have been thinking about getting an SSD for my laptop but just read this article on performance degradation of SSDs: http://www.anandtech.com/show/2738/8
    I typically use ext4. Does anyone know if any of the Linux filesystems/kernels/etc do anything about SSD performance degradation? Has anyone found a solution to this problem? Maybe new SSDs don't have the issue?

    My OCZ Vertex 2 SSDs slowed down just a tiny bit soon after installing them, but TRIM seems to be working well and they don't seem to be slowing down any further. I'm running them on ext4 with journaling enabled, and only the log files are being directed to temp files.  Everything else is going to the SSDs, and they are being used just like regular hard drives.

  • Swing performance question: CPU-bound

    Hi,
    I've posted a Swing performance question to the java.net performance forum. Since it is a Swing performance question, I thought readers of this forum might also be interested.
    Swing CPU-bound in sun.awt.windows.WToolkit.eventLoop
    http://forums.java.net/jive/thread.jspa?threadID=1636&tstart=0
    Thanks,
    Curt

    You obviously don't understand the results, and the first reply to your posting on java.net clearly explains what you missed.
    The event queue is using Thread.wait to sleep until it gets some more events to dispatch. You have incorrectly diagnosed the sleep waiting as your performance bottleneck.

  • Compaq 8510w, poor SSD performance

    Something seems to be limiting my read performance with an SSD installed.  Running Windows 7 (64-bit), with the latest BIOS, Intel Driver INF and SSD Toolbox (for TRIM).  Drive is an Intel X25-M G2 80GB, spec'd at ~250MB/s for sequential reads.  Benchmarks tested include HD Tune, HD Tach, ATTO, Datamarck and DiskMark.  All show around 55MB/s sequential read, with the W7 resource monitor confirming that throughput.
    I have found articles online describing "poor SSD" performance due to either running SATA-I vs. SATA-II or AHCI vs. IDE.  The drive is running IDE (UDMA-5) and SATA-II.  What is described as poor performance is 225MB/s for IDE vs. AHCI and 175MB/s for SATA-I vs. SATA-II.  Nowhere near the performance I see.
    I have also run across posts mentioning that the 8510w utilizes a bridge for IDE to SATA.  If this is the case, the bridge could be the culprit.  Any insight would be appreciated.

    Hi doasc,
    I just re-imaged my 8510w to Windows 7 onto a Kingston SSDNow V Series 128 GB drive.
    I was quite silly and did not run HD Tune before enabling the employer-mandated McAfee Endpoint Encryption whole-disk encryption.
    I get 60 MB/sec in HD Tune with the disk encrypted. You are likely quite correct about getting better results without disk encryption enabled; I wish I had tested it that way first.
    /dan

  • Xcontrol: performance question (again)

    Hello,
    I've got a little performance question regarding XControls. I observed rather high CPU load when using XControls. To investigate further, I built a minimal XControl (boolean type) which only writes the received boolean value to a display element in its facade (see attached example). When I use this XControl in a test VI and write to it at a rate of 1000 booleans/second, I get a CPU load of about 10%. When I write directly to a boolean display element instead of the XControl, I have a load of 0 to 1%. The funny thing is, when I emulate the XControl functionality with a subVI, a subpanel and a queue (see example), I only have 0 to 1% CPU load, too.
    Is there a way to reduce the CPU load when using XControls?
    If there isn't, and if this is not a problem with my installation but a known issue, I think this would be a potential point for NI to fix in a future update of LV.
    Regards,
    soranito
    Message Edited by soranito on 04-04-2010 08:16 PM
    Message Edited by soranito on 04-04-2010 08:18 PM
    Attachments:
    XControl_performance_test.zip ‏60 KB

    soranito wrote:
    Hello,
    I've got a little performance question regarding XControls. I observed rather high CPU load when using XControls. To investigate further, I built a minimal XControl (boolean type) which only writes the received boolean value to a display element in its facade (see attached example). When I use this XControl in a test VI and write to it at a rate of 1000 booleans/second, I get a CPU load of about 10%. When I write directly to a boolean display element instead of the XControl, I have a load of 0 to 1%. The funny thing is, when I emulate the XControl functionality with a subVI, a subpanel and a queue (see example), I only have 0 to 1% CPU load, too.
    Okay, I think I understand the question now.  You want to know why an equivalent XControl boolean consumes 10x more CPU resource than the LV base package boolean?
    Okay, try opening the project from my reply yesterday.  I don't have access to LV at my desk, so let's try this. Open up your XControl facade.vi.  Notice how I separated your data event into two events?  Go to the data change VI event; when looping back the action, set isDataChanged (part of the data change cluster) to FALSE.  For the data input (the one displayed on your facade.vi front panel), set that isDataChanged to TRUE.  This will limit the number of times the facade loops.  It will not drop your CPU from 10% to 0%, but it should drop a little, enough to give you a short-term solution.  If that doesn't work, just play around with the loopback statement.  I can't remember the exact method.
    Yeah, I agree an XControl shouldn't be overconsuming system resources.  I think XControl is still in its primitive form, and I'm not sure if NI is planning on investing more time to fix bugs or even enhance it.  IMO, XControl isn't quite ready for primetime yet.  Just too many issues that need improvement.
    Message Edited by lavalava on 04-06-2010 03:34 PM

  • [Problem disappeared in new kernel] Bad SSD performance

    Hello all, I'm an ex-gentoo user turned arch. Looking forward to it!
    Anyway, I have a brand new OCZ Vertex SSD -- theoretical 230MB/s read times. In the livecd installer, hdparm gets 210MB/s reads, pretty awesome. However, in my current installation, I am only getting 75MB/s.
    My kernel is 2.6.29, the livecd has 2.6.28. I checked, UDMA/133 is enabled on my drive. I (and some generous IRC users) have absolutely no idea what's happening here.
    Any help would be appreciated!
    [root@wakka jake]# hdparm -i /dev/sda
    /dev/sda:
    Model=OCZ-VERTEX, FwRev=1370, SerialNo=92M9W154QY80YY06496G
    Config={ HardSect NotMFM HdSw>15uSec Fixed DTR>10Mbs RotSpdTol>.5% }
    RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=12288
    BuffType=unknown, BuffSize=32767kB, MaxMultSect=16, MultSect=16
    CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=62533296
    IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
    PIO modes: pio0 pio1 pio2 pio3 pio4
    DMA modes: mdma0 mdma1 mdma2
    UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6
    AdvancedPM=no WriteCache=enabled
    Drive conforms to: Unspecified: ATA/ATAPI-5,6,7
    Last edited by wakkadojo (2009-06-17 13:40:38)

    I have been all over google and many other forums, there's really nothing on this. I'm almost certain it's an Arch linux problem of some sort.
    I just gave your suggestion a shot. Here's what we get:
    # hdparm -t -W 0 /dev/sda
    /dev/sda:
    setting drive write-caching to 0 (off)
    write-caching = 1 (on)
    Timing buffered disk reads: 220 MB in 3.02 seconds = 72.96 MB/sec
    So no improvement -- but then again it says it turns off write caching when it really does not.
    However, I would like to point out that it is probably not write caching that is causing this slowdown, since on the livecd write caching is enabled, and others have not posted anything about disabling write caching to improve ssd performance.
    Last edited by wakkadojo (2009-05-31 23:55:24)

  • Performance Question : Swap File on SSD

    In the past I've stored my swap file on a non-system disk. Now, however, my system disk is an SSD and the competition between system swaps and Photoshop swaps should be reduced if not eliminated. At least, that's what I think. I have other physical hard drives, but I'm wondering if specifying one of them for the swap file buys anything over using the SSD.
    Also, what the heck does the Camera Raw cache do to increase performance? This is all over the 'net and nobody explains why but they say it will improve performance of Photoshop and Lightroom. How do you calculate the best size for the cache?
    Thanks
    Steve

    On the issue of killing the flash drive; at the risk of being redundant, allow me to quote myself:
    So, what is the current consensus on the feasibility/advisability of using flash memory for swap? I've read about the limited write cycles of flash being an argument against using it for swap. But recent reading indicates to me that the limited write cycles problem applies mostly to older, smaller-capacity flash memory. Some come right out and say that, for larger-capacity flash memory, the life of the device is likely to exceed the amount of time your current computer will be useful (I think I've seen estimates in the range of 3-4 years life--minimum--for newer, higher-capacity flash memory).
    Now that we've established that the life of the flash memory is not a significant issue in this discussion, we may move on to a consideration of why I am considering NOT buying RAM. I won't quote myself on that, but suffice it to say that I already have pen drives and other flash memory lying around that could easily be pressed into service for such a project. Any discussion concerning issues I've not already addressed regarding RAM vs. flash memory for the task at hand?
    James

  • SSD performance in the future with the T500

    SSDs apparently perform slower as they are filled up and blocks are rewritten. Here is an article on it.
    Will there be a firmware update to solve the problem on the T500's (2081-CTO) SSD?
    Will firmware updates require wiping the data on the drive?
    Lenovo shipped my Vista OS with automatic defrag, superfetch and prefetch enabled. Bad Lenovo! They should all be turned off for an SSD.

    Hi olddoc, and welcome to the Lenovo User Community!
    This is a very interesting question.
    As I understand it, Windows 7 will be the first version of Windows which has optimizations specifically for SSD's.
    The Trim operation mitigates the performance problem you mention. For it to work you obviously need Windows 7, and an SSD which supports Trim. 
    Some SSD manufacturers have made it clear their drive firmware will be updated to support Trim (e.g. Intel), some take their own approach to mitigating the problem, and some have not yet said what their approach will be.
    Windows 7 detects SSD's and automatically chooses the right settings for SSD operation (defrag, superfetch, prefetch, readyboost, etc.).
    As SSD prices fall and Windows 7 releases this is going to be of interest to more Community members...
    I don't work for Lenovo. I'm a crazy volunteer!

  • Expert advice on SSD performance bottleneck needed.

    Masters of storage, please step forward...
    I guess I need some technically advanced advice on SSD, SATA, in/out limits etc.
    I'm programming a video installation where I continuously read and write quite a lot of data:
    To the first SSD, drive A, I write 50 frames per second (= 50 files per second) plus simultaneously read some other 50 fps (from the same drive!).
    From a second SSD, drive B, I read an additional 50 frames per second.
    The question is: where is the bottleneck? I succeed in running my program at 25 fps, but not faster (and I would need to get it to 50 fps).
    The total amount of data is actually not that excessive: each frame is 1.8 MB, so each stream is about 90 MB/s, which comes to a total of 180 MB/s read plus another 90 MB/s write = 270 MB/s total data transfer, in 150 read/write operations per second.
    What confuses me most is the following:
    - I get the same performance when I run the patch from two separate SSDs as when I run all three streams from a single one without using the second SSD at all.
    - The speed does not increase substantially if I make one or two of these streams smaller (and reduce the file size to 260 KB/file which equals only 13 MB/s per stream).
    - It does increase however (to 40-60 fps) if I switch all three streams to 260 KB/file.
    This makes me wonder what is going on... is it the drives? Is it SATA? Or some other stuff I maybe even never heard of? And what can I do about it?
    I use a MacBook Pro (Retina, early 2013, i7 2.6 GHz, 16 GB RAM, OS 10.9.2). Drive A is a Samsung 840 Pro 256 GB SSD connected via Thunderbolt; drive B is the internal 512 GB MacBook SSD (built in from Apple).
    My program is built in Max/MSP and I use an arbitrary, uncompressed binary data format (.jxf). I can rule out CPU/GPU load issues.
    Thanks for any help, it is appreciated a lot!
    Karl
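The bandwidth arithmetic in the question above can be checked with a quick sketch (figures come from the post; the class and method names are made up, and "MB" is taken as 10^6 bytes):

```java
// Quick check of the video installation's bandwidth arithmetic.
public class Throughput {
    // MB/s for one stream of `fps` frames of `frameMB` megabytes each
    static double perStreamMBps(double frameMB, int fps) {
        return frameMB * fps;
    }

    public static void main(String[] args) {
        double stream = perStreamMBps(1.8, 50);   // 90 MB/s per stream
        double read = 2 * stream;                 // two read streams: 180 MB/s
        double write = 1 * stream;                // one write stream:  90 MB/s
        double total = read + write;              // 270 MB/s combined
        int fileOps = 3 * 50;                     // 150 file opens/reads/writes per second
        System.out.printf("%.0f MB/s per stream, %.0f MB/s total, %d ops/s%n",
                          stream, total, fileOps);
    }
}
```

Since 270 MB/s is within reach of the drives named in the post, the 150 small-file operations per second (per-frame open/close overhead) may be a more likely bottleneck than raw SATA bandwidth.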

    karlkusher wrote:
     ...do you happen to have any more tips like that? :  )...
    I'll pass this along because it seems to make sense. I only discovered noatime myself today and set it up using these instructions; it doesn't seem to break anything so far and should speed things up some as well as reduce wear. There's also mention of Chameleon which can activate TRIM but also turn noatime on. There's a discussion of atime and noatime here. Note that there's mention of relatime replacing atime as default though what the upside and downside of the variations are is above my paygrade.
    If you're going to tinker with .plists, I'd suggest using TextWrangler.

  • HDD and SSD partition Question

    Can I back up an HDD and an SSD using the same external hard drive by partitioning the drive? I have a 128 GB SSD and a 500 GB HDD, and I want to back up both using a 1 TB external hard drive.

    Hello,
    The Windows Desktop Perfmon and Diagnostic tools forum is to discuss performance monitor (perfmon), resource monitor (resmon), and task manager, focusing on HOW-TO, Errors/Problems, and usage scenarios.
    As the question is off topic here, I am moving it to the
    Where is the Forum... forum.
    Karl

  • Aftermarket SSD Compatibility Question

    Hi Everyone,
    I will be purchasing a Macbook Pro 13" with the following specs:
    2.7GHz dual-core Intel Core i7
    4GB 1333MHz RAM
    500GB 5400-rpm hard drive
    Intel HD Graphics 3000
    Built-in battery (7 hours)
    I am planning on purchasing an aftermarket Solid State Drive (SSD) from either Intel or OCZ technologies.
    My Question is this:
    Is an OCZ Vertex 3 SATA III 2.5" SSD compatible with the Macbook Pro? 
    the link is below:
    http://www.ocztechnology.com/ocz-vertex-3-sata-iii-2-5-ssd.html
    Many Thanks,
    Harrison

    Wow, they are referring to some really old Apple hardware. Some of the old Mac Performa series had video with sync-on-green only. Sync-on-green is somewhat common practice in the world of video, but I haven't heard of any computers relying on it exclusively for a long time. I believe some Macs still have this capability, but it is rarely called for. The Sceptre should work fine over DVI.

  • Recovering SSD performance

    Two questions. If you're speculating, please be sure to say so. SSD threads are too often full of misinformation.
    Does Leopard support SSD extensions for deallocating SSD physical blocks, or does it treat SSD drives as standard physical SATA devices?
    If the latter, are any/all of the SSD drives Apple uses capable of recognizing and unmapping zeroed pages?
    I ask because I'm wondering if there's anything a user can do to release pages to the pre-erase pool to regain fast write performance on a drive that's seen heavy use. I'd hate to have to do a factory reset on the drive.

