750GB Drives in RAID 0

Hi,
I want to add 3 of the Seagate 750GB drives and set them as RAID 0.
Anything to watch for with the drives and how can I make this as inexpensive as possible?
B

Just connect the drives and format. That's all there is to it.
You might consider changing the block size depending on your
typical use.
Cost is measured only by the cost of the drives themselves.
For price/performance I like the Maxtor MaxLine III drives
the best by far. In a RAID 0 they're pretty nice, according
to both test-bed sites that have tested them and to my own
findings.
Have fun!
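To put rough numbers on the trade-off (a back-of-envelope sketch; the 3% annual failure rate is a made-up illustrative figure, not a spec for any particular drive):

```python
# Quick numbers for a 3 x 750 GB RAID 0 (stripe) set.
n_drives = 3
drive_gb = 750
capacity_gb = n_drives * drive_gb           # striping adds capacity...
print(capacity_gb)                          # 2250

# ...but also risk: RAID 0 has no redundancy, so the array is lost if ANY
# drive fails. With a hypothetical 3% annual failure rate per drive:
p_drive = 0.03
p_array = 1 - (1 - p_drive) ** n_drives
print(round(p_array, 3))                    # ~0.087, i.e. roughly 3x the single-drive risk
```

So keep a backup of anything on the stripe that you can't afford to lose.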

Similar Messages

  • Which 750GB drives used in an XServe RAID?

    Could somebody with an Xserve RAID with 750GB drives please tell me which drives are used in this configuration? You can look it up in the Xserve RAID Admin utility, under Arrays and Drives, when selecting a drive in "drive" mode.
    Thanks a lot,
    Floh

    ST3750640 NA P
    Revision 3.BTF
    It is a Seagate.

  • Dropped frames on Xserve RAID upgraded w/750GB drives.

    I'm capturing in 10-bit uncompressed standard definition 720x480. I just upgraded one side of my Apple XRAID (connected directly via Fibre Channel) with seven 750GB hard drives from Apple, drives designed for the XRAID (they came with their own sleds).
    Current Xserve RAID firmware is version 1.5.1/1.51c
    Before the upgraded drives there were no issues with dropped frames (drives 1-7); the other side of the same RAID has no problems (currently running 250GB drives, drives 8-14). All sleds are populated with drives of the same capacity (respective to each side).
    The drives were formatted using the Xserve RAID Admin software and mounted using Apple's Disk Utility, with journaling turned off. The final formatted capacity of the RAID on drives 1-7 is 4.1TB.
    When I capture to the drives on the upgrade side, I get dropped frames. When capturing to the other side, no issues, works great. I checked the settings for performance and they are the same for both sides, as follows:
    Use Controller Write Cache - ON
    Allow Host Cache Flushing - OFF
    Use Drive Write Cache - ON
    Use Steady Streaming Mode - OFF
    Read Prefetch: 128 stripes (8 MB/disk)
    My capture box is an AJA Io and has never given me dropped frames till now, but only on the side of the RAID that has the new 750GB drives. I also have another RAID that uses 500GB drives on both sides and it works just fine as well (no dropped frames) using the same performance settings in the XRAID utilities.
    Any suggestion on why these new Apple drive modules are dropping frames would be very much appreciated. Again: seven new 750GB drives from Apple w/sleds having dropped-frame issues.

    I'm capturing in 10-bit to avoid loss of quality from analog sources like BetaCam SP, and line 21 must be part of the captured file because it contains the captioning data. Eight-bit has a banding issue that I can't have in the final product; 10-bit doesn't have this issue.
    Source quality is maintained completely when working in 10-bit uncompressed, as is the captioning data that I need, which is stripped out of the captured video later. Captioning software can read line 21 of the raster area of the captured file.
    Source video is from BetaCam SP, DVC Pro and DVCam.
    The RAID is an Apple Xserve RAID, connected via Fibre channel directly, (no xsan.)
    I also create high-quality QuickTime Streaming files from these source files; the higher the quality of the source video, the better the streaming files look when created.
    I have never had any problem capturing till I installed these new drives from Apple.
    The RAID, as stated above, is an Apple Xserve RAID, connected via a Fibre Channel card in a Mac Pro 3.0GHz tower (no Xsan is used, since it connects directly to the XRAID).
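For what it's worth, the bandwidth numbers in this thread check out. A quick back-of-envelope (the frame geometry and 4:2:2 packing are assumptions about the capture format, so adjust to taste):

```python
# Sustained data rate for 10-bit uncompressed SD capture.
# Assumptions: 720x480 frame, 4:2:2 chroma subsampling (2 samples per
# pixel), 10 bits per sample packed, 29.97 fps NTSC.
width, height = 720, 480
samples_per_pixel = 2
bits_per_sample = 10
fps = 29.97

bytes_per_frame = width * height * samples_per_pixel * bits_per_sample / 8
mb_per_sec = bytes_per_frame * fps / 1e6
print(f"capture rate: ~{mb_per_sec:.1f} MB/s")      # roughly 26 MB/s

# Usable space of a 7-drive RAID 5 set of 750 GB disks: one drive's worth
# goes to parity, so 6 x 750 GB, which is close to the reported 4.1TB
# formatted capacity.
usable_bytes = (7 - 1) * 750e9
print(f"usable: ~{usable_bytes / 2**40:.1f} TiB")
```

Either side of an Xserve RAID should sustain ~26 MB/s with plenty of headroom, which suggests the dropped frames come from the new drive modules or a firmware interaction rather than raw bandwidth.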

  • Upgrading 250GB drives to 750GB drives

    We have an XServe RAID that is a few years old. It has 8 drive modules with 250GB drives and 6 blanks. We'd like to upgrade all available slots to 750GB drives.
    Can I fill the blanks with 750GB drive modules and fill the existing modules with new 750GB drives and rebuild from there? I have read that I need to update the firmware to recognize the larger capacity. What about the controller module and fiber card. Will it work with the existing pieces or will I need to upgrade those as well?
    Any other suggestions are welcome.
    Thanks!
    Stephe.
    XServe G5 and Intel
    6 Cluster Nodes
    2 RAIDS
    Bunch o' G5s and Mac Pros
    HD Video, 3D Animation and Web Production House

    Camelot wrote:
    Any advice on types and brands for the drives?
    Apple.
    Apple doesn't manufacture drives. Last I checked, only Seagate makes 750GB PATA drives; everyone else in the business seems to make only SATA drives that size or bigger.
    Unfortunately nobody makes 1TB drives that are PATA, so it looks like the current crop of drives marks the end of the line for the Xserve RAID in terms of capacity.
    While I have heard of people who've replaced the bundled drives with larger ones, with some success,
    the blank drive bays have simple blanking plates - i.e., they do not have the electronics required to
    actually support a drive.
    Apple does not sell empty drive modules (with the electronics, but no drive), which makes it difficult
    to expand the array other than by getting drive modules from Apple.
    Correct. However, if you upgrade from a system with 14 250GB drives, you have all the mounting hardware you need, you just have to swap the drives in the carriers.
    If you just bought new drive modules from Apple for the current blanks, and opted to manually
    upgrade the existing drives on your own, you'd end up with mixed drives (vendor, models and
    firmware) which, at best, will result in degraded performance.
    This has not been an issue in the past. After all, when you have a drive failure, you often end up with different drive models and firmware versions, too. E.g. Hitachi discontinued the 250GB drive model that was originally in the Xserve RAID, and replaced it with a different 250GB drive in their lineup.
    You had no choice but to mix models if a single drive failure required swapping drives. (To make matters worse, the new drive, while faster, was a tiny bit smaller, which really caused headaches!)
    Even though the drives sourced from Apple are more expensive than you can find similarly-sized
    drives elsewhere, you get server-grade drives, burned in, with known/standard firmware, and
    vendor support by taking the Apple route.
    Sorry, that's FUD without basis. No drive manufacturer runs a special assembly line for "Apple grade" drives. Worse, if you buy a drive module from Apple, you have at best a 1-year warranty. Buy the drives from Hitachi or Seagate directly, and you have a three or five year warranty.
    If you try to use the manufacturer's warranty on Apple-sourced drives, the manufacturers will not honor it, because the drives are OEM drives sold at a big discount to Apple, which means they carry no warranty beyond what Apple gives.
    What you get for the money, both in absolute price and in terms of warranty, is much better if you buy bare drives straight from the manufacturer than if you go with Apple's drives.
    Without knowing your business I can't tell you what to do, but for me I can't bear the thought of
    telling my clients that I lost their data because I opted to save $100 on a few hard drives.
    If you're selling to clients, of course, Apple drives are better: there's more profit in them, because they are not a commodity like bare-bones drives, but a brand-name special-order item.
    (That's not to say that third-party drives are necessarily less reliable than those sourced from
    Apple; more that being able to say you had a problem even though you were using vendor-supported
    and vendor-provided parts goes a long way.)
    Apple's support for drives that fail after a year is non-existent, unless you buy the $999 AppleCare contract - but for that money you can buy quite a few replacement drives, and it's still only 3 years of coverage, vs. 5 years for drives bought direct from the manufacturer...
    Obviously Apple needs to make a profit, and inventory, logistics, etc. for parts are expensive, so you pay in essence for the convenience of getting a module you can just shove into the unit and be done. No need to mess with screwdrivers, etc.
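In concrete terms (the $180 street price here is purely a hypothetical placeholder; substitute a real quote for your drive model):

```python
# Warranty economics: a $999 AppleCare contract vs. simply stocking spare
# bare drives bought direct from the manufacturer.
applecare = 999
bare_drive = 180          # hypothetical street price per 750GB drive
spares = applecare // bare_drive
print(spares)             # 5 -- "quite a few replacement drives" for the same money
```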
    Ronald

  • After upgrade of hard drives in RAID,  XSAN Admin sees wrong lun sizes?

    So I purchased new 750GB drives for our Xserve RAIDs and upgraded the RAID firmware to 1.5.1c, but Xsan Admin sees the wrong LUN sizes. I have tried running Xsan 1.4.2 and even uninstalled it on the MDC and went back to 1.4.1, still no luck. The weird thing is that when I uninstall and reinstall Xsan, the LUN names are still there in Xsan Admin; it is like they are stored in a file that does not get removed when I uninstall Xsan.
    I even upgraded the firmware on the RAIDs again after installing the new drives; the only drives that did not get upgraded were the 2 x 250GB set used for metadata on the first RAID controller. I am running this on a non-Intel Xserve with OS X Server 10.4.11. I did not uninstall Xsan from any of the clients, only the backup MDC.
    I hope this info helps, because I could really use some tips from all you masters.
    Thanks in advance
    Jesse

    OK, so I am a dumb *. All I had to do to fix my issue was delete the actual names of the LUNs in Xsan Admin and hit Enter, and it updated the LUN sizes. Thanks everyone.

  • SSHD Hybrid drives and Raid 0 for Premiere

    I am looking for a little support and help as I would like to expand both the capacity and speed of my current editing setup.
    I intend on using 2x Momentus XT Hybrid SSHD 750GB to give me 1.5TB of very fast storage, and buying a cheap 2TB drive to manually make backups.
    I edit mainly DSLR footage in CS5.5, but I've noticed a large slowdown in applying effects and rendering as my current 1TB drive fills up.
    I've looked around the internet for some answers specifically about this scenario, and whether it would suit to enhance video editing.
    I am currently using windows 7 home premium.
    Gigabyte x58a-ud5
    intel i7 920
    12gb corsair ram
    gtx 470
    I am currently using:
    C:128 Corsair ssd force 3
    D:Seagate Barracuda 7200
    C:OS,Pagefile,Premiere and programs, media cache
    D:Everything else
    Thanks

    I did a quick render test. It renders okay when I am applying basic colour correction. The CPU is going between 20% and peaking at 80%, so this would indicate that the CPU is maybe not the bottleneck currently.
    The RAM is using 6GB at max, of the 9GB made available for use by Premiere in the preferences.
    I will definitely clean up my drives and defrag very soon, as I am finishing off a short film project where I have been using 10-bit uncompressed footage. That should free up some space!
    I am interested in your other suggestion about getting the HDD drives and RAID 0. I have done a little research recently on the forums, and had a look at the optimum disk configuration for various systems depending on the number of disks available. Now, my understanding is that you want to try and spread the load over a number of disks. I am noticing that a few people are using SSDs, and submitting these as suggestions for disk configurations.
    I am considering the idea of buying a couple of small-capacity SSDs (maybe with the Marvell controller, which Harm I believe mentioned is more robust than SandForce) in RAID 0. I would use this for my media cache/render preview/pagefile section. Even 2x 60GB SSDs striped to provide 120GB would be more than enough for my standard workflow. Also, if the array goes down, it's just temporary cache anyway, right?
    Why would I use a 1TB 7200rpm drive when I could go this route instead? I know the 1TB drive would be cheaper, but isn't speed of the essence here? I read a comment where Harm mentioned that "most speed requirements is for scratch, temp and pagefile". I think with a bit of housekeeping I can delete old render previews, media cache and database entries, and have plenty of room left over with the kind of projects I work on. I'm using around 60GB for that section right now.
    I am thinking now of ditching the hybrid SSHD plan and going for a low-capacity 2x SSD RAID 0, plus a 3TB drive as an archive to free up my 1TB drive for footage, exports, and maybe projects.
    I am interested to hear whether I am being silly to bother striping 2 SSDs with my modest setup. Will my motherboard bottleneck somewhere first?
    I could just get another couple of 1TB drives, but I feel like I want to challenge that approach, as SSD technology and prices have come a long way. SSDs support TRIM in RAID 0, so what's stopping us?
    Also, I may point out that I am looking to upgrade at some point the whole architecture of my PC.  I am thinking about the GTX 760 in the near future.
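A rough sketch of the scratch-volume proposal (all throughput figures are hypothetical SATA-era ballparks, not benchmarks of any specific drive or of the X58 chipset):

```python
# Sizing the proposed 2 x 60 GB SSD RAID 0 scratch volume.
n = 2
ssd_gb = 60
ssd_mb_s = 250                  # assumed sequential rate of one SATA-era SSD
capacity = n * ssd_gb           # 120 GB for media cache / previews / pagefile
stripe_mb_s = n * ssd_mb_s      # ~500 MB/s, if the chipset's controller keeps up
hdd_mb_s = 120                  # assumed rate of one 7200 rpm drive, for comparison
print(capacity, stripe_mb_s)
```

The stripe should be several times faster than a single spinner for cache traffic, though the motherboard's SATA controller may cap the aggregate before the drives do.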

  • Upgrading from 400GB to 750GB Drives

    We're looking at upgrading our RAID to use all 750GB drives. Right now we have 3 separate RAIDs in the same Xserve RAID with 750s, 400s, and 250s. The 400s and 750s are LUNs in an Xsan and the 250s are a separate local hard drive storage.
    What would be the best way to get rid of the 250s and 400s and migrate them all to 750s?

    Well I guess my question was more pointed not so much at hardware purchasing but at how can I expand my 750GB LUN while reducing the size of my 400GB LUN. I can't really copy the data off of the 400GB LUN directly since it's part of an Xsan and data could be split amongst it and the 750.
    Basically I want to go from this:
    <pre>
    ( === RAID 5 ==== )( ====== RAID 5 ====== )
    | 400 | 400 | 400 | 750 | 750 | 750 | 750 |</pre>
    to this:
    <pre>
    ( =============== RAID 5 ================ )
    | 750 | 750 | 750 | 750 | 750 | 750 | 750 |
    </pre>
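The capacity math for that migration (sizes in GB, ignoring formatting overhead):

```python
# RAID 5 keeps (n - 1) of n drives' worth of space; the rest is parity.
def raid5_usable(n, size_gb):
    return (n - 1) * size_gb

before = raid5_usable(3, 400) + raid5_usable(4, 750)   # two separate RAID 5 sets
after = raid5_usable(7, 750)                           # one 7-drive RAID 5 set
print(before, after)                                   # 3050 GB before, 4500 GB after
```

As far as I know the Xserve RAID can't merge or reshape RAID 5 sets in place, so the usual route is to back everything up, build the new 7-drive set, and restore.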

  • DIY Fusion Drive and RAID 5

    Hi everyone!
    I have spent several hours by reading various forums but haven’t found any definitive answers.
    I have a 12 Core Mac Pro with the following setup: one 1TB SATA hard drive that carries the system and applications. For the files and storage there are three 2TB SATA drives in RAID 5 controlled by Apple RAID card. I am going to install a 512 GB Samsung 840 Pro SSD drive in the optical bay and have initially planned to use it just for the system and applications, but am curious if the following is possible.
    1) Is it possible to combine the RAID 5 array with the SSD and create a Fusion drive?
    2) If yes, will it retain all the features of the RAID 5?
    3) Should TRIM be enabled?
    Thank you in advance!

    TRIM directly addresses the shortcomings of having only garbage collection available. SSD controller manufacturers and designers (including SandForce, the controller manufacturer for OWC's SSDs) recommend that TRIM be used with their products. So does Samsung.
    For example, here's a 2011 article from OWC describing how you don't need TRIM on their SSDs and how it can in fact hurt performance or reliability.
    That article has been discussed here on MacInTouch before. In my opinion it's bad advice, and inaccurate in some of its assertions. It also ignores the recommendation made by SandForce to use TRIM with their SSD controllers. But even if one were to take that article at face value, applying that advice to SSDs other than OWC's makes little sense.
    The reason I'm advising against TRIM is simply that it's yet another driver-level modification of the OS, and these always carry potential risk (as all the folks with WD hard drives who lost data can attest to).
    Apples and oranges comparison, for a variety of reasons. The short of it is that TRIM is supported natively in all recent versions of OS X. The tools used to enable it for third party SSDs do not add a new kernel extension; they change the setting to allow Apple's native TRIM implementation to be used with SSDs other than those factory installed by Apple.
    This shows that the 840s do work slightly better with TRIM than without, but the differences are (in my opinion) trivial, a 9% increase at best.
    One of the major reasons for the skepticism that exists about TRIM is that so many people, the authors of both articles you linked to included, don't understand it.
    TRIM is not, strictly speaking, a performance-enhancement technology -- though it is plainly obvious that most people think it is.
    Though it can, in many circumstances, improve performance, there are also circumstances under which it will provide little or no noticeable benefit. Not coincidentally, a new SSD tested fresh out of the factory packaging is unlikely to show much (if any) benefit. Or rather, TRIM is providing a real benefit for new SSDs, but that benefit doesn't become measurable in terms of benchmark performance testing until every memory cell in the SSD -- including many gigabytes of cells hidden from visibility by the SSD controller -- has been written to at least once. Writing 128 GB of files to an SSD with a nominal capacity of 128 GB won't do it, as there are several gigabytes (the exact number varies depending on the model) still unwritten.
    Under real-world use conditions, having TRIM disabled means eventually having noticeable write performance degradation due to write amplification. It is far greater than "9%" -- it can be a 50% or greater drop in write performance, depending on various factors. Defining "eventually" is difficult because it depends on how the SSD is used. But given enough time and write cycles, it can happen to all SSDs used without TRIM, no matter how sophisticated their garbage collection algorithms are.
    Under those same real-world use conditions, having TRIM enabled means that the SSD should almost never reach a state of having noticeable write performance degradation, as it should almost never get into a state where write amplification is happening.
    I will concede that it is possible to design a lab test in such a way as to defeat the benefits provided by TRIM, but such tests do not reflect any real-world usage scenario I can imagine. Furthermore, those same contrived tests would put an un-TRIMmed drive into an equally-addled state even more quickly.
    I would suggest reading through the rather lengthy previous discussions about TRIM. Here are a couple of my past posts that are most relevant to the current discussion:
    A description of what TRIM is here.
    I addressed some of OWC Larry's comments about TRIM use with OWC/SandForce SSDs here.
    http://www.macintouch.com/readerreports/harddrives/index.html#d09dec2013
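The write-amplification point is easier to see with a toy model. Below is a deliberately simplified greedy-GC flash-translation-layer simulation (invented parameters, not any real controller's behavior): half the logical space is written once and then deleted. With TRIM the drive learns those pages are dead; without TRIM it keeps copying them during garbage collection, inflating physical writes.

```python
import random

PAGES_PER_BLOCK = 32
NUM_BLOCKS = 32
PHYSICAL_PAGES = NUM_BLOCKS * PAGES_PER_BLOCK
LOGICAL_PAGES = int(PHYSICAL_PAGES * 0.875)      # ~12.5% over-provisioning

def simulate(trim_enabled, overwrites=10_000, seed=1):
    """Steady-state write amplification of a toy greedy-GC SSD."""
    rnd = random.Random(seed)
    blocks = [[None] * PAGES_PER_BLOCK for _ in range(NUM_BLOCKS)]
    free = list(range(NUM_BLOCKS))
    mapping = {}                                  # logical page -> (block, page)
    state = {"cur": free.pop(), "wp": 0, "phys": 0}

    def is_valid(b, i):
        lpn = blocks[b][i]
        return lpn is not None and mapping.get(lpn) == (b, i)

    def gc():
        # Erase the block with the fewest valid pages, relocating
        # (re-writing!) its surviving pages first.
        victims = [b for b in range(NUM_BLOCKS)
                   if b != state["cur"] and b not in free]
        victim = min(victims, key=lambda b: sum(
            is_valid(b, i) for i in range(PAGES_PER_BLOCK)))
        survivors = [blocks[victim][i]
                     for i in range(PAGES_PER_BLOCK) if is_valid(victim, i)]
        blocks[victim] = [None] * PAGES_PER_BLOCK
        free.append(victim)
        for lpn in survivors:
            program(lpn)                          # GC copies inflate physical writes

    def program(lpn):
        while state["wp"] == PAGES_PER_BLOCK:
            if free:
                state["cur"], state["wp"] = free.pop(), 0
            else:
                gc()
        blocks[state["cur"]][state["wp"]] = lpn
        mapping[lpn] = (state["cur"], state["wp"])
        state["wp"] += 1
        state["phys"] += 1

    # Fill the logical space once, then "delete" half of it.
    for lpn in range(LOGICAL_PAGES):
        program(lpn)
    live = [lpn for lpn in range(LOGICAL_PAGES) if lpn % 2]
    if trim_enabled:
        for lpn in range(0, LOGICAL_PAGES, 2):
            del mapping[lpn]                      # TRIM: drive learns these are dead

    # Steady state: random overwrites of the live half only.
    start = state["phys"]
    for _ in range(overwrites):
        program(rnd.choice(live))
    return (state["phys"] - start) / overwrites

wa_on, wa_off = simulate(True), simulate(False)
print(f"write amplification with TRIM:    {wa_on:.2f}")
print(f"write amplification without TRIM: {wa_off:.2f}")
```

On this toy model the no-TRIM case shows markedly higher amplification, which illustrates why real-world degradation can far exceed the 9% benchmark delta mentioned above.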

  • Imac 24 aluminum with 750GB drive freezing

    I have a new 24-inch iMac - 3GB, 2.8GHz, 750GB drive. I have 5 other C2D systems. The new one locks up from time to time - odd, as the mouse moves but nothing else. I have to remove the power cord; sometimes the button won't do it either.
    I don't think it's the drive - I have heard it's the dog of an ATI video card that is doing it. The drivers are painfully slow compared to my prior-generation 24-inch iMac with the Nvidia 7600; not only are they slow, it also looks like they are buggy. Why would they go to a slower and buggier graphics chip? Macs should be better in this area and this is a step back.

    I had the same problem; I just purchased a new 2.8 Extreme 750GB iMac and it froze up on me!!
    The mouse pointer would work and the dock would enlarge, but I could not select anything....
    Now the scroll wheel (scroll down) has packed up on my wireless mouse... arrrghhh!!
    COME ON APPLE!!

  • Using two external USB drives in RAID 1?

    Is this acceptable? I initially set it up and copied files over, only to see one slice degraded. Now I've been trying to rebuild the RAID, but it's taking 40+ hours. Seems it would be easier/less time-consuming to delete and recreate the RAID and recopy the files.
    But I wonder if this is an acceptable practice or even supported (many posts say IDE or SATA drives).

    Check out the Linksys articles below. If RAID 1 is not running properly, the RAID status will indicate "degraded" and the disk health status "failed".
    Checking the Status of Hard Drives
    Creating RAID 1 on the Network Media Hub
    Cheers.
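On the 40+ hour rebuild: rebuild time is roughly drive capacity divided by effective bus throughput. A hedged estimate (all figures are hypothetical; substitute your actual drive size and measured USB speed):

```python
# Why a USB RAID 1 rebuild can take 40+ hours: the rebuild must read the
# whole source disk and write the whole target disk over the same bus.
drive_gb = 1000
usb2_mb_s = 30                      # assumed realistic USB 2.0 rate for ONE drive
# Two drives sharing one USB 2.0 bus roughly halve the per-drive rate,
# and rebuild traffic is read + write:
effective_mb_s = usb2_mb_s / 2 / 2
hours = drive_gb * 1000 / effective_mb_s / 3600
print(round(hours, 1))              # ~37 hours -- in line with the 40+ hours observed
```

So the long rebuild isn't necessarily a sign of failure; USB 2.0 is simply a slow transport for mirroring whole disks.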

  • 2007 Jan 27th - UK - I need to add 3 more 750gb drives to my Mac Pro

    Hi
    Anyone have a good feeling for the best 3 750gb drives to add to my initial 500gb Hitachi HDS725050KLA360 that is fitted as standard.
    I have heard that Seagate Barracudas have some problems.
    Any concrete benchmarks or actual users war stories would help.
    PS My initial drive is a bit 'noisy'...
    cheers

    The Apple OEM Hitachi has been noted as being noisy from day one. I would stick with Hitachi, MaxLine Pro or WD RE2 to get where you want, i.e., 2TB of storage.
    Barefeats did a "how to stuff 20 SATA drives in a Mac Pro" piece recently. Otherwise, invest in an external port-multiplier or InfiniBand controller + drive enclosures.

  • Formatting a Hard Drive for RAID 5

    Hey everyone,
    I'm a bit new to RAID formatting. I understand how it works, but I want to make sure I'm formatting correctly.
    We are looking to format a 12TB drive for RAID 5 for optimal space, speed and safety.
    In Disk Utility, I am selecting 5 partitions, but how do I format EACH partition? Should they all be Mac OS Extended (Journaled)?
    The manual that came with my drive (OWC Mercury Elite-AL Pro Qx2) wasn't much of a help. Maybe you are!
    Thanks in advance

    By the way, we are attempting to format RAID 5 across 4 disks.
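A note on the setup, plus the capacity math (assuming the "12TB drive" is four 3TB disks, per the 4-disk remark; check your actual configuration). If the Qx2's own hardware handles the RAID, it builds RAID 5 across whole disks, so in Disk Utility you would normally format the single volume the enclosure presents (Mac OS Extended (Journaled) is a reasonable choice) rather than creating five partitions:

```python
# RAID 5 usable space: one disk's worth of capacity goes to parity.
disks = 4
disk_tb = 3                      # 4 x 3 TB = 12 TB raw
usable_tb = (disks - 1) * disk_tb
print(usable_tb)                 # 9 TB usable out of 12 TB raw
```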

  • Problems with using 3tb drives in RAID 1

    I'm using an MSi 870A-G54 mainboard
    Without going into major specifics at this moment here is the basic scenario:
    Added 2 new Seagate Barracuda 3TB drives in RAID 1 mode
    Updated the BIOS (AMI) to 7.20
    Updated Windows 7 64-bit with the latest drivers
    Updated RAID Xpert to the latest version
    Everything worked. Added my data (movies and music); all was well with the world.
    Then, after a reboot, Windows no longer recognizes the drives, nor does RAID Xpert; however, within the RAID config (BIOS) the drives show as just fine and the RAID configuration is configured correctly.
    Anyone have any idea how I can retrieve my data from these drives (most important) and get it working again (important, but less than just getting the data recovered)?
    Thus far I haven't a clue what changed. There were no Windows, BIOS, configuration or other updates.
    oh what fun it is!!!

    Thanks for the info.. but...
    The motherboard recognized the drives (after upgrading the BIOS to 17.20)
    I managed to get it working (again!) last night. :-D
    I had to do some major upgrading by way of AMD (upgraded RAID drivers, chipset drivers, and AMD RAID Xpert) that were not available via the MSi site, but are directly compatible with the chipset I'm using... or so far appear to be. Too early to tell.
    So far, everything seems stable..
    Odds are I tripped over something, or missed a step during the initial set-up, when I was figuring out what needed to be done to get the drives recognized by the MB, and then the RAID, and then the OS.
    Right now I'm backing up all my data and, after verifying that each drive in fact contains data, I'm planning on breaking the mirror via RAID Xpert, then seeing if I need to break it as well via the RAID console (this would confirm whether the Windows-based utility in fact has correct connectivity and control over the MB RAID config). Then, do a low-level format of the drives and start over.
    By the way, just so anyone reading this knows: the Live Update 5 utility is weak... very, very weak. I pity the person with minimal PC experience trying to figure out what it's doing (or not doing).
    I'll post the final results of this adventure when I finalize the entire setup in about a month. :-D
     

  • How do I remove one failing hard drive from raid set and replace with new one

    Last Friday apparently one of my raid drives started failing.
    As I mentioned on this forum, I started getting continuous beeping.
    I was finally able to get the raid working at a degraded level.  I ordered a replacement hard drive which is arriving today.
    (In the meantime I made twice daily backups of my work.....)
    Below was the message I got from the browser based raid software:
    Blahblah 09    1000.2GB    RaidSet Member    SamSung HD103SJ
    Blahblah 10    1000.2GB    Free              SamSung HD103SJ
    Blahblah 11    1000.2GB    RaidSet Member    SamSung HD103SJ
    Blahblah 12    1000.2GB    RaidSet Member    SamSung HD103SJ
    (See this earlier thread if you wish!)
    http://forums.adobe.com/thread/727867?tstart=0
    At one point when I checked the browser interface I saw the messages Failed and Degraded.
    As I said, I was able to work over the weekend on the degraded system.
    This morning I got the beeping again and did the rescue and now I am running a "full raid" without the notice that one raid was "Free".
    In any case, the new hard drive is arriving today.
    What steps should I take to incorporate the new drive into the RAID system?
    I have one OS drive
    and four 1TB RAID drives. One needs to be replaced with the new one that I am getting today.
    Thanks
    Rowby

    Harm,
    Regarding your comment:
    Re: How do I remove one failing hard drive from raid set and replace with new one
    Please tell me how to read the serial number from an individual drive rather easily, without uninstalling them:
    If you select the proper drive to change out, you only need to remove one drive and look at its serial number...
    Step 1: Identify bad drive serial number using Areca's tools
    Step 2: Turn off the computer
    Step 3: Remove what you think is the bad drive based on following your numbered cable method, marked hot-swap bays, whatever
    Step 4: Verify that the serial number matches the "bad drive" serial number from step 1; if it does great, proceed; if it does not match, go back to step 3
    Step 5: Change out the CORRECT drive - that's the bottom line for this whole procedure
    Cheers,
    Jim

  • Does the NMH300 report a broken hard drive when using two drives in RAID 1?

    Dear Support,
    I'm about to purchase an NMH300 and wonder if that unit will report if one drive breaks down in a RAID 1 setup?
    If so, how do you find out? Is there a visual indication on the actual unit or some kind of error message in software?
    Thanks
    Kind regards
    Tom

    Check out the Linksys articles below. If RAID 1 is not running properly, the RAID status will indicate "degraded" and the disk health status "failed".
    Checking the Status of Hard Drives
    Creating RAID 1 on the Network Media Hub
    Cheers.
