Recommendations on a hardware RAID 10 card

We'd like it to eventually handle all 24 bays, but we're only planning on using 8 drives to start. Since each SFF-8087 connector handles 4 drives, we'd prefer a card with 6 ports, but could do 2 or 4 if there's a huge price difference.

We're looking to replace our old SAN with an internal card running RAID 10 and having multiple SFF-8087 connectors. We're planning on putting it in a Norco 24-drive case and slowly expanding. Does anyone have recommendations for a good hardware card with onboard RAM? Also, will I be able to add additional drives to the RAID 10 in the future?
This topic first appeared in the Spiceworks Community

Similar Messages

  • Software or Hardware RAID for LVM

    Well, I've been looking into setting up a RAID for my home server.  I've been trying to decide between a software RAID and buying a relatively cheap SATA controller and using it for a hardware RAID.  I'm trying to figure out the pros and cons of buying a cheap hardware card rather than simply using a software solution.
    Are there any negatives to using LVM with either setup versus the other?  Have software solutions become decent enough to be relied on?
    If you recommend a hardware solution, a card suggestion would be appreciated.  Preferably something under $50, if that's even possible for a semi-reliable card.
    I'm also curious as to whether there would be an issue with software raid if it is run across multiple SATA controllers.  Can you even run a single hardware RAID using 2 separate controllers?
    Appreciate any advice!
    EDIT: I'd also appreciate any information on the processor overhead of running a software RAID 5 with 5-6 disks.  The home server is just an old Core2 and doesn't have all that much power to it.
    Last edited by nedlinin (2012-01-05 00:35:03)

    Anntoin wrote:
    Stick with the software RAID, more flexible and probably more reliable. Hardware RAID only - arguably - becomes worth it if you have fancy stuff like a battery backed write cache, etc...
    I'd avoid using RAID 5 and go for something like RAID 1+0, look up 'RAID 5 write hole'.
    The processor overhead should be quite low even on a Core2.
    The wiki has a bunch of info on RAID and LVM:
    e.g. https://wiki.archlinux.org/index.php/So … ID_and_LVM
    Appreciate the info.  I've actually been reading over the RAID/LVM wiki over the past couple of days, making sure I understand what needs to be done.
    As for avoiding RAID 5, I was specifically choosing it because it allows for some redundancy while still giving me a large amount of usable space.  I will be using 6x2TB drives in the array; RAID 1 would give me only half of that usable, while RAID 5 gives me 10TB usable.  Right now each of the drives is on its own, so even with the write-hole issue I'd assume I'd be better off with RAID 5 than simply keeping the drives separate, no?
    I don't simply want to put all 6 drives into one LV, as I'd be worried about one failing and losing more data than if each drive were on its own.
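    For reference, since the reply above recommends RAID 1+0 over RAID 5, here is a minimal sketch of what that looks like as Linux software RAID with LVM layered on top. This is not the poster's actual setup; the device names (sdb-sdg) and the volume group / logical volume names are placeholders.
    # create a 6-disk RAID 10 array with mdadm
    mdadm --create /dev/md0 --level=10 --raid-devices=6 /dev/sd[b-g]
    # put LVM on top of the array so space can be carved up or resized later
    pvcreate /dev/md0
    vgcreate vg_storage /dev/md0
    lvcreate -l 100%FREE -n lv_data vg_storage
    mkfs.ext4 /dev/vg_storage/lv_data
    # record the array so it is assembled automatically at boot
    mdadm --detail --scan >> /etc/mdadm.conf
    RAID 10 involves no parity math, so its CPU cost is negligible even on an old Core2; software RAID 5/6 costs somewhat more because of the parity calculations, though it is still usually modest.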

  • Using a Mac Pro w / Apple Internal Hardware RAID Card?

    Anyone using a Mac Pro with an Apple Hardware RAID Card (2010)?
    ( I have a 12 Core )
    Is it worth the $600-700 ?
    How much faster is it than software RAID 0?
    I see Harm's RAID tips chart, but it doesn't include such options, plus it is based on a PC system.
    Any tips would be great ..
    I find many areas where the application is slow / unresponsive .. and I'm not sure where the bottleneck is ... Maybe the hardware RAID will solve it?
    I really am impressed with CS5. It would be even better with some adjustments, but I am absolutely disappointed with the sluggish performance on my top-of-the-line Mac Pro 12 Core.
    This is not what I expected for a $8,000 machine.
    Some users say that the Mac port of CS5 has taken a back seat to the Windows version at Adobe.
    I can't imagine that Adobe would not put 100% effort into Mac products.
    Go Team!!!

    I have a 2009 Mac Pro 3.33GHz Quad core w/ Apple RAID card, 16GB RAM from OWC, and Apple's Radeon HD 5870 GPU.
    I have RAID 0 set across 3x1TB drives internally, and the standard 640GB drive for OSX and all program files. I set all video assets, renders, previews and such on the 3TB RAID.
    This seems to work wonderfully. I built this system specifically to edit a feature film shot on P2 DVCProHD, and I've been impressed with how it handles it. This was all built prior to CS5, which took me by surprise. Had I known nVidia would become such a problem for Apple, I would have built a PC, but that's another story.
    I just started a new project in CS5 on the same system, this time using H.264 video from my Nikon D7000, and so far, it seems to play just as nicely, despite not having hardware acceleration via CUDA technology. Yellow bars on top, even. I haven't had any problems with clips taking a long time to populate on the timeline or any of that, so perhaps the RAID card helps there.
    All this aside, I've already decided to upgrade my RAID for another reason. Right now, my backup is performed via an eSATA-connected external drive through a PCI eSATA card. After every edit session, I dump everything on the RAID onto the external drive, and it goes much faster than the old FW800 transfer used to. I'm about to replace my Apple RAID card with an Areca card and set up a 4-bay RAID 3 via an SAS connection. This will allow for excellent data throughput while offering more security than my current RAID 0 / manual backup system, and free up the internal drives for backups, exports and render files.
    I believe in hardware RAID, but I'm not as knowledgeable as Harm and others are about it. I had my Mac built to order with the Apple RAID card, so I have no experience using Premiere with a software RAID. Due to my smooth experience using it, I think it was worth it, but plenty of people say the Apple RAID card is rubbish, and to go with Areca or Atto cards. I didn't know about them until after I built my system, and even though it will cost a couple thousand to upgrade my RAID from this point, I expect to have an even better system than I already have.
    I hope this helps, and feel free to ask any questions I didn't address.

  • Wanted: Apple Hardware RAID Card for Xserve G5

    I have some questions regarding the original internal Apple Hardware RAID Card for an Xserve G5.
    1) Does it support native hardware RAID level 5?
    2) Does anyone know where I can find one of these cards? I can't even find them on eBay.
    3) I heard this card is based on an LSI board. Which model? Has anyone had success using the non-Apple version of the same model board in an Xserve G5 with internal Apple Drive Modules?
    4) How do you hook up this card? Do I put it in the PCI slot and move the SATA cables to it or does the presence of the card just automagically take over the drives?
    Thanks for all your help!

    I heard this card is based on an LSI board. Which model?
    Camelot is correct that this card is based on (is) the LSI Logic MegaRAID SATA 150-4 card:
    http://www.lsi.com/storagehome/products_home/internal_raid/megaraid_sata/megaraid_sata1504/index.html
    However, it has custom Apple Firmware to support booting by a PPC Xserve G5 rather than a PC. Apple also provides a ported version of the LSI Logic "megaraid" program to manage the card.
    There is a bug (well, at least one) in the Apple Firmware version that was fixed by LSI Logic in its PC version subsequent to Apple's branch off of the LSI Logic code tree for the firmware. Specifically, the card sometimes doesn't fully flush its write caches before it disconnects from the drive buses on graceful power down (doesn't happen on restart). Only known workaround is to turn off the write caches for all LUNs. I turned in a RADAR report (RADAR ID 4350243) under our support agreement back in November 2005, but no action since. I follow up every so often, but clearly this EOL product is not a priority.
    Has anyone had success using the non-Apple version of the same model board in an Xserve G5 with internal Apple Drive Modules?
    It won't work without the Apple firmware, and Apple doesn't distribute the firmware. Been there, done that. Our Apple Hardware RAID card failed in an odd way shortly after installation - it still functioned as a RAID controller once booted from another drive, but you couldn't boot from it. I believe that the firmware became corrupted, and that it could have been fixed by flashing the firmware using the "megaraid" program, but Apple Support didn't have a file to flash it with, so it was handled by an RMA exchange.
    RAID 5 performance is degraded with the write caches off, but our load is not heavy. I guess you could script the turning off and flushing of the write caches prior to shutdown, and turn them back on again during the boot sequence, but you'd really have to be careful to handle things like restart to CD boot, etc. We value our data more than speed.
    This was a very difficult bug to troubleshoot, because the RAID 5 made the mystery garbage blocks from space very hard to reproduce. I finally came up with a repeatable test case. Fortunately, it was discovered while testing the Xserve prior to deployment, so no valuable data was lost.
    Russ

  • Hardware raid card in a G5 tower

    Does anyone know if there are any problems using the Hardware RAID card (661-3174 / M9699G/A) in a regular G5 tower instead of an Xserve?
    Thanks,
    Scott

    The G5 tower has two designated 3.5-inch drive bays, although you may be able to stuff a couple more drives into out-of-the-way places.
    You will need to run cables from wherever the drives are installed to the connectors on the RAID card. Keeping them out of the way may be a challenge.
    The Apple card requires Mac OS X Server 10.3.4 or higher. No mention is made of whether there are any problems with Tiger (10.4) or Leopard (10.5) Server.
    You will need to back up or image the existing drives prior to creating the RAID.
    You may want to take a look at the LSI MegaRAID controllers. Word on the street is that the Apple Xserve G5 RAID card was just a rebranded LSI with Apple firmware and drivers, but the product documentation may give you some ideas on the path ahead.
    Good Luck

  • Xserve G5 Hardware Raid Cable - Cable, SATA RAID Card

    Anyone know where I can pick up 3 of the hardware RAID cables that are for the internal RAID card? (Cable, SATA RAID Card)

    When ours failed (became intermittent when disturbed), it was replaced by Apple Support under our Xserve AppleCare Maintenance Agreement - they overnighted one to me.
    In case it helps, the Apple Support part number is 922-6343. The one that you DON'T want (the short cable that is removed when you have the Apple Hardware RAID) has the part number 922-6329. Any Apple service provider ought to be able to order one for you.
    Russ
    Xserve G5 2.0 GHz 2 GB RAM   Mac OS X (10.4.8)   Apple Hardware RAID, ATTO UL4D, Exabyte VXA-2 1x10 1u

  • XServe with hardware RAID card and headless install

    Today, while trying to remotely install Mac OS X Server 10.4 ("Tiger") on an Apple Xserve G5 with the hardware RAID card, I discovered that it did not report the IP address of the server (booted from CD) with the sa_srchr command as it did with Mac OS X Server 10.3. With Mac OS X Server 10.3, I could boot the server from the CD, use the sa_srchr command from a remote machine to find the IP address, ssh into the server, and use megaraid to create or recreate the RAID volumes using the internal drives in the server.
    When I boot the XServe from the Mac OS Server 10.4 CD and run the sa_srchr command from a remote computer, I get "No Longer Used" in the place where the IP address should be (as it was in 10.3 Server and as it is described in the current, 10.4 Server, "Command Line Administration" manual).
    Example: localhost#2xCPUFormat#No Longer Used #xx:xx:xx:xx:xx:xx#Mac OS X Server 10.4#RDY4PkgInstall#3.0#512
    Just as an experiment, I threw the Mac OS X Server 10.4 DVD into an iMac G5 and booted it. Using the sa_srchr command, I get the same "No Longer Used" where the IP address should be.
    Does anyone know:
    Is this a problem with the original "Mac OS X Server 10.4.0" installation CDs? Do I need 10.4.1, 10.4.2 or 10.4.3 CD/DVD media?
    Has this functionality been removed from Tiger install discs? Is there another way to create the RAID volumes?
    I know that I can use the Mac OS X 10.3 Server disks to work around this, but it doesn't seem like I should have to...
    Justin Sako
    Leander ISD

    Are you sure you have the 10.4 Admin Tools installed on the 'client' machine? I've seen this happen when trying to set up a 10.4 server from a machine with 10.3 Admin Tools installed. Try deleting the Admin Tools from your client machine, and re-install from a 10.4 DVD.
    iBook G4   Mac OS X (10.4.3)  

  • Systemd-fsck complains that my hardware raid is in use and fail init

    Hi all,
    I have a hardware RAID of two SSD drives. It seems to be properly recognized everywhere, and I can mount it manually and use it without any problem. The issue is that when I add it to /etc/fstab, my system no longer boots cleanly.
    I get the following error (part of the journalctl messages):
    Jan 12 17:16:21 biophys02.phys.tut.fi systemd[1]: Found device /dev/md126p1.
    Jan 12 17:16:21 biophys02.phys.tut.fi systemd[1]: Starting File System Check on /dev/md126p1...
    Jan 12 17:16:21 biophys02.phys.tut.fi systemd-fsck[523]: /dev/md126p1 is in use. <--------------------- THIS ERROR
    Jan 12 17:16:21 biophys02.phys.tut.fi systemd-fsck[523]: e2fsck: Cannot continue, aborting.<----------- THIS ERROR
    Jan 12 17:16:21 biophys02.phys.tut.fi systemd-fsck[523]: fsck failed with error code 8.
    Jan 12 17:16:21 biophys02.phys.tut.fi systemd-fsck[523]: Ignoring error.
    Jan 12 17:16:22 biophys02.phys.tut.fi systemd[1]: Started File System Check on /dev/md126p1.
    Jan 12 17:16:22 biophys02.phys.tut.fi systemd[1]: Mounting /home1...
    Jan 12 17:16:22 biophys02.phys.tut.fi mount[530]: mount: /dev/md126p1 is already mounted or /home1 busy
    Jan 12 17:16:22 biophys02.phys.tut.fi systemd[1]: home1.mount mount process exited, code=exited status=32
    Jan 12 17:16:22 biophys02.phys.tut.fi systemd[1]: Failed to mount /home1.
    Jan 12 17:16:22 biophys02.phys.tut.fi systemd[1]: Dependency failed for Local File Systems.
    Does anybody understand what is going on? Who is mounting /dev/md126p1 before systemd-fsck runs? This is my /etc/fstab:
    # /etc/fstab: static file system information
    # <file system> <dir> <type> <options> <dump> <pass>
    # /dev/sda1
    UUID=4d9f4374-fe4e-4606-8ee9-53bc410b74b9 / ext4 rw,relatime,data=ordered 0 1
    #home raid 0
    /dev/md126p1 /home1 ext4 rw,relatime,data=ordered 0 1
    The issue is that after the error I'm dropped to the emergency-mode console, and just pressing Ctrl+D to continue boots the system, and the mount point seems okay. This is the output of 'systemctl show home1.mount':
    Id=home1.mount
    Names=home1.mount
    Requires=systemd-journald.socket systemd-fsck@dev-md126p1.service -.mount
    Wants=local-fs-pre.target
    BindsTo=dev-md126p1.device
    RequiredBy=local-fs.target
    WantedBy=dev-md126p1.device
    Conflicts=umount.target
    Before=umount.target local-fs.target
    After=local-fs-pre.target systemd-journald.socket dev-md126p1.device systemd-fsck@dev-md126p1.service -.mount
    Description=/home1
    LoadState=loaded
    ActiveState=active
    SubState=mounted
    FragmentPath=/run/systemd/generator/home1.mount
    SourcePath=/etc/fstab
    InactiveExitTimestamp=Sat, 2013-01-12 17:18:27 EET
    InactiveExitTimestampMonotonic=130570087
    ActiveEnterTimestamp=Sat, 2013-01-12 17:18:27 EET
    ActiveEnterTimestampMonotonic=130631572
    ActiveExitTimestampMonotonic=0
    InactiveEnterTimestamp=Sat, 2013-01-12 17:16:22 EET
    InactiveEnterTimestampMonotonic=4976341
    CanStart=yes
    CanStop=yes
    CanReload=yes
    CanIsolate=no
    StopWhenUnneeded=no
    RefuseManualStart=no
    RefuseManualStop=no
    AllowIsolate=no
    DefaultDependencies=no
    OnFailureIsolate=no
    IgnoreOnIsolate=yes
    IgnoreOnSnapshot=no
    DefaultControlGroup=name=systemd:/system/home1.mount
    ControlGroup=cpu:/system/home1.mount name=systemd:/system/home1.mount
    NeedDaemonReload=no
    JobTimeoutUSec=0
    ConditionTimestamp=Sat, 2013-01-12 17:18:27 EET
    ConditionTimestampMonotonic=130543582
    ConditionResult=yes
    Where=/home1
    What=/dev/md126p1
    Options=rw,relatime,rw,stripe=64,data=ordered
    Type=ext4
    TimeoutUSec=1min 30s
    ExecMount={ path=/bin/mount ; argv[]=/bin/mount /dev/md126p1 /home1 -t ext4 -o rw,relatime,data=ordered ; ignore_errors=no ; start_time=[Sat, 2013-01-12 17:18:27 EET] ; stop_time=[Sat, 2013-
    ControlPID=0
    DirectoryMode=0755
    Result=success
    UMask=0022
    LimitCPU=18446744073709551615
    LimitFSIZE=18446744073709551615
    LimitDATA=18446744073709551615
    LimitSTACK=18446744073709551615
    LimitCORE=18446744073709551615
    LimitRSS=18446744073709551615
    LimitNOFILE=4096
    LimitAS=18446744073709551615
    LimitNPROC=1031306
    LimitMEMLOCK=65536
    LimitLOCKS=18446744073709551615
    LimitSIGPENDING=1031306
    LimitMSGQUEUE=819200
    LimitNICE=0
    LimitRTPRIO=0
    LimitRTTIME=18446744073709551615
    OOMScoreAdjust=0
    Nice=0
    IOScheduling=0
    CPUSchedulingPolicy=0
    CPUSchedulingPriority=0
    TimerSlackNSec=50000
    CPUSchedulingResetOnFork=no
    NonBlocking=no
    StandardInput=null
    StandardOutput=journal
    StandardError=inherit
    TTYReset=no
    TTYVHangup=no
    TTYVTDisallocate=no
    SyslogPriority=30
    SyslogLevelPrefix=yes
    SecureBits=0
    CapabilityBoundingSet=18446744073709551615
    MountFlags=0
    PrivateTmp=no
    PrivateNetwork=no
    SameProcessGroup=yes
    ControlGroupModify=no
    ControlGroupPersistent=no
    IgnoreSIGPIPE=yes
    NoNewPrivileges=no
    KillMode=control-group
    KillSignal=15
    SendSIGKILL=yes
    Last edited by hseara (2013-01-13 19:31:00)
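    A few commands that can help narrow down what already holds /dev/md126p1 before systemd-fsck gets to it - purely a diagnostic sketch, not a guaranteed fix:
    # what the kernel's md layer thinks about the array and its partitions
    cat /proc/mdstat
    lsblk /dev/md126
    # is anything already mounted on, or holding open, the partition?
    findmnt /dev/md126p1
    fuser -vm /dev/md126p1
    # how the mount unit was generated from fstab and what it logged this boot
    systemctl status home1.mount
    journalctl -b -u home1.mount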

    Hi Hatter, I'm a little confused about your statement not to use RAID right now. I'm new to the Mac, awaiting the imminent delivery of my first Mac Pro quad core with a 1TB RAID 10 setup. As far as I know, it's software RAID, not the RAID card (pricey!). My past understanding of RAID 10 on any system is that it offers the best combination of speed and safety (redundancy), since the drives are striped and mirrored: one drive dies, a quick replacement and you're up and running a ton quicker than if you had gone RAID 5 (20 min writes per 5GB of data?). Or were you suggesting not to do RAID with the RAID card..?
    I do plan to use an external drive for archival backups of settings, setups, etc., because as we all know, even the best foolproof plans can be kicked in the knees by Murphy.
    My rig is destined to be my video editing machine, so the combo of quad core, 4GB+ memory and RAID 10 should make this quite the machine.. but I'm curious why you wouldn't suggest RAID..
    And if you could explain this one: I see in the forums a lot of people are running Boot Camp or Parallels, which I assume is what you use to run multiple OSes on your Mac systems so that you can run Mac OS and Windblows on the same machine.. but why is everyone leaning towards Vista when those of us on Windblows are trying to avoid it like the plague? I've already dumped Vista from two PCs and installed XP for a quicker, less bloated PC. Is Vista the only MS OS that will co-exist with Mac systems? Just curious..
    Thanks in advance.. Good Holidays

  • X2100 hardware RAID support

    Got a new X2100 server with two SATA disks.
    I upgraded the BIOS using the Supplemental 1.1 disk. The BIOS revision reported is now 1.0.3.
    I have defined a hardware RAID mirror of the two SATA disks using the BIOS utility. It looks OK and is reported as "Healthy Mirror" when the machine is turned on.
    If I boot Fedora Core 4 (which is not officially supported by Sun) I see two disks instead of the single mirrored volume I expected.
    If I boot Solaris 10 1/06 (which IS supported), then no disks are found at all!
    So my question is: Does anyone know if the HW RAID system on the X2100 is supported at all? And if it is, how?

    Have you looked at the procedure in this manual?
    http://www.sun.com/products-n-solutions/hardware/docs/html/819-3720-11/Chap2.html
    Also, there seems to be a lack of drivers for some operating systems (W2003) on the X2100, and this could be the case with FC4 as well (although I use FC4 on a notebook and it seems very close to RHEL 4, just without the RHEL graphics, clustering and other tools). In the InfoDoc section of the spectrum handbook for the X2100, there is a document that discusses configuring SVM on x64 systems; at the beginning the author recommends using the hardware RAID utility, which may suggest that it is supported. You should look through the system documentation to see if you have missed something. I'd be interested to know if you set the BIOS up for the various OS types!
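    The SVM setup that such an InfoDoc describes looks roughly like the sketch below. It is a generic Solaris 10 root-mirror recipe rather than anything X2100-specific, and the disk/slice names (c0t0d0, c0t1d0) and metadevice names are placeholders.
    # state-database replicas must exist before any SVM metadevice is created
    metadb -a -f -c 3 c0t0d0s7 c0t1d0s7
    # one submirror per disk slice, then a one-way mirror on top of the first
    metainit -f d10 1 1 c0t0d0s0
    metainit d20 1 1 c0t1d0s0
    metainit d0 -m d10
    # for the root slice only: updates /etc/vfstab and /etc/system, then reboot
    metaroot d0
    # after the reboot, attach the second half and let it sync
    metattach d0 d20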

  • Best protocol for dual booting on a hardware RAID 0 array?

    Hi folks. I would like to dual boot Windows 7 and Arch. I'll append the specs. I have a terabyte to split evenly between the two drives - each is 500GB. Unless someone can come up with a reason and convince me otherwise, I want to do away with the RAID array. There's no redundancy anyhow, and the speed I would lose by breaking the array is negligible, therefore irrelevant.
    My issue is that I have a hardware RAID 0 array with Intel Rapid Storage Technology as the controller. The computer did NOT come with a Windows disc, but rather a recovery partition. It is my understanding that if I break the array, I will lose the recovery partition and will not be able to reinstall Windows - which I need. IF the recovery partition can be left unfazed by breaking the array and I can use it to reinstall Windows, I would prefer that, since I may need to recover Windows in the future. It's not a deal breaker if I can't keep the recovery partition, since I have the Windows key.
    Is this the ideal protocol?:
    1. Backup - I plan on using Alienware Respawn or Clonezilla to back up to a CD, and will also back up to an external drive.
    2. Break array, but do not alter BIOS to AHCI - leave as RAID.
    3. Restore Windows on one drive.
    4. Install Arch on second drive.
    5. Configure GRUB.
    6. Smoke stogie or alternatively weep because I turned my computer into a brick.
    At which stage does the partitioning come in - before or after breaking the array? Is there a better method than the one I listed? I have spent days scouring Google and the forums, and while it's easy to find info on breaking a hardware RAID, there isn't much on doing this with the recovery partition and Alienware Respawn aspects involved. Any help would be appreciated. Please don't kill me or shred my diameter.
    ==================================================================================
    Specs:
    Processor: Intel(R) Core(TM) i7 CPU Q740 @ 1.73GHz, 8 logical processors
    Installed Memory: 8.00 GB RAM
    64 Bit Operating System
    Alienware M17X10
    Windows 7 Home Premium
    Performance Options: DEP turned on. Virtual Memory: 8180 MB
    Architecture: AMD64 Intel64 Family 6 Model 30 Stepping 5, GenuineIntel
    Computer: ACPI x64-based PC
    Display Adapters: 2 ATI Mobility Radeon HD 5800 Series
    DVD/CD-ROM: HL-DT-ST DVDRWBD CA10N
    IDE ATA/ATAPI controllers: Ricoh PCIe Memory Stick Host Controller, Ricoh PCIe SD/MMC Host Controller, and Ricoh PCIe xD-Picture Card Controller
    IEEE 1394 Bus host controllers: Ricoh 1394 OHCI Compliant Host Controller
    Imaging devices: Integrated Webcam
    Mice: 3 HID-compliant mice and Synaptics PS/2 Port Touch Pad
    Monitor: Generic PnP Monitor
    Sound, video and game controllers: AMD High Definition Audio Device and High Definition Audio Device
    Storage Controllers: Intel(R) Mobile Express Chipset SATA RAID Controller

    I think the big thing will be backing up. I don't know anything about the two programs you would use, but I know that if I dd-copy the disks, I would have to change the size of the partitions to match the size of the new partitions. For example: I have Arch installed on a RAID 0 of 32GB disks, and if I wanted to break my RAID and install on just one disk, I would have to shrink my dd'd copy to fit the smaller drive.
    Otherwise, it looks like you have the right idea, or at least the right direction.
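    If dd is the route taken for the backup step, a minimal sketch might look like the following. It is only an illustration: the array device (/dev/md126), the partition numbers and the /mnt/external mount point are assumptions - check lsblk in the live environment before copying anything.
    # save the partition table of the fake-RAID volume
    sfdisk -d /dev/md126 > /mnt/external/partition-table.dump
    # image the recovery partition (replace p3 with whatever lsblk reports)
    dd if=/dev/md126p3 of=/mnt/external/recovery.img bs=4M
    # image the Windows partition too if a full fallback is wanted
    dd if=/dev/md126p2 of=/mnt/external/windows.img bs=4M
    Clonezilla does essentially the same thing partition by partition, so either path leaves a restorable copy before the array is broken.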

  • Updating a Hardware RAID array

    I'm running a recent Mac Pro model with Hardware RAID and four 1TB SATA drives.
    Seagate 1.5TB drives are now available, and they're cheaper than the 1TB drives were just a few months ago.
    My Infrant/Netgear ReadyNAS NV+ implements a proprietary form of RAID that they called X-RAID, similar to RAID-5 but with the feature that you can swap larger drives into the array one at a time, and when you're done, the RAID will be re-synced to make use of the additional storage without requiring that you dump and restore the contents of the array. The final re-sync is an admittedly long process (it took three days for mine to update from 4x750GB to 4x1TB) but it's still a very convenient feature.
    Does Hardware RAID on Mac OS X have anything like this, i.e. can I replace 1TB drives one at a time with 1.5TB drives and preserve my data? And when I'm done, will the system recognize and use the new storage?
    If not, could someone please describe the procedure that I would need to go through in order to achieve this? I presume that I would need to recreate the RAID array entirely, re-install the OS, and then restore the system from my Time Machine backup. But I'm not sure of the details.
    I'd appreciate any help with this. Thanks.
    An aside: I was more than a little upset a few months ago when I called Apple Support for help with initial setup of my RAID array (I have Apple Care) and was told "Sorry, we don't support Hardware RAID." So Apple was quite happy to sell me an $800 piece of hardware for my lavish new system but didn't bother to tell me at time of purchase that my configuration was "unsupported." It seems a little outrageous to me.
    Given Apple's refusal to help me I need to ask the community for help with problems like this.

    Hi rrgomes;
    Never having heard of X-RAID, but going by your comment that it is similar to RAID 5, I would have to say that you will not be able to upgrade by replacing one disk at a time.
    As to no support for the RAID card, I personally would not have taken his word for that. Instead I would have escalated it up through Customer Service just to be sure before I gave up on that point. To me the response you got there sounds bogus.
    Allan

  • Solaris Volume Manager or Hardware RAID?

    Hi - before I build some new Solaris servers I'd like thoughts on the following, please. I've previously built our Sun servers using SVM to mirror disks, and one of the reasons is that when I patch the OS I always split the mirrors beforehand, so in the event of a failure I can just boot from the untouched mirror. This method has saved my bacon on numerous occasions. However, we have just got some T4-1 servers that have hardware RAID, and although I like this as it moves away from SVM / software RAID to hardware RAID, I'm now thinking that I will no longer have this "backout plan" in the event of issues with the OS updates or otherwise, however unlikely.
    Can anyone please tell me if I have any other options?
    Thanks - Julian.

    Thanks - just going through the 300-page ZFS admin guide now. I want to ditch SVM as it's clunky and not very friendly whenever we have a disk failure or need to patch the OS, as mentioned. One thing I have just read in the ZFS admin guide is that:
    "As described in “ZFS Pooled Storage” on page 51, ZFS eliminates the need for a separate volume
    manager. ZFS operates on raw devices, so it is possible to create a storage pool comprised of
    logical volumes, either software or hardware. This configuration is not recommended, as ZFS
    works best when it uses raw physical devices. Using logical volumes might sacrifice
    performance, reliability, or both, and should be avoided."
    So it looks like I need to destroy my hardware RAID as well and just let ZFS manage it all. I'll try that, amend my JET template, kick off an install and see what it looks like.
    Thanks again - Julian.
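    For what it's worth, the ZFS equivalent of the old split-the-SVM-mirror habit is roughly the sketch below. The disk names (c0t0d0, c0t1d0) and pool names are placeholders, and whether you split the pool before patching or rely on boot environments and snapshots instead is a judgement call.
    # build a mirrored pool directly on the raw disks, as the admin guide recommends
    zpool create tank mirror c0t0d0 c0t1d0
    zpool status tank
    # before patching, split one side of the mirror off as a standalone backout pool
    zpool split tank tank_backout
    # if the patching goes fine, destroy the backout pool and re-attach the disk
    zpool destroy tank_backout
    zpool attach tank c0t0d0 c0t1d0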

  • Seeking advice for backing up Xserve G5 w/ RAID PCI card

    Hello all!
    I'm a newbie to Macs and server admin, and I have inherited the job of setting up a server at work for file storage. I'll do my best to give a concise description of our set-up - I'm looking for some advice on the last few odds and ends... mainly how we can backup the system.
    We bought an Xserve G5 with the optional RAID PCI card. We have 3 500GB drives in the Xserve, configured as RAID 5 (giving us effectively 1TB of usable space). We will be using the server for data storage. About 20 computers will access the server over the network; we are using an assortment of Macs and PCs.
    I am seeking advice on backup systems for the server. In the event that the RAID 5 fails, we don't want to lose our data. We just need a snapshot of the server; we don't need to archive data or take the drives offsite. Our old server just used Retrospect to run incremental backups every night (with a complete clean backup once a month). Our current thought is to attach a large external hard drive to our admin computer (not the server directly) and run nightly backups as before.
    The major points I have are:
    -Any thoughts on reliable 1 TB external drives?
    -Any recommendations on software that can back up from a RAID over a network? I found info on Retrospect Server 6.0 - it seems to do what we want, but it is rather pricey.
    Thanks in advance for any advice! I really appreciate it!
    Xserve G5 Mac OS X (10.4.2)

    Greetings-
    We all started out as newbies at one time or another-no worries. That is why we are here.
    I personally use the Lacie branded drives. They are sturdy and reliable. My only thoughts here are to have at least three external drives-one for today's backup, one for a backup stored securely nearby, and one to be securely stored off-site. Rotate the drives daily so that your worst-case scenario will be a catastrophe that requires you to use a three-day old backup, depending upon how old your off-site backup is. Not ideal but better than the alternative. External drives are cheap enough these days to allow you to do this at a reasonable cost. Plus it is easy enough to throw one in a briefcase and tote it home for safety (just don't lose it!)
    I would stay away from Retrospect. If you search these forums you will find several instances of folks having some serious issues with the program. I use a program called Carbon Copy Cloner that does the job nicely for my basement server. There are ways to do the backups via the command line interface as well but I am not so familiar with those commands. You may have to dig a little deeper to find something that works for you.
    One of the other advantages of the FW external drive is that you can share it with other users, so perhaps you can set things up to have your network backups to go to that drive. Tis a thought.
    Luck-
    -DaddyPaycheck
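    If you do end up scripting the nightly copy yourself rather than buying Retrospect Server, a minimal sketch could be as simple as the line below, run on the server or on any machine that mounts both volumes. The volume names are made up, and the -E flag (extended attributes / resource forks) depends on the rsync build Apple ships; older builds may need ditto or Carbon Copy Cloner instead.
    # mirror the RAID share onto the external backup drive each night
    rsync -aE --delete /Volumes/ServerRAID/ /Volumes/BackupDrive/ServerRAID/
    Dropping that into a nightly cron or launchd job gives the same rotating-drive scheme described above, just without the Retrospect licence.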

  • Premiere Pro CS4 Unusable With Hardware RAID 0

    Greetings, This is a mysterious problem I seem to be having, let me tell my tale...
    I was using a software RAID0 while editing with Premiere.  Even using HD clips, it was very fast! The RAID was only to serve up the footage. Scratch disk and OS disks are on separate drives.
    We then upgraded to a hardware RAID card (3ware) and migrated the data from the old drives to the new ones, everything going fine. I finally got around to opening Premiere, and the project that used to take 2 minutes or so to open now takes an excruciating 10-15 minutes!  Plus the timeline that I was able to scrub through at a decent speed is basically unusable.  I cannot play clips in the timeline either unless I render the preview, whereas on the software RAID I could play unrendered timelines with no stutters.
    Went back to the software raid just to check, and it performed just as fast as before.
    Of course I assumed it was a hardware issue, so I contacted 3ware support and basically they were stumped.  Then I moved on to running a whole slew of tests with HDTune on the RAID array. I was getting around 290MB/s and very nice random I/O speeds, and write speeds were great too.  Weird. I could also open up any of my source clips in QuickTime and they would play perfectly, and I could seek anywhere in the clip with no hesitation.
    Still not satisfied, I opened up about 250 clips in Avid MC5 (using AMA linking to make sure it's not some kind of codec thing) and made a lot of short cuts in the timeline.  And lo and behold, it plays like a rendered clip, and I can scrub through the timeline perfectly.
    This weird HDD activity was then confirmed using a disk monitor. Take a look at the graph:  http://www.bubble-engine.com/HDtune.jpg
    The low, evenly spaced-out bars are activity from using Premiere Pro, when the timeline is unusable. The big solid areas are me using Avid. You can see that for some mysterious reason Premiere isn't accessing the drive properly. But again, on the software RAID Premiere behaves properly.
    Any Ideas?? This is one of the weirdest tech support things I've ever encountered...

    when I click the record button, the screen goes to a blue screen with Playing on video hardware
    This is normal for HDV.
    How do I also get the Scene Detect to work?
    With HDV?
    Upgrade to CS5.

  • Can I make a hardware raid with the on board Marvell SE9128 chip on p67a-gd65?

    Hi,
    as you all know, the RAID 0/1/10/5/JBOD modes of the P67 chipset are pure software RAID, or "fake" RAID. I saw some SATA RAID PCIe 2.0 x1 cards on sale which have the Marvell SE9128 chip. So my question is: can I make a hardware RAID with the onboard Marvell SE9128 chip on the P67A-GD65?
    Thanks
    --pepe

    Quote from: Stu on 06-November-11, 02:31:26
    Hardware RAID explained:
    http://www.pcguide.com/ref/hdd/perf/raid/conf/ctrlHardware-c.html
    What do you mean by this? I know the difference between a software (fake) and a hardware RAID; your post doesn't answer the question. The P67 chipset has integrated software RAID, and the Marvell SE9128 is also found on some hardware RAID cards. That's why I'm still wondering whether it provides hardware RAID on this board or not.
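    One rough way to tell, from a running Linux system, which kind of RAID you actually have (a generic sketch, not specific to this board): true hardware RAID presents a single logical disk and the OS never sees the member drives, while firmware/"fake" RAID is assembled by the OS and shows up in the md or dmraid layer.
    # identify the storage controllers
    lspci | grep -i -E 'raid|sata'
    # arrays assembled by the kernel md driver (including Intel IMSM fake RAID)
    cat /proc/mdstat
    # arrays handled by dmraid, the other common fake-RAID path
    dmraid -r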
