ASM mirroring vs. hardware RAID

We are planning a new installation of Oracle 10g Standard Edition with RAC.
Which is best to use: ASM mirroring or hardware RAID?
Thank you,
Marius

I found this link http://www.revealnet.com/newsletter-v6/0905_D.htm which has an interesting comparison.
We have an iSCSI array.
Porzer: I'm thinking the same, but I don't have experience with Oracle and I want to hear it from someone with more experience :)
Thank you,
Marius

Similar Messages

  • I need help on how to set up hardware RAID for ASM.

    In the "Recommendations for Storage Preparation" section of the following documentation: http://download.oracle.com/docs/cd/B28359_01/server.111/b31107/asmprepare.htm
    It mentions:
    --Use the storage array hardware RAID 1 mirroring protection when possible to reduce the mirroring overhead on the server.
    Which is a good RAID 1 configuration considering my machine setup? (I put my machine info below.)
    Should I go for something like:
    5 * RAID 1 (2 disks each): disk group DATA
    5 * RAID 1 (2 disks each): disk group FRA
    Then ASM will take care of all the striping across the 5 RAID sets inside a disk group, right?
    OR, I go for:
    1 * RAID 1 of 10 disks: disk group DATA
    1 * RAID 1 of 10 disks: disk group FRA
    In the second configuration, does ASM recognize that there are 10 disks in my RAID configuration and stripe across those disks? Or, to use ASM striping, do I need to have many RAID sets in a disk group?
    Here are my machine characteristics:
    O/S is Oracle Enterprise Linux 4.5 64-bit
    Single instance on Enterprise Edition 10g R2
    200 GB database size
    High-volume OLTP environment
    Estimated growth of 60 to 80 GB per year
    50 to 70 GB of archivelog generation per day
    Flashback retention is 24 hours: 120 GB of flashback space on average
    I keep a local backup, then push to another disk storage, then to tape.
    General hardware info:
    Dell PowerEdge 2950
    16 GB RAM
    2 * 64-bit dual-core CPUs
    6 * local 300 GB / 15k RPM disks
    Additional storage:
    Dell PowerVault MD1000
    15 * 300 GB / 15k RPM disks
    So I have 21 disks in total.

    I would personally prefer the first configuration and let ASM stripe the disks. Generally speaking, many RAID controllers will stripe then mirror (0+1) when you tell them to build a striped and mirrored RAID set on 10 disks. Some will mirror then stripe (1+0), which is what most people prefer. That's because when a 1+0 configuration has a disk failure, only a single RAID 1 set needs to be resynced; the other members of the stripe won't have to be resynchronized.
    So, I'd prefer to have ASM manage 5 LUNs and let ASM stripe across those 5 LUNs in each disk group. It also increases your ability to reorganize your storage: if you need 20% more space in DATA and can afford 20% less in FRA, you can move one of your RAID 1 LUNs from FRA to DATA easily.
    That's my 0.02.
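    If you go with the first layout, disk group creation is one statement per group. A minimal sketch (the /dev/raw/raw* paths are placeholders for however your ten RAID 1 LUNs are presented on your system; EXTERNAL REDUNDANCY tells ASM that the hardware handles mirroring, while ASM still stripes extents across the five disks in each group):
    sqlplus / as sysdba <<'EOF'
    CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
      DISK '/dev/raw/raw1','/dev/raw/raw2','/dev/raw/raw3',
           '/dev/raw/raw4','/dev/raw/raw5';
    CREATE DISKGROUP FRA EXTERNAL REDUNDANCY
      DISK '/dev/raw/raw6','/dev/raw/raw7','/dev/raw/raw8',
           '/dev/raw/raw9','/dev/raw/raw10';
    EOF
    Moving a LUN between groups later is an ALTER DISKGROUP ... DROP DISK / ADD DISK pair, and ASM rebalances online.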

  • Solaris 10 X86 - Hardware RAID - SMC/SVM question...

    I have gotten back into Sun Solaris system administration after a five-year hiatus... My skills are a little rusty and some of the tools have changed, so here are my questions...
    I have installed Solaris 10 release 1/06 on a Dell 1850 with an attached PowerVault 220v connected to a PERC 4/Di controller. The RAID is configured via the BIOS interface to my liking; Solaris is installed and sees all the partitions which I created during install.
    For testing purposes, the server's internal disk is used for the OS, and the PowerVault is split into two RAIDs: one is a mirror, one is a stripe...
    The question is: do I manage the RAID using the controller's own tools, or do I use SMC (Sun Management Console)?
    When I launch SMC and go into Enhanced Storage, I do not see any RAIDs... If I select "Disks" I do see them, but when I select them, it wants to run FDISK on them... now this is OK since they are blank, but I want to be sure I am not doing something I shouldn't...
    If the PERC controller is controlling the RAID, what do I need SMC for?

    You can use SMC for other purposes, but it won't help you with RAID.
    Sol 10 1/06 has raidctl, which handles LSI1030 and LSI1064 RAID-enabled controllers (from raidctl(1M)).
    Some of the PERCs (most?) are LSI, but I don't know if those are the chipsets used by your PowerEdge (I doubt it).
    Generally you can break it down like this for x86:
    If you are using hardware RAID with Solaris 10 x86, you have to use pre-Solaris (i.e. on the RAID controller) management, or hope that the manufacturer of the device has a Solaris management agent/interface (good luck).
    The only exception to this that I know of is the RAID that comes with V20z, V40z, X4100, X4200.
    Otherwise you will want to go with SVM or VxVM and manage RAID within Solaris (software RAID).
    SMC etc. are only going to show you stuff if SVM is involved, and VxVM has its own interface; otherwise the disks are controlled by the PERC and just hanging out as far as Solaris is concerned.
    Hope this helps.
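    For reference, a rough sketch of a two-way SVM mirror (the slice names are placeholders; see metainit(1M) and metattach(1M)):
    # state database replicas first, then mirror d10 built from two submirrors
    metadb -a -f -c 2 c1t0d0s7 c1t1d0s7
    metainit d11 1 1 c1t0d0s0   # submirror 1: one stripe of one slice
    metainit d12 1 1 c1t1d0s0   # submirror 2
    metainit d10 -m d11         # create the mirror with the first submirror
    metattach d10 d12           # attach the second; the sync starts automatically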

  • systemd-fsck complains that my hardware RAID is in use and fails init

    Hi all,
    I have a hardware RAID of two SSDs. It seems to be properly recognized everywhere, and I can mount it manually and use it without any problem. The issue is that when I add it to /etc/fstab, my system no longer starts cleanly.
    I get the following error (part of the journalctl messages):
    Jan 12 17:16:21 biophys02.phys.tut.fi systemd[1]: Found device /dev/md126p1.
    Jan 12 17:16:21 biophys02.phys.tut.fi systemd[1]: Starting File System Check on /dev/md126p1...
    Jan 12 17:16:21 biophys02.phys.tut.fi systemd-fsck[523]: /dev/md126p1 is in use. <--------------------- THIS ERROR
    Jan 12 17:16:21 biophys02.phys.tut.fi systemd-fsck[523]: e2fsck: Cannot continue, aborting.<----------- THIS ERROR
    Jan 12 17:16:21 biophys02.phys.tut.fi systemd-fsck[523]: fsck failed with error code 8.
    Jan 12 17:16:21 biophys02.phys.tut.fi systemd-fsck[523]: Ignoring error.
    Jan 12 17:16:22 biophys02.phys.tut.fi systemd[1]: Started File System Check on /dev/md126p1.
    Jan 12 17:16:22 biophys02.phys.tut.fi systemd[1]: Mounting /home1...
    Jan 12 17:16:22 biophys02.phys.tut.fi mount[530]: mount: /dev/md126p1 is already mounted or /home1 busy
    Jan 12 17:16:22 biophys02.phys.tut.fi systemd[1]: home1.mount mount process exited, code=exited status=32
    Jan 12 17:16:22 biophys02.phys.tut.fi systemd[1]: Failed to mount /home1.
    Jan 12 17:16:22 biophys02.phys.tut.fi systemd[1]: Dependency failed for Local File Systems.
    Does anybody understand what is going on? Who is mounting /dev/md126p1 before systemd-fsck runs? This is my /etc/fstab:
    # /etc/fstab: static file system information
    # <file system> <dir> <type> <options> <dump> <pass>
    # /dev/sda1
    UUID=4d9f4374-fe4e-4606-8ee9-53bc410b74b9 / ext4 rw,relatime,data=ordered 0 1
    #home raid 0
    /dev/md126p1 /home1 ext4 rw,relatime,data=ordered 0 1
    The issue is that after the error I'm dropped to the emergency-mode console, and just pressing Ctrl+D continues booting the system, after which the mount point seems okay. This is the output of 'systemctl show home1.mount':
    Id=home1.mount
    Names=home1.mount
    Requires=systemd-journald.socket [email protected] -.mount
    Wants=local-fs-pre.target
    BindsTo=dev-md126p1.device
    RequiredBy=local-fs.target
    WantedBy=dev-md126p1.device
    Conflicts=umount.target
    Before=umount.target local-fs.target
    After=local-fs-pre.target systemd-journald.socket dev-md126p1.device [email protected] -.mount
    Description=/home1
    LoadState=loaded
    ActiveState=active
    SubState=mounted
    FragmentPath=/run/systemd/generator/home1.mount
    SourcePath=/etc/fstab
    InactiveExitTimestamp=Sat, 2013-01-12 17:18:27 EET
    InactiveExitTimestampMonotonic=130570087
    ActiveEnterTimestamp=Sat, 2013-01-12 17:18:27 EET
    ActiveEnterTimestampMonotonic=130631572
    ActiveExitTimestampMonotonic=0
    InactiveEnterTimestamp=Sat, 2013-01-12 17:16:22 EET
    InactiveEnterTimestampMonotonic=4976341
    CanStart=yes
    CanStop=yes
    CanReload=yes
    CanIsolate=no
    StopWhenUnneeded=no
    RefuseManualStart=no
    RefuseManualStop=no
    AllowIsolate=no
    DefaultDependencies=no
    OnFailureIsolate=no
    IgnoreOnIsolate=yes
    IgnoreOnSnapshot=no
    DefaultControlGroup=name=systemd:/system/home1.mount
    ControlGroup=cpu:/system/home1.mount name=systemd:/system/home1.mount
    NeedDaemonReload=no
    JobTimeoutUSec=0
    ConditionTimestamp=Sat, 2013-01-12 17:18:27 EET
    ConditionTimestampMonotonic=130543582
    ConditionResult=yes
    Where=/home1
    What=/dev/md126p1
    Options=rw,relatime,rw,stripe=64,data=ordered
    Type=ext4
    TimeoutUSec=1min 30s
    ExecMount={ path=/bin/mount ; argv[]=/bin/mount /dev/md126p1 /home1 -t ext4 -o rw,relatime,data=ordered ; ignore_errors=no ; start_time=[Sat, 2013-01-12 17:18:27 EET] ; stop_time=[Sat, 2013-
    ControlPID=0
    DirectoryMode=0755
    Result=success
    UMask=0022
    LimitCPU=18446744073709551615
    LimitFSIZE=18446744073709551615
    LimitDATA=18446744073709551615
    LimitSTACK=18446744073709551615
    LimitCORE=18446744073709551615
    LimitRSS=18446744073709551615
    LimitNOFILE=4096
    LimitAS=18446744073709551615
    LimitNPROC=1031306
    LimitMEMLOCK=65536
    LimitLOCKS=18446744073709551615
    LimitSIGPENDING=1031306
    LimitMSGQUEUE=819200
    LimitNICE=0
    LimitRTPRIO=0
    LimitRTTIME=18446744073709551615
    OOMScoreAdjust=0
    Nice=0
    IOScheduling=0
    CPUSchedulingPolicy=0
    CPUSchedulingPriority=0
    TimerSlackNSec=50000
    CPUSchedulingResetOnFork=no
    NonBlocking=no
    StandardInput=null
    StandardOutput=journal
    StandardError=inherit
    TTYReset=no
    TTYVHangup=no
    TTYVTDisallocate=no
    SyslogPriority=30
    SyslogLevelPrefix=yes
    SecureBits=0
    CapabilityBoundingSet=18446744073709551615
    MountFlags=0
    PrivateTmp=no
    PrivateNetwork=no
    SameProcessGroup=yes
    ControlGroupModify=no
    ControlGroupPersistent=no
    IgnoreSIGPIPE=yes
    NoNewPrivileges=no
    KillMode=control-group
    KillSignal=15
    SendSIGKILL=yes
    Last edited by hseara (2013-01-13 19:31:00)
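    One thing that may be worth trying (an assumption, not a confirmed fix): reference the filesystem by UUID, the way the root entry already does, and give the non-root filesystem fsck pass 2 rather than 1. A sketch of the /home1 entry (the UUID placeholder is hypothetical; read the real one from blkid /dev/md126p1):
    # /etc/fstab -- sketch; pass 2 is the usual value for non-root
    # filesystems, pass 1 is reserved for /
    UUID=<uuid-of-md126p1> /home1 ext4 rw,relatime,data=ordered 0 2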

    Hi Hatter, I'm a little confused about your statement not to use RAID right now. I'm new to the Mac, awaiting the imminent delivery of my first Mac Pro quad-core with a 1 TB RAID 10 setup. As far as I know, it's software RAID, not the RAID card (pricey!). My understanding of RAID 10 on any system has been that it offers the best combination of speed and safety (backups), since the drives are striped and mirrored: one drive dies, and with a quick replacement you're up and running a ton quicker than if you had gone RAID 5 (20 minutes of writes per 5 GB of data?). Or were you suggesting not to do RAID with the RAID card?
    I do plan to use an external drive for archival backups of settings, setups, etc., because as we all know, even the best foolproof plans can be kicked in the knees by Murphy.
    My rig is destined to be my video editing machine, so the combo of quad core, 4 GB+ memory and RAID 10 should make this quite the machine... but I'm curious why you wouldn't suggest RAID.
    And if you could explain this one: I see in the forums that a lot of people are running Boot Camp or Parallels, which I assume is what you use to run multiple OSes on your Mac, so that you can run Mac OS and Windblows on the same machine... but why is everyone leaning towards Vista when those of us on Windblows are trying to avoid it like the plague? I've already dumped Vista from two PCs and installed XP for a quicker, less bloated PC. Is Vista the only MS OS that will co-exist with Mac systems? Just curious...
    Thanks in advance.. Good Holidays

  • X2100 hardware RAID support

    Got a new X2100 server with two SATA disks.
    I upgraded the BIOS using the Supplemental 1.1 disk. The BIOS revision reported is now 1.0.3.
    Have defined a hardware RAID mirror of two SATA disks using the BIOS utility. Looks OK and is reported as "Healthy Mirror" when the machine is turned on.
    If I boot Fedora Core 4 (which is not officially supported by Sun), I see two disks instead of the expected one.
    If I boot Solaris 10 1/06 (which IS supported), then no disks are found!
    So my question is: Does anyone know if the HW RAID system on the X2100 is supported at all? And if it is, how?

    Have you looked at the procedure in this manual?
    http://www.sun.com/products-n-solutions/hardware/docs/html/819-3720-11/Chap2.html
    Also, there seems to be a lack of drivers for some operating systems (W2003) on the X2100; this could be the case with FC4 (although I use FC4 on a notebook and it seems very close to RHEL 4, but without the RHEL graphics, clustering and other tools). In the InfoDoc section of the systems handbook for the X2100, there is a document that discusses configuring SVM on x64 systems; at the beginning the author recommends using the hardware RAID utility, which may suggest that it is supported. You should take a look through the system documentation to see if you have missed something. I'd be interested to know if you set the BIOS up for the various OS types!

  • Hardware RAID 1 with 890GXM-G54

    I have just built a new system with Win 7 Pro and the 890GXM-G65. I have an SSD for the boot drive, and I'm using AHCI for the SSD. I'm trying to get two identical HDDs set up in hardware RAID 1 for data storage. I have loaded the AMD RAID drivers from the MSI site. I set up the array from the BIOS by switching the AHCI option to RAID. I switched back to AHCI so the system would boot off the SSD.
    In Win 7 Pro, I formatted the drives and indicated a mirror. Am I only getting a software, non-bootable RAID solution? I would like to use hardware RAID if at all possible.
    Any suggestions would be greatly appreciated.
    Thanks,
    drob9876

    Quote
    I switched back to AHCI so the system would boot off the SSD.
    When you switch back to AHCI, the controller is no longer in RAID mode...
    Quote
    I would like to use a hardware RAID if at all possible.
    Well, first of all, 99% of all chipset-integrated RAID solutions are soft-RAID, not true hardware RAID. In any case, your controller needs to remain in RAID mode to properly support the volume you created.

  • T2000 - Solaris 10 - Hardware RAID - Hot Spare

    Hi,
    I am setting up a T2000 server with Solaris 10. I have booted to single-user mode over the network and am setting up the mirroring (i.e. # raidctl -c c0t0d0 c0t1d0).
    This has worked fine.
    How does one go about setting up a disk to be a hot spare for this mirror? I found some docs on running raidctl -a set -g c0t3d0 c0t0d0, but the -a option no longer seems to be valid.
    Is it possible to set up a hot spare like this?
    Thanks,
    Darren

    The hardware RAID supports either mirrors or stripes; I don't believe a hot spare is an option. Typically you would use a hot spare in a RAID 5 configuration. Check out this doc page for more info:
    http://docs.sun.com/source/819-7990-10/ontario-volume_man.html#pgfId-1000514
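    For what it's worth, a quick sketch of checking the volume from Solaris (the target below is the one from your original mirror command):
    # with no arguments, raidctl lists RAID volumes and their status;
    # naming the disk shows the volume built with raidctl -c c0t0d0 c0t1d0
    raidctl
    raidctl c0t0d0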

  • Solaris Volume Manager or Hardware RAID?

    Hi - before I build some new Solaris servers I'd like thoughts on the following, please. I've previously built our Sun servers using SVM to mirror disks, and one of the reasons is that when I patch the O/S I always split the mirrors beforehand, so in the event of a failure I can just boot from the untouched mirror; this method has saved my bacon on numerous occasions. However, we have just got some T4-1 servers that have hardware RAID, and although I like this, as it moves away from SVM / software RAID to hardware RAID, I'm now thinking that I will no longer have this "backout plan" in the event of issues with the O/S updates or otherwise, however unlikely.
    Can anyone please tell me if I have any other options?
    Thanks - Julian.

    Thanks - just going through the 300-page ZFS admin guide now. I want to ditch SVM as it's clunky and not very friendly whenever we have a disk failure or need to patch the O/S, as mentioned. One thing I have just read in the ZFS admin guide is that:
    "As described in “ZFS Pooled Storage” on page 51, ZFS eliminates the need for a separate volume
    manager. ZFS operates on raw devices, so it is possible to create a storage pool comprised of
    logical volumes, either software or hardware. This configuration is not recommended, as ZFS
    works best when it uses raw physical devices. Using logical volumes might sacrifice
    performance, reliability, or both, and should be avoided."
    So it looks like I need to destroy my hardware RAID as well and just let ZFS manage it all. I'll try that, amend my JET template, kick off an install and see what it looks like.
    Thanks again - Julian.
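    For reference, a minimal sketch of the ZFS equivalent for a data pool (the device names are placeholders; on a T4-1 the root pool itself is normally laid down by the installer/JET profile):
    # two-way ZFS mirror on raw disks; ZFS does the mirroring, checksumming
    # and resilvering itself, so delete the hardware RAID volume first
    zpool create datapool mirror c0t2d0 c0t3d0
    zpool status datapool   # shows both sides of the mirror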

  • Partitioning a Hardware RAID

    OK, I have already searched the net to no avail; I'm hoping someone here will be able to help me.
    I currently have an external 300 GB HD with 2 partitions: one is a bootable backup and the other is storage. I have recently realized that if this drive were to fail, I would lose a ton of important data, so I have decided to create a RAID setup. I am considering purchasing a Buffalo DriveStation Duo with two 500 GB drives in it. I would set this up in a RAID 1 array, so I will have 500 GB of usable space with a constant mirror-image backup. My question is: can I partition the RAID into 2 partitions (as I have now with my 300 GB drive), one for a bootable backup and one for storage, and still maintain the mirroring? In essence I want to have 1 drive with 2 partitions on it, and have the drive still be mirrored to the second drive.
    First: is this possible?
    Second: will I still be able to boot from the boot partition?
    Third: if so, can having more than one partition on a RAID create problems, such as increased disk failure, slower speeds, etc.?
    Thanks in advance to anyone that can help me with this!

    Well, the reason I wanted to do this RAID array is that I store a lot of files on my external drive that are not on my internal drive, so if my external drive were to fail, those files would be gone for good. I also keep a bootable backup that I constantly recreate on a separate partition of the external drive. I thought that if I created the RAID, I would avoid loss of data in the event of a disk failure.
    As for using my current drive, I was going to upgrade the amount of storage I had and set up the RAID at the same time; if I buy two 500 GB drives I will have 500 GB of usable space instead of 300 GB.
    I am, however, unfamiliar with the OS X RAID software. Is this the same as a "software RAID"? I have heard that a software RAID is much slower than a hardware RAID because it uses your computer's processor to mirror data rather than a RAID card in the enclosure.
    As for cost, I can buy this DriveStation Duo (does RAID 0 and 1; has FireWire 800, 400 and USB; and an internal hardware RAID) with two preinstalled 500 GB SATA drives for $300. I thought that this was a pretty good price for what I am getting, but I am very uneducated in all of this and could be very wrong.
    If, as you said earlier, I can partition the drives and then create a RAID (with the hardware RAID, and without needing a "dual RAID card" or whatever), I would do that. Are you saying that I would only be able to have the 2 RAIDs required if I use the OS X software?
    I guess my questions boil down to this:
    1. Is the OS X RAID software considered a "software RAID", and is it slower than a hardware RAID? If so, by how much?
    2. If it's not any slower, what materials would you recommend to build this, considering I want 500 GB of usable space rather than the old 300 GB?
    Also, if it's not slower, where is this software in the OS (Disk Utility?), and how would I use it?
    3. If the software way is slower, then I would still want to use the hardware method I have been pursuing. Can I still make the partitions in advance and then make the 2 RAIDs, or will this not work with the hardware method?
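    For what it's worth on question 1: the built-in OS X RAID is indeed software RAID, and it lives in Disk Utility (there is an equivalent diskutil command line). A sketch, assuming a reasonably recent OS X release; the disk identifiers and set name are placeholders, so check diskutil list first:
    # list attached disks, then build a two-disk software mirror named Backup
    diskutil list
    diskutil appleRAID create mirror Backup JHFS+ disk2 disk3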

  • PCI ATA 66 Hardware RAID, follow up.

    http://discussions.apple.com/thread.jspa?threadID=1015242&tstart=0
    Just to recap: the problem I was having in this archived post, booting OS X from my 120 GB + 120 GB hardware PCI ATA66 RAID 0 drive, was in fact due to the <8 GB partition limitation. XPostFacto doesn't enforce this on the PCI card, and it worked for a while with no helper, until the 10.4.10 software update, which must have written some files past the 8 GB portion of the drive. I can still boot from my hardware RAID; I just have to use a helper in XPostFacto, as if booting a FireWire drive.

    Just to clarify, I was talking about the ACARD ATA66RAID hardware RAID card. It works in OS 9 or OS X; when you set the DIP switch on the card to RAID 0 striping, the OS sees it as a large SCSI drive, and you can optionally install the OS 9 drivers with Disk Utility in OS X, or initialize it in OS 9 with Drive Setup. First you have to initialize each drive individually in normal mode before setting the striping switch. Booting OS 9 has a volume size limitation, though (<190 GB or 200 GB, not sure exactly), and my 120 GB + 120 GB volume exceeds that.
    I am using two individually cabled master drives. I think you can also stripe two slaves with this card, so you could then set up a software mirrored RAID from the two striped RAIDs; there's just not any more room in the G3 Desktop for more drives.
    Probably not too many folks use those cards, the ACARD 6860M; they were a lot more expensive when they first came out, but you might find a good deal on one now. You can probably get better I/O performance and a larger maximum drive size from ATA-100, ATA-133 or SATA PCI cards; I'm not sure of the latest price comparison on those.
    From memory, the built-in ATA gets around 16 MB/sec, the PCI66 can get around 40 MB/sec (and so can external FireWire), and the ATA66RAID can get up to 60 MB/sec. There's lots of variation in those numbers depending on usage. Not sure what numbers you get with ATA 100/133/SATA... those are also more $$$.

  • K8N Neo2-F Hardware RAID question

    Hi guys,
    Question for you. I have a Hitachi 160 GB SATA drive, and I installed x64. After all was good and well, I went out and got a second identical drive. I would now like to have these two drives mirrored with the hardware RAID. When I go into the RAID setup after POST, I can see both, add both, and mirror both. But when I reboot, all I get is the POST, and it just stops after showing that box that lists everything in the computer and which IRQs are assigned. No error messages saying no boot drive, no system disk, etc. If I switch it back, I can boot with the one drive. I also noticed that during the RAID config it asks me if I want to erase the drive. I always say no, because I'm scared it will erase the primary drive that I need to keep and not the new one for mirroring. Can anyone help me or confirm I'm doing this right? Thank you very much in advance.
    Matt
    -MSI K8N Neo2-F Socket 939 NVIDIA nForce3 Ultra ATX AMD Motherboard
    -AMD 64 4200+ and fan/heatsink that came with it - NO OVERCLOCK (YET)
    -2 - HITACHI Deskstar 7K250 HDS722516VLSA80 -13G0254 160GB 7200 RPM 8MB Cache Serial ATA150 Hard Drive
    -CORSAIR XMS 1GB (2 x 512MB) 184-Pin DDR SDRAM Unbuffered DDR 400 (PC 3200) Dual Channel Kit System Memory
    -ASPIRE ATX-AS520W BLACK ATX 520W Power Supply 115/230 V CB IEC 950/ TUV EN 60950/ UL 1950/ CSA 950
    -Windows XP x64 SP1
    -BFG GeForce 6800 GT AGP

    Your Windows installation doesn't have the nvraid drivers loaded, so you won't be able to boot that installation with the XP x64 disk placed in an enabled RAID array.
    But perhaps you can fool the system by doing this:
    Leave the XP x64 HD as it is (just as a regular SATA drive, say at SATA 1).
    Connect the new disk to the other SATA controller, say SATA 3 (as they are controlled in pairs).
    Set SATA 3 as RAID-enabled and create a RAID 1 array in the NVIDIA RAID BIOS with just this single drive in it.
    Boot Windows XP x64 from the normal non-RAID XP HD; it should detect that new hardware has been added and ask for drivers.
    Install the NVIDIA RAID drivers and reboot when asked.
    Then check that you have the RAID manager and RAID drivers enabled in the Windows installation.
    After this point you can proceed to do what you first tried (but without clearing the disk):
    Delete the workaround fake RAID you made with the new disk.
    Shut down and move the drive connectors again so that you have the two drives at either SATA 1+2 or SATA 3+4.
    Make a new array that includes the XP installation with only that disk in it; leave the other one out.
    Try to boot the system.
    If it boots, use the nvraid manager in Windows to add the second drive and let the array rebuild (sync the drives).
    Note that rebuilding takes several hours to complete.
    Look in the nvraid user guide for details.
    To fully understand NVIDIA RAID it's wise to read it from start to finish a couple of times, and to have a hardcopy printed out.
    A copy of the nvraid user guide rev. 2.0 is here:
    ftp://ftp.tyan.com/manuals/m_NVRAID_Users_Guide_v20.pdf

  • P6N Hardware RAID problems (SATA 6, 7)

    I didn't find the sig settings, which I find odd, but anyway...
    Here's my rig:
    P6N Diamond
    2 ASUS 8800 GTS in SLI
    4 WD 200 GB SATA2 HDDs
    (will go into detail about those lovely HDDs)
    SATA DVD burner (currently unplugged)
    IDE DVD-ROM plugged in
    2 GB of RAM (4 x 512 MB)
    Core 2 Duo 2.66 GHz
    OK, here's the problem. I wanted 4 HDDs in RAID 0, RAID 1, or 1+0 (which I assume means striping and mirroring), since I have four. It worked great, a fast machine, until something went wrong that blew out my original 4 GB of RAM (4 x 1 GB sticks), and no matter what RAM I put in, it did not work. I sent the board in for RMA, and ever since I got it back, the RAID has been acting up. My array would FAIL in 1+0 and in 0: the desktop would hang for a few moments, whether gaming, doing regular email/internet/music/video use, or idling, and after it recovered I would get a balloon saying access failed on drive X. It used to be one drive; now when one acts up, the other three follow. Same issue in RAID 0 and 1+0. Practically every day, when I have it in RAID 1+0, I find a "brand new" hard drive in my computer and the RAID has degraded. I know what that means, but HOW? How did it degrade? So I rebuild the array and, guess what: complete failure. It dies and I lose all my stuff on it. Now, before someone such as MSI comes in saying my HDDs may be at fault: they're not. In fact, two of them are currently in RAID 0 in my other desktop, working great, and the BEST PART is that it's FASTER there. On the P6N I got a 200 MB/s burst, about 120 MB/s continuous read in RAID 0, and I believe half that (90 MB/s) in RAID 1+0; in the older desktop I get a 330 MB/s burst (I forget the continuous figure, probably 90 again), and it is stable. Plus I ran hours of surface scans and error tests on the HDDs; they are in 100% good shape and ready to rock. So you guys know: the HDDs were on SATA 1-4, SATA 5 was my DVD burner, and 6-7 were blank. So I was using the SOFTWARE RAID, and it was not doing well at all. I did try using 2 drives; I haven't done a proper test yet, but I think it also failed. Seeing that I'm having a lot of software RAID problems and can't get anywhere with it, here's the new setup:
    2 HDDs ONLY physically installed (the other 2 are in the other computer), an IDE DVD-ROM plugged in, and drives 1 and 4 plugged into SATA 6 and 7, the black and red SATA headers... NO BLUE LIGHTS AT ALL. I have NEVER SEEN A BLUE LIGHT. As for JP1 and JP2: there are 4 pins, and I tried every jumper arrangement: two pins on JP1, one pin from JP1 to JP2, jumpers on both JP1 and JP2, and both the other way around (that's six ways, actually, my bad). No blue lights, not even a blink. I looked closely and I CAN'T even FIND THE LIGHT physically on the board, or I didn't recognize it (I hope I'm just being blind).
    The BIOS is set to IDE; it didn't work in RAID either. From what I read elsewhere, if things go well, the light comes on and the BIOS sees the array as a single drive, which means no floppy to install third-party drivers. AWESOME!!! But I can't get it working :( ... it's not detecting anything at all; it says I have NO HDDs installed.
    I sent the board in for a second RMA. Guess what they did: they pulled it out, put it in a box, and sent it right back to me. The moment it got there, I received an email saying it was being shipped back, and I thought, oh nice, they replaced it... they didn't TOUCH it. So I'm calling them up the next day to find out why they did that, and to try to set up the SIL RAID.
    Now, what I'm asking is: in case MSI doesn't help, how did you guys get yours to work, or can you tell me how to get it working? And if the HARDWARE RAID is a pain, then why is my RAID failing so much in software RAID? Does anyone know how to keep the array alive in software? I originally wanted software RAID because it supports 4 drives, but it keeps failing, so I'm dropping to two drives on the hardware RAID, and I can't even get it to do whatever it needs to do. Anything will help; right now I'm stumped. I came to this forum, spent all day looking through it, and tried some stuff. No luck :(
    I've been using an ASUS board in my older desktop for 2 years straight, more or less; it's still in RAID 0 with NO ERRORS. It's SATA1 only, so the HDDs max out at SATA1 speed, but with 2 SATA1 drives in RAID 0 it acts like 1 SATA2 drive, and I've never had to reinstall my OS in 2 years. And it gets better: the ORIGINAL motherboard for that old desktop was an MSI, and right now that original board is on my shelf because a little device fried it (I don't think it was MSI's fault, but I got fed up with that board and a lot of bugs; it was a Neo-F or something really old, lol).
    awesome a spell checker :D

    I don't know how two memory sticks would make a difference, but to answer your question: no, I haven't. The goal is to have all 4 sticks, and yes, they work great; I put them in my old system as well and was able to see an increase in performance even though it's two gigs. All my other hardware is working fine, because when the mainboard was sent in for repairs, I moved some parts over so I could play SOME games while I waited. As for the power supply, it has MORE than enough power: it is a 1200 W Thermaltake with four 12 V rails, two at 20 A and the other two at 36 A; the 5 V rail is 30 A or more. The 12 V figures are accurate; I was studying them so I could properly connect the 36 A rails to my video cards. All power cords are plugged in, and like I said, all the hardware is working fine, because it was all tested before and none of it was at fault.
    However, I will try memtest86 now. Does it need an OS installed? (I don't have one, because it won't see my HDDs on SATA 6 and 7.) I have to head out in a bit, but when I come back I'll check it out. The original RAM was working fine and everything was happy until the motherboard blew out and took the 4 RAM sticks with it, so I bought new RAM from a different brand, which is on the compatibility list for this board; the other one was not "compatible". Thank you, be back shortly...
    I forgot to mention before: I would like a good set of directions, besides the one in my manual, for how to set up the SATA 6 and 7 RAID. If worse comes to worst, I'm going to back up my old HDDs, which have been working great, move them over to the HARDWARE RAID, and see if it works. I will post what I find out, but in the meantime, the proper setup of the hardware RAID would be nice to know in case I did something wrong. Thank you.

  • Using a Mac Pro w/ Apple Internal Hardware RAID Card?

    Anyone using a Mac Pro with an Apple Hardware RAID Card (2010)?
    ( I have a 12 Core )
    Is it worth the $600-700 ?
    How much faster is it than software RAID 0?
    I see Harm's RAID tips chart .. but it doesn't include such options .. plus it is based on a PC system.
    Any tips would be great ..
    I find many areas where the application is slow / unresponsive .. and I'm not sure where the bottleneck is ... Maybe the hardware RAID will solve it?
    I really am impressed with CS5 .. it would be even better with some adjustments .. but I am absolutely disappointed with the sluggish performance on my top-of-the-line Mac Pro 12 Core.
    This is not what I expected for a $8,000 machine.
    Some users say that the port of CS5 for the Mac has taken a back seat to the Windows version at Adobe.
    I can't imagine that Adobe would not put 100% effort into Mac products.
    Go Team!!!

    I have a 2009 Mac Pro 3.33GHz Quad core w/ Apple RAID card, 16GB RAM from OWC, and Apple's Radeon HD 5870 GPU.
    I have RAID 0 set across 3x1TB drives internally, and the standard 640GB drive for OSX and all program files. I set all video assets, renders, previews and such on the 3TB RAID.
    This seems to work wonderfully. I built this system specifically to edit a feature film shot on P2 DVCProHD, and I've been impressed with how it handles it. This was all built prior to CS5, which took me by surprise. Had I known nVidia would become such a problem for Apple, I would have built a PC, but that's another story.
    I just started a new project in CS5 on the same system, this time using H.264 video from my Nikon D7000, and so far, it seems to play just as nicely, despite not having hardware acceleration via CUDA technology. Yellow bars on top, even. I haven't had any problems with clips taking a long time to populate on the timeline or any of that, so perhaps the RAID card helps there.
    All this aside, I've already decided to upgrade my RAID for another reason. Right now, my backup is performed via an eSATA-connected external drive through a PCI eSATA card. After every edit session, I dump everything on the RAID onto the external drive, and it goes much faster than the old FW800 transfer used to. I'm about to replace my Apple RAID card with an Areca card and set up a 4-bay RAID 3 via an SAS connection. This will allow for excellent data throughput while offering more security than my current RAID 0 / manual backup system, and free up the internal drives for backups, exports and render files.
    I believe in hardware RAID, but I'm not as knowledgeable as Harm and others are about it. I had my Mac built to order with the Apple RAID card, so I have no experience using Premiere with a software RAID. Due to my smooth experience using it, I think it was worth it, but plenty of people say the Apple RAID card is rubbish, and to go with Areca or Atto cards. I didn't know about them until after I built my system, and even though it will cost a couple thousand to upgrade my RAID from this point, I expect to have an even better system than I already have.
    I hope this helps, and feel free to ask any questions I didn't address.

  • Problem with Installation on Dell PowerEdge with Hardware RAID 1

    Hello Arch Linux community,
    I am a newbie with good knowledge of working on Linux, but not much of systems administration. I am very much interested in installing Arch Linux on my new desktop, which is a Dell PowerEdge with hardware RAID 1 (PERC .... controller). It has a Windows 7 OS on its first partition.
    I saw in the controller's BIOS menu that there are two 1 TB hard drives with labels like --:--:00 and --:--:01. They were partitioned into two logical volumes, which are visible once I boot into the Arch Linux live CD as /dev/sda (about 250 GB, with the Windows OS on it) and /dev/sdb (about 700 GB).
    Firstly, I am confused by the hard disk labels: even though they are logical volumes (i.e. combined by RAID 1), they are seen as /dev/sda and /dev/sdb. In the Arch Linux Beginner's wiki there is some description of configuring for RAID, which includes the mdadm or mdadm_udev module specification. I did include these modules and followed the installation instructions carefully. I was trying to install on /dev/sdb, with the following partitioning:
    Sector map of partitions on /dev/sdb  : 1542848512 : 735.7 GiB
    Disk identifier (GUID) : 967BF308-6E5E-43AD-AB2E-94EB975C3603
    First usable sector = 34 Last usable sector  = 1542848478
    Total  free space is 2023 sectors
    Number       Start          End          Size        Code    Name
                     34         2047                             FREE
    1              2048      2099199    1024.0 MiB      EF00    EFI System
    2           2099200      2103295       2.0 MiB      EF02    BIOS boot partition
    3           2103296      2615295     250.0 MiB      8300    Linux filesystem
    4           2615296    212330495     100.0 GiB      8300    Linux filesystem
    5         212330496    317188095      50.0 GiB      8300    Linux filesystem
    6         317188096   1541924863     584.0 GiB      8300    Linux filesystem
             1541924864   1542848478                             FREE
    The instructions on the Beginner's wiki are somewhat difficult for beginners to understand. In particular, I wasn't sure whether to make both an EFI System partition and a BIOS boot partition, or just an EFI System partition, so I made both as per the instructions.
    Is using UEFI compulsory on UEFI-based systems?
    A few more sub-headings, or instructions divided by usage scenario, would make the Beginner's wiki much more helpful for newbies like me.
    I am finally getting the following error when I select Arch Linux from the GRUB menu. Why can't it find the device? I shall wait for your initial responses, and will give more specifics of my installation to find out any wrong step I may have made.
    [ 0.748399] megasas: INIT adapter done
    ERROR: device 'UUID=40e603e9-7285-4ec8-8a06-a579358a52a0' not found. Skipping fsck
    ERROR: Unable to find root device 'UUID=40e603e9-7285-4ec8-8a06-a579358a52a0' .
    You are being dropped to a recovery shell
    Type 'exit' to try and continue booting
    sh: can't access tty; job control turned off
    [rootfs /]#

    Well, you can see your RAID partitions, you're getting GRUB to load, and you are even being dropped to the recovery shell, which means that the /boot partition is being found and is accessible. All very, very good things.
    I would boot into the Arch live CD/USB again, then check the UUID of the / (root) partition. To do this I just check the long listing of /dev/disk/by-uuid:
    ls -l /dev/disk/by-uuid
    Hopefully the UUID for the / (root) partition in /boot/grub/grub.cfg is incorrect, and all you need to do is change it from 'UUID=40e603e9-7285-4ec8-8a06-a579358a52a0' to the correct UUID.
    If the UUID is correct... then maybe the correct driver module is not compiled into the initramfs. You could try adding mdadm to the MODULES= list in /etc/mkinitcpio.conf, then rebuild the initramfs again:
    mkinitcpio -p linux
    Hmm... the whole /dev/sda and /dev/sdb thing... you know, maybe you need to change this line in /boot/grub/grub.cfg:
    set root='hd1,msdos1'
    The hd1 would be the equivalent of /dev/sdb (GRUB counts disks from hd0, so hd0 is /dev/sda), and msdos1 is the equivalent of the first partition on that disk (note this is from my disk; yours may correctly have different numbers and not be an msdos partition)... oh wait... hmm, I am fairly sure that line really points at the disk where the /boot partition is and has no relation to the / (root) partition... someone clear that up please. It is hard for me to recall, and I have my / (root) partition encrypted, so it is hard for me to make heads or tails of that one right now, but that could be the problem too, i.e. try changing hd1 to hd2 or hd0 or something.
    Oh, and it may be faster to just make temporary corrections to the GRUB menu by hitting the "e" key on the menu entry you want to change, then, I think, F10 to boot the modified entry. That way you don't need to keep booting into the live CD/USB.
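    For reference, a sketch of the mkinitcpio edit suggested above. On Arch, md assembly is usually provided by the mdadm_udev HOOK rather than a MODULES= entry, so the change might look like this (the rest of the hook list is assumed to be the stock one):
    # /etc/mkinitcpio.conf -- add mdadm_udev so the initramfs can assemble
    # the PERC-exposed md device before the root filesystem is mounted
    HOOKS="base udev autodetect modconf block mdadm_udev filesystems keyboard fsck"
    # then rebuild the initramfs:
    mkinitcpio -p linux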
    Last edited by hunterthomson (2012-09-20 05:52:33)

  • My hardware RAID 1 is only showing up in Disk Utility and not Finder.

    Last week I was finally able to take my early 2011 iMac in to have my Seagate 1 TB hard drive replaced for the recall.
    Late 2010 model iMac
    OS X 10.6.8
    OWC Mercury Elite Pro 4TB x 4TB Hardware RAID
    When I took it in, the RAID 1 was working just fine. When I finally got the iMac back and booted everything up, the RAID 1 was giving the dialog box that reads "The disk you inserted was not readable by this computer."
    I tried going into Disk Utility to see if I could verify the disk, and saw that it is coming up as two different volumes... and on those two volumes I cannot click Verify, as it is greyed out.
    I contacted OWC and they said they had never heard of this and offered two suggestions: try software like DiskWarrior, or, if that didn't work, take the drives out and try another external dock to see if it is the RAID hardware.
    If anyone has any ideas as to what could help with this situation, that would be great, as I was under the impression (and told by a number of other people) that having a RAID 1 as my backup was good and I should not need anything else. I just want to be able to get all the data off; then I will rebuild it or send it in to OWC.
    Thanks!!!!

    So a fault in the hardware RAID controller could make it not read properly? I am planning on having a good backup setup after this!
    As for the screenshot: the original was not set up with SoftRAID. When I got the computer back, I installed that to see if it might pick the array up and make it at least readable, to save the data on it. The two in question are the highlighted 4 TB drives, which before also read as one drive, as it was set up in the OWC enclosure as a hardware RAID.
