Dmraid with raid5 on 2.6.24

Hi there,
I have a RAID5 array on a RocketRAID 2600, and I also have an ICH9, so I can run dmraid instead. I'm really not fussed about performance, so I'm happy to run fake RAID.
I have found the RocketRAID drivers under Linux to be pretty unreliable (they just released a new version that also gives me nothing but trouble; it does not work on 2.6.24 even though it claims to).
So I'm quite happy to move away from RocketRAID. Anyway, the question: has anyone had success with dmraid and RAID5 under 2.6.24? Last time I tried, I got a complaint about a missing raid45 function in the kernel or something (it has apparently been renamed to raid456).
Thanks
Sam

Looks like there is a kernel patch: http://people.redhat.com/~heinzm/sw/dm/dm-raid45/
I'll try it out and see how I go.
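In case it's useful to anyone else, here's roughly what I'm planning to try (the dm-raid45 target/module name is my assumption from that patch page, so adjust for your tree):

# does the running kernel already offer a raid45 device-mapper target?
sudo dmsetup targets | grep -i raid
# if not, see whether a module by that name is available (name may vary):
sudo modprobe dm-raid45
# otherwise, patch and rebuild 2.6.24, assuming the patch applies cleanly:
cd /usr/src/linux-2.6.24
patch -p1 < /path/to/dm-raid45.patch
make menuconfig    # enable the RAID 4/5 target under device mapper
make && sudo make modules_install install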

Similar Messages

  • Need for multiple ASM disk groups on a SAN with RAID5??

    Hello all,
I've successfully installed Clusterware and ASM on a 5-node system, and I'm trying to use asmca (11gR2 on RHEL5) to configure the disk groups.
I have a SAN which was previously used for a 10g ASM RAC setup, so I'm reusing the candidate volumes that ASM has found.
I noticed in the previous incarnation that several disk groups had been created, for example:
    ASMCMD> ls
    DATADG/
    INDEXDG/
    LOGDG1/
    LOGDG2/
    LOGDG3/
    LOGDG4/
    RECOVERYDG/
Now, this is all on a SAN which basically has two pools of drives, each set up in a RAID5 configuration. Pool 1 contains ASM volumes named ASM1 - ASM32; each of these logical volumes is about 65 GB.
Pool 2 has volumes ASM33 - ASM48, each of which is about 16 GB.
I used ASM33 from pool 2 by itself to contain my cluster voting disk and OCR.
My question is: with this type of setup, would having so many disk groups as listed above really do any good for performance? With all of this on a SAN, with logical volumes on top of a couple of sets of RAID5 disks, would divisions at the disk group level (with external redundancy) accomplish anything?
I was thinking of starting with about half of the ASM1 - ASM31 'disks' to create one large DATADG disk group, which would house all of the database instances' data, indexes, etc. I'd keep the remaining large candidate disks for later growth.
I was going to use the pool of smaller disks (except the one already dedicated to cluster needs) as a decently sized RECOVERYDG, to house logs, flashback area, etc. That pool appears to be separate from pool 1, so there could possibly be some speed benefit there.
But really, is there any need to separate the disk groups, given a SAN with two pools of RAID5 logical volumes?
If so, can someone give me some ideas why, links on this info, etc.?
    Thank you in advance,
    cayenne

The best practice is to use 2 disk groups, one for data and the other for the flash/fast recovery area. There really is no need to have a disk group for each type of file; in fact, the more disks in a disk group (to a point, in what I've seen), the better for performance and space management. However, there are times when multiple disk groups are appropriate (not saying this is one of them, just FYI), such as backup/recovery and life-cycle management.
Typically you will still get a benefit from double striping, i.e. having a SAN with RAID groups presenting multiple LUNs to ASM, and then having ASM use those LUNs in disk groups. I saw this in my own testing. Start off with a minimum of 4 LUNs per disk group, and add in pairs, as this will provide optimal performance (at least it did in my testing).
You should also have a set of standard LUN sizes to present to ASM so things are consistent across your enterprise; the sizing is typically done based on your database size. For example:
    300GB LUN: database > 10TB
    150GB LUN: database 1TB to 10 TB
    50GB LUN: database < 1TB
As databases grow beyond a threshold, the larger LUNs are swapped in and the previous ones are swapped out. With thin provisioning it is a little different, since you only need to resize the ASM LUNs. I'd also recommend having at least 2 of each standard-sized LUN ready to go in case you need space in an emergency. Even with capacity management, you never know when something will consume space too quickly.
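If it helps, a minimal sketch of the two-disk-group layout (device paths here are made up, and external redundancy is assumed since the SAN is already doing RAID5):

sqlplus / as sysasm <<'SQL'
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
  DISK '/dev/asmdisk1','/dev/asmdisk2','/dev/asmdisk3','/dev/asmdisk4';
CREATE DISKGROUP FRA EXTERNAL REDUNDANCY
  DISK '/dev/asmdisk5','/dev/asmdisk6','/dev/asmdisk7','/dev/asmdisk8';
SQL

That gives each disk group the 4-LUN minimum mentioned above; later growth is ALTER DISKGROUP ... ADD DISK, in pairs.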
    ASM is all about space savings, performance, and management :-).
    Hope this helps.

  • Replacing Xserve, how to deal with RAID5+0 array

    We're finally looking at replacing our 8+ year old Xserve, which has served us faithfully, with a new Nehalem model. The question is what to do about the two XRAID arrays that we have attached via fibrechannel. Both are fully stocked with 14 disks apiece, configured as RAID5 but then combined into a single logical drive via software RAID0.
The question is: if we replace the old Xserve with a new one, will we have to completely rebuild the RAID5+0? If so, can this be done without data loss?
    We do have the means to migrate all the data off and then back on after reconstruction but that is, of course, a lengthy and thus unappealing option.
    Any help will be most appreciated.
    Thanks,
    Cary Talbot
    Vicksburg, MS

    Hi Tony -
    Many thanks for the reply but unfortunately neither thread you referenced addressed the issue at hand.
What I've got is two XRAID boxes with 14 disks each. All four of the RAID arrays (2 per box, of course) are RAID5. However, rather than having 4 separate logical volumes, we combined the RAID5 arrays together with a software RAID0 on the Xserve host. I understand that switching out the Xserve box shouldn't have any effect on the data stored on the XRAID disks themselves, but the question becomes: can we simply recreate the software RAID0 on the new Xserve without losing the data stored on the four RAID5 arrays? Has anyone tried this?
    Thanks for any advice/tips/experiences anyone can offer.
    Cary

  • KA 790 GX M, have problem with RAID5

Please help me; this is the third time I've had a problem with RAID. The first two times I simply reinstalled Vista. Now I can't.
Maybe there was something wrong with the data cable on one of the 3 HDDs in my SATA RAID5,
so I changed that cable and the power cables on the disks (maybe they were broken).
Now all 3 HDDs work well: they show up when I set the RAID option to IDE,
and all three appear in the BIOS POST.
When I set the RAID option back to RAID, the array loads... but the RAID BIOS shows one of the three disks as a standalone disk, not connected to the RAID!
I.e. it is inside the array, but not initialised. I can't describe this in a few words :( Maybe I need to make a screenshot if nobody can help me otherwise.
What I need... I need HELP!
I have 1.0 TB of information on this RAID array, so I can't simply reboot and reinstall everything.
Maybe there is a simple way to correct the state of this disk in the array? I.e. treat it as if it had been swapped for a new one,
and then initialise it?
Please help me.

Hello!
Please list your computer's specs: processor, memory, and PSU, for example.
I'm not very good at RAID. Have you possibly changed SATA ports? Ports 1-5 should work well together.

  • Raid5 and LVM recovery with single hdd failure

    Hi,
I have 5 hard drives and a UEFI motherboard, and the GPT partition table on each drive is something like this:
1 - 100MB UEFI partition (ef00) # needed for booting and stuff
2 - 100MB Linux partition (8300) # /boot
3 - 2T Linux partition (8300) # LVM with all other stuff
My RAID5 array is on /dev/sd[abcde]3, and on that I have LVM, partitioned into /, /home, and swap.
Now one hard drive has died (/dev/sdd) and I would just like to get my data off the remaining 4 drives, which should be doable with RAID5.
Booting doesn't work, so I boot from a live UEFI Arch USB, but I can't find the LVM volume group.
Could you guys please help me out here? There's some stuff on those drives I would really like to get back.
Or, if you know where else I could ask such a question so that others might help?
    Thank you, Zidar
PS: I have created a similar setup in VirtualBox so I can try various things.
    Last edited by zidarsk8 (2013-03-29 23:15:54)

    Can you please clarify what you mean by "booting doesn't work"? What sort of errors, etc...
    Similarly, what does "can't find the LVM Volume array" mean? What errors do you get activating it?
    As a general rule, posting problems without specific error messages is only going to solicit guesswork at best. Posting the errors up front saves everyone time and effort.
    https://wiki.archlinux.org/index.php/Fo … ow_to_Post
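In the meantime, the usual shape of the recovery from the live USB, assuming the array is Linux software RAID (md) - and note that device letters can shift on the live system:

mdadm --examine /dev/sd[abce]3      # confirm md superblocks on the survivors
mdadm --assemble --run /dev/md0 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sde3
                                    # --run starts it degraded with 4 of 5 members
vgscan                              # LVM should now see the PV on /dev/md0
vgchange -ay                        # activate the volume group
lvs                                 # list the logical volumes
mount /dev/<vg>/<lv> /mnt           # names depend on the original setup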

  • Install Solaris 5.10 64 bit with Raid 1

    Hi Oracle Support
At present I have two X2270 servers, as below:
Server 1 has 4 hard disks (1 hard disk = 1TB)
Server 2 has 1 hard disk (1 hard disk = 1TB)
I want to move one HDD from server 1 to server 2, i.e. 1 of the 4 HDDs in server 1 will move to server 2.
After that it should be: server 1 has 3 x 1TB HDDs and server 2 has 2 x 1TB HDDs.
Then server 1 will be configured with RAID5 and server 2 with RAID1.
I configured RAID1 on server 2, but when I install Solaris, I don't see the hard disk configured as RAID1.
    Thanks
    LuyệnTV

Just because you have two disks doesn't mean that Solaris will automatically use them as mirrors. You have to set this up, using SVM, ZFS, VxVM, or the hardware RAID controller. Sometimes you can do this as part of the install; often it's done afterwards.
You haven't said what type of root file system your Solaris is on, so it's not possible to give you a procedure to mirror the boot drive.
If you use hardware RAID, it will zero any data on the drives, so that's probably out. I suspect you are using hardware RAID management on your first system, as Solaris can't boot from a software striped or RAID5 device. That had to be set up before Solaris was installed on that system.
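For what it's worth, if the root file system turns out to be ZFS, mirroring the boot disk after install looks roughly like this (device names are placeholders; on Solaris 10 x86 you also need boot blocks on the new disk):

zpool status rpool                      # find the current root disk
zpool attach rpool c0t0d0s0 c0t1d0s0    # attach the new disk as a mirror
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
zpool status rpool                      # wait for the resilver to finish

For a UFS root you'd be looking at SVM (metainit/metattach) instead, which is a longer procedure.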

  • 2010 Mac Pro with Apple RAID card - "Drive carrier 00:03 removed"

    I've got a 2010 Mac Pro with the Apple RAID card and 4 internal identical Seagate ST32000641AS 2TB drives inside.
    I've never had a problem with the RAID card (either battery or in operation) except for one thing:
    Almost every time I shut the machine down (which I try to do as rarely as possible), it keeps losing Drive 3, the spare.
    (See RAID Utility event log snippet below.)
    [I wish I could understand why this Apple RAID card keeps losing the spare across power-cycles/reboots, but I digress.]
    So I just adopted it as the spare and it spawns the rebuild and many hours later, everything's fine again.
    ... until the last time I rebooted (Jan. 2nd).
    After the usual "I lost the spare" message upon login, it began the rebuild.
    But this time, about 11 1/2 hours later the rebuild stopped and RAID Utility reported "Drive carrier 00:03 removed".
    Bay 3 is now missing in the Controller view.
    I powered off the system last night, and left it off overnight.  Today I took all the drives out, blew the dust off and reseated them.
    Same thing.  Drive 3 is still missing, and now the rebuild task aborted.
    The drive bay location shouldn't matter, right?  Can I swap the Bay 3 drive with the Bay 4 drive?
    I figure one of two things would happen:
    (1) It will report Bay 3 as being there but Bay 4 as being missing, same situation as is now.  Meaning the problem is in the Bay 3 spare drive.
    (2) It will report Bay 3 as being missing (despite a 'good' drive being present) and Bay 4 will be the (unattached) spare, and the RAID set will be unviable.  Meaning the problem is with the drive bay itself, not whatever drive is plugged into it.
    If "Drive carrier removed" is trying to tell me the disk is bad, why do I not see any log messages about it?  I suppose a spare that never gets used could go bad from lack of use, but ... no messages at all?
    Hesitant to plunk down $160 on a new spare disk if the present one isn't actually dead ...
    % sudo raidutil list raidsetinfo
                                         Total     Avail
    Raidsets      Type       Drives       Size      Size  Comments
    RS1           RAID 5     1,2,4      5.23TB    0.00MB  Rebuild: 0% complete            
    % sudo raidutil list driveinfo
    Drives  Raidset       Size      Flags
    Bay #1  RS1             2.00TB   IsMemberOfRAIDSet:RS1 IsReliable
    Bay #2  RS1             2.00TB   IsMemberOfRAIDSet:RS1 IsReliable
    Bay #4  RS1             2.00TB   IsMemberOfRAIDSet:RS1 IsReliable
    Event log snippet:
    Saturday, January 11, 2014 8:15:19 AM PT
    Background task aborted: Task=Rebuild,Scope=DRVGRP,Group=RS1
    informational
    Friday, January 3, 2014 3:50:28 AM PT
    Drive carrier 00:03 removed
    informational
    Thursday, January 2, 2014 4:29:37 PM PT
    Marked drive in bay 3 as a global spare
    informational
    Thursday, January 2, 2014 4:29:28 PM PT
    Adopted drive in bay 3
    informational
    Thursday, January 2, 2014 4:28:34 PM PT
    Degraded RAID set RS1 - No spare available for rebuild
    critical
    Saturday, December 7, 2013 5:35:17 PM PT
    Marked drive in bay 3 as a global spare
    informational
    Saturday, December 7, 2013 5:35:08 PM PT
    Adopted drive in bay 3
    informational
    Saturday, December 7, 2013 5:34:11 PM PT
    The "RedundancyScrub" command could not be executed. (Invalid request or invalid parameter in the request.)
    warning
    Saturday, December 7, 2013 5:34:03 PM PT
    Degraded RAID set RS1 - No spare available for rebuild
    critical
    Friday, December 6, 2013 4:56:47 AM PT
    Battery finished conditioning
    informational
    Thursday, December 5, 2013 9:53:44 PM PT
    Battery started scheduled conditioning cycle (write cache disabled)
    informational
    Friday, September 6, 2013 10:53:11 PM PT
    Battery finished conditioning
    informational
    Friday, September 6, 2013 3:41:33 PM PT
    Battery started scheduled conditioning cycle (write cache disabled)
    informational
    Saturday, June 8, 2013 3:41:35 PM PT
    Battery finished conditioning
    informational
    Saturday, June 8, 2013 8:37:49 AM PT
    Battery started scheduled conditioning cycle (write cache disabled)
    informational
    Thursday, May 30, 2013 4:09:51 PM PT
    Marked drive in bay 3 as a global spare
    informational
    Thursday, May 30, 2013 4:09:42 PM PT
    Adopted drive in bay 3
    informational
    Thursday, May 30, 2013 4:08:59 PM PT
    Degraded RAID set RS1 - No spare available for rebuild
    critical
    Thursday, May 30, 2013 3:32:29 PM PT
    Degraded RAID set RS1
    warning

    Power Pig wrote:
    "Hesitant to plunk down $160 on a new spare disk if the present one isn't actually dead ...
    % sudo raidutil list raidsetinfo
                                         Total     Avail
    Raidsets      Type       Drives       Size      Size  Comments
    RS1           RAID 5     1,2,4      5.23TB    0.00MB  Rebuild: 0% complete     
    I was talking about this.
This looks like a RAID 5 setup without the parity to me. I would say it's basically a RAID 0.
Also, you mentioned "I guess that previous spare must really have gone 'bad', despite having never been used!"
I wiped everything out, selected the 3 disks, and clicked "Create RAID Set" with RAID5 selected and the "Use unassigned drives as spares" option checked. The 3rd disk has always been marked as the spare ever since. You do not get a choice to create a "RAID5 setup without the parity"; if I had wanted RAID0, I would have chosen RAID0! raidutil now says:
    % sudo raidutil list raidsetinfo
                                         Total     Avail
    Raidsets      Type       Drives       Size      Size  Comments
    RS1           RAID 5     1,2,3,4    5.23TB    0.00MB  No tasks running
    Every time the system would 'lose' Drive 3 on boot, I would just keep rebooting until it was 'found', and then manually re-assign the now-'floating' drive as the spare - and the RAID would rebuild.
    There was never a problem until the day that prompted this post - when it would not find Drive 3 no matter what I did.  I turned the Mac Pro off for several hours until the drives had all cooled down and it still did not find the drive.  I left the machine on but unmounted the degraded volume until I got the new replacement drive.
    I really don't understand what you are getting at. It's like you are trying to tell me I set it up as a 4-drive RAID5 with no parity(!) and that I was in grave danger because one of the disks was gone.  The actual RAID contents (spread across Drives 1, 2 and 4) were never in danger, unless a 2nd disk had failed while the spare was not seen by the Apple RAID card.  I wasn't too worried about that happening.
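(For what it's worth, while a rebuild is running I just poll the same command from Terminal instead of watching RAID Utility:

while true; do sudo raidutil list raidsetinfo; sleep 600; done

The "Rebuild: n% complete" comment ticks up as it goes.)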

  • Changing disk on RAID5

    Hello!
I'm about to change a disk on our RAID5 Xserve RAID.
The RAID is currently protected, but I'm changing the disk because of a previous disk error.
I have read the RAID Admin 1.2 manual and the Xserve RAID User's Guide from www.apple.com/server/documentation. However, these manuals contain a lot of contradictory information...
The manual says that the system is hot-swappable and one should be able to change both disks and other components while the system is running.
Then we are told not to remove drives that are part of the RAID, as that could result in data loss...
    If some one could clarify this for me that would be great:
    * Can I remove a disk from an Xserve RAID with RAID5 while it is running?
    * When I replace the disk with the new disk I received from Apple do I need to format it?
    * Or will the disk automatically be prepared and included in the RAID?
    Any help with this would be greatly appreciated!
    Thanks!

The manual says that the system is hot-swappable and one should be able to change both disks and other components while the system is running.
Then we are told not to remove drives that are part of the RAID, as that could result in data loss...
    You should be cautious before pulling any drive from an array. Pulling a drive could result in data loss since there are many factors to consider.
    For example, RAID 5 can tolerate a disk failure (or removal) RAID 0 can not, yet they are both 'arrays'.
    Even in the case of RAID 5, pulling the wrong disk could be disastrous. It will also take several hours to rebuild the array and if another drive fails during that time window you'll be in trouble, too.
    I think that's the sentiment behind the second statement - it could result in data loss depending on the particular circumstances.
    In this case, if you're sure you have the right disk set, and there are no other failures on the RAID 5 array, you should be OK.
    As for swapping the disk itself, yes, they are hot-swappable, as are certain other components such as the power supply and fan trays (but not the controller cards).
    You can just pull the drive and replace it with another. The XServe RAID will automatically prepare the drive and start the rebuild process.

  • RAID5 for oracle database

    Hi,
Can you please confirm whether a RAID5 configuration is suitable for an Oracle database? Database size: 50GB; OS: HP-UX; version: 10gR2.
    Sincerely
    Shaimaa

    user8913464 wrote:
what is the impact of this heavy disk write penalty? let's say that each disk is 300GB, the number of disks is 4, and the database size is 50GB
That depends entirely on the I/O system and I/O fabric layer.
Newer systems can receive the RAID 5 write into memory cache on the storage system (no parity block calculation overhead, no parity block write overhead). The parity "stuff" is then done when flushing that memory buffer to disk. Kind of like an Oracle delayed block cleanout, where the changer of the block data does not pay the penalty for cleaning and writing the block.
How well this actually works in practice, you need to ask your storage vendor. They should be able to provide white papers and test cases on the subject.
There are also issues with RAID5 in terms of data integrity and recovery, and some may even suggest using an additional parity disk.
That said, Oracle recommends (as already commented) SAME - Stripe And Mirror Everything - which is basically RAID10.
Keep in mind that storage architecture is changing fast: SSD and flash technology providing large storage caches, even complete storage systems without a single platter of spinning rust. Then there's storage virtualisation, storage protocols, and I/O fabric layers... It is not as simple a topic as it was in the past, and RAID is no longer as clean-cut and straightforward as it used to be.
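As a back-of-the-envelope illustration of that penalty (numbers purely illustrative: assume ~150 IOPS per spindle, 4 disks, no write cache):

raw capability: 4 x 150 = 600 IOPS
RAID 5 small write = 2 reads + 2 writes = 4 I/Os, so ~600 / 4 = 150 write IOPS
RAID 10 small write = 2 writes, so ~600 / 2 = 300 write IOPS

A decent storage cache hides much of this until destage time, which is exactly the vendor-dependent part described above.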

  • External hard drives - what brand do you use

I'm looking to buy an external hard drive. Maybe I'm a bit paranoid, but I'm thinking about getting a RAID1 setup. I have a lot of valuable pictures in iPhoto and a lot of music in iTunes, and would not be a happy camper if my internal HD failed.
    Would you give me some ideas as to the external HD brand/model you use - anybody use a RAID setup for redundancy ?
    Thanks,
    Bob

Hi Bob. I wouldn't go RAID at all; though RAID1 is pretty good, you still have problems with RAID, which is the whole reason they came up with RAID5, RAID10, etc.
I think two independent drives are better than 2 tied together, and there are many options for backups.
Almost more important than the external drive itself is the chipset in the enclosure, Oxford being good, Prolific and others not so much.
My Books have had far less than glowing reviews for Macs...
    http://discussions.apple.com/thread.jspa?threadID=2191614&tstart=0
http://x704.net/bbs/viewtopic.php?f=12&t=3219&p=42023&hilit=western+digital#p42023
http://x704.net/bbs/viewtopic.php?f=6&t=2939&p=34120&hilit=western+digital#p34120
    If you try LaCie then have a few spare Power Supplies on hand.
I'd stay away from bus-powered drives and 2.5" drives as well. Here are some choices; I'd go FireWire, and drives with a 5-year warranty over a 3-year warranty...
    http://eshop.macsales.com/shop/firewire/
    Lately I use the Voyager...
    http://discussions.apple.com/thread.jspa?messageID=10618536&#10618536
    Also see the last post there for the OP's review.

  • Add a disk to RAC

    Hello,
    I have a RAC database with 2 nodes and a cluster.
In my cluster I have 3 disks with RAID5. I need to add 2 disks in RAID0 where I will put my redo log files, because I have a long wait time on "log file parallel write".
I'm new to RAC and clusters - can I add disks to a cluster with a different RAID level?

You could mix any kind of RAID (though of course it's not recommended). Indeed, redo logs, with their high write activity, should not be put on RAID5.
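As a sketch of the follow-on step once the new disks are presented to the database (the +REDO disk group name, group numbers, and size below are placeholders):

sqlplus / as sysdba <<'SQL'
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 5 ('+REDO') SIZE 512M;
ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 6 ('+REDO') SIZE 512M;
ALTER SYSTEM SWITCH LOGFILE;
-- drop each old group only once V$LOG shows it INACTIVE:
ALTER DATABASE DROP LOGFILE GROUP 1;
SQL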
    Werner

  • DB Installation Design Decision

I am planning to install Oracle 9i on a Solaris platform. I need to know the best way to install it on a RAID system. I guess I am going to go with RAID5 for the Oracle Home, but what kind of RAID should I use for the datafiles? Or should I even use RAID for datafiles and system space at all?
Give me the best approach for installing Oracle on Solaris. I am interested in the kind of RAID that is best suited for Oracle 9i under Solaris. Also let me know whether RAID is even the best way to go.
    Any I mean any help/insight will be really appreciated.
    thank you.

Hi,
Skip this step and continue the installation; after completing the setup, run update statistics from DB13.
Regards
Vishal

  • Raid utility

My new Mac Pro with OS X Snow Leopard Server arrived a few weeks ago, and since then I have been trying to configure the RAID (5) set via RAID Utility, but it keeps freezing at the final percentage of the blue "Initializing..." bar. I have tried rebooting several times, and also deleting the RAID set or the volume, but I still get no response or progress. Any suggestions?

    Although Apple advertises the RAID utility as a way to turn multiple disks into a single RAID without data loss, you cannot consolidate or expand the capacity of an existing volume.
    My guess as to what happened on my machine is that the RAID utility miscalculated the available space in the RAID set and failed to automatically create my new, second volume.
    So you have a choice: either make a second volume out of the remaining space in your RAID set or else delete the volume, create a new volume with all remaining space and either reinstall or restore from backup. Since setting up the RAID was the first thing I did after opening the box, I chose to just reinstall the OS. I don't see the benefit in having multiple volumes if they are all from the same RAID set.
In either case, with RAID5 you're only going to get 2.5-3TB from four 1TB drives due to the parity data. I got a single 730GB volume from four 300GB drives, but it's worth it to me to get the increased stability of RAID5 over simple striping (since four drives are four times as likely to have a failure as a single drive by itself).
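(Quick capacity check: RAID5 usable space is roughly (n - 1) x the per-disk size, since one disk's worth of space holds the distributed parity; four 1TB drives therefore yield about 3TB before formatting overhead.)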

  • HW RAID Config for Oracle DB on Windows

    We are currently running Oracle 8.0.5 on Windows 2000 Advanced Server. This is a DELL PowerEdge 6600 server with 8 73GB Ultra SCSI Hard Drives. We will be upgrading to Oracle 9i in the future. Question: What would be the recommended Hardware RAID Configuration? We've heard that RAID 10 would be a good configuration to implement. What would be the best way for us to configure our server? We would like a high fault tolerance, good performance system. Our database isn't very large in size (10GB). By the way, switching to a different operating system is not an option for us.

    Hi
We use 9i on Win 2k3 with RAID5; it is fault tolerant, and I think RAID5 is the best configuration for performance, security, and fault tolerance. But keep in mind that RAID5 carries a small-write penalty.
It is also suggested by many consultants that if you're using RAID you should put the control files and the redo logs on disks outside the RAID array, but that configuration is for very large DBs.
RAID5 is working perfectly for me: I have dual Xeon 2.4s, 4 x 34 GB 15,000 RPM SCSI HDDs, and 4 GB of RAM; of course, our DB is almost 30 GB.
    Regards

  • How many disk groups for +DATA?

    Hi All,
Does Oracle recommend having one big, shared ASM disk group for all of the databases?
In our case we are going to have 11.2 and 10g RAC running against 11.2 ASM...
Am I correct in saying that I have to set ASM's compatibility parameter to 10 in order to be able to use the same disks?
Is this a good idea? Or should I create another disk group for the 10g DBs?
I'm assuming there are features that will not be available when the compatibility is reduced to 10g...

    Oviwan wrote:
what kind of storage system do you have? NAS? what is the protocol between server and storage? TCP/IP (=> NFS)? FC? ...
if you have storage with several disks then you mostly create more than one LUN (RAID 0, 1, 5 or whatever). if the requirement is a 1 TB disk group, then I would not create a single 1TB LUN; I would create 5 x 200GB LUNs, for example, just in case you have to extend the disk group with a LUN of the same size. if it's one 1TB LUN then you have to add another 1TB LUN; if there are 5 x 200GB LUNs then you can simply add 200GB.
I have nowhere found a document that says: exactly 16 LUNs per disk group is best. it depends on OS, storage, etc...
so if you create a 50GB disk group I would create just one 50GB LUN, for example.
hth

Yes, it's NAS, connected using iSCSI. It has 5 disks of 1TB each, configured with RAID5. I found the requirements below for ASM... they indicate 4 LUNs as a minimum per disk group, but don't clarify whether that's for external redundancy or for the ASM redundancy types.
    •A minimum of four LUNs (Oracle ASM disks) of equal size and performance is recommended for each disk group.
    •Ensure that all Oracle ASM disks in a disk group have similar storage performance and availability characteristics. In storage configurations with mixed speed drives, such as 10K and 15K RPM, I/O performance is constrained by the slowest speed drive.
    •Oracle ASM data distribution policy is capacity-based. Ensure that Oracle ASM disks in a disk group have the same capacity to maintain balance.
    •Maximize the number of disks in a disk group for maximum data distribution and higher I/O bandwidth.
    •Create LUNs using the outside half of disk drives for higher performance. If possible, use small disks with the highest RPM.
    •Create large LUNs to reduce LUN management overhead.
    •Minimize I/O contention between ASM disks and other applications by dedicating disks to ASM disk groups for those disks that are not shared with other applications.
    •Choose a hardware RAID stripe size that is a power of 2 and less than or equal to the size of the ASM allocation unit.
    •Avoid using a Logical Volume Manager (LVM) because an LVM would be redundant. However, there are situations where certain multipathing or third party cluster solutions require an LVM. In these situations, use the LVM to represent a single LUN without striping or mirroring to minimize the performance impact.
    •For Linux, when possible, use the Oracle ASMLIB feature to address device naming and permission persistency.
    ASMLIB provides an alternative interface for the ASM-enabled kernel to discover and access block devices. ASMLIB provides storage and operating system vendors the opportunity to supply extended storage-related features. These features provide benefits such as improved performance and greater data integrity.
One more question, about fdisk partitioning: is it correct that we should create only one partition per LUN (5 x 200GB LUNs in my case)? Is that because this way I will have a more consistent set of LUNs (in terms of performance)?
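What I was planning to run is just this (device names are examples; my understanding is that a single whole-LUN partition keeps every ASM disk identical in size and layout):

for d in sdb sdc sdd sde sdf; do
    parted -s /dev/$d mklabel gpt mkpart primary 1MiB 100%
done
# then, if using ASMLIB, mark each partition, e.g.:
oracleasm createdisk VOL1 /dev/sdb1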
