SAN vs ASM RAID: which to adopt?

Hi,
We have a system using ASM on a SAN drive. The SAN has hardware RAID, while ASM also does the same kind of thing, since as I understand it ASM is based on SAME (Stripe And Mirror Everything). Do you think we have multiple RAID layers in the system, and should we disable the hardware RAID? Also I think the hardware RAID is RAID 4. Please correct me if I am wrong.
regards
Nick

RAID 1+0 or 0+1 is an implementation of striping and mirroring.
RAID 5 is an implementation of striping with parity (where the parity stripe is scattered among the disks, with RAID 4 the parity stripes are on specific disks)
see: http://www.baarf.com
Please mind that the baarf website emphasizes the (write) performance penalty of using RAID levels with parity. Even with SANs this is still very true (it's inherent in the way parity is implemented).
ASM normal redundancy is essentially a mirror implementation. The implementation of ASM normal redundancy differs subtly from the way RAID mirroring works.
This means that RAID 1+0/0+1 at the SAN level is one way of mirroring, and ASM normal redundancy is another way of keeping mirror copies of blocks. Having RAID mirroring at the SAN level plus normal redundancy at the ASM level therefore means you end up with 4 copies on the same storage box.
So I would say there's little benefit in having ASM normal redundancy on top of a mirrored stripe set.
As for advice on what to do: most ASM implementations use external redundancy, which means the redundancy of the SAN is used. I think this makes sense.
Using normal redundancy makes sense when using local (non-RAID) disks, or when you have multiple SANs.
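To make that concrete, here is a minimal sketch of both options as they could be created from SQL*Plus on the ASM instance; the diskgroup names and disk paths are hypothetical:
-- External redundancy: the SAN's RAID does the mirroring, ASM only stripes.
CREATE DISKGROUP data_ext EXTERNAL REDUNDANCY
  DISK '/dev/mapper/san_lun01', '/dev/mapper/san_lun02';
-- Normal redundancy: ASM keeps two copies of every extent itself,
-- which pays off with plain local disks or with disks spread over two SANs.
CREATE DISKGROUP data_nrm NORMAL REDUNDANCY
  FAILGROUP fg_san1 DISK '/dev/mapper/san1_lun01'
  FAILGROUP fg_san2 DISK '/dev/mapper/san2_lun01';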

Similar Messages

  • What is best practice for using a SAN with ASM in an 11gR2 RAC installation

    I'm setting up a RAC environment. Planning on using Oracle 11g release 2 for RAC & ASM, although the db will initially be 10g r2. OS: RedHat. I have a SAN available to me and want to know the best way to utilise that via ASM.
    I've chosen ASM as it allows me to store everything, including the voting and cluster registry files.
    So I think I'll need three disk groups: Data (+spfile, control#1, redo#1, cluster files#1), Flashback (+control#2, redo#2, archived redo, backups, cluster files#2) and Cluster - cluster files#3. So that last one is tiny.
    The SAN and ASM are both capable of doing lots of the same work, and it's a waste to get them both to stripe & mirror.
    If I let the SAN do the redundancy work, then I can minimize the data transfer to the SAN. The administrative load of managing the discs is up to the Sys Admin, rather than the DBA, so that's attractive as well.
    If I let ASM do the work, it can be intelligent about the data redundancy it uses.
    It looks like I should have LUNs (Logical Unit Numbers) with RAID 0+1, and then mark the disk groups as external redundancy.
    Does this seem the best option ?
    Can I avoid this third disk group just for the voting and cluster registry files ?
    Am I OK to have this lower version of Oracle 10gR2 DB on a RAC 11gR2 and ASM 11gR2?
    TIA, Duncan

    Hi Duncan,
    if your storage uses SAN RAID 0+1 and you use "External" redundancy, then ASM will not mirror (only stripe).
    Hence theoretically 1 LUN per diskgroup would be enough. (External redundancy will also only create 1 voting disk, hence only one LUN is needed).
    However there are 2 things to note:
    -> Tests have shown that for the OS it is better to have multiple LUNs, since the I/O can be better handled. Therefore it is recommended to have 4 disks in a diskgroup.
    -> LUNs in a diskgroup should be the same size and should have the same I/O characteristics. If you keep in mind that your database may one day need more space (more disks), then you should use a disk size that can easily be added without wasting too much space.
    E.g:
    If you have a 900GB database, does it make sense to only use 1 LUN of 1TB?
    What happens if the database grows, but only slightly above 1TB? Then you would have to add another 1TB disk... You lose a lot of space.
    Hence it makes more sense to use 4 disks of 250GB each, since the "disks" needed to grow the disk group can be added more easily (just add another 250GB disk).
    O.k. there is also the possibility to resize a disk in ASM, but it is a lot easier to simply add an additional LUN.
    PS: If you use a "NORMAL" redundancy diskgroup, then you need at least 3 disks in your diskgroup (in 3 failgroups) to be able to handle the 3 voting disks.
    Hope that helps a little.
    Sebastian
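    To illustrate the growth argument above, a hedged sketch (the diskgroup name, disk name and device path are made up): adding another 250GB LUN is a single statement and ASM rebalances online, whereas growing an existing disk means resizing the LUN on the SAN first.
    -- Add one more 250GB LUN; ASM redistributes extents in the background.
    ALTER DISKGROUP data ADD DISK '/dev/mapper/asm_disk05' REBALANCE POWER 4;
    -- The resize alternative, after the LUN itself has been grown on the SAN.
    ALTER DISKGROUP data RESIZE DISK data_0003 SIZE 500G;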

  • Repair Raid which was created under Panther

    Recently a RAID 1 on one of our servers degraded and one of the discs is now shown as "spare". I tried to use "diskutil repairMirror" as described in the Apple Knowledge Base, but the command says that this is a 1.x RAID (which is correct, since it was created under Panther) and has to be converted first.
    After some googling it seems to me that it is not a good idea to do the "convertRAID" command on a live system. So what should I do now? I think I have two options:
    1) Boot with an old Panther DVD and repair the mirror, then boot from a Tiger DVD and convert the RAID.
    2) Boot from Tiger DVD, convert the RAID and then repair it.
    What do you think is the better option? Or do you have better ones? I'd love to hear them!
    Thanks, David

    If you have the means to do option 1, I would suggest that one. Repairing any damage before the conversion makes the most sense. However, since the server will be down, you can look at this as an opportunity to simply start from scratch. If you have the means, you can capture your volume to a disk image. Then, destroy the old mirror and recreate a new one under Tiger. Once the new one is created (and you have no OS 9 drivers), then restore the disk image back to the new mirror. Obviously, if your disk is both boot and data, you may need a lot of space.
    Good luck and hope it helps.

  • R12 - HP intel servers -Linux- SAN ... Which RAID !

    Hello,
    We are about to put everything together: R12 on HP Linux Intel servers, HP 8000 SAN storage (a mix of 16 and 32 GB hard disks).
    Now that R12 is out of the box, the issue is how to choose which RAID to go with???
    Any recommendations? Shouldn't we avoid reinventing the wheel? :)
    Firas

    Firas,
    Have a look at the following thread.
    RAID 1 Vs RAID 0+1
    Regards,
    Hussein

  • ASM RAID 6

    We might have to break the current OCFS2 implementation (on SAN) and replace it with ASM on raw devices. What is the difference when it comes to storage consumption? Do we gain or lose disk space, either case - how much?
    SLES9
    10.1.0.5
    Appreciate your thoughts

    - Did external redundancy for ASM exist in 10.1? Yes, it exists in 10.1 also.
    - ASM vs. OCFS2 from a file system perspective - apart from the manageability issue, why would ASM be recommended over OCFS2? You can say that there are many new features being added to ASM, while there has not been much development in OCFS2.
    -Amit
    http://askoracledba.wordpress.com/

  • Backup SAN to JBOD RAID on same switch?

    I've got a two-controller, one-client, and two Xserve RAID SAN configuration, newly upgraded and healthily running 10.4.6/xsan 1.3...
    And now, we've upgraded our disk to disk backup, and I need to find the fastest* way to copy around 5TB of SAN onto a third Xserve RAID unit.
    Ideally, I would NOT do this over ethernet (our best performance for a straight rsync/scp is around 1TB/day, too slow!), but would take advantage of the two open fiber ports on our SAN switch, and just plug in the JBOD Xserve to do a "local" disk to disk backup.
    My question is simple: is it possible for xSan and a "normal" RAID to co-exist on the same switch? I know having multiple SANs is bad/impossible/frowned upon, but will this config work, and more importantly, how* do I make it work, and MOST importantly, will this be any faster than simply copying over a single Gig-E link (my guess is yes).
    Thanks!
    -deano

    Just my gut - while we do more-than-desired regular maintenance on the san itself, and have accumulated a lot of Xsan knowledge that way, the same cannot be said for our fiber switching... Still, after checking things out on the switch and docs, it seems that the zoning has little/nothing to do with the actual fiber addressing scheme (no changes needed to the individual fiber-attached devices), but sits on top* of all that.
    I guess the VLAN thing threw me a bit - when we switched our VLAN setup for networking, it required a ton of client and server-side reconfiguration. This looks* a lot less dangerous, but if anyone has experience zoning an emulex 355, tips would be appreciated.
    Thanks! I'm marking this one solved for now, even though I won't be able to get to it until the weekend.

  • How to "boot from SAN" from a LUN which belongs to other solaris Machine.

    Hi all
    I have installed Solaris on a LUN (boot from SAN).
    Then I assigned the same OS LUN to another machine (the hardware is exactly the same), but now the new machine detects the OS, then reboots and kicks off.
    I have tried changing vfstab settings.
    Can someone help me??
    Thanks in advance.
    sidd.

    disable IDE RAID and ensure SATA is enabled in BIOS
    disconnect any other IDE HDDs before you install Windows to your SATA drive; they can be reconnected again afterwards.
    make sure that the SATA drive is listed first in 'Hard Disk Priority' under 'Boot Order' in BIOS

  • iSCSI or SAN, NFS+ASM

    hi
    My question may look stupid, but it's a very simple question.
    Do I need any iSCSI or SAN to install Clusterware 11gR2 on OEL 5.4 x86_64?
    regards

    Gagan Arora wrote:
    Try as root on
    node1
    # /etc/init.d/oracleasm createdisk DATA <node2 nfs share>
    #/etc/init.d/oracleasm createdisk FRA <node1 nfs share>
    on node2
    #/etc/init.d/oracleasm scandisks
    #/etc/init.d/oracleasm listdisks
    DATA
    FRA
    if you don't want to use NFS you can create iSCSI targets on each node and create an iSCSI initiator on each node.
    I don't want iSCSI.
    >
    Did you mean exactly this?
    [root@rac-1 ~]# mount /dev/sdb10 /u02
    [root@rac-1 ~]# vi /etc/exports
    [root@rac-1 ~]# exports -a
    bash: exports: command not found
    [root@rac-1 ~]# export -a
    [root@rac-1 ~]# srvice nfs restart
    bash: srvice: command not found
    [root@rac-1 ~]# service nfs restart
    Shutting down NFS mountd:                                  [  OK  ]
    Shutting down NFS daemon:                                  [  OK  ]
    Shutting down NFS quotas:                                  [  OK  ]
    Shutting down NFS services:                                [  OK  ]
    Starting NFS services:                                     [  OK  ]
    Starting NFS quotas:                                       [  OK  ]
    Starting NFS daemon:                                       [  OK  ]
    Starting NFS mountd:                                       [  OK  ]
    [root@rac-1 ~]# mount -t nfs rac-1:/u02 /mnt/rac-1/nfs1
    [root@rac-1 ~]# oracleasm listdisks
    VOLUME1
    VOLUME2
    VOLUME3
    VOLUME4
    [root@rac-1 ~]# oracleasm createdisk VOLUME5 /mnt/rac-1/nfs1
    File "/mnt/rac-1/nfs1" is not a block device
    [root@rac-1 ~]# oracleasm createdisk VOLUME5 rac-1:/u02
    Unable to query file "rac-1:/u02": No such file or directory
    I got only:
    [root@rac-1 ~]# oracleasm createdisk VOLUME5 /mnt/rac-1/nfs1
    File "/mnt/rac-1/nfs1" is not a block device

  • Placing RAC db redo log members on SAN disks (ASM) and local SCSI disks

    Hi
    Kindly advise whether I would face a performance problem if I place redo log members with one member on ASM and the second on a local server SCSI disk.
    Thanks

    As long as the local disk is not under contention from any other files being present, and is not just slow, you should be fine. But make sure the local disk is set up in such a way that it's visible to the other node as well, because that would be required in the case of recovery.
    HTH
    Aman....
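    As a concrete (hypothetical) example of such a layout, each redo log group would get one member in an ASM diskgroup and one on the local file system; the diskgroup name, local path and group number below are assumptions:
    -- One member in ASM (Oracle-managed file) and one on the local disk for group 1.
    ALTER DATABASE ADD LOGFILE MEMBER '+DATA' TO GROUP 1;
    ALTER DATABASE ADD LOGFILE MEMBER '/u02/oradata/orcl/redo01b.log' TO GROUP 1;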

  • Need for multiple ASM disk groups on a SAN with RAID5??

    Hello all,
    I've successfully installed Clusterware and ASM on a 5-node system. I'm trying to use asmca (11gR2 on RHEL5) to configure the disk groups.
    I have a SAN, which actually was previously used for a 10G ASM RAC setup...so, reusing the candidate volumes that ASM has found.
    I had noticed on the previous incarnation....that several disk groups had been created, for example:
    ASMCMD> ls
    DATADG/
    INDEXDG/
    LOGDG1/
    LOGDG2/
    LOGDG3/
    LOGDG4/
    RECOVERYDG/
    Now....this is all on a SAN....which basically has two pools of drives set up each in a RAID5 configuration. Pool 1 contains ASM volumes named ASM1 - ASM32. Each of these logical volumes is about 65 GB.
    Pool #2...has ASM33 - ASM48 volumes....each of which is about 16GB in size.
    I used ASM33 from pool#2...by itself to contain my cluster voting disk and OCR.
    My question is: with this type of setup, would creating so many disk groups as listed above really do any good for performance? With all of this on a SAN, with logical volumes on top of a couple of sets of RAID5 disks, would divisions at the disk group level (with external redundancy) really accomplish anything?
    I was thinking of starting with about half of the ASM1-ASM31 'disks' to create one large DATADG disk group, which would house all of the database instances' data, indexes, etc. I'd keep the remaining large candidate disks as needed for later growth.
    I was going to start with the pool of the smaller disks (except the 1 already dedicated to cluster needs) to basically serve as a decently sized RECOVERYDG...to house logs, flashback area...etc. It appears this pool is separate from pool #1...so, possibly some speed benefits there.
    But really...is there any need to separate the diskgroups, based on a SAN with two pools of RAID5 logical volumes?
    If so, can someone give me some ideas why...links on this info...etc.
    Thank you in advance,
    cayenne

    The best practice is to use 2 disk groups, one for data and the other for the flash/fast recovery area. There really is no need to have a disk group for each type of file; in fact, the more disks in a disk group (to a point, in my experience) the better for performance and space management. However, there are times when multiple disk groups are appropriate (not saying this is one of them, only FYI), such as backup/recovery and life cycle management. Typically you will still get a benefit from double striping, i.e. having a SAN with RAID groups presenting multiple LUNs to ASM, and then having ASM use those LUNs in disk groups. I saw this in my own testing. Start off with a minimum of 4 LUNs per disk group, and add in pairs, as this will provide optimal performance (at least it did in my testing). You should also have a set of standard LUN sizes to present to ASM so things are consistent across your enterprise; the sizing is typically done based on your database size. For example:
    300GB LUN: database > 10TB
    150GB LUN: database 1TB to 10 TB
    50GB LUN: database < 1TB
    As databases grow beyond the threshold the larger LUNs are swapped in and the previous ones are swapped out. With thin provisioning it is a little different since you only need to resize the ASM LUNs. I'd also recommend having at least 2 of each standard sized LUNs ready to go in case you need space in an emergency. Even with capacity management you never know when something just consumes space too quickly.
    ASM is all about space savings, performance, and management :-).
    Hope this helps.
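    A hedged sketch of the LUN swap described above (diskgroup, disk and path names are made up): the larger LUN is added and the old one dropped in a single statement, so one online rebalance migrates the data.
    -- Swap a 150GB LUN for a 300GB LUN; ASM moves the extents while the database stays up.
    ALTER DISKGROUP data
      ADD DISK '/dev/mapper/lun_300g_01'
      DROP DISK data_0002
      REBALANCE POWER 8;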

  • 10g ASM on Logical Volumes vs. Raw devices and SAN Virtualization

    We are looking at setting up our standards for Oracle 10g non-rac systems. We are looking at the value of Oracle ASM in our environment.
    As per the official Oracle documentation, raw devices are preferred to using Logical Volumes when using ASM.
    From here: http://download.oracle.com/docs/cd/B19306_01/server.102/b15658/appa_aix.htm#sthref723
    "Note: Do not add logical volumes to Automatic Storage Management disk groups. Automatic Storage Management works best when you add raw disk devices to disk groups. If you are using Automatic Storage Management, then do not use LVM for striping. Automatic Storage Management implements striping and mirroring."
    Also, as per Metalink note 452924.1:
    "10) Avoid using a Logical Volume Manager (LVM) because an LVM would be redundant."
    The issue is: if we use raw disk devices presented to ASM, the disks don't show up as used in the unix/AIX system tools (i.e. smit, lspv, etc.). Hence, when looking for raw devices on the system to add to filesystems/volume groups/etc., it's highly possible that a UNIX admin will grab a raw device that is already in use by Oracle ASM.
    Additionally, we are using an IBM DS8300 SAN with IBM SAN Volume Controller (SVC) in front of it. Hence, we already have storage virtualization and I/O balancing at the SAN/hardware level.
    I'm looking for a little clarification on the following questions, as my understanding of the answers seems to conflict:
    QUESTION #1: Can anyone clarify/provide additional detail as to why Logical volumes are not preferred when using Oracle ASM? Does the argument still hold in a SAN Virtualized environment?
    QUESTION #2: Does virtualization at the software level (ASM) make sense in our environment? As we already have I/O balancing provided at the hardware level via our SVC, what do we gain by adding yet another level of I/O balancing at the ASM level? Or, as in the arguments the Oracle documentation makes against using LVM, is this unnecessary redundant striping (double-striped, or in our case triple-striped/plaid)?
    QUESTION #3: So does SAN virtualization conflict with or complement the virtualization provided by ASM?

    After more research/discussions/SR's, I've come to the following conclusion.
    Basically, in an intelligent storage environment (i.e. SVC), you're not getting 100% bang for the buck by using ASM, which is the cat's meow in a commodity hardware/unintelligent storage environment.
    Using ASM in a SVC environment potentially wastes CPU cycles having ASM balance i/o that is already balanced on the backend (sure if you shuffle a deck of cards that are already shuffled you're not doing any harm, but if they're already shuffled - then why are you shuffling them again??).
    That being said, there may still be some value in using ASM from the standpoint of storage management for multiple instances on a server. For example, one could better minimize space wastage by sharing a "pool" of storage between multiple instances, rather than having to manage space on an instance-by-instance (or filesystem-by-filesystem) level.
    Also, in the case of an unfriendly OS where one is unable to dynamically grow a filesystem (i.e. a database outage is required), there would be a definite benefit provided by ASM in being able to dynamically allocate disks to the "pool". Of course, with most higher-end systems, dynamic filesystem growth is pretty much a given.
    In the case of RAC, regardless of the backend, ASM with raw is a no-brainer.
    In the case of a standalone instance, it's a judgement call. My vote in the case of intelligent storage where one could dynamically grow filesystems, would be to keep ASM out of the picture.
    Your vote may be different....just make sure you're putting in a solution to a problem and not a solution that's looking for a problem(s).
    And there's the whole culture of IT thing as well (i.e. do your storage guys know what you're doing and vice versa).....which can destroy any technological solution, regardless of how great it is.

  • Difference between ASM and SAN

    Dear all Guru's,
    I have a question: what is the difference between SAN and ASM? A SAN also provides mirroring like ASM, so why should we go for ASM?
    Regards,
    Jam

    926840 wrote:
    but I want to know: should I use ASM as the storage layer for Oracle dbf files or not? What is the benefit of using ASM storage, given that some organizations don't use it?
    There are two basic types of storage for Oracle databases.
    "Cooked" file system. This means the disk is formatted and managed via a specific file system driver. On Windows, that would be ntfs. On Linux, that typically would be something like ext3.
    This file system is managed by the file system driver loaded into the kernel. It does directory and file management, service I/O calls to read and write files, provides caching, and so on.
    ASM does not support a cooked file system, since such a drive/partition is already formatted and managed by its own file system driver.
    Raw disks is the second storage type. This means it is unformatted - and as such, you cannot (via standard o/s commands) use the drive to create/update/delete/etc directories and files.
    ASM supports managing such raw (non-cooked) devices. As these devices are directly used by the database, the database can better control and managed device access.
    Simple example. The database writes 10 8KB database blocks to a cooked file system. It has no idea whether those 10 blocks were actually written to disk.. or whether the file system driver has cached that write and all 10 blocks still sit in memory. The database itself also has a cache. So there are now 2 memory caches.. a physical database I/O may actually be a logical file system I/O.. These factors make managing and determining performance complex.
    A raw non-cooked device does not have such a secondary cooked file system cache. If the database does a physical write to disk, the data is actually written to disk. There is a single memory cache that is managed by the database. Less complexity and less ambiguity. And it can also mean better performance.
    Manually managing raw devices is, however, complex - and in the past it was problematic, as the database used the raw device directly without providing any type of management layer for raw devices. ASM addresses that issue - and addresses it very well.
    Using SAN LUNs as cooked file systems for an Oracle database does not make much sense IMO. SAN LUNs should be used as raw devices for the database - and ASM used as management layer for these devices.
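    A small sketch of what that management layer looks like in practice (diskgroup, path and tablespace names are hypothetical): the SAN LUNs are handed to ASM raw, and the database then asks the diskgroup for space instead of referencing a cooked file system path.
    -- ASM owns the raw SAN LUNs...
    CREATE DISKGROUP data EXTERNAL REDUNDANCY
      DISK '/dev/mapper/san_lun01', '/dev/mapper/san_lun02';
    -- ...and datafiles are simply created inside the diskgroup as Oracle-managed files.
    CREATE TABLESPACE app_data DATAFILE '+DATA' SIZE 10G;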

  • How can I use striping with ASM

    Hello buddy
    I am puzzled about striping.
    Do I need to create PV -> VG -> striped raw LV and set up ASM on top of it, or just use ASM striping?
    How can I use ASM striping rather than LVM striping?
    Thanks

    Hi,
    A quote from MoS
    - ASM & RAID striping are complimentary to each other. When a SAN or disk array provides striping, that can be used in a manner which is complementary to ASM.
    - Oracle ideally suggest that the RAID stripe size at the SAN layer should match ASM stripe size (1MB by default). However, if the above is not possible (1MB stripe at storage level),then a stripe size of 256K/128K/512k should be ok. As long ASM 1MB stripe size is a multiple of hardware stripe size, I/O is aligned at hardware level. Otherwise, a single I/O can be split into multiple disks and cause multiple read writes and excessive i/o operations.
    - ASM mirroring has a small overhead on the server (specially on write performance) where external hardware mirroring performs the function on the storage controller.
    With external mirroring, you need to reserve disks as hot spares.  With ASM, hot spares are not necessary and therefore, more efficient use of the storage capacity.
    - ASM reduces the chance of mis-configuation and human error because of failure groups. With external RAID, you have to carefully plan your redundant controllers and paths which requires higher admin overhead.
    Cheers
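    If the diskgroup is created on 11g (an assumption; the paths and names below are made up), the allocation unit size that drives the coarse stripe can be set explicitly so it stays a clean multiple of the array stripe:
    -- 4MB allocation units align with a 128K/256K/512K/1M hardware stripe.
    CREATE DISKGROUP data EXTERNAL REDUNDANCY
      DISK '/dev/mapper/san_lun01', '/dev/mapper/san_lun02'
      ATTRIBUTE 'au_size' = '4M', 'compatible.asm' = '11.1';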

  • ASM Basics

    I have Oracle 10gR2 software installed in one machine and all datafiles kept in ASM disk group. Can anyone explain the physical side(like Hub, SCSI controller,..) of how the machine which has the Oracle software installed is connected to the ASM Disk group?

    Hi,
    Your question is quick to ask, but has a complex answer. You need really good "hardware" and computer design knowledge.
    I'll try to explain it shortly and simply.
    There are different components in your setup.
    1- physical disks: There you can have local storage (disks, RAID array, ...) or shared storage (SAN, NAS, ...). The computer on which you're going to run ASM must have physical access to the disk devices. It must see the LUNs if you're using a SAN, or it must see the RAID array, etc.
    2- the host(s): It must be configured to use ASM. On Unix platforms, the raw devices must be owned by the software owner (oracle), belong to the dba group (dba) and be in chmod 660. The SAN/RAID/... drivers must be loaded into kernel.
    3- ASM: ASM is a filesystem. The host O.S. can't see what's in this kind of filesystem (except with Linux/asmlib, and it's not so straightforward). Only Oracle binaries can use ASM filesystems. When ASM sees a raw disk which it can access (owned by oracle:dba mode 660 on Unix, remember?) it reads the disk header to see what's in the disk.
    4- the diskgroups: they are made of different "disks" (1). When you tell ASM to create a diskgroup, it writes some information at the beginning of the "disk" in order to mark it as an ASM "disk". Then ASM manages redundancy, ..., as you told it it should be done.
    A little sample:
    Disk1 \                        / PARTITION 1 \
    Disk2 \\                       / PARTITION 2 - ASM DISKGROUP 1 \
    Disk3 --- RAID 5 ARRAY <-> HOST - PARTITION 3 \                 - ASM Instance <-> DB Instance
    Disk4 //                       \ PARTITION 4 - ASM DISKGROUP 2 /
    Disk5 /
    In the sample I quickly graphed, you can see there are 5 physical disks in a RAID 5 array. A RAID controller would certainly manage the array, and the controller would show one volume to the OS. It's possible to have a software RAID manager (in Linux for example) and in that case, the host O.S. is the RAID controller.
    At O.S. level, the RAID volume is split into partitions, or can be used as-is, and the ASM instance is configured to use a set of partitions and organize them into disk groups. The ASM instance is the filesystem manager; the DB processes access this filesystem in order to read the files, as any other software would read an NTFS volume via O.S. interfaces.
    To give a more logical sample, here's the setup I have for my main standby database:
    Disk1 \
    Disk2  \        - RawDisk1a
    Disk3   - SAN 1 - RawDisk2a
    Diskn  /
                     HOST <-> ASM DISKGROUP (RawDisk1a, RawDisk2a mirrored on RawDisk1b, RawDisk2b) <-> ASM Instance <-> DB Instance
    Disk1 \
    Disk2  \        - RawDisk1b
    Disk3   - SAN 2 - RawDisk2b
    Diskn  /
    Here the DB Instance uses the ASM filesystem to access a data diskgroup. The diskgroup is mirrored on 2 SANs via ASM FAILGROUP usage. ASM sees the RawDisk(n) via the kernel API, but writes directly to it. The ASM software is in charge of the mirroring at ASM level (cf. DISKGROUP REDUNDANCY LEVEL in the doc). Each RawDisk is set in a different failgroup, so if one SAN fails, ASM can still work using the other SAN. If a disk in either SAN fails, ASM isn't even informed; the SAN is in charge of the RAID redundancy.
    Well, it's a complex matter.
    I hope I cleared some of your doubts, and I'm sure I raised some more concerns.
    Ask away if you want some more explanations.
    Regards,
    Yoann.
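    A minimal sketch of the failgroup setup described above (the raw device paths and names are made up): each SAN's disks go into their own failure group, so ASM always places the two mirror copies on different SANs.
    -- Normal redundancy with one failgroup per SAN: lose a whole SAN and keep running.
    CREATE DISKGROUP data NORMAL REDUNDANCY
      FAILGROUP san1 DISK '/dev/raw/rawdisk1a', '/dev/raw/rawdisk2a'
      FAILGROUP san2 DISK '/dev/raw/rawdisk1b', '/dev/raw/rawdisk2b';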

  • Please Help - When I try to add an ASM Disk to an ASM Diskgroup it crashes the Server

    We are using a Pillar SAN, have LUNs created, and are using the following multipath device (I'm a DBA more than anything else... but I am rather familiar with Linux; SAN hardware, not so much):
    Device Size Mount Point
    /dev/dpda1 11G /u01
    The above device is working fine... Below are the ASM disks being created:
    Device Size Oracle ASM Disk Name
    /dev/dpdb1 198G ORCL1
    /dev/dpdc1 21G SIRE1
    /dev/dpdd1 21G CART1
    /dev/dpde1 21G SRTS1
    /dev/dpdf1 21G CRTT1
    I try to create the first ASM disk:
    /etc/init.d/oracleasm createdisk ORCL1 /dev/dpdb1
    Marking disk "ORCL1" as an ASM disk: [FAILED]
    So I check the oracleasm log:
    #cat /var/log/oracleasm
    Device "/dev/dpdb1" is not a partition
    I did some research and found that this is a common problem with multipath devices and to work around it you have to use asmtool
    # /usr/sbin/asmtool -C -l /dev/oracleasm -n ORCL1 -s /dev/dpdb1 -a force=yes
    asmtool: Device "/dev/dpdb1" is not a partition
    asmtool: Continuing anyway
    now I scan and list the disks
    # /etc/init.d/oracleasm scandisks
    Scanning the system for Oracle ASMLib disks: [  OK  ]
    # /etc/init.d/oracleasm listdisks
    ORCL1
    Here is what's going on in /var/log/messages when I run the oracleasm scandisks command:
    # date
    Fri Aug 14 13:51:58 MST 2009
    # /etc/init.d/oracleasm scandisks
    Scanning the system for Oracle ASMLib disks: [  OK  ]
    cat /var/log/messages | grep "Aug 14 13:5"
    Aug 14 13:52:06 seer kernel: dpdb: dpdb1
    Aug 14 13:52:06 seer kernel: dpdc: dpdc1
    Aug 14 13:52:06 seer kernel: dpdd: dpdd1
    Aug 14 13:52:06 seer kernel: dpde: dpde1
    Aug 14 13:52:06 seer kernel: dpdf: dpdf1
    Aug 14 13:52:06 seer kernel: dpdg: dpdg1
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: printk: 30 messages suppressed.
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: sda : READ CAPACITY failed.
    Aug 14 13:52:06 seer kernel: sda : status=1, message=00, host=0, driver=08
    Aug 14 13:52:06 seer kernel: sd: Current: sense key: Illegal Request
    Aug 14 13:52:06 seer kernel: Add. Sense: Logical unit not supported
    Aug 14 13:52:06 seer kernel:
    Aug 14 13:52:06 seer kernel: sda: test WP failed, assume Write Enabled
    Aug 14 13:52:06 seer kernel: sda: asking for cache data failed
    Aug 14 13:52:06 seer kernel: sda: assuming drive cache: write through
    Aug 14 13:52:06 seer kernel: sda:end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: Dev sda: unable to read RDB block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: unable to read partition table
    Aug 14 13:52:06 seer kernel: SCSI device sdb: 21502464 512-byte hdwr sectors (11009 MB)
    Aug 14 13:52:06 seer kernel: sdb: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdb: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdb: sdb1
    Aug 14 13:52:06 seer kernel: SCSI device sdc: 421476864 512-byte hdwr sectors (215796 MB)
    Aug 14 13:52:06 seer kernel: sdc: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdc: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdc: sdc1
    Aug 14 13:52:06 seer kernel: SCSI device sdd: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdd: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdd: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdd: sdd1
    Aug 14 13:52:06 seer kernel: SCSI device sde: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sde: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sde: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sde: sde1
    Aug 14 13:52:06 seer kernel: SCSI device sdf: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdf: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdf: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdf: sdf1
    Aug 14 13:52:06 seer kernel: SCSI device sdg: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdg: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdg: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdg: sdg1
    Aug 14 13:52:06 seer kernel: SCSI device sdh: 2107390464 512-byte hdwr sectors (1078984 MB)
    Aug 14 13:52:06 seer kernel: sdh: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdh: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdh: sdh1
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdi, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sdi, logical block 0
    Aug 14 13:52:06 seer kernel: sdi : READ CAPACITY failed.
    Aug 14 13:52:06 seer kernel: sdi : status=1, message=00, host=0, driver=08
    Aug 14 13:52:06 seer kernel: sd: Current: sense key: Illegal Request
    Aug 14 13:52:06 seer kernel: Add. Sense: Logical unit not supported
    Aug 14 13:52:06 seer kernel:
    Aug 14 13:52:06 seer kernel: sdi: test WP failed, assume Write Enabled
    Aug 14 13:52:06 seer kernel: sdi: asking for cache data failed
    Aug 14 13:52:06 seer kernel: sdi: assuming drive cache: write through
    Aug 14 13:52:06 seer kernel: sdi:end_request: I/O error, dev sdi, sector 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdi, sector 0
    Aug 14 13:52:06 seer last message repeated 4 times
    Aug 14 13:52:06 seer kernel: Dev sdi: unable to read RDB block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdi, sector 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdi, sector 0
    Aug 14 13:52:06 seer kernel: unable to read partition table
    Aug 14 13:52:06 seer kernel: SCSI device sdj: 21502464 512-byte hdwr sectors (11009 MB)
    Aug 14 13:52:06 seer kernel: sdj: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdj: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdj: sdj1
    Aug 14 13:52:06 seer kernel: SCSI device sdk: 421476864 512-byte hdwr sectors (215796 MB)
    Aug 14 13:52:06 seer kernel: sdk: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdk: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdk: sdk1
    Aug 14 13:52:06 seer kernel: SCSI device sdl: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdl: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdl: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdl: sdl1
    Aug 14 13:52:06 seer kernel: SCSI device sdm: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdm: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdm: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdm: sdm1
    Aug 14 13:52:06 seer kernel: SCSI device sdn: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdn: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdn: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdn: sdn1
    Aug 14 13:52:06 seer kernel: SCSI device sdo: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdo: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdo: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdo: sdo1
    Aug 14 13:52:06 seer kernel: SCSI device sdp: 2107390464 512-byte hdwr sectors (1078984 MB)
    Aug 14 13:52:06 seer kernel: sdp: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdp: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdp: sdp1
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdq, sector 0
    Aug 14 13:52:06 seer kernel: sdq : READ CAPACITY failed.
    Aug 14 13:52:06 seer kernel: sdq : status=1, message=00, host=0, driver=08
    Aug 14 13:52:06 seer kernel: sd: Current: sense key: Illegal Request
    Aug 14 13:52:06 seer kernel: Add. Sense: Logical unit not supported
    Aug 14 13:52:06 seer kernel:
    Aug 14 13:52:06 seer kernel: sdq: test WP failed, assume Write Enabled
    Aug 14 13:52:06 seer kernel: sdq: asking for cache data failed
    Aug 14 13:52:06 seer kernel: sdq: assuming drive cache: write through
    Aug 14 13:52:06 seer kernel: sdq:end_request: I/O error, dev sdq, sector 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdq, sector 0
    Aug 14 13:52:06 seer last message repeated 5 times
    Aug 14 13:52:06 seer kernel: Dev sdq: unable to read RDB block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdq, sector 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdq, sector 0
    Aug 14 13:52:06 seer kernel: unable to read partition table
    Aug 14 13:52:06 seer kernel: SCSI device sdr: 21502464 512-byte hdwr sectors (11009 MB)
    Aug 14 13:52:06 seer kernel: sdr: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdr: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdr: sdr1
    Aug 14 13:52:06 seer kernel: SCSI device sds: 421476864 512-byte hdwr sectors (215796 MB)
    Aug 14 13:52:06 seer kernel: sds: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sds: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sds: sds1
    Aug 14 13:52:06 seer kernel: SCSI device sdt: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdt: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdt: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdt: sdt1
    Aug 14 13:52:06 seer kernel: SCSI device sdu: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdu: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdu: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdu: sdu1
    Aug 14 13:52:06 seer kernel: SCSI device sdv: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdv: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdv: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdv: sdv1
    Aug 14 13:52:06 seer kernel: SCSI device sdw: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdw: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdw: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdw: sdw1
    Aug 14 13:52:06 seer kernel: SCSI device sdx: 2107390464 512-byte hdwr sectors (1078984 MB)
    Aug 14 13:52:06 seer kernel: sdx: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdx: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdx: sdx1
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdy, sector 0
    Aug 14 13:52:06 seer kernel: sdy : READ CAPACITY failed.
    Aug 14 13:52:06 seer kernel: sdy : status=1, message=00, host=0, driver=08
    Aug 14 13:52:06 seer kernel: sd: Current: sense key: Illegal Request
    Aug 14 13:52:06 seer kernel: Add. Sense: Logical unit not supported
    Aug 14 13:52:06 seer kernel:
    Aug 14 13:52:06 seer kernel: sdy: test WP failed, assume Write Enabled
    Aug 14 13:52:06 seer kernel: sdy: asking for cache data failed
    Aug 14 13:52:06 seer kernel: sdy: assuming drive cache: write through
    Aug 14 13:52:06 seer kernel: sdy:end_request: I/O error, dev sdy, sector 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdy, sector 0
    Aug 14 13:52:06 seer last message repeated 5 times
    Aug 14 13:52:06 seer kernel: Dev sdy: unable to read RDB block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdy, sector 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdy, sector 0
    Aug 14 13:52:06 seer kernel: unable to read partition table
    Aug 14 13:52:06 seer kernel: SCSI device sdz: 21502464 512-byte hdwr sectors (11009 MB)
    Aug 14 13:52:06 seer kernel: sdz: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdz: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdz: sdz1
    Aug 14 13:52:06 seer kernel: SCSI device sdaa: 421476864 512-byte hdwr sectors (215796 MB)
    Aug 14 13:52:06 seer kernel: sdaa: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdaa: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdaa: sdaa1
    Aug 14 13:52:06 seer kernel: SCSI device sdab: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdab: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdab: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdab: sdab1
    Aug 14 13:52:06 seer kernel: SCSI device sdac: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdac: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdac: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdac: sdac1
    Aug 14 13:52:06 seer kernel: SCSI device sdad: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdad: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdad: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdad: sdad1
    Aug 14 13:52:06 seer kernel: SCSI device sdae: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdae: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdae: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdae: sdae1
    Aug 14 13:52:06 seer kernel: SCSI device sdaf: 2107390464 512-byte hdwr sectors (1078984 MB)
    Aug 14 13:52:06 seer kernel: sdaf: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdaf: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdaf: sdaf1
    Aug 14 13:52:06 seer kernel: scsi_wr_disk: unknown partition table
    Aug 14 13:52:07 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:07 seer kernel: end_request: I/O error, dev sdi, sector 0
    Aug 14 13:52:07 seer kernel: end_request: I/O error, dev sdq, sector 0
    Aug 14 13:52:07 seer kernel: end_request: I/O error, dev sdy, sector 0
    Here's some extra info:
    # /sbin/blkid | grep asm
    /dev/sdc1: LABEL="ORCL1" TYPE="oracleasm"
    /dev/sdk1: LABEL="ORCL1" TYPE="oracleasm"
    /dev/sds1: LABEL="ORCL1" TYPE="oracleasm"
    /dev/sdaa1: LABEL="ORCL1" TYPE="oracleasm"
    /dev/dpdb1: LABEL="ORCL1" TYPE="oracleasm"
    I have learned that by excluding devices in the oracleasm configuration file I eliminate those I/O errors in /var/log/messages
    # cat /etc/sysconfig/oracleasm
    # This is a configuration file for automatic loading of the Oracle
    # Automatic Storage Management library kernel driver. It is generated
    # By running /etc/init.d/oracleasm configure. Please use that method
    # to modify this file
    # ORACLEASM_ENABLED: 'true' means to load the driver on boot.
    ORACLEASM_ENABLED=true
    # ORACLEASM_UID: Default user owning the /dev/oracleasm mount point.
    ORACLEASM_UID=oracle
    # ORACLEASM_GID: Default group owning the /dev/oracleasm mount point.
    ORACLEASM_GID=oinstall
    # ORACLEASM_SCANBOOT: 'true' means scan for ASM disks on boot.
    ORACLEASM_SCANBOOT=true
    # ORACLEASM_SCANORDER: Matching patterns to order disk scanning
    ORACLEASM_SCANORDER="dp sd"
    # ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
    ORACLEASM_SCANEXCLUDE="sdc sdk sds sdaa sda"
    # ls -la /dev/oracleasm/disks/
    total 0
    drwxr-xr-x 1 root root 0 Aug 14 10:47 .
    drwxr-xr-x 4 root root 0 Aug 13 15:32 ..
    brw-rw---- 1 oracle oinstall 251, 33 Aug 14 13:46 ORCL1
    Now I can go into dbca to create the ASM instance, which starts up fine... I create a new diskgroup, I see ORCL1 as a provisioned ASM disk, I select it... Click OK.
    CRASH!!! The box hangs and I have to reboot it....
    I have gotten myself to exactly the same point right before clicking OK and here is what is in the ASM alertlog so far
    Fri Aug 14 14:42:02 2009
    Starting ORACLE instance (normal)
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Picked latch-free SCN scheme 3
    Using LOG_ARCHIVE_DEST_1 parameter default value as /u01/app/oracle/product/11.1.0/db_1/dbs/arch
    Autotune of undo retention is turned on.
    IMODE=BR
    ILAT =0
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    Starting up ORACLE RDBMS Version: 11.1.0.6.0.
    Using parameter settings in server-side spfile /u01/app/oracle/product/11.1.0/db_1/dbs/spfile+ASM.ora
    System parameters with non-default values:
    large_pool_size = 12M
    instance_type = "asm"
    diagnostic_dest = "/u01/app/oracle"
    Fri Aug 14 14:42:04 2009
    PMON started with pid=2, OS id=3300
    Fri Aug 14 14:42:04 2009
    VKTM started with pid=3, OS id=3302 at elevated priority
    VKTM running at (20)ms precision
    Fri Aug 14 14:42:04 2009
    DIAG started with pid=4, OS id=3306
    Fri Aug 14 14:42:04 2009
    PSP0 started with pid=5, OS id=3308
    Fri Aug 14 14:42:04 2009
    DSKM started with pid=6, OS id=3310
    Fri Aug 14 14:42:04 2009
    DIA0 started with pid=7, OS id=3312
    Fri Aug 14 14:42:04 2009
    MMAN started with pid=8, OS id=3314
    Fri Aug 14 14:42:04 2009
    DBW0 started with pid=9, OS id=3316
    Fri Aug 14 14:42:04 2009
    LGWR started with pid=6, OS id=3318
    Fri Aug 14 14:42:04 2009
    CKPT started with pid=10, OS id=3320
    Fri Aug 14 14:42:04 2009
    SMON started with pid=11, OS id=3322
    Fri Aug 14 14:42:04 2009
    RBAL started with pid=12, OS id=3324
    Fri Aug 14 14:42:04 2009
    GMON started with pid=13, OS id=3326
    ORACLE_BASE from environment = /u01/app/oracle
    Fri Aug 14 14:42:04 2009
    SQL> ALTER DISKGROUP ALL MOUNT
    Fri Aug 14 14:42:41 2009
    At this point I don't want to click OK until I am sure someone is in the office to reboot the machine manually if I hang it again.... I hung it twice yesterday; however, I did not have the devices excluded in the oracleasm configuration file as I do now.
    Well, clicking OK hung it again, and I am waiting to get back into it to see what new information might be gleaned.
    Does anyone have any ideas on what to check or where to look?????    Will update more once I can log back in

    Hi Mark,
    It looks like something is not correct with your raw device partition based on the error messages:
    Aug 14 13:52:06 seer kernel: Add. Sense: Logical unit not supported
    Aug 14 13:52:06 seer kernel:
    Aug 14 13:52:06 seer kernel: sda: test WP failed, assume Write Enabled
    Aug 14 13:52:06 seer kernel: sda: asking for cache data failed
    Aug 14 13:52:06 seer kernel: sda: assuming drive cache: write through
    Aug 14 13:52:06 seer kernel: sda:end_request: I/O error, dev sda, sector 0
    It could be a number of things. I would check with your vendor and Oracle support to see if the multipath software driver is supported and if there is a potential workaround for ASM. Sorry this is not quite the solution, but it's what jumps to mind based on issues with multipath software and storage vendors for ASM with Linux and Oracle. Have you checked the validation matrix available on Metalink?
    Cheers,
    Ben
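    One additional, hedged check (assuming the ASM instance stays up long enough to query it): ask ASM which physical path each candidate disk was discovered on, since with multipathing the same LUN is visible both as /dev/sdX and as the dpd device, and ASMLib has to be steered to the multipath path via ORACLEASM_SCANORDER/ORACLEASM_SCANEXCLUDE as was done above.
    -- Which path did ASM/ASMLib actually discover each candidate disk on?
    SELECT name, path, header_status, state FROM v$asm_disk;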
