Recommended Number of LUNs for ASM Diskgroup

We are installing Oracle Clusterware 11g, Oracle ASM 11g and Oracle Database 11g R1 (11.1.0.6) Enterprise Edition with the RAC option. We have an EMC Clariion CX-3 SAN for shared storage (all Oracle software will reside locally). We are trying to determine the recommended or best-practice number of LUNs and LUN size for ASM diskgroups. I have found only the following specific to ASM 11g:
ASM Deployment Best Practice
Use diskgroups with four or more disks, making sure these disks span several backend disk adapters.
1) Recommended number of LUNs?
2) Recommended size of LUNs?
3) In the ASM Deployment Best Practice above, "four or more disks" for a diskgroup, is this referring to LUNs (4 LUNs) or one LUN with 4 physical spindles?
4) Should the number of physical spindles in a LUN be even numbered? Does it matter?

user10437903 wrote:
"Use diskgroups with four or more disks, making sure these disks span several backend disk adapters."
This means that the LUNs (disks) should be created over multiple SCSI adapters in the storage box. EMC arrays have multiple SCSI channels to which disks are attached. Best practice says that the disks/LUNs that you assign to a diskgroup should be spread over as many channels in the storage box as possible. This increases the bandwidth and therefore the performance.
1) Recommended number of LUNs?
Like the best practice says, if possible, at least 4.
2) Recommended size of LUNs?
That depends on your situation. If you are planning a database of 100GB, then a LUN size of 50GB is a bit of overkill.
3) In the ASM Deployment Best Practice above, "four or more disks" for a diskgroup, is this referring to LUNs (4 LUNs) or one LUN with 4 physical spindles?
LUNs; spindles only if you have direct access to physical spindles.
4) Should the number of physical spindles in a LUN be even numbered? Does it matter?
If you are using RAID 5, I'd advise keeping a 4+1 spindle allocation, but it might not be possible to realize that. It all depends on the storage solution and how far you can go in configuring it.
Arnoud Roth
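
As a concrete illustration of the "at least 4 LUNs" advice, a minimal sketch (the device paths are hypothetical EMC PowerPath names, not taken from the thread):
SQL> create diskgroup DATA external redundancy
disk '/dev/rdsk/emcpower1c', '/dev/rdsk/emcpower2c',
     '/dev/rdsk/emcpower3c', '/dev/rdsk/emcpower4c';
With external redundancy, ASM stripes across all four LUNs but leaves mirroring/parity to the CX-3.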

Similar Messages

  • "Best" Allocation Unit Size (AU_SIZE) for ASM diskgroups when using NetApp

    We're building a new non-RAC 11.2.0.3 system on x86-64 RHEL 5.7 with ASM diskgroups stored on a NetApp device (don't know the model # since we are not storage admins but can get it if that would be helpful). The system is not a data warehouse--more of a hybrid than pure OLTP or OLAP.
    In Oracle® Database Storage Administrator's Guide 11g Release 2 (11.2) E10500-02, Oracle recommends that the allocation unit (AU) size for a disk group be set to 4 MB (vs. the default of 1 MB) to enhance performance. However, to take advantage of the au_size benefits, it also says the operating system (OS) I/O size should be set "to the largest possible size."
    http://docs.oracle.com/cd/E16338_01/server.112/e10500/asmdiskgrps.htm
    Since we're using NetApp as the underlying storage, what should we ask our storage and sysadmins (we don't manage the physical storage or the OS) to do:
    * What do they need to confirm and/or set regarding I/O on the Linux side
    * What do they need to confirm and/or set regarding I/O on the NetApp side?
    On some other 11.2.0.2 systems that use ASM diskgroups, I checked v$asm_diskgroup and see we're currently using a 1MB Allocation Unit Size. The diskgroups are on an HP EVA SAN. I don't recall, when creating the diskgroups via asmca, if we were even given an option to change the AU size. We're inclined to go with Oracle's recommendation of 4MB. But we're concerned there may be a mismatch on the OS side (either Redhat or the NetApp device's OS). Would rather "first do no harm" and stick with the default of 1MB before going with 4MB and not knowing the consequences. Also, when we create diskgroups we set Redundancy to External--because we'd like the NetApp device to handle this. Don't know if that matters regarding AU Size.
    Hope this makes sense. Please let me know if there is any other info I can provide.

    Thanks Dan. I suspected as much due to the absence of info out there on this particular topic. I hear you on the comparison with deviating from a tried-and-true standard 8K Oracle block size. Probably not worth the hassle. I don't know of any particular justification with this system to bump up the AU size--especially if this is an esoteric and little-used technique. The only justification is official Oracle documentation suggesting the value change. Since it seems you can't change an ASM diskgroup's AU size once you create it, and since we won't have time to benchmark different AU sizes, I would prefer to err on the side of caution--e.g. first do no harm.
    Does anyone out there use something larger than a 1MB AU size? If so, why? And did you benchmark between the standard size and the size you chose? What performance results did you observe?
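
    For anyone weighing this up, a minimal sketch of how the AU size is set at creation and checked afterwards (the diskgroup name and disk path are hypothetical; AU_SIZE is a creation-time attribute, which is why it can't be changed later):
    SQL> create diskgroup DATA external redundancy
    disk '/dev/oracleasm/disks/VOL1'
    attribute 'au_size' = '4M';
    SQL> select name, allocation_unit_size/1024/1024 as au_mb from v$asm_diskgroup;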

  • GI installation gives an error while executing root.sh for ASM diskgroup

    Dear Gurus,
    We are implementing a 2-node RAC configuration with ASM on VMware and Openfiler, on Linux RHEL 6.2. We started our installation with the grid infrastructure. While executing root.sh on node 1, it gives an error that the diskgroup cannot be mounted and no alterations were performed, as below.
    [main] [ 2012-10-04 05:38:33.150 PDT ] [UsmcaLogger.logException:173] SEVERE:method oracle.sysman.assistants.usmca.backend.USMDiskGroupManager:mountDiskGroups
    [main] [ 2012-10-04 05:38:33.151 PDT ] [UsmcaLogger.logException:174] ORA-15032: not all alterations performed
    ORA-15017: diskgroup "CRS" cannot be mounted
    ORA-15088: diskgroup creation incomplete
    [main] [ 2012-10-04 05:38:33.338 PDT ] [UsmcaLogger.logException:175] oracle.sysman.assistants.util.sqlEngine.SQLFatalErrorException: ORA-15032: not all alterations performed
    ORA-15017: diskgroup "CRS" cannot be mounted
    Note: we are not using ASMLib. We presented the LUNs to the Oracle binaries with multipathing.

    Please post the relevant info from the ASM and CRS alert logs here.
    Please format your text by using code tags at the beginning and end of the output.
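
    One thing worth checking in this setup, offered as a sketch under assumptions (the thread doesn't show the device configuration, and the device alias below is hypothetical): without ASMLib, the multipath devices must be owned by the grid installation owner and matched by the ASM discovery string, which on RHEL is typically arranged with a udev rule:
    # /etc/udev/rules.d/99-oracle-asmdevices.rules
    ENV{DM_NAME}=="crs01", OWNER="grid", GROUP="asmadmin", MODE="0660"
    # udevadm control --reload-rules && udevadm trigger
    # ls -l /dev/mapper/crs01
    If root.sh cannot open the device read/write, ORA-15017/ORA-15032 on the freshly created diskgroup is a common symptom.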

  • Should I partition the disks using fdisk (300 GB lun) for ASM?

    We've configured Oracle 11g and ASM on SLES10 SP2 systems prior to the one I'm working on now. We use disks that are LUNs from SAN storage. The SAN devices vary (tier 1 and tier 2). In earlier installs, we did not need to partition the disks using fdisk (LUNs up to 40GB in size); we did need to write a label using fdisk and 'w'. The ASM being configured now is using 300 GB LUNs (on SLES10 SP2, Oracle 11g). I've needed to create a new primary partition using fdisk and then have ASM 'look' at the partitioned device. If I did not do that, ASM would not look at the unpartitioned device.
    The database is now built and it seems to be working OK.
    Does this config sound ok?
    Thanks,
    Eleanor

    I think this is right. I also always had to partition the disks and let ASM use the partitions, because for some reason it did not accept the full device.
    Bjoern
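
    For reference, a sketch of creating that single primary partition (the device name is hypothetical; the interactive fdisk keystrokes are shown as comments):
    # fdisk /dev/sdb
    #   n   (new partition)
    #   p   (primary), partition 1, accept the default first and last cylinders so it spans the LUN
    #   w   (write the partition table and exit)
    # partprobe /dev/sdb    (re-read the partition table without rebooting)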

  • Creating DAS or SAN Disk Partitions for ASM

    Oracle® Database Installation Guide 11g Release 2 (11.2) for Linux (http://docs.oracle.com/cd/E11882_01/install.112/e24321/oraclerestart.htm#CHDBJGEB)
    3.6.3 Step 2: Creating DAS or SAN Disk Partitions for Oracle Automatic Storage Management
    In order to use a DAS or SAN disk in Oracle ASM, the disk must have a partition table. Oracle recommends creating exactly one partition for each disk.
    My question: why does Oracle recommend creating exactly one partition for each disk? For a disk to be used on SUSE Linux, I normally need at least 3 partitions, i.e., boot, swap, and root.
    Scott

    Please read the entire manual - before starting setup and configuration.
    From the very same manual:
    Do not specify multiple partitions on a single physical disk as a disk group device.
    Oracle ASM expects each disk group device to be on a separate physical disk.
    So why use partitioning? If you need to subdivide a LUN into smaller LUNs and use these instead of the larger LUN. Reasons could range from LUNs bigger than 2TB in size (ASM only supports LUNs up to 2TB), to wanting a partition 1 of x size for ASM diskgroup 1 and a partition 2 of y size for ASM diskgroup 2.
    If you do partition, take the extra precaution of marking the partition as a non-file system (partition type "da") to safeguard against someone (like a sysadmin looking for space) mounting it as a file system.
    Generally though - I would not partition LUNs for ASM use.
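
    Following the advice above, a sketch of marking the partition as non-filesystem data with fdisk (device name hypothetical):
    # fdisk /dev/sdb
    #   t   (change the partition type)
    #   1   (partition number)
    #   da  (non-FS data)
    #   w   (write and exit)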

  • Adding LUNs to ASM on Solaris

    Hi Experts,
    How do I add LUNs to ASM on Solaris? On Linux I know the oracleasm command, with options like scandisks, adddisk etc. On Solaris those RPMs will not be there, I guess.
    Thanks in advance,
    Mahesh.G

    Hi,
    I got the answer. It's the "luxadm probe" command that gives you the information about attached SAN LUNs. You can add these LUNs to ASM diskgroups.
    Cheers..
    Mahesh.G
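
    Once the LUN is visible with the right ownership and permissions, adding it to a diskgroup is plain SQL from the ASM instance on any platform; a sketch with a hypothetical Solaris device path:
    SQL> alter diskgroup DATA add disk '/dev/rdsk/c4t1d0s4';
    SQL> select operation, state, est_minutes from v$asm_operation;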

  • How to prepare a LUN (in RAID 1) for ASM

    Normally, if a hard drive has NO RAID on it and only one partition, I would use:
    # /etc/init.d/oracleasm createdisk ASM01 /dev/sda1
    # /etc/init.d/oracleasm createdisk ASM02 /dev/sdb1
    Now, if I place both sda and sdb in an external/hardware-based RAID 1, i.e., sda has only one partition (sda1), and so does sdb (sdb1).
    Both sda and sdb will be presented to ASM as LUNs (if I understand correctly, ASM cannot "see" the partitions of the disks directly).
    What command should I use to createdisk for the ASM?
    Next, a slightly more complex scenario: sdc has multiple partitions (sdc1 for /boot, sdc2 for /root, sdc3 for swap, and sdc4 for /u01); similarly, sdd also has multiple partitions (sdd1 for /boot, sdd2 for /root, sdd3 for swap, and sdd4 for /u01). Both of the disks are placed in RAID 1. I only want/need to prepare the sdc4 and sdd4 partitions for ASM to use.
    What command should I use to createdisk for the ASM?
    After the above work is done, I'll be able to present the following LUNs to the ASM in one diskgroup:
    disk array 1: sda1 and sdb1 (both on RAID 1)
    disk array 2: sdc4 and sdd4 (both on RAID 1)
    Thanks for the help.
    Scott

    I would not use a partition of a disk as a raw device for an ASM diskgroup*. Never mind sharing a disk between ASM and cooked file systems.
    Disks in an ASM diskgroup should provide the same basic performance. This is an essential requirement for the diskgroup's performance to be stable.
    If the disks in that diskgroup are also used by other s/w (for cooked file systems, for example), one disk can be slower than the other at times, as that s/w is hitting its partitions particularly hard.
    This results in inconsistencies in the performance of the disks in the ASM diskgroup. It will impact database performance, cause very weird run-time performance anomalies, and be hard to diagnose.
    There's also an ownership issue. For other s/w to own partitions on that disk, together with ASM, requires ownership of that same disk by multiple parties. Which means that ASM has access to partitions it does not own, and those owners have access to ASM partitions.
    This is not a safe, secure and robust approach.
    * unless for testing purposes in a pure R&D environment
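
    To answer the createdisk question directly, a sketch under the assumption that the hardware RAID 1 presents each mirrored pair as a single logical device (the device name is hypothetical): you partition and label that one logical device, not the individual members.
    # the controller exposes the sda/sdb mirror as one device, say /dev/sdb
    # fdisk /dev/sdb                                     (create one primary partition)
    # /etc/init.d/oracleasm createdisk DATA01 /dev/sdb1
    # /etc/init.d/oracleasm listdisks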

  • When you add a LUN to an ASM diskgroup, does any process start internally

    Version: 10.2, 11.2 on Unix/Linux platforms
    When you add a LUN to an ASM diskgroup, does any process start internally?

    Hello;
    I'm thinking Oracle ASM may start a rebalance after a LUN is added (the RBAL background process coordinates it, and ARBn slave processes move the extents).
    http://docs.oracle.com/cd/E11882_01/server.112/e18951/asmdiskgrps.htm
    Best Regards
    mseberg
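
    A quick way to see this in action (a sketch; the diskgroup and disk path are hypothetical): the rebalance runs asynchronously, its power can be set explicitly, and its progress shows up in v$asm_operation:
    SQL> alter diskgroup DATA add disk '/dev/oracleasm/disks/VOL3' rebalance power 4;
    SQL> select operation, state, power, sofar, est_work, est_minutes from v$asm_operation;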

  • Please Help - When I try to add ASM Disk to ASM Diskgroup it crashes Server

    We are using a Pillar SAN, have LUNs created, and are using the following multipath device: (I'm a DBA more than anything else... but I am rather familiar with Linux... SAN hardware, not so much)
    Device      Size  Mount Point
    /dev/dpda1  11G   /u01
    The Above device is working fine... Below are the ASM Disks being Created
    Device      Size  Oracle ASM Disk Name
    /dev/dpdb1  198G  ORCL1
    /dev/dpdc1  21G   SIRE1
    /dev/dpdd1  21G   CART1
    /dev/dpde1  21G   SRTS1
    /dev/dpdf1  21G   CRTT1
    I try to create the first ASM disk:
    /etc/init.d/oracleasm createdisk ORCL1 /dev/dpdb1
    Marking disk "ORCL1" as an ASM disk: [FAILED]
    So I check the oracleasm log:
    #cat /var/log/oracleasm
    Device "/dev/dpdb1" is not a partition
    I did some research and found that this is a common problem with multipath devices, and that to work around it you have to use asmtool:
    # /usr/sbin/asmtool -C -l /dev/oracleasm -n ORCL1 -s /dev/dpdb1 -a force=yes
    asmtool: Device "/dev/dpdb1" is not a partition
    asmtool: Continuing anyway
    Now I scan and list the disks:
    # /etc/init.d/oracleasm scandisks
    Scanning the system for Oracle ASMLib disks: [  OK  ]
    # /etc/init.d/oracleasm listdisks
    ORCL1
    Here is what's going on in /var/log/messages when I run the oracleasm scandisks command:
    # date
    Fri Aug 14 13:51:58 MST 2009
    # /etc/init.d/oracleasm scandisks
    Scanning the system for Oracle ASMLib disks: [  OK  ]
    cat /var/log/messages | grep "Aug 14 13:5"
    Aug 14 13:52:06 seer kernel: dpdb: dpdb1
    Aug 14 13:52:06 seer kernel: dpdc: dpdc1
    Aug 14 13:52:06 seer kernel: dpdd: dpdd1
    Aug 14 13:52:06 seer kernel: dpde: dpde1
    Aug 14 13:52:06 seer kernel: dpdf: dpdf1
    Aug 14 13:52:06 seer kernel: dpdg: dpdg1
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: printk: 30 messages suppressed.
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: sda : READ CAPACITY failed.
    Aug 14 13:52:06 seer kernel: sda : status=1, message=00, host=0, driver=08
    Aug 14 13:52:06 seer kernel: sd: Current: sense key: Illegal Request
    Aug 14 13:52:06 seer kernel: Add. Sense: Logical unit not supported
    Aug 14 13:52:06 seer kernel:
    Aug 14 13:52:06 seer kernel: sda: test WP failed, assume Write Enabled
    Aug 14 13:52:06 seer kernel: sda: asking for cache data failed
    Aug 14 13:52:06 seer kernel: sda: assuming drive cache: write through
    Aug 14 13:52:06 seer kernel: sda:end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: Dev sda: unable to read RDB block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: unable to read partition table
    Aug 14 13:52:06 seer kernel: SCSI device sdb: 21502464 512-byte hdwr sectors (11009 MB)
    Aug 14 13:52:06 seer kernel: sdb: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdb: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdb: sdb1
    Aug 14 13:52:06 seer kernel: SCSI device sdc: 421476864 512-byte hdwr sectors (215796 MB)
    Aug 14 13:52:06 seer kernel: sdc: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdc: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdc: sdc1
    Aug 14 13:52:06 seer kernel: SCSI device sdd: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdd: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdd: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdd: sdd1
    Aug 14 13:52:06 seer kernel: SCSI device sde: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sde: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sde: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sde: sde1
    Aug 14 13:52:06 seer kernel: SCSI device sdf: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdf: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdf: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdf: sdf1
    Aug 14 13:52:06 seer kernel: SCSI device sdg: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdg: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdg: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdg: sdg1
    Aug 14 13:52:06 seer kernel: SCSI device sdh: 2107390464 512-byte hdwr sectors (1078984 MB)
    Aug 14 13:52:06 seer kernel: sdh: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdh: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdh: sdh1
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdi, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sdi, logical block 0
    Aug 14 13:52:06 seer kernel: sdi : READ CAPACITY failed.
    Aug 14 13:52:06 seer kernel: sdi : status=1, message=00, host=0, driver=08
    Aug 14 13:52:06 seer kernel: sd: Current: sense key: Illegal Request
    Aug 14 13:52:06 seer kernel: Add. Sense: Logical unit not supported
    Aug 14 13:52:06 seer kernel:
    Aug 14 13:52:06 seer kernel: sdi: test WP failed, assume Write Enabled
    Aug 14 13:52:06 seer kernel: sdi: asking for cache data failed
    Aug 14 13:52:06 seer kernel: sdi: assuming drive cache: write through
    Aug 14 13:52:06 seer kernel: sdi:end_request: I/O error, dev sdi, sector 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdi, sector 0
    Aug 14 13:52:06 seer last message repeated 4 times
    Aug 14 13:52:06 seer kernel: Dev sdi: unable to read RDB block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdi, sector 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdi, sector 0
    Aug 14 13:52:06 seer kernel: unable to read partition table
    Aug 14 13:52:06 seer kernel: SCSI device sdj: 21502464 512-byte hdwr sectors (11009 MB)
    Aug 14 13:52:06 seer kernel: sdj: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdj: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdj: sdj1
    Aug 14 13:52:06 seer kernel: SCSI device sdk: 421476864 512-byte hdwr sectors (215796 MB)
    Aug 14 13:52:06 seer kernel: sdk: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdk: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdk: sdk1
    Aug 14 13:52:06 seer kernel: SCSI device sdl: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdl: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdl: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdl: sdl1
    Aug 14 13:52:06 seer kernel: SCSI device sdm: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdm: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdm: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdm: sdm1
    Aug 14 13:52:06 seer kernel: SCSI device sdn: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdn: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdn: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdn: sdn1
    Aug 14 13:52:06 seer kernel: SCSI device sdo: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdo: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdo: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdo: sdo1
    Aug 14 13:52:06 seer kernel: SCSI device sdp: 2107390464 512-byte hdwr sectors (1078984 MB)
    Aug 14 13:52:06 seer kernel: sdp: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdp: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdp: sdp1
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdq, sector 0
    Aug 14 13:52:06 seer kernel: sdq : READ CAPACITY failed.
    Aug 14 13:52:06 seer kernel: sdq : status=1, message=00, host=0, driver=08
    Aug 14 13:52:06 seer kernel: sd: Current: sense key: Illegal Request
    Aug 14 13:52:06 seer kernel: Add. Sense: Logical unit not supported
    Aug 14 13:52:06 seer kernel:
    Aug 14 13:52:06 seer kernel: sdq: test WP failed, assume Write Enabled
    Aug 14 13:52:06 seer kernel: sdq: asking for cache data failed
    Aug 14 13:52:06 seer kernel: sdq: assuming drive cache: write through
    Aug 14 13:52:06 seer kernel: sdq:end_request: I/O error, dev sdq, sector 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdq, sector 0
    Aug 14 13:52:06 seer last message repeated 5 times
    Aug 14 13:52:06 seer kernel: Dev sdq: unable to read RDB block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdq, sector 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdq, sector 0
    Aug 14 13:52:06 seer kernel: unable to read partition table
    Aug 14 13:52:06 seer kernel: SCSI device sdr: 21502464 512-byte hdwr sectors (11009 MB)
    Aug 14 13:52:06 seer kernel: sdr: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdr: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdr: sdr1
    Aug 14 13:52:06 seer kernel: SCSI device sds: 421476864 512-byte hdwr sectors (215796 MB)
    Aug 14 13:52:06 seer kernel: sds: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sds: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sds: sds1
    Aug 14 13:52:06 seer kernel: SCSI device sdt: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdt: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdt: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdt: sdt1
    Aug 14 13:52:06 seer kernel: SCSI device sdu: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdu: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdu: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdu: sdu1
    Aug 14 13:52:06 seer kernel: SCSI device sdv: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdv: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdv: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdv: sdv1
    Aug 14 13:52:06 seer kernel: SCSI device sdw: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdw: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdw: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdw: sdw1
    Aug 14 13:52:06 seer kernel: SCSI device sdx: 2107390464 512-byte hdwr sectors (1078984 MB)
    Aug 14 13:52:06 seer kernel: sdx: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdx: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdx: sdx1
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdy, sector 0
    Aug 14 13:52:06 seer kernel: sdy : READ CAPACITY failed.
    Aug 14 13:52:06 seer kernel: sdy : status=1, message=00, host=0, driver=08
    Aug 14 13:52:06 seer kernel: sd: Current: sense key: Illegal Request
    Aug 14 13:52:06 seer kernel: Add. Sense: Logical unit not supported
    Aug 14 13:52:06 seer kernel:
    Aug 14 13:52:06 seer kernel: sdy: test WP failed, assume Write Enabled
    Aug 14 13:52:06 seer kernel: sdy: asking for cache data failed
    Aug 14 13:52:06 seer kernel: sdy: assuming drive cache: write through
    Aug 14 13:52:06 seer kernel: sdy:end_request: I/O error, dev sdy, sector 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdy, sector 0
    Aug 14 13:52:06 seer last message repeated 5 times
    Aug 14 13:52:06 seer kernel: Dev sdy: unable to read RDB block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdy, sector 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdy, sector 0
    Aug 14 13:52:06 seer kernel: unable to read partition table
    Aug 14 13:52:06 seer kernel: SCSI device sdz: 21502464 512-byte hdwr sectors (11009 MB)
    Aug 14 13:52:06 seer kernel: sdz: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdz: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdz: sdz1
    Aug 14 13:52:06 seer kernel: SCSI device sdaa: 421476864 512-byte hdwr sectors (215796 MB)
    Aug 14 13:52:06 seer kernel: sdaa: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdaa: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdaa: sdaa1
    Aug 14 13:52:06 seer kernel: SCSI device sdab: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdab: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdab: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdab: sdab1
    Aug 14 13:52:06 seer kernel: SCSI device sdac: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdac: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdac: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdac: sdac1
    Aug 14 13:52:06 seer kernel: SCSI device sdad: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdad: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdad: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdad: sdad1
    Aug 14 13:52:06 seer kernel: SCSI device sdae: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdae: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdae: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdae: sdae1
    Aug 14 13:52:06 seer kernel: SCSI device sdaf: 2107390464 512-byte hdwr sectors (1078984 MB)
    Aug 14 13:52:06 seer kernel: sdaf: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdaf: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdaf: sdaf1
    Aug 14 13:52:06 seer kernel: scsi_wr_disk: unknown partition table
    Aug 14 13:52:07 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:07 seer kernel: end_request: I/O error, dev sdi, sector 0
    Aug 14 13:52:07 seer kernel: end_request: I/O error, dev sdq, sector 0
    Aug 14 13:52:07 seer kernel: end_request: I/O error, dev sdy, sector 0
    Here's some extra info:
    # /sbin/blkid | grep asm
    /dev/sdc1: LABEL="ORCL1" TYPE="oracleasm"
    /dev/sdk1: LABEL="ORCL1" TYPE="oracleasm"
    /dev/sds1: LABEL="ORCL1" TYPE="oracleasm"
    /dev/sdaa1: LABEL="ORCL1" TYPE="oracleasm"
    /dev/dpdb1: LABEL="ORCL1" TYPE="oracleasm"
    I have learned that by excluding devices in the oracleasm configuration file I eliminate those I/O errors in /var/log/messages
    # cat /etc/sysconfig/oracleasm
    # This is a configuration file for automatic loading of the Oracle
    # Automatic Storage Management library kernel driver. It is generated
    # By running /etc/init.d/oracleasm configure. Please use that method
    # to modify this file
    # ORACLEASM_ENABLED: 'true' means to load the driver on boot.
    ORACLEASM_ENABLED=true
    # ORACLEASM_UID: Default user owning the /dev/oracleasm mount point.
    ORACLEASM_UID=oracle
    # ORACLEASM_GID: Default group owning the /dev/oracleasm mount point.
    ORACLEASM_GID=oinstall
    # ORACLEASM_SCANBOOT: 'true' means scan for ASM disks on boot.
    ORACLEASM_SCANBOOT=true
    # ORACLEASM_SCANORDER: Matching patterns to order disk scanning
    ORACLEASM_SCANORDER="dp sd"
    # ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
    ORACLEASM_SCANEXCLUDE="sdc sdk sds sdaa sda"
    # ls -la /dev/oracleasm/disks/
    total 0
    drwxr-xr-x 1 root root 0 Aug 14 10:47 .
    drwxr-xr-x 4 root root 0 Aug 13 15:32 ..
    brw-rw---- 1 oracle oinstall 251, 33 Aug 14 13:46 ORCL1
    Now I can go into dbca to create the ASM instance, which starts up fine... I create a new diskgroup, see ORCL1 as a provisioned ASM disk, select it... click OK...
    CRASH!!! The box hangs and I have to reboot it....
    I have gotten myself to exactly the same point right before clicking OK and here is what is in the ASM alertlog so far
    Fri Aug 14 14:42:02 2009
    Starting ORACLE instance (normal)
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Picked latch-free SCN scheme 3
    Using LOG_ARCHIVE_DEST_1 parameter default value as /u01/app/oracle/product/11.1.0/db_1/dbs/arch
    Autotune of undo retention is turned on.
    IMODE=BR
    ILAT =0
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    Starting up ORACLE RDBMS Version: 11.1.0.6.0.
    Using parameter settings in server-side spfile /u01/app/oracle/product/11.1.0/db_1/dbs/spfile+ASM.ora
    System parameters with non-default values:
    large_pool_size = 12M
    instance_type = "asm"
    diagnostic_dest = "/u01/app/oracle"
    Fri Aug 14 14:42:04 2009
    PMON started with pid=2, OS id=3300
    Fri Aug 14 14:42:04 2009
    VKTM started with pid=3, OS id=3302 at elevated priority
    VKTM running at (20)ms precision
    Fri Aug 14 14:42:04 2009
    DIAG started with pid=4, OS id=3306
    Fri Aug 14 14:42:04 2009
    PSP0 started with pid=5, OS id=3308
    Fri Aug 14 14:42:04 2009
    DSKM started with pid=6, OS id=3310
    Fri Aug 14 14:42:04 2009
    DIA0 started with pid=7, OS id=3312
    Fri Aug 14 14:42:04 2009
    MMAN started with pid=8, OS id=3314
    Fri Aug 14 14:42:04 2009
    DBW0 started with pid=9, OS id=3316
    Fri Aug 14 14:42:04 2009
    LGWR started with pid=6, OS id=3318
    Fri Aug 14 14:42:04 2009
    CKPT started with pid=10, OS id=3320
    Fri Aug 14 14:42:04 2009
    SMON started with pid=11, OS id=3322
    Fri Aug 14 14:42:04 2009
    RBAL started with pid=12, OS id=3324
    Fri Aug 14 14:42:04 2009
    GMON started with pid=13, OS id=3326
    ORACLE_BASE from environment = /u01/app/oracle
    Fri Aug 14 14:42:04 2009
    SQL> ALTER DISKGROUP ALL MOUNT
    Fri Aug 14 14:42:41 2009
    At this point I don't want to click OK until I am sure someone is in the office to reboot the machine manually if I hang it again.... I hung it twice yesterday; however, I did not have the devices excluded in the oracleasm configuration file then, as I do now.
    Well, clicking OK hung it again, and I am waiting to get back in to see what new information might be gleaned.
    Does anyone have any ideas on what to check or where to look? Will update more once I can log back in.

    Hi Mark,
    It looks like something is not correct with your raw device partition based on the error messages:
    Aug 14 13:52:06 seer kernel: Add. Sense: Logical unit not supported
    Aug 14 13:52:06 seer kernel:
    Aug 14 13:52:06 seer kernel: sda: test WP failed, assume Write Enabled
    Aug 14 13:52:06 seer kernel: sda: asking for cache data failed
    Aug 14 13:52:06 seer kernel: sda: assuming drive cache: write through
    Aug 14 13:52:06 seer kernel: sda:end_request: I/O error, dev sda, sector 0
    It could be a number of things. I would check with your vendor and Oracle Support to see if the multipath software driver is supported and if there is a potential workaround for ASM. Sorry this is not quite the solution, but it's what jumps to mind based on issues with multipath software and storage vendors for ASM with Linux and Oracle. Have you checked the validation matrix available on Metalink?
    Cheers,
    Ben
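
    For completeness, a sketch of the usual ASMLib-with-multipath arrangement, building on the config file already shown above: scan the multipath ("dp") devices first and exclude the underlying sd paths wholesale, rather than listing them one by one:
    # edit /etc/sysconfig/oracleasm (or rerun /etc/init.d/oracleasm configure), then rescan
    ORACLEASM_SCANORDER="dp"
    ORACLEASM_SCANEXCLUDE="sd"
    # /etc/init.d/oracleasm scandisks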

  • Add LUN into ASM in RAC (Solaris)

    I have a 2-node Oracle RAC with ASM. I have to add a new LUN to ASM to increase space in an ASM disk group. I have done the following on node 1:
    Ran format
    Created the partition on slice 4 of the new device file corresponding to the LUN
    Created the block and character device files, like RAC_ORACLE_data_05, under /dev/dsk/oracle and /dev/rdsk/oracle (where the other device files used by ASM also reside), using the major and minor numbers of the new LUN
    Changed the permissions to oracle:dba
    The new LUN is now visible through ASM on node 1 as RAC_ORACLE_data_05
    My question is what steps I need to carry out on node 2 of the cluster
    The disk is visible on node 2 as well
    The disk is already showing up with the partitions as created through node 1
    Do I have to create the device file under /dev/dsk/oracle with the major/minor numbers as shown on node 2 for the LUN?
    TIA
    Ravinder

    I almost never use Solaris (the last Oracle DB I managed on Solaris was 10 years ago). And as this is not a Solaris O/S forum space, you should not expect quick answers to your Solaris O/S questions (which have little to do with ASM).
    A comment though on what needs to be done at o/s level to satisfy RAC/ASM requirements.
    ASM does not care for device names. Device names do not need to be persistent before and after reboots. Device names do not need to be persistent across RAC nodes in a cluster.
    ASM simply needs to
    a) be able to discover the device
    b) be able to open the device
    This means that the device name needs to match the ASM disk discovery string (or vice versa) across all RAC nodes/ASM instances, and that the device permissions need to allow read/write access to ASM.
    ASM enumerates the devices it can I/O using its discovery string. For each device, ASM opens the device and reads the ASM disk label. If there is a valid label, ASM knows the disk name, status, and the failgroup and diskgroup this disk belongs to. If there is no label, ASM treats it as a candidate disk.
    So from an o/s perspective - a cluster LUN/disk needs to be visible on all nodes, in order to use that disk for RAC/grid storage. Actual device name is not important.
    So whatever you did on node 1 to make that new LUN available to ASM, you need to do on node 2 and others.
    I have 2 bash scripts I feel are essential to managing a RAC. The 1st script enables me to execute a command on all RAC nodes. The 2nd script enables me to copy/distribute a local file to all other RAC nodes.
    So using these scripts and dealing with Linux as o/s, I would determine whether the new disk/LUN is seen by all RAC nodes, and whether the permissions are correct for ASM usage.
    If not, I will use the 2nd script to distribute the config file(s) needed to configure the other RAC nodes with the same changes (on Linux this typically would be /etc/multipath.conf). And then use the 1st script to enable those changes on all RAC nodes (e.g. by restarting multipathd).
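
    A minimal sketch of the first of those two scripts (the node names are hypothetical; it simply loops ssh over the cluster):
    #!/bin/bash
    # runall.sh - run the given command on every RAC node
    NODES="rac1 rac2"
    for node in $NODES; do
      echo "== $node =="
      ssh "$node" "$@"
    done
    Usage would be something like: ./runall.sh ls -l /dev/rdsk/oracle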

  • How to identify which disks an ASM diskgroup is attached to?

    Hi Guys,
    In 11gR2 RAC, how do I identify which ASM diskgroup is attached to which disks (the OS is RHEL 5.4)?
    We can list ASM disks by
    #oracleasm listdisks
    but this command doesn't show which disks are assigned to which ASM diskgroup.
    Even checking the location of the OCR and voting disks only shows the diskgroup name, not the actual disks:
    $ocrcheck
    $crsctl query css votedisk
    (like in 10gR2 RAC, where we make entries in the /etc/udev/rules.d/60-raw.rules file for raw mapping of the OCR, voting disks and other ASM diskgroups)
    Please help me. At one of the client sites, I can see so many LUNs assigned to the server, and I can't get an exact idea of which disks have been used for the OCR, voting disks and the DATA diskgroup.
    Thanks,
    Manish

    Well, for this you can use oracleasm querydisk. Using this you can identify whether or not a device is marked for ASM, as in the example below.
    [oracle@localhost init.d]$ sqlplus "/as sysdba"
    SQL*Plus: Release 10.2.0.4.0 - Production on Thu Jun 3 11:52:12 2010
    Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL> select path from v$asm_disk;
    PATH
    /dev/oracleasm/disks/VOL2
    /dev/oracleasm/disks/VOL1
    SQL> exit;
    [oracle@localhost init.d]$ su
    Password:
    [root@localhost init.d]# /sbin/fdisk -l
    Disk /dev/sda: 80.0 GB, 80000000000 bytes
    255 heads, 63 sectors/track, 9726 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *           1        1305    10482381   83  Linux
    /dev/sda2            1306        9401    65031120   83  Linux
    /dev/sda3            9402        9662     2096482+  82  Linux swap / Solaris
    /dev/sda4            9663        9726      514080    5  Extended
    /dev/sda5            9663        9726      514048+  83  Linux
    Disk /dev/sdb: 80.0 GB, 80026361856 bytes
    255 heads, 63 sectors/track, 9729 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1               1        4859    39029886   83  Linux
    /dev/sdb2            4860        9729    39118275   83  Linux
    [root@localhost init.d]# ./oracleasm querydisk /dev/sdb1
    Device "/dev/sdb1" is marked an ASM disk with the label "VOL1"
    [root@localhost init.d]# ./oracleasm querydisk /dev/sdb2
    Device "/dev/sdb2" is marked an ASM disk with the label "VOL2"
    [root@localhost init.d]# ./oracleasm querydisk /dev/sda1
    Device "/dev/sda1" is not marked as an ASM disk
    [root@localhost init.d]#
    Also on Windows:
    C:\Documents and Settings\comp>asmtool -list
    NTFS                             \Device\Harddisk0\Partition1           140655M
    ORCLDISKDATA1                    \Device\Harddisk0\Partition2             4102M
    ORCLDISKDATA2                    \Device\Harddisk0\Partition3             4102M
    NTFS                             \Device\Harddisk0\Partition4           152617M
    C:\Documents and Settings\comp>
    answered by chinar.
    Refer: how to identify which raw device disk is named VOL1 in ASM from the OS level
    Happy New Year.
    regards,
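
    The diskgroup-to-disk mapping itself can be read straight from the ASM instance, whether or not ASMLib is in use; a sketch joining the two views:
    SQL> select dg.name dg_name, d.name disk_name, d.path
         from v$asm_diskgroup dg, v$asm_disk d
         where d.group_number = dg.group_number
         order by dg.name;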

  • Oracle 11gR2 (2 node) RAC Architecture requirements for ASM implementation

    My architecture is the following:
    * RAC1 server running Red Hat Enterprise Linux 5.5 64-bit
    * RAC2 server running Red Hat Enterprise Linux 5.5 64-bit
    * NFS server (SAN) running Red Hat Enterprise Linux 5.5 64-bit
    - Exported file systems for shared data, OCR and voting disks to nodes RAC1 and RAC2
    I've installed the ASM packages onto my NFS server (SAN) and then realized that I still need a way to share the storage. It is my understanding that Oracle ASM for Red Hat Enterprise Linux 5.5 64-bit DOES NOT PROVIDE shared storage to the other nodes (RAC1 and RAC2). It seems that I need something else?
    I've implemented Oracle RAC 11gR2 using NFS and wanted to try building Oracle RAC using ASM in my playground (home servers).
    Does anyone have any ideas on how I might be able to use ASM without having true network shared storage? I'm using NFS because it is part of Red Hat Enterprise Linux 5.5.
    Any ideas are appreciated!

    Hi,
    I wouldn't recommend NFS for RAC in an enterprise solution. However, if you are doing this for your playpen, then this is what you can do.
    Once the NFS shares are presented to the servers, you need to mount your filesystems on the RAC servers to access the NFS shares.
    e.g., if the following NFS shares are presented to the RAC nodes:
    /mnt/disk1
    /mnt/disk2
    And let's say you want to have your OCR and voting disks on /mnt/disk1 and the database on /mnt/disk2.
    Firstly, you need to mount the shares on the RAC nodes as follows. I will mount /mnt/disk1 as /u01/shared for my OCR and voting disks, and /mnt/disk2 as /u01/asmdata, by updating my fstab file as follows:
    nfs_server:/mnt/DISK1 /u01/shared nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,noac,vers=3,timeo=600 0 0
    nfs_server:/mnt/DISK2 /u01/asmdata nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600 0 0
    Mount the FS on both servers and then create the ASM disk files (zero-filled files, e.g. created with dd) in each filesystem. You don't need to run oracleasm to create disks on them; rather, change your ASM discovery path to the location where you created the files and mount your ASM diskgroups.
    That is how I got my 11g RAC setup.
    Hope that helps.
    Pranilesh
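
    A sketch of that file-based approach (the sizes, paths and ownership are hypothetical and depend on your install owner; zero-filled files created with dd are the standard way to back ASM disks on NFS):
    # dd if=/dev/zero of=/u01/asmdata/asmdisk01 bs=1M count=10240   (a 10GB ASM disk file)
    # chown oracle:dba /u01/asmdata/asmdisk01
    SQL> alter system set asm_diskstring = '/u01/asmdata/asmdisk*' scope=spfile;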

  • Best way to have resizeable LUNs for datafiles - non-RAC system

    All,
    (thanks Avi for the help so far - I know it's a holiday there, so I'll wait for your return and see if any other users can chip in also)
    one of our systems (many on the go here) is being provided by an external vendor - I am reviewing their design and I have some concerns about the LUNs to house the datafiles:
    they don't want to pre-assign full-size LUNs - sized for future growth - and want more flexibility to give each environment less disk space in the beginning and allocate more as each environment grows
    they are not going to use RAC (the system has nowhere near the uptime/capacity requirements - and we are removing it, as it has caused enormous issues with the previous vendors and their lack of skills with it - we want simplicity)
    they have said they do not want to use ASM (I have asked for that previously; I think they have never used it before - I may be able to change their minds on this, but they are saying that as it's not RAC, it's not needed)
    but they are wondering how to give smaller LUNs to each environment and increase the size as they grow - they don't want to forever be adding /u0X /u0Y /u0Z extra filesystems (E-Business Suite Rapid Clone doesn't like working with many filesystems anyway, and I find it inelegant to have so many mount points)
    they have suggested using large OVM repos and serving the data filesystems out of those (I have told them to use the repos just for the guest OSs and to use directly physically-attached LUNs for datafiles (5TB of them))
    now they have suggested creating a large LUN (large enough for many environments at the same time [dev / test1 / test2 etc]) .... and putting OCFS2 on it so that they can mount it on all the domU/guests and allocate space as needed out of that:
    so that they have guests/VMs (DEV1 - DEV2 - TEST1, say) (all separate VMs), all mounting the same OCFS2 cluster filesystem (as /u01 maybe), and they can share that for the datafiles under a separate dir, so that each DB VM would see:
    /u01/ and as subdirectories to that DEV1 DEV2 TEST1 so:
    /u01/DEV1
    /u01/DEV2
    /u01/TEST1
    and only use the right directory for each guest's datafiles (thus sharing the space in /u01 (the big LUN) as needed per environment)....
    I really don't like that, as each guest is going to have the same Oracle unix user details and be able to write to each other's dirs - I'd prefer dedicated LUNs for each VM, not LUNs mounted to many VMs
    so I am looking for a way to suggest something better....
    should I just insist on ASM (but this is a risk as I fear they are not experienced with it)
    or go with OEL/RHEL LVM and standard ext filesystems that can be extended - what are the risks with this? (On A Linux Guest For OVM, Which Partitions Can Be LVM? [ID 1080783.1]) - seems to say there is little performance impact
    or is there another option?
    Thanks all
    Martin

    Martin, what route did you end up going?
    We are about to deploy several hundred OEL VMs that are going to run non-RAC database instances. We don't plan to use ASM either. Our plan right now is to use one large 3TB LUN virtual disk to carve out the operating-system space for the VMs, and then have a separate physically attached LUN for each VM that will host a /u01 filesystem using LVM. I have concerns with this, as we don't know how much space /u01 will ultimately need, and if we end up having to extend /u01 on all of these VMs, that sounds like it will be messy. Right now I've got 400 separate 25GB LUNs presented to all of my OVM servers that we plan to use for /u01 filesystems.
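
    For what it's worth, growing an LVM-backed /u01 after presenting another LUN to a VM is a short sequence; a sketch with hypothetical device and volume names (ext3, grown online):
    # pvcreate /dev/xvdc                        (the newly presented LUN)
    # vgextend vg_u01 /dev/xvdc
    # lvextend -L +25G /dev/vg_u01/lv_u01
    # resize2fs /dev/vg_u01/lv_u01              (online grow of the ext3 filesystem)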

  • What is the recommended number of clients per Mac server? Also what are some recommended specs when purchasing an Apple machine that will have Mac OS X server installed?

    What is the recommended number of clients per Mac server? Also, what are some recommended specs when purchasing an Apple machine that will have Mac OS X Server installed? We have around 300 clients that need to be enrolled on the Mac server. I want to know the recommended number of clients a Mac server should handle. Also, what are some recommended specs to make sure the server will run flawlessly?

    Hello cpreasbeck,
    Thank you for contacting Apple Support Communities.
    I was able to find the following transition guide for Xserve that provides some workload guidance to determine performance when planning a server deployment.
    Transition Guide Xserve
    http://images.apple.com/xserve/pdf/L422277A_Xserve_Guide.pdf
    On page 9, Performance there is a chart that provides maximum numbers of connected users for various activities such as file sharing, mail, web, calendar, directory services and Time Machine and the CPU used as a server (Xserve, Mac Pro, Mac Mini). This information is a bit dated as the referenced software is Snow Leopard Server (OS X 10.6), and the hardware is older also, but it should give you a general idea of what you might need to look for.
    Regards,
    Jeff D.

  • How to create a new ASM Diskgroup in Oracle 10g RAC?

    Hi,
    Our env is Oracle 10g R2 RAC on HP-UX. I want to create a new ASM Diskgroup. Please let me know if the following steps are ok to create a new ASM Diskgroup.
    1. Ensure the new Disk is visible in both ASM instances in RAC (v$asm_disk) and the header_status is 'CANDIDATE'
    2. From Node 1 ASM Instance issue the create diskgroup command.
    SQL> create diskgroup DATA2 external redundancy disk '/dev/rdsk/c4t0d5';
    3. Query v$asm_diskgroup and make sure the Diskgroup is created.
    4. Mount the DATA2 Diskgroup from Node 2 ASM Instance.
    5. Query v$asm_diskgroup and make sure the Diskgroup is visible from Node2 ASM instance.
    6. Ensure the header_status is 'MEMBER'.
    Rgds,

    Correct.
    Instead of using the device file '/dev/rdsk/c4t0d5', you can create an alternate device file using mknod, called "asm_disk_xg" for example.
    check here: http://download.oracle.com/docs/cd/B19306_01/install.102/b14202/storage.htm#CDEECIHI
    hth
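
    A sketch of steps 4-6 as run from the node 2 ASM instance (same diskgroup name as above):
    SQL> alter diskgroup DATA2 mount;
    SQL> select name, state from v$asm_diskgroup where name = 'DATA2';
    SQL> select path, header_status from v$asm_disk where header_status = 'MEMBER';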
