Reformatting ASM disks without ASM available

Hello Everyone,
In a test environment I recently built a single-instance Oracle 11g database using ASM to manage the database storage disks. The O/S is HP-UX v11.31 and the ASM disks are LUNs presented from a SAN. For a number of reasons I have had to rebuild this environment from scratch: the O/S partition has been rebuilt and I have installed Oracle 11g again. However, when I now come to create and configure my ASM instance, it does not find any candidate disks. I believe this is because those disks still carry ASM formatting from the previous 'life' of the environment, and the diskgroups were not dropped, nor the disks 'wiped', prior to rebuilding the O/S partition.
My problem is therefore: how can I 'reformat' the disks for reuse so that they appear as candidates during ASM configuration, given that I no longer have any ASM instance in existence? I'm guessing the answer may be to run some HP-UX O/S-level commands to do this?
Any advice or suggestion would be much appreciated !
Thanks,
Shaun
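
The reply Shaun refers to below is not preserved in this thread, but the classic fix is to zero out the ASM disk header at OS level with dd. A minimal sketch, assuming HP-UX 11.31 agile device naming (verify the actual device files with ioscan first; this destroys everything on the LUN):
# list the disk devices to identify the old ASM LUNs
ioscan -fnNC disk
# zero the header area of each old ASM disk (device name below is an example only)
dd if=/dev/zero of=/dev/rdisk/disk5 bs=1024k count=100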

Hi,
Thanks very much for this. I've run the command against the affected SAN disks and it has worked. All disks now appear as candidates when I run ASM configuration through OUI.
Thanks again,
Shaun

Similar Messages

  • Please Help - When I try to add ASM Disk to ASM Diskgroup it crashes Server

    We are using a Pillar SAN and have LUNs created, and we are using the following multipath device: (I'm a DBA more than anything else... but I am rather familiar with Linux.... SAN hardware, not so much)
    Device       Size   Mount Point
    /dev/dpda1   11G    /u01
    The above device is working fine... Below are the ASM disks being created:
    Device       Size   Oracle ASM Disk Name
    /dev/dpdb1   198G   ORCL1
    /dev/dpdc1   21G    SIRE1
    /dev/dpdd1   21G    CART1
    /dev/dpde1   21G    SRTS1
    /dev/dpdf1   21G    CRTT1
    I try to create the first ASM disk:
    /etc/init.d/oracleasm createdisk ORCL1 /dev/dpdb1
    Marking disk "ORCL1" as an ASM disk: [FAILED]
    So I check the oracleasm log:
    # cat /var/log/oracleasm
    Device "/dev/dpdb1" is not a partition
    I did some research and found that this is a common problem with multipath devices; to work around it you have to use asmtool:
    # /usr/sbin/asmtool -C -l /dev/oracleasm -n ORCL1 -s /dev/dpdb1 -a force=yes
    asmtool: Device "/dev/dpdb1" is not a partition
    asmtool: Continuing anyway
    now I scan and list the disks
    # /etc/init.d/oracleasm scandisks
    Scanning the system for Oracle ASMLib disks: [  OK  ]
    # /etc/init.d/oracleasm listdisks
    ORCL1
    Here is what's going on in /var/log/messages when I run the oracleasm scandisks command:
    # date
    Fri Aug 14 13:51:58 MST 2009
    # /etc/init.d/oracleasm scandisks
    Scanning the system for Oracle ASMLib disks: [  OK  ]
    # cat /var/log/messages | grep "Aug 14 13:5"
    Aug 14 13:52:06 seer kernel: dpdb: dpdb1
    Aug 14 13:52:06 seer kernel: dpdc: dpdc1
    Aug 14 13:52:06 seer kernel: dpdd: dpdd1
    Aug 14 13:52:06 seer kernel: dpde: dpde1
    Aug 14 13:52:06 seer kernel: dpdf: dpdf1
    Aug 14 13:52:06 seer kernel: dpdg: dpdg1
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: printk: 30 messages suppressed.
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: sda : READ CAPACITY failed.
    Aug 14 13:52:06 seer kernel: sda : status=1, message=00, host=0, driver=08
    Aug 14 13:52:06 seer kernel: sd: Current: sense key: Illegal Request
    Aug 14 13:52:06 seer kernel: Add. Sense: Logical unit not supported
    Aug 14 13:52:06 seer kernel:
    Aug 14 13:52:06 seer kernel: sda: test WP failed, assume Write Enabled
    Aug 14 13:52:06 seer kernel: sda: asking for cache data failed
    Aug 14 13:52:06 seer kernel: sda: assuming drive cache: write through
    Aug 14 13:52:06 seer kernel: sda:end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: Dev sda: unable to read RDB block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: unable to read partition table
    Aug 14 13:52:06 seer kernel: SCSI device sdb: 21502464 512-byte hdwr sectors (11009 MB)
    Aug 14 13:52:06 seer kernel: sdb: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdb: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdb: sdb1
    Aug 14 13:52:06 seer kernel: SCSI device sdc: 421476864 512-byte hdwr sectors (215796 MB)
    Aug 14 13:52:06 seer kernel: sdc: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdc: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdc: sdc1
    Aug 14 13:52:06 seer kernel: SCSI device sdd: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdd: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdd: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdd: sdd1
    Aug 14 13:52:06 seer kernel: SCSI device sde: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sde: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sde: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sde: sde1
    Aug 14 13:52:06 seer kernel: SCSI device sdf: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdf: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdf: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdf: sdf1
    Aug 14 13:52:06 seer kernel: SCSI device sdg: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdg: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdg: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdg: sdg1
    Aug 14 13:52:06 seer kernel: SCSI device sdh: 2107390464 512-byte hdwr sectors (1078984 MB)
    Aug 14 13:52:06 seer kernel: sdh: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdh: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdh: sdh1
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdi, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sdi, logical block 0
    Aug 14 13:52:06 seer kernel: sdi : READ CAPACITY failed.
    Aug 14 13:52:06 seer kernel: sdi : status=1, message=00, host=0, driver=08
    Aug 14 13:52:06 seer kernel: sd: Current: sense key: Illegal Request
    Aug 14 13:52:06 seer kernel: Add. Sense: Logical unit not supported
    Aug 14 13:52:06 seer kernel:
    Aug 14 13:52:06 seer kernel: sdi: test WP failed, assume Write Enabled
    Aug 14 13:52:06 seer kernel: sdi: asking for cache data failed
    Aug 14 13:52:06 seer kernel: sdi: assuming drive cache: write through
    Aug 14 13:52:06 seer kernel: sdi:end_request: I/O error, dev sdi, sector 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdi, sector 0
    Aug 14 13:52:06 seer last message repeated 4 times
    Aug 14 13:52:06 seer kernel: Dev sdi: unable to read RDB block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdi, sector 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdi, sector 0
    Aug 14 13:52:06 seer kernel: unable to read partition table
    Aug 14 13:52:06 seer kernel: SCSI device sdj: 21502464 512-byte hdwr sectors (11009 MB)
    Aug 14 13:52:06 seer kernel: sdj: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdj: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdj: sdj1
    Aug 14 13:52:06 seer kernel: SCSI device sdk: 421476864 512-byte hdwr sectors (215796 MB)
    Aug 14 13:52:06 seer kernel: sdk: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdk: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdk: sdk1
    Aug 14 13:52:06 seer kernel: SCSI device sdl: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdl: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdl: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdl: sdl1
    Aug 14 13:52:06 seer kernel: SCSI device sdm: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdm: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdm: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdm: sdm1
    Aug 14 13:52:06 seer kernel: SCSI device sdn: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdn: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdn: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdn: sdn1
    Aug 14 13:52:06 seer kernel: SCSI device sdo: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdo: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdo: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdo: sdo1
    Aug 14 13:52:06 seer kernel: SCSI device sdp: 2107390464 512-byte hdwr sectors (1078984 MB)
    Aug 14 13:52:06 seer kernel: sdp: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdp: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdp: sdp1
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdq, sector 0
    Aug 14 13:52:06 seer kernel: sdq : READ CAPACITY failed.
    Aug 14 13:52:06 seer kernel: sdq : status=1, message=00, host=0, driver=08
    Aug 14 13:52:06 seer kernel: sd: Current: sense key: Illegal Request
    Aug 14 13:52:06 seer kernel: Add. Sense: Logical unit not supported
    Aug 14 13:52:06 seer kernel:
    Aug 14 13:52:06 seer kernel: sdq: test WP failed, assume Write Enabled
    Aug 14 13:52:06 seer kernel: sdq: asking for cache data failed
    Aug 14 13:52:06 seer kernel: sdq: assuming drive cache: write through
    Aug 14 13:52:06 seer kernel: sdq:end_request: I/O error, dev sdq, sector 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdq, sector 0
    Aug 14 13:52:06 seer last message repeated 5 times
    Aug 14 13:52:06 seer kernel: Dev sdq: unable to read RDB block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdq, sector 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdq, sector 0
    Aug 14 13:52:06 seer kernel: unable to read partition table
    Aug 14 13:52:06 seer kernel: SCSI device sdr: 21502464 512-byte hdwr sectors (11009 MB)
    Aug 14 13:52:06 seer kernel: sdr: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdr: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdr: sdr1
    Aug 14 13:52:06 seer kernel: SCSI device sds: 421476864 512-byte hdwr sectors (215796 MB)
    Aug 14 13:52:06 seer kernel: sds: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sds: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sds: sds1
    Aug 14 13:52:06 seer kernel: SCSI device sdt: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdt: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdt: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdt: sdt1
    Aug 14 13:52:06 seer kernel: SCSI device sdu: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdu: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdu: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdu: sdu1
    Aug 14 13:52:06 seer kernel: SCSI device sdv: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdv: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdv: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdv: sdv1
    Aug 14 13:52:06 seer kernel: SCSI device sdw: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdw: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdw: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdw: sdw1
    Aug 14 13:52:06 seer kernel: SCSI device sdx: 2107390464 512-byte hdwr sectors (1078984 MB)
    Aug 14 13:52:06 seer kernel: sdx: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdx: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdx: sdx1
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdy, sector 0
    Aug 14 13:52:06 seer kernel: sdy : READ CAPACITY failed.
    Aug 14 13:52:06 seer kernel: sdy : status=1, message=00, host=0, driver=08
    Aug 14 13:52:06 seer kernel: sd: Current: sense key: Illegal Request
    Aug 14 13:52:06 seer kernel: Add. Sense: Logical unit not supported
    Aug 14 13:52:06 seer kernel:
    Aug 14 13:52:06 seer kernel: sdy: test WP failed, assume Write Enabled
    Aug 14 13:52:06 seer kernel: sdy: asking for cache data failed
    Aug 14 13:52:06 seer kernel: sdy: assuming drive cache: write through
    Aug 14 13:52:06 seer kernel: sdy:end_request: I/O error, dev sdy, sector 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdy, sector 0
    Aug 14 13:52:06 seer last message repeated 5 times
    Aug 14 13:52:06 seer kernel: Dev sdy: unable to read RDB block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdy, sector 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdy, sector 0
    Aug 14 13:52:06 seer kernel: unable to read partition table
    Aug 14 13:52:06 seer kernel: SCSI device sdz: 21502464 512-byte hdwr sectors (11009 MB)
    Aug 14 13:52:06 seer kernel: sdz: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdz: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdz: sdz1
    Aug 14 13:52:06 seer kernel: SCSI device sdaa: 421476864 512-byte hdwr sectors (215796 MB)
    Aug 14 13:52:06 seer kernel: sdaa: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdaa: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdaa: sdaa1
    Aug 14 13:52:06 seer kernel: SCSI device sdab: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdab: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdab: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdab: sdab1
    Aug 14 13:52:06 seer kernel: SCSI device sdac: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdac: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdac: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdac: sdac1
    Aug 14 13:52:06 seer kernel: SCSI device sdad: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdad: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdad: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdad: sdad1
    Aug 14 13:52:06 seer kernel: SCSI device sdae: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdae: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdae: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdae: sdae1
    Aug 14 13:52:06 seer kernel: SCSI device sdaf: 2107390464 512-byte hdwr sectors (1078984 MB)
    Aug 14 13:52:06 seer kernel: sdaf: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdaf: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdaf: sdaf1
    Aug 14 13:52:06 seer kernel: scsi_wr_disk: unknown partition table
    Aug 14 13:52:07 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:07 seer kernel: end_request: I/O error, dev sdi, sector 0
    Aug 14 13:52:07 seer kernel: end_request: I/O error, dev sdq, sector 0
    Aug 14 13:52:07 seer kernel: end_request: I/O error, dev sdy, sector 0
    Here's some extra info:
    # /sbin/blkid | grep asm
    /dev/sdc1: LABEL="ORCL1" TYPE="oracleasm"
    /dev/sdk1: LABEL="ORCL1" TYPE="oracleasm"
    /dev/sds1: LABEL="ORCL1" TYPE="oracleasm"
    /dev/sdaa1: LABEL="ORCL1" TYPE="oracleasm"
    /dev/dpdb1: LABEL="ORCL1" TYPE="oracleasm"
    I have learned that by excluding devices in the oracleasm configuration file I eliminate those I/O errors in /var/log/messages:
    # cat /etc/sysconfig/oracleasm
    # This is a configuration file for automatic loading of the Oracle
    # Automatic Storage Management library kernel driver. It is generated
    # by running /etc/init.d/oracleasm configure. Please use that method
    # to modify this file.
    # ORACLEASM_ENABLED: 'true' means to load the driver on boot.
    ORACLEASM_ENABLED=true
    # ORACLEASM_UID: Default user owning the /dev/oracleasm mount point.
    ORACLEASM_UID=oracle
    # ORACLEASM_GID: Default group owning the /dev/oracleasm mount point.
    ORACLEASM_GID=oinstall
    # ORACLEASM_SCANBOOT: 'true' means scan for ASM disks on boot.
    ORACLEASM_SCANBOOT=true
    # ORACLEASM_SCANORDER: Matching patterns to order disk scanning
    ORACLEASM_SCANORDER="dp sd"
    # ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
    ORACLEASM_SCANEXCLUDE="sdc sdk sds sdaa sda"
    # ls -la /dev/oracleasm/disks/
    total 0
    drwxr-xr-x 1 root root 0 Aug 14 10:47 .
    drwxr-xr-x 4 root root 0 Aug 13 15:32 ..
    brw-rw---- 1 oracle oinstall 251, 33 Aug 14 13:46 ORCL1
    Now I can go into dbca to create the ASM instance, which starts up fine... I create a new diskgroup, see ORCL1 as a provisioned ASM disk, select it... Click OK.
    CRASH!!!  The box hangs and I have to reboot it....
    I have gotten myself to exactly the same point right before clicking OK and here is what is in the ASM alertlog so far
    Fri Aug 14 14:42:02 2009
    Starting ORACLE instance (normal)
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Picked latch-free SCN scheme 3
    Using LOG_ARCHIVE_DEST_1 parameter default value as /u01/app/oracle/product/11.1.0/db_1/dbs/arch
    Autotune of undo retention is turned on.
    IMODE=BR
    ILAT =0
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    Starting up ORACLE RDBMS Version: 11.1.0.6.0.
    Using parameter settings in server-side spfile /u01/app/oracle/product/11.1.0/db_1/dbs/spfile+ASM.ora
    System parameters with non-default values:
    large_pool_size = 12M
    instance_type = "asm"
    diagnostic_dest = "/u01/app/oracle"
    Fri Aug 14 14:42:04 2009
    PMON started with pid=2, OS id=3300
    Fri Aug 14 14:42:04 2009
    VKTM started with pid=3, OS id=3302 at elevated priority
    VKTM running at (20)ms precision
    Fri Aug 14 14:42:04 2009
    DIAG started with pid=4, OS id=3306
    Fri Aug 14 14:42:04 2009
    PSP0 started with pid=5, OS id=3308
    Fri Aug 14 14:42:04 2009
    DSKM started with pid=6, OS id=3310
    Fri Aug 14 14:42:04 2009
    DIA0 started with pid=7, OS id=3312
    Fri Aug 14 14:42:04 2009
    MMAN started with pid=8, OS id=3314
    Fri Aug 14 14:42:04 2009
    DBW0 started with pid=9, OS id=3316
    Fri Aug 14 14:42:04 2009
    LGWR started with pid=6, OS id=3318
    Fri Aug 14 14:42:04 2009
    CKPT started with pid=10, OS id=3320
    Fri Aug 14 14:42:04 2009
    SMON started with pid=11, OS id=3322
    Fri Aug 14 14:42:04 2009
    RBAL started with pid=12, OS id=3324
    Fri Aug 14 14:42:04 2009
    GMON started with pid=13, OS id=3326
    ORACLE_BASE from environment = /u01/app/oracle
    Fri Aug 14 14:42:04 2009
    SQL> ALTER DISKGROUP ALL MOUNT
    Fri Aug 14 14:42:41 2009
    At this point I don't want to click OK until I am sure someone is in the office to reboot the machine manually if I hang it again.... I hung it twice yesterday; however, I did not have the devices excluded in the oracleasm configuration file as I do now.
    Edited by: user10193377 on Aug 14, 2009 3:23 PM
    Well, clicking OK hung it again and I am waiting to get back into it, to see what new information might be gleaned.
    Does anyone have any ideas on what to check or where to look? Will update more once I can log back in.

    Hi Mark,
    It looks like something is not correct with your raw device partition, based on the error messages:
    Aug 14 13:52:06 seer kernel: Add. Sense: Logical unit not supported
    Aug 14 13:52:06 seer kernel:
    Aug 14 13:52:06 seer kernel: sda: test WP failed, assume Write Enabled
    Aug 14 13:52:06 seer kernel: sda: asking for cache data failed
    Aug 14 13:52:06 seer kernel: sda: assuming drive cache: write through
    Aug 14 13:52:06 seer kernel: sda:end_request: I/O error, dev sda, sector 0
    It could be a number of things. I would check with your vendor and Oracle Support to see if the multipath software driver is supported and if there is a potential workaround for ASM. Sorry this is not quite the solution, but it's what jumps to mind based on issues with multipath software and storage vendors for ASM with Linux and Oracle. Have you checked the validation matrix available on Metalink?
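    For reference, a common ASMLib arrangement for multipath setups is to scan only the multipath devices and exclude the underlying single-path devices; a hedged sketch, assuming the Pillar devices keep the "dp" prefix shown above:
    # /etc/sysconfig/oracleasm - relevant lines only
    ORACLEASM_SCANORDER="dp"
    ORACLEASM_SCANEXCLUDE="sd"
    # reload the driver and rescan after editing
    /etc/init.d/oracleasm restart
    /etc/init.d/oracleasm scandisks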
    Cheers,
    Ben

  • RAC with ASM and without ASM

    Hi all,
    we are planning to install an 11g RAC instance, active/active, and we are using SAN storage with RAID 10.
    I know ASM is a nice feature, but it needs more maintenance down the road; this is what I see from the manuals and training, for patching and so on, because it is maintained as an instance.
    Why do I need ASM, since I have a SAN and can control mirroring etc. there?
    I need a solid answer here: why use this feature when it can already be covered by another facility like the SAN?
    Best Regards,

    What I have found in the RAC world is that there is maintenance no matter which way you go. A cluster file system will require upgrades, patches, etc. RAW volumes will require extra effort in allocation, etc., as well as increasing the number of files in the database. ASM requires an additional instance on each node to maintain, which is quite simple, and rolling patches in ASM are slowly becoming a reality. I have found that removing the management of RAW volumes is more trouble than the maintenance of the ASM instances, and the added benefits of ASM outweigh the maintenance for sure. I found that the cluster file system maintenance is pretty well a wash.
    As for ASM being widely used: the most recent RAC clusters (last 3) I have built have all been ASM, 1 on HP-UX and 2 on Linux (Red Hat and Oracle Enterprise Linux), and the future clusters coming up that I know of are all going to be ASM as well. While it may be true that a lot of existing RAC environments have not yet gone to ASM, almost all new RAC environments are. It is certainly taking hold. If you look at the effort on a large database to move to ASM from RAW volumes or a cluster file system, it can appear to be a lot of work, and that is true, but in the long run my experience with ASM has been positive, therefore I would not hesitate to recommend that new RAC clusters be built with ASM and that existing clusters have a migration plan in place. As with some cluster file systems like Veritas, GPFS, etc., there is additional cost involved where ASM has none, so moving existing clusters can save $$. RAW volume management may not fall on the DBA, but someone has to manage all those volumes at the SAN level, and that is additional management; it just may not be with the DBA.
    Just my additional 2 cents worth.
    Hope this helps.

  • Recover OCR and VOTE disk after complete corruption of ASM disk groups.

    Hi Gurus,
    I am simulating a recovery situation to perform recovery of the OCR and Vote files after complete corruption of the ASM-related disks and diskgroups. I have set up my environment as follows:
    Environment: RAC
    OS: OEL 5.5 32-bit
    GI Version: 11.2.0.2.0
    ASM Disk groups: +OCR, +DATA
    OCR, Vote Files location: +OCR
    ASM Redundancy: External
    ASM Disks: /dev/asm-disk1, /dev/asm-disk2
    /dev/asm-disk1 - mapped on +OCR
    /dev/asm-disk2 - mapped on +DATA
    With the above configuration in place I have manually corrupted the +OCR and +DATA diskgroups with the dd command. I used this command to completely corrupt the +OCR disk group:
    dd if=/dev/zero of=/dev/asm-disk1
    I have manual backups as well as automatic backups of the OCR and Vote disk. I am not using ASMLib.
    I followed this link:
    http://docs.oracle.com/cd/E11882_01/rac.112/e17264/adminoc.htm#TDPRC237
    When I tried to recover the OCR file, I could not do so, as there is no diskgroup to which ASM can restore the OCR and voting disk. I could not re-create the OCR and DATA diskgroups as I cannot connect to the ASM instance. If you have a solution or workaround for my situation please describe it. That will be greatly appreciated.
    Thanks and Regards,
    Suresh.

    Please go through the following document, which has the detailed steps to restore the OCR:
    How to restore ASM based OCR after complete loss of the CRS diskgroup on Linux/Unix systems [ID 1062983.1]
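    In outline, the procedure in that note looks roughly like the following; this is only a sketch assuming external redundancy and the +OCR diskgroup described above (the backup path and cluster name are placeholders), so follow the note for your exact version:
    # on one node, start the stack in exclusive mode without CRS (11.2.0.2 onwards)
    crsctl start crs -excl -nocrs
    # from SQL*Plus on the ASM instance, re-create the lost diskgroup:
    #   SQL> create diskgroup OCR external redundancy disk '/dev/asm-disk1'
    #        attribute 'compatible.asm'='11.2';
    # restore the OCR from an automatic backup, then the voting files
    ocrconfig -restore $GRID_HOME/cdata/<cluster_name>/backup00.ocr
    crsctl replace votedisk +OCR
    # finally restart the stack normally
    crsctl stop crs -f
    crsctl start crs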

  • ASM disk added without scan on second node

    Hi All,
    Oracle Version:11.2.0.3
    I need help with an issue with ASM disk addition.
    It is a two-node RAC and one disk group had filled up.
    One disk was available as UNUSED001, so we renamed it, ran a scandisks, and added the disk to the diskgroup on node1.
    But, as we did not run the scandisks on the second node, the name there still shows as UNUSED001, and since it is assigned to a diskgroup it shows as MEMBER.
    Also, the renamed disk is showing as MEMBER but not assigned to any diskgroup.
    Usually when this happens we have to reboot the node to fix the issue, but I would like to know if this can be fixed without bouncing nodes.

    Hi,
    + Probably the disk addition failed with ORA-15075, as the same-named device was not visible after the renaming of the disk.
      As this validation takes place after writing the disk header, it shows as MEMBER.
    + Get downtime for the cluster on the 2nd node and run scandisks on the 2nd node.
    + The renamed disk should now show up on node 2.
    + If it shows up, then validate:
    -- Check whether all expected diskgroups are mounted on both nodes
    sql> select inst_id,name,state from gv$asm_diskgroup;
    -- If mounted, validate the renamed disk's group_number and mount_status
    sql> col path for a30
    sql> select inst_id,group_number,path,mount_status from gv$asm_disk;
    + If group_number is 0 and mount_status is CLOSED, then it is not part of any mounted diskgroup.
      Add that disk again with the force option in the same diskgroup:
    sql> alter diskgroup <diskgroup_name> add disk 'ORCL:<LABEL_NAME>' force;
    And allow rebalance to complete.
    Regards,
    Aritra

  • ASM Disk Remove/Add Question

    Hello All,
    I have a quick question -
    We have an ASM diskgroup that had 8 disks in it. I removed 2 disks and the diskgroup has been rebalanced. I have not done anything with the removed disks.
    They both appear in the ASM target as eligible to add back into the diskgroup.
    Do I need to re-format/partition/createdisk on them before adding them back in? Would existing data on the disks cause an issue in the diskgroup?
    This is the first time I will be adding back disks that have previously been in the diskgroup.
    Thanks in advance for the advice.
    Michele

    Hi,
    Disks eligible to be assigned to a diskgroup must have the status "CANDIDATE" or "FORMER" or "PROVISIONED".
    · CANDIDATE - Disk is not part of a disk group and may be added to a disk group with the ALTER DISKGROUP statement
    · PROVISIONED - Disk is not part of a disk group and may be added to a disk group with the ALTER DISKGROUP statement. The PROVISIONED header status is different from the
    CANDIDATE header status in that PROVISIONED implies that an additional platform-specific action has been taken by an administrator to make the disk available for ASM.
    · FORMER - Disk was once part of a disk group but has been dropped cleanly from the group. It may be added to a new disk group with the ALTER DISKGROUP statement.
    The ALTER DISKGROUP...DROP DISK without WAIT option statement returns before the drop and rebalance operations are complete. Do not reuse, remove, or disconnect the dropped disk until the HEADER_STATUS column for this disk in the V$ASM_DISK view changes to FORMER.
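    A quick way to check this from the ASM instance, using the V$ASM_DISK columns mentioned above:
    sql> select name, path, header_status from v$asm_disk;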
    You don't need to change anything at the OS level. Oracle will reuse a dropped ASM disk without you needing to perform any administrative task at the OS level.
    Regards,
    Levi Pereira

  • How to resize ASM disks

    Hi! I have Oracle RAC on CentOS with one active node and shared storage (HP MSA1500, 12x500 GB FC). Also, I have two servers on which I want to install a new RAC database.
    [oracle@server ~]$ crsctl query css votedisk
    0. 0 /u04/sync/oracrs/CSSFile
    located 1 votedisk(s).
    [oracle@server ~]$ ocrcheck
    Status of Oracle Cluster Registry is as follows :
    Version : 2
    Total space (kbytes) : 262120
    Used space (kbytes) : 1672
    Available space (kbytes) : 260448
    ID : 1521772939
    Device/File Name : /u04/sync/oracrs/CRSFile
    Device/File integrity check succeeded
    Device/File not configured
    Cluster registry integrity check succeeded
    [oracle@server ~]$
    CRS & CSS are installed on OCFS.
    Then, via oracleasm:
    [oracle@server ~]$ /etc/init.d/oracleasm listdisks
    VOL8
    VOL9
    [oracle@server ~]$
    [root@server ~]# /etc/init.d/oracleasm querydisk /dev/sda1
    Disk "/dev/sda1" is not marked an ASM disk
    [root@server ~]# /etc/init.d/oracleasm querydisk /dev/sdb1
    Disk "/dev/sdb1" is marked an ASM disk with the label ""
    [root@server ~]# /etc/init.d/oracleasm querydisk /dev/sdc1
    Disk "/dev/sdc1" is marked an ASM disk with the label ""
    [root@server ~]# /etc/init.d/oracleasm querydisk /dev/sdd1
    Disk "/dev/sdd1" is marked an ASM disk with the label "VOL8"
    [root@server ~]# /etc/init.d/oracleasm querydisk /dev/sdd2
    Disk "/dev/sdd2" is marked an ASM disk with the label "VOL9"
    [root@server ~]# /etc/init.d/oracleasm querydisk /dev/sdd3
    Disk "/dev/sdd3" is not marked an ASM disk
    [root@server ~]#
    [root@server ~]# fdisk -l
    Disk /dev/cciss/c0d0: 72.8 GB, 72833679360 bytes
    255 heads, 32 sectors/track, 17433 cylinders
    Units = cylinders of 8160 * 512 = 4177920 bytes
    Device             Boot   Start     End      Blocks  Id  System
    /dev/cciss/c0d0p1  *          1      50      203984  83  Linux
    /dev/cciss/c0d0p2            51    1305     5120400  82  Linux swap
    /dev/cciss/c0d0p3          1306   17433    65802240  83  Linux
    Disk /dev/sda: 1048 MB, 1048657920 bytes
    33 heads, 61 sectors/track, 1017 cylinders
    Units = cylinders of 2013 * 512 = 1030656 bytes
    Device     Boot   Start     End      Blocks  Id  System
    /dev/sda1             1    1017     1023580  83  Linux
    Disk /dev/sdb: 500.0 GB, 500071791104 bytes
    255 heads, 63 sectors/track, 60796 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device     Boot   Start     End       Blocks  Id  System
    /dev/sdb1             1   60796  488343838+  83  Linux
    Disk /dev/sdc: 499.0 GB, 499025092608 bytes
    255 heads, 63 sectors/track, 60669 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device     Boot   Start     End       Blocks  Id  System
    /dev/sdc1             1   60669   487323711  83  Linux
    Disk /dev/sdd: 500.0 GB, 500073750528 bytes
    255 heads, 63 sectors/track, 60797 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device     Boot   Start     End      Blocks  Id  System
    /dev/sdd1             1    1000    8032468+  83  Linux
    /dev/sdd2          1001    2000    8032500   83  Linux
    /dev/sdd3          2001    3000    8032500   83  Linux
    [root@server ~]#
    Then in sqlplus I see which disks ASM uses:
    SQL> select name, total_mb, free_mb,path from v$asm_disk;
    NAME   TOTAL_MB   FREE_MB   PATH
    VOL1     476898    431798   /dev/raw/raw1
    VOL2     475902    475379   /dev/raw/raw2
    SQL> select name, total_mb, free_mb from v$asm_diskgroup;
    NAME            TOTAL_MB   FREE_MB
    DATA              476898    431798
    RECOVERY_AREA     475902    475379
    SQL> select name, type from V$asm_diskgroup;
    NAME            TYPE
    DATA            EXTERN
    RECOVERY_AREA   EXTERN
    Also I can see which disks the DATA and RECOVERY diskgroups use:
    [root@server ~]# cat /etc/sysconfig/rawdevices
    # This file and interface are deprecated.
    # Applications needing raw device access should open regular
    # block devices with O_DIRECT.
    # raw device bindings
    # format: <rawdev> <major> <minor>
    # <rawdev> <blockdev>
    # example: /dev/raw/raw1 /dev/sda1
    # /dev/raw/raw2 8 5
    /dev/raw/raw1 /dev/sdb1
    /dev/raw/raw2 /dev/sdc1
    /dev/raw/raw8 /dev/sdd1
    /dev/raw/raw9 /dev/sdd2
    [root@server ~]#
    [root@server ~]# mount
    /dev/cciss/c0d0p3 on / type ext3 (rw)
    none on /proc type proc (rw)
    none on /sys type sysfs (rw)
    none on /dev/pts type devpts (rw,gid=5,mode=620)
    usbfs on /proc/bus/usb type usbfs (rw)
    /dev/cciss/c0d0p1 on /boot type ext3 (rw)
    none on /dev/shm type tmpfs (rw)
    none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
    sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
    configfs on /config type configfs (rw)
    ocfs2_dlmfs on /dlm type ocfs2_dlmfs (rw)
    /dev/sda1 on /u04/sync type ocfs2 (rw,_netdev,datavolume,nointr,heartbeat=local)
    oracleasmfs on /dev/oracleasm type oracleasmfs (rw)
    [root@server ~]#
    [root@server ~]# ls -l /dev/oracleasm/disks/
    total 0
    brw-rw---- 1 oracle dba 8, 49 Aug 8 19:34 VOL8
    brw-rw---- 1 oracle dba 8, 50 Aug 8 19:34 VOL9
    [root@server ~]#
    How could I resize an ASM disk from, for example, 450 GB to 100 GB for my new RAC installation?
    Sorry for my English :(

    Hi mmusette, hi all,
    please, allow a little clarification. ASM allows resizing disks:
    From: http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/storeman.htm#sthref1727
    Resizing Disks in Disk Groups
    The RESIZE clause of ALTER DISKGROUP enables you to perform the following operations:
    Resize all disks in the disk group
    Resize specific disks
    Resize all of the disks in a specified failure group
    If you do not specify a new size in the SIZE clause then ASM uses the size of the disk as returned by the operating system. This could be a means of recovering disk space when you had previously restricted the size of the disk by specifying a size smaller than disk capacity.
    The new size is written to the ASM disk header record and if the size of the disk is increasing, then the new space is immediately available for allocation. If the size is decreasing, rebalancing must relocate file extents beyond the new size limit to available space below the limit. If the rebalance operation can successfully relocate all extents, then the new size is made permanent, otherwise the rebalance fails.
    However, if you have set up the ASM disk on a physical disk partition, you probably will not be able to resize the partition without destroying the data on the disk. If, however, you used a volume manager to create volumes and you based your ASM disks on those volumes (through RAW devices or directly) AND your volume manager allows resizing the volumes, you should be able to make use of the command mentioned above.
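    For illustration, after growing the underlying volume a resize might look like this (the diskgroup and disk names here are made up):
    SQL> alter diskgroup DATA resize disk VOL1 size 100G;
    SQL> alter diskgroup DATA resize all;
    The second form, without a SIZE clause, takes the size reported by the operating system for every disk in the group, as described in the quote above.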
    Thanks,
    Markus

  • Creation of ASM disk for OUI

    I need to install an Oracle Database in order to install Enterprise Manager Cloud Control 12c.
    I need the database to use an ASM disk.
    I used the following command to create the disk, per the Oracle Database Installation Guide.
    #/usr/sbin/oracleasm createdisk DISK1 /dev/sdd1
    #oracleasm listdisks
    DISK1
    However, when running the OUI for Oracle Database 12c (I understand 11.2.0.3 is certified for Cloud Control), step 7
    errors with INS-30517 when attempting to select "Oracle Automatic Storage Management" for "Storage type".
    Researched the error at this location but no cause or action was provided.
    http://docs.oracle.com/cd/E16655_01/server.121/e26079/common_errormessages.htm
    INS-30517: Automatic Storage Management software is not configured on this system.
    The database install guide states that I need to ensure the "disk discovery string" is set to "ORCL:*" or is left empty ("") so the installer discovers these disks.
    It doesn't show how to confirm or change the settings.
    At this point I'm at a stopping point.

    All ASMLib installations require the oracleasmlib and oracleasm-support packages. The oracleasm kernel driver is included in the Oracle UEK kernel. Perhaps you are missing the oracleasmlib package. You can download it from:
    Oracle Linux: Oracle ASMLib | Oracle Technology Network
    Oracleasmlib is not strictly necessary for ASM to work, but it contains software needed by Linux oracleasm, including the /usr/sbin/oracleasm-discover utility, which the Oracle installer used in the previous 11g version to detect available ASM volumes.
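    If the package is in place, you can also run the discovery utility by hand to see what the installer would find; a quick check (the discovery string is the one the install guide mentions):
    # /usr/sbin/oracleasm-discover 'ORCL:*'
    If ASMLib is healthy, this should list ORCL:DISK1.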

  • OCR and voting disks on ASM, problems in case of fail-over instances

    Hi everybody
    in case at your site you :
    - have an 11.2 fail-over cluster using Grid Infrastructure (CRS, OCR, voting disks),
    where you have yourself created additional CRS resources to handle single-node db instances,
    their listener, their disks and so on (which are started only on one node at a time,
    can fail from that node and restart to another);
    - have put OCR and voting disks into an ASM diskgroup (as strongly suggested by Oracle);
    then you might have problems (as we had) because you might:
    - reach max number of diskgroups handled by an ASM instance (63 only, above which you get ORA-15068);
    - experience delays (especially in case of multipath), find fake CRS resources, etc.
    whenever you dismount disks from one node and mount to another;
    So (if both conditions are true) you might be interested in this story,
    then please keep reading on for the boring details.
    One step backward (I'll try to keep it simple).
    Oracle Grid Infrastructure is mainly used by RAC db instances,
    which means that any db you create usually has one instance started on each node,
    and all instances access read / write the same disks from each node.
    So, ASM instance on each node will mount diskgroups in Shared Mode,
    because the same diskgroups are mounted also by other ASM instances on the other nodes.
    ASM instances have a spfile parameter CLUSTER_DATABASE=true (and this parameter implies
    that every diskgroup is mounted in Shared Mode, among other things).
    In this context, it is quite obvious that Oracle strongly recommends to put OCR and voting disks
    inside ASM: this (usually called CRS_DATA) will become diskgroup number 1
    and ASM instances will mount it before CRS starts.
    Then, additional diskgroups will be added by users, for DATA, REDO, FRA etc. of each RAC db,
    and will be mounted later when a RAC db instance starts on the specific node.
    In case of fail-over cluster, where instances are not RAC type and there is
    only one instance running (on one of the nodes) at any time for each db, it is different.
    All diskgroups of db instances don't need to be mounted in Shared Mode,
    because they are used by one instance only at a time
    (on the contrary, they should be mounted in Exclusive Mode).
    Yet, if you follow Oracle advice and put OCR and voting inside ASM, then:
    - at installation OUI will start ASM instance on each node with CLUSTER_DATABASE=true;
    - the first diskgroup, which contains OCR and votings, will be mounted Shared Mode;
    - all other diskgroups, used by each db instance, will be mounted Shared Mode, too,
    even if you'll take care that they'll be mounted by one ASM instance at a time.
    At our site, for our three-nodes cluster, this fact has two consequences.
    One consequence is that we hit the ORA-15068 limit (max 63 diskgroups) earlier than expected:
    - none of the instances on this cluster are Production (only Test, Dev, etc);
    - we planned to have usually 10 instances on each node, each of them with 3 diskgroups (DATA, REDO, FRA),
    so 30 diskgroups each node, for a total of 90 diskgroups (30 instances) on the cluster;
    - in case one node failed, the surviving two should get the resources of the failing node,
    in the worst case: one node with 60 diskgroups (20 instances), the other one with 30 diskgroups (10 instances)
    - in case two nodes failed, the only surviving node would not be able to mount additional diskgroups
    (because of the limit of max 63 diskgroups mounted by an ASM instance), so all the others would remain unmounted
    and their db instances stopped (they are not Production instances).
    But it didn't work, since ASM has parameter CLUSTER_DATABASE=true, so you cannot mount 90 diskgroups;
    you can mount 62 globally (once a diskgroup is mounted on one node, it is given a number between 2 and 63,
    and diskgroups mounted on other nodes cannot reuse that number).
    So as a matter of fact we can mount only 21 diskgroups (about 7 instances) on each node.
    The second consequence is that, every time our handmade CRS scripts dismount diskgroups
    from one node and mount them on another, there are delays in the range of seconds (especially with multipath).
    Also we found in the CRS log that, whenever we mounted diskgroups (on one node only),
    additional fake resources of type ora*.dg were created on the fly behind the scenes,
    maybe to accommodate the fact that on the other nodes those diskgroups were left unmounted
    (once again, instances are single-node here, and not RAC type).
    That's all.
    Did anyone run into similar problems?
    We opened an SR with Oracle asking what options we have here, and we are disappointed by their answer.
    Regards
    Oscar

    Hi Klaas-Jan
    - best practice requires that online redo log files also be in a separate diskgroup, in case of ASM logical corruption (we are a little bit paranoid): in case the DATA dg gets corrupted, you can restore Full backup plus Archived Redo Logs plus Online Redo Logs (otherwise you will stop at the latest Archived).
    So we have 3 diskgroups for each db instance: DATA, REDO, FRA.
    - in case of a fail-over cluster (active-passive), Oracle provides some templates of CRS scripts (in $CRS_HOME/crs/crs/public) that you edit and change at your will; also you might create additional scripts for any additional resources you might need (Oracle Agents, backup agents, file systems, monitoring tools, etc.)
    About our problem, the only solution is to move the OCR and voting disks out of ASM and change the pfile of all ASM instances (parameter CLUSTER_DATABASE from true to false).
    Oracle's answers were a little bit odd:
    - first they told us to use Grid Standalone (without CRS, OCR, voting at all), but we told them that we needed a fail-over solution
    - then they told us to use RAC One Node, which actually has some better features: in case of planned fail-over it might be able to migrate
    client sessions without causing a reconnect (for SELECTs only, not in case of a running transaction), but we already have a few fail-over clusters, we cannot change them all
    So we plan to move OCR and voting disks into block devices (we think that the other solution, which needs a Shared File System, will take longer).
    Thanks Marko for pointing us to OCFS2 pros / cons.
    We asked Oracle for confirmation that it is supported; they said yes, but it is discouraged (and also, it doesn't work with OUI nor ASMCA).
    Anyway, that's the simplest approach; this is a non-Prod cluster, we'll start here and if everything is fine, after a while we'll do it also on the Prod ones.
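    For the record, the move boils down to a few root commands; a sketch only, with illustrative device paths, so follow the notes listed below before touching a real cluster:
    # add the new OCR location, then drop the old ASM one
    ocrconfig -add /dev/mapper/ocr_disk1
    ocrconfig -delete +CRS_DATA
    # move the voting files out of ASM
    crsctl replace votedisk /dev/mapper/vote_disk1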
    - Note 605828.1, paragraph 5, Configuring non-raw multipath devices for Oracle Clusterware 11g (11.1.0, 11.2.0) on RHEL5/OL5
    - Note 428681.1: OCR / Vote disk Maintenance Operations: (ADD/REMOVE/REPLACE/MOVE)
    -"Grid Infrastructure Install on Linux", paragraph 3.1.6, Table 3-2
    Oscar

  • Configuring Disk for ASM in a Standalone environment

    Hi,
    I am attempting to install ASM for the very first time! I am installing ASM in a standalone Linux 6 environment. I am following Ch. 3 of the 11gR2 Database Installation Guide. I have installed the Oracle ASM packages and have configured ASM.
    I am now trying to configure the disk to use the ASM Library Driver
    In my case, my server has a single 250 GB SCSI disk. I have used this for the install of Linux 6. However, I have left some space unformatted, which I intended to use with ASM.
    When I do fdisk -l I get:
    Disk /dev/sde: 250.1 GB, 250059350016 bytes
    255 heads, 63 sectors/track, 30401 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000b3ade
       Device Boot      Start         End      Blocks   Id  System
    /dev/sde1   *           1          64      512000   83  Linux
    Partition 1 does not end on cylinder boundary.
    /dev/sde2              64       30402   243685376   8e  Linux LVM
    Disk /dev/mapper/vg_lab3-lv_root: 10.5 GB, 10485760000 bytes
    255 heads, 63 sectors/track, 1274 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    Disk /dev/mapper/vg_lab3-lv_swap: 4227 MB, 4227858432 bytes
    255 heads, 63 sectors/track, 514 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    Disk /dev/mapper/vg_lab3-lv_home: 52.4 GB, 52428800000 bytes
    255 heads, 63 sectors/track, 6374 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    When I do df -k I get:
    Filesystem           1K-blocks      Used Available Use% Mounted on
    /dev/mapper/vg_lab3-lv_root
                          10079084   3435876   6131208  36% /
    tmpfs                  1551236        76   1551160   1% /dev/shm
    /dev/sde1               495844     52850    417394  12% /boot
    /dev/mapper/vg_lab3-lv_home
                          50395844   2217864  45617980   5% /home
    A couple of things I am not sure of -
    Q1. Has the Linux install in effect created some virtual disks, i.e.
    /dev/mapper/vg_lab3-lv_root, /dev/mapper/vg_lab3-lv_swap and /dev/mapper/vg_lab3-lv_home?
    Q2. Are /dev/sde1 and /dev/sde2 virtual disks in their own right, or partitions of /dev/sde?
    The 11gR2 install manual states on p. 3-13 that I should use fdisk or parted to create a single whole-disk partition on the disk devices to be used.
    I am not quite sure what I need to do -
    Q3a. I assume I can use part of my physical disk (i.e. /dev/sde2) with ASM, rather than it having to be a complete physical disk?
    Q3b. Do I simply mark /dev/sde2 as an ASM disk using oracleasm createdisk DISK1 /dev/sde2?
    or do I somehow have to create a partition on /dev/sde2? (I guess this harks back to Q2, in that: is sde2 a disk or a partition?)
    any help greatly appreciated
    Jim

    Thanks Billy.
    I have checked back on my Linux install notes and indeed it does seem to create the logical volumes at that stage, i.e. it creates lv_root, lv_home and lv_swap. During the install I was asked to specify the type of installation. The following options were given -
    - Use All Space
    - Replace Existing Linux Systems
    - Shrink Current System
    - Use Free Space
    - Create Custom Layout
    Q1. I chose Use All Space. However, from what you are saying it looks as if I perhaps should have used the option 'Create Custom Layout'. I am guessing the Use All Space option is what created the logical volumes - is this correct, and should I have used Create Custom Layout?
    I reduced lv_home from 183732 to 50000 MB. This left 173 GB free - I intended to use this free space with ASM.
    Q2. Can a single disk only have 4 primary partitions?
    I guess I am where I am now, so I am wondering how I proceed with the disk layout I have presently.
    Can you clarify, from the output of fdisk -l in my original post, what exactly I have? My understanding is as follows -
    There are 3 logical volumes created: /dev/mapper/vg_lab3-lv_root, /dev/mapper/vg_lab3-lv_swap and /dev/mapper/vg_lab3-lv_home
    There is then the physical volume /dev/sde. On this physical volume are /dev/sde1 and /dev/sde2.
    Q3. Are these partitions on that volume?
    Q4. /dev/sde1 appears to be a boot partition and /dev/sde2 appears to be the partition of 173 GB that I did not allocate during the Linux install. Is this correct?
    Q5. Is it possible that I can now take /dev/sde2 and mark it as an ASM disk? I wasn't sure if creating an ASM disk is restricted to an entire physical disk, or if you can use a partition?
    When I do the following as root user:
    /etc/init.d/oracleasm createdisk VOL1 /dev/sde2
    Unable to open device "/dev/sde2" Device or resource busy
    I cannot understand how this is the case - /dev/sde2 is not mounted
    lsof /dev/sde2 does not show anything using /dev/sde2
    /etc/init.d/oracleasm status
    shows ASM is loaded and /dev/oracleasm is mounted
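    One thing worth checking: in the fdisk -l output above, /dev/sde2 has partition type 8e (Linux LVM), i.e. it is the physical volume backing vg_lab3, which would explain the "Device or resource busy" error even though nothing is mounted on it. A hedged way to confirm this, and to carve the free space out as a logical volume for ASM instead (the lv_asm name is made up):
    # confirm sde2 is an in-use LVM physical volume
    pvs
    vgs vg_lab3
    # if so, create a logical volume from the free space and label that instead
    lvcreate -L 100G -n lv_asm vg_lab3
    /etc/init.d/oracleasm createdisk VOL1 /dev/vg_lab3/lv_asm
    If createdisk then complains that the volume is not a partition, the asmtool force workaround shown earlier in this thread applies.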
    Any help you can offer would be greatly received.
    Thanks,
    Jim

  • Problem with create asm disk group

    Hi all
    I am about to configure ASM, so I have downloaded the Grid Infrastructure 11g (32-bit), and I have configured and created the parameters and directories.
    I ran the installer but got stuck at the 3rd step, where I have to change the discovery path. I typed /dev as the path, where I have created 3 partitions: sdb1, sdc1 and sdd1.
    Is there anything I should do on the partitions, or any parameters to set, before I go through the installation?
    Thanks for help

    You can use the below link to install ASMLIB:
    http://gssdba.wordpress.com/category/asm/
    REFERENCE: Doc ID 580153.1
    There are two different methods to configure ASM on Linux:
    ASM with ASMLib I/O: This method creates all Oracle database files on raw block devices managed by ASM using ASMLib calls. RAW devices are not required with this method as ASMLib works with block devices.
    ASM with Standard Linux I/O: This method creates all Oracle database files on raw character devices managed by ASM using standard Linux I/O system calls. You will be required to create RAW devices for all disk partitions used by ASM.
    You can download the ASMLIB RPMs from the URL below:
    http://www.oracle.com/technetwork/server-storage/linux/downloads/rhel5-084877.html
    STEP 01: LOG IN AS ROOT USER AND INSTALL THE RPMS
    [root@node1 ASMLIB]# rpm -Uvh oracleasm-2.6.18-164.el5-2.0.5-1.el5.i686.rpm \
    > oracleasmlib-2.0.4-1.el5.i386.rpm \
    > oracleasm-support-2.1.8-1.el5.i386.rpm
    warning: oracleasm-2.6.18-164.el5-2.0.5-1.el5.i686.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
    Preparing... ########################################### [100%]
    1:oracleasm-support ########################################### [ 33%]
    2:oracleasm-2.6.18-164.el########################################### [ 67%]
    3:oracleasmlib ########################################### [100%]
    STEP 02: CONFIGURE ASMLIB
    [root@node1 ASMLIB]# /etc/init.d/oracleasm configure
    Configuring the Oracle ASM library driver.
    This will configure the on-boot properties of the Oracle ASM library
    driver. The following questions will determine whether the driver is
    loaded on boot and what permissions it will have. The current values
    will be shown in brackets ('[]'). Hitting <ENTER> without typing an
    answer will keep that current value. Ctrl-C will abort.
    Default user to own the driver interface []: oracle
    Default group to own the driver interface []: dba
    Start Oracle ASM library driver on boot (y/n) [n]: y
    Scan for Oracle ASM disks on boot (y/n) [y]: y
    Writing Oracle ASM library driver configuration: done
    Initializing the Oracle ASMLib driver: [ OK ]
    Scanning the system for Oracle ASMLib disks: [ OK ]
    STEP 03: CREATE ASM DISKS
    [root@node1 ASMLIB]# /etc/init.d/oracleasm listdisks
    [root@node1 ASMLIB]#
    [root@node1 ~]# /etc/init.d/oracleasm createdisk VOL1 /dev/sdb1
    Marking disk "VOL1" as an ASM disk: [ OK ]
    [root@node1 ~]# /etc/init.d/oracleasm createdisk VOL2 /dev/sdc1
    Marking disk "VOL2" as an ASM disk: [ OK ]
    [root@node1 ~]# /etc/init.d/oracleasm createdisk VOL3 /dev/sdd1
    Marking disk "VOL3" as an ASM disk: [ OK ]
    [root@node1 ~]# /etc/init.d/oracleasm createdisk VOL4 /dev/sde1
    Marking disk "VOL4" as an ASM disk: [ OK ]
    [root@node1 ~]# /etc/init.d/oracleasm createdisk VOL5 /dev/sdf1
    Marking disk "VOL5" as an ASM disk: [ OK ]
    [root@node1 ~]# /etc/init.d/oracleasm listdisks
    VOL1
    VOL2
    VOL3
    VOL4
    VOL5
    [root@node1 ~]#

  • Correct steps to reload ASM disks into a running ASM instance on Linux RHEL

    Hi guys,
    I hope this is the correct forum; unfortunately I did not find any document specifically talking about this.
    First of all, let me quickly explain my goal: on my server (it contains the same data as 3 production RACs, for reporting purposes) I would like to shut down a single instance (I have 3) and reload its disks, either to refresh the data or because for some reason its mount into ASM has failed.
    this is a mix of Oracle ASM and Linux stuff...
    on this server there is a shell script that perform the following steps:
    1) shut down the 3 instances of 11g (11.2.0.2.0 with the ASM patch already installed for a bug... don't remember which)
    2) shutdown ASM instance
    3) stop ASMLIB
    4) stop multipathd
    5) remove all disks devices from linux
    6) take a snapshot on the 3 production RACs
    7) remap the disks to this server (SAN stuff)
    8) rescan the disks
    9) start ASMLIB
    10) start ASM instance
    11) mount the DiskGroups
    12) start 3 oracle instances
    This works fine... My problem is when I try to refresh the disks of just 1 of the 3 instances. I perform the following steps:
    1) stop instance ABC
    2) dismount ABC diskgroups related to ABC instance
    3) wait 5 seconds (don't ask... without the wait, ASM still had the disks locked)
    4) delete the ASMLIB disks (oracleasm deletedisk XYZ) related to those diskgroups
    5) remove mpaths related to the disks that need to be refreshed or correctly mounted once again
    6) remove devices
    7) unmap disks on SAN
    8) take snapshot (clone disks)
    9) map disks from SAN
    10) delete bindings file
    11) rescan devices
    12) execute multipath command (to rebuild bindings file)
    13) execute oracleasm scandisks
    14) mount DiskGroups
    15) start instance ABC
    Actually I don't understand why the OS (or specifically ASMLIB) no longer recognizes one of the ASMLIB disks.
    Do you have any idea? Or am I completely crazy for doing this?
    best regards,
    Luca

    Hi,
    well this step
    4) delete the ASMLIB disks (oracleasm deletedisk XYZ) related to those diskgroups
    is not good, because it clears the ASM header, and as a result ASM will never again be able to detect the disks.
    So it is clear that "scandisks" later will not see the disks, because you wiped them.
    Worse: you did the snapshot after deleting them... this makes it tricky to get them "repaired" again.
    However, there is the possibility to relabel a dropped ASM disk. You need to open an SR for this, since the procedure to restore the lost ASM header is not that simple.
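    For what it's worth, a per-instance refresh that leaves the ASMLib labels intact might look roughly like this (names and paths illustrative, not tested):
    # stop instance ABC and dismount its diskgroups, then:
    # do NOT run oracleasm deletedisk - the label lives on the LUN
    multipath -f mpath_abc                   # flush only the maps being refreshed
    echo 1 > /sys/block/sdX/device/delete    # remove the underlying devices
    # ... unmap / snapshot / remap on the SAN ...
    rescan-scsi-bus.sh                       # rediscover the LUNs
    multipath -r                             # rebuild the multipath maps
    /etc/init.d/oracleasm scandisks          # re-read the snapshotted labels
    # then mount the diskgroups and start the instance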
    Regards
    Sebastian

  • EM Alert: Warning:+ASM Disk Group requires rebalance

    Environment:
    O.S Version : HP-UX B.11.31 U ia64
    Oracle DB Version : 11.2.0.3.0 , Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    Database files are on : ASM Disk Group
    It is about the ASM Diskgroup low disk space alert by Oracle Enterprise Manager.
    Message=Disk Group DG_FLASH_01 requires rebalance because at least one disk is low on space.
    Metric=Disk Minimum Free (%) without Rebalance
    Metric value=18.961548
    Disk Group Name=DG_FLASH_01
    Severity=Warning
    Target Name=+ASM_dbsrver.siva.com
    Target type=Automatic Storage Management
    There is only 1 LUN assigned to this diskgroup, DG_FLASH_01.
    We have an Oracle Enterprise Manager Grid Control job, MN_DBSRVR_DEL_ARCHIVELOGS, that runs every 12 hours, at 1 AM and again at 1 PM daily. It cleans up the archivelogs more than 3 days old. The FLASH diskgroup is continuously being written with new files, both archivelogs and flashback logs.
    If there were multiple disks with a large imbalance of data across the LUNs, then a rebalance would be worth running.
    How should this recurring alert of "Disk Group DG_FLASH_01 requires rebalance because at least one disk is low on space" be addressed with only one LUN in the ASM diskgroup?

    As I stated earlier, there is only one disk in this diskgroup:
    DISK_NUMBER      OS_MB   TOTAL_MB    FREE_MB NAME                    PATH
              0      65536      65536      12995 DG_FLASH_01_0000        /devasm/gc/ora_asm_gc_b03_a14_d10_08_36
              0      65536      65536      43064 DG_DATA_01_0000         /devasm/gc/ora_asm_gc_b03_a13_d12_08_35
    So a disk REBALANCE is not required.
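    For reference, figures like these come straight from the ASM instance; a minimal sketch of the query, using only standard V$ASM_DISK / V$ASM_DISKGROUP columns:
    SQL> select d.disk_number, d.os_mb, d.total_mb, d.free_mb, d.name, d.path
      2    from v$asm_disk d join v$asm_diskgroup g on g.group_number = d.group_number
      3   where g.name in ('DG_FLASH_01', 'DG_DATA_01');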

  • Change ASM DISK NAME in 11.2 version

    Hi,
    Is it possible to change an ASM disk name, for example in diskgroup DATA01? I added a disk without a name, so a system-chosen name was allocated. Can I set it to, e.g., ORCL:DATA09? Currently it is DATA09_009.
    Please suggest your views.
    Thanks a lot.
    Best Regards

    You can change the label of a disk using oracleasm renamedisk, although Oracle discourages it.
    Below I change the labels of the disks in diskgroup CLONE from CLONE1 to CLONE1A and from CLONE2 to CLONE2A:
    SQL> col disk_number for 99
    SQL> col name for a15
    SQL> col label for a15
    SQL> col path for a30
    SQL> select disk_number,name,label,path from v$asm_disk;
    0 DATA CLONE1 ORCL:CLONE1
    1 LOGS CLONE2 ORCL:CLONE2
    ASMCMD> lsdsk -G CLONE
    Path
    ORCL:CLONE1
    ORCL:CLONE2
    ASMCMD> umount CLONE
    [root@otest]# /etc/init.d/oracleasm force-renamedisk CLONE1 CLONE1a
    Renaming disk "CLONE1" to "CLONE1a": [  OK  ]
    [root@otest]# /etc/init.d/oracleasm force-renamedisk CLONE2 CLONE2a
    Renaming disk "CLONE2" to "CLONE2a": [  OK  ]
    ASMCMD> mount CLONE
    ASMCMD> lsdsk -G CLONE
    Path
    ORCL:CLONE1A
    ORCL:CLONE2A
    SQL> select disk_number,name,label,path from v$asm_disk;
    0 DATA CLONE1A ORCL:CLONE1A
    1 LOGS CLONE2A ORCL:CLONE2A
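    Note that the diskgroup is dismounted first (the umount CLONE above). Plain renamedisk refuses to relabel a disk that has an ASM label and asks for the force-renamedisk variant, precisely because relabeling a disk an instance still has open can lose data; that risk is why Oracle discourages the operation.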

  • Using udev devices for ASM disk on Linux

    Hello all,
    I need details related to using udev devices as ASM disks on Linux.
    What I want to know is: once I have the list of ASM disks that were used to create the diskgroups, how do I tell from the path of a disk (whether /dev/sdj, /dev/dm-3, /dev/asm_udev_device, /dev/vx/<vxvm-volume>, or /dev/<lvm-volume>) whether the device is a udev device?
    I want an automated process, so that I can script it to classify each device easily.
    Please let me know if anyone has expertise in using udev devices with Oracle ASM.
    I have referred to this link, but not many details are available: http://www.oracle.com/technology/products/database/asm/pdf/device-mapper-udev-crs-asm%20rh4.pdf
    Further, the behavior of the udevinfo command differs widely across Linux versions; from SUSE 10 onwards, udevadm has to be used instead.
    It's confusing to me.
    Please help.
    ~Sumeet.
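    One way to script the classification is to ask udev itself about each discovered path. A minimal sketch, assuming udevadm is available (on older releases substitute the equivalent udevinfo invocation); the device paths are hypothetical:
    #!/bin/sh
    # print the udev properties that distinguish plain SCSI disks,
    # partitions, and device-mapper (multipath/LVM) nodes
    for dev in /dev/sdj /dev/dm-3 /dev/asm_udev_device; do
        echo "== $dev =="
        udevadm info --query=property --name="$dev" 2>/dev/null \
            | grep -E '^(DEVPATH|DEVTYPE|SUBSYSTEM|MAJOR|DM_NAME)=' \
            || echo "no udev record for $dev"
    done
    A device-mapper node carries DM_* properties, while a plain disk or partition reports only DEVTYPE=disk or DEVTYPE=partition; matching on MAJOR is another option if you prefer to key off device numbers.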

    My installation: ASM on a single instance.
    O.S.: HP-UX
    Internal disks, not SAN.
    Internal disk configuration for Oracle:
    PV Name /dev/dsk/c11t0d0
    PV Name /dev/dsk/c11t1d0
    PV Name /dev/dsk/c11t2d0
    PV Name /dev/dsk/c11t3d0
    PV Name /dev/dsk/c11t4d0
    PV Name /dev/dsk/c11t6d0
    Vg:
    /dev/vgora
    Lv
    /dev/vgora/lvu01 <-- Oracle Bin mounted /u00
    /dev/vgora/lvu02 <-- External Archive /u02
    In /dev/vgora
    brw-r----- 1 root sys 64 0x010001 Mar 8 14:42 lvu01
    brw-r----- 1 root sys 64 0x010002 Mar 8 16:15 lvu02
    brwxrwxr-x 1 root sys 64 0x010003 Mar 19 14:19 lvasm01
    brwxrwxr-x 1 root sys 64 0x010004 Mar 19 14:20 lvasm02
    brwxrwxr-x 1 root sys 64 0x010005 Mar 19 14:20 lvasm03
    brwxrwxr-x 1 root sys 64 0x010006 Mar 19 14:20 lvasm04
    brwxrwxr-x 1 root sys 64 0x010007 Mar 19 14:20 lvasm05
    brwxrwxr-x 1 root sys 64 0x010008 Mar 19 14:20 lvasm06
    brwxrwxr-x 1 root sys 64 0x010009 Mar 19 14:20 lvasm07
    brwxrwxr-x 1 root sys 64 0x01000a Mar 19 14:20 lvasm08
    brwxrwxr-x 1 root sys 64 0x01000b Mar 19 14:20 lvasm09
    brwxrwxr-x 1 root sys 64 0x01000c Mar 19 14:21 lvasm10
    crw-r----- 1 root sys 64 0x010001 Mar 8 14:42 rlvu01
    crw-r----- 1 root sys 64 0x010002 Mar 8 16:15 rlvu02
    crw-r--r-- 1 root sys 64 0x010000 Mar 8 14:19 group
    FOR ASM
    crw-rw---- 1 oracle dba 64 0x010003 Mar 19 14:19 rlvasm01
    crw-rw---- 1 oracle dba 64 0x010004 Mar 19 14:20 rlvasm02
    crw-rw---- 1 oracle dba 64 0x010005 Mar 19 14:20 rlvasm03
    crw-rw---- 1 oracle dba 64 0x010006 Mar 19 14:20 rlvasm04
    crw-rw---- 1 oracle dba 64 0x010007 Mar 19 14:20 rlvasm05
    crw-rw---- 1 oracle dba 64 0x010008 Mar 19 14:20 rlvasm06
    crw-rw---- 1 oracle dba 64 0x010009 Mar 19 14:20 rlvasm07
    crw-rw---- 1 oracle dba 64 0x01000a Mar 19 14:20 rlvasm08
    crw-rw---- 1 oracle dba 64 0x01000b Mar 19 14:20 rlvasm09
    crw-rw---- 1 oracle dba 64 0x01000c Mar 19 14:21 rlvasm10
    And the problem is:
    When I run dbca to create a diskgroup, it does not recognize the disks.
    I have set asm_diskstring to the correct path.
    I also tried to create a diskgroup from the command line, but I get this error:
    SQL> create diskgroup DG_DATA_01 external redundancy DISK '/dev/vgora/rlvasm01' ;
    create diskgroup DG_DATA_01 external redundancy DISK '/dev/vgora/rlvasm01'
    ERROR at line 1:
    ORA-15018: diskgroup cannot be created
    ORA-15031: disk specification '/dev/vgora/rlvasm01' matches no disks
    ORA-15025: could not open disk '/dev/vgora/rlvasm01'
    ORA-15059: invalid device type for ASM disk
    Additional information: 255
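    ORA-15059 generally means the discovery string matched something ASM could not open as a disk, so the first things to verify are the device type, the ownership, and the discovery string itself. A minimal sketch of those checks (the asm_diskstring value is an assumption based on the listing above):
    $ ls -lL /dev/vgora/rlvasm01      # must be a character device ('c' prefix) owned by oracle:dba
    SQL> show parameter asm_diskstring
    SQL> alter system set asm_diskstring='/dev/vgora/rlvasm*' scope=memory;
    SQL> select path, header_status, mount_status from v$asm_disk;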
