ASM Disk management

Hi,
In the ASM concept we have a separate user to manage the ASM instance; he belongs to the osasm, osdba, and osoper groups.
My doubt is: who is going to manage the disks and disk groups? Is that done by the SYS user or by a special ASM user?
thank you!

899329 wrote:
Hi,
In the ASM concept we have a separate user to manage the ASM instance; he belongs to the osasm, osdba, and osoper groups.
My doubt is: who is going to manage the disks and disk groups? Is that done by the SYS user or by a special ASM user?
thank you!

You are confusing the ownership of the devices with the management of ASM, IMO. The ASM devices are owned by an OS user that, from 11g onward, should be different from oracle, for example grid. That user holds the ownership of the devices, and the asmadmin group holds the group ownership. The management of the devices, in the form of disk group(s), and of the ASM instance in general is done with the SYS account, using the SYSASM role from 11.2.
HTH
Aman....
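
To make that split concrete, here is a minimal sketch, assuming an 11.2 installation owned by a grid OS user and hypothetical ASMLib disk labels DISK1/DISK2 (not names from this thread); the grid user owns the devices at the OS level, while disk group administration happens inside the ASM instance as SYS with the SYSASM role:

$ sqlplus / as sysasm
SQL> -- disk group creation and maintenance run as SYS under the SYSASM role
SQL> CREATE DISKGROUP data EXTERNAL REDUNDANCY DISK 'ORCL:DISK1';
SQL> ALTER DISKGROUP data ADD DISK 'ORCL:DISK2';
SQL> SELECT name, state FROM v$asm_diskgroup;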

Similar Messages

  • Please Help - When I try to add ASM Disk to ASM Diskgroup it crashes Server

    We are using a Pillar SAN and have LUNs created, and we are using the following multipath device. (I'm a DBA more than anything else... but I am rather familiar with Linux... SAN hardware, not so much.)
    Device        Size   Mount Point
    /dev/dpda1    11G    /u01
    The above device is working fine... Below are the ASM disks being created:
    Device        Size   Oracle ASM Disk Name
    /dev/dpdb1    198G   ORCL1
    /dev/dpdc1    21G    SIRE1
    /dev/dpdd1    21G    CART1
    /dev/dpde1    21G    SRTS1
    /dev/dpdf1    21G    CRTT1
    I try to create the first ASM disk:
    /etc/init.d/oracleasm createdisk ORCL1 /dev/dpdb1
    Marking disk "ORCL1" as an ASM disk: [FAILED]
    So I check the oracleasm log:
    #cat /var/log/oracleasm
    Device "/dev/dpdb1" is not a partition
    I did some research and found that this is a common problem with multipath devices, and that to work around it you have to use asmtool:
    # /usr/sbin/asmtool -C -l /dev/oracleasm -n ORCL1 -s /dev/dpdb1 -a force=yes
    asmtool: Device "/dev/dpdb1" is not a partition
    asmtool: Continuing anyway
    now I scan and list the disks
    # /etc/init.d/oracleasm scandisks
    Scanning the system for Oracle ASMLib disks: [  OK  ]
    # /etc/init.d/oracleasm listdisks
    ORCL1
    Here is what's going on in /var/log/messages when I run the oracleasm scandisks command:
    # date
    Fri Aug 14 13:51:58 MST 2009
    # /etc/init.d/oracleasm scandisks
    Scanning the system for Oracle ASMLib disks: [  OK  ]
    # cat /var/log/messages | grep "Aug 14 13:5"
    Aug 14 13:52:06 seer kernel: dpdb: dpdb1
    Aug 14 13:52:06 seer kernel: dpdc: dpdc1
    Aug 14 13:52:06 seer kernel: dpdd: dpdd1
    Aug 14 13:52:06 seer kernel: dpde: dpde1
    Aug 14 13:52:06 seer kernel: dpdf: dpdf1
    Aug 14 13:52:06 seer kernel: dpdg: dpdg1
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: printk: 30 messages suppressed.
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: sda : READ CAPACITY failed.
    Aug 14 13:52:06 seer kernel: sda : status=1, message=00, host=0, driver=08
    Aug 14 13:52:06 seer kernel: sd: Current: sense key: Illegal Request
    Aug 14 13:52:06 seer kernel: Add. Sense: Logical unit not supported
    Aug 14 13:52:06 seer kernel:
    Aug 14 13:52:06 seer kernel: sda: test WP failed, assume Write Enabled
    Aug 14 13:52:06 seer kernel: sda: asking for cache data failed
    Aug 14 13:52:06 seer kernel: sda: assuming drive cache: write through
    Aug 14 13:52:06 seer kernel: sda:end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: Dev sda: unable to read RDB block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: unable to read partition table
    Aug 14 13:52:06 seer kernel: SCSI device sdb: 21502464 512-byte hdwr sectors (11009 MB)
    Aug 14 13:52:06 seer kernel: sdb: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdb: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdb: sdb1
    Aug 14 13:52:06 seer kernel: SCSI device sdc: 421476864 512-byte hdwr sectors (215796 MB)
    Aug 14 13:52:06 seer kernel: sdc: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdc: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdc: sdc1
    Aug 14 13:52:06 seer kernel: SCSI device sdd: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdd: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdd: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdd: sdd1
    Aug 14 13:52:06 seer kernel: SCSI device sde: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sde: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sde: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sde: sde1
    Aug 14 13:52:06 seer kernel: SCSI device sdf: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdf: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdf: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdf: sdf1
    Aug 14 13:52:06 seer kernel: SCSI device sdg: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdg: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdg: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdg: sdg1
    Aug 14 13:52:06 seer kernel: SCSI device sdh: 2107390464 512-byte hdwr sectors (1078984 MB)
    Aug 14 13:52:06 seer kernel: sdh: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdh: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdh: sdh1
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdi, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sdi, logical block 0
    Aug 14 13:52:06 seer kernel: sdi : READ CAPACITY failed.
    Aug 14 13:52:06 seer kernel: sdi : status=1, message=00, host=0, driver=08
    Aug 14 13:52:06 seer kernel: sd: Current: sense key: Illegal Request
    Aug 14 13:52:06 seer kernel: Add. Sense: Logical unit not supported
    Aug 14 13:52:06 seer kernel:
    Aug 14 13:52:06 seer kernel: sdi: test WP failed, assume Write Enabled
    Aug 14 13:52:06 seer kernel: sdi: asking for cache data failed
    Aug 14 13:52:06 seer kernel: sdi: assuming drive cache: write through
    Aug 14 13:52:06 seer kernel: sdi:end_request: I/O error, dev sdi, sector 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdi, sector 0
    Aug 14 13:52:06 seer last message repeated 4 times
    Aug 14 13:52:06 seer kernel: Dev sdi: unable to read RDB block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdi, sector 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdi, sector 0
    Aug 14 13:52:06 seer kernel: unable to read partition table
    Aug 14 13:52:06 seer kernel: SCSI device sdj: 21502464 512-byte hdwr sectors (11009 MB)
    Aug 14 13:52:06 seer kernel: sdj: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdj: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdj: sdj1
    Aug 14 13:52:06 seer kernel: SCSI device sdk: 421476864 512-byte hdwr sectors (215796 MB)
    Aug 14 13:52:06 seer kernel: sdk: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdk: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdk: sdk1
    Aug 14 13:52:06 seer kernel: SCSI device sdl: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdl: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdl: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdl: sdl1
    Aug 14 13:52:06 seer kernel: SCSI device sdm: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdm: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdm: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdm: sdm1
    Aug 14 13:52:06 seer kernel: SCSI device sdn: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdn: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdn: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdn: sdn1
    Aug 14 13:52:06 seer kernel: SCSI device sdo: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdo: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdo: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdo: sdo1
    Aug 14 13:52:06 seer kernel: SCSI device sdp: 2107390464 512-byte hdwr sectors (1078984 MB)
    Aug 14 13:52:06 seer kernel: sdp: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdp: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdp: sdp1
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdq, sector 0
    Aug 14 13:52:06 seer kernel: sdq : READ CAPACITY failed.
    Aug 14 13:52:06 seer kernel: sdq : status=1, message=00, host=0, driver=08
    Aug 14 13:52:06 seer kernel: sd: Current: sense key: Illegal Request
    Aug 14 13:52:06 seer kernel: Add. Sense: Logical unit not supported
    Aug 14 13:52:06 seer kernel:
    Aug 14 13:52:06 seer kernel: sdq: test WP failed, assume Write Enabled
    Aug 14 13:52:06 seer kernel: sdq: asking for cache data failed
    Aug 14 13:52:06 seer kernel: sdq: assuming drive cache: write through
    Aug 14 13:52:06 seer kernel: sdq:end_request: I/O error, dev sdq, sector 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdq, sector 0
    Aug 14 13:52:06 seer last message repeated 5 times
    Aug 14 13:52:06 seer kernel: Dev sdq: unable to read RDB block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdq, sector 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdq, sector 0
    Aug 14 13:52:06 seer kernel: unable to read partition table
    Aug 14 13:52:06 seer kernel: SCSI device sdr: 21502464 512-byte hdwr sectors (11009 MB)
    Aug 14 13:52:06 seer kernel: sdr: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdr: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdr: sdr1
    Aug 14 13:52:06 seer kernel: SCSI device sds: 421476864 512-byte hdwr sectors (215796 MB)
    Aug 14 13:52:06 seer kernel: sds: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sds: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sds: sds1
    Aug 14 13:52:06 seer kernel: SCSI device sdt: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdt: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdt: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdt: sdt1
    Aug 14 13:52:06 seer kernel: SCSI device sdu: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdu: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdu: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdu: sdu1
    Aug 14 13:52:06 seer kernel: SCSI device sdv: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdv: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdv: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdv: sdv1
    Aug 14 13:52:06 seer kernel: SCSI device sdw: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdw: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdw: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdw: sdw1
    Aug 14 13:52:06 seer kernel: SCSI device sdx: 2107390464 512-byte hdwr sectors (1078984 MB)
    Aug 14 13:52:06 seer kernel: sdx: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdx: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdx: sdx1
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdy, sector 0
    Aug 14 13:52:06 seer kernel: sdy : READ CAPACITY failed.
    Aug 14 13:52:06 seer kernel: sdy : status=1, message=00, host=0, driver=08
    Aug 14 13:52:06 seer kernel: sd: Current: sense key: Illegal Request
    Aug 14 13:52:06 seer kernel: Add. Sense: Logical unit not supported
    Aug 14 13:52:06 seer kernel:
    Aug 14 13:52:06 seer kernel: sdy: test WP failed, assume Write Enabled
    Aug 14 13:52:06 seer kernel: sdy: asking for cache data failed
    Aug 14 13:52:06 seer kernel: sdy: assuming drive cache: write through
    Aug 14 13:52:06 seer kernel: sdy:end_request: I/O error, dev sdy, sector 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdy, sector 0
    Aug 14 13:52:06 seer last message repeated 5 times
    Aug 14 13:52:06 seer kernel: Dev sdy: unable to read RDB block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdy, sector 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdy, sector 0
    Aug 14 13:52:06 seer kernel: unable to read partition table
    Aug 14 13:52:06 seer kernel: SCSI device sdz: 21502464 512-byte hdwr sectors (11009 MB)
    Aug 14 13:52:06 seer kernel: sdz: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdz: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdz: sdz1
    Aug 14 13:52:06 seer kernel: SCSI device sdaa: 421476864 512-byte hdwr sectors (215796 MB)
    Aug 14 13:52:06 seer kernel: sdaa: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdaa: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdaa: sdaa1
    Aug 14 13:52:06 seer kernel: SCSI device sdab: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdab: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdab: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdab: sdab1
    Aug 14 13:52:06 seer kernel: SCSI device sdac: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdac: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdac: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdac: sdac1
    Aug 14 13:52:06 seer kernel: SCSI device sdad: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdad: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdad: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdad: sdad1
    Aug 14 13:52:06 seer kernel: SCSI device sdae: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdae: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdae: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdae: sdae1
    Aug 14 13:52:06 seer kernel: SCSI device sdaf: 2107390464 512-byte hdwr sectors (1078984 MB)
    Aug 14 13:52:06 seer kernel: sdaf: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdaf: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdaf: sdaf1
    Aug 14 13:52:06 seer kernel: scsi_wr_disk: unknown partition table
    Aug 14 13:52:07 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:07 seer kernel: end_request: I/O error, dev sdi, sector 0
    Aug 14 13:52:07 seer kernel: end_request: I/O error, dev sdq, sector 0
    Aug 14 13:52:07 seer kernel: end_request: I/O error, dev sdy, sector 0
    Here's some extra info:
    # /sbin/blkid | grep asm
    /dev/sdc1: LABEL="ORCL1" TYPE="oracleasm"
    /dev/sdk1: LABEL="ORCL1" TYPE="oracleasm"
    /dev/sds1: LABEL="ORCL1" TYPE="oracleasm"
    /dev/sdaa1: LABEL="ORCL1" TYPE="oracleasm"
    /dev/dpdb1: LABEL="ORCL1" TYPE="oracleasm"
    I have learned that by excluding devices in the oracleasm configuration file I eliminate those I/O errors in /var/log/messages
    # cat /etc/sysconfig/oracleasm
    # This is a configuration file for automatic loading of the Oracle
    # Automatic Storage Management library kernel driver. It is generated
    # By running /etc/init.d/oracleasm configure. Please use that method
    # to modify this file
    # ORACLEASM_ENABELED: 'true' means to load the driver on boot.
    ORACLEASM_ENABLED=true
    # ORACLEASM_UID: Default user owning the /dev/oracleasm mount point.
    ORACLEASM_UID=oracle
    # ORACLEASM_GID: Default group owning the /dev/oracleasm mount point.
    ORACLEASM_GID=oinstall
    # ORACLEASM_SCANBOOT: 'true' means scan for ASM disks on boot.
    ORACLEASM_SCANBOOT=true
    # ORACLEASM_SCANORDER: Matching patterns to order disk scanning
    ORACLEASM_SCANORDER="dp sd"
    # ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
    ORACLEASM_SCANEXCLUDE="sdc sdk sds sdaa sda"
    # ls -la /dev/oracleasm/disks/
    total 0
    drwxr-xr-x 1 root root 0 Aug 14 10:47 .
    drwxr-xr-x 4 root root 0 Aug 13 15:32 ..
    brw-rw---- 1 oracle oinstall 251, 33 Aug 14 13:46 ORCL1
    Now I can go into dbca to create the ASM instance, which starts up fine... I create a new disk group, see ORCL1 as a provisioned ASM disk, select it... and click OK.
    CRASH!!!  The box hangs and I have to reboot it....
    I have gotten myself to exactly the same point right before clicking OK, and here is what is in the ASM alert log so far:
    Fri Aug 14 14:42:02 2009
    Starting ORACLE instance (normal)
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Picked latch-free SCN scheme 3
    Using LOG_ARCHIVE_DEST_1 parameter default value as /u01/app/oracle/product/11.1.0/db_1/dbs/arch
    Autotune of undo retention is turned on.
    IMODE=BR
    ILAT =0
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    Starting up ORACLE RDBMS Version: 11.1.0.6.0.
    Using parameter settings in server-side spfile /u01/app/oracle/product/11.1.0/db_1/dbs/spfile+ASM.ora
    System parameters with non-default values:
    large_pool_size = 12M
    instance_type = "asm"
    diagnostic_dest = "/u01/app/oracle"
    Fri Aug 14 14:42:04 2009
    PMON started with pid=2, OS id=3300
    Fri Aug 14 14:42:04 2009
    VKTM started with pid=3, OS id=3302 at elevated priority
    VKTM running at (20)ms precision
    Fri Aug 14 14:42:04 2009
    DIAG started with pid=4, OS id=3306
    Fri Aug 14 14:42:04 2009
    PSP0 started with pid=5, OS id=3308
    Fri Aug 14 14:42:04 2009
    DSKM started with pid=6, OS id=3310
    Fri Aug 14 14:42:04 2009
    DIA0 started with pid=7, OS id=3312
    Fri Aug 14 14:42:04 2009
    MMAN started with pid=8, OS id=3314
    Fri Aug 14 14:42:04 2009
    DBW0 started with pid=9, OS id=3316
    Fri Aug 14 14:42:04 2009
    LGWR started with pid=6, OS id=3318
    Fri Aug 14 14:42:04 2009
    CKPT started with pid=10, OS id=3320
    Fri Aug 14 14:42:04 2009
    SMON started with pid=11, OS id=3322
    Fri Aug 14 14:42:04 2009
    RBAL started with pid=12, OS id=3324
    Fri Aug 14 14:42:04 2009
    GMON started with pid=13, OS id=3326
    ORACLE_BASE from environment = /u01/app/oracle
    Fri Aug 14 14:42:04 2009
    SQL> ALTER DISKGROUP ALL MOUNT
    Fri Aug 14 14:42:41 2009
    At this point I don't want to click OK until I am sure someone is in the office to reboot the machine manually if I hang it again.... I hung it twice yesterday; however, I did not have the devices excluded in the oracleasm configuration file as I do now.
    Edited by: user10193377 on Aug 14, 2009 3:23 PM
    Well, clicking OK hung it again, and I am waiting to get back into it to see what new information might be gleaned.
    Does anyone have any ideas on what to check or where to look? Will update more once I can log back in.

    Hi Mark,
    It looks like something is not correct with your raw device partition based on the error messages:
    Aug 14 13:52:06 seer kernel: Add. Sense: Logical unit not supported
    Aug 14 13:52:06 seer kernel:
    Aug 14 13:52:06 seer kernel: sda: test WP failed, assume Write Enabled
    Aug 14 13:52:06 seer kernel: sda: asking for cache data failed
    Aug 14 13:52:06 seer kernel: sda: assuming drive cache: write through
    Aug 14 13:52:06 seer kernel: sda:end_request: I/O error, dev sda, sector 0
    It could be a number of things. I would check with your vendor and Oracle support to see if the multipath software driver is supported and if there is a potential workaround for ASM. Sorry this is not quite the solution, but it's what jumps to mind based on issues with multipath software and storage vendors for ASM with Linux and Oracle. Have you checked the validation matrix available on Metalink?
    Cheers,
    Ben
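
    One additional check that may help narrow this down (a sketch; the -p option ships with reasonably recent oracleasm-support packages, so treat its availability on this 2009-era install as an assumption): ask ASMLib which underlying block device it bound the ORCL1 label to, to confirm it chose the multipath device /dev/dpdb1 rather than one of the single-path sd devices that blkid also reports:

    # Show which device(s) carry the ORCL1 label according to ASMLib/blkid
    /usr/sbin/oracleasm querydisk -p ORCL1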

  • Creation of ASM disk for OUI

    I need to install an Oracle Database in order to install Enterprise Manager Cloud Control 12c.
    I need the database to use an ASM disk.
    I used the following command to create the disk, per the Oracle Database Installation Guide.
    #/usr/sbin/oracleasm createdisk DISK1 /dev/sdd1
    #oracleasm listdisks
    DISK1
    However, when running the OUI for Oracle Database 12c (I understand 11.2.0.3 is certified for Cloud Control), step 7
    errors with INS-30517 when attempting to select "Oracle Automatic Storage Management" as the "Storage type".
    Researched the error at this location but no cause or action was provided.
    http://docs.oracle.com/cd/E16655_01/server.121/e26079/common_errormessages.htm
    INS-30517: Automatic Storage Management software is not configured on this system.
    The database install guide states that I need to ensure the "disk discovery string" is set to "ORCL:*" or is left empty ("") so the installer discovers these disks.
    It doesn't show how to confirm or change that setting.
    At this point I'm stuck.

    All ASMLib installations require the oracleasmlib and oracleasm-support packages. The oracleasm kernel driver is included in the Oracle UEK kernel. Perhaps you are missing the oracleasmlib package. You can download it from:
    Oracle Linux: Oracle ASMLib | Oracle Technology Network
    Oracleasmlib is not necessary for ASM to work, but it contains software necessary for Linux oracleasm, including the /usr/sbin/oracleasm-discover utility, which the Oracle installer used in the previous 11g version to detect available ASM volumes.
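
    A quick way to see what the installer will (or won't) find is to run that discovery utility by hand with the same string the install guide mentions; a sketch, assuming the oracleasmlib package above has been installed:

    # Should list the ASMLib-labelled disks (e.g. DISK1) an installer can discover
    /usr/sbin/oracleasm-discover 'ORCL:*'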

  • Difference between ASM Disk Group, ADVM Volume and ACFS File system

    Q1. What is the difference between an ASM Disk Group and an ADVM Volume ?
    To my mind, an ASM Disk Group is effectively a logical volume for Database files ( including FRA files ).
    11gR2 seems to have introduced the concepts of ADVM volumes and ACFS File Systems.
    An 11gR2 ASM Disk Group can contain :
    ASM Disks
    ADVM volumes
    ACFS file systems
    Q2. ADVM volumes appear to be dynamic volumes.
    However is this therefore not effectively layering a logical volume ( the ADVM volume ) beneath an ASM Disk Group ( conceptually a logical volume as well ) ?
    Worse still if you have left ASM Disk Group Redundancy to the hardware RAID / SAN level ( as Oracle recommend ), you could effectively have 3 layers of logical disk ? ( ASM on top of ADVM on top of RAID/SAN ) ?
    Q3. if it is 2 layers of logical disk ( i.e. ASM on top of ADVM ), what makes this better than 2 layers using a 3rd party volume manager ( eg ASM on top of 3rd party LVM ) - something Oracle encourages against ?
    Q4. ACFS file systems seem to be clustered file systems for non-database files, including ORACLE_HOMEs, application exes, etc. (but NOT the GRID_HOME, OS root, OCRs, or voting disks).
    Can you create / modify ACFS file systems using ASM?
    The Oracle topology diagram for ASM in the 11gR2 ASM Admin guide shows ACFS as part of ASM. I am not sure from this whether ACFS is part of ASM or ASM sits on top of ACFS.
    Q5. Connected to Q4. there seems to be a number of different ways, ACFS file systems can be created ? Which of the below are valid methods ?
    through ASM ?
    through native OS file system creation ?
    through OEM ?
    through acfsutil ?
    my head is exploding
    Any help and clarification greatly appreciated
    Jim

    Q1 - ADVM volume is a type of special file created in the ASM DG.  Once created, it creates a block device on the OS itself that can be used just like any other block device.  http://docs.oracle.com/cd/E16655_01/server.121/e17612/asmfilesystem.htm#OSTMG30000
    Q2 - the asm disk group is a disk group, not really a logical volume.  It combines attributes of both when used for database purposes, as the database and certain other applications know how to talk "ASM" protocol.  However, you won't find any general purpose applications that can do so.  In addition, some customers prefer to deal directly with file systems and volume devices, which ADVM is made to do.  In your way of thinking, you could have 3 layers of logical disk, but each of them provides different attributes and characteristics.  This is not a bad thing though, as each has a slightly different focus - os file system\device, database specific, and storage centric.
    Q3 - ADVM is specifically developed to extend the characteristics of ASM for use by general OS applications.  It understands the database performance characteristics and is tuned to work well in that situation.  Because it is developed in house, it takes advantage of the ASM design model.  Additionally, rather than having to contact multiple vendors for support, your support is limited to calling Oracle, a one-stop shop.
    Q4 - You can create and modify ACFS file systems using command line tools and ASMCA. Creating and modifying logical volumes happens through SQL (ASM), asmcmd, and ASMCA. EM can also be used for both items. ACFS sits on top of ADVM, which is a file in an ASM disk group. ACFS is aware of the characteristics of ASM/ADVM volumes and tunes its I/O to make best use of those characteristics.
    Q5 - several ways:
    1) Connect to ASM with SQL, use 'alter diskgroup add volume' as Mihael points out.  This creates an ADVM volume.  Then, format the volume using 'mkfs' (*nix) or acfsformat (windows).
    2) Use ASMCA - A gui to create a volume and format a file system.  Probably the easiest if your head is exploding.
    3) Use 'asmcmd' to create a volume, and 'mkfs' to format the ACFS file system.
    Here is information on ASMCA, with examples:
    http://docs.oracle.com/cd/E16655_01/server.121/e17612/asmca_acfs.htm#OSTMG94348
    Information on command line tools, with examples:
    Basic Steps to Manage Oracle ACFS Systems
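
    Pulling method (1) together end to end, a minimal sketch, assuming a disk group named DATA, a volume named VOL1, and a Linux host; the -123 suffix in the device name is illustrative only, since ADVM generates the actual number:

    SQL> -- create the ADVM volume inside the disk group (run in the ASM instance)
    SQL> ALTER DISKGROUP data ADD VOLUME vol1 SIZE 10G;
    # format the resulting block device as ACFS, then mount it like any file system
    # mkfs -t acfs /dev/asm/vol1-123
    # mkdir -p /u01/acfs_mount
    # mount -t acfs /dev/asm/vol1-123 /u01/acfs_mount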

  • Problem with create asm disk group

    Hi all
    I am about to configure ASM, so I have downloaded Grid Infrastructure 11g (32-bit), and I have configured and created the parameters and directories.
    I ran the installer but got stuck at the third step, where I have to change the discovery path. I typed /dev as the path, where I have created 3 partitions: sdb1, sdc1, and sdd1.
    Is there anything I should do to the partitions, or any parameters to set, before I go through the installation?
    Thanks for help

    You can use the below link to install ASMLIB:
    http://gssdba.wordpress.com/category/asm/
    REFERENCE: Doc ID 580153.1
    There are two different methods to configure ASM on Linux:
    ASM with ASMLib I/O: This method creates all Oracle database files on raw block devices managed by ASM using ASMLib calls. RAW devices are not required with this method as ASMLib works with block devices.
    ASM with Standard Linux I/O: This method creates all Oracle database files on raw character devices managed by ASM using standard Linux I/O system calls. You will be required to create RAW devices for all disk partitions used by ASM.
    You can download the ASMLib RPMs from the URL below:
    http://www.oracle.com/technetwork/server-storage/linux/downloads/rhel5-084877.html
    STEP 01: LOG IN AS ROOT USER AND INSTALL THE RPMS
    [root@node1 ASMLIB]# rpm -Uvh oracleasm-2.6.18-164.el5-2.0.5-1.el5.i686.rpm \
    > oracleasmlib-2.0.4-1.el5.i386.rpm \
    > oracleasm-support-2.1.8-1.el5.i386.rpm
    warning: oracleasm-2.6.18-164.el5-2.0.5-1.el5.i686.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
    Preparing… ########################################### [100%]
    1:oracleasm-support ########################################### [ 33%]
    2:oracleasm-2.6.18-164.el########################################### [ 67%]
    3:oracleasmlib ########################################### [100%]
    STEP 02: CONFIGURE ASMLIB
    [root@node1 ASMLIB]# /etc/init.d/oracleasm configure
    Configuring the Oracle ASM library driver.
    This will configure the on-boot properties of the Oracle ASM library
    driver. The following questions will determine whether the driver is
    loaded on boot and what permissions it will have. The current values
    will be shown in brackets ('[]'). Hitting <ENTER> without typing an
    answer will keep that current value. Ctrl-C will abort.
    Default user to own the driver interface []: oracle
    Default group to own the driver interface []: dba
    Start Oracle ASM library driver on boot (y/n) [n]: y
    Scan for Oracle ASM disks on boot (y/n) [y]: y
    Writing Oracle ASM library driver configuration: done
    Initializing the Oracle ASMLib driver: [ OK ]
    Scanning the system for Oracle ASMLib disks: [ OK ]
    STEP 03 :CREATE ASM DISK
    [root@node1 ASMLIB]# /etc/init.d/oracleasm listdisks
    [root@node1 ASMLIB]#
    [root@node1 ~]# /etc/init.d/oracleasm createdisk VOL1 /dev/sdb1
    Marking disk "VOL1" as an ASM disk: [ OK ]
    [root@node1 ~]# /etc/init.d/oracleasm createdisk VOL2 /dev/sdc1
    Marking disk "VOL2" as an ASM disk: [ OK ]
    [root@node1 ~]# /etc/init.d/oracleasm createdisk VOL3 /dev/sdd1
    Marking disk "VOL3" as an ASM disk: [ OK ]
    [root@node1 ~]# /etc/init.d/oracleasm createdisk VOL4 /dev/sde1
    Marking disk "VOL4" as an ASM disk: [ OK ]
    [root@node1 ~]# /etc/init.d/oracleasm createdisk VOL5 /dev/sdf1
    Marking disk "VOL5" as an ASM disk: [ OK ]
    [root@node1 ~]# /etc/init.d/oracleasm listdisks
    VOL1
    VOL2
    VOL3
    VOL4
    VOL5
    [root@node1 ~]#
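
    With the ASMLib disks created, the discovery path the installer asks for (the step the original poster was stuck on) should point at the ASMLib devices rather than the raw /dev partitions; a brief sketch of the check, assuming ASMLib's default mount point:

    # These are the disks the installer should discover; set the discovery string
    # to /dev/oracleasm/disks/* (or ORCL:* via the ASMLib driver) instead of /dev
    ls -l /dev/oracleasm/disks/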

  • ORA-15042: ASM disk "2" is missing from group number "1"

    Hi,
    I'm working on an Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production With the Automatic Storage Management option.
    In ASM I had 3 disk groups:
    - ARCHIVELOG (4 disks)
    - ONLINELOG (1 disks)
    - DATA (10 disks)
    When I try to start up the ASM instance I get:
    ORA-15042: ASM disk "2" is missing from group number "1"
    The disk group won't be mounted.
    I would like to remove that disk and later add a new one.
    How can I do that?
    I'm not able to mount the ARCHIVELOG diskgroup.
    I tried the command
    SQL> alter diskgroup archivelog drop disk ARCH3 force;
    alter diskgroup archivelog drop disk ARCH3 force
    ERROR at line 1:
    ORA-15032: not all alterations performed
    ORA-15001: diskgroup "ARCHIVELOG" does not exist or is not mounted
    Thanks in advance,
    Samuel
    Edited by: Samuel Rabini on Jan 10, 2012 4:11 PM

    As that database is on AWS, I tried this:
    - drop diskgroup archivelog
    - detach those 4 disks
    - create 4 new disks
    - attach the new disks
    - assign those disks to ASM with the oracleasm utility
    - create diskgroup archivelog
    It worked.
    But that was because I was on AWS, and more because it was the ARCHIVELOG diskgroup.
    What would I have had to do if it had been the DATA diskgroup?
    Thanks
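
    Before force-dropping disks in a case like this, it is worth confirming exactly which disk ASM lost and what its header looks like; a diagnostic sketch using the standard ASM views (nothing AWS-specific). For a DATA disk group on external redundancy, dropping and recreating the group would additionally mean restoring the database from an RMAN backup afterwards, which is why the drop/recreate shortcut really only suited the ARCHIVELOG group here.

    SQL> -- mount_status/header_status show which disk dropped out and why
    SQL> SELECT group_number, disk_number, name, path, mount_status, header_status
      2  FROM v$asm_disk
      3  ORDER BY group_number, disk_number;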

  • EM Alert: Warning:+ASM Disk Group requires rebalance

    Environment:
    O.S Version : HP-UX B.11.31 U ia64
    Oracle DB Version : 11.2.0.3.0 , Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    Database files are on : ASM Disk Group
    It is about the ASM Diskgroup low disk space alert by Oracle Enterprise Manager.
    Message=Disk Group DG_FLASH_01 requires rebalance because at least one disk is low on space.
    Metric=Disk Minimum Free (%) without Rebalance
    Metric value=18.961548
    Disk Group Name=DG_FLASH_01
    Severity=Warning
    Target Name=+ASM_dbsrver.siva.com
    Target type=Automatic Storage Management
    There is only 1 LUN assigned to this disk group, DG_FLASH_01.
    We have an Oracle Enterprise Manager Grid Control job, MN_DBSRVR_DEL_ARCHIVELOGS, that runs every 12 hours, at 1 AM and again at 1 PM daily. It cleans up the archive logs older than 3 days. The FLASH disk group is continuously being written with new files for both archive logs and flashback logs.
    If there were multiple disks and a vast difference between the files on the different LUNs, then a rebalance would be good to run.
    How do I address this recurring alert of "Disk Group DG_FLASH_01 requires rebalance because at least one disk is low on space" when there is only one LUN in the +ASM disk group?

    As I stated earlier, there is only one disk in this disk group:
    DISK_NUMBER      OS_MB   TOTAL_MB    FREE_MB NAME                    PATH
              0      65536      65536      12995 DG_FLASH_01_0000        /devasm/gc/ora_asm_gc_b03_a14_d10_08_36
              0      65536      65536      43064 DG_DATA_01_0000         /devasm/gc/ora_asm_gc_b03_a13_d12_08_35
    So a disk REBALANCE is not required.
    Edited by: Sivaprasad S on Feb 14, 2013 11:46 PM

  • Share an ASM disk group among multiple nodes

    According to Oracle documentation:
    "To share an ASM disk group among multiple nodes, you must install Oracle Clusterware on all of the nodes, regardless of whether you install Oracle RAC on the nodes."
    And if I understand it right, to share the same ASM disk group from multiple nodes (from separate RACs, or multiple non-RAC nodes), the ASM instances on those nodes need to communicate to synchronize ASM-related metadata, using a technique like Cache Fusion.
    My question is how this ASM communication takes place among different ASM instances located in different RACs and standalone servers. Do we have to have some kind of interconnect settings among the nodes?

    Hi,
    ASM and database instances require shared access to the disks in a disk group. ASM instances manage the metadata of the disk group and provide file layout information to the database instances.
    ASM instances can be clustered using Oracle Clusterware; there is one ASM instance for each cluster node. If there are several database instances for different databases on the same node, then the database instances share the same single ASM instance on that node.
    If the ASM instance on a node fails, then all of the database instances on that node also fail. Unlike a file system failure, an ASM instance failure does not require restarting the operating system. In an Oracle RAC environment, the ASM and database instances on the surviving nodes automatically recover from an ASM instance failure on a node.
    see this link
    http://download.oracle.com/docs/cd/B28359_01/server.111/b31107/asmcon.htm :)

  • Oracleasm cannot find asm disks on boot

    Hello!
    RAC 11.2.0.2 on SLES10.
    I have configured ASMLib on a RAC system (multipath configured) and everything worked like a charm before booting node 2 in the cluster. The problem is that after a reboot of node 2 I cannot see my ASM disks anymore on that node. This only happens on node 2; I don't have this issue after rebooting node 1. However, if I run a manual "scandisks" on node 2, it sees the disks again and the cluster starts successfully.
    I cannot see any difference between the two nodes; the ASMLib RPMs match the kernel version.
    oracle@prod2:~> uname -a
    Linux prod2 2.6.16.60-0.54.5-smp #1 SMP Fri Sep 4 01:28:03 UTC 2009 x86_64 x86_64 x86_64 GNU/Linux
    oracle@prod2:~> rpm -qa |grep oracleasm
    oracleasm-support-2.1.4-1.SLE10
    oracleasm-2.6.16.60-0.54.5-smp-2.0.5-1.SLE10
    oracleasmlib-2.0.4-1.SLE10
    /etc/sysconfig/oracleasm looks like this:
    # This is a configuration file for automatic loading of the Oracle
    # Automatic Storage Management library kernel driver.  It is generated
    # By running /etc/init.d/oracleasm configure.  Please use that method
    # to modify this file
    # ORACLEASM_ENABELED: 'true' means to load the driver on boot.
    ORACLEASM_ENABLED=true
    # ORACLEASM_UID: Default user owning the /dev/oracleasm mount point.
    ORACLEASM_UID=oracle
    # ORACLEASM_GID: Default group owning the /dev/oracleasm mount point.
    ORACLEASM_GID=dba
    # ORACLEASM_SCANBOOT: 'true' means scan for ASM disks on boot.
    ORACLEASM_SCANBOOT=true
    # ORACLEASM_SCANORDER: Matching patterns to order disk scanning
    ORACLEASM_SCANORDER="dm"
    # ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
    ORACLEASM_SCANEXCLUDE="sd"
    I would be grateful if anyone out there can shed some light on this issue.
    Thanks a bunch!

    Hi,
    I think you need to upgrade your PowerPath version, if you are using a CLARiiON for your storage.
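
    Another angle worth checking (a sketch, and an assumption rather than a diagnosis; service names vary by distribution, and on SLES the multipath boot service may be boot.multipath): if the oracleasm init script runs before the dm-multipath devices exist on node 2, the boot-time scan finds nothing, which would match a manual scandisks working fine later. Comparing the init ordering on both nodes can confirm or rule this out:

    # Compare runlevel ordering of the ASMLib and multipath services on both nodes
    chkconfig --list | egrep 'oracleasm|multipath'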

  • ASM Disk Group RAID Levels

    This is the scenario that I am currently working on. Just need some input on whether it is feasible or not.
    We have a 2 node RAC running Oracle 10.2.0.3 on AIX 5L. Database size is ~2TB. The database mostly performs OLTP but also stores some historical data.
    There are two main applications using the database - one performs high reads with some small updates & inserts, while the other is very write intensive but does some reads as well.
    Currently there are three disk groups: one for the application tablespaces (dg_data), another for the system/sysaux/undo tablespaces (dg_system), and another for archived logs and redo log copies (dg_flash), all using external redundancy. ASM best practices recommend no more than 2 disk groups. They also recommend disk groups with disks of similar characteristics, including RAID levels. However, the dg_data disk group has both RAID 5 and RAID 1+0 disks, which house tablespaces for both applications. Seeing that the applications have different requirements (heavy reads vs. heavy writes), does it make sense to create separate disk groups for the 2 different RAID levels, or would using RAID 5 in dg_data satisfy both requirements?

    I am attempting to generate some statistics on the ASM disks' I/O activity before implementing the disk group separation, in order to have some metrics for comparison purposes. Enterprise Manager Grid Control displays the performance of disk groups and individual disks by showing the Disk Group I/O Cumulative Statistics. When comparing the results with the asmiostat output I am unable to correlate the results. I know that asmiostat queries the v$asm_disk_stat view. Where does EM GC pull its information from?
    For example, I run the following query on the ASM instance:
    SQL> select group_number,disk_number,total_mb,free_mb,name,path,reads,writes,read_time,write_time,bytes_read,bytes_written
    2 from v$asm_disk_stat
    3 where group_number=
    4 (select group_number from v$asm_diskgroup
    5* where name = 'DG_FLASH')
    GROUP_NUMBER DISK_NUMBER TOTAL_MB FREE_MB NAME PATH READS WRITES READ_TIME WRITE_TIME BYTES_READ BYTES_WRITTEN
    1 0 8671 8432 DG_FLASH_0000 /dev/asm2 14379476 10338479 149205.75 19633.64 290,136,450,560.00 7.2165E+10
    1 1 8671 8431 DG_FLASH_0001 /dev/asm3 11470508 10278698 184597.5 19313.54 249,769,027,584.00 9.2911E+10
    1 2 8671 8432 DG_FLASH_0002 /dev/asm4 17274529 8743188 178547.56 38342.52 339,439,240,192.00 6.7165E+10
    The output from the same period on Grid Control is below
    MEMBER DISKS AVG RESPTIME AVG THROUGHPUT TOT I/O TOT RDS TOT WRTS RDERRS WRTERRS
    DG_FLASH_0000 5.58 2.58 33179503 21949607 11,229,896 0 0
    DG_FLASH_0001 8.26 1.83 25752100 13131695 12,620,405 0 0
    DG_FLASH_0002 8.11 1.86 28269693 18798823 9,470,870 0 0
    The statistics in the query are lower than those in the EM GC report. I also tried querying the fixed views (x$) but the results were even more confusing.
    What is the best method for comparing and gathering statistics on ASM activity?
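
    One likely contributor to the mismatch (offered as an assumption, not a confirmed diagnosis): the counters in v$asm_disk_stat are cumulative since the ASM instance started, whereas EM GC derives its figures from deltas between its collection samples, so a single point-in-time query will rarely line up with an EM interval. A sketch of a delta-based comparison:

    SQL> -- sample the cumulative counters, wait a known interval, re-run, and
    SQL> -- subtract; the differences approximate EM's per-interval figures
    SQL> SELECT name, reads, writes, bytes_read, bytes_written
      2  FROM v$asm_disk_stat;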

  • Can't have ASM mark an NFS file as an ASM disk: "is not a block device"

    Hello,
    I'm trying to experiment with ASM for learning purposes. Because I don't have access to a SAN, I am trying to use NFS files, but I can't manage to have ASM mark those files as ASM disks.
    [root@localhost /]# /etc/init.d/oracleasm createdisk ASM_DISK_1 /mnt/asm_dsks/dg1/disk1
    Marking disk "ASM_DISK_1" as an ASM disk: [FAILED]
    The oracleasm log says: File "/mnt/asm_dsks/dg1/disk1" is not a block device
    OK, more context now:
    I am trying to install ASM on a RHEL5 virtual machine (on vmware).
    [root@localhost /]# uname -rm
    2.6.18-8.el5 x86_64
    I followed this document:
    http://www.oracle.com/technology/pub/articles/smiley-11gr1-install.html until I got stuck at the following command:
    /etc/init.d/oracleasm createdisk ...
    Now, the NFS filesystem comes from a Solaris 10 system (the only one that's available) running on a physical sun box (this one is not a virtual system).
    I have tried many combinations. I tried creating the files on the Linux VM using dd, as root and as oracle. I tried creating them on the Solaris side using mkfile... No matter what I try, I always get the same issue.
    I tried to follow this document: Creating Files on a NAS Device for Use with ASM (http://download.oracle.com/docs/html/B10811_05/app_nas.htm#BCFHCIEC)
    But nothing seems to work.
    Any idea, recommendations?
    Thanks,
    Laurent.

    Hi buddy,
    I guess the Metalink note 731775.1 should help you.
    In fact the procedure is:
    - Create the disk devices in your NFS directory (using dd)
    - Adjust the permissions on those files (in this case, oracle:dba)
    - Adjust the ASM_DISKSTRING on the ASM instance, setting the NFS directory in the discovery path
    - Verify that they are available in the v$asm_disk view
    - Create the diskgroup using the NFS disks that you have created.
    Hope it helps,
    Cerreia
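
    A minimal sketch of that procedure, assuming the NFS mount from the original post (/mnt/asm_dsks/dg1) and oracle:dba ownership; note that ASMLib's createdisk is skipped entirely, since it only accepts block devices, which is exactly the error above:

    # create a 1 GB file-backed "disk" on the NFS mount and fix its ownership
    dd if=/dev/zero of=/mnt/asm_dsks/dg1/disk1 bs=1M count=1024
    chown oracle:dba /mnt/asm_dsks/dg1/disk1
    chmod 660 /mnt/asm_dsks/dg1/disk1

    SQL> -- point ASM discovery at the NFS directory, verify, then create the group
    SQL> ALTER SYSTEM SET asm_diskstring = '/mnt/asm_dsks/dg1/*';
    SQL> SELECT path, header_status FROM v$asm_disk;
    SQL> CREATE DISKGROUP dg1 EXTERNAL REDUNDANCY DISK '/mnt/asm_dsks/dg1/disk1';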

  • How to reinstall Grid Infrastructure and use old ASM disk groups

    Hello to all,
    I installed Grid Infrastructure 11gR2 on a standalone server (the OS is Linux),
    and I configured ASM and my database was created on ASM.
    Suppose that my OS disk gets corrupted and the OS doesn't start, and the Grid home is on that disk,
    so I have to install the OS again.
    My ASM disks are safe. Now, how can I install Grid Infrastructure again in such a way that it can use the previous ASM disks
    and disk groups, so that I am not obliged to create my database again?
    In step 2 of installing Grid Infrastructure there are four options:
    1. Install and configure Oracle Grid Infrastructure for a Cluster
    2. Configure Oracle Grid Infrastructure for a Standalone Server
    3. Upgrade Oracle Grid Infrastructure or Oracle Automatic Storage Management
    4. Install Oracle Grid Infrastructure Software Only
    If I select option 2, it wants to create a disk group again.
    I guess that I need to select option 4 and then do some configuration, but I don't know what I must configure.
    Do you know the answer to my question? If yes, please explain its stages.
    Thank you so much

    Hi,
    no, you are not obliged to recreate your database. However, there is a small flaw in the installation procedure which does not make it 100% easy...
    When you installed Oracle Restart (standalone GI), your ASM disk group will contain the SPFILE of the ASM instance. And this is exactly the small flaw you will be encountering. So you have 2 options for "recovery":
    1.) Do a software-only install (option 4) and run roothas.pl. This, however, will not create any ASM entries. You would have to add ASM manually (using srvctl), and you can specify the ASM spfile with the srvctl command. The problem here, however, is that you have to know where your ASM spfile was. If you have a backup of your OLR and a backup of the GPnP profile, this might be easier to find out.
    2.) Do a new installation (option 2) and configure a new disk group (with a "spare" disk or small LUN and a new name), so that Oracle Restart creates the ASM instance and the new ASM spfile for you.
    Then you can simply mount the disk group containing your database additionally. You should then, however, move your new ASM spfile to the new disk group (or simply exchange it with the existing one). In this case it is easier to find out where it was; however, you will need a spare (though small) LUN for the new spfile (temporarily, until you exchange it).
    In either case, after you have your ASM instance back (and access to your old disk group), you have to re-register your database and services, if you do not have an OLR backup.
    Again: it is doable, and you can simply mount the ASM disk group containing your database. However, I suggest you try this once to know what really needs to be done in this case.
    Regards
    Sebastian
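
    A hedged sketch of option 1's manual registration on 11.2 Oracle Restart; the spfile path, discovery string, database name, and Oracle home below are placeholders for illustration, not values from this post:

    # register ASM with a known spfile location and discovery string, then start it
    srvctl add asm -p '<asm_spfile_path>' -d '/dev/oracleasm/disks/*'
    srvctl start asm

    # mount the disk group holding the database, then re-register the database
    sqlplus / as sysasm <<'EOF'
    ALTER DISKGROUP data MOUNT;
    EOF
    srvctl add database -d orcl -o /u01/app/oracle/product/11.2.0/dbhome_1 -a DATA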

  • Disk manager on Windows 2003 stops, or server reboots, under 11g cluster

    Hi
    I have a simple question:
    since we moved our cluster to 64-bit Windows 2003 (it was 32-bit before), the disk manager service under Windows stops after 3 to 4 days. At that moment, Oracle stops (ASM no longer has any disks to access).
    Sometimes the server reboots without reason.
    Has anyone had this problem with an 11g cluster under Windows 2003 64-bit?
    Thanks
    Edited by: ron_berube on 2009-09-08 12:47


  • What does the "Internal" value on the ASM Disk Group Usage Chart represent?

    In OEM Grid Control, on the Automatic Storage Management Home page there is the Disk Group Usage Chart.
    What does the "Internal (GB)" value on this pie chart represent, and how is it calculated?
    On one of our servers it is 13% of the available ASM disk capacity, and on another it is 0%.

    Internal space is (Total MB - Free MB) from the disk group space usage, minus the Total MB from the database disk space usage.
    It is basically the space allocated in the ASM disk groups that is not accounted for by your DBs' files. There are a few bugs related to this, and it needs to be refined.
    regards
    Pravin
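
    A hypothetical worked example of that formula: if a disk group shows 100 GB total with 60 GB free, and the database files in it account for 27 GB, then Internal = (100 - 60) - 27 = 13 GB, which on a 100 GB group appears as 13% of capacity.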

  • Backup and Restore OCR,Voting Disk and ASM Disks in new SAN-10g RAC

    Dear Friends,
    I am using a 10g R2 RAC setup on Linux.
    My OCR, voting disk, and ASM disks for DBF files are on a SAN box.
    Now I am reorganising the SAN by scrapping the entire SAN and creating new LUNs (it's a must),
    so please let me know:
    1) How do I take a backup of the OCR and voting disk from the existing SAN, and how do I restore them to the new LUNs after the SAN reorganisation?
    2) How do I take a backup of the existing databases from the existing SAN, and how do I restore them to the new LUNs after the SAN reorganisation?
    I will be doing it in planned downtime only.
    Regards,
    DB

    For step 1 you should follow the Metalink doc.
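
    For completeness, a hedged sketch of the usual 10g commands for step 1; the raw device path is a placeholder, and your actual voting disk location comes from crsctl (run these as root during the planned downtime):

    # export the OCR logically; automatic OCR backups also sit under $CRS_HOME/cdata
    ocrconfig -export /backup/ocr_export.dmp

    # find the voting disk, then back it up block-for-block (the 10g method)
    crsctl query css votedisk
    dd if=/dev/raw/raw2 of=/backup/votedisk.bak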
    For step 2, here is a simple backup command script.
    I have done this on Windows for you.
    D:\app\ranjit\product\11.2.0\dbhome_1\BIN>rman target /
    Recovery Manager: Release 11.2.0.1.0 - Production on Wed Feb 8 21:48:47 2012
    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
    connected to target database: ORCL (DBID=1299593730)
    RMAN> run
    {
    allocate channel c1 device type disk format 'D:\app\ranjit\rman\%U';
    allocate channel c2 device type disk format 'D:\app\ranjit\rman\%U';
    backup database;
    backup current controlfile format 'D:\app\ranjit\rman\%U';
    }
    Regards
