ASM disk mount in alert log

Hello
I am running Oracle 10g with ASM and I am seeing one of my three disk groups being mounted and dismounted in the alert log of one database instance.
SUCCESS: diskgroup FLASH was mounted
SUCCESS: diskgroup FLASH was dismounted
This disk group holds only my archive logs. Is this normal?

Danny,
Are you on RAC? Check Metalink note 361173.1, which talks about the same thing.
Update:
Ignore the messages. They are just informational and will be removed in future versions. Basically, ASM closes (dismounts) a disk group when it closes the last file descriptor on that DG and mounts it again when it opens the first file on that DG.
Have a look at this thread on Orafaq, where Gopal points to this note for the same message:
http://www.orafaq.com/maillist/oracle-l/2007/02/18/1244.htm
Aman....
Edited by: Aman.... on Sep 19, 2008 8:52 AM
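For anyone who wants to watch this behaviour, a rough sketch (the SIDs and alert-log path below are generic placeholders, not taken from this thread): force an archive from the database instance and look at both sides. The ASM instance keeps the diskgroup mounted the whole time; only the database instance logs the mounted/dismounted pair as it opens and then closes its last file on that group.
-- on the database instance: force an archive into the FLASH diskgroup
SQL> alter system archive log current;
-- on the ASM instance: the group itself never leaves the MOUNTED state
SQL> select name, state from v$asm_diskgroup;
-- meanwhile, in the database alert log (typical 10g location; check background_dump_dest)
$ tail -f $ORACLE_BASE/admin/$ORACLE_SID/bdump/alert_$ORACLE_SID.log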

Similar Messages

  • ASM Disk Mount Error

    Hi,
    I have been getting the following error consistently since yesterday while mounting the ASM diskgroups.
    1. I created the ASM disks successfully and created the appropriate ASM DiskGroups using asmca utility.
    2. I was able to see the diskgroups as mounted from asmcmd tool.
    3. All of a sudden, all the diskgroups were dismounted yesterday and I see the following error in the alert logs for all the
    diskgroups.
    4. This is ASM 11g.
    As I have exhausted all my avenues to fix this, I have turned to you for further assistance. Please advise.
    SQL> /* ASMCMD */ALTER DISKGROUP DataVSPSI MOUNT
    NOTE: cache registered group DATAVSPSI number=3 incarn=0x7934bf5d
    NOTE: cache began mount (first) of group DATAVSPSI number=3 incarn=0x7934bf5d
    NOTE: Assigning number (3,0) to disk (ORCL:D01_VSP_SI)
    Thu Dec 27 17:38:29 2012
    NOTE: start heartbeating (grp 3)
    kfdp_query(DATAVSPSI): 11
    kfdp_queryBg(): 11
    NOTE: cache opening disk 0 of grp 3: D01_VSP_SI label:D01_VSP_SI
    NOTE: F1X0 found on disk 0 au 2 fcn 0.0
    NOTE: cache mounting (first) external redundancy group 3/0x7934BF5D (DATAVSPSI)
    NOTE: starting recovery of thread=1 ckpt=2.1 group=3 (DATAVSPSI)
    WARNING: IO Failed. group:3 disk(number.incarnation):0.0xeae44fae disk_path:ORCL:D01_VSP_SI
    AU:4 disk_offset(bytes):4333568 io_size:122880 operation:Read type:asynchronous
    result:I/O error process_id:32162
    WARNING: IO Failed. group:3 disk(number.incarnation):0.0xeae44fae disk_path:ORCL:D01_VSP_SI
    AU:4 disk_offset(bytes):4202496 io_size:131072 operation:Read type:asynchronous
    result:I/O error process_id:32162
    ORA-15080: synchronous I/O operation to a disk failed
    ERROR: ASM recovery failed to read ACD
    NOTE: cache initiating offline of disk 0 group DATAVSPSI
    NOTE: process 32162 initiating offline of disk 0.3940831150 (D01_VSP_SI) with mask 0x7e in group 3
    WARNING: Disk D01_VSP_SI in mode 0x7f is now being taken offline
    NOTE: initiating PST update: grp = 3, dsk = 0/0xeae44fae, mode = 0x15
    kfdp_updateDsk(): 12
    kfdp_updateDskBg(): 12
    ERROR: too many offline disks in PST (grp 3)
    WARNING: Disk D01_VSP_SI in mode 0x7f offline aborted
    Thu Dec 27 17:38:29 2012
    NOTE: halting all I/Os to diskgroup DATAVSPSI
    NOTE: crash recovery signalled OER-15130
    ERROR: ORA-15130 signalled during mount of diskgroup DATAVSPSI
    NOTE: cache dismounting (clean) group 3/0x7934BF5D (DATAVSPSI)
    NOTE: lgwr not being msg'd to dismount
    NOTE: cache dismounted group 3/0x7934BF5D (DATAVSPSI)
    Also, I think the lower-level disk is fine, as I am able to write to the disk as follows:
    [oracle@rmanqa01 trace]$ dd of=/dev/sdp1
    Test Test
    0+1 records in
    0+1 records out
    10 bytes (10 B) copied, 4.97435 seconds, 0.0 kB/s
    [oracle@rmanqa01 trace]$ id
    uid=500(oracle) gid=500(oinstall) groups=6(disk),500(oinstall),501(dba),502(oper),503(asmadmin),504(asmdba),505(asmoper),506(horcm)
    [oracle@rmanqa01 trace]$ ls -l /dev/sdp1
    brw-rw---- 1 root disk 8, 241 Dec 28 11:59 /dev/sdp1
    [oracle@rmanqa01 trace]$
    oracleasm also lists the disk D01_VSP_SI as follows:
    [root@rmanqa01 log]# /etc/init.d/oracleasm listdisks
    A01_VSP_SI
    ADSK01
    ARCH_AMS_SI
    D01_VSP_SI
    DATA_AMS_SI
    DDSK01
    DEMO_ARCH
    DEMO_DATA
    L01_VSP_SI
    RDSK01
    REDO_AMS_SI
    You have new mail in /var/spool/mail/root
    [root@rmanqa01 log]# /etc/init.d/oracleasm querydisk D01_VSP_SI
    Disk "D01_VSP_SI" is a valid ASM disk
    [root@rmanqa01 log]#
    The kfed tool was giving proper data until yesterday.
    Today, I am getting the following:
    ./kfed read /dev/oracleasm/disks/D01_VSP_SI
    kfbh.endian: 84 ; 0x000: 0x54
    kfbh.hard: 101 ; 0x001: 0x65
    kfbh.type: 115 ; 0x002: *** Unknown Enum ***
    kfbh.datfmt: 116 ; 0x003: 0x74
    kfbh.block.blk: 1936020512 ; 0x004: T=0 NUMB=0x73655420
    kfbh.block.obj: 2147486324 ; 0x008: TYPE=0x8 NUMB=0xa74
    kfbh.check: 2886846267 ; 0x00c: 0xac11c73b
    kfbh.fcn.base: 0 ; 0x010: 0x00000000
    kfbh.fcn.wrap: 0 ; 0x014: 0x00000000
    kfbh.spare1: 0 ; 0x018: 0x00000000
    kfbh.spare2: 0 ; 0x01c: 0x00000000
    ERROR!!!, failed to get the oracore error message
    [oracle@rmanqa01 bin]$
    Please help.
    Thanks
    V V
    Edited by: user13479556 on Dec 28, 2012 12:00 PM
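    A note on the write test above: dd of=/dev/sdp1 with no seek writes the typed text at offset 0 of the partition, which is exactly where the ASM disk header lives, and the leading bytes in the kfed dump (0x54 0x65 0x73 0x74, i.e. "Test") are that text. To sanity-check access to a device that already belongs to ASM, stick to read-only commands, for example (paths as in the post):
    $ dd if=/dev/sdp1 of=/dev/null bs=1M count=1              # is the device readable?
    $ kfed read /dev/oracleasm/disks/D01_VSP_SI | head -5     # a healthy header reports kfbh.type KFBTYP_DISKHEAD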

    Thanks Berx for pointing that out.
    I deleted the ASM disks and the diskgroups, re-created them afresh, and was able to start the ASM instance and mount the DGs.
    Now the mount is persistent, but dbca fails with the following error in the alert log. Under what situations can this error be seen?
    NOTE: Loaded library: System
    SUCCESS: diskgroup DATAAMSSI was mounted
    SUCCESS: diskgroup ARCHAMSSI was mounted
    ERROR: failed to establish dependency between database R3AMSSI and diskgroup resource ora.DATAAMSSI.dg
    ERROR: failed to establish dependency between database R3AMSSI and diskgroup resource ora.ARCHAMSSI.dg
    Mon Dec 31 20:25:25 2012
    SUCCESS: diskgroup LOGAMSSI was mounted
    Mon Dec 31 20:25:25 2012
    ERROR: failed to establish dependency between database R3AMSSI and diskgroup resource ora.LOGAMSSI.dg
    Mon Dec 31 20:25:25 2012
    Successful mount of redo thread 1, with mount id 784925673
    Completed: Create controlfile reuse set database "R3AMSSI"
    MAXINSTANCES 8
    MAXLOGHISTORY 1
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    Datafile
    '+DATAAMSSI/R3AMSSI/system01.dbf',
    '+DATAAMSSI/R3AMSSI/sysaux01.dbf',
    '+DATAAMSSI/R3AMSSI/undotbs01.dbf',
    '+DATAAMSSI/R3AMSSI/users01.dbf'
    LOGFILE GROUP 1 ('+LOGAMSSI/R3AMSSI/redo01.log') SIZE 51200K,
    GROUP 2 ('+LOGAMSSI/R3AMSSI/redo02.log') SIZE 51200K,
    GROUP 3 ('+LOGAMSSI/R3AMSSI/redo03.log') SIZE 51200K RESETLOGS
    Stopping background process MMNL
    Stopping background process MMON
    Starting background process MMON
    Starting background process MMNL
    Mon Dec 31 20:25:28 2012
    MMON started with pid=17, OS id=10452
    ALTER SYSTEM enable restricted session;
    Mon Dec 31 20:25:28 2012
    MMNL started with pid=18, OS id=10454
    alter database "R3AMSSI" open resetlogs
    RESETLOGS after incomplete recovery UNTIL CHANGE 945183
    Errors in file /u01/app/oracle/diag/rdbms/r3amssi/R3AMSSI/trace/R3AMSSI_ora_10434.trc:
    ORA-00313: open failed for members of log group 1 of thread 1
    ORA-00312: online log 1 thread 1: '+LOGAMSSI/r3amssi/redo01.log'
    ORA-17503: ksfdopn:2 Failed to open file +LOGAMSSI/r3amssi/redo01.log
    ORA-15173: entry 'redo01.log' does not exist in directory 'r3amssi'
    Errors in file /u01/app/oracle/diag/rdbms/r3amssi/R3AMSSI/trace/R3AMSSI_ora_10434.trc:
    ORA-00313: open failed for members of log group 1 of thread 1
    ORA-00312: online log 1 thread 1: '+LOGAMSSI/r3amssi/redo01.log'
    ORA-17503: ksfdopn:2 Failed to open file +LOGAMSSI/r3amssi/redo01.log
    ORA-15173: entry 'redo01.log' does not exist in directory 'r3amssi'
    Mon Dec 31 20:25:29 2012
    Checker run found 5 new persistent data failures
    Mon Dec 31 20:27:07 2012
    I can verify from asmcmd that the redo logs reported above as missing are very much present in '+LOGAMSSI/R3AMSSI'.
    Thanks
    V V
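    As a side check, a minimal sketch of confirming from the command line what ASM actually has under that directory (run as the ASM/grid software owner; the path is the one quoted above):
    $ asmcmd ls -l +LOGAMSSI/R3AMSSI/
    $ asmcmd find +LOGAMSSI 'redo*'        # searches the whole diskgroup for the redo members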

  • ASM disk mount and dismount !

    Hello ,
    I keep finding the following entries in my alert.log. It seems that when Oracle archives, it mounts the diskgroup and then dismounts it. Is this usual?
    Thread 1 cannot allocate new log, sequence 161
    Checkpoint not complete
    Current log# 1 seq# 160 mem# 0: +ORCL_ONLINE_OPS1/dwh/redo01.log
    Thread 1 advanced to log sequence 161
    Current log# 2 seq# 161 mem# 0: +ORCL_ONLINE_OPS1/dwh/redo02.log
    Wed Jan 9 17:22:13 2008
    SUCCESS: diskgroup ORCL_ARCH was mounted
    SUCCESS: diskgroup ORCL_ARCH was dismounted
    SUCCESS: diskgroup ORCL_ARCH was mounted
    SUCCESS: diskgroup ORCL_ARCH was dismounted
    SUCCESS: diskgroup ORCL_ARCH was mounted
    SUCCESS: diskgroup ORCL_ARCH was dismounted
    Wed Jan 9 17:27:50 2008
    Shutting down archive processes
    Wed Jan 9 17:27:55 2008
    Thanks,
    Pam

    Please check Metalink Note 361173.1 - ASM Diskgroup Success Mount And Umount Messages in alert.log During RMAN Backup
    Thanks,
    Anil

  • Can't have ASM mark an NFS file as an ASM disk: "is not a block device"

    Hello,
    I'm trying to experiment with ASM for learning purposes. Because I don't have access to a SAN, I am trying to use NFS files, but I can't manage to have ASM mark those files as ASM disks.
    [root@localhost /]# /etc/init.d/oracleasm createdisk ASM_DISK_1 /mnt/asm_dsks/dg1/disk1
    Marking disk "ASM_DISK_1" as an ASM disk: [FAILED]
    The oracleasm log says: File "/mnt/asm_dsks/dg1/disk1" is not a block device
    OK, more context now:
    I am trying to install ASM on a RHEL5 virtual machine (on vmware).
    [root@localhost /]# uname -rm
    2.6.18-8.el5 x86_64
    I followed this document:
    http://www.oracle.com/technology/pub/articles/smiley-11gr1-install.html until I got stuck at the following command:
    /etc/init.d/oracleasm createdisk ...
    Now, the NFS filesystem comes from a Solaris 10 system (the only one that's available) running on a physical sun box (this one is not a virtual system).
    I have tried many combinations. I tried creating the files on the Linux VM using dd, as root and as oracle. I tried creating them on the Solaris side using mkfile. No matter what I try, I always get the same issue.
    I tried to follow this document: Creating Files on a NAS Device for Use with ASM (http://download.oracle.com/docs/html/B10811_05/app_nas.htm#BCFHCIEC)
    But nothing seems to work.
    Any idea, recommendations?
    Thanks,
    Laurent.

    Hi buddy,
    I guess Metalink note 731775.1 should help you.
    In fact the procedure is:
    - Create the disk devices on your NFS directory (using dd)
    - Adjust the permissions on those files (in this case, oracle:dba)
    - Set ASM_DISKSTRING on the ASM instance so that the NFS directory is in the discovery path
    - Verify that they are visible in the v$asm_disk view
    - Create the diskgroup using the NFS disks you have created (see the sketch just below this list).
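    A rough sketch of those steps, assuming a 2 GB file, the directory from the original post, and a diskgroup name of DG1 (sizes and names are only examples; the NFS filesystem must also be mounted with Oracle's recommended NFS mount options):
    # dd if=/dev/zero of=/mnt/asm_dsks/dg1/disk1 bs=1M count=2048
    # chown oracle:dba /mnt/asm_dsks/dg1/disk1
    # chmod 660 /mnt/asm_dsks/dg1/disk1
    SQL> alter system set asm_diskstring='/mnt/asm_dsks/dg1/*' scope=both;
    SQL> select path, header_status from v$asm_disk;
    SQL> create diskgroup DG1 external redundancy disk '/mnt/asm_dsks/dg1/disk1';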
    Hope it helps,
    Cerreia

  • Please Help - When I try to add ASM Disk to ASM Diskgroup it crashes Server

    We are using a Pillar SAN, have LUNs created, and are using the following multipath devices. (I'm a DBA more than anything else, but I am rather familiar with Linux; SAN hardware, not so much.)
    Device Size Mount Point
    /dev/dpda1 11G /u01
    The Above device is working fine... Below are the ASM Disks being Created
    Device Size Oracle ASM Disk Name
    /dev/dpdb1 198G ORCL1
    /dev/dpdc1 21G SIRE1
    /dev/dpdd1 21G CART1
    /dev/dpde1 21G SRTS1
    /dev/dpdf1 21G CRTT1
    I try to create to the first ASM Disk
    /etc/init.d/oracleasm createdisk ORCL1 /dev/dpdb1
    Marking disk "ORCL1" as an ASM disk: [FAILED]
    So I check the oracleasm log:
    #cat /var/log/oracleasm
    Device "/dev/dpdb1" is not a partition
    I did some research and found that this is a common problem with multipath devices; to work around it you have to use asmtool:
    # /usr/sbin/asmtool -C -l /dev/oracleasm -n ORCL1 -s /dev/dpdb1 -a force=yes
    asmtool: Device "/dev/dpdb1" is not a partition
    asmtool: Continuing anyway
    now I scan and list the disks
    # /etc/init.d/oracleasm scandisks
    Scanning the system for Oracle ASMLib disks: [  OK  ]
    # /etc/init.d/oracleasm listdisks
    ORCL1
    Here is whats going on in /var/log/messages when I run the oracleasm scandisks command
    # date
    Fri Aug 14 13:51:58 MST 2009
    # /etc/init.d/oracleasm scandisks
    Scanning the system for Oracle ASMLib disks: [  OK  ]
    cat /var/log/messages | grep "Aug 14 13:5"
    Aug 14 13:52:06 seer kernel: dpdb: dpdb1
    Aug 14 13:52:06 seer kernel: dpdc: dpdc1
    Aug 14 13:52:06 seer kernel: dpdd: dpdd1
    Aug 14 13:52:06 seer kernel: dpde: dpde1
    Aug 14 13:52:06 seer kernel: dpdf: dpdf1
    Aug 14 13:52:06 seer kernel: dpdg: dpdg1
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: printk: 30 messages suppressed.
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: sda : READ CAPACITY failed.
    Aug 14 13:52:06 seer kernel: sda : status=1, message=00, host=0, driver=08
    Aug 14 13:52:06 seer kernel: sd: Current: sense key: Illegal Request
    Aug 14 13:52:06 seer kernel: Add. Sense: Logical unit not supported
    Aug 14 13:52:06 seer kernel:
    Aug 14 13:52:06 seer kernel: sda: test WP failed, assume Write Enabled
    Aug 14 13:52:06 seer kernel: sda: asking for cache data failed
    Aug 14 13:52:06 seer kernel: sda: assuming drive cache: write through
    Aug 14 13:52:06 seer kernel: sda:end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: Dev sda: unable to read RDB block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: unable to read partition table
    Aug 14 13:52:06 seer kernel: SCSI device sdb: 21502464 512-byte hdwr sectors (11009 MB)
    Aug 14 13:52:06 seer kernel: sdb: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdb: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdb: sdb1
    Aug 14 13:52:06 seer kernel: SCSI device sdc: 421476864 512-byte hdwr sectors (215796 MB)
    Aug 14 13:52:06 seer kernel: sdc: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdc: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdc: sdc1
    Aug 14 13:52:06 seer kernel: SCSI device sdd: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdd: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdd: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdd: sdd1
    Aug 14 13:52:06 seer kernel: SCSI device sde: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sde: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sde: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sde: sde1
    Aug 14 13:52:06 seer kernel: SCSI device sdf: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdf: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdf: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdf: sdf1
    Aug 14 13:52:06 seer kernel: SCSI device sdg: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdg: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdg: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdg: sdg1
    Aug 14 13:52:06 seer kernel: SCSI device sdh: 2107390464 512-byte hdwr sectors (1078984 MB)
    Aug 14 13:52:06 seer kernel: sdh: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdh: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdh: sdh1
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdi, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sdi, logical block 0
    Aug 14 13:52:06 seer kernel: sdi : READ CAPACITY failed.
    Aug 14 13:52:06 seer kernel: sdi : status=1, message=00, host=0, driver=08
    Aug 14 13:52:06 seer kernel: sd: Current: sense key: Illegal Request
    Aug 14 13:52:06 seer kernel: Add. Sense: Logical unit not supported
    Aug 14 13:52:06 seer kernel:
    Aug 14 13:52:06 seer kernel: sdi: test WP failed, assume Write Enabled
    Aug 14 13:52:06 seer kernel: sdi: asking for cache data failed
    Aug 14 13:52:06 seer kernel: sdi: assuming drive cache: write through
    Aug 14 13:52:06 seer kernel: sdi:end_request: I/O error, dev sdi, sector 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdi, sector 0
    Aug 14 13:52:06 seer last message repeated 4 times
    Aug 14 13:52:06 seer kernel: Dev sdi: unable to read RDB block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdi, sector 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdi, sector 0
    Aug 14 13:52:06 seer kernel: unable to read partition table
    Aug 14 13:52:06 seer kernel: SCSI device sdj: 21502464 512-byte hdwr sectors (11009 MB)
    Aug 14 13:52:06 seer kernel: sdj: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdj: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdj: sdj1
    Aug 14 13:52:06 seer kernel: SCSI device sdk: 421476864 512-byte hdwr sectors (215796 MB)
    Aug 14 13:52:06 seer kernel: sdk: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdk: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdk: sdk1
    Aug 14 13:52:06 seer kernel: SCSI device sdl: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdl: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdl: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdl: sdl1
    Aug 14 13:52:06 seer kernel: SCSI device sdm: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdm: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdm: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdm: sdm1
    Aug 14 13:52:06 seer kernel: SCSI device sdn: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdn: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdn: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdn: sdn1
    Aug 14 13:52:06 seer kernel: SCSI device sdo: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdo: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdo: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdo: sdo1
    Aug 14 13:52:06 seer kernel: SCSI device sdp: 2107390464 512-byte hdwr sectors (1078984 MB)
    Aug 14 13:52:06 seer kernel: sdp: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdp: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdp: sdp1
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdq, sector 0
    Aug 14 13:52:06 seer kernel: sdq : READ CAPACITY failed.
    Aug 14 13:52:06 seer kernel: sdq : status=1, message=00, host=0, driver=08
    Aug 14 13:52:06 seer kernel: sd: Current: sense key: Illegal Request
    Aug 14 13:52:06 seer kernel: Add. Sense: Logical unit not supported
    Aug 14 13:52:06 seer kernel:
    Aug 14 13:52:06 seer kernel: sdq: test WP failed, assume Write Enabled
    Aug 14 13:52:06 seer kernel: sdq: asking for cache data failed
    Aug 14 13:52:06 seer kernel: sdq: assuming drive cache: write through
    Aug 14 13:52:06 seer kernel: sdq:end_request: I/O error, dev sdq, sector 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdq, sector 0
    Aug 14 13:52:06 seer last message repeated 5 times
    Aug 14 13:52:06 seer kernel: Dev sdq: unable to read RDB block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdq, sector 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdq, sector 0
    Aug 14 13:52:06 seer kernel: unable to read partition table
    Aug 14 13:52:06 seer kernel: SCSI device sdr: 21502464 512-byte hdwr sectors (11009 MB)
    Aug 14 13:52:06 seer kernel: sdr: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdr: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdr: sdr1
    Aug 14 13:52:06 seer kernel: SCSI device sds: 421476864 512-byte hdwr sectors (215796 MB)
    Aug 14 13:52:06 seer kernel: sds: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sds: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sds: sds1
    Aug 14 13:52:06 seer kernel: SCSI device sdt: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdt: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdt: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdt: sdt1
    Aug 14 13:52:06 seer kernel: SCSI device sdu: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdu: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdu: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdu: sdu1
    Aug 14 13:52:06 seer kernel: SCSI device sdv: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdv: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdv: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdv: sdv1
    Aug 14 13:52:06 seer kernel: SCSI device sdw: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdw: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdw: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdw: sdw1
    Aug 14 13:52:06 seer kernel: SCSI device sdx: 2107390464 512-byte hdwr sectors (1078984 MB)
    Aug 14 13:52:06 seer kernel: sdx: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdx: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdx: sdx1
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdy, sector 0
    Aug 14 13:52:06 seer kernel: sdy : READ CAPACITY failed.
    Aug 14 13:52:06 seer kernel: sdy : status=1, message=00, host=0, driver=08
    Aug 14 13:52:06 seer kernel: sd: Current: sense key: Illegal Request
    Aug 14 13:52:06 seer kernel: Add. Sense: Logical unit not supported
    Aug 14 13:52:06 seer kernel:
    Aug 14 13:52:06 seer kernel: sdy: test WP failed, assume Write Enabled
    Aug 14 13:52:06 seer kernel: sdy: asking for cache data failed
    Aug 14 13:52:06 seer kernel: sdy: assuming drive cache: write through
    Aug 14 13:52:06 seer kernel: sdy:end_request: I/O error, dev sdy, sector 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdy, sector 0
    Aug 14 13:52:06 seer last message repeated 5 times
    Aug 14 13:52:06 seer kernel: Dev sdy: unable to read RDB block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdy, sector 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdy, sector 0
    Aug 14 13:52:06 seer kernel: unable to read partition table
    Aug 14 13:52:06 seer kernel: SCSI device sdz: 21502464 512-byte hdwr sectors (11009 MB)
    Aug 14 13:52:06 seer kernel: sdz: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdz: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdz: sdz1
    Aug 14 13:52:06 seer kernel: SCSI device sdaa: 421476864 512-byte hdwr sectors (215796 MB)
    Aug 14 13:52:06 seer kernel: sdaa: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdaa: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdaa: sdaa1
    Aug 14 13:52:06 seer kernel: SCSI device sdab: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdab: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdab: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdab: sdab1
    Aug 14 13:52:06 seer kernel: SCSI device sdac: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdac: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdac: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdac: sdac1
    Aug 14 13:52:06 seer kernel: SCSI device sdad: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdad: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdad: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdad: sdad1
    Aug 14 13:52:06 seer kernel: SCSI device sdae: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdae: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdae: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdae: sdae1
    Aug 14 13:52:06 seer kernel: SCSI device sdaf: 2107390464 512-byte hdwr sectors (1078984 MB)
    Aug 14 13:52:06 seer kernel: sdaf: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdaf: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdaf: sdaf1
    Aug 14 13:52:06 seer kernel: scsi_wr_disk: unknown partition table
    Aug 14 13:52:07 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:07 seer kernel: end_request: I/O error, dev sdi, sector 0
    Aug 14 13:52:07 seer kernel: end_request: I/O error, dev sdq, sector 0
    Aug 14 13:52:07 seer kernel: end_request: I/O error, dev sdy, sector 0
    Here's some extra info:
    # /sbin/blkid | grep asm
    /dev/sdc1: LABEL="ORCL1" TYPE="oracleasm"
    /dev/sdk1: LABEL="ORCL1" TYPE="oracleasm"
    /dev/sds1: LABEL="ORCL1" TYPE="oracleasm"
    /dev/sdaa1: LABEL="ORCL1" TYPE="oracleasm"
    /dev/dpdb1: LABEL="ORCL1" TYPE="oracleasm"
    I have learned that by excluding devices in the oracleasm configuration file I eliminate those I/O errors in /var/log/messages
    # cat /etc/sysconfig/oracleasm
    # This is a configuration file for automatic loading of the Oracle
    # Automatic Storage Management library kernel driver. It is generated
    # By running /etc/init.d/oracleasm configure. Please use that method
    # to modify this file
    # ORACLEASM_ENABLED: 'true' means to load the driver on boot.
    ORACLEASM_ENABLED=true
    # ORACLEASM_UID: Default user owning the /dev/oracleasm mount point.
    ORACLEASM_UID=oracle
    # ORACLEASM_GID: Default group owning the /dev/oracleasm mount point.
    ORACLEASM_GID=oinstall
    # ORACLEASM_SCANBOOT: 'true' means scan for ASM disks on boot.
    ORACLEASM_SCANBOOT=true
    # ORACLEASM_SCANORDER: Matching patterns to order disk scanning
    ORACLEASM_SCANORDER="dp sd"
    # ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
    ORACLEASM_SCANEXCLUDE="sdc sdk sds sdaa sda"
    # ls -la /dev/oracleasm/disks/
    total 0
    drwxr-xr-x 1 root root 0 Aug 14 10:47 .
    drwxr-xr-x 4 root root 0 Aug 13 15:32 ..
    brw-rw---- 1 oracle oinstall 251, 33 Aug 14 13:46 ORCL1
    Now I can go into dbca to create the ASM instance, which starts up fine. I create a new diskgroup, see ORCL1 as a provisioned ASM disk, select it, and click OK.
    CRASH!!! The box hangs and I have to reboot it.
    I have gotten myself to exactly the same point, right before clicking OK, and here is what is in the ASM alert log so far:
    Fri Aug 14 14:42:02 2009
    Starting ORACLE instance (normal)
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Picked latch-free SCN scheme 3
    Using LOG_ARCHIVE_DEST_1 parameter default value as /u01/app/oracle/product/11.1.0/db_1/dbs/arch
    Autotune of undo retention is turned on.
    IMODE=BR
    ILAT =0
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    Starting up ORACLE RDBMS Version: 11.1.0.6.0.
    Using parameter settings in server-side spfile /u01/app/oracle/product/11.1.0/db_1/dbs/spfile+ASM.ora
    System parameters with non-default values:
    large_pool_size = 12M
    instance_type = "asm"
    diagnostic_dest = "/u01/app/oracle"
    Fri Aug 14 14:42:04 2009
    PMON started with pid=2, OS id=3300
    Fri Aug 14 14:42:04 2009
    VKTM started with pid=3, OS id=3302 at elevated priority
    VKTM running at (20)ms precision
    Fri Aug 14 14:42:04 2009
    DIAG started with pid=4, OS id=3306
    Fri Aug 14 14:42:04 2009
    PSP0 started with pid=5, OS id=3308
    Fri Aug 14 14:42:04 2009
    DSKM started with pid=6, OS id=3310
    Fri Aug 14 14:42:04 2009
    DIA0 started with pid=7, OS id=3312
    Fri Aug 14 14:42:04 2009
    MMAN started with pid=8, OS id=3314
    Fri Aug 14 14:42:04 2009
    DBW0 started with pid=9, OS id=3316
    Fri Aug 14 14:42:04 2009
    LGWR started with pid=6, OS id=3318
    Fri Aug 14 14:42:04 2009
    CKPT started with pid=10, OS id=3320
    Fri Aug 14 14:42:04 2009
    SMON started with pid=11, OS id=3322
    Fri Aug 14 14:42:04 2009
    RBAL started with pid=12, OS id=3324
    Fri Aug 14 14:42:04 2009
    GMON started with pid=13, OS id=3326
    ORACLE_BASE from environment = /u01/app/oracle
    Fri Aug 14 14:42:04 2009
    SQL> ALTER DISKGROUP ALL MOUNT
    Fri Aug 14 14:42:41 2009
    At this point I don't want to click OK until I am sure someone is in the office to reboot the machine manually if I hang it again. I hung it twice yesterday; however, I did not have the devices excluded in the oracleasm configuration file as I do now.
    Edited by: user10193377 on Aug 14, 2009 3:23 PM
    Well, clicking OK hung it again, and I am waiting to get back into it to see what new information might be gleaned.
    Does anyone have any ideas on what to check or where to look? I will update more once I can log back in.

    Hi Mark,
    It looks like something is not correct with your raw device partition based on the error messages:
    Aug 14 13:52:06 seer kernel: Add. Sense: Logical unit not supported
    Aug 14 13:52:06 seer kernel:
    Aug 14 13:52:06 seer kernel: sda: test WP failed, assume Write Enabled
    Aug 14 13:52:06 seer kernel: sda: asking for cache data failed
    Aug 14 13:52:06 seer kernel: sda: assuming drive cache: write through
    Aug 14 13:52:06 seer kernel: sda:end_request: I/O error, dev sda, sector 0
    It could be a number of things. I would check with your vendor and Oracle support to see if the multipath software driver is supported and if there is a potential workaround for ASM. Sorry this is not quite the solution, but it's what jumps to mind based on issues with multipath software and storage vendors for ASM with Linux and Oracle. Have you checked the validation matrix available on Metalink?
    Cheers,
    Ben

  • Shrinking ASM disk

    I just tried to resize an ASM disk and although the feedback was 'successful', there doesn't appear to have been any change.
    I was attempting to shrink disk DATA_0001 from 200G to 100G. Am I missing something obvious?
    SQL> select group_number, name, path, os_mb, total_mb, free_mb from v$asm_disk;
    GROUP_NUMBER NAME                 PATH                                OS_MB   TOTAL_MB    FREE_MB
              0                      /dev/iscsi/rman11                   20489          0          0
              0                      /dev/iscsi/rmanB11                 102398          0          0
              0                      /dev/iscsi/rman1                    20490          0          0
              0                      /dev/iscsi/vote3                      300          0          0
              0                      /dev/iscsi/vote1                      300          0          0
              0                      /dev/iscsi/rmanP11                 204805          0          0
              0                      /dev/iscsi/vote2                      300          0          0
              0                      /dev/iscsi/rmanP1                  204810          0          0
              0                      /dev/iscsi/rmanB1                  102405          0          0
              1 DATA_0000            /dev/iscsi/db1                      10245      10245      10109
              2 FRA_0000             /dev/iscsi/flshbk1                  20490      20490      20465
    GROUP_NUMBER NAME                 PATH                                OS_MB   TOTAL_MB    FREE_MB
              2 FRA_0001             /dev/iscsi/flshbkR1                409605     409605     409262
              1 DATA_0001            /dev/iscsi/dbR1                    204810     204810     202297
    13 rows selected.
    SQL> alter diskgroup data resize disk 'data_0001' size 100g;
    Diskgroup altered.
    SQL> select group_number, name, path, os_mb, total_mb, free_mb from v$asm_disk;
    GROUP_NUMBER NAME                 PATH                                OS_MB   TOTAL_MB    FREE_MB
              0                      /dev/iscsi/rman11                   20489          0          0
              0                      /dev/iscsi/rmanB11                 102398          0          0
              0                      /dev/iscsi/rman1                    20490          0          0
              0                      /dev/iscsi/vote3                      300          0          0
              0                      /dev/iscsi/vote1                      300          0          0
              0                      /dev/iscsi/rmanP11                 204805          0          0
              0                      /dev/iscsi/vote2                      300          0          0
              0                      /dev/iscsi/rmanP1                  204810          0          0
              0                      /dev/iscsi/rmanB1                  102405          0          0
              1 DATA_0000            /dev/iscsi/db1                      10245      10245      10004
              2 FRA_0000             /dev/iscsi/flshbk1                  20490      20490      20465
    GROUP_NUMBER NAME                 PATH                                OS_MB   TOTAL_MB    FREE_MB
              2 FRA_0001             /dev/iscsi/flshbkR1                409605     409605     409262
              1 DATA_0001            /dev/iscsi/dbR1                    204810     204810     202402
    13 rows selected.
    The free_mb seems to have increased, but otherwise I can't see the effect of my change. Maybe I'm looking in the wrong place??
    I tried restarting the ASM instance but it made no difference.
    After resizing the disk in ASM I shrunk the disk volume in our storage array. ASM was of course down at the time.
    When I attempted to restart ASM I saw this ...
    SQL> startup
    ASM instance started
    Total System Global Area  283930624 bytes
    Fixed Size                  2158992 bytes
    Variable Size             256605808 bytes
    ASM Cache                  25165824 bytes
    ORA-15032: not all alterations performed
    ORA-15036: disk '/dev/iscsi/dbR1' is truncated
    None of my diskgroups are mounted ...
    SQL> select group_number, name, state from v$asm_diskgroup;
    GROUP_NUMBER NAME            STATE
              0 DATA            DISMOUNTED
              0 FRA             DISMOUNTED
    Here are the messages from the ASM instance alert log ...
    SQL> ALTER DISKGROUP ALL MOUNT
    NOTE: cache registered group DATA number=1 incarn=0x5f5e3343
    NOTE: cache began mount (not first) of group DATA number=1 incarn=0x5f5e3343
    NOTE: cache registered group FRA number=2 incarn=0x5f5e3344
    NOTE: cache began mount (not first) of group FRA number=2 incarn=0x5f5e3344
    WARNING::ASMLIB library not found. See trace file for details.
    NOTE: Assigning number (1,0) to disk (/dev/iscsi/db1)
    NOTE: cache dismounting group 1/0x5F5E3343 (DATA)
    NOTE: dbwr not being msg'd to dismount
    NOTE: lgwr not being msg'd to dismount
    NOTE: cache dismounted group 1/0x5F5E3343 (DATA)
    NOTE: cache ending mount (fail) of group DATA number=1 incarn=0x5f5e3343
    kfdp_dismount(): 1
    kfdp_dismountBg(): 1
    NOTE: De-assigning number (1,0) from disk (/dev/iscsi/db1)
    ERROR: diskgroup DATA was not mounted
    NOTE: Assigning number (2,1) to disk (/dev/iscsi/flshbkR1)
    NOTE: Assigning number (2,0) to disk (/dev/iscsi/flshbk1)
    NOTE: cache dismounting group 2/0x5F5E3344 (FRA)
    NOTE: dbwr not being msg'd to dismount
    NOTE: lgwr not being msg'd to dismount
    NOTE: cache dismounted group 2/0x5F5E3344 (FRA)
    NOTE: cache ending mount (fail) of group FRA number=2
    incarn=0x5f5e3344
    kfdp_dismount(): 2
    kfdp_dismountBg(): 2
    NOTE: De-assigning number (2,0) from disk (/dev/iscsi/flshbk1)
    NOTE: De-assigning number (2,1) from disk (/dev/iscsi/flshbkR1)
    ERROR: diskgroup FRA was not mounted
    ORA-15032: not all alterations performed
    ORA-15036: disk '/dev/iscsi/dbR1' is truncated
    ERROR: ALTER DISKGROUP ALL MOUNT
    Any clues?
    Thanks,
    Steve

    Thanks Markus. I changed the size of the volume back to the original and was able to restart the ASM instances on both nodes. I confirmed it saw the size as the original 200G.
    I then shut down ASM on the second node and issued the alter on the first node. Here's what happened ...
    SQL> select group_number, name, path, os_mb, total_mb, free_mb from v$asm_disk order by name;
    GROUP_NUMBER NAME       PATH                                OS_MB   TOTAL_MB    FREE_MB
               1 DATA_0000  /dev/iscsi/db1                      10245      10245      10004
               1 DATA_0001  /dev/iscsi/dbR1                    204810     204810     202402
               2 FRA_0000   /dev/iscsi/flshbk1                  20490      20490      20465
               2 FRA_0001   /dev/iscsi/flshbkR1                409605     409605     409262
               0            /dev/iscsi/rman1                    20490          0          0
               0            /dev/iscsi/rmanP1                  204810          0          0
               0            /dev/iscsi/rmanB1                  102405          0          0
               0            /dev/iscsi/rmanP11                 204805          0          0
               0            /dev/iscsi/rmanB11                 102398          0          0
               0            /dev/iscsi/rman11                   20489          0          0
    10 rows selected.
    SQL> alter diskgroup data resize disk 'data_0001' size 100g;
    alter diskgroup data resize disk 'data_0001' size 100g
    ERROR at line 1:
    ORA-15032: not all alterations performed
    ORA-15130: diskgroup "DATA" is being dismounted
    ORA-15066: offlining disk "DATA_0001" may result in a data loss
    Here are the details from the alert log ...
    SQL> alter diskgroup data resize disk 'data_0001' size 100g
    NOTE: requesting all-instance membership refresh for group=1
    WARNING: cache read a corrupted block gn=1 dsk=1 blk=257 from disk 1
    NOTE: a corrupted block was dumped to /var/oracle/diag/asm/+asm/+ASM1/trace/+ASM1_ora_5295.trc
    ERROR: cache failed to read gn=1 dsk=1  blk=257 from disk(s): 1
    ORA-15196: invalid ASM block header [kfc.c:9133] [endian_kfbh] [2147483649] [257] [0 != 1]
    System State dumped to trace file /var/oracle/diag/asm/+asm/+ASM1/trace/+ASM1_ora_5295.trc
    NOTE: cache initiating offline of disk 1  group 1
    WARNING: initiating offline of disk 1.3688884620 (DATA_0001) with mask 0x7e
    NOTE: initiating PST update: grp = 1, dsk = 1, mode = 0x15
    kfdp_updateDsk(): 14
    Thu May 07 15:45:38 2009
    kfdp_updateDskBg(): 14
    ERROR: too many offline disks in PST (grp 1)
    Thu May 07 15:45:38 2009
    NOTE: halting all I/Os to diskgroup DATA
    Thu May 07 15:45:38 2009
    SQL> alter diskgroup DATA dismount force
    NOTE: active pin found: 0x0x6ddf6060
    NOTE: active pin found: 0x0x6ddf6168
    ERROR: ORA-15130 signalled during resize of diskgroup DATA
    Thu May 07 15:45:38 2009
    NOTE: membership refresh pending for group 1/0xdc0f1999 (DATA)
    kfdp_query(): 15
    kfdp_queryBg(): 15
    SUCCESS: refreshed membership for 1/0xdc0f1999 (DATA)
    ERROR: ORA-15130 thrown in RBAL for group number 1
    Errors in file /var/oracle/diag/asm/+asm/+ASM1/trace/+ASM1_rbal_5202.trc:
    ORA-15130: diskgroup "DATA" is being dismounted
    Errors in file /var/oracle/diag/asm/+asm/+ASM1/trace/+ASM1_rbal_5202.trc:
    ORA-15130: diskgroup "DATA" is being dismounted
    ORA-15032: not all alterations performed
    ORA-15130: diskgroup "DATA" is being dismounted
    ORA-15066: offlining disk "DATA_0001" may result in a data loss
    ERROR: alter diskgroup data resize disk 'data_0001' size 100g
    NOTE: cache dismounting group 1/0xDC0F1999 (DATA)
    NOTE: dbwr not being msg'd to dismount
    Thu May 07 15:45:41 2009
    Dirty detach reconfiguration started (old inc 6, new inc 6)
    List of nodes:
    0
    Global Resource Directory partially frozen for dirty detach
    * dirty detach - domain 1 invalid = TRUE
    10 GCS resources traversed, 0 cancelled
    Dirty Detach Reconfiguration complete
    Thu May 07 15:45:41 2009
    freeing rdom 1
    Thu May 07 15:45:41 2009
    WARNING: dirty detached from domain 1
    NOTE: cache dismounted group 1/0xDC0F1999 (DATA)
    kfdp_dismount(): 16
    kfdp_dismountBg(): 16
    NOTE: De-assigning number (1,0) from disk (/dev/iscsi/db1)
    NOTE: De-assigning number (1,1) from disk (/dev/iscsi/dbR1)
    SUCCESS: diskgroup DATA was dismounted
    SUCCESS: alter diskgroup DATA dismount force
    ERROR: PST-initiated MANDATORY DISMOUNT of group DATA
    Thu May 07 15:46:06 2009
    SQL> alter diskgroup data resize disk 'data_0001' size 100g
    ORA-15032: not all alterations performed
    ORA-15001: diskgroup "DATA" does not exist or is not mounted
    ERROR: alter diskgroup data resize disk 'data_0001' size 100g
    Looks like my earlier attempt has indeed screwed something up, so even though the instances start OK and mount the diskgroup, I think there's a fair chance something would go splat sooner rather than later.
    As this is a test database, I think I'll cut my losses and rebuild the diskgroup then restore it from backup. I'm assuming the effort involved in correcting the corruption will be greater than rebuilding and restoring.
    Do you agree with this?
    Then I'll try again and hopefully get it right.
    Thanks for your help!!!
    Steve
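    For the record, the order of operations that avoids this situation: resize inside ASM first, let the rebalance finish, confirm the new TOTAL_MB, and only then shrink the LUN in the storage array. Roughly (disk and group names as in this thread):
    SQL> alter diskgroup data resize disk 'DATA_0001' size 100g rebalance power 4;
    SQL> select group_number, operation, state, est_minutes from v$asm_operation;   -- wait until this returns no rows
    SQL> select name, os_mb, total_mb, free_mb from v$asm_disk where name = 'DATA_0001';
    -- once the rebalance is done, TOTAL_MB (not OS_MB) is what ASM will actually use;
    -- never shrink the LUN in the array below that value.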

  • ASM, Disk Hung: how remove??

    Hi guys, I have a big issue to solve: I work on Solaris OS and Oracle 10.0.2 with ASM.
    I can't drop a newly added disk (no tablespaces have been extended onto it yet) from diskgroup DGFC.
    This is my situation:
    select name, state FROM V$ASM_DISK where name like 'DGFC%';
    NAME STATE
    DGFC_0000 NORMAL
    DGFC_0001 NORMAL
    DGFC_0002 NORMAL
    DGFC_0003 NORMAL
    DGFC_0004 NORMAL
    DGFC_0005 NORMAL
    DGFC_0006 NORMAL
    DGFC_0007 NORMAL
    DGFC_0008 NORMAL <---- This is the last disk added (the one I would like to remove)
    Maybe this could be important: DGFC_0008 was dropped from another diskgroup and its header_status changed from CANDIDATE to FORMER; then I added it to the DGFC diskgroup in the classic way.
    After issuing ALTER DISKGROUP DGFC DROP DISK DGFC_0008; the situation changes to
    NAME STATE
    DGFC_0008 HUNG
    and log file alert_+ASM.log contains:
    Fri Jan 14 21:18:08 2011
    SQL> ALTER DISKGROUP DGFC DROP DISK DGFC_0008
    Fri Jan 14 21:18:08 2011
    NOTE: PST update: grp = 1
    NOTE: requesting all-instance PST refresh for group=1
    Fri Jan 14 21:18:08 2011
    NOTE: PST refresh pending for group 1/0x22e8f203 (DGFC)
    SUCCESS: refreshed PST for 1/0x22e8f203 (DGFC)
    Fri Jan 14 21:18:13 2011
    NOTE: starting rebalance of group 1/0x22e8f203 (DGFC) at power 1
    Starting background process ARB0
    ARB0 started with pid=12, OS id=5071
    Fri Jan 14 21:18:13 2011
    NOTE: assigning ARB0 to group 1/0x22e8f203 (DGFC)
    Fri Jan 14 21:18:13 2011
    WARNING: allocation failure on disk DGFC_0000 for file 3 xnum 30
    Fri Jan 14 21:18:13 2011
    Errors in file /users/app/oracle/admin/+ASM/bdump/+asm_arb0_5071.trc:
    ORA-15041: diskgroup space exhausted
    Fri Jan 14 21:18:13 2011
    NOTE: stopping process ARB0
    Fri Jan 14 21:18:16 2011
    WARNING: rebalance not completed for group 1/0x22e8f203 (DGFC)
    Fri Jan 14 21:18:16 2011
    SUCCESS: rebalance completed for group 1/0x22e8f203 (DGFC)
    NOTE: PST update: grp = 1
    WARNING: grp 1 disk DGFC_0008 still has contents (45 AUs)
    NOTE: PST update: grp = 1
    This is /users/app/oracle/admin/+ASM/bdump/+asm_arb0_5071.trc:
    Instance name: +ASM
    Redo thread mounted by this instance: 0 <none>
    Oracle process number: 12
    Unix process pid: 5071, image: [email protected] (ARB0)
    *** SERVICE NAME:() 2011-01-14 21:18:13.067
    *** SESSION ID:(39.20) 2011-01-14 21:18:13.067
    ARB0 relocating file +DGFC.3.1 (7 entries)
    ORA-15041: diskgroup space exhausted
    Can anyone help? Thanks a lot.

    Thanks a lot Levi,
    The answer was in the log: diskgroup space exhausted. The original diskgroup was quite full and ASM could not rebalance the data off the new disk correctly (what data, I wonder?!).
    I solved it by adding a new disk (16 GB) and altering diskgroup DGFC. With the new space available, the disk was dropped correctly.
    Thanks a lot
    Bye
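    In command form, the fix amounts to something like this (the device path and new disk name are only examples): add space, let the rebalance run, and the pending drop completes on its own once it has somewhere to move the remaining AUs.
    SQL> alter diskgroup DGFC add disk '/dev/rdsk/c3t5d0s6' name DGFC_0009;
    SQL> select group_number, operation, state, est_minutes from v$asm_operation;   -- wait for the rebalance to finish
    SQL> select name, state from v$asm_disk where name like 'DGFC%';                -- DGFC_0008 disappears once the drop completes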

  • Want to move datafiles, controlfiles, redolog on new ASM Disks (11gR2 RAC)

    Hi Guys,
    Setup: Two Node 11gR2 (11.2.0.1) RAC on RHEL 5.4
    Existing disks are from Old SAN & New Disks are from New SAN.
    Can I move all datafiles (+DATA), controlfiles (+CTRL), and redo logs (+REDO) onto the new ASM disks by adding the new disks to the same diskgroups and dropping the older disks from the existing diskgroups, taking advantage of the ASM rebalancing feature?
    1) Add the required disks to the DATA diskgroup:
    ALTER DISKGROUP DATA ADD DISK
    '/dev/oracleasm/disks/NEWDATA3' NAME NEWDATA_0003,
    '/dev/oracleasm/disks/NEWDATA4' NAME NEWDATA_0004,
    '/dev/oracleasm/disks/NEWDATA5' NAME NEWDATA_0005
    REBALANCE POWER 11;
    Check rebalance status from v$ASM_OPERATION.
    2) When rebalance completes, drop the old disks.
    ALTER DISKGROUP DATA DROP DISK
    NEWDATA_0000,
    NEWDATA_0001
    REBALANCE POWER 11;
    Check rebalance status from v$ASM_OPERATION.
    3) Do the same for the redo log and controlfile diskgroups.
    I hope I can do this activity even while the database is up. Is there any possibility of database block corruption, or is it necessary to perform the above steps with the database down?
    Your quick responses would be appreciated; it's an urgent requirement. Thanks.
    Regards,
    Manish

    Hi Manish,
    Yes, you can do that by adding the new disks to the existing diskgroups and dropping the old disks. The good thing is that this can be done online; however, you need to make sure the rebalance power fits your business window: a higher rebalance power makes the rebalance complete faster, but it will also consume more resources.
    Cheers
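    To keep an eye on the rebalance while it runs (and to confirm it has finished before issuing the DROP DISK step), something along these lines works:
    SQL> select inst_id, group_number, operation, state, power, sofar, est_work, est_minutes from gv$asm_operation;
    -- no rows returned means no rebalance is running on either node
    SQL> select name, path, state from v$asm_disk where group_number = 1;   -- assuming DATA is group 1; disks being dropped show state DROPPING until done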

  • ORA-15042: ASM disk "2" is missing from group number "1"

    Hi,
    I'm working on an Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production With the Automatic Storage Management option.
    Into the ASM I had 3 diskgroups:
    - ARCHIVELOG (4 disks)
    - ONLINELOG (1 disks)
    - DATA (10 disks)
    When I try to startup the ASM instance I got:
    ORA-15042: ASM disk "2" is missing from group number "1"
    The diskgroup won't be mounted.
    I would like to remove that disk and later add a new one.
    How can I do that?
    I'm not able to mount the ARCHIVELOG diskgroup.
    I tried the command
    SQL> alter diskgroup archivelog drop disk ARCH3 force;
    alter diskgroup archivelog drop disk ARCH3 force
    ERROR at line 1:
    ORA-15032: not all alterations performed
    ORA-15001: diskgroup "ARCHIVELOG" does not exist or is not mountedThanks in advance,
    Samuel
    Edited by: Samuel Rabini on Jan 10, 2012 4:11 PM

    As that database is on AWS, I tried this:
    - drop diskgroup archivelog
    - detach of those 4 disks
    - create new 4 disks
    - attach new disks
    - assign those disks to ASM with the oracleasm utility
    - create diskgroup archivelog
    It worked.
    But it only worked because I was on AWS, and more because it was the ARCHIVELOG diskgroup.
    What would I have had to do if it had been the DATA diskgroup?
    Thanks
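    In command form, the recreation step looks roughly like this (device names and disk labels are only examples; the detach/attach of the volumes is done in the AWS console or CLI first):
    # /etc/init.d/oracleasm createdisk ARCH1 /dev/xvdf1
    # /etc/init.d/oracleasm createdisk ARCH2 /dev/xvdg1
    # /etc/init.d/oracleasm scandisks
    SQL> create diskgroup ARCHIVELOG external redundancy disk 'ORCL:ARCH1', 'ORCL:ARCH2';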

  • Change ASM DISK NAME in 11.2 version

    Hi,
    Is it possible to change an ASM disk name, for example in diskgroup DATA01? I added a disk without a NAME clause, and a system-chosen name was allocated. Can I set it to, e.g., ORCL:DATA09? Currently it is DATA09_009.
    Please suggest your views.
    Thanks a lot.
    Best Regards

    You can change the label of a disk using oracleasm renamedisk, although Oracle discourages it.
    Below I change the labels of the disks of diskgroup CLONE from CLONE1 to CLONE1a and from CLONE2 to CLONE2a.
    SQL> col disk_number for 99
    SQL> col name for a15
    SQL> col label for a15
    SQL> col path for a30
    SQL> select disk_number,name,label,path from v$asm_disk;
    0 DATA CLONE1 ORCL:CLONE1
    1 LOGS CLONE2 ORCL:CLONE2
    ASMCMD> lsdsk -G CLONE
    Path
    ORCL:CLONE1
    ORCL:CLONE2
    ASMCMD> umount CLONE
    [root@otest]# /etc/init.d/oracleasm force-renamedisk CLONE1 CLONE1a
    Renaming disk "CLONE1" to "CLONE1a": [  OK  ]
    [root@otest]# /etc/init.d/oracleasm force-renamedisk CLONE2 CLONE2a
    Renaming disk "CLONE2" to "CLONE2a": [  OK  ]
    ASMCMD> mount CLONE
    ASMCMD> lsdsk -G CLONE
    Path
    ORCL:CLONE1A
    ORCL:CLONE2A
    SQL> select disk_number,name,label,path from v$asm_disk;
    0 DATA CLONE1A ORCL:CLONE1A
    1 LOGS CLONE2A ORCL:CLONE2A
    Edited by: vlethakula on Jan 18, 2013 11:48 AM
    Edited by: vlethakula on Jan 18, 2013 11:56 AM

  • ASM disk added without scan on second node

    Hi All ,
    Oracle Version:11.2.0.3
    I need one help for one issue with ASM disk addition.
    It is a two node RAC and one disk group was filled.
    One disk was available as UNUSED001, so we renamed it, ran a scandisks, and added the disk to the diskgroup on node 1.
    But, as we did not run scandisks on the second node, the name there is still showing as UNUSED001; it is assigned to the diskgroup, so it shows as MEMBER.
    Also, the renamed disk is showing as MEMBER but is not assigned to any diskgroup.
    Usually when this happens we have to reboot the node to fix the issue, but I would like to know whether it can be fixed without bouncing the nodes.

    Hi ,
    + Probably the disk addition failed with ORA-15075, as the same-named device was not visible after the renaming of the disk.
      As this validation takes place after writing the disk header, it is showing as MEMBER.
    + Get downtime for the cluster on the 2nd node and run scandisks on the 2nd node.
    + Now the renamed disk should be showing up on node 2.
    + If it is showing up, then validate:
    -- All expected diskgroups were mounted on both nodes or not
    sql> select inst_id,name,state from gv$asm_diskgroup;
    -- If mounted validate that renamed disk group_number and mount_status
    sql> col path for a30
    sql> select inst_id,group_number,path,mount_status from gv$asm_disk;
    + If group_number is 0 and mount_status is CLOSED, then it is not part of any mounted diskgroup.
      Add that disk again with the force option to the same diskgroup:
    sql> alter diskgroup <diskgroup_name> add disk 'ORCL:<LABEL_NAME>' force;
    And allow rebalance to complete.
    Regards,
    Aritra

  • How to move or migrate whole directories between ASM disk groups?

    Hello everyone!
    I'm playing around with Oracle ASM and Oracle Database (11g R1), I'm a student. This is just for testing purposes.
    Computer specifications are:
    Processor: Intel Pentium 4 HT 3.00 Ghz.
    RAM Memory: 2 GB.
    Hard Disk: 250 GB
    O.S.: Windows XP Professional Edition SP 2.
    I installed Oracle ASM, created an ASM disk group (+FRA), installed Oracle Database, and created a testing database. The database is working properly on the ASM disk group. Days ago I got help about the initialization parameters DB_CREATE_FILE_DEST, DB_CREATE_ONLINE_LOG_DEST_1, DB_CREATE_ONLINE_LOG_DEST_2 and DB_RECOVERY_FILE_DEST; based on their function, I created another 3 ASM disk groups (+FILES, +LOG1, +LOG2). Currently, the four initialization parameters point to their corresponding ASM disk groups. As you can deduce, at installation time Oracle Database used the ASM disk group "+FRA", and inside it the directories CONTROLFILE, DATAFILE, ONLINELOG, PARAMETERFILE, TEMPFILE and the SPFile were created.
    My point is that I want to move or migrate the directories DATAFILE, PARAMETERFILE, TEMPFILE and the SPFile to "+FILES", and ONLINELOG and CONTROLFILE to "+LOG1" and "+LOG2"; this way, the ASM disk group "+FRA" will contain the Flash Recovery Area only. What is the procedure to do this?
    Thanks in advance!

    Hi,
    There are a couple of approaches you can use; here are some of them:
    - To move datafile, start the database in mount state
    RMAN > copy datafile '+FRA/xxx' to '+FILES1';
    SQL > alter database rename file '+FRA/xxx' to '+FILES1/xxx';
    - To move tempfile
    SQL > alter tablespace TEMP add tempfile '+FILES1' SIZE 10M;
    SQL > alter database tempfile '+FRA/xxx' drop;
    - To move onlinelog
    SQL > alter database add logfile member '+LOG1' to group 1;
    SQL > alter database add logfile member '+LOG2' to group 1;
    SQL > alter database drop logfile member '+FRA/xxx';
    - To move controlfile
    SQL > restore controlfile to '+FILES1' from '+FRA/xxx';
    update the spfile (the control_files parameter) to reflect the new location of the controlfile
    Cheers
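    For that last step, a rough sketch of updating the parameter and relocating the spfile itself (the file names below are placeholders, not real paths):
    SQL > alter system set control_files='+LOG1/xxx','+LOG2/xxx' scope=spfile;
    SQL > create pfile='/tmp/init_tmp.ora' from spfile;
    SQL > create spfile='+FILES/spfiledb.ora' from pfile='/tmp/init_tmp.ora';
    -- point $ORACLE_HOME/dbs/init<SID>.ora at the new spfile location, then restart the instance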

  • Rebuild ASM Disk - Copying multiple datafiles from one disk to another

    Hi,
    I have an environment of four 11GR2 Oracle databases on a Linux server. Each database has its own ASM disk.
    DB1 -> ASM_DISK1
    DB2 -> ASM_DISK2
    DB3 -> ASM_DISK3
    DB4 -> ASM_DISK4
    I need to rebuild one of the ASM disks (ASM_DISK1), but first I need to copy all of the datafiles to another disk (ASM_DISK2). I tried backing up the database using RMAN, but it was taking too long (nearly two days before I cancelled it). So now I am going to copy the files using the asmcmd cp command.
    Basically my task is as follows:
    1. Shutdown database.
    2. Copy all data from ASM_DISK1 to ASM_DISK2.
    3. Drop ASM_DISK1.
    4. Re-create ASM_DISK1.
    5. Copy all data back to ASM_DISK1.
    6. Start database.
    Database size is 700GB.
    I am using the below script to copy the files.
    Copy Script
    ================
    # list the datafiles once (overwrite any old list rather than appending)
    asmcmd ls +ASM_DISK1/DB1/DATAFILE > asm_list.txt
    for FILENAME in `cat asm_list.txt`
    do
    # ASM paths need the leading '+' inside asmcmd
    asmcmd >> asm_LOG.log <<EOF
    cp +ASM_DISK1/DB1/DATAFILE/$FILENAME +ASM_DISK2/DB1_BACKUP/DATAFILE/$FILENAME.dbf
    EOF
    done
    ================
    I will then rename each file in the database like so:
    alter database rename file '+ASM_DISK1/DB1/DATAFILE/filename' to '+ASM_DISK1/DB1/DATAFILE/filename.dbf'
    My questions are as follows.
    Is this approach a valid solution?
    Will renaming the files during copy corrupt the files?
    When I copy the files back to the original disk after rebuild, then rename them, will the database be able to start?
    Rgs,
    Rob

    rgilligan_tnf wrote:
    Hi,
    I have an environment of four 11GR2 Oracle databases on a Linux server. Each database has its own ASM disk.
    DB1 -> ASM_DISK1
    DB2 -> ASM_DISK2
    DB3 -> ASM_DISK3
    DB4 -> ASM_DISK4
    I need to rebuild one of the ASM disks (ASM_DISK1), but first I need to copy all of the datafiles to another disk (ASM_DISK2). I tried backing up the database using RMAN, but it was taking too long (nearly two days before I cancelled it). So now I am going to copy the files using the asmcmd cp command.
    And how do you propose to update the controlfile to point to the new location?
    Unless your datafiles are offline and/or the database is down, you will corrupt them and have an unusable database when you finish.
    How were you doing this with RMAN? Depending on the size of your database (700G), it could well take some time, although I have restored databases from scratch at a rate of >300G/hr. You will need to shut down at some point anyway to relocate the controlfiles, the SYSTEM datafile and the redo logfiles. (A sketch of the RMAN image-copy approach is at the end of this reply.)
    Just curious: what is the problem with diskgroup ASM_DISK1 that makes you want to rebuild it?
    Basically my task is as follows:
    1. Shutdown database.
    2. Copy all data from ASM_DISK1 to ASM_DISK2.
    3. Drop ASM_DISK1.
    4. Re-create ASM_DISK1.
    5. Copy all data back to ASM_DISK1.
    6. Start database.
    Database size is 700GB.
    I am using the below script to copy the files.
    Copy Script
    ================
    # list the datafiles once (overwrite any old list rather than appending)
    asmcmd ls +ASM_DISK1/DB1/DATAFILE > asm_list.txt
    for FILENAME in `cat asm_list.txt`
    do
    # ASM paths need the leading '+' inside asmcmd
    asmcmd >> asm_LOG.log <<EOF
    cp +ASM_DISK1/DB1/DATAFILE/$FILENAME +ASM_DISK2/DB1_BACKUP/DATAFILE/$FILENAME.dbf
    EOF
    done
    ================
    I will then rename each file in the database like so:
    alter database rename file '+ASM_DISK1/DB1/DATAFILE/filename' to '+ASM_DISK1/DB1/DATAFILE/filename.dbf'
    My questions are as follows.
    Is this approach a valid solution?
    Will renaming the files during copy corrupt the files?
    When I copy the files back to the original disk after rebuild, then rename them, will the database be able to start?
    Rgs,
    Rob
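    For reference, here is a minimal sketch of the RMAN image-copy route alluded to above (rather than asmcmd cp), assuming DB1 is the target and +ASM_DISK2 has enough free space; verify it against your own 11gR2 environment before relying on it:
    RMAN > backup as copy database format '+ASM_DISK2';
    RMAN > shutdown immediate;
    RMAN > startup mount;
    RMAN > switch database to copy;
    RMAN > recover database;
    RMAN > alter database open;
    Note that switch database to copy only repoints the datafiles; the controlfiles, online redo logs, tempfiles and spfile still have to be relocated separately before ASM_DISK1 can be dropped.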

  • How to resize ASM disks

    Hi! I have Oracle RAC on CentOS with one active node and shared storage (an HP MSA1500 with 12x500 GB FC disks). I also have two servers on which I want to install a new RAC database.
    [oracle@server ~]$ crsctl query css votedisk
    0. 0 /u04/sync/oracrs/CSSFile
    located 1 votedisk(s).
    [oracle@server ~]$ ocrcheck
    Status of Oracle Cluster Registry is as follows :
    Version : 2
    Total space (kbytes) : 262120
    Used space (kbytes) : 1672
    Available space (kbytes) : 260448
    ID : 1521772939
    Device/File Name : /u04/sync/oracrs/CRSFile
    Device/File integrity check succeeded
    Device/File not configured
    Cluster registry integrity check succeeded
    [oracle@server ~]$
    CRS & CSS are installed on OCFS.
    Then, via oracleasm:
    [oracle@server ~]$ /etc/init.d/oracleasm listdisks
    VOL8
    VOL9
    [oracle@server ~]$
    [root@server ~]# /etc/init.d/oracleasm querydisk /dev/sda1
    Disk "/dev/sda1" is not marked an ASM disk
    [root@server ~]# /etc/init.d/oracleasm querydisk /dev/sdb1
    Disk "/dev/sdb1" is marked an ASM disk with the label ""
    [root@server ~]# /etc/init.d/oracleasm querydisk /dev/sdc1
    Disk "/dev/sdc1" is marked an ASM disk with the label ""
    [root@server ~]# /etc/init.d/oracleasm querydisk /dev/sdd1
    Disk "/dev/sdd1" is marked an ASM disk with the label "VOL8"
    [root@server ~]# /etc/init.d/oracleasm querydisk /dev/sdd2
    Disk "/dev/sdd2" is marked an ASM disk with the label "VOL9"
    [root@server ~]# /etc/init.d/oracleasm querydisk /dev/sdd3
    Disk "/dev/sdd3" is not marked an ASM disk
    [root@server ~]#
    [root@server ~]# fdisk -l
    Disk /dev/cciss/c0d0: 72.8 GB, 72833679360 bytes
    255 heads, 32 sectors/track, 17433 cylinders
    Units = cylinders of 8160 * 512 = 4177920 bytes
    Device Boot Start End Blocks Id System
    /dev/cciss/c0d0p1 * 1 50 203984 83 Linux
    /dev/cciss/c0d0p2 51 1305 5120400 82 Linux swap
    /dev/cciss/c0d0p3 1306 17433 65802240 83 Linux
    Disk /dev/sda: 1048 MB, 1048657920 bytes
    33 heads, 61 sectors/track, 1017 cylinders
    Units = cylinders of 2013 * 512 = 1030656 bytes
    Device Boot Start End Blocks Id System
    /dev/sda1 1 1017 1023580 83 Linux
    Disk /dev/sdb: 500.0 GB, 500071791104 bytes
    255 heads, 63 sectors/track, 60796 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdb1 1 60796 488343838+ 83 Linux
    Disk /dev/sdc: 499.0 GB, 499025092608 bytes
    255 heads, 63 sectors/track, 60669 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdc1 1 60669 487323711 83 Linux
    Disk /dev/sdd: 500.0 GB, 500073750528 bytes
    255 heads, 63 sectors/track, 60797 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdd1 1 1000 8032468+ 83 Linux
    /dev/sdd2 1001 2000 8032500 83 Linux
    /dev/sdd3 2001 3000 8032500 83 Linux
    [root@server ~]#
    Then in sqlplus I can see which disks ASM uses:
    SQL> select name, total_mb, free_mb,path from v$asm_disk;
    NAME TOTAL_MB FREE_MB PATH
    VOL1 476898 431798 /dev/raw/raw1
    VOL2 475902 475379 /dev/raw/raw2
    SQL> select name, total_mb, free_mb from v$asm_diskgroup;
    NAME TOTAL_MB FREE_MB
    DATA 476898 431798
    RECOVERY_AREA 475902 475379
    SQL> select name, type from V$asm_diskgroup;
    NAME TYPE
    DATA EXTERN
    RECOVERY_AREA EXTERN
    I can also see which disks the DATA and RECOVERY_AREA diskgroups use:
    [root@server ~]# cat /etc/sysconfig/rawdevices
    # This file and interface are deprecated.
    # Applications needing raw device access should open regular
    # block devices with O_DIRECT.
    # raw device bindings
    # format: <rawdev> <major> <minor>
    # <rawdev> <blockdev>
    # example: /dev/raw/raw1 /dev/sda1
    # /dev/raw/raw2 8 5
    /dev/raw/raw1 /dev/sdb1
    /dev/raw/raw2 /dev/sdc1
    /dev/raw/raw8 /dev/sdd1
    /dev/raw/raw9 /dev/sdd2
    [root@server ~]#
    [root@server ~]# mount
    /dev/cciss/c0d0p3 on / type ext3 (rw)
    none on /proc type proc (rw)
    none on /sys type sysfs (rw)
    none on /dev/pts type devpts (rw,gid=5,mode=620)
    usbfs on /proc/bus/usb type usbfs (rw)
    /dev/cciss/c0d0p1 on /boot type ext3 (rw)
    none on /dev/shm type tmpfs (rw)
    none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
    sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
    configfs on /config type configfs (rw)
    ocfs2_dlmfs on /dlm type ocfs2_dlmfs (rw)
    /dev/sda1 on /u04/sync type ocfs2 (rw,_netdev,datavolume,nointr,heartbeat=local)
    oracleasmfs on /dev/oracleasm type oracleasmfs (rw)
    [root@server ~]#
    [root@server ~]# ls -l /dev/oracleasm/disks/
    total 0
    brw-rw---- 1 oracle dba 8, 49 Aug 8 19:34 VOL8
    brw-rw---- 1 oracle dba 8, 50 Aug 8 19:34 VOL9
    [root@server ~]#
    How could I resize an ASM disk from, for example, 450 GB down to 100 GB to free space for my new RAC installation?
    Sorry for my english:(

    Hi mmusette, hi all,
    please, allow a little clarification. ASM allows resizing disks:
    From: http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/storeman.htm#sthref1727
    Resizing Disks in Disk Groups
    The RESIZE clause of ALTER DISKGROUP enables you to perform the following operations:
    Resize all disks in the disk group
    Resize specific disks
    Resize all of the disks in a specified failure group
    If you do not specify a new size in the SIZE clause then ASM uses the size of the disk as returned by the operating system. This could be a means of recovering disk space when you had previously restricted the size of the disk by specifying a size smaller than disk capacity.
    The new size is written to the ASM disk header record and if the size of the disk is increasing, then the new space is immediately available for allocation. If the size is decreasing, rebalancing must relocate file extents beyond the new size limit to available space below the limit. If the rebalance operation can successfully relocate all extents, then the new size is made permanent, otherwise the rebalance fails.
    However, if you have setup the ASM disk on a physical disk partition, you probably will not be able to resize the partition without destroying the data on the disk. If you, however, used a volume manager to create volumes and you based your ASM disks on those volumes (through RAW devices or directly) AND your volume manager allows resizing the volumes, you should be able to make use of the command mentioned above.
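    If the underlying device really does offer the new size (or you deliberately want ASM to use less of it), the statement itself is straightforward; this is only a sketch using the diskgroup and disk names from this thread as placeholders:
    SQL> alter diskgroup DATA resize disk VOL1 size 100G rebalance power 4;
    SQL> alter diskgroup DATA resize all;
    The first form shrinks one disk to 100 GB (the rebalance must be able to move all extents below that limit, or the resize fails); the second grows every disk in the group to whatever size the operating system now reports. As noted above, neither changes the size of the underlying partition or volume itself.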
    Thanks,
    Markus

  • ASM Disk offline - State hung

    Hey, in my ASM configuration I set up the following:
    Disk Path ASM Name Failure Group
    /dev/raw/raw4 Data1 FG1_Data [on SAN1]
    /dev/raw/raw5 Data2 FG2_Data [on SAN2]
    /dev/raw/raw6 Reco1 FG1_Reco [on SAN1]
    /dev/raw/raw7 Reco2 FG2_Reco [on SAN2]
    all good.
    Then I switched off SAN1. A query on v$asm_disk showed disks DATA1 & RECO1 as offline, in hung state.
    In the ASM instance, in sqlplus, I did:
    ALTER DISKGROUP DATA DROP DISK data1; -> success
    ALTER DISKGROUP RECO DROP DISK reco1; -> success
    ALTER DISKGROUP DATA ADD failgroup DATA3 '/dev/raw/raw4'; -> success
    ALTER DISKGROUP RECO ADD failgroup RECO3 '/dev/raw/raw6'; -> success
    The query on v$asm_disk still shows DATA1 and RECO1 - but DATA3 and RECO3 as well.
    Is it right that the "old" entries will disappear once the ASM instance is restarted? I can't do this at the moment.
    In general - why did the ASM disks go offline in the first place?
    SAN1 was switched off for about 5 minutes - and was then switched back on.
    Is this expected behaviour?

    Hi Chris,
    as long as the information is still partially valid, the devices will be displayed in v$asm_disk. The main reason is that some Oracle processes still have open handles on these devices, and as long as those handles exist the devices can still be seen.
    A restart of ASM + database will definitely remove these entries (if the devices cannot be seen by the operating system).
    Regarding your other question: Oracle can only detect that a device is going offline (errors in the database and ASM alert logs). If a device comes back online, Oracle does not check for this (it would be a major overhead to rescan, say, every minute to see whether a disk is back).
    Hence you have to reinclude the dropped/offlined disks yourself.
    Depending on the version (10g or 11g) and on the diskgroup's rdbms compatibility level, a disk can either simply be brought back into the diskgroup and resilvered (11g: alter diskgroup <dg> online disk <disk>) or it has to be added back to the diskgroup - no resilvering, but a complete rebalance (10g: alter diskgroup <dg> add failgroup <fg> disk '<disk>' [force]). A sketch of the 11g path is below.
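    For the next time a transient SAN outage takes a failgroup offline (once a disk has been dropped and re-added, as happened here, there is nothing left to online), a hedged 11g sketch looks like this, assuming compatible.asm and compatible.rdbms are at 11.1 or higher; the 3.6h repair window is just an illustrative value:
    SQL> alter diskgroup DATA set attribute 'disk_repair_time' = '3.6h';
    SQL> alter diskgroup DATA online disk DATA1;
    (or: alter diskgroup DATA online all; to bring back everything that is offline in the group)
    SQL> select name, mode_status, repair_timer from v$asm_disk;
    With a repair time set, ASM keeps the offlined disk's metadata for that window and only resynchronizes the changed extents when the disk is onlined, instead of forcing a full drop and rebalance.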
    Sebastian
