Error when resizing an ASM disk in a disk group....

SQL> alter diskgroup DGROUP2 resize disk DGROUP2_0000 size 512M;
alter diskgroup DGROUP2 resize disk DGROUP2_0000 size 512M
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15057: specified size of 512 MB is larger than actual size of 100 MB
SQL> alter diskgroup DGROUP2 resize all size 512M;
alter diskgroup DGROUP2 resize all size 512M
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15057: specified size of 512 MB is larger than actual size of 100 MB

Is there a question here at all?
The error says the disk's actual size is only 100 MB, so it cannot be resized up to 512 MB. Are we supposed to think that this is not true, and that the disk is really much larger than that?
As a general rule, I'd take Oracle's word on this matter as the operative statement!
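
To check the size that ASM actually sees for a disk before attempting a resize, you can query V$ASM_DISK. A minimal sketch, run against the ASM instance (use "/ as sysdba" on pre-11g ASM); OS_MB is the size the operating system reports, TOTAL_MB the size ASM is currently using:

sqlplus / as sysasm <<'EOF'
SELECT d.name, d.os_mb, d.total_mb
FROM   v$asm_disk d, v$asm_diskgroup g
WHERE  d.group_number = g.group_number
AND    g.name = 'DGROUP2';
EOF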

Similar Messages

  • Hello, I am trying to upgrade to Yosemite, but I get the "disk cannot be used to startup your computer" error. Resizing the partition does not work; I get the error "MediaKit reports no such partition", probably because I installed Linux in dual boot

    Hello, I am trying to upgrade my MacBook Pro to Yosemite, but I get the "disk cannot be used to startup your computer" error.
    Resizing the partition does not work for me, and I get the error "MediaKit reports no such partition", probably because I installed Linux in dual boot and the disk manager is lost.
    Is there any way to tell the Yosemite installer that it should not pay attention to whether the disk is bootable or not?
    If I am doomed, is there any way to delete the installer and the downloaded OS from my hard drive?
    Thanks for your help

    As usual, the Linux installer wrecked the partition table. You would have to boot from your OS X installation disc and repartition. Doing so will of course remove all data from the drive, so you must back up first if you haven't already done so.

  • I am getting an error when resizing the Mac HD. I had a separate partition for Windows, but have deleted it and want to add the empty 50GB partition to my Mac HD.

    I am getting an error when resizing the Mac HD. I had a separate partition for Windows, but have deleted it and want to add the empty 50GB partition to my Mac HD.

    SparkchaserEd wrote:
    I am getting an error when resizing the Mac HD.
    What does this error say?

  • Can anyone tell me what this Time Machine error means? The network backup disk does not support the required AFP features?

    Can anyone tell me what this Time Machine error means? The network backup disk does not support the required AFP features?

    AFP - Apple Filing Protocol
    The Network Attached Storage (NAS) that you are pointing Time Machine at does not have the features needed by Time Machine in order to do its thing. Time Machine needs some specific features that are not typically available on generic networked storage devices.
    There are manufacturers that support the Mac OS X HFS+ file system format and implement all the AFP protocol packets necessary for their devices to be used with Time Machine, but apparently yours does not.
    If you are not using a networked mounted volume for Time Machine, then more information will be needed about your Time Machine setup.

  • Error while taking the backup to disk.

    Hi all,
    I am facing a problem while taking a backup to disk. After copying a few files to disk, the backup terminated; the log is below:
    BR0001I *****_____________________________________________
    BR0201I Compressing /oracle/OPD/sapdata2/sr3_6/sr3.data6
    BR0203I to /backup2/archlog/bdwuyjxg/sr3.data6.Z ...
    BR0278E Command output of 'LANG=C compress -c /oracle/OPD/sapdata2/sr3_6/sr3.data6 > /backup2/archlog/bdwuyjxg/sr3.data6.Z':
    compress: I/O error
    BR0280I BRBACKUP time stamp: 2007-12-15 13.41.00
    BR0279E Return code from 'LANG=C compress -c /oracle/OPD/sapdata2/sr3_6/sr3.data6 > /backup2/archlog/bdwuyjxg/sr3.data6.Z': 1
    BR0224E Compressing /oracle/OPD/sapdata2/sr3_6/sr3.data6 to /backup2/archlog/bdwuyjxg/sr3.data6.Z failed due to previous errors
    BR0280I BRBACKUP time stamp: 2007-12-15 13.41.02
    BR0304I Starting and opening database instance OPD ...
    BR0280I BRBACKUP time stamp: 2007-12-15 13.41.16
    BR0305I Start and open of database instance OPD successful
    BR0115I Compression rate for all files 6.0097:1
    Please confirm whether the problem is with the hardware or with data corruption; I need your suggestions on this.
    Thanks and Regards,

    compress: I/O error
    This looks like a hardware error.
    Is there enough free space on /backup2/archlog/bdwuyjxg/?
    Do you get the same error if you run the command manually?
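    The two checks suggested above could look like this; a sketch, with the paths taken from the log in the question:
        df -k /backup2/archlog/bdwuyjxg/     # is the target filesystem full?
        LANG=C compress -c /oracle/OPD/sapdata2/sr3_6/sr3.data6 > /backup2/archlog/bdwuyjxg/sr3.data6.Z
        echo "compress exit code: $?"        # 0 means the manual run succeeded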

  • Need to format the old ASM disks on Solaris 10.

    Hello Gurus,
    We uninstalled ASM on Solaris, but while installing ASM again it says the mount point is already used by another instance, even though no database or ASM instance is running (this is a new server). So we need to use the dd command, or reformat the raw devices that already exist and were used by the old ASM instance. Here is the confusion:
    There are 6 LUNs presented to this host for ASM; they are not used by anyone.
    # format
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
    0. c0t0d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848> solaris
    /pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0
    1. c0t1d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848> solaris
    /pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0
    2. c2t60050768018E82BE98000000000007B2d0 <IBM-2145-0000-150.00GB>
    /scsi_vhci/ssd@g60050768018e82be98000000000007b2
    3. c2t60050768018E82BE98000000000007B3d0 <IBM-2145-0000 cyl 44798 alt 2 hd 64 sec 256>
    /scsi_vhci/ssd@g60050768018e82be98000000000007b3
    4. c2t60050768018E82BE98000000000007B4d0 <IBM-2145-0000 cyl 19198 alt 2 hd 64 sec 256>
    /scsi_vhci/ssd@g60050768018e82be98000000000007b4
    5. c2t60050768018E82BE98000000000007B5d0 <IBM-2145-0000 cyl 5118 alt 2 hd 32 sec 64>
    /scsi_vhci/ssd@g60050768018e82be98000000000007b5
    6. c2t60050768018E82BE98000000000007B6d0 <IBM-2145-0000 cyl 5118 alt 2 hd 32 sec 64>
    /scsi_vhci/ssd@g60050768018e82be98000000000007b6
    7. c2t60050768018E82BE98000000000007B7d0 <IBM-2145-0000 cyl 5118 alt 2 hd 32 sec 64>
    /scsi_vhci/ssd@g60050768018e82be98000000000007b7
    But when we list the raw devices with ls -ltr in the /dev/rdsk location, all the disks are owned by root, not by oracle:dba or oracle:oinstall.
    root@b2dslbmom3dbb3301 [dev/rdsk]
    # ls -ltr
    total 144
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s0 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:a,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s1 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:b,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s2 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:c,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s3 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:d,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s4 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:e,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s5 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:f,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s6 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:g,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s7 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:h,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s0 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:a,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s1 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:b,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s2 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:c,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s3 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:d,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s4 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:e,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s5 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:f,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s6 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:g,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s7 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:h,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s0 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:a,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s1 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:b,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s2 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:c,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s3 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:d,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s4 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:e,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s5 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:f,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s6 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:g,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s7 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:h,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B7d0s0 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:a,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B7d0s1 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:b,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B7d0s2 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:c,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B7d0s3 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:d,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B7d0s4 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:e,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B7d0s5 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:f,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B7d0s6 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:g,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B6d0s0 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:a,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B6d0s1 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:b,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B6d0s2 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:c,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B6d0s3 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:d,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B6d0s4 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:e,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B6d0s5 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:f,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B6d0s6 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:g,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B5d0s0 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:a,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B5d0s1 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:b,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B5d0s2 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:c,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B5d0s3 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:d,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B5d0s4 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:e,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B5d0s5 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:f,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B5d0s6 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:g,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B4d0s0 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:a,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B4d0s1 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:b,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B4d0s2 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:c,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B4d0s3 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:d,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B4d0s4 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:e,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B4d0s5 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:f,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B4d0s6 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:g,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B3d0s0 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:a,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B3d0s1 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:b,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B3d0s2 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:c,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B3d0s3 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:d,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B3d0s4 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:e,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B3d0s5 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:f,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B3d0s6 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:g,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B2d0s0 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:a,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B2d0s1 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:b,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B2d0s2 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:c,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B2d0s3 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:d,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B2d0s4 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:e,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B2d0s5 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:f,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B2d0s6 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:g,raw
    lrwxrwxrwx 1 root root 68 Jun 13 15:34 c2t60050768018E82BE98000000000007B2d0 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:wd,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:47 c2t60050768018E82BE98000000000007B3d0s7 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:h,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:48 c2t60050768018E82BE98000000000007B4d0s7 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:h,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:49 c2t60050768018E82BE98000000000007B5d0s7 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:h,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:51 c2t60050768018E82BE98000000000007B6d0s7 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:h,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:53 c2t60050768018E82BE98000000000007B7d0s7 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:h,raw
    So we need to know where the raw devices used by Oracle are located, so we can run the dd command to remove the old ASM header on each raw device and start a fresh installation.
    When we use the command given to us by the Unix admin (who no longer works here), we are able to see the following information:
    root@b2dslbmom3dbb3301 [dev/rdsk] # ls -l c2t600*d0s0|awk '{print $11}' |xargs ls -l
    crwxr-x--- 1 oracle oinstall 118, 232 Jun 14 13:29 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:a,raw
    crwxr-x--- 1 oracle oinstall 118, 224 Jun 14 13:31 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:a,raw
    crwxr-x--- 1 oracle oinstall 118, 216 Jun 14 13:32 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:a,raw
    crw-r----- 1 root sys 118, 208 Jul 18 13:19 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:a,raw
    crw-r----- 1 root sys 118, 200 Jul 18 13:19 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:a,raw
    crw-r----- 1 root sys 118, 192 Jul 18 13:19 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:a,raw
    We also have the mknod information, with the major and minor numbers that were used to create the softlinks/device nodes for the ASM raw devices:
    cd /dev/oraasm/
    /usr/sbin/mknod asm_disk_03 c 118 232
    /usr/sbin/mknod asm_disk_02 c 118 224
    /usr/sbin/mknod asm_disk_01 c 118 216
    /usr/sbin/mknod asm_ocrvote_03 c 118 208
    /usr/sbin/mknod asm_ocrvote_02 c 118 200
    /usr/sbin/mknod asm_ocrvote_01 c 118 192
    Finally, we need to find out where the above configuration is located on the host; I think this method of presenting raw devices is different from the normal method on Solaris.
    Please help me to proceed with my installation. Thanks in advance.
    I am really confused about the following command and where the Oracle raw device information comes from, since there is nothing about it in the /dev/rdsk location (OS is Solaris 10):
    root@b2dslbmom3dbb3301 [dev/rdsk] # ls -l c2t600*d0s0|awk '{print $11}' |xargs ls -l
    crwxr-x--- 1 oracle oinstall 118, 232 Jun 14 13:29 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:a,raw
    crwxr-x--- 1 oracle oinstall 118, 224 Jun 14 13:31 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:a,raw
    crwxr-x--- 1 oracle oinstall 118, 216 Jun 14 13:32 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:a,raw
    crw-r----- 1 root sys 118, 208 Jul 18 13:19 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:a,raw
    crw-r----- 1 root sys 118, 200 Jul 18 13:19 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:a,raw
    crw-r----- 1 root sys 118, 192 Jul 18 13:19 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:a,raw
    please help....

    Hi Winner;
    For your issue I suggest you close your thread here by changing its status to answered, and repost it in Forum Home » Grid Computing » Automatic Storage Management, where you will get a quicker response.
    Regards
    Helios
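    For reference, clearing a stale ASM disk header before a fresh installation is commonly done by zeroing the first megabytes of each raw device with dd. A sketch only, assuming the /dev/oraasm nodes created by the mknod commands above; this is destructive, so verify the device names first:
        # zero the first 10 MB of each old ASM disk to wipe its ASM header
        for d in asm_disk_01 asm_disk_02 asm_disk_03; do
          dd if=/dev/zero of=/dev/oraasm/$d bs=1024k count=10
        done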

  • OCR and voting disk: in the same ASM disk or on different disks, and how?

    What does Oracle recommend regarding the OCR and voting disks in RAC 11gR2 on ASM?
    Should they be placed on the same disk or on different disks?
    And how?
    Thanks a lot,
    Regards,
    Gehad.

    Best practice is that OCR/voting files should be stored in ASM. Whether they go on the same disk or on different disks depends on the ASM disk group. If you want to keep them in separate failure groups, then you should create multiple failure groups (see the sketch below).
    The required number of disks depends on the disk group redundancy: external redundancy needs 1, normal redundancy 3, and high redundancy 5.
    I hope my answer is clear.
    Regards,
    Dheeraj Vaish
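    As an illustration of the multiple-failure-group point above, a normal-redundancy disk group for OCR/voting files could be created with three failure groups, so that one voting file lands in each. A sketch only; the disk paths are hypothetical:
        sqlplus / as sysasm <<'EOF'
        CREATE DISKGROUP OCRVOTE NORMAL REDUNDANCY
          FAILGROUP fg1 DISK '/dev/rdsk/ocrvote1'
          FAILGROUP fg2 DISK '/dev/rdsk/ocrvote2'
          FAILGROUP fg3 DISK '/dev/rdsk/ocrvote3';
        EOF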

  • Reinstalling Oracle 11gR2 RAC Grid Problem - ASM Disks Group

    Folks,
    Hello.
    I am installing Oracle 11gR2 RAC using 2 virtual machines (rac1 and rac2, whose OS is Oracle Linux 5.6) in VMPlayer.
    I had been installing Grid Infrastructure using runInstaller on the first VM, rac1, and had gotten from step 1 to step 9 of 10.
    On step 9 of 10 in the wizard, I accidentally touched the mouse, and the wizard was gone.
    The directory for installing Grid is the same in both VMs: /u01
    To make sure everything was correct, I deleted the entire /u01 directory in both VMs and installed Grid on rac1 again.
    I have since understood that deleting /u01 is not the right way. The right way is to follow the tutorial
    http://docs.oracle.com/cd/E11882_01/install.112/e22489/rem_orcl.htm#CBHEFHAC
    But I have already deleted /u01 and need to fix things one by one. I install Grid again and get the following error messages on step 5 of 9:
    [INS-30516] Please specify unique disk groups.
    [INS-3050] Empty ASM disk group.
    Cause - Installer has detected the disk group name provided already exists on the system.
    Action - Specify different disk group.
    In the wizard, the previous disk group name is "DATA", and its candidate disks (5 ASM disks) are gone. I tried using a different name, "DATA2", but no ASM disks come up under "Candidate Disks". Under "All Disks", none of the ASM disks can be selected.
    I want to use the same ASM disk group "DATA" and don't want to create a new disk group.
    My question is:
    How can I get the previous ASM disks and their group "DATA" to come up under "Candidate Disks" so that I can use them again?
    Thanks.

    Hi, in case this helps anyone else: I got this INS-30516 error too and was stumped for a little while. I have two 2-node RACs which are hitting the same SAN. The first-built RAC has a DATA disk group. When I went to build the second RAC on new ASM disks with a new disk group (but the same disk group name, DATA), I got INS-30516 about the disk group name already being in use. I finally figured out that all that was required was to restrict the disk discovery string, using the button in the installer, to retrieve only the LUNs for this RAC (the setup was quick and dirty: all LUNs for both RACs were being presented to both RACs). Once the discovery string searched only for the LUNs required for this RAC, e.g.
    ORCL:DATA_P* (for DATA_PD and FRA_PD)
    the error went away.
    I also have DATA_DR and FRA_DR presented to both RACs. Apparently the installer scans the disk headers, and if it finds a disk group name that is already in use based on the discovery string scan, it will not allow reuse of that name, since it has no way of knowing that the other ASM disks belong to a different RAC.
    HTH
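    The same restriction can also be applied to a running ASM instance by narrowing its discovery string, which corresponds to what the installer's discovery path dialog changes. A sketch, reusing the pattern from the post above:
        sqlplus / as sysasm <<'EOF'
        ALTER SYSTEM SET ASM_DISKSTRING = 'ORCL:DATA_P*' SCOPE=BOTH;
        EOF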

  • Please Help - When I try to add ASM Disk to ASM Diskgroup it crashes Server

    We are using a Pillar SAN, have LUNs created, and are using the following multipath devices. (I'm a DBA more than anything else, but I am rather familiar with Linux; SAN hardware, not so much.)
    Device Size Mount Point
    /dev/dpda1 11G /u01
    The above device is working fine. Below are the ASM disks being created:
    Device Size Oracle ASM Disk Name
    /dev/dpdb1 198G ORCL1
    /dev/dpdc1 21G SIRE1
    /dev/dpdd1 21G CART1
    /dev/dpde1 21G SRTS1
    /dev/dpdf1 21G CRTT1
    I try to create the first ASM disk:
    /etc/init.d/oracleasm createdisk ORCL1 /dev/dpdb1
    Marking disk "ORCL1" as an ASM disk: [FAILED]
    So I check the oracleasm log:
    #cat /var/log/oracleasm
    Device "/dev/dpdb1" is not a partition
    I did some research and found that this is a common problem with multipath devices; to work around it, you have to use asmtool:
    # /usr/sbin/asmtool -C -l /dev/oracleasm -n ORCL1 -s /dev/dpdb1 -a force=yes
    asmtool: Device "/dev/dpdb1" is not a partition
    asmtool: Continuing anyway
    Now I scan and list the disks:
    # /etc/init.d/oracleasm scandisks
    Scanning the system for Oracle ASMLib disks: [  OK  ]
    # /etc/init.d/oracleasm listdisks
    ORCL1
    Here is what's going on in /var/log/messages when I run the oracleasm scandisks command:
    # date
    Fri Aug 14 13:51:58 MST 2009
    # /etc/init.d/oracleasm scandisks
    Scanning the system for Oracle ASMLib disks: [  OK  ]
    cat /var/log/messages | grep "Aug 14 13:5"
    Aug 14 13:52:06 seer kernel: dpdb: dpdb1
    Aug 14 13:52:06 seer kernel: dpdc: dpdc1
    Aug 14 13:52:06 seer kernel: dpdd: dpdd1
    Aug 14 13:52:06 seer kernel: dpde: dpde1
    Aug 14 13:52:06 seer kernel: dpdf: dpdf1
    Aug 14 13:52:06 seer kernel: dpdg: dpdg1
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: printk: 30 messages suppressed.
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: sda : READ CAPACITY failed.
    Aug 14 13:52:06 seer kernel: sda : status=1, message=00, host=0, driver=08
    Aug 14 13:52:06 seer kernel: sd: Current: sense key: Illegal Request
    Aug 14 13:52:06 seer kernel: Add. Sense: Logical unit not supported
    Aug 14 13:52:06 seer kernel:
    Aug 14 13:52:06 seer kernel: sda: test WP failed, assume Write Enabled
    Aug 14 13:52:06 seer kernel: sda: asking for cache data failed
    Aug 14 13:52:06 seer kernel: sda: assuming drive cache: write through
    Aug 14 13:52:06 seer kernel: sda:end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: Dev sda: unable to read RDB block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sda, logical block 0
    Aug 14 13:52:06 seer kernel: unable to read partition table
    Aug 14 13:52:06 seer kernel: SCSI device sdb: 21502464 512-byte hdwr sectors (11009 MB)
    Aug 14 13:52:06 seer kernel: sdb: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdb: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdb: sdb1
    Aug 14 13:52:06 seer kernel: SCSI device sdc: 421476864 512-byte hdwr sectors (215796 MB)
    Aug 14 13:52:06 seer kernel: sdc: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdc: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdc: sdc1
    Aug 14 13:52:06 seer kernel: SCSI device sdd: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdd: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdd: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdd: sdd1
    Aug 14 13:52:06 seer kernel: SCSI device sde: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sde: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sde: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sde: sde1
    Aug 14 13:52:06 seer kernel: SCSI device sdf: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdf: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdf: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdf: sdf1
    Aug 14 13:52:06 seer kernel: SCSI device sdg: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdg: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdg: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdg: sdg1
    Aug 14 13:52:06 seer kernel: SCSI device sdh: 2107390464 512-byte hdwr sectors (1078984 MB)
    Aug 14 13:52:06 seer kernel: sdh: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdh: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdh: sdh1
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdi, sector 0
    Aug 14 13:52:06 seer kernel: Buffer I/O error on device sdi, logical block 0
    Aug 14 13:52:06 seer kernel: sdi : READ CAPACITY failed.
    Aug 14 13:52:06 seer kernel: sdi : status=1, message=00, host=0, driver=08
    Aug 14 13:52:06 seer kernel: sd: Current: sense key: Illegal Request
    Aug 14 13:52:06 seer kernel: Add. Sense: Logical unit not supported
    Aug 14 13:52:06 seer kernel:
    Aug 14 13:52:06 seer kernel: sdi: test WP failed, assume Write Enabled
    Aug 14 13:52:06 seer kernel: sdi: asking for cache data failed
    Aug 14 13:52:06 seer kernel: sdi: assuming drive cache: write through
    Aug 14 13:52:06 seer kernel: sdi:end_request: I/O error, dev sdi, sector 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdi, sector 0
    Aug 14 13:52:06 seer last message repeated 4 times
    Aug 14 13:52:06 seer kernel: Dev sdi: unable to read RDB block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdi, sector 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdi, sector 0
    Aug 14 13:52:06 seer kernel: unable to read partition table
    Aug 14 13:52:06 seer kernel: SCSI device sdj: 21502464 512-byte hdwr sectors (11009 MB)
    Aug 14 13:52:06 seer kernel: sdj: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdj: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdj: sdj1
    Aug 14 13:52:06 seer kernel: SCSI device sdk: 421476864 512-byte hdwr sectors (215796 MB)
    Aug 14 13:52:06 seer kernel: sdk: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdk: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdk: sdk1
    Aug 14 13:52:06 seer kernel: SCSI device sdl: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdl: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdl: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdl: sdl1
    Aug 14 13:52:06 seer kernel: SCSI device sdm: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdm: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdm: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdm: sdm1
    Aug 14 13:52:06 seer kernel: SCSI device sdn: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdn: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdn: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdn: sdn1
    Aug 14 13:52:06 seer kernel: SCSI device sdo: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdo: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdo: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdo: sdo1
    Aug 14 13:52:06 seer kernel: SCSI device sdp: 2107390464 512-byte hdwr sectors (1078984 MB)
    Aug 14 13:52:06 seer kernel: sdp: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdp: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdp: sdp1
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdq, sector 0
    Aug 14 13:52:06 seer kernel: sdq : READ CAPACITY failed.
    Aug 14 13:52:06 seer kernel: sdq : status=1, message=00, host=0, driver=08
    Aug 14 13:52:06 seer kernel: sd: Current: sense key: Illegal Request
    Aug 14 13:52:06 seer kernel: Add. Sense: Logical unit not supported
    Aug 14 13:52:06 seer kernel:
    Aug 14 13:52:06 seer kernel: sdq: test WP failed, assume Write Enabled
    Aug 14 13:52:06 seer kernel: sdq: asking for cache data failed
    Aug 14 13:52:06 seer kernel: sdq: assuming drive cache: write through
    Aug 14 13:52:06 seer kernel: sdq:end_request: I/O error, dev sdq, sector 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdq, sector 0
    Aug 14 13:52:06 seer last message repeated 5 times
    Aug 14 13:52:06 seer kernel: Dev sdq: unable to read RDB block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdq, sector 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdq, sector 0
    Aug 14 13:52:06 seer kernel: unable to read partition table
    Aug 14 13:52:06 seer kernel: SCSI device sdr: 21502464 512-byte hdwr sectors (11009 MB)
    Aug 14 13:52:06 seer kernel: sdr: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdr: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdr: sdr1
    Aug 14 13:52:06 seer kernel: SCSI device sds: 421476864 512-byte hdwr sectors (215796 MB)
    Aug 14 13:52:06 seer kernel: sds: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sds: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sds: sds1
    Aug 14 13:52:06 seer kernel: SCSI device sdt: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdt: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdt: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdt: sdt1
    Aug 14 13:52:06 seer kernel: SCSI device sdu: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdu: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdu: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdu: sdu1
    Aug 14 13:52:06 seer kernel: SCSI device sdv: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdv: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdv: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdv: sdv1
    Aug 14 13:52:06 seer kernel: SCSI device sdw: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdw: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdw: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdw: sdw1
    Aug 14 13:52:06 seer kernel: SCSI device sdx: 2107390464 512-byte hdwr sectors (1078984 MB)
    Aug 14 13:52:06 seer kernel: sdx: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdx: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdx: sdx1
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdy, sector 0
    Aug 14 13:52:06 seer kernel: sdy : READ CAPACITY failed.
    Aug 14 13:52:06 seer kernel: sdy : status=1, message=00, host=0, driver=08
    Aug 14 13:52:06 seer kernel: sd: Current: sense key: Illegal Request
    Aug 14 13:52:06 seer kernel: Add. Sense: Logical unit not supported
    Aug 14 13:52:06 seer kernel:
    Aug 14 13:52:06 seer kernel: sdy: test WP failed, assume Write Enabled
    Aug 14 13:52:06 seer kernel: sdy: asking for cache data failed
    Aug 14 13:52:06 seer kernel: sdy: assuming drive cache: write through
    Aug 14 13:52:06 seer kernel: sdy:end_request: I/O error, dev sdy, sector 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdy, sector 0
    Aug 14 13:52:06 seer last message repeated 5 times
    Aug 14 13:52:06 seer kernel: Dev sdy: unable to read RDB block 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdy, sector 0
    Aug 14 13:52:06 seer kernel: end_request: I/O error, dev sdy, sector 0
    Aug 14 13:52:06 seer kernel: unable to read partition table
    Aug 14 13:52:06 seer kernel: SCSI device sdz: 21502464 512-byte hdwr sectors (11009 MB)
    Aug 14 13:52:06 seer kernel: sdz: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdz: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdz: sdz1
    Aug 14 13:52:06 seer kernel: SCSI device sdaa: 421476864 512-byte hdwr sectors (215796 MB)
    Aug 14 13:52:06 seer kernel: sdaa: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdaa: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdaa: sdaa1
    Aug 14 13:52:06 seer kernel: SCSI device sdab: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdab: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdab: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdab: sdab1
    Aug 14 13:52:06 seer kernel: SCSI device sdac: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdac: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdac: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdac: sdac1
    Aug 14 13:52:06 seer kernel: SCSI device sdad: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdad: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdad: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdad: sdad1
    Aug 14 13:52:06 seer kernel: SCSI device sdae: 43006464 512-byte hdwr sectors (22019 MB)
    Aug 14 13:52:06 seer kernel: sdae: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdae: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdae: sdae1
    Aug 14 13:52:06 seer kernel: SCSI device sdaf: 2107390464 512-byte hdwr sectors (1078984 MB)
    Aug 14 13:52:06 seer kernel: sdaf: Write Protect is off
    Aug 14 13:52:06 seer kernel: SCSI device sdaf: drive cache: write through w/ FUA
    Aug 14 13:52:06 seer kernel: sdaf: sdaf1
    Aug 14 13:52:06 seer kernel: scsi_wr_disk: unknown partition table
    Aug 14 13:52:07 seer kernel: end_request: I/O error, dev sda, sector 0
    Aug 14 13:52:07 seer kernel: end_request: I/O error, dev sdi, sector 0
    Aug 14 13:52:07 seer kernel: end_request: I/O error, dev sdq, sector 0
    Aug 14 13:52:07 seer kernel: end_request: I/O error, dev sdy, sector 0
    Here's some extra info:
    # /sbin/blkid | grep asm
    /dev/sdc1: LABEL="ORCL1" TYPE="oracleasm"
    /dev/sdk1: LABEL="ORCL1" TYPE="oracleasm"
    /dev/sds1: LABEL="ORCL1" TYPE="oracleasm"
    /dev/sdaa1: LABEL="ORCL1" TYPE="oracleasm"
    /dev/dpdb1: LABEL="ORCL1" TYPE="oracleasm"
    I have learned that by excluding devices in the oracleasm configuration file, I can eliminate those I/O errors in /var/log/messages:
    # cat /etc/sysconfig/oracleasm
    # This is a configuration file for automatic loading of the Oracle
    # Automatic Storage Management library kernel driver. It is generated
    # By running /etc/init.d/oracleasm configure. Please use that method
    # to modify this file
    # ORACLEASM_ENABLED: 'true' means to load the driver on boot.
    ORACLEASM_ENABLED=true
    # ORACLEASM_UID: Default user owning the /dev/oracleasm mount point.
    ORACLEASM_UID=oracle
    # ORACLEASM_GID: Default group owning the /dev/oracleasm mount point.
    ORACLEASM_GID=oinstall
    # ORACLEASM_SCANBOOT: 'true' means scan for ASM disks on boot.
    ORACLEASM_SCANBOOT=true
    # ORACLEASM_SCANORDER: Matching patterns to order disk scanning
    ORACLEASM_SCANORDER="dp sd"
    # ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
    ORACLEASM_SCANEXCLUDE="sdc sdk sds sdaa sda"
    # ls -la /dev/oracleasm/disks/
    total 0
    drwxr-xr-x 1 root root 0 Aug 14 10:47 .
    drwxr-xr-x 4 root root 0 Aug 13 15:32 ..
    brw-rw---- 1 oracle oinstall 251, 33 Aug 14 13:46 ORCL1
    Now I can go into dbca to create the ASM instance, which starts up fine. I create a new disk group, see ORCL1 listed as a provisioned ASM disk, select it, and click OK.
    CRASH!!! The box hangs and I have to reboot it.
    I have gotten myself back to exactly the same point, right before clicking OK, and here is what is in the ASM alert log so far:
    Fri Aug 14 14:42:02 2009
    Starting ORACLE instance (normal)
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Picked latch-free SCN scheme 3
    Using LOG_ARCHIVE_DEST_1 parameter default value as /u01/app/oracle/product/11.1.0/db_1/dbs/arch
    Autotune of undo retention is turned on.
    IMODE=BR
    ILAT =0
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    Starting up ORACLE RDBMS Version: 11.1.0.6.0.
    Using parameter settings in server-side spfile /u01/app/oracle/product/11.1.0/db_1/dbs/spfile+ASM.ora
    System parameters with non-default values:
    large_pool_size = 12M
    instance_type = "asm"
    diagnostic_dest = "/u01/app/oracle"
    Fri Aug 14 14:42:04 2009
    PMON started with pid=2, OS id=3300
    Fri Aug 14 14:42:04 2009
    VKTM started with pid=3, OS id=3302 at elevated priority
    VKTM running at (20)ms precision
    Fri Aug 14 14:42:04 2009
    DIAG started with pid=4, OS id=3306
    Fri Aug 14 14:42:04 2009
    PSP0 started with pid=5, OS id=3308
    Fri Aug 14 14:42:04 2009
    DSKM started with pid=6, OS id=3310
    Fri Aug 14 14:42:04 2009
    DIA0 started with pid=7, OS id=3312
    Fri Aug 14 14:42:04 2009
    MMAN started with pid=8, OS id=3314
    Fri Aug 14 14:42:04 2009
    DBW0 started with pid=9, OS id=3316
    Fri Aug 14 14:42:04 2009
    LGWR started with pid=6, OS id=3318
    Fri Aug 14 14:42:04 2009
    CKPT started with pid=10, OS id=3320
    Fri Aug 14 14:42:04 2009
    SMON started with pid=11, OS id=3322
    Fri Aug 14 14:42:04 2009
    RBAL started with pid=12, OS id=3324
    Fri Aug 14 14:42:04 2009
    GMON started with pid=13, OS id=3326
    ORACLE_BASE from environment = /u01/app/oracle
    Fri Aug 14 14:42:04 2009
    SQL> ALTER DISKGROUP ALL MOUNT
    Fri Aug 14 14:42:41 2009
    At this point I don't want to click OK until I am sure someone is in the office to reboot the machine manually if I hang it again. I hung it twice yesterday; however, I did not have the devices excluded in the oracleasm configuration file then, as I do now.
    Well, clicking OK hung it again, and I am waiting to get back into it to see what new information might be gleaned.
    Does anyone have any ideas on what to check or where to look? Will update more once I can log back in.

    Hi Mark,
    It looks like something is not correct with your raw device partition based on the error messages:
    Aug 14 13:52:06 seer kernel: Add. Sense: Logical unit not supported
    Aug 14 13:52:06 seer kernel:
    Aug 14 13:52:06 seer kernel: sda: test WP failed, assume Write Enabled
    Aug 14 13:52:06 seer kernel: sda: asking for cache data failed
    Aug 14 13:52:06 seer kernel: sda: assuming drive cache: write through
    Aug 14 13:52:06 seer kernel: sda:end_request: I/O error, dev sda, sector 0
    It could be a number of things. I would check with your vendor and Oracle support to see whether the multipath software driver is supported and whether there is a potential workaround for ASM. Sorry this is not quite a solution, but it's what jumps to mind based on issues with multipath software and storage vendors for ASM with Linux and Oracle. Have you checked the validation matrix available on Metalink?
    Cheers,
    Ben
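    As a side note on the configuration shown in the question: the usual ASMLib guidance for multipath storage is to scan the multipath pseudo-devices first and exclude the underlying single-path devices entirely, rather than excluding them one by one. A sketch of the two relevant /etc/sysconfig/oracleasm settings, assuming dp* are the multipath devices as above:
        # scan multipath (dp*) devices first ...
        ORACLEASM_SCANORDER="dp"
        # ... and never scan the underlying single-path sd* devices
        ORACLEASM_SCANEXCLUDE="sd"
    followed by /etc/init.d/oracleasm restart and a fresh scandisks.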

  • [FATAL] [INS-30508] Invalid ASM disks.

    Dear Gurus,
    Please help troubleshoot the invalid ASM disk error on Solaris.
    Oracle Grid 11.2.0.3.0
    Solaris 10 with EMC PowerPath partitions
    -bash-3.2$ ./runInstaller -silent -responseFile /aaa/Oracle11g_SunSPARC_64bit/grid/response/grid_install.rsp
    Starting Oracle Universal Installer...
    Checking Temp space: must be greater than 180 MB.   Actual 90571 MB    Passed
    Checking swap space: must be greater than 150 MB.   Actual 90667 MB    Passed
    Preparing to launch Oracle Universal Installer from /tmp/OraInstall2013-09-15_12-17-47PM. Please wait ...-bash-3.2$ [FATAL] [INS-30508] Invalid ASM disks.
       CAUSE: The disks [/dev/rdsk/emcpower2a, /dev/rdsk/emcpower6a, /dev/rdsk/emcpower8a] were not valid.
       ACTION: Please choose or enter valid ASM disks.
    A log of this session is currently saved as: /tmp/OraInstall2013-09-15_12-17-47PM/installActions2013-09-15_12-17-47PM.log. Oracle recommends that if you want to keep this log, you should move it from the temporary location to a more permanent location.

    I'm grateful for your response and your time.
    We resolved the issue ourselves.
    [/dev/rdsk/emcpower2a, /dev/rdsk/emcpower6a, /dev/rdsk/emcpower8a]
    The partitions above refer to slice 0 by default.
    We had referred to the link below:
    ASM Create Partitions in Solaris and add them as disks in ASM
    In the SPARC architecture, a Solaris disk is subdivided into 8 slices. Below is the common configuration of these eight slices:
    slice 0: Holds files and directories that make up the operating system. *
    slice 1: Swap. Provides virtual memory, or swap space.
    slice 2: Refers to the entire disk, by convention. The size of this slice should not be changed. **
    slice 3: /export. Holds alternative versions of the operating system.
    slice 4: /export/swap. Provides virtual memory space for client systems. ***
    slice 5: /opt. Holds application software added to a system.
    slice 6: /usr. Holds operating system commands (also known as executables) designed to be run by users.
    slice 7: /home. Holds files created by users.
    * Cannot be used as an ASM disk. Using this slice causes disk corruption and may render the disk unusable.
    ** Should not be used as an ASM disk, as this slice refers to the entire disk (including partition tables).
    *** Is the recommended slice to be used for an ASM disk.
    As per the ASM recommendation, we used slice 4. After detailed diagnosis we came to know that we had to use
    [/dev/rdsk/emcpower2e, /dev/rdsk/emcpower6e, /dev/rdsk/emcpower8e]
    where "e" refers to slice 4.
    We made the change below in the silent response file and it's working fine:
    oracle.install.asm.diskGroup.disks=/dev/rdsk/emcpower2e,/dev/rdsk/emcpower6e,/dev/rdsk/emcpower8e
    Sample logs:
    INFO: Starting Output Reader Threads for process /tmp/OraInstall2013-09-17_05-03-21PM/ext/bin/kfod
    INFO: Parsing 2560 CANDIDATE /dev/rdsk/emcpower2e oracle oinstall
    INFO: The process /tmp/OraInstall2013-09-17_05-03-21PM/ext/bin/kfod exited with code 0
    INFO: Waiting for output processor threads to exit.
    INFO: Parsing 2560 CANDIDATE /dev/rdsk/emcpower6e oracle oinstall
    INFO: Parsing 2560 CANDIDATE /dev/rdsk/emcpower8e oracle oinstall
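    For anyone hitting the same error: the slice layout of a PowerPath pseudo-device can be verified before choosing a slice. A sketch, using one of the devices named above (slice 2, the "c" slice, conventionally covers the whole disk and carries the VTOC):
        prtvtoc /dev/rdsk/emcpower2c    # print the partition map and slice boundaries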

  • ORA-15020: discovered duplicate ASM disk

    Hello.
    I am installing Oracle GI and RDBMS 11.2.0.3+, and while the installer is creating the disk group it fails with the error ORA-15020: discovered duplicate ASM disk.
    INFO: Read: Configuring ASM failed with the following message:
    INFO: Read: One or more disk group(s) creation failed as below:
    INFO: Read: Disk Group DATA1 creation failed with the following message:
    INFO: Read: ORA-15018: diskgroup cannot be created
    INFO: Read: ORA-15020: discovered duplicate ASM disk "DATA1_0004"
    INFO: Read:
    INFO: Read:
    INFO: Completed Plugin named: Automatic Storage Management Configuration Assistant
    I have the correct permissions on all the disks:
    crw-rw---- 1 oracle dba 118, 30 Mar 4 17:17 /dev/rdsk/c1t50001FE1500B89E8d10s6
    crw-rw---- 1 oracle dba 118, 22 Mar 4 17:17 /dev/rdsk/c1t50001FE1500B89E8d11s6
    crw-rw---- 1 oracle dba 118, 14 Mar 4 17:17 /dev/rdsk/c1t50001FE1500B89E8d12s6
    crw-rw---- 1 oracle dba 118, 6 Mar 4 17:17 /dev/rdsk/c1t50001FE1500B89E8d13s6
    crw-rw---- 1 oracle dba 118, 38 Mar 4 17:17 /dev/rdsk/c1t50001FE1500B89E8d14s6
    crw-rw---- 1 oracle dba 118, 110 Mar 6 10:49 /dev/rdsk/c1t50001FE1500B89E8d1s6
    crw-rw---- 1 oracle dba 118, 102 Mar 4 17:17 /dev/rdsk/c1t50001FE1500B89E8d2s6
    crw-rw---- 1 oracle dba 118, 94 Mar 4 17:17 /dev/rdsk/c1t50001FE1500B89E8d3s6
    crw-rw---- 1 oracle dba 118, 86 Mar 4 17:17 /dev/rdsk/c1t50001FE1500B89E8d4s6
    crw-rw---- 1 oracle dba 118, 78 Mar 4 17:17 /dev/rdsk/c1t50001FE1500B89E8d5s6
    crw-rw---- 1 oracle dba 118, 70 Mar 4 17:17 /dev/rdsk/c1t50001FE1500B89E8d6s6
    crw-rw---- 1 oracle dba 118, 62 Mar 4 17:17 /dev/rdsk/c1t50001FE1500B89E8d7s6
    crw-rw---- 1 oracle dba 118, 54 Mar 4 17:17 /dev/rdsk/c1t50001FE1500B89E8d8s6
    crw-rw---- 1 oracle dba 118, 46 Mar 4 17:17 /dev/rdsk/c1t50001FE1500B89E8d9s6
    crw-rw---- 1 oracle dba 118, 150 Mar 4 17:17 /dev/rdsk/c1t50001FE1500B89EDd10s6
    crw-rw---- 1 oracle dba 118, 142 Mar 4 17:17 /dev/rdsk/c1t50001FE1500B89EDd11s6
    crw-rw---- 1 oracle dba 118, 134 Mar 4 17:17 /dev/rdsk/c1t50001FE1500B89EDd12s6
    crw-rw---- 1 oracle dba 118, 126 Mar 4 17:17 /dev/rdsk/c1t50001FE1500B89EDd13s6
    crw-rw---- 1 oracle dba 118, 118 Mar 4 17:17 /dev/rdsk/c1t50001FE1500B89EDd14s6
    crw-rw---- 1 oracle dba 118, 222 Mar 4 17:17 /dev/rdsk/c1t50001FE1500B89EDd1s6
    crw-rw---- 1 oracle dba 118, 214 Mar 4 17:17 /dev/rdsk/c1t50001FE1500B89EDd2s6
    crw-rw---- 1 oracle dba 118, 206 Mar 4 17:17 /dev/rdsk/c1t50001FE1500B89EDd3s6
    crw-rw---- 1 oracle dba 118, 198 Mar 4 17:17 /dev/rdsk/c1t50001FE1500B89EDd4s6
    crw-rw---- 1 oracle dba 118, 190 Mar 4 17:17 /dev/rdsk/c1t50001FE1500B89EDd5s6
    crw-rw---- 1 oracle dba 118, 182 Mar 4 17:17 /dev/rdsk/c1t50001FE1500B89EDd6s6
    crw-rw---- 1 oracle dba 118, 174 Mar 4 17:17 /dev/rdsk/c1t50001FE1500B89EDd7s6
    crw-rw---- 1 oracle dba 118, 166 Mar 4 17:17 /dev/rdsk/c1t50001FE1500B89EDd8s6
    crw-rw---- 1 oracle dba 118, 158 Mar 4 17:17 /dev/rdsk/c1t50001FE1500B89EDd9s6
    All the partitions start at cylinder 1.
    Do you know of a workaround?
    I have an open severity 2 SR, but they are working too slowly.
    Regards,
    Milton

    Hello Levi.
    I set the variables:
    ORACLE_HOME=/oracluster/product/11.2/clusterware
    ORACLE_SID=+ASM1
    PATH=/usr/bin:/usr/ccs/bin:/usr/ccs/bin:/sbin:/usr/sbin:/oracluster/product/11.2/clusterware/bin:/oracluster/product/11.2/clusterware/OPatch:/oracluster/product/11.2/clusterware/opmn/bin:/opt/xpdf-3.02pl1-solaris:/usr/ucb:
    NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
    ORACLE_BASE=/orasoft/product
    SSH_AUTH_SOCK=/tmp/ssh-PThz1424/agent.1424
    EDITOR=vi
    LOGNAME=oracle
    MAIL=/var/mail//oracle
    PS1=sc-prodbd0-1>(oracle):$PWD>
    LDR_CNTRL=NOKRTL
    USER=oracle
    ORACLE_HOSTNAME=sc-prodBD0-1
    SHELL=/bin/ksh
    ORACLE_TERM=vt220
    HOME=/orasoft
    LD_LIBRARY_PATH=/oracluster/product/11.2/clusterware/lib:/usr/local/lib:
    TERM=vt220
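    For what it's worth, ORA-15020 usually means the discovery string matches the same physical disk through more than one device path (here, for example, the same units may be visible through both the ...89E8 and ...89ED controller paths). A sketch of how one might confirm this and then narrow discovery; the diskstring pattern below is an assumption and must end up matching exactly one path per disk:
        sqlplus / as sysasm <<'EOF'
        -- list every path ASM discovery currently sees
        SELECT path, header_status FROM v$asm_disk ORDER BY path;
        -- restrict discovery to a single set of paths
        ALTER SYSTEM SET ASM_DISKSTRING = '/dev/rdsk/c1t50001FE1500B89E8d*s6' SCOPE=BOTH;
        EOF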

  • Cannot identify ASM disks

    Hi All,
    I am getting this error: [FATAL] [INS-30508] Invalid ASM disks. The environment is as below:
    HP-UX Itanium
    11.2.0.2
    Volume group ownership:
    drwxr-xr-x 2 root sys 8192 Apr 21 14:09 vg1_x_ks_r1_1
    Logical volume ownership:
    crw-rw---- 1 oracle dba 64 0x010001 Apr 21 14:09 rlv_x_kw_01
    crw-rw---- 1 oracle dba 64 0x010002 Apr 21 14:09 rlv_x_kw_02
    When I change the volume group owner to oracle, it works fine. I want to know why it should be oracle.
    Please let me know if you have any references on this, as I am a beginner and I want to learn in depth.
    Thanks,

    Just as a side note:
    For 11.2 you should set the ownership to the "installation user" of Grid Infrastructure and the "ASM administrator" user group, both of which you specify during the GI installation.
    For this reason the documentation states
    grid:asmadmin
    However, if in your case you are using oracle as the installation owner and dba as the ASM administrator group, then this is o.k.
    Sebastian
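    In practice that means setting owner, group, and mode on each raw logical volume before running the installer. A sketch, with paths assembled from the listings in the question (they may differ on your system; substitute grid:asmadmin if those are your GI owner and ASM administrator group):
        chown oracle:dba /dev/vg1_x_ks_r1_1/rlv_x_kw_01 /dev/vg1_x_ks_r1_1/rlv_x_kw_02
        chmod 660 /dev/vg1_x_ks_r1_1/rlv_x_kw_01 /dev/vg1_x_ks_r1_1/rlv_x_kw_02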

  • ASM disk corrupted

    Hi,
    A few days ago, I asked a question about "ASM disk header corruption" in the thread ASM disk header corruption.
    Because it was only my assumption, I didn't think about it deeply. This morning I encountered a problem from a thread. It said that the disk group couldn't mount, and ORA-15196 appeared in the alert.log.
    It occurred to me: if the disk group can't mount and the header of an ASM disk is corrupted, how should I deal with it? Will the data on the ASM disk be lost?
    Please help me with this problem.
    Thanks in advance.

    user526904 wrote:
    I hope you have a backup. In case it's media failure, and if you can determine the corrupt datafiles, just restore the corrupt datafiles from backup and recover them.

    Database backups cannot restore a corrupted ASM header. RMAN backs up the Oracle data files, not the physical ASM disk itself with the disk's headers.
    To restore that, you would need a physical disk backup. A physical disk backup needs all processes using the disks to terminate, in order to ensure that all file handles are closed and that the backup is consistent. Not something that is easily done in today's 24x7 environments. RAID is usually used to address this type of failure (e.g. via hot-swappable disks, where you simply replace the faulty disk with a new one while the storage system is running).
    So where you do not have that physical redundancy and have to deal with a "physical disk" error (like corrupted header blocks), you need to be extremely careful in how you try to recover it. I would not even try to touch that disk. I would ensure that no processes touch it at all, create a duplicate disk (same size), and manually "mirror" the data (using dd, for example). This serves two purposes: it tests whether physical reads on the problem disk succeed (is this actual media failure, or logical failure?), and it creates a second disk that can be used for testing/playing purposes before trying any fixes on the problem disk.
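    A minimal sketch of that duplication step, with hypothetical source and spare device names; run it only while ASM is down and nothing else holds the disk:
        # copy the whole problem disk to an identically sized spare;
        # conv=noerror,sync continues past unreadable sectors and pads them
        dd if=/dev/rdsk/asm_bad_disk of=/dev/rdsk/asm_spare_disk bs=1024k conv=noerror,sync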

  • Do I need to resize the sparsebundle file after migrating to new larger AirPort Time Capsule?

    Just upgraded my AirPort Time Capsule from an old 1 TB to a new 2 TB version.
    I have migrated the sparsebundle file using the Ethernet method, but now I want to know whether I need to resize the file using Disk Utility, or whether it happens automatically. If it doesn't resize automatically, then Apple should at least offer some option to automate this process, as it seems a natural request.
    Over time, people will want to keep using their same old backup and simply let the file grow as it is moved onto a newer, larger device, while still having access to all the previous old backups on one device/system.

    It should continue to resize automatically, or would if Yosemite worked properly. Since Time Machine in Yosemite is lousy, it may well need you to fix it. It isn't hard.
    See Pondini A8 here.
    http://pondini.org/TM/Troubleshooting.html
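    If it does need a manual fix, the usual tool is hdiutil from Terminal. A sketch, with a hypothetical sparsebundle name on the mounted Time Capsule share:
        # allow the backup image to grow to the new disk's capacity
        hdiutil resize -size 2t /Volumes/Data/MyMac.sparsebundle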

  • ASM Disk

      What types of disks are supported and needed for Oracle ASM?
      We are using a SAN for shared storage.
      Can we use a disk partition for ASM, and do all the disks need to be partitioned with a primary partition type?
      Can we use a single 800 GB disk, divide it into 3 primary partitions and 1 extended partition, create logical partitions on the extended partition, and use those for Oracle ASM?
    Can anybody please provide an accurate minimal disk group recommendation for installing Oracle RAC 11.2.0.2.0 Standard Edition 64-bit on an RHEL 6 host?

    Oracle documentation says you can create an Oracle ASM disk group using one of the following storage resources:
    Disk Partition: A disk partition can be the entire disk drive or a section of a disk drive. However, the Oracle ASM disk cannot be in a partition that includes the partition table, because the partition table would be overwritten.
    Logical Unit Number (LUN): A LUN is a disk presented to a computer system by a storage array. Oracle recommends that you use hardware RAID functionality to create LUNs. Storage hardware RAID 0+1 or RAID 5, and other RAID configurations, can be provided to Oracle ASM as Oracle ASM disks.
    Logical Volume: A logical volume is supported in less complicated configurations where a logical volume is mapped to a LUN, or a logical volume uses disks or raw partitions. Logical volume configurations are not recommended by Oracle because they create a duplication of functionality. Oracle also does not recommend using logical volume managers for mirroring, because Oracle ASM provides mirroring.
    Network File System (NFS): An Oracle ASM disk group can be created from NFS files, including Oracle Direct NFS (dNFS), as well as whole disks, partitions, and LUNs. The NFS files that are provisioned to a disk group may come from multiple NFS servers to provide better load balancing and flexible capacity planning. Direct NFS can be used to store data files, but is not supported for Oracle Clusterware files. To install Oracle Real Application Clusters (Oracle RAC) on Windows using Direct NFS, you must also have access to a shared storage method other than NFS for the Oracle Clusterware files.
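    As a minimal illustration of the partition/LUN case, an external-redundancy disk group over two SAN LUN partitions might be created like this. A sketch only; the device names are hypothetical, and external redundancy assumes the storage array already provides RAID protection:
        sqlplus / as sysasm <<'EOF'
        CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
          DISK '/dev/sdb1', '/dev/sdc1';
        EOF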
