Dropping a Disk from ASM disk group

Hi Team,
I was dropping 10 disks from ASM due to a storage migration.
I dropped 9 disks successfully, but for 1 disk (/dev/oracleasm/disks/ST_OCR_VOTE02) I cannot tell which disk group it belongs to.
It is not part of any group, yet it still shows as MEMBER. Please help; I am not sure which group this disk is part of.
SQL> select GROUP_NUMBER,DISK_NUMBER,NAME,path,header_status,FAILGROUP from v$asm_disk where path='/dev/oracleasm/disks/ST_OCR_VOTE02';
GROUP_NUMBER DISK_NUMBER NAME                           PATH                                               HEADER_STATU FAILGROUP
           0           0                                /dev/oracleasm/disks/ST_OCR_VOTE02                 MEMBER
SQL>
GROUP_NUMBER DISK_NUMBER STATE    HEADER_STATUS PATH                                 TOTAL_MB   FREE_MB
           0          17 NORMAL   FORMER        /dev/oracleasm/disks/STASM3_02              0         0
           0           9 NORMAL   FORMER        /dev/oracleasm/disks/STASM1_01              0         0
           0          10 NORMAL   FORMER        /dev/oracleasm/disks/STASM5_01              0         0
           0          11 NORMAL   FORMER        /dev/oracleasm/disks/STASM4_01              0         0
           0          12 NORMAL   FORMER        /dev/oracleasm/disks/STASM3_01              0         0
           0          13 NORMAL   FORMER        /dev/oracleasm/disks/STASM2_01              0         0
           0          14 NORMAL   FORMER        /dev/oracleasm/disks/STASM1_02              0         0
           0          15 NORMAL   FORMER        /dev/oracleasm/disks/STASM2_02              0         0
           0          16 NORMAL   FORMER        /dev/oracleasm/disks/STASM4_02              0         0
           0           0 NORMAL   MEMBER        /dev/oracleasm/disks/ST_OCR_VOTE02          0         0
Thanks Manohar

Hi ,
Since the GROUP_NUMBER of that disk is 0, it is not part of any mounted diskgroup on your system.
So you cannot drop that disk with ALTER DISKGROUP ... DROP DISK.
Most likely the disk was evicted some time back due to some issue, but was not dropped cleanly, so the disk header still holds the old diskgroup information.
kfed read <device_name> | egrep "kfbh.type|kfdhdb.hdrsts|kfdhdb.grpname"
The command above will tell you why the header still shows MEMBER.
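For example, a hedged sample of what a stale header might look like (values here are illustrative; KFDHDR_MEMBER and the group name are what to look for):
# kfed read /dev/oracleasm/disks/ST_OCR_VOTE02 | egrep "kfbh.type|kfdhdb.hdrsts|kfdhdb.grpname"
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
kfdhdb.grpname:                OLD_DATA ; 0x048: length=8
kfdhdb.grpname tells you which diskgroup last owned the disk.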
To reuse that disk:
-- Validate that the disk shows the same status on all nodes of this cluster.
-- If so, overwrite the stale disk header by using the FORCE option to create a new diskgroup on it or add it to an existing diskgroup.
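A minimal sketch of that FORCE add (the diskgroup name DATA is only an example; FORCE overwrites the header, so double-check the device on every node first):
SQL> ALTER DISKGROUP DATA ADD DISK '/dev/oracleasm/disks/ST_OCR_VOTE02' FORCE;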
Regards,
Aritra

Similar Messages

  • Asm disk showing as former after dropping disk groups

I used the excellent note by Vincent Chaun on installing RAC 10g on VMware and I got the RAC working. Exploring RAC, I dropped the disk groups, and when I went to recreate them through dbca, the disks only show as FORMER and not PROVISIONED. Can anyone help on how to get this working?
Thanks

If the disk has been a member of a disk group and has been cleanly dropped, the header status will be FORMER. You can create a disk group on a disk which has any of the following statuses:
    CANDIDATE
    FORMER
    PROVISIONED
The first time a disk is used, the header status will be CANDIDATE. The PROVISIONED header status is similar to CANDIDATE, except that PROVISIONED implies that an additional platform-specific action has been taken by an administrator to make the disk available for ASM (for example, stamping it with ASMLib). You can also blow away the disk label by using 'dd' if you want to get rid of the FORMER header status, but as said above, you can create a diskgroup on a disk whose header status is FORMER.
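For completeness, a hedged sketch of that 'dd' wipe (destructive; the device name is an example, and you must be certain the disk is unused on every node):
# zero the first 10MB of the device, which clears the ASM disk header
dd if=/dev/zero of=/dev/sdc1 bs=1024k count=10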
    Edited by: Gagandeep Arora on Jun 2, 2010 8:26 AM

  • Best way to move redo log from one disk group to another in ASM?

    Hi All,
Our DB is a 10.2.0.3 RAC database, and the database servers run Windows 2003 Server.
We need to move more than 50 redo logs (some regular and some standby) which are not redundant from one disk group to another. Say we need to move from disk group 1 to disk group 2. Here are the options we are thinking about, but we are not sure which one is best from an ease and safety perspective.
    Thank you very much for your help in advance.
    Shirley
    Option 1:
    1)     shutdown immediate
    2)     copy log files from disk group 1 to disk group2 using RMAN (need to research on this)
    3)     startup mount
    4)     alter database rename file ….
5)     alter database open
    6)     delete the redo files from disk group 1 in ASM (how?)
    Option 2:
    1)     create a set of redo log groups in disk group 2
    2)     drop the redo log groups in disk group 1 when they are inactive and have been archived
3)     delete the redo files associated with those dropped groups from disk group 1 (how?) (according to the Oracle manual, when you drop a redo log group the operating system files are not deleted and you need to manually delete those files) - see the sketch after the options
    Option 3:
    1)     create a set of redo members in disk group 2 for each redo log group in disk group 1
2)     drop the redo log members in disk group 1
    3)     delete the redo files from disk group 1 associated with the dropped members
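For reference, a hedged sketch of Option 2 (group number 51, the 512M size, and '+DG2' are illustrative; repeat per group; Oracle-managed files in ASM are deleted automatically when the group is dropped, otherwise remove them with asmcmd rm):
SQL> alter database add logfile thread 1 group 51 ('+DG2') size 512M;
SQL> alter system archive log current;
SQL> select group#, thread#, status from v$log;   -- a group must be INACTIVE before dropping
SQL> alter database drop logfile group 1;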

    Absolutely not, they are not even remotely similar concepts.
OMF: Oracle Managed Files. It is an RDBMS feature: no matter what your storage technology is, Oracle will take care of file naming and location. You only have to define the size of a file, and in the case of a tablespace on an OMF DB configuration you only need to issue a command similar to this:
CREATE TABLESPACE <TSName>;
The OMF environment then creates an autoextensible datafile at the predefined location, with 100M by default as its initial size.
On ASM it should only be required to specify '+DGroupName' as the datafile or redo log file argument so it can be fully managed by ASM.
EMC: http://www.emc.com No further comments on it.
    ~ Madrid
    http://hrivera99.blogspot.com

  • ASM Device Blocks from DISK GROUP

    Hi Experts,
    Need your expertise in finding the issues.
    Currently we are in Linux RHEL 6, oracle 11.2.0.3
We don't have the oracleasm binaries; we are using udev.
    We have 12 ASM DISK groups with names ASM_DISK_01... ASM_DISK_12
All I am interested in is how I can see which block devices the ASM disks are mapped to,
i.e. ASM_DISK_01 is mapped to /dev/sd1 on the host; which file do I need to check to see that?
I tried the files under /dev/mapper, but no luck.
    Please help

Oracle does not support ASMLib on Red Hat Linux 6.x. It supports it on Oracle's own version of Linux; however, it no longer supplies the ASMLib disk management utilities for Red Hat as it did in the Red Hat Linux 5.x release. You really don't need ASMLib, because Red Hat Linux provides the disk/device management with udev.
I have just successfully implemented Red Hat Linux 6.3 using Oracle Grid Infrastructure 11.2.0.3 ASM and Oracle Real Application Cluster (RAC) release 11.2.0.3, and it is working like a charm. At first I thought that I had to do a lot of things differently, thinking that time as we know it had changed (the Mayan calendar end); however, we are just in a different time, and Oracle 11gR2 (11.2.0.3) Grid Infrastructure and Red Hat Linux 6.3 complement each other well.
    Oracle continues to improve on its Grid Infrastructure ASM and Real Application Cluster ASM as Red Hat Linux continues to provide improved functionality and enterprise management.
    I'm preparing to post my most recent build.
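To answer the original question, a hedged way to map an ASM disk name back to a kernel block device (the /dev/asmdisk01 path is an example; your udev rules may create devices elsewhere):
SQL> select name, path from v$asm_disk;
# then, as root, match the major:minor pair of the udev device to the /dev/sd* name
ls -l /dev/asmdisk01
udevadm info --query=all --name=/dev/asmdisk01 | grep -i devlinks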
    Edited by: yakub21 on Feb 13, 2013 6:14 PM

  • Disk Group from normal to external in a RAC environment

    Hello,
    my environment is based on 11.2.0.3.7 RAC SE with two nodes.
    Currently I have 4 DG, all in NORMAL redundancy, to contain respectively:
    ARCH
    REDO
    DATA
    VOTING
    At the moment I focus only with non-VOTING DG.
    Each of them has 2 failure groups that physically maps to disks in 2 different server rooms.
    The storage arrays are EMC VNX (one in each server room).
    They will be substituted by a VPLEX system that will be configured as a single storage entity with DGs in external redundancy.
    I see from document id
    How To Move The Database To Different Diskgroup (Change Diskgroup Redundancy) (Doc ID 438580.1)
that apparently it is not possible to do this online.
    Can you confirm?
    I also read the thread in this forum:
    https://community.oracle.com/message/10173887#10173887
    that seems to confirm this too.
    I have some pressure to free the old storage arrays, but in short time I will not be able to stop the RAC RDBMSs I have in place.
So the question is: can I proceed in steps, so that I
1) add a third failure group composed of the VPLEX disks
2) wait for the data sync of the third failure group
3) drop one of the two old failure groups (ASM should let me do this, correct?)
4) brutally remove all disks of the remaining old storage failure group
and run with reduced redundancy for some time until I can afford the maintenance window? A sketch of steps 1 and 3 follows at the end of this post.
    Inside the ASM administrator guide I see this:
    Normal redundancy disk group - It is best to have enough free space in your disk
    group to tolerate the loss of all disks in one failure group. The amount of free
    space should be equivalent to the size of the largest failure group.
    and also
    Normal redundancy
    Oracle ASM provides two-way mirroring by default, which means that all files are
    mirrored so that there are two copies of every extent. A loss of one Oracle ASM
    disk is tolerated. You can optionally choose three-way or unprotected mirroring.
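For reference, the sketch of steps 1 and 3 mentioned above (diskgroup and failgroup names and device paths are examples):
SQL> alter diskgroup DATA add failgroup FG3 disk '/dev/vplex01','/dev/vplex02' rebalance power 8;
SQL> select group_number, operation, state, est_minutes from v$asm_operation;  -- wait for the rebalance to finish
SQL> alter diskgroup DATA drop disks in failgroup FG1 rebalance power 8;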

When you create an external table, you must specify the location the external table will use to access its external data.
This is done with the LOCATION and/or DEFAULT_DIRECTORY parameters.
If you want every instance in your cluster to be able to use one specific external table, then the location specified in the CREATE EXTERNAL TABLE command must be visible/accessible to all servers in your cluster, probably via some shared OS disk/storage configuration, e.g. mounting remote disks; this could easily make external table performance slower than when the specified location is local to the DB server.
This is the one and only way, because it is impossible to specify a remote location, either when creating the directory or in any parameter when creating the external table.

  • How to move or migrate whole directories between ASM disk groups?

    Hello everyone!
I'm playing around with Oracle ASM and Oracle Database (11g R1); I'm a student. This is just for testing purposes.
    Computer specifications are:
    Processor: Intel Pentium 4 HT 3.00 Ghz.
    RAM Memory: 2 GB.
    Hard Disk: 250 GB
    O.S.: Windows XP Professional Edition SP 2.
I installed Oracle ASM, created an ASM disk group (+FRA), installed Oracle Database, and created a testing database. The database is working properly on the ASM disk group. Days ago, I got help about the initialization parameters DB_CREATE_FILE_DEST, DB_CREATE_ONLINE_LOG_DEST_1, DB_CREATE_ONLINE_LOG_DEST_2 and DB_RECOVERY_FILE_DEST; based on their function, I created another 3 ASM disk groups (+FILES, +LOG1, +LOG2). Currently, each of the four initialization parameters points to its corresponding ASM disk group. As you can deduce, at installation time of the Oracle Database I used the ASM disk group "+FRA", and inside it were created the directories CONTROLFILE, DATAFILE, ONLINELOG, PARAMETERFILE, TEMPFILE and the SPFile.
My point is that I want to move the directories DATAFILE, PARAMETERFILE, TEMPFILE and the SPFile to "+FILES", and ONLINELOG and CONTROLFILE to "+LOG1" and "+LOG2"; this way, the ASM disk group "+FRA" will contain only the Flash Recovery Area. What is the procedure to do this?
    Thanks in advance!

user1987306 wrote:
My point is that I want to move the directories DATAFILE, PARAMETERFILE, TEMPFILE and the SPFile to "+FILES", and ONLINELOG and CONTROLFILE to "+LOG1" and "+LOG2". What is the procedure to do this?
    Hi,
There are a couple of approaches you can use; here are some of them:
- To move a datafile, start the database in mount state:
RMAN > copy datafile '+FRA/xxx' to '+FILES1';
SQL > alter database rename file '+FRA/xxx' to '+FILES1/xxx';
(use the full name of the copy that RMAN reports as the target of the rename)
    - To move tempfile
    SQL > alter tablespace TEMP add tempfile '+FILES1' SIZE 10M;
    SQL > alter database tempfile '+FRA/xxx' drop;
    - To move onlinelog
    SQL > alter database add logfile member '+LOG1' to group 1;
    SQL > alter database add logfile member '+LOG2' to group 1;
    SQL > alter database drop logfile member '+FRA/xxx';
    - To move controlfile
RMAN > restore controlfile to '+FILES1' from '+FRA/xxx';
update the spfile to reflect the new location of the controlfile
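A hedged example of that last step (the OMF name is illustrative; use the name reported when the controlfile was restored):
SQL > alter system set control_files='+FILES1/mydb/controlfile/current.260.774839265' scope=spfile sid='*';
then restart the database so the new controlfile location takes effect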
    Cheers

  • Question on Asm Disk Groups

    Hello,
I have five 200GB disks (1 TB total) in my prod RAC environment, and we are using 11g 11.1.0.6. These disks were mounted a while back, and the DB currently uses 50GB.
My question is: I need to drop one of the 200GB disks, since we are having a shortage of disks at the moment, so I can use it as the flash recovery drive. My understanding of ASM is that even if one disk goes away, a rebalance happens and the data is striped across the four remaining disks without any loss of data or downtime.
We are using external redundancy on these disks, and they use RAID 5.
Can I attempt this? If it is possible, could you give me the best command to do it?
    Thx All

Just do an
alter diskgroup <name> drop disk <name>;
The documentation states:
DROP DISK: The DROP DISK clause lets you drop one or more disks from the disk group and automatically rebalance the disk group. When you drop a disk, Automatic Storage Management relocates all the data from the disk and clears the disk header so that it no longer is part of the disk group.
    Note:
If you need to drop more than one disk, I strongly recommend doing it in a single step, e.g.
alter diskgroup <name> drop disk <nameA>,<nameB>;
This will do the rebalance only once.
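For reference, the rebalance triggered by the drop can be watched from v$asm_operation (EST_MINUTES is only an estimate):
select group_number, operation, state, power, est_minutes from v$asm_operation;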
    Ronny Egner
    My Blog: http://blog.ronnyegner-consulting.de

  • Adding 2 disks to existing production ASM disk group - (urgent)

    Environment: AIX
    ORACLE: 10.2.0.4 RAC ( 2 node)
    We want to add 2 disks(/dev/asm_disk65 & /dev/asm_disk66) to existing production ASM diskgroup (DATA_GRP01).
    SQL> /
    DISK_GROUP_NAME DISK_FILE_PATH DISK_FILE_NAME DISK_FILE_FAIL_GROUP
    DATA_GRP01 /dev/asm_disk14 DATA_GRP01_0006 DATA_GRP01_0006
    DATA_GRP01 /dev/asm_disk15 DATA_GRP01_0007 DATA_GRP01_0007
    DATA_GRP01 /dev/asm_disk13 DATA_GRP01_0005 DATA_GRP01_0005
    DATA_GRP01 /dev/asm_disk3 DATA_GRP01_0010 DATA_GRP01_0010
    DATA_GRP01 /dev/asm_disk12 DATA_GRP01_0004 DATA_GRP01_0004
    DATA_GRP01 /dev/asm_disk11 DATA_GRP01_0003 DATA_GRP01_0003
    DATA_GRP01 /dev/asm_disk10 DATA_GRP01_0002 DATA_GRP01_0002
    DATA_GRP01 /dev/asm_disk4 DATA_GRP01_0011 DATA_GRP01_0011
    DATA_GRP01 /dev/asm_disk1 DATA_GRP01_0001 DATA_GRP01_0001
    DATA_GRP01 /dev/asm_disk9 DATA_GRP01_0016 DATA_GRP01_0016
    DATA_GRP01 /dev/asm_disk0 DATA_GRP01_0000 DATA_GRP01_0000
    DATA_GRP01 /dev/asm_disk7 DATA_GRP01_0014 DATA_GRP01_0014
    DATA_GRP01 /dev/asm_disk5 DATA_GRP01_0012 DATA_GRP01_0012
    DATA_GRP01 /dev/asm_disk8 DATA_GRP01_0015 DATA_GRP01_0015
    DATA_GRP01 /dev/asm_disk2 DATA_GRP01_0009 DATA_GRP01_0009
    DATA_GRP01 /dev/asm_disk16 DATA_GRP01_0008 DATA_GRP01_0008
    DATA_GRP01 /dev/asm_disk6 DATA_GRP01_0013 DATA_GRP01_0013
    [CANDIDATE] /dev/asm_disk65
    [CANDIDATE] /dev/asm_disk66
    We issue,
ALTER DISKGROUP DATA_GRP01 ADD DISK '/dev/asm_disk65' name DATA_GRP01_0017 REBALANCE POWER 5;
ALTER DISKGROUP DATA_GRP01 ADD DISK '/dev/asm_disk66' name DATA_GRP01_0018 REBALANCE POWER 5;
    SQL> ALTER DISKGROUP DATA_GRP01 ADD DISK '/dev/asm_disk65' name DATA_GRP01_0017 REBALANCE POWER 5;
    ALTER DISKGROUP DATA_GRP01 ADD DISK '/dev/asm_disk65' name DATA_GRP01_0017 REBALANCE POWER 5
    ERROR at line 1:
    ORA-15032: not all alterations performed
    ORA-15075: disk(s) are not visible cluster-wide
    Node1:
    crw-rw---- 1 oracle dba 38, 68 Nov 18 02:33 /dev/asm_disk65
    crw-rw---- 1 oracle dba 38, 13 Nov 18 02:53 /dev/asm_disk13
    crw-rw---- 1 oracle dba 38, 10 Nov 18 03:00 /dev/asm_disk10
    crw-rw---- 1 oracle dba 38, 3 Nov 18 03:00 /dev/asm_disk3
    crw-rw---- 1 oracle dba 38, 7 Nov 18 03:00 /dev/asm_disk7
    crw-rw---- 1 oracle dba 38, 2 Nov 18 03:00 /dev/asm_disk2
    crw-rw---- 1 oracle dba 38, 8 Nov 18 03:00 /dev/asm_disk8
    crw-rw---- 1 oracle dba 38, 15 Nov 18 03:00 /dev/asm_disk15
    crw-rw---- 1 oracle dba 38, 14 Nov 18 03:00 /dev/asm_disk14
    crw-rw---- 1 oracle dba 38, 12 Nov 18 03:00 /dev/asm_disk12
    crw-rw---- 1 oracle dba 38, 11 Nov 18 03:00 /dev/asm_disk11
    crw-rw---- 1 oracle dba 38, 5 Nov 18 03:00 /dev/asm_disk5
    crw-rw---- 1 oracle dba 38, 4 Nov 18 03:00 /dev/asm_disk4
    crw-rw---- 1 oracle dba 38, 69 Nov 18 03:39 /dev/asm_disk66
    crw-rw---- 1 oracle dba 38, 9 Nov 18 04:39 /dev/asm_disk9
    crw-rw---- 1 oracle dba 38, 6 Nov 18 06:36 /dev/asm_disk6
    crw-rw---- 1 oracle dba 38, 16 Nov 18 06:36 /dev/asm_disk16
    Node 2 :
    crw-rw---- 1 oracle dba 38, 68 Nov 15 17:59 /dev/asm_disk65
    crw-rw---- 1 oracle dba 38, 69 Nov 15 17:59 /dev/asm_disk66
    crw-rw---- 1 oracle dba 38, 8 Nov 17 19:55 /dev/asm_disk8
    crw-rw---- 1 oracle dba 38, 7 Nov 17 19:55 /dev/asm_disk7
    crw-rw---- 1 oracle dba 38, 5 Nov 17 19:55 /dev/asm_disk5
    crw-rw---- 1 oracle dba 38, 3 Nov 17 19:55 /dev/asm_disk3
    crw-rw---- 1 oracle dba 38, 2 Nov 17 19:55 /dev/asm_disk2
    crw-rw---- 1 oracle dba 38, 10 Nov 17 19:55 /dev/asm_disk10
    crw-rw---- 1 oracle dba 38, 4 Nov 17 21:04 /dev/asm_disk4
    crw-rw---- 1 oracle dba 38, 12 Nov 17 21:45 /dev/asm_disk12
    crw-rw---- 1 oracle dba 38, 14 Nov 17 22:21 /dev/asm_disk14
    crw-rw---- 1 oracle dba 38, 15 Nov 17 23:06 /dev/asm_disk15
    crw-rw---- 1 oracle dba 38, 11 Nov 17 23:18 /dev/asm_disk11
    crw-rw---- 1 oracle dba 38, 9 Nov 18 04:39 /dev/asm_disk9
    crw-rw---- 1 oracle dba 38, 6 Nov 18 04:41 /dev/asm_disk6
    crw-rw---- 1 oracle dba 38, 16 Nov 18 06:20 /dev/asm_disk16
    crw-rw---- 1 oracle dba 38, 13 Nov 18 06:20 /dev/asm_disk13
    node1 SQL> select GROUP_NUMBER,NAME,STATE from v$asm_diskgroup;
    GROUP_NUMBER NAME STATE
    1 DATA_GRP01 MOUNTED
    2 FB_GRP01 MOUNTED
    SQL> select GROUP_NUMBER,NAME,PATH,HEADER_STATUS,MOUNT_STATUS from v$asm_disk order by name;
    GROUP_NUMBER NAME PATH HEADER_STATUS MOUNT_STATUS
    1 DATA_GRP01_0000 /dev/asm_disk0 MEMBER CACHED
    1 DATA_GRP01_0001 /dev/asm_disk1 MEMBER CACHED
    1 DATA_GRP01_0002 /dev/asm_disk10 MEMBER CACHED
    1 DATA_GRP01_0003 /dev/asm_disk11 MEMBER CACHED
    1 DATA_GRP01_0004 /dev/asm_disk12 MEMBER CACHED
    1 DATA_GRP01_0005 /dev/asm_disk13 MEMBER CACHED
    1 DATA_GRP01_0006 /dev/asm_disk14 MEMBER CACHED
    1 DATA_GRP01_0007 /dev/asm_disk15 MEMBER CACHED
    1 DATA_GRP01_0008 /dev/asm_disk16 MEMBER CACHED
    1 DATA_GRP01_0009 /dev/asm_disk2 MEMBER CACHED
    1 DATA_GRP01_0010 /dev/asm_disk3 MEMBER CACHED
    1 DATA_GRP01_0011 /dev/asm_disk4 MEMBER CACHED
    1 DATA_GRP01_0012 /dev/asm_disk5 MEMBER CACHED
    1 DATA_GRP01_0013 /dev/asm_disk6 MEMBER CACHED
    1 DATA_GRP01_0014 /dev/asm_disk7 MEMBER CACHED
    1 DATA_GRP01_0015 /dev/asm_disk8 MEMBER CACHED
    1 DATA_GRP01_0016 /dev/asm_disk9 MEMBER CACHED
    0 /dev/asm_disk65 MEMBER IGNORED
    0 /dev/asm_disk66 MEMBER CLOSED
Please guide us on the next step; we are running out of space. The disks were added under a non-existent group name (group number zero in the query above). How do we drop the disks and add them again?
Is there a statement to drop a disk by path name, like alter diskgroup <disk group name> drop disk '<path of the disk>'; -> is that right???
    Thanks in advance.
    Sakthi
    Edited by: SAKTHIVEL on Nov 18, 2010 10:08 PM

1. Just use one ALTER DISKGROUP command; the rebalance will be executed only once.
2. On node 2, do you see the same disks as on node 1?
select GROUP_NUMBER,NAME,PATH,HEADER_STATUS,MOUNT_STATUS from v$asm_disk order by name;
3. Try this:
    ALTER DISKGROUP DATA_GRP01
    ADD DISK '/dev/asm_disk65' name DATA_GRP01_0017,
                   '/dev/asm_disk66' name DATA_GRP01_0018
    REBALANCE POWER 5;
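If the add still fails with ORA-15075, a hedged check to run on each node (the disk string must cover the new paths, and v$asm_disk must report them identically on both nodes):
SQL> show parameter asm_diskstring
SQL> select path, header_status, mount_status from v$asm_disk where path like '/dev/asm_disk6%';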

  • Need for multiple ASM disk groups on a SAN with RAID5??

    Hello all,
I've successfully installed Clusterware and ASM on a 5-node system. I'm trying to use asmca (11gR2 on RHEL5) to configure the disk groups.
    I have a SAN, which actually was previously used for a 10G ASM RAC setup...so, reusing the candidate volumes that ASM has found.
    I had noticed on the previous incarnation....that several disk groups had been created, for example:
    ASMCMD> ls
    DATADG/
    INDEXDG/
    LOGDG1/
    LOGDG2/
    LOGDG3/
    LOGDG4/
    RECOVERYDG/
    Now....this is all on a SAN....which basically has two pools of drives set up each in a RAID5 configuration. Pool 1 contains ASM volumes named ASM1 - ASM32. Each of these logical volumes is about 65 GB.
    Pool #2...has ASM33 - ASM48 volumes....each of which is about 16GB in size.
    I used ASM33 from pool#2...by itself to contain my cluster voting disk and OCR.
My question is: with this type of setup, would creating as many disk groups as listed above really do any good for performance? With all of this on a SAN, with logical volumes on top of a couple of sets of RAID5 disks, would divisions at the disk group level with external redundancy achieve anything?
I was thinking of starting with about half of the ASM1-ASM31 'disks' to create one large DATADG disk group, which would house all of the database instances' data, indexes, etc. I'd keep the remaining large candidate disks for later growth.
I was going to use the pool of smaller disks (except the one already dedicated to cluster needs) to serve as a decently sized RECOVERYDG, to house logs, the flashback area, etc. This pool appears to be separate from pool #1, so possibly some speed benefits there.
But really, is there any need to separate the disk groups, based on a SAN with two pools of RAID5 logical volumes?
    If so, can someone give me some ideas why...links on this info...etc.
    Thank you in advance,
    cayenne

The best practice is to use 2 disk groups, one for data and the other for the flash/fast recovery area. There really is no need to have a disk group for each type of file; in fact, the more disks in a disk group (up to a point, in what I've seen) the better for performance and space management. However, there are times when multiple disk groups are appropriate (not saying this is one of them, only FYI), such as backup/recovery and life-cycle management. Typically you will still get benefit from double striping, i.e. having a SAN with RAID groups presenting multiple LUNs to ASM, and then having ASM use those LUNs in disk groups. I saw this in my own testing. Start off with a minimum of 4 LUNs per disk group, and add in pairs, as this provided optimal performance (at least in my testing). You should also have a set of standard LUN sizes to present to ASM so things are consistent across your enterprise; the sizing is typically done based on your database size. For example:
    300GB LUN: database > 10TB
    150GB LUN: database 1TB to 10 TB
    50GB LUN: database < 1TB
    As databases grow beyond the threshold the larger LUNs are swapped in and the previous ones are swapped out. With thin provisioning it is a little different since you only need to resize the ASM LUNs. I'd also recommend having at least 2 of each standard sized LUNs ready to go in case you need space in an emergency. Even with capacity management you never know when something just consumes space too quickly.
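As a hedged illustration of that two-disk-group layout (disk group names and device paths are examples; external redundancy because the SAN RAID5 already provides protection):
SQL> create diskgroup DATADG external redundancy disk '/dev/asm1','/dev/asm2','/dev/asm3','/dev/asm4';
SQL> create diskgroup RECOVERYDG external redundancy disk '/dev/asm34','/dev/asm35','/dev/asm36','/dev/asm37';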
    ASM is all about space savings, performance, and management :-).
    Hope this helps.

  • Difference between ASM Disk Group, ADVM Volume and ACFS File system

    Q1. What is the difference between an ASM Disk Group and an ADVM Volume ?
    To my mind, an ASM Disk Group is effectively a logical volume for Database files ( including FRA files ).
    11gR2 seems to have introduced the concepts of ADVM volumes and ACFS File Systems.
    An 11gR2 ASM Disk Group can contain :
    ASM Disks
    ADVM volumes
    ACFS file systems
    Q2. ADVM volumes appear to be dynamic volumes.
    However is this therefore not effectively layering a logical volume ( the ADVM volume ) beneath an ASM Disk Group ( conceptually a logical volume as well ) ?
    Worse still if you have left ASM Disk Group Redundancy to the hardware RAID / SAN level ( as Oracle recommend ), you could effectively have 3 layers of logical disk ? ( ASM on top of ADVM on top of RAID/SAN ) ?
    Q3. if it is 2 layers of logical disk ( i.e. ASM on top of ADVM ), what makes this better than 2 layers using a 3rd party volume manager ( eg ASM on top of 3rd party LVM ) - something Oracle encourages against ?
    Q4. ACFS File systems, seem to be clustered file systems for non database files including ORACLE_HOMEs, application exe's etc ( but NOT GRID_HOME, OS root, OCR's or Voting disks )
Can you create / modify ACFS file systems using ASM?
The Oracle topology diagram for ASM in the 11gR2 ASM Admin guide shows ACFS as part of ASM. I am not sure from this whether ACFS is part of ASM or ASM sits on top of ACFS.
Q5. Connected to Q4, there seem to be a number of different ways ACFS file systems can be created. Which of the below are valid methods?
    through ASM ?
    through native OS file system creation ?
    through OEM ?
    through acfsutil ?
    my head is exploding
    Any help and clarification greatly appreciated
    Jim

    Q1 - ADVM volume is a type of special file created in the ASM DG.  Once created, it creates a block device on the OS itself that can be used just like any other block device.  http://docs.oracle.com/cd/E16655_01/server.121/e17612/asmfilesystem.htm#OSTMG30000
Q2 - the ASM disk group is a disk group, not really a logical volume.  It combines attributes of both when used for database purposes, as the database and certain other applications know how to talk the "ASM" protocol.  However, you won't find any general purpose applications that can do so.  In addition, some customers prefer to deal directly with file systems and volume devices, which ADVM is made to do.  In your way of thinking, you could have 3 layers of logical disk, but each of them provides different attributes and characteristics.  This is not a bad thing though, as each has a slightly different focus: OS file system/device, database specific, and storage centric.
    Q3 - ADVM is specifically developed to extend the characteristics of ASM for use by general OS applications.  It understands the database performance characteristics and is tuned to work well in that situation.  Because it is developed in house, it takes advantage of the ASM design model.  Additionally, rather than having to contact multiple vendors for support, your support is limited to calling Oracle, a one-stop shop.
Q4 - You can create and modify ACFS file systems using command line tools and ASMCA.  Creating and modifying logical volumes happens through SQL (ASM), asmcmd, and ASMCA.  EM can also be used for both items.  ACFS sits on top of ADVM, which is a file in an ASM disk group.  ACFS is aware of the characteristics of ASM/ADVM volumes, and tunes its I/O to make best use of those characteristics.
    Q5 - several ways:
    1) Connect to ASM with SQL, use 'alter diskgroup add volume' as Mihael points out.  This creates an ADVM volume.  Then, format the volume using 'mkfs' (*nix) or acfsformat (windows).
    2) Use ASMCA - A gui to create a volume and format a file system.  Probably the easiest if your head is exploding.
    3) Use 'asmcmd' to create a volume, and 'mkfs' to format the ACFS file system.
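A minimal sketch of method 1 on Linux (names are examples; the device under /dev/asm gets a generated suffix, so check v$asm_volume for the real name):
SQL> alter diskgroup DATA add volume vol1 size 10G;
SQL> select volume_name, volume_device from v$asm_volume;
# as root:
mkfs -t acfs /dev/asm/vol1-123
mkdir -p /u02/acfs
mount -t acfs /dev/asm/vol1-123 /u02/acfs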
    Here is information on ASMCA, with examples:
    http://docs.oracle.com/cd/E16655_01/server.121/e17612/asmca_acfs.htm#OSTMG94348
    Information on command line tools, with examples:
    Basic Steps to Manage Oracle ACFS Systems

  • ASM Disk Sizes in a Disk Group

I have a disk group that contains one ASM disk of 500GB. It needs to grow, and my question is: does the next disk added to the group need to be of the same size?
    For example -
ASM_Disk1 = 500GB and is in ASM_Diskgrp1
ASM_Disk2 = 250GB >> can it be added to ASM_Diskgrp1?
There is a concern about the balancing etc. done by ASM.
    Wondering what experiences anyone has had with this?
    Thanks!

    Hi,
It's not mandatory; you can use disks with different sizes, but it is strongly recommended that you put disks of the same size in a disk group.
Ideally, use disks with the same storage capacity and performance.
The main reason is that ASM balances I/O across all disks in a disk group, so using disks of the same capacity and performance helps ASM spread I/O and storage efficiently within the disk group.
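A quick way to see how evenly ASM is using disks of different sizes (group_number 1 is an example):
SQL> select name, total_mb, free_mb, round(100*(total_mb-free_mb)/total_mb,1) pct_used from v$asm_disk where group_number = 1;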
    Regards,
    Cerreia

  • DBCA did not see the ASM disk group in NODE 2 but see in NODE 1

Has anyone run into issues creating a database using DBCA with ASM as the file system?
Our issue was that on both nodes DBCA did not see the ASM disk group.
After setting TNS_ADMIN on both nodes and running DBCA as administrator, DBCA on Node 1 is now able to see the ASM disk group. Unfortunately, on Node 2 it didn't work out.
We don't know why DBCA on Node 2 still doesn't see the ASM disk group, since both nodes are configured the same.
Any ideas? Please advise.
For your information, we are using Windows 64-bit, Oracle 11g R2.
    Thank you in advance for those who will respond.
    Edited by: 822505 on Dec 20, 2010 7:47 PM

    822505 wrote:
We don't know why DBCA on Node 2 still doesn't see the ASM disk group, since both nodes are configured the same.
Are the disks given to ASM visible from Node 2?
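For example, connect to the ASM instance on Node 2 and run the query below (on Windows, asmtool-stamped disks typically show paths like \\.\ORCLDISKDATA0):
SQL> select path, header_status, state from v$asm_disk;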
    Aman....

  • How to create ASM device disk groups

I have a new Oracle Unbreakable Linux install with Oracle 11g running. Now I want to install ASM. When going through dbca there are no available disk groups, and when I click on "Create New" there are no member disks for me to use. Normally these are already created for me by a Unix admin, but this is a home setup. I'd appreciate any help.

    Hi,
You need to install the RPMs below and configure ASMLib. Follow the steps below; after the setup, install ASM and it will show the available disks.
    Required RPM for ASM Configuration
    oracleasm-support-2.0.1-1.i386.rpm
    oracleasm-2.4.21-37.EL-1.0.4-1.i686.rpm
    oracleasmlib-2.0.1-1.i386.rpm
    Configuring ASMLib
    Before using ASMLib, you must run a configuration script to prepare the driver. Run the following command as root, and answer the prompts as shown in the example below. Run this on each node in the cluster.
    /etc/init.d/oracleasm configure
    Configuring the Oracle ASM library driver
    This will configure the on-boot properties of the Oracle ASM library driver. The following questions will determine whether the driver is loaded on boot and what permissions it will have. The current values
    will be shown in brackets ('[]'). Hitting <ENTER> without typing an answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: [ OK ]
Creating /dev/oracleasm mount point: [ OK ]
Loading module "oracleasm": [ OK ]
Mounting ASMlib driver filesystem: [ OK ]
Scanning system for ASM disks: [ OK ]
    Next you tell the ASM driver which disks you want it to use. Oracle recommends that each disk contain a single partition for the entire disk. See Partitioning the Disks at the beginning of this section for an example of creating disk partitions.
    You mark disks for use by ASMLib by running the following command as root from one of the cluster nodes, run it from node 1:
    /etc/init.d/oracleasm createdisk DISK_NAME device_name
    Tip: Enter the DISK_NAME in UPPERCASE letters.
    Ex:
    [root@vmractest1 ASMlib]# /etc/init.d/oracleasm createdisk VOL1 /dev/sdc1
Marking disk "/dev/sdc1" as an ASM disk: [ OK ]
    [root@vmractest1 ASMlib]# /etc/init.d/oracleasm createdisk VOL2 /dev/sdd1
    Marking disk "/dev/sdd1" as an ASM disk: [ OK ]
    [root@vmractest1 ASMlib]# /etc/init.d/oracleasm createdisk VOL3 /dev/sde1
    Marking disk "/dev/sde1" as an ASM disk: [ OK ]
    Verify that ASMLib has marked the disks:
    [root@vmractest1 ASMlib]# /etc/init.d/oracleasm listdisks
    VOL1
    VOL2
    VOL3
    On all other cluster nodes, run the following command as root to scan for configured ASMLib disks:
    /etc/init.d/oracleasm scandisks
    Ex:
    [root@vmractest2 ASMlib_install]# /etc/init.d/oracleasm scandisks
    Scanning system for ASM disks: [ OK ]
    [root@vmractest3 ASMlib_install]# /etc/init.d/oracleasm scandisks
    Scanning system for ASM disks: [ OK ]
    And then check that everything is as in the first node:
    [root@vmractest2 ASMlib_install]# /etc/init.d/oracleasm listdisks
    VOL1
    VOL2
    VOL3
    [root@vmractest3 ASMlib_install]# /etc/init.d/oracleasm listdisks
    VOL1
    VOL2
    VOL3
    The above steps are clearly explained in the document " http://blogs.oracle.com/content/dav/oracle/mtblog/A/Al/AlejandroVargas/gems/StepbyStepRAConLinux3.pdf " under section *11- Configure ASMlib for ASM Management*
    If you still have issues please do let us know.
    Regards,
    Satya.

  • ASM how many Gigs per disk group for performance.

We are migrating from a file-system-based 1 TB 10.2.0.3 database on IBM AIX to a 10.2.0.3 database with ASM on Solaris 9.
Our old system functioned well by breaking up the I/O into 15 file systems of 72.5 GB each over three SAN arrays.
We have no experience with ASM and are wondering if we should break up the I/O in a similar way for performance, or just have one large disk group. There must be advantages/disadvantages one way or another. Looking for advice.
    Thanks very much in advance for any help.

ASM performs striping automatically, so you don't need to create multiple disk groups for performance. But if you have disks that differ in speed and size, then create separate disk groups for each type of disk.
    http://youngcow.net/doc/oracle10g/server.102/b14231/storeman.htm#i1014729

  • Share an ASM disk group among multiple nodes

    According to Oracle documentation:
    *“To share an ASM disk group among multiple nodes, you must install Oracle Cluster ware on all of the nodes, regardless of whether you install Oracle RAC on the nodes”.*
And if I understand it right, to share the same ASM disk group from multiple nodes (from separate RACs, or multiple non-RAC nodes), the ASM instances on those nodes need to communicate to synchronize ASM-related metadata, using a technique like Cache Fusion.
My question is how this ASM communication takes place among ASM instances located in different RACs and standalone servers. Do we need some kind of interconnect settings among the nodes?

    Hi,
    ASM and database instances require shared access to the disks in a disk group. ASM instances manage the metadata of the disk group and provide file layout information to the database instances.
    ASM instances can be clustered using Oracle Clusterware; there is one ASM instance for each cluster node. If there are several database instances for different databases on the same node, then the database instances share the same single ASM instance on that node.
    If the ASM instance on a node fails, then all of the database instances on that node also fail. Unlike a file system failure, an ASM instance failure does not require restarting the operating system. In an Oracle RAC environment, the ASM and database instances on the surviving nodes automatically recover from an ASM instance failure on a node.
    see this link
    http://download.oracle.com/docs/cd/B28359_01/server.111/b31107/asmcon.htm :)
