Question on adding LUNs to disk group

GI version: 11.2.0.3.0
Platform: OEL
I need to add 2 TB to the following disk group. Each LUN is 100 GB in size.
100 GB x 20 = 2 TB, which means I would have to run 20 commands like the one below:
SQL> ALTER DISKGROUP DATA_DG1 ADD DISK '/dev/sdc1' REBALANCE POWER 4;
Each ALTER DISKGROUP...ADD DISK command's rebalance takes around 30 minutes to complete,
so this operation would take 10 hours to complete,
i.e. 20 x 30 = 600 minutes = 10 hours!
I have a 3-node RAC. Can I run ALTER DISKGROUP...ADD DISK commands in parallel from each node? Will the ASM instance allow
multiple rebalance operations in a single disk group?

http://docs.oracle.com/cd/E11882_01/server.112/e18951/asmdiskgrps.htm
Oracle ASM can perform one disk group rebalance at a time on a given instance. If you have initiated multiple rebalances on different disk groups on a single node, then Oracle processes these operations in parallel on additional nodes if available; otherwise the rebalances are performed serially on the single node. You can explicitly initiate rebalances on different disk groups on different nodes in parallel.
Why do you need to run 20 of these commands??
Can you not just add all 20 disks in one go:
ALTER DISKGROUP DATADG ADD DISK
'/devices/diska5' NAME disk1,
'/devices/diska6' NAME disk2,
'/devices/diska7' NAME disk3,
...etc
You shouldn't need to trigger the rebalance manually: Oracle rebalances automatically when the disk group configuration changes, using the default power defined by the ASM_POWER_LIMIT parameter. You can still append a REBALANCE clause to the command above if you need a non-default power value.
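For illustration, here is a minimal sketch of the single-statement approach with an explicit rebalance power. Only /dev/sdc1 appears in the original post; the remaining device paths are hypothetical placeholders:

-- One ADD DISK statement covering all 20 new LUNs triggers a single rebalance
-- instead of 20 consecutive ones.
ALTER DISKGROUP DATA_DG1 ADD DISK
  '/dev/sdc1',
  '/dev/sdd1',
  '/dev/sde1',
  -- ...list the remaining 17 device paths here...
  '/dev/sdv1'
REBALANCE POWER 4;

A single rebalance over 2 TB of new capacity will still take a while, but you can watch its progress in V$ASM_OPERATION and, if needed, raise the power of the running rebalance with ALTER DISKGROUP DATA_DG1 REBALANCE POWER n.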

Similar Messages

  • Question about using mixed size disk groups with ASM

    I support a 10g RAC cluster running 10.2.0.3 and using ASM. We currently have a data disk group that is 600 Gb in size, made up of 3 individual 200 Gb raw disk devices. The RAC cluster is running on an IBM pSeries server using AIX 5.3
    I need to add more space, but the system administrator can only give me 100 Gb in the new raw device. What will be the impact, if any, of adding this smaller raw disk device into the same ASM data disk group? I understand that ASM tries to "balance" the overall usage of each raw device. Will ASM still work okay, or will it be constantly trying to rebalance the disk group because the one raw device is only half the size of the other three? Any help and advice given will be greatly appreciated. Thank you!

    Hi,
    There will be no impact and ASM will automatically balance the data across the disks.
    Also check :
    ASM Technical Best Practices [ID 265633.1]
    Regards
    Rajesh

  • Question about adding keywords to a group of images in Aperture

    Hi all,
    Not sure what I am doing incorrectly, but I am unable to add a keyword to more than 1 image at a time, even though a group of images is selected. So, for example, in Browser view, with 4 images selected, using the keyword HUD, I drag a keyword to one of the selected images, and only the one dragged to gets that keyword, not the others in the group.
    Any ideas?
    Thanks
    Aperture 3.1.1

    Hi Pi,
    Not quite sure what the purpose of that button is.
    You'll laugh about this sometime not too long from now. At some point you will have a bunch of photos selected and *not want to change your selection*, but you'll notice that you need to add some metadata to, or adjust, just 1 of those pictures. That's when you use "primary only". When you find yourself in that situation, it'll be a "eureka" moment, since right now you're wondering what it's all about.
    nathan

  • Need for multiple ASM disk groups on a SAN with RAID5??

    Hello all,
    I've successfully installed clusterware and ASM on a 5-node system. I'm trying to use asmca (11gR2 on RHEL5) to configure the disk groups.
    I have a SAN which was previously used for a 10g ASM RAC setup, so I'm reusing the candidate volumes that ASM has found.
    I had noticed on the previous incarnation....that several disk groups had been created, for example:
    ASMCMD> ls
    DATADG/
    INDEXDG/
    LOGDG1/
    LOGDG2/
    LOGDG3/
    LOGDG4/
    RECOVERYDG/
    Now....this is all on a SAN....which basically has two pools of drives set up each in a RAID5 configuration. Pool 1 contains ASM volumes named ASM1 - ASM32. Each of these logical volumes is about 65 GB.
    Pool #2...has ASM33 - ASM48 volumes....each of which is about 16GB in size.
    I used ASM33 from pool#2...by itself to contain my cluster voting disk and OCR.
    My question is: with this type of setup, would creating as many disk groups as listed above really do any good for performance? With everything on a SAN, where the logical volumes sit on top of a couple of sets of RAID5 disks, would dividing things at the disk group level (with external redundancy) achieve anything?
    I was thinking of starting with about half of the ASM1-ASM31 'disks'...to create one large DATADG disk group, which would house all of the database instances data, indexes....etc. I'd keep the remaining large candidate disks as needed for later growth.
    I was going to start with the pool of the smaller disks (except the 1 already dedicated to cluster needs) to basically serve as a decently sized RECOVERYDG...to house logs, flashback area...etc. It appears this pool is separate from pool #1...so, possibly some speed benefits there.
    But really...is there any need to separate the diskgroups, based on a SAN with two pools of RAID5 logical volumes?
    If so, can someone give me some ideas why...links on this info...etc.
    Thank you in advance,
    cayenne

    The best practice is to use 2 disk groups, one for data and the other for the flash/fast recovery area. There really is no need to have a disk group for each type of file; in fact, the more disks in a disk group (up to a point, in what I've seen) the better for performance and space management. However, there are times when multiple disk groups are appropriate (not saying this is one of them, only FYI), such as backup/recovery and life cycle management. Typically you will still get benefit from double striping, i.e. having a SAN with RAID groups presenting multiple LUNs to ASM, and then having ASM use those LUNs in disk groups. I saw this in my own testing. Start off with a minimum of 4 LUNs per disk group, and add in pairs, as this provided optimal performance (at least it did in my testing). You should also have a set of standard LUN sizes to present to ASM so things are consistent across your enterprise; the sizing is typically done based on your database size. For example:
    300GB LUN: database > 10TB
    150GB LUN: database 1TB to 10 TB
    50GB LUN: database < 1TB
    As databases grow beyond the threshold the larger LUNs are swapped in and the previous ones are swapped out. With thin provisioning it is a little different since you only need to resize the ASM LUNs. I'd also recommend having at least 2 of each standard sized LUNs ready to go in case you need space in an emergency. Even with capacity management you never know when something just consumes space too quickly.
    ASM is all about space savings, performance, and management :-).
    Hope this helps.
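    As a concrete illustration of the two-disk-group layout described above, here is a hedged sketch; the ASMLib-style device paths and the particular disks chosen are hypothetical, not from the thread:
    -- Data disk group built from four equally sized LUNs out of pool 1;
    -- external redundancy because the SAN already provides RAID5 protection.
    CREATE DISKGROUP DATADG EXTERNAL REDUNDANCY
      DISK '/dev/oracleasm/disks/ASM1',
           '/dev/oracleasm/disks/ASM2',
           '/dev/oracleasm/disks/ASM3',
           '/dev/oracleasm/disks/ASM4';
    -- Recovery disk group built from four of the smaller LUNs in pool 2.
    CREATE DISKGROUP RECOVERYDG EXTERNAL REDUNDANCY
      DISK '/dev/oracleasm/disks/ASM34',
           '/dev/oracleasm/disks/ASM35',
           '/dev/oracleasm/disks/ASM36',
           '/dev/oracleasm/disks/ASM37';
    Growth later is just ALTER DISKGROUP ... ADD DISK with further LUNs from the same pool, ideally added in pairs as suggested above.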

  • Pros/Cons for a Seperate ASM Archivelog Disk Group

    We have a non-ASM best practice of having our archivelogs on their own filesystem. When there is a major problem that fills up this space the datafiles (separate filesystem) themselves are not affected and the backup area used by RMAN (separate filesystem) is fully functional to perform archivelog sweeps.
    My DBA team and I have the same concern about having the archivelogs in the FRA along with backups, etc., in the ASM. We are proposing a third disk group just for archivelogs. Also a best practice of always having at least 1 spare disk available that can be added to a disk group in an emergency.
    Is there a reason why this is not a good idea or unnecessary? My team is new to ASM and I don't pretend to understand all the real-world intricacies of Oracle managing the FRA space/archivelogs/RMAN backups/etc. Thanks for any insight you can offer.

    I have read and am quite aware of Oracle's recommendations and have been an Oracle DBA since the venerable 7.0.16 release. In fact I have read through some 20 or so related Oracle white papers concerning ASM. However, in the 24 years I have been working with databases I am also well aware things don't always go as planned. You will fill up a disk group eventually whether that is from unexpectedly high data activity, human error, monitoring being down or any number of possibilities.
    So if we make the assumption that the FRA disk group will be out of space due to excessive numbers of archivelogs and/or RMAN retention window and backup growth problems, how do we quickly solve the issue while my prod DB is unavailable? Ah, the real world... If archivelogs and backups are in the FRA I really only have three choices: 1. add a disk to the disk group if I have one, 2. manually delete files, thus potentially impacting recoverability, or 3. possibly temporarily reduce my RMAN recovery window to allow RMAN to clean out "old" backup sets (thus impacting my SLA recovery window). Yes, there are probably other variations on these as well.
    Therefore, we are proposing a best practice of keeping a spare disk available and a separate disk group for archivelogs, so we have two potential methods to recover from this scenario. Now back to the original question: is there a reason why a separate disk group for archivelogs is not a good idea or unnecessary?
    Thanks
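    For what it's worth, a hedged sketch of the proposal (a dedicated archivelog disk group plus a spare disk held back for emergencies); the disk group name and device paths are hypothetical:
    -- Dedicated disk group for archived logs, separate from the FRA and DATA groups.
    CREATE DISKGROUP ARCHDG EXTERNAL REDUNDANCY
      DISK '/dev/asm_arch01', '/dev/asm_arch02';
    -- Point archiving at it instead of at the FRA.
    ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=+ARCHDG' SCOPE=BOTH SID='*';
    -- Emergency expansion with the proposed spare disk; ASM rebalances automatically.
    ALTER DISKGROUP ARCHDG ADD DISK '/dev/asm_arch_spare01';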

  • Could you provide sccripts to add old SAN disk groups to New SAN

    Hi,
    I'll give you a brief overview of my setup. It is a 2-node RAC connected to SAN storage and the size of the DB is 500 GB, on Windows 2003. They now have a new storage box with 3 TB and they asked me to migrate this entire setup to the new one, so please provide me a detailed doc on this.
    I have to migrate the database from the old SAN to the new SAN. Some guys suggested a solution, but I couldn't follow it clearly. They told me to:
    Simply add the new LUNs to the existing disk groups. Drop the old SAN LUNs from the disk groups (this merely marks the LUNs to be dropped). Issue a rebalance command to move and re-stripe the data from the old SAN LUNs onto the new SAN LUNs. Once this operation is completed, ASM will drop the old LUNs from the disk groups and these LUNs can be physically removed. All of this can happen with the instance running; it will be oblivious to the underlying storage change being made.
    So please provide me the scripts for the above with a detailed description of how to do it. I am the only DBA in the organisation and I am not very familiar with ASM, so please help me complete this task.
    I am still a new DBA... please help me out.
    Regards
    Poorna

    Hi
    Please drop me an email at the address below so that I can send you the note which I compiled just now for this.
    There is another question from my end:
    Where are the OCR and voting disk (VD) stored? They will also be on the old storage, right? If they also need to be migrated to the new storage, then there are some additional steps to migrate the OCR and VD.
    But for that you will need a small downtime.
    Regards,
    Mahesh.
    [email protected]
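    For reference, a hedged sketch of the add-new / drop-old / rebalance sequence described in the question; the disk group name, disk names and Windows-style device labels are placeholders only:
    -- Add the new SAN LUNs and drop the old ones in a single statement so that
    -- only one rebalance runs; the instance stays up throughout.
    ALTER DISKGROUP DATA
      ADD DISK '\\.\ORCLDISKNEW1', '\\.\ORCLDISKNEW2'
      DROP DISK DATA_0000, DATA_0001
      REBALANCE POWER 8;
    -- The old LUNs are only released (and safe to remove physically) once the
    -- operation no longer appears here.
    SELECT group_number, operation, state, power, est_minutes FROM v$asm_operation;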

  • How to move Tablespace from One disk group to another disk group in RAC

    Hi All,
    I have 11gR2 RAC env on Linux.
    As of now I have a problem with a disk group. I have 3 disk groups which are almost full (98%). I have added one NEW disk group and want to move some of the tablespaces (TBS) from the OLD disk groups to the NEW disk group to free some space in the OLD disk groups.
    Can anyone suggest how to move a TBS from one disk group to another disk group without shutting down the instance?
    The DB is in noarchivelog mode.
    Thanks...

    user12039625 wrote:
    Hi Helios,
    Thanks for the doc ID, but I am looking for a noarchivelog mode solution, because I got "ORA-01145: offline immediate disallowed unless media recovery enabled" when running ALTER DATABASE DATAFILE '...' OFFLINE.
    Hence I tried something and worked out the steps below, but I'm not sure how sound they are:
    1. Put the tablespace offline.
    2. Copy the file to the new disk group using either RMAN or DBMS_FILE_TRANSFER.
    3. Rename the file to point to the new location.
    4. Recover the file.
    5. Bring the file online.
    I moved a 240 MB TBS from OLD to NEW with these steps.
    These steps ran successfully, so I think this is valid for noarchivelog mode. Hence I want to confirm; please let me know.
    Thanks :)
    I have a doubt in my mind:
    1. Your database is in noarchivelog mode.
    2. You're taking the tablespace offline.
    3. Suppose you're moving a file of size 10 GB (or any larger file size) to another disk group.
    4. Now, after moving the file, you try to bring the tablespace online... NOW
    the tablespace will need recovery. If the required data is still inside the SGA then it is OK. But if the data has been flushed, then what?
    If steps 2 and 3 have taken significant time, then most probably you'll not be able to bring that tablespace online.
    Regards,
    S.K.
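    For reference, a hedged sketch of the offline / copy / rename / recover sequence discussed above, using RMAN; the tablespace name, datafile number and disk group names are hypothetical, and (as the reply warns) in NOARCHIVELOG mode the recovery step only works if the needed redo is still in the online logs:
    SQL> ALTER TABLESPACE users OFFLINE;
    RMAN> BACKUP AS COPY DATAFILE 5 FORMAT '+NEWDG';
    RMAN> SWITCH DATAFILE 5 TO COPY;
    RMAN> RECOVER DATAFILE 5;
    SQL> ALTER TABLESPACE users ONLINE;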

  • Adding disk to existing disk group in ASM  on the windows 2003 server.

    Guys,
    Can someone please share some opinions on adding an extra disk to an existing disk group (DG_FRA) in ASM on a Windows 2003 server (e.g. any issues when it comes to database backups)? This is a production database and I want to be careful so that I don't have any issues with the database. The existing DG_FRA is 550 GB (the size of the image copy is 325 GB); recently we have been running heavy loads and this is filling up the archive logs and forcing us to take incremental backups a couple of times during the day before deleting the archive logs. As a result we decided to add an extra 200 GB so that we don't have to do these backups so frequently during the day. Storage is on EMC SAN and the database is 11g.
    Thanks for all your help.

    There will not be any issues.
    Stamp the disk and then add the disk.
    You can also follow this approach:
    stamp the disk,
    create a new disk group, and then add a datafile in this new group for the existing tablespace.
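    A hedged sketch of the first approach (stamp, then add), assuming the new 200 GB LUN has been stamped with a label such as FRA05 using asmtoolg; the label and disk group name are placeholders:
    -- Add the newly stamped disk to the existing FRA disk group; ASM rebalances
    -- automatically at the power set by ASM_POWER_LIMIT.
    ALTER DISKGROUP DG_FRA ADD DISK '\\.\ORCLDISKFRA05';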

  • Question on Asm Disk Groups

    Hello,
    I have five 200 GB disks (1 TB total) in my prod RAC environment and we are using 11g (11.1.0.6). These disks were mounted a while back and the DB is currently using 50 GB.
    My question is: I need to remove one of the 200 GB disks, since we are having a shortage of disks at the moment, so I can use it as the flash recovery drive. My understanding of ASM is that even if one disk is removed, a rebalance would happen and the data would be striped across the remaining four disks without any loss of data or downtime.
    We are using external redundancy on these disks and the underlying storage uses RAID 5.
    Can I attempt this? If it is possible, could you give me the best command to do it?
    Thx All

    Just do an
    alter diskgroup <name> drop disk <name>;
    The documentation states:
    DROP DISK: The DROP DISK clause lets you drop one or more disks from the disk group and automatically rebalance the disk group. When you drop a disk, Automatic Storage Management relocates all the data from the disk and clears the disk header so that it no longer is part of the disk group.
    Note:
    If you need to drop more than one disk, I strongly recommend doing it in a single step, e.g.
    alter diskgroup <name> drop disk <nameA>, <nameB>;
    This will do the rebalance only once.
    Ronny Egner
    My Blog: http://blog.ronnyegner-consulting.de
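    One hedged addition (not from the original reply): before repurposing the dropped 200 GB disk as the flash recovery drive, confirm that the rebalance has finished and the disk header has actually been released, for example:
    -- No rows here means the rebalance triggered by the DROP DISK has completed.
    SELECT group_number, operation, state, est_minutes FROM v$asm_operation;
    -- The dropped disk should now show HEADER_STATUS = 'FORMER' and group number 0.
    SELECT path, header_status, group_number FROM v$asm_disk ORDER BY path;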

  • Adding 2 disks to existing production ASM disk group - (urgent)

    Environment: AIX
    ORACLE: 10.2.0.4 RAC ( 2 node)
    We want to add 2 disks(/dev/asm_disk65 & /dev/asm_disk66) to existing production ASM diskgroup (DATA_GRP01).
    SQL> /
    DISK_GROUP_NAME DISK_FILE_PATH DISK_FILE_NAME DISK_FILE_FAIL_GROUP
    DATA_GRP01 /dev/asm_disk14 DATA_GRP01_0006 DATA_GRP01_0006
    DATA_GRP01 /dev/asm_disk15 DATA_GRP01_0007 DATA_GRP01_0007
    DATA_GRP01 /dev/asm_disk13 DATA_GRP01_0005 DATA_GRP01_0005
    DATA_GRP01 /dev/asm_disk3 DATA_GRP01_0010 DATA_GRP01_0010
    DATA_GRP01 /dev/asm_disk12 DATA_GRP01_0004 DATA_GRP01_0004
    DATA_GRP01 /dev/asm_disk11 DATA_GRP01_0003 DATA_GRP01_0003
    DATA_GRP01 /dev/asm_disk10 DATA_GRP01_0002 DATA_GRP01_0002
    DATA_GRP01 /dev/asm_disk4 DATA_GRP01_0011 DATA_GRP01_0011
    DATA_GRP01 /dev/asm_disk1 DATA_GRP01_0001 DATA_GRP01_0001
    DATA_GRP01 /dev/asm_disk9 DATA_GRP01_0016 DATA_GRP01_0016
    DATA_GRP01 /dev/asm_disk0 DATA_GRP01_0000 DATA_GRP01_0000
    DATA_GRP01 /dev/asm_disk7 DATA_GRP01_0014 DATA_GRP01_0014
    DATA_GRP01 /dev/asm_disk5 DATA_GRP01_0012 DATA_GRP01_0012
    DATA_GRP01 /dev/asm_disk8 DATA_GRP01_0015 DATA_GRP01_0015
    DATA_GRP01 /dev/asm_disk2 DATA_GRP01_0009 DATA_GRP01_0009
    DATA_GRP01 /dev/asm_disk16 DATA_GRP01_0008 DATA_GRP01_0008
    DATA_GRP01 /dev/asm_disk6 DATA_GRP01_0013 DATA_GRP01_0013
    [CANDIDATE] /dev/asm_disk65
    [CANDIDATE] /dev/asm_disk66
    We issue,
    ALTER DISKGROUP DATA_GRP01 ADD DISK '/dev/asm_disk65' name DATA_GRP01_0017 REBALANCE POWER 5;
    ALTER DISKGROUP DATA_GRP01 ADD DISK '/dev/asm_disk66' name DATA_GRP01_0018 REBALANCE POWER 5;
    SQL> ALTER DISKGROUP DATA_GRP01 ADD DISK '/dev/asm_disk65' name DATA_GRP01_0017 REBALANCE POWER 5;
    ALTER DISKGROUP DATA_GRP01 ADD DISK '/dev/asm_disk65' name DATA_GRP01_0017 REBALANCE POWER 5
    ERROR at line 1:
    ORA-15032: not all alterations performed
    ORA-15075: disk(s) are not visible cluster-wide
    Node1:
    crw-rw---- 1 oracle dba 38, 68 Nov 18 02:33 /dev/asm_disk65
    crw-rw---- 1 oracle dba 38, 13 Nov 18 02:53 /dev/asm_disk13
    crw-rw---- 1 oracle dba 38, 10 Nov 18 03:00 /dev/asm_disk10
    crw-rw---- 1 oracle dba 38, 3 Nov 18 03:00 /dev/asm_disk3
    crw-rw---- 1 oracle dba 38, 7 Nov 18 03:00 /dev/asm_disk7
    crw-rw---- 1 oracle dba 38, 2 Nov 18 03:00 /dev/asm_disk2
    crw-rw---- 1 oracle dba 38, 8 Nov 18 03:00 /dev/asm_disk8
    crw-rw---- 1 oracle dba 38, 15 Nov 18 03:00 /dev/asm_disk15
    crw-rw---- 1 oracle dba 38, 14 Nov 18 03:00 /dev/asm_disk14
    crw-rw---- 1 oracle dba 38, 12 Nov 18 03:00 /dev/asm_disk12
    crw-rw---- 1 oracle dba 38, 11 Nov 18 03:00 /dev/asm_disk11
    crw-rw---- 1 oracle dba 38, 5 Nov 18 03:00 /dev/asm_disk5
    crw-rw---- 1 oracle dba 38, 4 Nov 18 03:00 /dev/asm_disk4
    crw-rw---- 1 oracle dba 38, 69 Nov 18 03:39 /dev/asm_disk66
    crw-rw---- 1 oracle dba 38, 9 Nov 18 04:39 /dev/asm_disk9
    crw-rw---- 1 oracle dba 38, 6 Nov 18 06:36 /dev/asm_disk6
    crw-rw---- 1 oracle dba 38, 16 Nov 18 06:36 /dev/asm_disk16
    Node 2 :
    crw-rw---- 1 oracle dba 38, 68 Nov 15 17:59 /dev/asm_disk65
    crw-rw---- 1 oracle dba 38, 69 Nov 15 17:59 /dev/asm_disk66
    crw-rw---- 1 oracle dba 38, 8 Nov 17 19:55 /dev/asm_disk8
    crw-rw---- 1 oracle dba 38, 7 Nov 17 19:55 /dev/asm_disk7
    crw-rw---- 1 oracle dba 38, 5 Nov 17 19:55 /dev/asm_disk5
    crw-rw---- 1 oracle dba 38, 3 Nov 17 19:55 /dev/asm_disk3
    crw-rw---- 1 oracle dba 38, 2 Nov 17 19:55 /dev/asm_disk2
    crw-rw---- 1 oracle dba 38, 10 Nov 17 19:55 /dev/asm_disk10
    crw-rw---- 1 oracle dba 38, 4 Nov 17 21:04 /dev/asm_disk4
    crw-rw---- 1 oracle dba 38, 12 Nov 17 21:45 /dev/asm_disk12
    crw-rw---- 1 oracle dba 38, 14 Nov 17 22:21 /dev/asm_disk14
    crw-rw---- 1 oracle dba 38, 15 Nov 17 23:06 /dev/asm_disk15
    crw-rw---- 1 oracle dba 38, 11 Nov 17 23:18 /dev/asm_disk11
    crw-rw---- 1 oracle dba 38, 9 Nov 18 04:39 /dev/asm_disk9
    crw-rw---- 1 oracle dba 38, 6 Nov 18 04:41 /dev/asm_disk6
    crw-rw---- 1 oracle dba 38, 16 Nov 18 06:20 /dev/asm_disk16
    crw-rw---- 1 oracle dba 38, 13 Nov 18 06:20 /dev/asm_disk13
    node1 SQL> select GROUP_NUMBER,NAME,STATE from v$asm_diskgroup;
    GROUP_NUMBER NAME STATE
    1 DATA_GRP01 MOUNTED
    2 FB_GRP01 MOUNTED
    SQL> select GROUP_NUMBER,NAME,PATH,HEADER_STATUS,MOUNT_STATUS from v$asm_disk order by name;
    GROUP_NUMBER NAME PATH HEADER_STATUS MOUNT_STATUS
    1 DATA_GRP01_0000 /dev/asm_disk0 MEMBER CACHED
    1 DATA_GRP01_0001 /dev/asm_disk1 MEMBER CACHED
    1 DATA_GRP01_0002 /dev/asm_disk10 MEMBER CACHED
    1 DATA_GRP01_0003 /dev/asm_disk11 MEMBER CACHED
    1 DATA_GRP01_0004 /dev/asm_disk12 MEMBER CACHED
    1 DATA_GRP01_0005 /dev/asm_disk13 MEMBER CACHED
    1 DATA_GRP01_0006 /dev/asm_disk14 MEMBER CACHED
    1 DATA_GRP01_0007 /dev/asm_disk15 MEMBER CACHED
    1 DATA_GRP01_0008 /dev/asm_disk16 MEMBER CACHED
    1 DATA_GRP01_0009 /dev/asm_disk2 MEMBER CACHED
    1 DATA_GRP01_0010 /dev/asm_disk3 MEMBER CACHED
    1 DATA_GRP01_0011 /dev/asm_disk4 MEMBER CACHED
    1 DATA_GRP01_0012 /dev/asm_disk5 MEMBER CACHED
    1 DATA_GRP01_0013 /dev/asm_disk6 MEMBER CACHED
    1 DATA_GRP01_0014 /dev/asm_disk7 MEMBER CACHED
    1 DATA_GRP01_0015 /dev/asm_disk8 MEMBER CACHED
    1 DATA_GRP01_0016 /dev/asm_disk9 MEMBER CACHED
    0 /dev/asm_disk65 MEMBER IGNORED
    0 /dev/asm_disk66 MEMBER CLOSED
    Please guide us on the next step; we are running out of space. The disks appear against a non-existent group (group number zero in the above query). How do we drop the disks and add them again?
    Is there a statement to drop a disk by path, something like alter diskgroup <disk group name> drop disk '<path of the disk>'; is that right???
    Thanks in advance.
    Sakthi
    Edited by: SAKTHIVEL on Nov 18, 2010 10:08 PM

    1. Just use one ALTER DISKGROUP command; the rebalance will then be executed only once.
    2. On node 2, do you see the disks the same as on node 1?
    select GROUP_NUMBER,NAME,PATH,HEADER_STATUS,MOUNT_STATUS from v$asm_disk order by name;
    3. Try this:
    ALTER DISKGROUP DATA_GRP01
      ADD DISK '/dev/asm_disk65' name DATA_GRP01_0017,
               '/dev/asm_disk66' name DATA_GRP01_0018
    REBALANCE POWER 5;
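    A hedged follow-up on the unanswered drop question: a disk is dropped by its ASM disk name, not by its OS path, and since asm_disk65/66 were never successfully added (group number 0) there is nothing to drop. Because their headers already show MEMBER, the re-add may need the FORCE keyword once the devices are confirmed visible on both nodes; use FORCE only after verifying the disks do not belong to any mounted disk group:
    -- Run on each node; both new devices should appear with identical paths.
    SELECT path, header_status, mount_status FROM v$asm_disk
    WHERE  path IN ('/dev/asm_disk65', '/dev/asm_disk66');
    -- Re-add both disks in one statement; FORCE overrides the stale MEMBER header.
    ALTER DISKGROUP DATA_GRP01
      ADD DISK '/dev/asm_disk65' NAME DATA_GRP01_0017 FORCE,
               '/dev/asm_disk66' NAME DATA_GRP01_0018 FORCE
    REBALANCE POWER 5;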

  • ASM Disk Sizes in a Disk Group

    I have a Diskgroup that contains one ASMDisk that is 500gb. It needs to grow and my question is - Does the next disk added to the group need to be of the same size?
    For example -
    ASM_Disk1 = 500gb and is in ASM_Diskgrp1
    ASM_Disk2 - 250gb >> can it be added to ASM_Diskgrp1
    There is concern about balancing, etc. done by ASM
    Wondering what experiences anyone has had with this?
    Thanks!

    Hi,
    It's not mandatory; you can use disks of different sizes, but it is strongly recommended that you put disks of the same size in a disk group.
    You should preferably use disks with the same storage capacity and performance.
    The main reason is that ASM balances the I/O across all disks in a disk group; because of this, using disks with the same storage capacity and performance helps ASM use the I/O and storage efficiently within the ASM disk group.
    Regards,
    Cerreia
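    A hedged way to see the effect of mixed sizes after the fact (the disk group name follows the poster's example; ASM stores names in upper case):
    -- ASM's distribution policy is capacity-based, so a smaller disk fills to
    -- roughly the same percentage as the larger ones; check per-disk usage:
    SELECT d.path, d.total_mb, d.free_mb,
           ROUND(100 * (d.total_mb - d.free_mb) / d.total_mb, 1) AS pct_used
    FROM   v$asm_disk d
    JOIN   v$asm_diskgroup g ON g.group_number = d.group_number
    WHERE  g.name = 'ASM_DISKGRP1'
    ORDER  BY d.path;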

  • How many disk groups for +DATA?

    Hi All,
    Does Oracle recommend having one big/shared ASM disk group for all of the databases?
    In our case we are going to have 11.2 and 10g RAC databases running against 11.2 ASM...
    Am I correct in saying that I have to set ASM's compatibility parameter to 10 in order to be able to use the same disks?
    Is this a good idea? Or should I create another disk group for the 10g DBs?
    I'm assuming there are features that will not be available when the compatibility is reduced to 10g...

    Oviwan wrote:
    What kind of storage system do you have? NAS? What is the protocol between server and storage? TCP/IP (=> NFS)? FC?
    If you have a storage array with several disks then you usually create more than one LUN (RAID 0, 1, 5 or whatever). If the requirement is a 1 TB disk group, I would not create one 1 TB LUN; I would create 5 x 200 GB LUNs, for example, just in case you have to extend the disk group with a LUN of the same size. If it is one 1 TB LUN then you have to add another 1 TB LUN; if there are 5 x 200 GB LUNs then you can simply add 200 GB.
    I have nowhere found a document that says "exactly 16 LUNs per disk group is best"; it depends on the OS, storage, etc.
    So for a 50 GB disk group I would create just one 50 GB LUN, for example.
    hth
    Yes, it's NAS, connected using iSCSI. It has 5 disks of 1 TB each, configured with RAID5. I found the requirements below for ASM; they indicate 4 LUNs as a minimum per disk group, but they don't clarify whether that is for external redundancy or for the other redundancy types.
    •A minimum of four LUNs (Oracle ASM disks) of equal size and performance is recommended for each disk group.
    •Ensure that all Oracle ASM disks in a disk group have similar storage performance and availability characteristics. In storage configurations with mixed speed drives, such as 10K and 15K RPM, I/O performance is constrained by the slowest speed drive.
    •Oracle ASM data distribution policy is capacity-based. Ensure that Oracle ASM disks in a disk group have the same capacity to maintain balance.
    •Maximize the number of disks in a disk group for maximum data distribution and higher I/O bandwidth.
    •Create LUNs using the outside half of disk drives for higher performance. If possible, use small disks with the highest RPM.
    •Create large LUNs to reduce LUN management overhead.
    •Minimize I/O contention between ASM disks and other applications by dedicating disks to ASM disk groups for those disks that are not shared with other applications.
    •Choose a hardware RAID stripe size that is a power of 2 and less than or equal to the size of the ASM allocation unit.
    •Avoid using a Logical Volume Manager (LVM) because an LVM would be redundant. However, there are situations where certain multipathing or third party cluster solutions require an LVM. In these situations, use the LVM to represent a single LUN without striping or mirroring to minimize the performance impact.
    •For Linux, when possible, use the Oracle ASMLIB feature to address device naming and permission persistency.
    ASMLIB provides an alternative interface for the ASM-enabled kernel to discover and access block devices. ASMLIB provides storage and operating system vendors the opportunity to supply extended storage-related features. These features provide benefits such as improved performance and greater data integrity.
    One more question about fdisk partitioning: is it correct that we should only create one partition per LUN (5 x 200 GB LUNs in my case)? Is it because this way I will have a more consistent set of LUNs (in terms of performance)?
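    On the compatibility question at the top of this thread, a hedged sketch (the disk group name and device paths are hypothetical): the disk group attribute that matters for the 10g databases is compatible.rdbms, which defaults to 10.1 and can only ever be advanced, never lowered.
    -- Check the compatibility attributes of the existing disk groups; for a 10g
    -- database to use a group, compatible.rdbms must not exceed that database's
    -- COMPATIBLE setting.
    SELECT g.name AS diskgroup, a.name AS attribute, a.value
    FROM   v$asm_attribute a
    JOIN   v$asm_diskgroup g ON g.group_number = a.group_number
    WHERE  a.name LIKE 'compatible%'
    ORDER  BY g.name, a.name;
    -- When creating a new shared disk group (one partition per LUN, as discussed):
    CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
      DISK '/dev/iscsi/lun1p1', '/dev/iscsi/lun2p1',
           '/dev/iscsi/lun3p1', '/dev/iscsi/lun4p1'
      ATTRIBUTE 'compatible.asm' = '11.2', 'compatible.rdbms' = '10.1';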

  • How reInstall Gride Infrastructure and Use old ASM disk groups

    Hello to all,
    I installed Grid Infrastructure 11gR2 on a standalone server (the OS is Linux),
    I configured ASM, and my database was created on ASM.
    Suppose my OS disk gets corrupted so the OS doesn't start, and the Grid Home is on that disk,
    so I have to install the OS again.
    My ASM disks are safe. Now, how can I install Grid Infrastructure again in such a way that it can use the previous ASM disks
    and disk groups, so that I am not obliged to create my database again?
    At step 2 of installing Grid Infrastructure there are four options:
    1. Install and configure Oracle Grid Infrastructure for a Cluster
    2. Configure Oracle Grid Infrastructure for a Standalone Server
    3. Upgrade Oracle Grid Infrastructure or Oracle Automatic Storage Management
    4. Install Oracle Grid Infrastructure Software Only
    If I select option 2 it wants to create a disk group again.
    I guess that I need to select option 4 and then do some configuration, but I don't know what I must configure.
    Do you know the answer to my question? If yes, please explain the stages.
    Thank you so much

    Hi,
    No, you are not obliged to recreate your database. However, there is a small flaw in the installation procedure which does not make it 100% easy...
    When you installed Oracle Restart (standalone GI), your ASM disk group will contain the SPFILE of the ASM instance, and this is exactly the small flaw you will encounter. So you have 2 options for "recovery":
    1.) Do a software-only install (option 4) and run roothas.pl. This, however, will not create any ASM entries. You would have to add the ASM resource manually (using srvctl), and you can specify the ASM spfile with the srvctl command. The problem here is that you have to know where your ASM spfile was. If you have a backup of your OLR and a backup of the GPnP profile, this is easier to find out.
    2.) Do a new installation (option 2) and configure a new disk group (with a "spare" disk or small LUN and a new name), so that Oracle Restart creates the ASM instance and a new ASM spfile for you.
    Then you can simply mount the disk group containing your database additionally. You should then move your new ASM spfile to the old disk group (or simply exchange it with the existing one). In this case it is easier to find out where it was; however, you will need a spare (though small) LUN for the new spfile (temporarily, until you exchange it).
    In either case, after you have your ASM instance back (and access to your old disk group), you have to re-register your database and services - if you do not have an OLR backup.
    Again => It is doable and you can simply mount the ASM diskgroup containing your database. However I suggest you try this one time to know what really needs to be done in this case.
    Regards
    Sebastian
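    Once the ASM instance is back (whichever of the two options above you take), remounting the old disk group is an ordinary SQL step; a minimal sketch, assuming the disk group holding the database is named DATA:
    -- List the disk groups ASM has discovered and mount the pre-existing one;
    -- the database files inside it are untouched by the Grid reinstall.
    SELECT name, state FROM v$asm_diskgroup;
    ALTER DISKGROUP DATA MOUNT;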

  • Reinstalling Oracle 11gR2 RAC Grid Problem - ASM Disks Group

    Folks,
    Hello.
    I am installing Oracle 11gR2 RAC using 2 Virtual Machines (rac1 and rac2 whose OS are Oracle Linux 5.6) in VMPlayer.
    I had been installing Grid Infrastructure using runInstaller on the first VM, rac1, and had got from step 1 to step 9 of 10.
    On step 9 of 10 in the Wizard, I accidentally touched the mouse, and the Wizard was gone.
    The directory for installing Grid is the same in both VMs: /u01
    In order to make sure everything was correct, I deleted the entire /u01 directory on both VMs and installed Grid on rac1 again.
    I have since understood that deleting /u01 is not the right way. The right way is to follow the tutorial
    http://docs.oracle.com/cd/E11882_01/install.112/e22489/rem_orcl.htm#CBHEFHAC
    But I have already deleted /u01 and need to fix things one by one. I install Grid again and get the following error messages at step 5 of 9:
    [INS - 30516] Please specify unique disk groups.
    [INS-3050] Empty ASM disk group.
    Cause - Installer has detected the disk group name provided already exists on the system.
    Action - Specify different disk group.
    In the Wizard, the previous disk group name "DATA" and its candidate disks (5 ASM disks) are gone. I tried to use a different name, "DATA2", but no ASM disks come up under "Candidate Disks". Under "All Disks", none of the ASM disks can be selected.
    I want to use the same ASM disk group "DATA" and don't want to create a new disk group.
    My question is:
    How do I get the previous ASM disks and their group "DATA" to come up under "Candidate Disks" so that I can use them again?
    Thanks.

    Hi, in case this helps anyone else: I got this INS-30516 error too and was stumped for a little while. I have 2 x 2-node RACs which hit the same SAN. The first-built RAC has a DATA disk group. When I went to build the second RAC on new ASM disks and a new disk group (but with the same disk group name, DATA), I got INS-30516 about the disk group name already being in use, etc. I finally figured out that all that was required was to restrict the disk string, using the button in the installer, to only retrieve the LUNs for this RAC (the setup was quick and dirty: all LUNs for both RACs were being presented to both RACs). Once the disk string only searched for the LUNs required for this RAC, e.g.
    ORCL:DATA_P* (for DATA_PD and FRA_PD)
    the error went away.
    I also have DATA_DR and FRA_DR presented to both RACs. Apparently the installer scans the disk headers, and if it finds a disk group name that is already in use, based on the disk string scan, it will not allow reuse of that disk group name, since it has no way of knowing that the other ASM disks are for a different RAC.
    HTH

  • How do I define 2 disk groups for ocr and voting disks at the oracle grid infrastructure installation window

    Hello,
    It may sound too easy to someone but I need to ask it anyway.
    I am in the middle of building Oracle RAC 11.2.0.3 on Linux. I have 2 storage arrays and I created three LUNs on each, so 6 LUNs in total are visible to both servers. If I choose NORMAL as the redundancy level, is it right to choose all 6 disks created for OCR_VOTE during the grid installation? Or should I choose only 3 of them and configure mirroring at a later stage?
    The reason I am asking this is that I didn't find any place to create an ASM disk group for the OCR and voting disks in the Oracle Grid Infrastructure installation window. In fact, I would like to create two disk groups, one containing the three disks from storage 1 and the second containing the remaining three disks from the other storage.
    I hope you understand what I mean and can help me choose the proper way.
    Thank you.

    Hi,
    You have 2 storage arrays (H/W).
    You will need to configure a quorum ASM disk to store a voting disk,
    because if you lose 1/2 or more of all of your voting disks, nodes get evicted from the cluster, or nodes kick themselves out of the cluster.
    You must have an odd number of voting disks (i.e. 1, 3 or 5), one voting disk per ASM disk, so one storage array will hold more voting disks than the other
    (e.g. with 3 voting disks, 2 are stored in stg-A and 1 in stg-B).
    If the storage array that holds the majority of the voting disks fails, the whole cluster (all nodes) goes down, regardless of whether the other storage is still online.
    It will happen:
    https://forums.oracle.com/message/9681811#9681811
    You must configure your Clusterware the same way as an extended RAC.
    Check this link:
    http://www.oracle.com/technetwork/database/clusterware/overview/grid-infra-thirdvoteonnfs-131158.pdf
    Explaining: How to store OCR, Voting disks and ASM SPFILE on ASM Diskgroup (RAC or RAC Extended) | Levi Pereira | Oracl…
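    A minimal sketch of the layout described above, assuming a small NFS-backed device for the third voting file and hypothetical disk paths (the linked white paper covers the full procedure):
    -- NORMAL redundancy disk group for OCR/voting: one failgroup per storage
    -- array plus a QUORUM failgroup that holds only the third voting file.
    CREATE DISKGROUP OCRVOTE NORMAL REDUNDANCY
      FAILGROUP stg1 DISK '/dev/mapper/stg1_ocr1'
      FAILGROUP stg2 DISK '/dev/mapper/stg2_ocr1'
      QUORUM FAILGROUP nfsq DISK '/voting_nfs/vote3'
      ATTRIBUTE 'compatible.asm' = '11.2';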
