Understanding distribution of space in a disk group - HP EVA

Dear Friends,
I have been tearing my hair out for the last month trying to understand how the EVA splits the space of a disk group into the different RSS/vRAID allocations (vRAID0, vRAID1, vRAID5, vRAID6). I work for a company that has an HP EVA P6550 SAN in its environment. We have M6625 enclosures and 300GB 15K SAS disks. Yesterday we installed the 13th enclosure (our boss has ordered 5 new disk enclosures); I have told him that we can only add 18 enclosures to the P6550. We are getting multiple requests from different projects for storage space (10TB, 15TB, 13TB and so on), and most of these requests ask for RAID5 redundancy at the SAN level.
We have 3 disk groups defined in our SAN, and we usually add new disks to these disk groups. What I do not understand is that the moment I add disks to an existing disk group, I can see the vRAID0 allocation in the disk group keep increasing, whereas vRAID5 and vRAID1 increase rarely. As I mentioned before, whenever we create LUNs (vdisks) we assign them vRAID5. In this situation, what can I do if I do not want to add any further space to vRAID0, since I am not going to use it? And how can I migrate this huge space from vRAID0 onto my in-demand vRAID5?
Sorry, I may not be very clear in my explanation; please ask in case you require any further information. Your help is appreciated, as I failed miserably to find any info on this. Please help!

I think it is not possible to migrate the space from one RSS/vRAID allocation to another; however, if someone can confirm that, it would be great.

Similar Messages

  • How to increase space in ASM disk groups?

    Hi All,
    I am a newbie DBA on my team and this morning got an OEM alert:
    Target name=+ASM_ZEUS.techiebros.com
    Message=Disk Group DATA requires rebalance because at least one disk is low on space.
    Any solution?

    To solve it you can run this command (substitute your disk group name and the path of the new disk):
        ALTER DISKGROUP data ADD DISK '<path_to_new_disk>' REBALANCE POWER 4 WAIT;
    After that, you can follow the progress through v$asm_operation.
    Read: <http://jarneil.wordpress.com/2008/10/27/asm-rebalance-io-saturation/> or <http://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_1006.htm>
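    To watch the rebalance from another session, a minimal sketch (the view and columns are standard):
        -- one row per running rebalance; EST_MINUTES is the remaining-time estimate
        SELECT group_number, operation, state, power, sofar, est_work, est_minutes
          FROM v$asm_operation;
    When the query returns no rows, the rebalance has finished.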

  • What is the usable space of normal redundancy disk group with two uneven capacity fail groups

    Hi,
    I have a normal redundancy disk group (DATA) with two uneven capacity fail groups (FG1 and FG2). FG1 size is 10GB and FG2 size is 100GB.
    In this case, what will be the usable space of the disk group (DATA)? Is it 10G or 55G?
    Thanks,
    Mahi

    Please don't duplicate posts on the same matter.
    This question was answered in your previous thread.
    Re: ASM normal redundancy disk group
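    For anyone landing here, the effective figure can be read straight from the ASM views (a hedged sketch; the disk group name is taken from the question):
        -- USABLE_FILE_MB accounts for mirroring and for the space ASM reserves
        -- to re-mirror after a disk failure (REQUIRED_MIRROR_FREE_MB)
        SELECT name, type, total_mb, free_mb, required_mirror_free_mb, usable_file_mb
          FROM v$asm_diskgroup
         WHERE name = 'DATA';
    With two failure groups of 10GB and 100GB, every mirrored extent needs a partner in the other failure group, so the smaller failure group is the practical ceiling.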

  • How to move Tablespace from One disk group to another disk group in RAC

    Hi All,
    I have 11gR2 RAC env on Linux.
    As of now I have a problem with a disk group. I have 3 disk groups which are almost full (98%). I have added one NEW disk group and want to move some of the tablespaces (TBS) from the OLD disk group to the NEW disk group to make some free space in the OLD disk group.
    Can anyone suggest how to move a TBS from one disk group to another disk group without shutting down the instance?
    The DB is in NOARCHIVELOG mode.
    Thanks...

    user12039625 wrote:
    Hi Helios,
    Thanks for the doc ID, but I am looking for a NOARCHIVELOG-mode solution, because I got
    "ORA-01145: offline immediate disallowed unless media recovery enabled" when running ALTER DATABASE DATAFILE '...' OFFLINE.
    Hence I tried something and worked out the steps below, but I am not sure how valid they are:
    1. Put the tablespace offline.
    2. Copy the file to the new disk group using either RMAN or DBMS_FILE_TRANSFER.
    3. Rename the file to point to the new location.
    4. Recover the file.
    5. Bring the file online.
    I moved a 240M TBS from OLD to NEW this way. These steps ran successfully, so I think this is valid for NOARCHIVELOG mode; hence I want to confirm, so please inform me.
    Thanks :)

    I have a doubt in my mind:
    1. Your database is in NOARCHIVELOG mode.
    2. You're taking the tablespace offline.
    3. Suppose you're moving a file of size 10GB (or any larger file size) to another disk group.
    4. Now, after moving the file, you're trying to bring the tablespace online... and NOW
    the tablespace will need recovery. If the required data is still inside the SGA then it is OK. But if the data has been flushed, then what?
    If steps 2 and 3 have taken significant time, then most probably you'll not be able to bring that tablespace online.
    Regards,
    S.K.
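    For reference, a minimal sketch of steps 1-5 using RMAN's copy-and-switch approach (the tablespace name and disk group names are illustrative, not from the thread):
        SQL>  ALTER TABLESPACE users OFFLINE NORMAL;
        RMAN> BACKUP AS COPY TABLESPACE users FORMAT '+NEWDG';  -- image copy into the new disk group
        RMAN> SWITCH TABLESPACE users TO COPY;                  -- repoint the control file to the copies
        RMAN> RECOVER TABLESPACE users;                         -- little or nothing to apply if the offline was clean
        SQL>  ALTER TABLESPACE users ONLINE;
    OFFLINE NORMAL checkpoints the datafiles first, which is what makes this workable in NOARCHIVELOG mode; OFFLINE IMMEDIATE is what raises ORA-01145.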

  • Disk Group from normal to external in a RAC environment

    Hello,
    my environment is based on 11.2.0.3.7 RAC SE with two nodes.
    Currently I have 4 DGs, all in NORMAL redundancy, containing respectively:
    ARCH
    REDO
    DATA
    VOTING
    At the moment I focus only on the non-VOTING DGs.
    Each of them has 2 failure groups that physically map to disks in 2 different server rooms.
    The storage arrays are EMC VNX (one in each server room).
    They will be substituted by a VPLEX system that will be configured as a single storage entity with DGs in external redundancy.
    I see from document id
    How To Move The Database To Different Diskgroup (Change Diskgroup Redundancy) (Doc ID 438580.1)
    that apparently it is not possible to do this online.
    Can you confirm?
    I also read the thread in this forum:
    https://community.oracle.com/message/10173887#10173887
    that seems to confirm this too.
    I have some pressure to free the old storage arrays, but in the short term I will not be able to stop the RAC RDBMSs I have in place.
    So the question is: can I proceed in two stages, i.e.
    1) add a third failure group composed of the VPLEX disks
    2) wait for data sync of the third failure group
    3) drop one of the two old failure groups (ASM should let me do this, correct?)
    4) brutally remove all disks of the remaining old storage failure group
    and run in reduced redundancy for some time until I can afford the maintenance window? (A sketch of the commands is below.)
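    A hedged sketch of those stages in SQL (disk group, failure group, and device names are illustrative):
        -- 1) add the VPLEX disks as a third failure group
        ALTER DISKGROUP data ADD FAILGROUP vplex DISK '/dev/mapper/vplex_lun1', '/dev/mapper/vplex_lun2';
        -- 2) the sync is the rebalance; no rows in this view means it is done
        SELECT group_number, operation, state, est_minutes FROM v$asm_operation;
        -- 3) drop one old failure group; ASM re-mirrors onto the surviving ones
        ALTER DISKGROUP data DROP DISKS IN FAILGROUP fg1 REBALANCE POWER 8;
    Whether this is a supported path for your VPLEX migration is a question for Oracle support; the statements themselves are standard ASM SQL.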
    Inside the ASM administrator guide I see this:
    Normal redundancy disk group - It is best to have enough free space in your disk
    group to tolerate the loss of all disks in one failure group. The amount of free
    space should be equivalent to the size of the largest failure group.
    and also
    Normal redundancy
    Oracle ASM provides two-way mirroring by default, which means that all files are
    mirrored so that there are two copies of every extent. A loss of one Oracle ASM
    disk is tolerated. You can optionally choose three-way or unprotected mirroring.

    When you create an external table you must specify the location that the external table will use to access the external data.
    This is done with the LOCATION and/or DEFAULT_DIRECTORY parameters.
    If you want every instance in your cluster to be able to use one specific external table, then the location specified in the CREATE EXTERNAL TABLE command must be visible/accessible to all servers in your cluster, probably via some shared OS disk/storage configuration, e.g. mounting remote disks; this could very easily cause slower external table performance than when the specified location is local to the DB server.
    This is the one and only way, because it is impossible to specify a remote location, either when creating the directory or in any parameter when creating the external table. (A sketch follows.)
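    For illustration, a minimal external table sketch (directory path, table, and file names are all hypothetical):
        -- the directory object must point at storage every RAC node can reach
        CREATE DIRECTORY ext_dir AS '/shared_nfs/ext_data';
        CREATE TABLE emp_ext (
          emp_id   NUMBER,
          emp_name VARCHAR2(50)
        )
        ORGANIZATION EXTERNAL (
          TYPE ORACLE_LOADER
          DEFAULT DIRECTORY ext_dir
          ACCESS PARAMETERS (FIELDS TERMINATED BY ',')
          LOCATION ('emp.csv')
        );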

  • Disk Group requires rebalance because at least one disk is low on space.

    I am getting an alert from OEM Grid as below.
    Message=Disk Group DSKGRP_UAT_01 requires rebalance because at least one disk is low on space.
    Metric=Disk Minimum Free (%) without Rebalance
    I tried this query:
    SELECT group_number, disk_number, total_mb, free_mb, (free_mb/total_mb)*100 FROM v$asm_disk ORDER BY 5 DESC;
    to find the disk that breached the threshold.
    Let me know how I can query and find each file's size on the ASM disks.

    Try this one:
    SELECT a.group_number, a.name, b.bytes,
           COUNT(*) OVER (PARTITION BY a.group_number, a.file_number, a.file_incarnation) AS doublecount
      FROM v$asm_alias a, v$asm_file b
     WHERE a.group_number = b.group_number
       AND a.file_number = b.file_number
       AND a.file_incarnation = b.incarnation;
    Note the column named doublecount: if the value in that column is larger than one, then we have two (or more) ASM aliases pointing to the same file. This should be taken into consideration when computing disk space.
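    If you prefer the command line, asmcmd reports much the same thing (a hedged alternative; the path below the disk group is illustrative):
        $ asmcmd ls -ls +DSKGRP_UAT_01/ORCL/DATAFILE
    The -l and -s flags together print the logical size and the actual space used on disk for each file.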

  • Multiple disk group pros/cons

    hello all,
    This is with regards to 11.2.0.3 DB(RAC) on RHEL 6
    I am trying to identify the pros/cons of using multiple ASM disk groups. I understand Oracle's recommendation/best practice is to have 2 DGs (one data and one flash), and you can place multiple copies of control files/online redo logs there (and that's the way I want to go). But would the same be true if I used different sets of disks? For example, we have multiple RAID 10 devices and multiple SSD devices that we can use for this ASM instance. I was thinking of creating 2 more disk groups (call them DG_SYS1 and DG_SYS2) and using them for my online redo logs, control file, and the TEMP and SYSTEM tablespaces.
    I understand that in a standalone system (where a regular file system is being used), the online redo/control files are usually on their own drives. But with ASM, when I am already using an external RAID 10 config + ASM striping, I assume the IO would be faster; or am I better off using the SSDs that I have for my redo/control files? What would be the pros/cons (besides managing multiple DGs)?

    The reason Oracle suggests having two disk groups is that the very idea of ASM is storage consolidation, taking the best advantage of that storage for all the databases. But having two DGs is not a hard rule. If you have different kinds of databases, or different capacity disks, you probably should have more DGs. Also, I am not sure why you are using RAID 10 striping along with ASM striping?
    Aman....

  • Pros/Cons of a Separate ASM Archivelog Disk Group

    We have a non-ASM best practice of having our archivelogs on their own filesystem. When there is a major problem that fills up this space, the datafiles (separate filesystem) themselves are not affected, and the backup area used by RMAN (separate filesystem) is fully functional to perform archivelog sweeps.
    My DBA team and I have the same concern about having the archivelogs in the FRA along with backups, etc., in ASM. We are proposing a third disk group just for archivelogs, plus a best practice of always having at least 1 spare disk available that can be added to a disk group in an emergency.
    Is there a reason why this is a bad idea or unnecessary? My team is new to ASM and I don't pretend to understand all the real-world intricacies of Oracle managing the FRA space/archivelogs/RMAN backups/etc. Thanks for any insight you can offer.

    I have read and am quite aware of Oracle's recommendations and have been an Oracle DBA since the venerable 7.0.16 release. In fact I have read through some 20 or so related Oracle white papers concerning ASM. However, in the 24 years I have been working with databases I have also become well aware that things don't always go as planned. You will fill up a disk group eventually, whether from unexpectedly high data activity, human error, monitoring being down, or any number of possibilities.
    So if we make the assumption that the FRA disk group will be out of space due to excessive numbers of archivelogs and/or RMAN retention window and backup growth problems, how do we quickly solve the issue while my prod DB is unavailable? Ah, the real world... If archivelogs and backups are in the FRA I really only have three choices: 1. add a disk to the disk group if I have one, or 2. manually delete files, thus potentially impacting recoverability, or 3. possibly temporarily reduce my RMAN recovery window to allow RMAN to clean out "old" backup sets (thus impacting my SLA recovery window). Yes, there are probably other variations on these too.
    Therefore, we are proposing a best practice of a spare disk available and a separate disk group for archivelogs, so we have two potential methods to recover from this scenario. Now back to the original question: is there a reason why a separate disk group for archivelogs is a bad idea or unnecessary?
    Thanks
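    For what it's worth, the proposal amounts to very little setup (a hedged sketch; the disk group name, device path, and dest slot are illustrative):
        -- dedicated disk group for archived logs, separate from the FRA
        CREATE DISKGROUP arch EXTERNAL REDUNDANCY DISK '/dev/mapper/arch_lun1';
        -- send archivelogs there; backups and flashback logs stay in the FRA
        ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=+ARCH' SCOPE=BOTH SID='*';
    A full +ARCH still stalls archiving (and eventually the database), but it can no longer crowd out the backup area.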

  • Question about using mixed size disk groups with ASM

    I support a 10g RAC cluster running 10.2.0.3 and using ASM. We currently have a data disk group that is 600 GB in size, made up of 3 individual 200 GB raw disk devices. The RAC cluster is running on an IBM pSeries server using AIX 5.3.
    I need to add more space, but the system administrator can only give me 100 GB in the new raw device. What will be the impact, if any, of adding this smaller raw disk device into the same ASM data disk group? I understand that ASM tries to "balance" the overall usage of each raw device. Will ASM still work okay, or will it be constantly trying to rebalance the disk group because the one raw device is only half the size of the other three? Any help and advice will be greatly appreciated. Thank you!

    Hi,
    There will be no impact and ASM will automatically balance things.
    Also check :
    ASM Technical Best Practices [ID 265633.1]
    Regards
    Rajesh

  • Need for multiple ASM disk groups on a SAN with RAID5??

    Hello all,
    I've successfully installed clusterware and ASM on a 5-node system. I'm trying to use asmca (11gR2 on RHEL5) to configure the disk groups.
    I have a SAN which was previously used for a 10g ASM RAC setup, so I am reusing the candidate volumes that ASM has found.
    I noticed from the previous incarnation that several disk groups had been created, for example:
    ASMCMD> ls
    DATADG/
    INDEXDG/
    LOGDG1/
    LOGDG2/
    LOGDG3/
    LOGDG4/
    RECOVERYDG/
    Now, this is all on a SAN which basically has two pools of drives, each set up in a RAID5 configuration. Pool 1 contains ASM volumes named ASM1 - ASM32; each of these logical volumes is about 65 GB.
    Pool 2 has volumes ASM33 - ASM48, each of which is about 16 GB in size.
    I used ASM33 from pool 2 by itself to contain my cluster voting disk and OCR.
    My question is: with this type of setup, would creating as many disk groups as listed above really do any good for performance? I was thinking that with all of this on a SAN, with logical volumes on top of a couple of sets of RAID5 disks, would divisions at the disk group level with external redundancy do anything?
    I was thinking of starting with about half of the ASM1-ASM32 'disks' to create one large DATADG disk group, which would house all of the database instances' data, indexes, etc. I'd keep the remaining large candidate disks for later growth.
    I was going to start with the pool of smaller disks (except the one already dedicated to cluster needs) as a decently sized RECOVERYDG to house logs, flashback area, etc. This pool appears to be separate from pool 1, so possibly some speed benefits there.
    But really, is there any need to separate the disk groups, given a SAN with two pools of RAID5 logical volumes?
    If so, can someone give me some ideas why, links on this info, etc.?
    Thank you in advance,
    cayenne

    The best practice is to use 2 disk groups, one for data and the other for the flash/fast recovery area. There really is no need to have a disk group for each type of file; in fact, the more disks in a disk group (to a point, in my experience) the better for performance and space management. However, there are times when multiple disk groups are appropriate (not saying this is one of them, only FYI), such as backup/recovery and life-cycle management. You will typically still get benefit from double striping, i.e. having a SAN with RAID groups presenting multiple LUNs to ASM, and then having ASM use those LUNs in disk groups; I saw this in my own testing. Start off with a minimum of 4 LUNs per disk group, and add in pairs, as this provided optimal performance (at least in my testing). You should also have a set of standard LUN sizes to present to ASM so things are consistent across your enterprise; the sizing is typically done based on your database size. For example:
    300GB LUN: database > 10TB
    150GB LUN: database 1TB to 10 TB
    50GB LUN: database < 1TB
    As databases grow beyond the threshold, the larger LUNs are swapped in and the previous ones are swapped out. With thin provisioning it is a little different, since you only need to resize the ASM LUNs. I'd also recommend having at least 2 of each standard-sized LUN ready to go in case you need space in an emergency. Even with capacity management you never know when something will consume space too quickly.
    ASM is all about space savings, performance, and management :-).
    Hope this helps.
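    A hedged sketch of that starting point (device paths are illustrative; external redundancy assumes the SAN's RAID provides the protection):
        -- four equal LUNs to start
        CREATE DISKGROUP datadg EXTERNAL REDUNDANCY
          DISK '/dev/mapper/asm01', '/dev/mapper/asm02',
               '/dev/mapper/asm03', '/dev/mapper/asm04';
        -- grow in pairs later
        ALTER DISKGROUP datadg ADD DISK '/dev/mapper/asm05', '/dev/mapper/asm06';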

  • Difference between ASM Disk Group, ADVM Volume and ACFS File system

    Q1. What is the difference between an ASM Disk Group and an ADVM Volume ?
    To my mind, an ASM Disk Group is effectively a logical volume for Database files ( including FRA files ).
    11gR2 seems to have introduced the concepts of ADVM volumes and ACFS File Systems.
    An 11gR2 ASM Disk Group can contain :
    ASM Disks
    ADVM volumes
    ACFS file systems
    Q2. ADVM volumes appear to be dynamic volumes.
    However is this therefore not effectively layering a logical volume ( the ADVM volume ) beneath an ASM Disk Group ( conceptually a logical volume as well ) ?
    Worse still if you have left ASM Disk Group Redundancy to the hardware RAID / SAN level ( as Oracle recommend ), you could effectively have 3 layers of logical disk ? ( ASM on top of ADVM on top of RAID/SAN ) ?
    Q3. if it is 2 layers of logical disk ( i.e. ASM on top of ADVM ), what makes this better than 2 layers using a 3rd party volume manager ( eg ASM on top of 3rd party LVM ) - something Oracle encourages against ?
    Q4. ACFS File systems, seem to be clustered file systems for non database files including ORACLE_HOMEs, application exe's etc ( but NOT GRID_HOME, OS root, OCR's or Voting disks )
    Can you create / modify ACFS file systems using ASM?
    The Oracle topology diagram for ASM in the 11gR2 ASM Admin guide shows ACFS as part of ASM. I am not sure from this whether ACFS is part of ASM or ASM sits on top of ACFS?
    Q5. Connected to Q4. there seems to be a number of different ways, ACFS file systems can be created ? Which of the below are valid methods ?
    through ASM ?
    through native OS file system creation ?
    through OEM ?
    through acfsutil ?
    my head is exploding
    Any help and clarification greatly appreciated
    Jim

    Q1 - ADVM volume is a type of special file created in the ASM DG.  Once created, it creates a block device on the OS itself that can be used just like any other block device.  http://docs.oracle.com/cd/E16655_01/server.121/e17612/asmfilesystem.htm#OSTMG30000
    Q2 - the asm disk group is a disk group, not really a logical volume.  It combines attributes of both when used for database purposes, as the database and certain other applications know how to talk "ASM" protocol.  However, you won't find any general purpose applications that can do so.  In addition, some customers prefer to deal directly with file systems and volume devices, which ADVM is made to do.  In your way of thinking, you could have 3 layers of logical disk, but each of them provides different attributes and characteristics.  This is not a bad thing though, as each has a slightly different focus - os file system\device, database specific, and storage centric.
    Q3 - ADVM is specifically developed to extend the characteristics of ASM for use by general OS applications.  It understands the database performance characteristics and is tuned to work well in that situation.  Because it is developed in house, it takes advantage of the ASM design model.  Additionally, rather than having to contact multiple vendors for support, your support is limited to calling Oracle, a one-stop shop.
    Q4 - You can create and modify ACFS file systems using command line tools and ASMCA. Creating and modifying logical volumes happens through SQL (ASM), asmcmd, and ASMCA. EM can also be used for both items. ACFS sits on top of ADVM, which is a file in an ASM disk group. ACFS is aware of the characteristics of ASM/ADVM volumes, and tunes its IO to make best use of those characteristics.
    Q5 - several ways:
    1) Connect to ASM with SQL, use 'alter diskgroup add volume' as Mihael points out.  This creates an ADVM volume.  Then, format the volume using 'mkfs' (*nix) or acfsformat (windows).
    2) Use ASMCA - A gui to create a volume and format a file system.  Probably the easiest if your head is exploding.
    3) Use 'asmcmd' to create a volume, and 'mkfs' to format the ACFS file system.
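    Putting method 1 end-to-end on Linux (a hedged sketch; disk group, volume, device, and mount point names are illustrative):
        SQL> ALTER DISKGROUP data ADD VOLUME vol1 SIZE 10G;
        SQL> SELECT volume_name, volume_device FROM v$asm_volume;  -- find the OS block device
        # mkfs -t acfs /dev/asm/vol1-123
        # mkdir -p /u01/acfs
        # mount -t acfs /dev/asm/vol1-123 /u01/acfs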
    Here is information on ASMCA, with examples:
    http://docs.oracle.com/cd/E16655_01/server.121/e17612/asmca_acfs.htm#OSTMG94348
    Information on command line tools, with examples:
    Basic Steps to Manage Oracle ACFS Systems

  • How do I define 2 disk groups for ocr and voting disks at the oracle grid infrastructure installation window

    Hello,
    It may sound too easy to someone but I need to ask anyway.
    I am in the middle of building Oracle RAC 11.2.0.3 on Linux. I have 2 storage arrays and I created three LUNs in each, so 6 LUNs in total are visible to both servers. If I choose NORMAL as the redundancy level, is it right to choose all 6 disks for OCR_VOTE at grid installation? Or should I choose only 3 of them and configure mirroring at a later stage?
    The reason I am asking is that I didn't find any place to create an ASM disk group for the OCR and voting disks in the Oracle Grid Infrastructure installation window. In fact, I would like to create two disk groups, one containing the three disks from storage 1 and the second containing the three disks from the other storage.
    I believe you will understand what I mean and can help me choose the proper way.
    Thank you.

    Hi,
    You have 2 storage arrays.
    You will need to configure a quorum ASM disk to store a voting disk, because if you lose half or more of your voting disks, nodes get evicted from the cluster, or nodes kick themselves out of the cluster.
    You must have an odd number of voting disks (i.e. 1, 3 or 5), one voting disk per ASM disk, so one storage array will hold more voting disks than the other.
    (e.g. with 3 voting disks, 2 are stored on stg-A and 1 voting disk is stored on stg-B)
    If the storage array that holds the majority of the voting disks fails, the whole cluster (all nodes) goes down; it does not matter that you have the other storage online.
    This is what will happen:
    https://forums.oracle.com/message/9681811#9681811
    You must configure your Clusterware the same way as extended RAC.
    Check this link:
    http://www.oracle.com/technetwork/database/clusterware/overview/grid-infra-thirdvoteonnfs-131158.pdf
    Explaining: How to store OCR, Voting disks and ASM SPFILE on ASM Diskgroup (RAC or RAC Extended) | Levi Pereira | Oracl…
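    A hedged sketch of the disk group this leads to (device paths and the NFS quorum path are illustrative; QUORUM FAILGROUP needs compatible.asm 11.2 or higher):
        CREATE DISKGROUP ocrvote NORMAL REDUNDANCY
          FAILGROUP stg_a DISK '/dev/mapper/stgA_lun1', '/dev/mapper/stgA_lun2'
          FAILGROUP stg_b DISK '/dev/mapper/stgB_lun1', '/dev/mapper/stgB_lun2'
          QUORUM FAILGROUP nfs_q DISK '/voting_nfs/vote_01'
          ATTRIBUTE 'compatible.asm' = '11.2';
    The quorum failure group holds only the third voting file, so losing either array still leaves a voting majority.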

  • How many disk groups for +DATA?

    Hi All,
    Does Oracle recommend having one big/shared ASM disk group for all of the databases?
    In our case we are going to have 11.2 and 10g RAC running against 11.2 ASM...
    Am I correct in saying that I have to set ASM's compatibility attribute to 10 in order to be able to use the same disks?
    Is this a good idea? Or should I create another disk group for the 10g DBs?
    I'm assuming there are features that will not be available when the compatibility is reduced to 10g... (a sketch of the attributes involved follows)
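    On the compatibility point, the knob is a per-disk-group attribute rather than an instance parameter, and it can be raised but never lowered (a hedged sketch; the disk group name and device are illustrative):
        -- compatible.rdbms must stay at 10.1 for the 10g databases to mount the group;
        -- compatible.asm can be advanced independently
        CREATE DISKGROUP data EXTERNAL REDUNDANCY
          DISK '/dev/mapper/asm_lun1'
          ATTRIBUTE 'compatible.asm' = '11.2', 'compatible.rdbms' = '10.1';
        -- verify what a group currently allows
        SELECT g.name, a.name, a.value
          FROM v$asm_diskgroup g JOIN v$asm_attribute a ON a.group_number = g.group_number
         WHERE a.name LIKE 'compatible%';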

    Oviwan wrote:
    What kind of storage system do you have? NAS? What is your protocol between server and storage? TCP/IP (=> NFS)? FC? ...
    If you have a storage array with several disks then you mostly create more than one LUN (RAID 0, 1, 5 or whatever). If the requirement is a 1TB disk group, then I would not create one 1TB LUN; I would create 5x200GB LUNs, for example, just for the case where you have to extend the disk group with a LUN of the same size. If it is one 1TB LUN then you have to add another 1TB LUN; if there are 5x200GB LUNs then you can simply add 200GB.
    I have nowhere found a document that says "if you have exactly 16 LUNs for a disk group it's best"; it depends on OS, storage, etc.
    So for a 50GB disk group I would create just one 50GB LUN, for example.
    hth

    Yes, it's NAS, connected using iSCSI. It has 5 disks of 1TB each, configured with RAID5. I found the requirements below on ASM; they indicate 4 LUNs as the minimum per disk group, but they don't clarify whether that's for external redundancy or for the mirrored redundancy types.
    •A minimum of four LUNs (Oracle ASM disks) of equal size and performance is recommended for each disk group.
    •Ensure that all Oracle ASM disks in a disk group have similar storage performance and availability characteristics. In storage configurations with mixed speed drives, such as 10K and 15K RPM, I/O performance is constrained by the slowest speed drive.
    •Oracle ASM data distribution policy is capacity-based. Ensure that Oracle ASM disks in a disk group have the same capacity to maintain balance.
    •Maximize the number of disks in a disk group for maximum data distribution and higher I/O bandwidth.
    •Create LUNs using the outside half of disk drives for higher performance. If possible, use small disks with the highest RPM.
    •Create large LUNs to reduce LUN management overhead.
    •Minimize I/O contention between ASM disks and other applications by dedicating disks to ASM disk groups for those disks that are not shared with other applications.
    •Choose a hardware RAID stripe size that is a power of 2 and less than or equal to the size of the ASM allocation unit.
    •Avoid using a Logical Volume Manager (LVM) because an LVM would be redundant. However, there are situations where certain multipathing or third party cluster solutions require an LVM. In these situations, use the LVM to represent a single LUN without striping or mirroring to minimize the performance impact.
    •For Linux, when possible, use the Oracle ASMLIB feature to address device naming and permission persistency.
    ASMLIB provides an alternative interface for the ASM-enabled kernel to discover and access block devices. ASMLIB provides storage and operating system vendors the opportunity to supply extended storage-related features. These features provide benefits such as improved performance and greater data integrity.
    One more question about fdisk partitioning: is it correct that we should only create one partition per LUN (5x200GB LUNs in my case)? Is it because this way I will have a more consistent set of LUNs (in terms of performance)?

  • Performance impact if multiple databases share the same disk group

    Hi All,
    I need help in understanding what would be the performance impact if multiple databases share the same ASM disk group.
    Is there any documentation that explains the impact on performance if there is any in doing so.
    Your help is very much appreciated.
    Thanks,
    Ravi.

    Application performance could be impacted, or not, depending upon the total disk I/O activity of all the databases sharing the disk group.

  • EM Alert: Warning:+ASM Disk Group requires rebalance

    Environment:
    O.S Version : HP-UX B.11.31 U ia64
    Oracle DB Version : 11.2.0.3.0 , Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    Database files are on : ASM Disk Group
    It is about the ASM Diskgroup low disk space alert by Oracle Enterprise Manager.
    Message=Disk Group DG_FLASH_01 requires rebalance because at least one disk is low on space.
    Metric=Disk Minimum Free (%) without Rebalance
    Metric value=18.961548
    Disk Group Name=DG_FLASH_01
    Severity=Warning
    Target Name=+ASM_dbsrver.siva.com
    Target type=Automatic Storage Management
    There is only 1 LUN assigned to this disk group +DG_FLASH_01.
    We have an Oracle Enterprise Manager Grid Control job MN_DBSRVR_DEL_ARCHIVELOGS that runs every 12 hours, at 1 AM and again at 1 PM daily. It cleans up archivelogs more than 3 days old. The FLASH disk group is continuously being written with new files for both archivelogs and flashback logs.
    If there were multiple disks and a vast difference between the files on the different LUNs, then a rebalance would be good to run.
    How do we address this recurring alert of "Disk Group DG_FLASH_01 requires rebalance because at least one disk is low on space" with only one LUN in the ASM disk group?

    As I stated earlier, there is only one disk in this disk group:
    DISK_NUMBER      OS_MB   TOTAL_MB    FREE_MB NAME                           PATH
              0      65536      65536      12995 DG_FLASH_01_0000        /devasm/gc/ora_asm_gc_b03_a14_d10_08_36
              0      65536      65536      43064 DG_DATA_01_0000         /devasm/gc/ora_asm_gc_b03_a13_d12_08_35
    So a disk rebalance is not required.
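    For anyone reproducing this, a query like the following against v$asm_disk produces that listing (a minimal sketch):
        SELECT disk_number, os_mb, total_mb, free_mb, name, path
          FROM v$asm_disk
         ORDER BY name;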
