ASM striping & Failgroups

Hi Gurus,
After Googling I still have only a blurry picture of ASM striping and fail groups.
Could you please explain 1. the ASM striping concept,
2. the ASM fail group concept? At least guide me to some URLs.

udayjampani wrote:
Please look at the following scenario and explain it to me.
I have chosen the normal redundancy level and didn't provide any fail group specification.
So I have two disk groups, DATA and FRA, which in turn have four disks: DATA_0000, DATA_0001, FRA_0000, FRA_0001.
1. From the documentation, DATA_0000 and DATA_0001 will automatically be two failure groups. Is that right? Yes, the two disks would be in two different fail groups, with each fail group owning one disk.
2. But from the striping concept, a mirrored file is stored across disks: for a table, some data is stored on one disk and the rest on the other. But this scenario will not serve data when there is a disk failure, right? You are mixing two things; don't combine striping and mirroring. You have two disks over which the mirroring is done, but since you have just two disks, if one is gone there won't be any other disk available to support the redundancy level of the disk group, so the loss of one disk eventually leads to the dismounting of the entire disk group. To confirm this, create a disk group with 3 disks and then lose one of them.
Aman....
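Aman's suggested test can be sketched as follows; the disk paths and disk group name here are placeholders, so adjust them to your environment:

```sql
-- Sketch only: a normal-redundancy disk group with three one-disk
-- failure groups, so it can survive the loss of any single disk.
CREATE DISKGROUP data3 NORMAL REDUNDANCY
  FAILGROUP fg1 DISK '/dev/oracleasm/disks/DISK1'
  FAILGROUP fg2 DISK '/dev/oracleasm/disks/DISK2'
  FAILGROUP fg3 DISK '/dev/oracleasm/disks/DISK3';

-- Verify which failure group each disk was placed in:
SELECT name, failgroup, state
FROM   v$asm_disk
WHERE  name LIKE 'DATA3%';
```

With three failure groups, losing one disk still leaves two partners for mirroring; with only two, the same loss dismounts the whole disk group.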

Similar Messages

  • Is ASM Quorum Failgroup Setup Mandatory for Normal and High Redundancy?

    Hi all,
    Since I have worked with version 11.2 I have had a concept of the Quorum Failgroup and its purpose; now, reading the 12c documentation, I'm confused about some aspects and want your views on this subject.
    My Concept About Quorum Failgroup:
    The Quorum Failgroup was introduced in 11.2 for setups with Extended RAC and/or for setups with disk groups that have only 2 ASM disks using normal redundancy or 3 ASM disks using high redundancy.
    But if we are not using Extended RAC, and/or have a normal redundancy disk group with 3 or more ASM disks or a high redundancy disk group with 5 or more ASM disks, the use of a Quorum Failgroup is optional and most likely not used.
    ==============================================================================
    The documentation isn't clear about WHEN we must use a Quorum Failgroup.
    https://docs.oracle.com/database/121/CWLIN/storage.htm#CWLIN287
    7.4.1 Configuring Storage for Oracle Automatic Storage Management
      2. Choose the Oracle ASM redundancy level to use for the Oracle ASM disk group.
    Except when using external redundancy, Oracle ASM mirrors all Oracle Clusterware files in separate failure groups within a disk group. A quorum failure group, a special type of failure group, contains mirror copies of voting files when voting files are stored in normal or high redundancy disk groups. If the voting files are in a disk group, then the disk groups that contain Oracle Clusterware files (OCR and voting files) have a higher minimum number of failure groups than other disk groups because the voting files are stored in quorum failure groups.
    A quorum failure group is a special type of failure group that is used to store the Oracle Clusterware voting files. The quorum failure group is used to ensure that a quorum of the specified failure groups are available. When Oracle ASM mounts a disk group that contains Oracle Clusterware files, the quorum failure group is used to determine if the disk group can be mounted in the event of the loss of one or more failure groups. Disks in the quorum failure group do not contain user data, therefore a quorum failure group is not considered when determining redundancy requirements in respect to storing user data.
    From the documentation above, I could understand that ANY disk group that uses normal or high redundancy MUST have a Quorum Failgroup (no matter what the setup is).
    In my view, if a Quorum Failgroup is used to ENSURE that a quorum of the specified failure groups is available, then we must use it; in other words, it is mandatory.
    What's your view on this matter?
    ==============================================================================
    Another Issue:
    Suppose the following scenario (example using NORMAL Redundancy).
    Example 1
    Disk group with normal redundancy and 3 ASM disks.
    DSK_0000  - FG1 (QUORUM FAILGROUP)
    DSK_0001  - FG2 (REGULAR FAILGROUP)
    DSK_0002  - FG3 (REGULAR FAILGROUP)    
    ASM will allow creating only one Quorum Failgroup and two Regular Failgroups (one failgroup per ASM disk).
    Storing the voting disk in this disk group, all three ASM disks will be used: one voting file on each ASM disk.
    Storing the OCR in this disk group, the two Regular Failgroups will be used: only one OCR, with its primary extents and the mirrors of its extents across the two failgroups (the quorum failgroup will not be used for the OCR).
    Example 2
    Disk group with normal redundancy and 5 ASM disks.
    DSK_0000  - FG1 (REGULAR FAILGROUP)
    DSK_0001  - FG2 (REGULAR FAILGROUP)
    DSK_0002  - FG3 (QUORUM FAILGROUP) 
    DSK_0003  - FG4 (QUORUM FAILGROUP)
    DSK_0004  - FG5 (QUORUM FAILGROUP)
    ASM will allow creating up to three Quorum Failgroups and two Regular Failgroups.
    Storing the voting disk in this disk group, all three QUORUM FAILGROUPs will be used; the REGULAR FAILGROUPs will not be used.
    Storing the OCR in this disk group, the two Regular Failgroups will be used: only one OCR, with its primary extents and the mirrors of its extents across the two failgroups (no quorum failgroup will be used for the OCR).
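    A quorum failure group is declared at disk group creation with the QUORUM keyword. A minimal sketch of Example 1, assuming hypothetical device names (the FAILGROUP_TYPE column of V$ASM_DISK distinguishes regular from quorum failgroups):

    ```sql
    -- Sketch of Example 1: two regular failgroups plus one quorum failgroup.
    CREATE DISKGROUP ocrvote NORMAL REDUNDANCY
      FAILGROUP        fg1 DISK '/dev/asmdisk1'
      FAILGROUP        fg2 DISK '/dev/asmdisk2'
      QUORUM FAILGROUP fg3 DISK '/dev/asmdisk3';

    -- Which failgroups are regular and which are quorum:
    SELECT name, failgroup, failgroup_type FROM v$asm_disk;
    ```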
    This part right below is confusing to me.
    https://docs.oracle.com/database/121/CWLIN/storage.htm#CWLIN287
    7.4.1 Configuring Storage for Oracle Automatic Storage Management
      2. Choose the Oracle ASM redundancy level to use for the Oracle ASM disk group.
    The quorum failure group is used to determine if the disk group can be mounted in the event of the loss of one or more failure groups.
    Normal redundancy: For Oracle Clusterware files, a normal redundancy disk group requires a minimum of three disk devices (two of the three disks are used by failure groups and all three disks are used by the quorum failure group) and provides three voting files and one OCR and mirror of the OCR. When using a normal redundancy disk group, the cluster can survive the loss of one failure group. For most installations, Oracle recommends that you select normal redundancy disk groups.
    High redundancy:  For Oracle Clusterware files, a high redundancy disk group requires a minimum of five disk devices (three of the five disks are used by failure groups and all five disks are used by the quorum failure group) and provides five voting files and one OCR and two mirrors of the OCR. With high redundancy, the cluster can survive the loss of two failure groups.
    Documentation says:
    minimum of three disk devices:  two of the three disks are used by failure groups and all three disks are used by the quorum failure group for normal redundancy.
    minimum of five disk devices: three of the five disks are used by failure groups and all five disks are used by the quorum failure group for high redundancy.
    Questions :
    What does this USED mean?
    How are all the disks USED by the quorum failgroup?
    Does this USED mean used to determine whether the disk group can be mounted?
    How does the Quorum Failgroup determine whether a disk group can be mounted? What is the math?
    Consider the following scenario:
    Disk group with normal redundancy and 3 ASM disks (two Regular Failgroups and one Quorum Failgroup).
    If we lose the Quorum Failgroup, we can mount the disk group using the force option.
    If we lose one Regular Failgroup, we can mount the disk group using the force option.
    We can't lose two failgroups at the same time.
    If I don't use a Quorum Failgroup (i.e. only Regular Failgroups), the result of the test is the same.
    I see no difference between using a Quorum Failgroup and only Regular Failgroups in this matter.
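    The mount tests described above were presumably run along these lines (the disk group name is an example):

    ```sql
    -- After losing one failgroup, a plain MOUNT fails; MOUNT FORCE
    -- brings the disk group up with the missing disks offline.
    ALTER DISKGROUP data MOUNT FORCE;

    -- Inspect which disks are offline after the forced mount:
    SELECT name, failgroup, mode_status FROM v$asm_disk;
    ```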
    ======================================================================================
    When Oracle docs says:
    one OCR and mirror of the OCR for normal redundancy
    one OCR and two mirrors of the OCR for high redundancy
    What this means is that we have ONLY ONE OCR file and mirrors of its extents, but the documentation says 1 mirror of the OCR (normal redundancy) and 2 mirrors of the OCR (high redundancy).
    What does that sound like: a single file, or two or more files?
    Please don't confuse it with ocrconfig mirror location.

    Hi Levi Pereira,
    Sorry for the late answer. As per the 12c Release 1 documentation, yes, you are right: only the voting disk will be placed on the quorum fail groups:
    The redundancy level that you choose for the Oracle ASM disk group determines how Oracle ASM mirrors files in the disk group, and determines the number of disks and amount of disk space that you require. If the voting files are in a disk group, then the disk groups that contain Oracle Clusterware files (OCR and voting files) have a higher minimum number of failure groups than other disk groups because the voting files are stored in quorum failure groups.
    Managing Oracle Cluster Registry and Voting Files
    Regarding your question "I want an answer about whether it is mandatory to use a Quorum Failgroup when using Normal or High Redundancy": no, it isn't. I have a normal redundancy disk group in which I store the voting disk with no Quorum Failgroup. Indeed, a quorum failgroup would prevent you from storing data on the disks within this kind of failgroup, as per the documentation:
    A quorum failure group is a special type of failure group that is used to store the Oracle Clusterware voting files. The quorum failure group is used to ensure that a quorum of the specified failure groups are available. When Oracle ASM mounts a disk group that contains Oracle Clusterware files, the quorum failure group is used to determine if the disk group can be mounted if there is a loss of one or more failure groups. Disks in the quorum failure group do not contain user data, therefore a quorum failure group is not considered when determining redundancy requirements in respect to storing user data.
    Managing Oracle Cluster Registry and Voting Files
    And as per the documentation, my answers are inline below (originally in red):
    Could you explain what documentation meant:
    minimum of three disk devices:  two of the three disks are used by failure groups and all three disks are used by the quorum failure group for normal redundancy.
    how are all three disks used by the quorum failgroup? [I don't think this is correct; it sounds a bit strange and it is the opposite of what comes right before it...]
    Regards.

  • Question on ASM and Failgroups

    Hello everybody,
    I have an ASM disk group that is running out of space. The disk group has NORMAL redundancy and three failure groups with two disks each. All of the disks are the same size (50G).
    Now the disk group is almost full and I have to add space, and I have two raw devices of 36G each available.
    I read that Oracle recommends having all the failure groups the same size. So does that mean I cannot use those two free disks for ASM?
    Because if I create another failgroup with them, that group would have a different size than the others. And if I add them to an existing group (if that is possible at all), that group would also have a different size.
    Thanks for your thoughts.
    Mario

    Hello Kuljeet Pal Singh,
    thank you for your quick reply. Maybe I have not made clear what my exact problem/question is:
    I have one Diskgroup: DG1.
    This Diskgroup has three Failure Groups: FG1,FG2,FG3.
    Each failure group has two disks: DISK1A (belongs to FG1), DISK1B (FG1), DISK2A (FG2), DISK2B (FG2), DISK3A (FG3), DISK3B (FG3).
    Each of these disks is 50G in size.
    Now I have two more disks to add, let's say DISK4A and DISK4B, but they are only 36G each.
    If I create a new failure group FG4, it will not have the same size as the other FGs.
    If I add them to FG1, FG2, or FG3, there would also be an imbalance in size.
    I am not sure whether I interpret the Oracle statement "Failure groups should all be of the same size" correctly.
    It sounds clear, but on the other hand it would mean that once I start with 50G disks, I always have to use 50G disks in the future. What if they are out of stock, etc.?
    Mario
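    If the size difference is acceptable (the equal-size guideline is a recommendation for balanced space usage, not a hard requirement), adding the two 36G disks to two of the existing failure groups could look roughly like this; the raw device paths are hypothetical:

    ```sql
    -- Add DISK4A to FG1 and DISK4B to FG2, then rebalance.
    ALTER DISKGROUP dg1
      ADD FAILGROUP fg1 DISK '/dev/raw/raw7'   -- DISK4A, 36G
          FAILGROUP fg2 DISK '/dev/raw/raw8'   -- DISK4B, 36G
      REBALANCE POWER 4;

    -- Watch the rebalance progress:
    SELECT operation, state, est_minutes FROM v$asm_operation;
    ```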

  • ASM instance won't mount diskgroup

    Hi, I have 10g Release 2 installed on CentOS 4.4, and I use ASM striping with 4 raw disks.
    I had a system crash due to a power failure and now ASM won't mount the disk group.
    export $ORACLE_HOME=+ASM
    SQL> startup mount;
    ASM instance started
    Total System Global Area 130023424 bytes
    Fixed Size 2071000 bytes
    Variable Size 102786600 bytes
    ASM Cache 25165824 bytes
    ORA-15110: no diskgroups mounted
    SQL> alter diskgroup RESEARCH1 mount;
    alter diskgroup RESEARCH1 mount
    ERROR at line 1:
    ORA-15032: not all alterations performed
    ORA-15063: ASM discovered an insufficient number of disks for diskgroup
    "RESEARCH1"
    Now when I use /etc/init.d/oracleasm listdisks I can see all my disks:
    DISK1
    DISK2
    DISK3
    DISK4
    Then I tried to change asm_diskstring to point to the mount point; here is my parameter file:
    *.asm_diskgroups='RESEARCH1'
    +ASM.asm_diskgroups='RESEARCH1' #Manual Dismount
    *.asm_diskstring='/dev/oracleasm/disks'
    *.background_dump_dest='/home/oracle/product/10.2.0/db_1/admin/+ASM/bdump'
    *.core_dump_dest='/home/oracle/product/10.2.0/db_1/admin/+ASM/cdump'
    *.instance_type='asm'
    *.large_pool_size=12M
    *.remote_login_passwordfile='EXCLUSIVE'
    *.user_dump_dest='/home/oracle/product/10.2.0/db_1/admin/+ASM/udump'
    any ideas?
    Thanks
    Assaf

    Hi,
    Using the oracleasm library utility, you can configure it as shown below:
    # /etc/init.d/oracleasm configure
    Default user to own the driver interface [oracle]: oracle
    Default group to own the driver interface [dba]: dba
    Start Oracle ASM library driver on boot (y/n) [y]: y
    Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
    Writing Oracle ASM library driver configuration: [ OK ]
    Scanning system for ASM disks: [ OK ]
    # /etc/init.d/oracleasm enable
    Thanks
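    One thing worth checking in the posted parameter file: asm_diskstring points at the directory itself rather than the disks inside it, so discovery may find nothing. A hedged sketch of the fix:

    ```sql
    -- Make the discovery string match the disk devices, not the directory.
    -- In the pfile: *.asm_diskstring='/dev/oracleasm/disks/*'
    -- Or, for ASMLib-provisioned disks: *.asm_diskstring='ORCL:*'
    ALTER SYSTEM SET asm_diskstring = '/dev/oracleasm/disks/*';
    ALTER DISKGROUP RESEARCH1 MOUNT;
    ```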

  • I need help on how to set up hardware RAID for ASM.

    In the "Recommendations for Storage Preparation" section of the following documentation: http://download.oracle.com/docs/cd/B28359_01/server.111/b31107/asmprepare.htm
    It mentions:
    --Use the storage array hardware RAID 1 mirroring protection when possible to reduce the mirroring overhead on the server.
    What is a good RAID 1 configuration considering my machine setup?
    (I put my machine info below.)
    Should I go for something like:
    5 * raid 1 of 2 disks in each raid: disk group DATA
    5 * raid 1 of 2 disks in each raid: disk group FRA
    Then ASM will take care of all the striping between the 5 RAID sets inside a disk group, right?
    OR, I go for:
    1 * raid 1 of 10 disks: disk group DATA
    1 * raid 1 of 10 disks: disk group FRA
    In the second configuration, does ASM recognize that there are 10 disks in my RAID configuration and stripe on those disks? Or, to use ASM striping, do I need to have many RAID sets in a disk group?
    Here are my machine characteristics:
    O/s is Oracle Enterprise Linux 4.5 64 bit
    Single instance on Enterprise Edition 10g r2
    200 GIG database size.
    High "oltp" environment.
    Estimated growth of 60 to 80GIG per year
    50-70GIG archivelogs generation per Day
    Flashback time is 24 hours: 120GIG of flashback space in avg
    I keep a Local backup. Then push to another disk storage, then on tape.
    General Hardware Info:
    Dell PowerEdge 2950
    16 GIG RAM
    2 * 64 bit dual core CPU's
    6 * local 300G/15rpm disks
    Additional Storage:
    Dell PowerVault MD1000
    15 * 300G/15rpm Disks
    So I have 21 Disks in total.

    I would personally prefer the first configuration and let ASM stripe the disks. Generally speaking, many RAID controllers will stripe then mirror (0+1) when you tell it to build a striped and mirrored RAID set on 10 disks. Some will mirror then stripe (1+0) which is what most people prefer. That's because when a 1+0 configuration has a disk failure, only a single RAID 1 set needs to be resync'd. The other members of the stripe won't have to be resynchronized.
    So, I'd prefer to have ASM manage 5 LUNs and let ASM stripe across those 5 LUNs in each disk group. It also increases your ability to reorganize your storage: if you need 20% more space in DATA and can afford 20% less in FRA, you can easily move one of your RAID 1 LUNs from FRA to DATA.
    That's my 0.02.
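    The first configuration would then present five mirrored LUNs per disk group to ASM, which stripes across them with external redundancy; a sketch with hypothetical multipath device names:

    ```sql
    -- ASM stripes across the five RAID 1 LUNs; the array handles mirroring.
    CREATE DISKGROUP data EXTERNAL REDUNDANCY
      DISK '/dev/mapper/lun1', '/dev/mapper/lun2', '/dev/mapper/lun3',
           '/dev/mapper/lun4', '/dev/mapper/lun5';
    ```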

  • Multiple disk group pros/cons

    hello all,
    This is with regards to 11.2.0.3 DB(RAC) on RHEL 6
    I am trying to identify the pros/cons of using multiple ASM disk groups. I understand Oracle's recommendation/best practice is to have 2 DGs (one data and one flash), and you can place multiple copies of the control files/online redo logs there (and that's the way I want to go). But would the same be true if I use different sets of disks? For example, we have multiple RAID 10 devices and multiple SSD devices that we can use for this ASM instance. I was thinking of creating 2 more disk groups (call them DG_SYS1 and DG_SYS2) and using them for my online redo logs, control file, and temp and system tablespaces.
    I understand that in a standalone system (where a regular file system is used), the online redo/control files are usually on their own drives, but with ASM, when I am already using an external RAID 10 config plus ASM striping, I assume the I/O would be faster. Or am I better off using the SSDs that I have for my redo/control files? What would be the pros/cons (besides managing multiple DGs)?

    The reason Oracle suggests having two disk groups is that the very idea of ASM is storage consolidation: taking the best advantage of that storage for all the databases. But having two DGs is not a norm. If you have different kinds of databases, or different-capacity disks, you probably should have more DGs. Also, I am not sure why you are using RAID 0 striping along with ASM striping?
    Aman....
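    If the SSD route is taken, a dedicated disk group for redo and control files could be sketched like this; the device paths are hypothetical and the group name follows the poster's DG_SYS1 naming:

    ```sql
    -- Dedicated disk group on the SSDs (the array/OS already provides
    -- protection, hence external redundancy; use normal redundancy otherwise).
    CREATE DISKGROUP dg_sys1 EXTERNAL REDUNDANCY
      DISK '/dev/ssd1', '/dev/ssd2';

    -- Place an additional redo log member there:
    ALTER DATABASE ADD LOGFILE MEMBER '+DG_SYS1' TO GROUP 1;
    ```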

  • Oracle performance tuning general question

    Hi,
    Below is a list of areas of the Oracle DB for which tuning activities are done. You are invited to comment on whether this is a complete list or needs some additions or deletions. As I'm learning performance tuning for Oracle these days, I want to expand my knowledge by sharing what I'm learning and what I need to learn.
    So comment with open hearts on it, especially the experts and gurus.
    Here is the list:
    1- Planning for performance, including storage considerations (whether it is SAN, NAS, or DAS), network planning, and host OS planning with proper configuration for running Oracle.
    2- Database design (not under-normalized and not over-normalized, with proper usage of indexes, views, and stored procedures).
    3- Instance tuning (memory structures + background processes).
    4- Session tuning.
    5- Segment space tuning.
    6- SQL tuning.
    This is what I've learned up to now. If it needs additions, kindly tell me what they are. Please also provide me links (good and precise ones) to performance tuning tutorials on the web. Also note that I'm discussing this w.r.t. a single-instance non-RAC DB.
    Looking for good suggestions
    Regards,
    Abbasi

    Hello,
    These are the Oracle course contents:
    Contents
    Preface
    1 Introduction
    Course Objectives 1-2
    Organization 1-3
    Agenda 1-4
    What Is Not Included 1-6
    Who Tunes? 1-7
    What Does the DBA Tune? 1-8
    How to Tune 1-10
    Tuning Methodology 1-11
    Effective Tuning Goals 1-13
    General Tuning Session 1-15
    Summary 1-17
    2 Basic Tuning Tools
    Objectives 2-2
    Performance Tuning Diagnostics 2-3
    Performance Tuning Tools 2-4
    Tuning Objectives 2-5
    Top Wait Events 2-6
    DB Time 2-7
    CPU and Wait Time Tuning Dimensions 2-8
    Time Model: Overview 2-9
    Time Model Statistics Hierarchy 2-10
    Time Model Example 2-12
    Dynamic Performance Views 2-13
    Dynamic Performance Views: Usage Examples 2-14
    Dynamic Performance Views: Considerations 2-15
    Statistic Levels 2-16
    Statistics and Wait Events 2-18
    System Statistic Classes 2-19
    Displaying Statistics 2-20
    Displaying SGA Statistics 2-22
    Wait Events 2-23
    Using the V$EVENT_NAME View 2-24
    Wait Classes 2-25
    Displaying Wait Event Statistics 2-26
    Oracle Internal & Oracle Academy Use Only
    Commonly Observed Wait Events 2-28
    Using the V$SESSION_WAIT View 2-29
    Precision of System Statistics 2-31
    Using Features of the Packs 2-32
    Accessing the Database Home Page 2-34
    Enterprise Manager Performance Pages 2-35
    Viewing the Alert Log 2-37
    Using Alert Log Information as an Aid in Tuning 2-38
    User Trace Files 2-40
    Background Processes Trace Files 2-41
    Summary 2-42
    Practice 2 Overview: Using Basic Tools 2-43
    3 Using Automatic Workload Repository
    Objectives 3-2
    Automatic Workload Repository: Overview 3-3
    Automatic Workload Repository Data 3-4
    Workload Repository 3-5
    Database Control and AWR 3-6
    AWR Snapshot Purging Policy 3-7
    AWR Snapshot Settings 3-8
    Manual AWR Snapshots 3-9
    Managing Snapshots with PL/SQL 3-10
    Generating AWR Reports in EM 3-11
    Generating AWR Reports in SQL*Plus 3-12
    Reading the AWR Report 3-13
    Snapshots and Periods Comparisons 3-14
    Compare Periods: Benefits 3-15
    Compare Periods: Results 3-16
    Compare Periods: Report 3-17
    Compare Periods: Load Profile 3-18
    Compare Periods: Top Events 3-19
    Summary 3-20
    Practice 3 Overview: Using AWR-Based Tools 3-21
    4 Defining Problems
    Objectives 4-2
    Defining the Problem 4-3
    Limit the Scope 4-4
    Setting the Priority 4-5
    Top Wait Events 4-6
    Setting the Priority: Example 4-7
    Top SQL Reports 4-8
    Common Tuning Problems 4-9
    Tuning Life Cycle Phases 4-11
    Tuning During the Life Cycle 4-12
    Application Design and Development 4-13
    Testing: Database Configuration 4-14
    Deployment 4-15
    Production 4-16
    Migration, Upgrade, and Environment Changes 4-17
    ADDM Tuning Session 4-18
    Performance Versus Business Requirements 4-19
    Performance Tuning Resources 4-20
    Filing a Performance Service Request 4-21
    RDA Report 4-22
    Monitoring and Tuning Tool: Overview 4-23
    Summary 4-25
    Practice 4 Overview: Identifying the Problem 4-26
    5 Using Metrics and Alerts
    Objectives 5-2
    Metrics, Alerts, and Baselines 5-3
    Limitation of Base Statistics 5-4
    Typical Delta Tools 5-5
    Oracle Database 11g Solution: Metrics 5-6
    Benefits of Metrics 5-7
    Viewing Metric History Information 5-8
    Using EM to View Metric Details 5-9
    Statistic Histograms 5-10
    Histogram Views 5-11
    Server-Generated Alerts 5-12
    Database Control Usage Model 5-13
    Setting Thresholds 5-14
    Creating and Testing an Alert 5-15
    Metric and Alert Views 5-16
    View User-Defined SQL Metrics 5-17
    Create User-Defined SQL Metrics 5-18
    View User-Defined Host Metrics 5-19
    Create User-Defined Host Metrics 5-20
    Summary 5-21
    Practice Overview 5: Working with Metrics 5-22
    6 Baselines
    Objectives 6-2
    Comparative Performance Analysis with AWR Baselines 6-3
    Automatic Workload Repository Baselines 6-4
    Moving Window Baseline 6-5
    Baselines in Performance Page Settings 6-6
    Baseline Templates 6-7
    AWR Baselines 6-8
    Creating AWR Baselines 6-9
    Single AWR Baseline 6-10
    Creating a Repeating Baseline Template 6-11
    Managing Baselines with PL/SQL 6-12
    Generating a Baseline Template for a Single Time Period 6-13
    Creating a Repeating Baseline Template 6-14
    Baseline Views 6-15
    Performance Monitoring and Baselines 6-17
    Defining Alert Thresholds Using a Static Baseline 6-19
    Using EM to Quickly Configure Adaptive Thresholds 6-20
    Changing Adaptive Threshold Settings 6-22
    Summary 6-23
    Practice 6: Overview Using AWR Baselines 6-24
    7 Using AWR-Based Tools
    Objectives 7-2
    Automatic Maintenance Tasks 7-3
    Maintenance Windows 7-4
    Default Maintenance Plan 7-5
    Automated Maintenance Task Priorities 7-6
    Tuning Automatic Maintenance Tasks 7-7
    ADDM Performance Monitoring 7-8
    ADDM and Database Time 7-9
    DBTime-Graph and ADDM Methodology 7-10
    Top Performance Issues Detected 7-12
    Database Control and ADDM Findings 7-13
    ADDM Analysis Results 7-14
    ADDM Recommendations 7-15
    Database Control and ADDM Task 7-16
    Changing ADDM Attributes 7-17
    Retrieving ADDM Reports by Using SQL 7-18
    Active Session History: Overview 7-19
    Active Session History: Mechanics 7-20
    ASH Sampling: Example 7-21
    Accessing ASH Data 7-22
    Dump ASH to File 7-23
    Analyzing the ASH Data 7-24
    Generating ASH Reports 7-25
    ASH Report Script 7-26
    ASH Report: General Section 7-27
    ASH Report Structure 7-28
    ASH Report: Activity Over Time 7-29
    Summary 7-30
    Practice 7 Overview: Using AWR-Based Tools 7-31
    8 Monitoring an Application
    Objectives 8-2
    What Is a Service? 8-3
    Service Attributes 8-4
    Service Types 8-5
    Creating Services 8-6
    Managing Services in a Single-Instance Environment 8-7
    Everything Switches to Services 8-8
    Using Services with Client Applications 8-9
    Using Services with the Resource Manager 8-10
    Services and Resource Manager with EM 8-11
    Services and the Resource Manager: Example 8-12
    Using Services with the Scheduler 8-13
    Services and the Scheduler with EM 8-14
    Services and the Scheduler: Example 8-16
    Using Services with Parallel Operations 8-17
    Using Services with Metric Thresholds 8-18
    Changing Service Thresholds by Using EM 8-19
    Services and Metric Thresholds: Example 8-20
    Service Aggregation and Tracing 8-21
    Top Services Performance Page 8-22
    Service Aggregation Configuration 8-23
    Service Aggregation: Example 8-24
    Client Identifier Aggregation and Tracing 8-25
    trcsess Utility 8-26
    Service Performance Views 8-27
    Summary 8-29
    Practice 8 Overview: Using Services 8-30
    9 Identifying Problem SQL Statements
    Objectives 9-2
    SQL Statement Processing Phases 9-3
    Parse Phase 9-4
    SQL Storage 9-5
    Cursor Usage and Parsing 9-6
    SQL Statement Processing Phases: Bind 9-8
    SQL Statement Processing Phases: Execute and Fetch 9-9
    Processing a DML Statement 9-10
    COMMIT Processing 9-12
    Role of the Oracle Optimizer 9-13
    Identifying Bad SQL 9-15
    TOP SQL Reports 9-16
    What Is an Execution Plan? 9-17
    Methods for Viewing Execution Plans 9-18
    Uses of Execution Plans 9-19
    DBMS_XPLAN Package: Overview 9-20
    EXPLAIN PLAN Command 9-22
    EXPLAIN PLAN Command: Example 9-23
    EXPLAIN PLAN Command: Output 9-24
    Reading an Execution Plan 9-25
    Using the V$SQL_PLAN View 9-26
    V$SQL_PLAN Columns 9-27
    Querying V$SQL_PLAN 9-28
    V$SQL_PLAN_STATISTICS View 9-29
    Querying the AWR 9-30
    SQL*Plus AUTOTRACE 9-32
    Using SQL*Plus AUTOTRACE 9-33
    SQL*Plus AUTOTRACE: Statistics 9-34
    SQL Trace Facility 9-35
    How to Use the SQL Trace Facility 9-37
    Initialization Parameters 9-38
    Enabling SQL Trace 9-40
    Disabling SQL Trace 9-41
    Formatting Your Trace Files 9-42
    TKPROF Command Options 9-43
    Output of the TKPROF Command 9-45
    TKPROF Output with No Index: Example 9-50
    TKPROF Output with Index: Example 9-51
    Generate an Optimizer Trace 9-52
    Summary 9-53
    Practice Overview 9: Using Execution Plan Utilities 9-54
    10 Influencing the Optimizer
    Objectives 10-2
    Functions of the Query Optimizer 10-3
    Selectivity 10-5
    Cardinality and Cost 10-6
    Changing Optimizer Behavior 10-7
    Using Hints 10-8
    Optimizer Statistics 10-9
    Extended Statistics 10-10
    Controlling the Behavior of the Optimizer with Parameters 10-11
    Enabling Query Optimizer Features 10-13
    Influencing the Optimizer Approach 10-14
    Optimizing SQL Statements 10-15
    Access Paths 10-16
    Choosing an Access Path 10-17
    Full Table Scans 10-18
    Row ID Scans 10-20
    Index Operations 10-21
    B*Tree Index Operations 10-22
    Bitmap Indexes 10-23
    Bitmap Index Access 10-24
    Combining Bitmaps 10-25
    Bitmap Operations 10-26
    Join Operations 10-27
    Join Methods 10-28
    Nested Loop Joins 10-29
    Hash Joins 10-31
    Sort-Merge Joins 10-32
    Join Performance 10-34
    How the Query Optimizer Chooses Execution Plans for Joins 10-35
    Sort Operations 10-37
    Tuning Sort Performance 10-38
    Reducing the Cost 10-39
    Index Maintenance 10-40
    Dropping Indexes 10-42
    Creating Indexes 10-43
    SQL Access Advisor 10-44
    Table Maintenance for Performance 10-45
    Table Reorganization Methods 10-46
    Summary 10-47
    Practice 10 Overview: Influencing the Optimizer 10-48
    11 Using SQL Performance Analyzer
    Objectives 11-2
    Real Application Testing: Overview 11-3
    Real Application Testing: Use Cases 11-4
    SQL Performance Analyzer: Process 11-5
    Capturing the SQL Workload 11-7
    Creating a SQL Performance Analyzer Task 11-8
    SQL Performance Analyzer: Tasks 11-9
    Optimizer Upgrade Simulation 11-10
    SQL Performance Analyzer Task Page 11-11
    Comparison Report 11-12
    Comparison Report SQL Detail 11-13
    Tuning Regressing Statements 11-14
    Preventing Regressions 11-16
    Parameter Change Analysis 11-17
    Guided Workflow Analysis 11-18
    SQL Performance Analyzer: PL/SQL Example 11-19
    SQL Performance Analyzer: Data Dictionary Views 11-21
    Summary 11-22
    Practice 11: Overview 11-23
    12 SQL Performance Management
    Objectives 12-2
    Maintaining SQL Performance 12-3
    Maintaining Optimizer Statistics 12-4
    Automated Maintenance Tasks 12-5
    Statistic Gathering Options 12-6
    Setting Statistic Preferences 12-7
    Restore Statistics 12-9
    Deferred Statistics Publishing: Overview 12-10
    Deferred Statistics Publishing: Example 12-12
    Automatic SQL Tuning: Overview 12-13
    SQL Statement Profiling 12-14
    Plan Tuning Flow and SQL Profile Creation 12-15
    SQL Tuning Loop 12-16
    Using SQL Profiles 12-17
    SQL Tuning Advisor: Overview 12-18
    Using the SQL Tuning Advisor 12-19
    SQL Tuning Advisor Options 12-20
    SQL Tuning Advisor Recommendations 12-21
    Using the SQL Tuning Advisor: Example 12-22
    Using the SQL Access Advisor 12-23
    View Recommendations 12-25
    View Recommendation Details 12-26
    SQL Plan Management: Overview 12-27
    SQL Plan Baseline: Architecture 12-28
    Loading SQL Plan Baselines 12-30
    Evolving SQL Plan Baselines 12-31
    Important Baseline SQL Plan Attributes 12-32
    SQL Plan Selection 12-34
    Possible SQL Plan Manageability Scenarios 12-36
    SQL Performance Analyzer and SQL Plan Baseline Scenario 12-37
    Loading a SQL Plan Baseline Automatically 12-38
    Purging SQL Management Base Policy 12-39
    Enterprise Manager and SQL Plan Baselines 12-40
    Summary 12-41
    Practice 12: Overview Using SQL Plan Management 12-42
    13 Using Database Replay
    Objectives 13-2
    Using Database Replay 13-3
    The Big Picture 13-4
    System Architecture: Capture 13-5
    System Architecture: Processing the Workload 13-7
    System Architecture: Replay 13-8
    Capture Considerations 13-9
    Replay Considerations: Preparation 13-10
    Replay Considerations 13-11
    Replay Options 13-12
    Replay Analysis 13-13
    Database Replay Workflow in Enterprise Manager 13-15
    Capturing Workload with Enterprise Manager 13-16
    Capture Wizard: Plan Environment 13-17
    Capture Wizard: Options 13-18
    Capture Wizard: Parameters 13-19
    Viewing Capture Progress 13-20
    Viewing Capture Report 13-21
    Export Capture AWR Data 13-22
    Viewing Workload Capture History 13-23
    Processing Captured Workload 13-24
    Using the Preprocess Captured Workload Wizard 13-25
    Using the Replay Workload Wizard 13-26
    Replay Workload: Prerequisites 13-27
    Replay Workload: Choose Initial Options 13-28
    Replay Workload: Customize Options 13-29
    Replay Workload: Prepare Replay Clients 13-30
    Replay Workload: Client Connections 13-31
    Replay Workload: Replay Started 13-32
    Viewing Workload Replay Progress 13-33
    Viewing Workload Replay Statistics 13-34
    Packages and Procedures 13-36
    Data Dictionary Views: Database Replay 13-37
    Database Replay: PL/SQL Example 13-38
    Calibrating Replay Clients 13-40
    Summary 13-41
    Practice 13: Overview 13-42
    14 Tuning the Shared Pool
    Objectives 14-2
    Shared Pool Architecture 14-3
    Shared Pool Operation 14-4
    The Library Cache 14-5
    Latch and Mutex 14-7
    Latch and Mutex: Views and Statistics 14-9
    Diagnostic Tools for Tuning the Shared Pool 14-11
    AWR/Statspack Indicators 14-13
    Load Profile 14-14
    Instance Efficiencies 14-15
    Top Waits 14-16
    Time Model 14-17
    Library Cache Activity 14-19
    Avoid Hard Parses 14-20
    Are Cursors Being Shared? 14-21
    Sharing Cursors 14-23
    Adaptive Cursor Sharing: Example 14-25
    Adaptive Cursor Sharing Views 14-27
    Interacting with Adaptive Cursor Sharing 14-28
    Avoiding Soft Parses 14-29
    Sizing the Shared Pool 14-30
    Shared Pool Advisory 14-31
    Shared Pool Advisor 14-33
    Avoiding Fragmentation 14-34
    Large Memory Requirements 14-35
    Tuning the Shared Pool Reserved Space 14-37
    Keeping Large Objects 14-39
    Data Dictionary Cache 14-41
    Dictionary Cache Misses 14-42
    SQL Query Result Cache: Overview 14-43
    Managing the SQL Query Result Cache 14-44
    Using the RESULT_CACHE Hint 14-46
    Using the DBMS_RESULT_CACHE Package 14-47
    Viewing SQL Result Cache Dictionary Information 14-48
    SQL Query Result Cache: Considerations 14-49
    UGA and Oracle Shared Server 14-50
    Large Pool 14-51
    Tuning the Large Pool 14-52
    Summary 14-53
    Practice Overview 14: Tuning the Shared Pool 14-54
    15 Tuning the Buffer Cache
    Objectives 15-2
    Oracle Database Architecture 15-3
    Buffer Cache: Highlights 15-4
    Database Buffers 15-5
    Buffer Hash Table for Lookups 15-6
    Working Sets 15-7
    Tuning Goals and Techniques 15-9
    Symptoms 15-11
    Cache Buffer Chains Latch Contention 15-12
    Finding Hot Segments 15-13
    Buffer Busy Waits 15-14
    Calculating the Buffer Cache Hit Ratio 15-15
    Buffer Cache Hit Ratio Is Not Everything 15-16
    Interpreting Buffer Cache Hit Ratio 15-17
    Read Waits 15-19
    Free Buffer Waits 15-21
    Solutions 15-22
    Sizing the Buffer Cache 15-23
    Buffer Cache Size Parameters 15-24
    Dynamic Buffer Cache Advisory Parameter 15-25
    Buffer Cache Advisory View 15-26
    Using the V$DB_CACHE_ADVICE View 15-27
    Using the Buffer Cache Advisory with EM 15-28
    Caching Tables 15-29
    Multiple Buffer Pools 15-30
    Enabling Multiple Buffer Pools 15-32
    Calculating the Hit Ratio for Multiple Pools 15-33
    Multiple Block Sizes 15-35
    Multiple Database Writers 15-36
    Multiple I/O Slaves 15-37
    Use Multiple Writers or I/O Slaves 15-38
    Private Pool for I/O Intensive Operations 15-39
    Automatically Tuned Multiblock Reads 15-40
    Flushing the Buffer Cache (for Testing Only) 15-41
    Summary 15-42
    Practice 15: Overview Tuning the Buffer Cache 15-43
    16 Tuning PGA and Temporary Space
    Objectives 16-2
    SQL Memory Usage 16-3
    Performance Impact 16-4
    Automatic PGA Memory 16-5
    SQL Memory Manager 16-6
    Configuring Automatic PGA Memory 16-8
    Setting PGA_AGGREGATE_TARGET Initially 16-9
    Monitoring SQL Memory Usage 16-10
    Monitoring SQL Memory Usage: Examples 16-12
    Tuning SQL Memory Usage 16-13
    PGA Target Advice Statistics 16-14
    PGA Target Advice Histograms 16-15
    Automatic PGA and Enterprise Manager 16-16
    Automatic PGA and AWR Reports 16-17
    Temporary Tablespace Management: Overview 16-18
    Temporary Tablespace: Best Practice 16-19
    Configuring Temporary Tablespace 16-20
    Temporary Tablespace Group: Overview 16-22
    Temporary Tablespace Group: Benefits 16-23
    Creating Temporary Tablespace Groups 16-24
    Maintaining Temporary Tablespace Groups 16-25
    View Tablespace Groups 16-26
    Monitoring Temporary Tablespace 16-27
    Temporary Tablespace Shrink 16-28
    Tablespace Option for Creating Temporary Table 16-29
    Summary 16-30
    Practice Overview 16: Tuning PGA Memory 16-31
    17 Automatic Memory Management
    Objectives 17-2
    Oracle Database Architecture 17-3
    Dynamic SGA 17-4
    Granule 17-5
    Memory Advisories 17-6
    Manually Adding Granules to Components 17-7
    Increasing the Size of an SGA Component 17-8
    Automatic Shared Memory Management: Overview 17-9
    SGA Sizing Parameters: Overview 17-10
    Dynamic SGA Transfer Modes 17-11
    Memory Broker Architecture 17-12
    Manually Resizing Dynamic SGA Parameters 17-13
    Behavior of Auto-Tuned SGA Parameters 17-14
    Behavior of Manually Tuned SGA Parameters 17-15
    Using the V$PARAMETER View 17-16
    Resizing SGA_TARGET 17-17
    Disabling Automatic Shared Memory Management 17-18
    Configuring ASMM 17-19
    SGA Advisor 17-20
    Monitoring ASMM 17-21
    Automatic Memory Management: Overview 17-22
    Oracle Database Memory Parameters 17-24
    Automatic Memory Parameter Dependency 17-25
    Enabling Automatic Memory Management 17-26
    Monitoring Automatic Memory Management 17-27
    DBCA and Automatic Memory Management 17-29
    Summary 17-30
    Practice 17: Overview Using Automatic Memory Tuning 17-31
    18 Tuning Segment Space Usage
    Objectives 18-2
    Space Management 18-3
    Extent Management 18-4
    Locally Managed Extents 18-5
    Large Extents: Considerations 18-6
    How Table Data Is Stored 18-8
    Anatomy of a Database Block 18-9
    Minimize Block Visits 18-10
    The DB_BLOCK_SIZE Parameter 18-11
    Small Block Size: Considerations 18-12
    Large Block Size: Considerations 18-13
    Block Allocation 18-14
    Free Lists 18-15
    Block Space Management 18-16
    Block Space Management with Free Lists 18-17
    Automatic Segment Space Management 18-19
    Automatic Segment Space Management at Work 18-20
    Block Space Management with ASSM 18-22
    Creating an Automatic Segment Space Management Segment 18-23
    Migration and Chaining 18-24
    Guidelines for PCTFREE and PCTUSED 18-26
    Detecting Migration and Chaining 18-27
    Selecting Migrated Rows 18-28
    Eliminating Migrated Rows 18-29
    Shrinking Segments: Overview 18-31
    Shrinking Segments: Considerations 18-32
    Shrinking Segments by Using SQL 18-33
    Segment Shrink: Basic Execution 18-34
    Segment Shrink: Execution Considerations 18-35
    Using EM to Shrink Segments 18-36
    Table Compression: Overview 18-37
    Table Compression Concepts 18-38
    Using Table Compression 18-39
    Summary 18-40
    19 Tuning I/O
    Objectives 19-2
    I/O Architecture 19-3
    File System Characteristics 19-4
    I/O Modes 19-5
    Direct I/O 19-6
    Bandwidth Versus Size 19-7
    Important I/O Metrics for Oracle Databases 19-8
    I/O Calibration and Enterprise Manager 19-10
    I/O Calibration and the PL/SQL Interface 19-11
    I/O Statistics: Overview 19-13
    I/O Statistics and Enterprise Manager 19-14
    Stripe and Mirror Everything 19-16
    Using RAID 19-17
    RAID Cost Versus Benefits 19-18
    Should I Use RAID 1 or RAID 5? 19-20
    Diagnostics 19-21
    Database I/O Tuning 19-22
    What Is Automatic Storage Management? 19-23
    Tuning ASM 19-24
    How Many Disk Groups per Database 19-25
    Which RAID Configuration for Best Availability? 19-26
    ASM Mirroring Guidelines 19-27
    ASM Striping Granularity 19-28
    What Type of Striping Works Best? 19-29
    ASM Striping Only 19-30
    Hardware RAID Striped LUNs 19-31
    ASM Guidelines 19-32
    ASM Instance Initialization Parameters 19-33
    Dynamic Performance Views 19-34
    Monitoring Long-Running Operations by Using V$ASM_OPERATION 19-36
    ASM Instance Performance Diagnostics 19-37
    ASM Performance Page 19-38
    Database Instance Parameter Changes 19-39
    ASM Scalability 19-40
    Summary 19-41
    20 Performance Tuning Summary
    Objectives 20-2
    Necessary Initialization Parameters with Little Performance Impact 20-3
    Important Initialization Parameters with Performance Impact 20-4
    Sizing Memory Initially 20-6
    Database High Availability: Best Practices 20-7
    Undo Tablespace: Best Practices 20-8
    Temporary Tablespace: Best Practices 20-9
    General Tablespace: Best Practices 20-11
    Internal Fragmentation Considerations 20-12
    Block Size: Advantages and Disadvantages 20-13
    Automatic Checkpoint Tuning 20-14
    Sizing the Redo Log Buffer 20-15
    Sizing Redo Log Files 20-16
    Increasing the Performance of Archiving 20-17
    Automatic Statistics Gathering 20-19
    Automatic Statistics Collection: Considerations 20-20
    Commonly Observed Wait Events 20-21
    Additional Statistics 20-22
    Top 10 Mistakes Found in Customer Systems 20-23
    Summary 20-25
    Appendix A: Practices and Solutions
    Appendix B: Using Statspack
    Index

  • Disk still marked as MISSING/HUNG

    SR 6603696.993
    We have a stretch cluster using normal redundancy in ASM, with one failgroup on the main site and one on the contingency site. There are 3 disk groups. At some point there was a problem with the link, which caused the failgroup at the contingency site to get dropped, leaving the disks in HUNG/MISSING state with duplicate entries in v$asm_disk (as documented on Metalink). To fix this I dropped the disks, zapped the disk headers and then added them back in with the 2nd failgroup. I did this successfully on another system yesterday and it worked today as well. However, for the STRMP_DATADG01 disk group the problem disks did not get fully dropped as far as ASM is concerned, even though I successfully added them back in. Can you advise? Trying to drop the disks again has not helped. I am reluctant to use the force option without advice.
    No errors. It seems OK.
    SQL> select GROUP_NUMBER, DISK_NUMBER, MOUNT_STATUS, HEADER_STATUS, NAME, STATE, PATH
         from v$asm_disk
         order by GROUP_NUMBER, DISK_NUMBER;
    GROUP_NUMBER DISK_NUMBER MOUNT_S HEADER_STATU NAME STATE PATH
    1 0 CACHED MEMBER STRMP_ARCHDG01_0000 NORMAL /dev/oracle/strmp/strmp_archdg01_fg01_01_p1
    1 2 CACHED MEMBER STRMP_ARCHDG01_0002 NORMAL /dev/oracle/strmp/strmp_archdg01_fg02_01_c1
    2 0 CACHED MEMBER STRMP_DATADG01_0000 NORMAL /dev/oracle/strmp/strmp_datadg01_fg01_01_p1
    2 1 CACHED MEMBER STRMP_DATADG01_0001 NORMAL /dev/oracle/strmp/strmp_datadg01_fg01_02_p1
    2 2 CACHED MEMBER STRMP_DATADG01_0002 NORMAL /dev/oracle/strmp/strmp_datadg01_fg01_03_p1
    2 3 CACHED MEMBER STRMP_DATADG01_0003 NORMAL /dev/oracle/strmp/strmp_datadg01_fg01_04_p1
    2 4 CACHED MEMBER STRMP_DATADG01_0004 NORMAL /dev/oracle/strmp/strmp_datadg01_fg01_05_p1
    2 5 MISSING CANDIDATE STRMP_DATADG01_0005 HUNG
    2 6 MISSING CANDIDATE STRMP_DATADG01_0006 HUNG
    2 7 MISSING CANDIDATE STRMP_DATADG01_0007 HUNG
    2 8 MISSING CANDIDATE STRMP_DATADG01_0008 HUNG
    2 9 MISSING CANDIDATE STRMP_DATADG01_0009 HUNG
    2 10 CACHED MEMBER STRMP_DATADG01_0010 NORMAL /dev/oracle/strmp/strmp_datadg01_fg02_05_c1
    2 11 CACHED MEMBER STRMP_DATADG01_0011 NORMAL /dev/oracle/strmp/strmp_datadg01_fg02_04_c1
    2 12 CACHED MEMBER STRMP_DATADG01_0012 NORMAL /dev/oracle/strmp/strmp_datadg01_fg02_03_c1
    2 13 CACHED MEMBER STRMP_DATADG01_0013 NORMAL /dev/oracle/strmp/strmp_datadg01_fg02_02_c1
    2 14 CACHED MEMBER STRMP_DATADG01_0014 NORMAL /dev/oracle/strmp/strmp_datadg01_fg02_01_c1
    3 0 CACHED MEMBER STRMP_DUPXDG01_0000 NORMAL /dev/oracle/strmp/strmp_dupxdg01_fg01_01_p1
    3 1 CACHED MEMBER STRMP_DUPXDG01_0001 NORMAL /dev/oracle/strmp
    Any help would be great.
    Thanks
    Loren

    MISSING/HUNG status is caused by an insufficient number of disks in the failgroups within a normal redundancy diskgroup. In other words, this is a kind of warning from ASM: if the remaining online disk then crashes, data will be lost.
    This is already discussed in Bug 4998643 and Bug 4910037, both of which were closed as not a bug. There is actually no loss of service or data as a result. When a new failure group is added back, redundancy is restored. Redundancy requirements are again satisfied, and the data remains unchanged after the rebalance completes.
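    As a hedged sketch of the clean-up path (disk names, the failgroup name and the device path below are illustrative, loosely modelled on the v$asm_disk output above; verify against your own environment before running anything):

```sql
-- Force-drop the MISSING/HUNG disks (no usable copy of them is left),
-- then re-add the second failgroup once the link is restored.
ALTER DISKGROUP STRMP_DATADG01
  DROP DISK STRMP_DATADG01_0005 FORCE,
            STRMP_DATADG01_0006 FORCE;

-- Wait for the resulting rebalance to complete (no rows = done).
SELECT operation, state, est_minutes FROM v$asm_operation;

-- Re-add a disk to the second failgroup (path is hypothetical).
ALTER DISKGROUP STRMP_DATADG01
  ADD FAILGROUP FG02 DISK '/dev/oracle/strmp/strmp_datadg01_fg02_06_c1'
  REBALANCE POWER 4;
```

    The FORCE clause is exactly what the poster is hesitating over; it is appropriate here only because the disks are already MISSING and carry no current data.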

  • RAC uses lun from both EVA and XP array

    Hi,
    I have created a RAC database on EVA array raw volumes. Now I am trying to create a new tablespace for the same RAC database on an XP raw volume, but it hangs.
    Is it possible to configure the database to use LUNs from different arrays, say EVA and XP? Can a single database use LUNs from different arrays?

    TMR wrote:
    Is it possible to configure the database to use LUNs from different arrays, say EVA and XP? Can a single database use LUNs from different arrays?
    To answer this question alone: yes. Oracle does not care whether the disk you use is local, remote, SAN, storage array, SATA, SCSI or whatever.
    Why? Because the kernel does not expose different I/O routines depending on whether the device is a LUN, local SATA drive, local SCSI drive, remote network drive, etc. It exposes a single set of I/O routines that are device agnostic.
    Oracle calls these standard kernel I/O routines to create a data file, read data and write data. If these work, no problem. Oracle is not concerned about the exact device used by the kernel to perform that I/O. Nor does it know the type of device being written to. E.g. a LUN on the storage array is typically exposed as a SCSI device by the kernel - s/w using that device will be oblivious to the fact that it is actually a LUN on a storage array.
    PS. Just note that the right devices, and the right combination of devices, must be used to optimise performance. It would not be so bright to create an ASM striped diskgroup on 3 devices where 1 is a different device type altogether and a lot slower than the other 2.
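    To make that concrete, LUNs from both arrays can be presented to a single diskgroup; a hedged sketch (paths are invented for illustration), putting each array in its own failure group so that losing a whole array loses only one mirror:

```sql
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP eva DISK '/dev/rdsk/eva_lun1', '/dev/rdsk/eva_lun2'
  FAILGROUP xp  DISK '/dev/rdsk/xp_lun1',  '/dev/rdsk/xp_lun2';
```

    If the two arrays have very different performance characteristics, consider separate diskgroups instead, since the slowest device gates every stripe.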

  • 11gR2 - External redundancy. RAID1 or 5

    Leaving IOPS out of the fray for now, is it frowned upon to use external redundancy on 11gR2 ASM diskgroups with my disks backed by a SAN using RAID5? In other words, is ASM striping combined with RAID5 striping deemed unnecessary, particularly if RAID1 (mirroring) is possible on the SAN?

    People have seen the best performance with the way ASM spreads the blocks out, no matter what kind of RAID is used at the storage layer.
    If you have mirroring/RAID1 on the SAN, nothing is striped at the hardware level, just mirrored for redundancy, so ASM does a great job of the striping.
    It might seem like two layers of striping with RAID5 plus ASM, but ASM has been growing very strong these days.
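    For the setup the question describes, the usual pattern is to let the SAN handle redundancy and let ASM handle striping; a hedged sketch (LUN paths are illustrative):

```sql
-- SAN RAID1 provides the mirroring; ASM stripes across the mirrored LUNs.
CREATE DISKGROUP data EXTERNAL REDUNDANCY
  DISK '/dev/rdsk/san_mirror_lun1',
       '/dev/rdsk/san_mirror_lun2',
       '/dev/rdsk/san_mirror_lun3';
```

    With EXTERNAL REDUNDANCY, ASM does no mirroring of its own, so the disk group can survive only what the SAN's RAID level can survive.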

  • ASM IOstats versus Rebalance & Striping

    Hi,
    We recently have set up a 2 node RAC with ASM.
    The CRS and ASM are on 11.1.0.7
    The database is on 10.2.0.4
    All is on two AIX 5.3 TL10 nodes
    We are kind of struggling to understand the striping of ASM. The two diskgroups, DB01 and FB01, both consist of multiple disks: DB01 has 6 LUNs, FB01 has three.
    These LUNs are virtual disks, assigned/configured by the VIO server of the systems, and are logical disks somewhere on our SAN.
    Using the ASM diskgroups we now see that some disks are used more than others, and especially that some disks have a worse average access time than others.
    Can anyone tell me whether the output below means that some of the disks behave better than others?
    Does any of you gurus have a similar system with comparable stats on the disks?
    (Unfortunately I can't format the output more readably.)
    Our ASM_POWER_LIMIT is set to three (3).
    select inst_id, group_number gn, disk_number dn,
           writes, write_time, write_time/writes avg_write_time
    from   GV$ASM_DISK_IOSTAT
    order by inst_id, group_number, disk_number;
    INST_ID GN DN WRITES WRITE_TIME AVG_WRITE_TIME
    1 1 0 319 .000029 .000000090909
    1 1 1 334 .000242 .000000724551
    1 1 2 2,531 .000392 .000000154879
    1 1 3 2,579 .000397 .000000153936
    1 1 4 328 .000219 .000000667683
    1 1 5 398 .000039 .000000097990
    1 2 0 2,351 .000477 .000000202892
    1 2 1 42 .000111 *.000002642857*
    1 2 2 51 .000112 *.000002196078*
    2 1 0 8,728 .003417 .000000391499
    2 1 1 4,728 .001547 .000000327200
    2 1 2 157,582 .030946 .000000196380
    2 1 3 149,422 .030384 .000000203344
    2 1 4 6,427 .002109 .000000328147
    2 1 5 9,567 .002872 .000000300199
    2 2 0 145,576 .029803 .000000204725
    2 2 1 1,450 .001023 .000000705517
    2 2 2 4,820 .002475 *.000000513485*

    The output (I tried to format it, but had no luck):
    INSTNAME   |GN   |DN   |READS   |WRITES   |RERRS   |WERRS   |BYTES_READ   |BYTES_WRITTEN   |READ_TIME/READS   |WRITE_TIME/WRITES
    WCSPRD1      |1   |0   |40,984   |2,559   |0   |0   |740,172,288   |19,907,072   |.000000143690      |.000000120750
    WCSPRD1      |1   |1   |2,245   |2,039   |0   |0   |89,171,456   |20,096,000   |.000000864588      |.000000377146
    WCSPRD1      |1   |2   |72,980   |36,721   |0   |0   |1,223,776,768   |584,610,816   |.000000119636      |.000000221780
    WCSPRD1      |1   |3   |7,296   |35,942   |0   |0   |124,814,336   |579,739,648   |.000000144874      |.000000222136
    WCSPRD1      |1   |4   |170,136|2,048   |0   |0   |2,858,811,392   |16,337,920   |.000000061316      |.000000474121
    WCSPRD1      |1   |5   |2,020   |4,768   |0   |0   |138,240,000   |35,438,592   |.000001297030      |.000000097525
    WCSPRD1      |2   |0   |6,453   |34,147   |0   |0   |106,086,400   |577,748,992   |.000000091430      |.000000230855
    WCSPRD1      |2   |1   |4   |42   |0   |0   |360,448   |18,938,880   |.000000250000      |.000002642857
    WCSPRD1      |2   |2   |2   |51   |0   |0   |262,144   |19,120,128   |.000001500000      |.000002196078
    WCSPRD2      |1   |0   |255,702|12,493   |0   |0   |5,329,231,872   |369,942,528   |.000000203569      |.000000584807
    WCSPRD2      |1   |1   |77,829   |8,913   |0   |0   |2,363,526,656   |303,967,232   |.000000556464      |.000000723999
    WCSPRD2      |1   |2   |445,364|196,013|0   |0   |8,289,651,200   |3,264,385,024   |.000000279183      |.000000233699
    WCSPRD2      |1   |3   |103,254|185,410|0   |0   |2,685,607,936   |3,189,159,936   |.000000249007      |.000000230128
    WCSPRD2      |1   |4   |972,911|11,013   |0   |0   |17,041,897,984   |307,239,424   |.000000090843      |.000000679833
    WCSPRD2      |1   |5   |73,327   |13,987   |0   |0   |2,422,184,960   |379,976,704   |.000001152986      |.000000622721
    WCSPRD2      |2   |0   |34,462   |177,479|0   |0   |564,985,856   |3,012,513,792   |.000000069265      |.000000215755
    WCSPRD2      |2   |1   |4   |1,530   |0   |0   |360,448   |127,465,984   |.000000500000      |.000000892157
    WCSPRD2      |2   |2   |2   |4,973   |0   |0   |262,144   |183,618,048   |.000002000000      |.000000577720
    Edit: There is 'No Imbalance' in the diskgroups according to the queries.
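    One way to cross-check the 'No Imbalance' claim is to compare per-disk space usage directly; a hedged sketch to run on the ASM instance:

```sql
-- An evenly rebalanced diskgroup should show near-identical pct_used per disk.
SELECT group_number, disk_number, total_mb, free_mb,
       ROUND(100 * (total_mb - free_mb) / total_mb, 1) AS pct_used
FROM   v$asm_disk
WHERE  group_number > 0
ORDER  BY group_number, disk_number;
```

    Note that even with perfect space balance, per-disk service times can still differ if the underlying virtual LUNs land on different physical spindles in the SAN, which is what the IOSTAT numbers above suggest.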

  • Oracle ASM failgroup confused....

    hello,
    I am pretty much confused by the fail groups here. See, I have 4 disks configured for ASM:
    /dev/sdc1
    /dev/sdd1
    /dev/sde1
    /dev/sdf1
    i create a asm diskgroup using two disks with normal redundancy as follows:
    create diskgroup asdf normal redundancy
    failgroup fail1 DISK
    '/dev/sdc1' name disk01
    failgroup fail2 DISK
    '/dev/sdd1' name disk02
    attribute 'au_size'='4M','compatible.asm'='11.1','compatible.rdbms'='11.1';
    sorry for not including this statement before...
    alter diskgroup asdf add disk '/dev/sdd1';
    after creating the diskgroup the views the shows the information as
    select name,group_number,total_mb from V$ASM_DISKGROUP;
    NAME GROUP_NUMBER TOTAL_MB
    ASDF 1 12276
    select group_number,disk_number from v$asm_disk;
    GROUP_NUMBER DISK_NUMBER
    0 3
    0 4
    1 0
    1 1
    1 2
    Can anyone please explain what is going on here... what is the group with group number 0?
    And as far as the Oracle documentation is concerned, if we don't specify the failgroup at diskgroup creation time, then each disk we specify in the CREATE DISKGROUP statement will be assigned its own fail group. If that is the case, how is the data distributed, and is the existing data in the diskgroup evenly distributed among all the disks in the diskgroup?
    Help please....
    Edited by: vishnusivathej on Sep 18, 2010 7:35 PM

    select GROUP_NUMBER, DISK_NUMBER, MODE_STATUS, STATE, NAME, PATH from v$asm_disk;
    GROUP_NUMBER DISK_NUMBER MODE_ST STATE NAME
    PATH
    0 3 ONLINE NORMAL
    /dev/sde1
    0 4 ONLINE NORMAL
    /dev/sdf1
    1 0 ONLINE NORMAL DISK01
    /dev/sdb1
    GROUP_NUMBER DISK_NUMBER MODE_ST STATE NAME
    PATH
    1 1 ONLINE NORMAL DISK02
    /dev/sdc1
    1 2 ONLINE NORMAL ASDF_0002
    /dev/sdd1
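    Disks with GROUP_NUMBER 0 in V$ASM_DISK are discovered candidates that belong to no mounted diskgroup (here /dev/sde1 and /dev/sdf1, which were never added to ASDF). The implicit one-failgroup-per-disk assignment can be checked directly; a small sketch:

```sql
-- FAILGROUP is empty for candidates and auto-assigned (one per disk,
-- unless specified) for diskgroup members.
SELECT group_number, disk_number, name, failgroup, path
FROM   v$asm_disk
ORDER  BY group_number, disk_number;
```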

  • Should SSD drives use ASM fine striping instead of coarse?

    By default, ASM stripes data files across disks in 1MB chunks. This makes perfect sense for rotating disk drives, with their very limited IOPS rates, but for SSD drives, which can handle 100X more IOPS than spinning disks, wouldn't the 128KB stripe size be better, especially for relatively small (<1TB) data file sizes?
    J
    Edited by: user5273070 on Dec 29, 2011 7:37 PM

    Agreed, it very much depends on the user's needs.  For example, I took 10 days of video footage on vacation in Hawaii, 720p at 60fps.  The total size of that footage is 20 GB, so it would fit on the 256 GB SSD with no problem.  I could fit several other vacation videos as well, along with some of my animation project footage, and I could be working on several of these projects at once if needed.  (Although I would eventually move footage to disk backup storage on a server anyway, so it wouldn't need to clutter up the SSD forever.)
    So for my usage, terabytes of platter drive arrays for footage is probably not needed.  Why not take advantage of the fast access times of the SSD for reading footage?  Isn't that what folks are doing when they RAID 0 platter drives to hold footage, trying to improve read speeds for performance?  In the past, with SSDs in the $300+ range, it might not have made sense to go SSD.  But with fast 256 GB SSDs going for $150 or lower, if you don't need the platter space, it seems like you can get a nice performance boost using an SSD without having to resort to RAID arrays of platter drives.  I might give it a try and see how it goes.
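    Back to the original question: ASM's coarse (1MB) versus fine (128KB) striping is chosen per file type through diskgroup templates, so it can be changed without recreating the diskgroup; a hedged sketch (the diskgroup name and group number are illustrative):

```sql
-- Switch the datafile template in this diskgroup to fine-grained striping.
ALTER DISKGROUP data ALTER TEMPLATE datafile ATTRIBUTES (FINE);

-- Verify the template settings.
SELECT name, stripe, redundancy
FROM   v$asm_template
WHERE  group_number = 1;
```

    Template changes only affect files created after the change; existing files keep the striping they were created with.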

  • ASM Failgroup best configuration?

    Hi experts,
    I have a quick question,
    If you create a normal redundancy diskgroup through asmca in 11gR2 and choose 4 disks, it will create the diskgroup with each disk as its own failgroup.
    But in the documentation example, we create a diskgroup specifying a failgroup spanning one or more disks.
    My question is: what is the best option for specifying failgroups? Should it be 1 failgroup for each disk (I don't see why each failgroup is its own disk)? Or should 1 failgroup consist of multiple disks?
    Hope you understand my rubbish question
    Tx

    Hi,
    my question is what is the best option for specifying failgroup? should it be 1 failgroup for each disk (I don't see why the failgroup is its own disk?) ? or should 1 failgroup consists multiple disk?
    Choosing the number of failure groups to create depends on the types of failures that must be tolerated without data loss. For small numbers of disks, such as fewer than 20, it is usually best to use the default failure group creation that puts every disk in its own failure group.
    Using the default failure group creation for small numbers of disks is also applicable for large numbers of disks where your main concern is disk failure. For example, a disk group might be configured from several small modular disk arrays. If the system must continue operating when an entire modular array fails, then a failure group should consist of all of the disks in one module. If one module fails, then all of the data on that module is relocated to other modules to restore redundancy. Disks should be placed in the same failure group if they depend on a common piece of hardware whose failure must be tolerated with no loss of availability.
    I don't see why the failgroup is its own disk?
    For me it's the opposite. You only take advantage of putting disks in the same failure group when you use two or more storage systems, or when disks share a common point of failure (like the same SCSI controller). If you have no reason to use the same failure group, I recommend you put every disk in its own failure group.
    PS: A simultaneous failure can occur if there is a failure of a piece of hardware used by multiple failure groups. This type of failure usually forces a dismount of the disk group if all disks are unavailable.
    Regards,
    Levi Pereira
    Edited by: Levi Pereira on May 16, 2011 9:20 PM
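    The two layouts Levi describes can be sketched as follows (device paths are illustrative); the first relies on the default one-failgroup-per-disk behaviour, the second aligns failgroups with storage arrays:

```sql
-- Default: omit FAILGROUP and every disk becomes its own failure group.
CREATE DISKGROUP data NORMAL REDUNDANCY
  DISK '/dev/disk1', '/dev/disk2', '/dev/disk3', '/dev/disk4';

-- Alternative: group disks by array so a whole-array failure is tolerated.
CREATE DISKGROUP data2 NORMAL REDUNDANCY
  FAILGROUP array1 DISK '/dev/array1_d1', '/dev/array1_d2'
  FAILGROUP array2 DISK '/dev/array2_d1', '/dev/array2_d2';
```

    In the second layout, ASM guarantees the two copies of each extent land on different arrays, so losing one array leaves a full copy of the data intact.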

  • How ASM Manages Disk Failures

    Hi,
    I was going through the Oracle ASM documentation and came across this topic; I need some clarification on it, please.
    "The disks are first taken offline and then automatically dropped. In this case, the disk group remains mounted and serviceable. In addition, because of mirroring, all of the disk group data remains accessible. After the disk drop operation, Oracle ASM performs a rebalance to restore full redundancy for the data on the failed disks".   -- What does this mean?
    Thanks

    Simple example.
    Single normal redundancy diskgroup DATA1 with 2 failgroups called MIRROR1 and MIRROR2. Each failgroup has 3 x disks (let's say 512GB).
    Disk 2 in failgroup MIRROR1 fails. The MIRROR1 failgroup is not available as a result. DB operations continue unaware of the problem, as MIRROR2 is online and working fine.
    The DBA cannot fix disk 2 within ASM's disk repair time. ASM then forces disk 2 out of the MIRROR1 failgroup by force dropping it. MIRROR1 now has 2 disks remaining.
    ASM starts a rebalance process to stripe the contents of the MIRROR1 failgroup across 2 disks, as opposed to 3. If the space in use fits on 2 disks, the rebalance operation will succeed and MIRROR1 will become online and available, but it will only be able to carry 1TB of data, as opposed to MIRROR2 with its 3 disks and 1.5TB capacity.
    If the space used cannot be striped across the remaining disks in MIRROR1, the rebalance will fail due to insufficient capacity and the failgroup will remain offline.
    In either scenario, the recommended action is to add a new 512GB disk to MIRROR1, reverting it to a 3-disk failgroup, and then rebalance the failgroup.
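    The scenario above can be reproduced roughly as follows (paths and names are illustrative); V$ASM_OPERATION is where to watch both the forced rebalance and the recovery rebalance:

```sql
-- The example layout: two 3-disk failgroups, normal redundancy.
CREATE DISKGROUP data1 NORMAL REDUNDANCY
  FAILGROUP mirror1 DISK '/dev/m1_d1', '/dev/m1_d2', '/dev/m1_d3'
  FAILGROUP mirror2 DISK '/dev/m2_d1', '/dev/m2_d2', '/dev/m2_d3';

-- After the failure and forced drop, add a replacement disk and rebalance.
ALTER DISKGROUP data1
  ADD FAILGROUP mirror1 DISK '/dev/m1_d2_new'
  REBALANCE POWER 4;

-- Monitor the rebalance; no rows means it has completed.
SELECT operation, state, est_minutes FROM v$asm_operation;
```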
