ASM on RAID 5 drives

hello,
We are building a VLDB (15 TB) 10.2.0.2 database using ASM. The network team has provided
drive sets in a combination of RAID 5 and 0+1 mirrors.
What is the impact of running ASM on top of a RAID configuration? The network team will not
provide non-RAID drive sets. What is the best practice for applying ASM to RAID
configurations?
thanks
james

thanks for your reply.
I have another question, could you please verify this for me.
We have 2 mirrored disks of 136 GB each, each with a single spindle.
We partitioned the disks into 5 partitions:
DISK0 1 2 3 4 5
| 25 | 25 | 25 | 25 | 25 |
DISK1 1 2 3 4 5
| 25 | 25 | 25 | 25 | 25 |
When I created my ASM disk groups I grouped
DISK0 partition 1 and DISK1 partition 1 as ASMGRP1, and
DISK0 partition 2 and DISK1 partition 2 as ASMGRP2.
Is this the right way of creating disk groups? If I do it that way, will it affect ASM I/O performance?
OR
DISK0 1
| 136gb |
DISK1 1
| 136gb |
Can I create one disk group from the whole disks and put 2 databases in that single ASM disk group?
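For illustration, a minimal sketch of the first layout; the device paths are hypothetical and depend on how the partitions are presented to the OS:
SQL> -- DISK0 partition 1 + DISK1 partition 1
SQL> CREATE DISKGROUP asmgrp1 EXTERNAL REDUNDANCY
  2  DISK '/dev/raw/raw1', '/dev/raw/raw6';
SQL> -- DISK0 partition 2 + DISK1 partition 2
SQL> CREATE DISKGROUP asmgrp2 EXTERNAL REDUNDANCY
  2  DISK '/dev/raw/raw2', '/dev/raw/raw7';
EXTERNAL REDUNDANCY assumes the hardware mirror already protects the data; note that disk groups built this way all share the same two physical spindles.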

Similar Messages

  • ASM vs RAID for 11gR2 RAC Environment

    Hi There!
    We are planning to install 11gR2 RAC with a two-node cluster on Linux in our environment.
    Operating system: OEL 5.4
    On the hardware side we have two Dell servers with 16 GB RAM each, plus on the SAN side we have only 8 disks (173 GB each) left for the RAC cluster setup. I am going to create two databases (LIVE/UAT) on this cluster. Currently our production DB size is 6 GB, and I assume it will not go beyond 100 GB in the coming 5 years; I keep the UAT size fixed at 15 GB with no changes. So how do I get the best ASM performance using all my resources?
    My question:
    1)     Which is the best solution for ASM and RAID in our storage environment?
    2)     How many disk groups should I create for both databases (UAT/LIVE)?
    3)     How many disks should I allocate to each disk group, and with which RAID option? If you have any suggestions for LUNs, how do I create the LUNs across the disks I have?
    4)     I know Oracle recommends two disk groups, DATA & FRA; is there any other suggestion for CRS, REDO, and TEMP files?
    Thanks for your assistance.
    Hemesh.

    My first question was: Which RAID option (0, 1, 5, 0+1) should I choose with ASM?
    Well, it doesn't matter to ASM, at least in your configuration with 8 disks.
    RAID 0 is not an option - forget about it. RAID 1 (or, combined with more than two disks and an overlaid RAID 0, which makes RAID 1+0) might be an option for write-intensive databases. RAID 5 is more for read-intensive workloads due to the RAID 5 write penalty, but offers "more" capacity at the cost of slower write speed.
    I recommend sticking with RAID 1 (mirroring two disks) and exporting those pairs to ASM, rather than creating one big RAID 1+0 over all of your disks and exporting the storage as one big chunk to ASM, for manageability. If you want to add storage later on, you are perfectly in line with Oracle's recommendation to have equal-size LUNs in ASM when each LUN is two mirrored disks. If you create one big RAID 1+0 and later add two disks, you have one LUN of 600 GB and one of 170 GB... that's a big mismatch.
    But if I create two disk groups, is it good practice to offer them to both (UAT/LIVE) databases?
    Normally there is a separation between UAT and production at both the storage and the server level. In your case it might be "ok" to place everything in the same disk group. This mainly depends on which database puts the most load on the disk subsystem.
    Ronny Egner
    My Blog: http://blog.ronnyegner-consulting.de
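    A minimal sketch of the layout described above (the LUN device names are hypothetical): each RAID 1 pair is presented to ASM as one LUN in a single external-redundancy disk group, and an equal-size pair can be added later without imbalance.
    SQL> CREATE DISKGROUP data EXTERNAL REDUNDANCY
      2  DISK '/dev/mapper/lun1', '/dev/mapper/lun2', '/dev/mapper/lun3';
    SQL> -- later, when the storage team presents another mirrored pair:
    SQL> ALTER DISKGROUP data ADD DISK '/dev/mapper/lun4';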

  • 6 x 73GB disks : ASM or RAID ?

    Hi,
    I am going to install an Oracle database on a server with 6 x 73 GB disks. What is the best configuration:
    - ASM with no RAID,
    - ASM with RAID, or
    - RAID with no ASM?
    To be able to install ASM I need to install the OS & Oracle binaries on at least one disk, which leaves me with 5 disks to install ASM on.
    Also, is it safe to install the OS and Oracle binaries on one disk only?
    If I go for a non-ASM solution, should I use RAID 10 or RAID 5?
    reg,
    Amitoj.

    Hi,
    ASM is a filesystem managed by Oracle, as opposed to an OS filesystem, and RAID is a technology for data redundancy.
    RAID may be handled by a hardware RAID controller or by ASM itself.
    If you have a hardware RAID controller, you can use ASM or an OS file system; in the case of ASM, select EXTERNAL REDUNDANCY when installing it.
    But if you don't have a hardware RAID controller, you can have ASM handle the redundancy itself by selecting it during installation.
    paryab.
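    As a rough illustration of the two cases described above (disk paths hypothetical): without a RAID controller, ASM's own mirroring is configured per disk group via failure groups; with a controller, EXTERNAL REDUNDANCY leaves protection to the hardware.
    SQL> -- no hardware RAID: let ASM mirror across two failure groups
    SQL> CREATE DISKGROUP data NORMAL REDUNDANCY
      2  FAILGROUP fg1 DISK '/dev/sdb1', '/dev/sdc1'
      3  FAILGROUP fg2 DISK '/dev/sdd1', '/dev/sde1';
    SQL> -- hardware RAID present: no ASM mirroring needed
    SQL> CREATE DISKGROUP data2 EXTERNAL REDUNDANCY DISK '/dev/sdf1';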

  • ASM like RAID 1 between two storages

    In my production environment the Oracle instances sit on JFS2 file systems. Soon we will have to allocate more space for these files or switch to ASM. Our preference is ASM, but we first need to run some tests, which we are now conducting.
    Today, in the production environment, data from storage1 is replicated to storage2 via AIX/HACMP.
    Our ASM tests have to contemplate using one set of disks in storage1 and another set in storage2.
    Below the details of the environment:
    In AIX 5.3 TL8+
    root@suorg06_BKP:/> lspv
    hdisk17 none None
    hdisk18 none None
    hdisk19 none None
    hdisk16 none None
    root@suorg06_BKP:/> fget_config -Av
    ---dar0---
    User array name = 'STCATORG01'
    dac0 ACTIVE dac1 ACTIVE
    Disk DAC LUN Logical Drive
    hdisk17 dac0 15 ASMTST_02
    hdisk16 dac0 14 ASMTST_01
    ---dar1---
    User array name = 'STCATORG02'
    dac4 ACTIVE dac5 ACTIVE
    Disk DAC LUN Logical Drive
    hdisk18 dac5 16 ASMTST_B01
    hdisk19 dac5 17 ASMTST_B02
    select
      lpad(name,15) as name,
      group_number,
      disk_number,
      mount_status,
      header_status,
      state,
      redundancy,
      lpad(path,15) as path,
      total_mb,
      free_mb,
      to_char(create_date,'dd/mm/yyyy') as create_date,
      to_char(mount_date,'dd/mm/yyyy') as mount_date
    from
      v$asm_disk
    order by
      disk_number;
    NAME GROUP_NUMBER DISK_NUMBER MOUNT_S HEADER_STATU STATE REDUNDA PATH TOTAL_MB FREE_MB
    0 0 CLOSED CANDIDATE NORMAL UNKNOWN /dev/rhdisk16 30720 0
    0 1 CLOSED CANDIDATE NORMAL UNKNOWN /dev/rhdisk17 30720 0
    0 2 CLOSED CANDIDATE NORMAL UNKNOWN /dev/rhdisk18 30720 0
    0 3 CLOSED CANDIDATE NORMAL UNKNOWN /dev/rhdisk19 30720 0
    select
      v$asm_diskgroup.group_number,
      lpad(v$asm_diskgroup.name,20) as name,
      v$asm_diskgroup.sector_size,
      v$asm_diskgroup.block_size,
      v$asm_diskgroup.allocation_unit_size,
      v$asm_diskgroup.state,
      v$asm_diskgroup.type,
      v$asm_diskgroup.total_mb,
      v$asm_diskgroup.free_mb,
      v$asm_diskgroup.offline_disks,
      v$asm_diskgroup.unbalanced,
      v$asm_diskgroup.usable_file_mb
    from
      v$asm_diskgroup
    order by
      v$asm_diskgroup.group_number;
    no rows selected
    SQL> CREATE DISKGROUP DB_DG_TESTE NORMAL REDUNDANCY DISK '/dev/rhdisk16', '/dev/rhdisk18';
    Diskgroup created.
    select
      lpad(name,15) as name,
      group_number,
      disk_number,
      mount_status,
      header_status,
      state,
      redundancy,
      lpad(path,15) as path,
      total_mb,
      free_mb,
      to_char(create_date,'dd/mm/yyyy') as create_date,
      to_char(mount_date,'dd/mm/yyyy') as mount_date
    from
      v$asm_disk
    order by
      disk_number;
    NAME GROUP_NUMBER DISK_NUMBER MOUNT_S HEADER_STATU STATE REDUNDA PATH TOTAL_MB FREE_MB CREATE_DAT MOUNT_DATE
    DB_DG_TESTE_000 1 0 CACHED MEMBER NORMAL UNKNOWN /dev/rhdisk16 30720 30669 09/12/2008 09/12/2008
    0 1 CLOSED CANDIDATE NORMAL UNKNOWN /dev/rhdisk17 30720 0 09/12/2008 09/12/2008
    DB_DG_TESTE_000 1 1 CACHED MEMBER NORMAL UNKNOWN /dev/rhdisk18 30720 30669 09/12/2008 09/12/2008
    0 3 CLOSED CANDIDATE NORMAL UNKNOWN /dev/rhdisk19 30720 0 09/12/2008 09/12/2008
    select
      v$asm_diskgroup.group_number,
      lpad(v$asm_diskgroup.name,20) as name,
      v$asm_diskgroup.sector_size,
      v$asm_diskgroup.block_size,
      v$asm_diskgroup.allocation_unit_size,
      v$asm_diskgroup.state,
      v$asm_diskgroup.type,
      v$asm_diskgroup.total_mb,
      v$asm_diskgroup.free_mb,
      v$asm_diskgroup.offline_disks,
      v$asm_diskgroup.unbalanced,
      v$asm_diskgroup.usable_file_mb
    from
      v$asm_diskgroup
    order by
      v$asm_diskgroup.group_number;
    GROUP_NUMBER NAME SECTOR_SIZE BLOCK_SIZE ALLOCATION_UNIT_SIZE STATE TYPE TOTAL_MB FREE_MB OFFLINE_DISKS U USABLE_FILE_MB
    1 DB_DG_TESTE 512 4096 1048576 MOUNTED NORMAL 61440 61338 0 N 30669
    SQL> ALTER DISKGROUP DB_DG_TESTE ADD DISK '/dev/rhdisk17', '/dev/rhdisk19';
    select
      lpad(name,15) as name,
      group_number,
      disk_number,
      mount_status,
      header_status,
      state,
      redundancy,
      lpad(path,15) as path,
      total_mb,
      free_mb,
      to_char(create_date,'dd/mm/yyyy') as create_date,
      to_char(mount_date,'dd/mm/yyyy') as mount_date
    from
      v$asm_disk
    order by
      disk_number;
    NAME GROUP_NUMBER DISK_NUMBER MOUNT_S HEADER_STATU STATE REDUNDA PATH TOTAL_MB FREE_MB CREATE_DAT MOUNT_DATE
    DB_DG_TESTE_000 1 0 CACHED MEMBER NORMAL UNKNOWN /dev/rhdisk16 30720 30681 09/12/2008 09/12/2008
    DB_DG_TESTE_000 1 1 CACHED MEMBER NORMAL UNKNOWN /dev/rhdisk18 30720 30681 09/12/2008 09/12/2008
    DB_DG_TESTE_000 1 2 CACHED MEMBER NORMAL UNKNOWN /dev/rhdisk17 30720 30682 09/12/2008 09/12/2008
    DB_DG_TESTE_000 1 3 CACHED MEMBER NORMAL UNKNOWN /dev/rhdisk19 30720 30681 09/12/2008 09/12/2008
    select
      v$asm_diskgroup.group_number,
      lpad(v$asm_diskgroup.name,20) as name,
      v$asm_diskgroup.sector_size,
      v$asm_diskgroup.block_size,
      v$asm_diskgroup.allocation_unit_size,
      v$asm_diskgroup.state,
      v$asm_diskgroup.type,
      v$asm_diskgroup.total_mb,
      v$asm_diskgroup.free_mb,
      v$asm_diskgroup.offline_disks,
      v$asm_diskgroup.unbalanced,
      v$asm_diskgroup.usable_file_mb
    from
      v$asm_diskgroup
    order by
      v$asm_diskgroup.group_number;
    GROUP_NUMBER NAME SECTOR_SIZE BLOCK_SIZE ALLOCATION_UNIT_SIZE STATE TYPE TOTAL_MB FREE_MB OFFLINE_DISKS U USABLE_FILE_MB
    1 DB_DG_TESTE 512 4096 1048576 MOUNTED NORMAL 122880 122725 0 N 46002
    At the end of the diskgroup creation you can see from the query that the space available in the diskgroup is 30669 MB, but after adding the two other disks the usable size is 46002 MB.
    Wasn't the usable space expected to be approximately 50% of the total disk capacity?
    How should I create the diskgroup to have storage1 mirrored with storage2 without this great loss of space?
    Edited by: scarlosantos on Dec 9, 2008 4:39 PM
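    For reference (a fact about ASM accounting, not stated in this thread): in a NORMAL redundancy group, USABLE_FILE_MB is not simply half of FREE_MB. ASM first subtracts REQUIRED_MIRROR_FREE_MB, the space it reserves so it can restore full redundancy after losing a disk, and then halves the remainder: (122725 - 30720) / 2 = 46002 MB, which matches the output above. A query like this shows the reserved amount:
    select name, type, total_mb, free_mb,
           required_mirror_free_mb, usable_file_mb
    from v$asm_diskgroup;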


  • Redologs on ASM or RAID 10

    Hi Experts,
    I would like to know the best storage configuration for redo logs. My OS is RHEL 6.3 and the database release is 11.2.0.4, 2-node RAC.
    1. Is it good and recommended practice to place the redo logs on ASM?
    or
    2. Is RAID 10 the best for redo logs in terms of performance?
    Thanks, Suvesh

    Performance generally depends on your configuration, for instance on how many devices are involved in read and write operations.
    Other factors with a great influence include data caching. If your RAID 10 configuration provides better caching or striping performance than ASM alone, you can configure ASM disk groups with external redundancy, comprising devices built on a hardware RAID solution, for best performance. I would, however, not recommend splitting a database between ASM for some database files and a regular filesystem on a RAID configuration for others.
    Performance depends on your hardware and setup, which is unknown, so your question cannot be answered definitively. However, since ASM is the recommended solution for all database files anyway, I would consider RAID an optional layer that can add performance underneath your ASM configuration.
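    As a sketch of that recommendation (the +REDO disk group name and sizes are hypothetical), redo logs on an external-redundancy disk group backed by hardware RAID 10 are created like any other redo group:
    SQL> ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 5 ('+REDO') SIZE 512M;
    SQL> ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 6 ('+REDO') SIZE 512M;
    ASM stripes the members across the LUNs in the group, while the hardware RAID 10 supplies the mirroring.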

  • Oracle ASM on RAID 5

    This is more of a planning question. My company needs to install Oracle ASM 10g R2 with the 10.2.0.3 patch set on a 32-bit Linux machine. I am comparatively new to this, so I don't have a thorough understanding yet.
    The hardware configuration I am going to have is 6 x 300 GB 15K RPM SAS 3 Gb/s drives in a RAID 5 set presented as a single logical drive. Is this a good configuration for ASM? I can change the hardware configuration as required.
    I heard that ASM requires two logical drives. If so, do I need to split this into two logical drives? If I make two logical drives, I will lose another 300 GB to a second parity drive, and I must avoid that. Is there any other RAID combination that is good for ASM?
    RAID 5 provides the striping feature that spreads contiguous data across different disks. Does ASM do the same thing? What additional benefits will I get from ASM? I have done some googling but I couldn't understand most of it. Can anybody help?

    Read this first:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/storeman.htm
    and then post if you have questions left.

  • Oracle ASM and RAID 5

    Hi All..
    I have an OLTP database about 80 GB in size, running as Oracle RAC with ASM on Windows Server 2003 64-bit.
    The diskgroups were not balanced because some disks in the DATA diskgroup were of different capacities. So I suggested adding disks of equal capacity, but the problem is that the storage admin has given me 3 x 100 GB LUNs on RAID 5 for the DATA diskgroup, which contains datafiles only. Below is the distribution of diskgroups:
    DATA - 4 disks - 2x50 GB, 2x100 GB, RAID 1+0
    ONLINELOGS - 2 disks - 2x50 GB, RAID 1+0
    UNDO - 2 disks - 2x50 GB, RAID 5
    As RAID 5 is not suggested for OLTP databases because of the extra I/O for parity, my questions are:
    1. Does ASM perform well with RAID 5 or not?
    2. Will it impact performance if only the datafiles are on RAID 5 and the redo logs are on RAID 10?
    Please suggest...

    Yes, of course there is a write penalty with RAID 5, and of course if you have a disk failure there is an even larger penalty.
    But not everyone has huge transactional burdens to bear, so for some people the trade-off of lower performance is worth the benefit of higher capacity.
    Only you know the number of IOPS you need to sustain, and the best way of determining whether your configuration will meet your needs is to test it. Do you have a similar setup with a database already that undergoes a similar workload? If so, how does the disk setup compare?
    Using external redundancy, all ASM will do is stripe the data across all devices within the diskgroup and attempt to keep each device within the diskgroup at the same capacity. It will be blissfully unaware whether it's RAID 1, RAID 3, RAID 5, or RAID 6 - ASM just won't have any idea what the underlying RAID level is.
    jason.
    http://jarneil.wordpress.com
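    One way to see the even distribution described above is to compare per-disk usage and I/O within the group (standard V$ASM_DISK columns):
    select name, total_mb, free_mb, reads, writes
    from v$asm_disk
    where group_number = 1
    order by disk_number;
    With external redundancy, TOTAL_MB - FREE_MB should stay nearly equal across the disks once a rebalance has completed.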

  • ASM vs. RAID

    We want to use ASM. The Sun storage has RAID. Unfortunately Oracle's statements about combining ASM with storage systems are not clear. In principle I have two possibilities:
    1. Use ASM with mirroring and striping, without RAID.
    2. Use ASM only with striping, on top of RAID (10, 5).
    What I do not understand: RAID performs striping and so does ASM, so the striping will be done twice. Can that be good? If it is fine nevertheless, is ASM on RAID (10, 5) the better choice, or ASM without RAID?
    Thank you in advance,
    Michael

    are ASM and RAID(10, 5) rather the choice or ASM without RAID?
    Check this thread: ASM vs RAID5/RAID10

  • ASM diskgroups and RAID

    I've been reading and reading about this, but can't seem to find a definitive answer...
    We have a new implementation which will be using ASM with RAID. The data area needs to be 3 TB, and the recovery area, to be used for archive logs and RMAN backups, needs to be 1 TB.
    The configuration I'm thinking about now is:
    +DATA diskgroup: 5 x 600 GB disks, using RAID 10 (this will include control files)
    +FRA diskgroup: 2 x 500 GB disks, using RAID 1
    +LOG diskgroup: 2 x 1 GB disks, using RAID 1 (this is only for redo logs)
    So here are my questions:
    1. Am I right in supposing that we would get the best performance on the FRA and LOG diskgroups by not using RAID 1+0? (i.e. hardware mirror, no hardware stripe)
    2. It has already been agreed to use RAID 1+0 for the +DATA diskgroup, but I can't see the added benefit of this. ASM will already stripe the data, so surely RAID 0 will just stripe it again (i.e. double-striping the data). Would it not be better just to mirror the data at the hardware level?
    Any other notes about my proposed configuration would also be greatly appreciated.
    Thanks
    Rup

    user573914 wrote:
    I've been reading and reading about this, but can't seem to find a definitive answer...
    We have a new implementation which will be using ASM with RAID. The data area needs to be 3TB, and the recovery area, to be used for archive logs and RMAN backups, needs to be 1TB.
    The configuration i'm thinking about now is:
    DATA diskgroup: 5*600G disks, using RAID 10 (this will include control files)
    I'm guessing that you mean 600G LUNs, not physical disks, right? Are the LUNs carved from one RAID-10 array, or multiple arrays?
    +FRA diskgroup: 2*500G disks, using RAID 1
    +LOG diskgroup: 2*1G disks, using RAID 1 (this is only for redo logs)
    So here are my questions:
    1. Am I right in supposing that we would get the best performance on the FRA and LOG diskgroups by not using RAID 1+0? (i.e. hardware mirror, no hardware stripe)
    RAID-1 is exactly 2 physical devices, by definition - no more, no less. Since REDO and FRA are (generally) sequential writes, you're only getting the throughput benefit of 1 physical disk per mirrored pair due to RAID-1 overhead (2x write overhead due to mirroring). Of course, that doesn't take into account write cache on the storage device, but that's an entirely different conversation. Maybe 1 disk's throughput is enough for your environment - it really depends on your requirements. Check your redo I/O and throughput against the rated disk IOPS and throughput to determine whether you're configured correctly. Be sure to consider RAID overhead in your calculations!
    It might make more sense to carve a couple of 1G LUNs from multiple RAID-10 arrays, which stripes data across multiple disks for better throughput, assuming the arrays are not IO-bound.
    2. It has already been agreed to use RAID 1+0 for the +DATA diskgroup, but I can't see the added benefit of this. ASM will already stripe the data, so surely RAID 0 will just stripe it again (i.e. double-striping the data). Would it not be better just to mirror the data at a hardware level?
    Again, it depends on how that RAID-10 is being presented to you by the storage admins. The 5 x 600 GB LUNs could be carved out of 5 different RAID-10 arrays, which would increase the total number of spindles behind your diskgroup and therefore improve performance (assuming no other contention). Or they may have carved all 5 LUNs out of a single dedicated array, which can still provide benefit, assuming the array has more than 10 disks. Or, even if the array only has 10 disks, it may be more cost-effective for the storage admin due to the lower number of hot spares required to support a single array vs. multiple arrays (which doesn't benefit you - and this is where a potential performance difference due to double-striping vs. RAID-1 comes in). There are many variables that come into play.
    All things being equal, you're generally going to get better performance out of more physical disks, as long as the disks are managed properly - the additional spindles will easily negate any striping overhead (as long as the IOPS are not all happening to multiple LUNs in the same array).
    It all comes back to your need and how your storage guys are configuring things.
    Let us know if you have more specific questions - there are so many combinations of configuration that it's difficult to determine a "best" configuration without understanding your storage setup... :)
    K
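    A rough way to gather the redo numbers mentioned above (these are cumulative statistics, so sample twice and take the difference over a known interval):
    select name, value
    from v$sysstat
    where name in ('redo size', 'redo writes');
    The deltas divided by the elapsed seconds approximate the write throughput and the write IOPS hitting the log LUNs; double the IOPS to account for RAID-1 mirroring overhead.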

  • ASM and FRA

    Installed Oracle 11g R2. Configured ASM with +DATA (for data files) and +FRA (for backups).
    If the FRA diskgroup (which is non-RAIDed) stops working due to a disk failure, and all other disks (OS and DATA) are still operating normally, can the Oracle database still be used/accessed by the users?

    MORE INFO to add:
    Ideally, the FRA should be on RAID 1. But due to a configuration limitation, I can only use one disk for the FRA and the spare one for holding the multiplexed files such as REDO, CONTROL, and ARCHIVE.
    This way: DATA (two ASM disks + RAID 1), FRA (one ASM disk, non-RAID), and Multiplex (one regular disk, non-RAID).
    DATA (redo log + control + others) --> multiplexes to --> Multiplex disk (mentioned above, holding redo log + control)
    FRA (archive + all other backups) --> multiplexes to --> Multiplex disk (mentioned above, holding archive)
    I would like to hear your opinions and comments.
    P.S. The FRA will hold the backups only, nothing else. And DATA (two ASM disks + RAID 1) will hold the regular Oracle data files + redo, control, and other related files, nothing else.
    Edited by: scottjhn on Aug 27, 2012 3:18 PM
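    A sketch of the multiplexing described above (the non-ASM paths are hypothetical): Oracle-managed file destinations put one copy of each redo log in +DATA and the second on the regular disk, and archiving goes both to the FRA and to the regular disk.
    SQL> ALTER SYSTEM SET db_create_online_log_dest_1 = '+DATA';
    SQL> ALTER SYSTEM SET db_create_online_log_dest_2 = '/u02/multiplex';
    SQL> ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=USE_DB_RECOVERY_FILE_DEST';
    SQL> ALTER SYSTEM SET log_archive_dest_2 = 'LOCATION=/u02/multiplex/arch';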

  • Asm Benefits

    Dear All Guru's
    I have a question; please help clarify my doubt: why do we need to implement ASM when we get all these features with RAID 10? Why should any organization implement ASM? Can you please elaborate on the benefits of ASM versus the RAID levels? It would help me a lot.
    Regards,
    Merri

    If you currently have RAID10 then you can specify external redundancy when creating disk groups.
    Benefits of ASM are:
    You will have control of which disks are in which disk groups, and there will be less reliance on system administrators compared to filesystems and raw devices.
    Automatic rebalancing when you add a disk.
    You will save on licensing if you plan to use clustered file systems. ASM is free !!
    Support from Oracle - one stop shop for any Oracle issues rather than relying on SA's or storage teams to troubleshoot issues.
    Makes administration easier, especially compared to raw devices.
    ASM can co-exist with filesystems or raw devices but I would find this pointless for the most part although there may be some exceptions.
    With 11gR2 you also have ACFS where your OCR, voting disk and binaries can sit on ACFS, although personally I would have binaries on filesystems on each host.
    Disadvantages:
    You can't back up using OS commands as you can at the filesystem level. You would need to use RMAN, which in all honesty could be considered an advantage.
    I'm sure there are more of each.
    With regards to RAID level it really depends on what you want to achieve and how much you are willing to spend.
    As a general rule, redo logs should be on your fastest disks.
    Any data which will be accessed less frequently or which will be archived can be on slower disks with a lower RAID level to save money. So you can have different diskgroups on different types of RAID if this is part of your requirements.
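    For example, the automatic rebalancing mentioned above kicks in as soon as a disk is added (disk path hypothetical), and its progress can be watched:
    SQL> ALTER DISKGROUP data ADD DISK '/dev/mapper/lun5' REBALANCE POWER 4;
    SQL> SELECT operation, state, power, est_minutes FROM v$asm_operation;
    ASM redistributes existing extents onto the new disk in the background while the database stays online.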

  • Oracle 10G New Feature........Part 1

    Dear all,
    for the last couple of days I have been very busy with my Oracle 10g box, so I think this is the right time to
    share some interesting 10g features and some internal stuff with all of you.
    Have a look:
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Oracle 10g Memory and Storage Feature.
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    1.Automatic Memory Management.
    2.Online Segment Shrink
    3.Redolog Advisor, checkpointing
    4.Multiple Temporary tablespace.
    5.Automatic Workload Repository
    6.Active Session History
    7.Misc
    a)Rename Tablespace
    b)Bigfile tablespace
    c)flushing buffer cache
    8.ORACLE INTERNAL
    a)undocumented parameter (_log_blocks_during_backup)
    b)X$ view (x$messages view)
    c)Internal Structure of Controlfile
    1. Automatic Memory Management
    ================================
    This feature reduces the overhead on the Oracle DBA. Previously we mostly had to set the different Oracle SGA parameters by hand for
    better performance, relying on our own experience, the advisor views, and monitoring the behaviour
    of the database.
    That was just a time-consuming activity.........
    Now this feature makes life easy for the Oracle DBA.
    Just set the SGA_TARGET parameter and Oracle automatically allocates memory to the different SGA components.
    It covers
    DB_CACHE_SIZE
    SHARED_POOL_SIZE
    LARGE_POOL_SIZE
    JAVA_POOL_SIZE
    and automatically sets them as
    __db_cache_size
    __shared_pool_size
    __large_pool_size
    __java_pool_size
    (check it in the alert log).
    The MMAN (memory manager) process is new in 10g and is responsible for the SGA tuning task.
    It automatically increases and decreases the SGA parameter values as required.
    Benefit: maximum utilization of the available SGA memory.
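    A minimal illustration (the size is an example only):
    SQL> ALTER SYSTEM SET sga_target = 1G;
    SQL> SELECT component, current_size/1024/1024 AS mb
      2  FROM v$sga_dynamic_components;
    The auto-tuned components then appear in the spfile as the double-underscore values listed above.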
    2. Online Segment Shrink
    ==========================
    Hmmmm, again a new feature from Oracle to reduce downtime. Oracle now mainly focuses on availability,
    which is why it keeps trying to reduce downtime by introducing new features.
    In previous versions, lowering the high water mark of a table was possible with
    exp/imp
    or
    alter table move.... commands, but with those methods the table was not available for normal use for long hours if it held a lot of data.
    In 10g, with just a few commands, we can lower the HW mark of a table.
    This feature is available for ASSM tablespaces.
    1. alter table emp enable row movement;
    2. alter table emp shrink space;
    The second command has two phases:
    the first phase compacts the segment, and during this phase DML operations are allowed;
    in the second (shrink) phase Oracle lowers the HWM of the table, and DML operations are blocked for a short duration.
    So if we want to shrink the HWM of a table, we should use two separate commands:
    first compact the segment, then shrink it during off-peak hours.
    alter table emp shrink space compact; (this command doesn't block DML operations)
    and alter table emp shrink space; (this command should run during off-peak hours)
    Benefit: better full table scans.
    3. Redo Log Advisor and checkpointing
    ================================================================
    Now Oracle will suggest the size of the redo log files via V$INSTANCE_RECOVERY:
    SELECT OPTIMAL_LOGFILE_SIZE
    FROM V$INSTANCE_RECOVERY;
    This value is influenced by the value of FAST_START_MTTR_TARGET.
    Checkpointing
    Automatic checkpoint tuning is enabled by setting FAST_START_MTTR_TARGET to a non-zero value.
    4. Multiple Temporary Tablespaces
    ==================================
    Now we can manage multiple temp tablespaces under one group.
    We can create a tablespace group implicitly by including the TABLESPACE GROUP clause in the CREATE TEMPORARY TABLESPACE or ALTER TABLESPACE statement when the specified tablespace group does not yet exist.
    For example, if group1 does not exist, the following statement creates the group along with a new tablespace:
    CREATE TEMPORARY TABLESPACE temp1 TEMPFILE '/u02/oracle/data/temp01.dbf'
    SIZE 50M
    TABLESPACE GROUP group1;
    -- Add an existing temp tablespace to the group:
    ALTER TABLESPACE temp2 TABLESPACE GROUP group1;
    -- We can also assign the temp tablespace group as the database default temporary tablespace:
    ALTER DATABASE <db name> DEFAULT TEMPORARY TABLESPACE group1;
    Benefit: better I/O
    One SQL statement can use more than one temp tablespace.
    5. AWR (Automatic Workload Repository)
    ==================================
    AWR is a built-in repository and the central point of Oracle 10g. Oracle's self-managing activities
    fully depend on AWR. By default, every hour Oracle captures database usage information and stores it in AWR with the help of the
    MMON (manageability monitor) process. This information is kept for 7 days by default and is then automatically purged.
    We can generate an AWR report with
    SQL> @?/rdbms/admin/awrrpt
    It is just like a statspack report, but an advanced and different version of statspack; it provides more information about the database as well as the OS.
    It produces the report in HTML or text format.
    We can also take a manual AWR snapshot with
    BEGIN
    DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT ();
    END;
    /
    ** The STATISTICS_LEVEL initialization parameter must be set to TYPICAL or ALL to enable the Automatic Workload Repository.
    Example run:
    [oracle@RMSORA1 oracle]$ sqlplus / as sysdba
    SQL*Plus: Release 10.1.0.2.0 - Production on Fri Mar 17 10:37:22 2006
    Copyright (c) 1982, 2004, Oracle. All rights reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
    With the Partitioning, OLAP and Data Mining options
    SQL> @?/rdbms/admin/awrrpt
    Current Instance
    ~~~~~~~~~~~~~~~~
    DB Id DB Name Inst Num Instance
    4174002554 RMSORA 1 rmsora
    Specify the Report Type
    ~~~~~~~~~~~~~~~~~~~~~~~
    Would you like an HTML report, or a plain text report?
    Enter 'html' for an HTML report, or 'text' for plain text
    Defaults to 'html'
    Enter value for report_type: text
    Type Specified: text
    Instances in this Workload Repository schema
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    DB Id Inst Num DB Name Instance Host
    * 4174002554 1 RMSORA rmsora RMSORA1
    Using 4174002554 for database Id
    Using 1 for instance number
    Specify the number of days of snapshots to choose from
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Entering the number of days (n) will result in the most recent
    (n) days of snapshots being listed. Pressing <return> without
    specifying a number lists all completed snapshots.
    Listing the last 3 days of Completed Snapshots
    Snap
    Instance DB Name Snap Id Snap Started Level
    rmsora RMSORA 16186 16 Mar 2006 17:33 1
    16187 16 Mar 2006 18:00 1
    16206 17 Mar 2006 03:30 1
    16207 17 Mar 2006 04:00 1
    16208 17 Mar 2006 04:30 1
    16209 17 Mar 2006 05:00 1
    16210 17 Mar 2006 05:31 1
    16211 17 Mar 2006 06:00 1
    16212 17 Mar 2006 06:30 1
    16213 17 Mar 2006 07:00 1
    16214 17 Mar 2006 07:30 1
    16215 17 Mar 2006 08:01 1
    16216 17 Mar 2006 08:30 1
    16217 17 Mar 2006 09:00 1
    Specify the Begin and End Snapshot Ids
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Enter value for begin_snap: 16216
    Begin Snapshot Id specified: 16216
    Enter value for end_snap: 16217
    End Snapshot Id specified: 16217
    Specify the Report Name
    ~~~~~~~~~~~~~~~~~~~~~~~
    The default report file name is awrrpt_1_16216_16217.txt. To use this name,
    press <return> to continue, otherwise enter an alternative.
    Benefit: now DBAs have more free time to play games.....................:-)
    An advanced version of statspack;
    more DB and OS information, with self-managing capability;
    new automatic alerts and database advisors built on AWR.
    6. Active Session History
    ==========================
    V$ACTIVE_SESSION_HISTORY is a view that contains recent session history.
    The memory for ASH comes from the SGA and cannot exceed 5% of the shared pool.
    So we can get the latest active-session data from the V$ACTIVE_SESSION_HISTORY view, and historical session data
    from DBA_HIST_ACTIVE_SESS_HISTORY.
    V$ACTIVE_SESSION_HISTORY includes some important columns:
    ~ SQL identifier of the SQL statement
    ~ object number, file number, and block number
    ~ wait event identifier and parameters
    ~ session identifier and session serial number
    ~ module and action name
    ~ client identifier of the session
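    A small example of the kind of drill-down this enables (event names vary with the workload): counting samples per SQL and wait event over the last hour approximates where the time went, since ASH samples active sessions about once per second.
    select sql_id, event, count(*) as samples
    from v$active_session_history
    where sample_time > sysdate - 1/24
    group by sql_id, event
    order by samples desc;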
    7.Misc:-
    ========
    Rename Tablespace:
    =================
    In 10g we can even rename a tablespace with
    alter tablespace <tb_name> rename to <tb_name_new>;
    This command updates the controlfile, the data dictionary, and the datafile headers, but the .dbf file name stays the same.
    ** We can't rename the SYSTEM and SYSAUX tablespaces.
    Bigfile tablespace:
    ====================
    A bigfile tablespace contains only one datafile.
    A bigfile tablespace with 8K blocks can contain a 32-terabyte datafile.
    Bigfile tablespaces are supported only for locally managed tablespaces with automatic segment space management.
    We can take advantage of bigfile tablespaces when we are using ASM or another logical volume manager with RAID;
    without ASM or RAID it gives poor response.
    Syntax (path and size are examples):
    CREATE BIGFILE TABLESPACE bigtbs
    DATAFILE '/u02/oracle/data/bigtbs01.dbf' SIZE 50G;
    Flushing the Buffer Cache:
    ======================
    This option is similar to flushing the shared pool, but it is only available from 10g on.
    I don't know what the use of this command would be on a production database......
    Anyway, we can check it and try it on a test server for tuning and testing some queries etc....
    SQL> alter system flush buffer_cache;
    System altered.
    ++++++++++++++++++
    8.Oracle Internal
    ++++++++++++++++++
    Here is some stuff that is not related to 10g specifically but contains some interesting things.
    a) undocumented parameter "_log_blocks_during_backup"
    ++++++++++++++++++++++++
    As we know, Oracle generates more redo during hot backup mode because
    it has to write a complete copy of each changed block into the redo log to guard against split blocks.
    We can change this behaviour by setting this parameter to FALSE
    if the Oracle block size equals the operating system block size, thus reducing the amount of redo generated
    during a hot backup.
    DON'T SET IT ON A PRODUCTION DATABASE WITHOUT ORACLE SUPPORT. THIS DOCUMENT IS JUST FOR INFORMATIONAL PURPOSES.
    b) some X$ views (X$MESSAGES)
    ++++++++++++++++
    If you are interested in Oracle internal architecture then the X$ views are the right place to find some interesting things.
    X$MESSAGES shows all the actions that the background processes perform:
    select * from x$messages;
    like:
    lock memory at startup MMAN
    Memory Management MMAN
    Handle sga_target resize MMAN
    Reset advisory pool when advisory turned ON MMAN
    Complete deferred initialization of components MMAN
    lock memory timeout action MMAN
    tune undo retention MMNL
    MMNL Periodic MQL Selector MMNL
    ASH Sampler (KEWA) MMNL
    MMON SWRF Raw Metrics Capture MMNL
    reload failed KSPD callbacks MMON
    SGA memory tuning MMON
    background recovery area alert action MMON
    Flashback Marker MMON
    tablespace alert monitor MMON
    Open/close flashback thread RVWR
    RVWR IO's RVWR
    kfcl instance recovery SMON
    c) Internal Structure of the Controlfile
    ++++++++++++++++++++++++++++++++++++
    The contents of the current controlfile can be dumped in text form.
    Dump level - dump contains:
    1 - only the file header
    2 - just the file header, the database info record, and checkpoint progress records
    3 - all record types, but just the earliest and latest records for circular-reuse record types
    4 - as above, but includes the 4 most recent records for circular-reuse record types
    5+ - as above, but the number of circular-reuse records included doubles with each level
    The session must be connected AS SYSDBA:
    alter session set events 'immediate trace name controlf level 5';
    This dump shows lots of interesting information.
    It also shows RMAN records if this controlfile has been used for RMAN backups.
    Thanks
    Kuljeet Pal Singh

    You can find each doc in HTML and PDF format in the Documentation Library.
    You can also download all of the documentation in HTML format to have it on your own computer (445.8 MB).
    Nicolas.

  • Multiplexing redo logs and control files to a separate diskgroup

    General question this one...
    I've been using ASM for a few years now and have always installed a new system with 3 diskgroups:
    +DATA - for datafiles, control files, redo logs
    +FRA - for archive logs, flash recovery, RMAN backups
    Those I guess are the standards, but I've always created an extra (very small) diskgroup, called +ONLINE, where I keep multiplexed copies of the redo logs and control files.
    My reasoning behind this is that if there are any issues with the +DATA diskgroup, the redo logs and control files can still be accessed.
    In the olden days (all of 5 years ago!), on local storage, this was important, but is it still important now? With all the striping and mirroring going on (at both the ASM and RAID levels), am I just being overly paranoid? Does this additional +ONLINE diskgroup actually hamper performance (with dual-write overheads that are not necessary)?
    Thoughts?

    Some of the decision will probably depend on your specific environment's data activity, volume, and throughput.
    Something to remember is that redo logs are written sequentially, which benefits from a lower RAID overhead (RAID-10: 2 writes per logical write vs. RAID-5: 4 I/Os per logical write). RAID-10 is often not cost-effective for the data portion of a database. If your database is OLTP with a high volume of random reads/writes, you're potentially hurting redo throughput by creating contention on the disks shared by data and redo. Again, that depends entirely on what you're seeing in terms of wait events. A low-volume database would probably not experience any noticeable performance degradation.
    In my environment, I have RAID-5 and RAID-10 available, and since the RAID-10 capacity requirement for redo is very low, it makes sense to create 2 diskgroups for online redo, separate from DATA and separate from each other. This way we don't need to be concerned with DATA transactions impacting REDO performance, and vice versa, and we still maintain redo redundancy.
    In my opinion, you can't be too paranoid. :)
    Good luck!
    K
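    A sketch of the arrangement discussed above (group number and size hypothetical): each new redo log group gets one member in +DATA and one in +ONLINE, so losing either diskgroup still leaves a usable member.
    SQL> ALTER DATABASE ADD LOGFILE GROUP 5 ('+DATA', '+ONLINE') SIZE 256M;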

  • Log file sync  during RMAN archive backup

    Hi,
    I have a small question. I hope someone can answer it.
    Our database (cluster) needs to respond within 0.5 seconds. Most of the time it works, except when the RMAN backup is running.
    During the week we run one full backup, every weekday one incremental backup, every hour a controlfile backup, and every 15 minutes an archivelog backup.
    During a backup the response time can be much longer than this 0.5 seconds.
    Below is a typical example of the response time.
    EVENT: log file sync
    WAIT_CLASS: Commit
    TIME_WAITED: 10,774
    It is obvious that it takes very long to get a commit. This is in seconds, and as you can see it is long. It is clearly related to the RMAN backup, since this kind of response time shows up when the backup is running.
    I would like to ask why the response times are so high, even if I only back up the archivelog files? We didn't have this problem before, but suddenly, since 2 weeks ago, we have it and I can't find the cause.
    - We use an 11.2 RAC database on ASM. Redo logs and database files are on the same disks.
    - Autobackup of controlfile is off.
    - Dataguard: LogXptMode = 'arch'
    Greetings,

    Hi,
    Thank you. I am new here and so I was wondering how I can put things into the right category. It is very obvious I am in the wrong one so I thank the people who are still responding.
    - Actually the example that I gave is one of many hundreds a day. The response times during the archivelog backup are mostly between 2 and 11 seconds. When we back up the controlfile along with it, these response times are guaranteed to occur.
    - The autobackup of the controlfile was turned off since we already have a backup of the controlfile every hour. As we have a backup of the archive files every 15 minutes, it is not necessary to also back up the controlfile every 15 minutes, especially if that causes even more delay. The controlfile is a lifeline, but if you have properly backed up your archive files, a full restore with at most 15 minutes of data loss is still possible. We turned autobackup off since it is severely in the way of performance at the moment.
    As already mentioned, for specific applications the DB has to respond within 0.5 seconds. When that doesn't happen, an entry is written to a table used by that application, so I can compare the time of a failure with the time of other events. The times of the archivelog backup and the failures match in 95% of the cases. It also shows that log file sync at that moment is part of this performance issue. I actually built a script that I use to determine, from the application's point of view, the cause of the problem:
    select ASH.INST_ID INST,
    ASH.EVENT EVENT,
    ASH.P2TEXT,
    ASH.WAIT_CLASS,
    DE.OWNER OWNER,
    DE.OBJECT_NAME OBJECT_NAME,
    DE.OBJECT_TYPE OBJECT_TYPE,
    ASH.TIJD,
    ASH.TIME_WAITED TIME_WAITED
    from (SELECT INST_ID,
    EVENT,
    CURRENT_OBJ#,
    ROUND(TIME_WAITED / 1000000,3) TIME_WAITED,
    TO_CHAR(SAMPLE_TIME, 'DD-MON-YYYY HH24:MI:SS') TIJD,
    WAIT_CLASS,
    P2TEXT
    FROM gv$active_session_history
    WHERE PROGRAM IN ('yyyyy', 'xxxxx')) ASH,
    (SELECT OWNER, OBJECT_NAME, OBJECT_TYPE, OBJECT_ID FROM DBA_OBJECTS) DE
    WHERE DE.OBJECT_id = ASH.CURRENT_OBJ#
    AND ASH.TIME_WAITED > 2
    ORDER BY 8,6
    - Our logfiles are 250M and we have 8 groups of 2 members.
    - The large pool is not set since we use memory_max_target and memory_target. I know that Oracle maybe doesn't use memory well with this parameter, so it is truly something I should look into.
    - I looked at the size of the log buffer. Our log buffer is actually 28M, which in my opinion is very large, so maybe I should make it smaller. It is very well possible that the log buffer is causing this problem. Thank you for the tip.
    - I will also definitely look into the I/O. Even though we use ASM on RAID 10, I don't think it is wise to put redo logs and datafiles on the same disks. Then again, it was not installed by me. So, you are right, I have to investigate.
    Thank you all very much for still responding even if I put this in the totally wrong category.
    Greetings,

  • RAID, ASM, and Block Size

    * This was posted in the "Installation" thread, but I copied it here to see if I can get more responses. Thank you. *
    Hello,
    I am about to set up a new Oracle 10.2 database server. In the past I used RAID 5, since 1) it was a fairly small database, 2) there were not a lot of writes, 3) high availability, 4) it wasted less space compared to other RAID techniques.
    However, even though our database is still small (around 100 GB), we are noticing that when we update our data, the time it takes is starting to grow to the point where an update that used to take about an hour now takes 10-12 hours or more. One thing we noticed is that if we created another tablespace with a 16 KB block size versus our normal tablespace's 8 KB block size, we almost cut the update time in half.
    So we decided that we should really start from scratch on a new server and tune it optimally. Here are some questions I have:
    1) Our server is a DELL PowerEdge 2850 with 4 x 146 GB hard drives (584 GB total). What is the best way to set up the disks? Should I use RAID 1+0 for everything? Should I use ASM? If I use ASM, how should the RAID be configured? Do I use RAID 0 for ASM, since ASM handles mirroring and striping? How should I set up the directory structure? How about partitioning?
    2) I am installing this on Linux, and when I tried to use a 32 KB block size on my old system, it said I could only use 16 KB due to my OS. Is there a way to use a 32 KB block size with Linux? Should I use a 32 KB block size?
    Thanks!

    Hi
    RAID 0 does indeed offer the best performance; however, if any one drive of the striped set fails, you will lose all your data. If you have not considered a backup strategy, now would be the time to do so. For redundancy, a RAID 1 mirror might be a better option, as it offers a safety net in case of a single drive failure. A RAID is not a backup, and you should always have a workable backup strategy.
    Purchase another 2 x 1 TB drives and you could consider RAID 10 - two stripes, mirrored.
    Not all your files will be large ones, as I'm guessing you'll also be using this machine for the usual mundane matters such as email etc.? Selecting a larger block size with small file sizes usually decreases performance, so you have to consider all your applications and file sizes before concluding that the largest (32 KB) block size is the best choice.
    My 2p
    Tony
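    If you do test a non-default block size (as in the 16 KB experiment described above), the database needs a buffer cache for that size before the tablespace can be created; a sketch, with the cache size, path, and sizes as examples only:
    SQL> ALTER SYSTEM SET db_16k_cache_size = 256M;
    SQL> CREATE TABLESPACE data16k
      2  DATAFILE '/u02/oradata/data16k_01.dbf' SIZE 10G
      3  BLOCKSIZE 16K;
    The default db_block_size itself is fixed at database creation; only additional tablespaces can use other sizes, each backed by its own db_nk_cache_size pool.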
