ASM RAID 6

We might have to break the current OCFS2 implementation (on SAN) and replace it with ASM on raw devices. What is the difference when it comes to storage consumption? Do we gain or lose disk space, and in either case, how much?
SLES9
10.1.0.5
Appreciate your thoughts

- Did external redundancy for ASM exist in 10.1? Yes, it exists in 10.1 as well.
- ASM vs. OCFS2 from a file system perspective - apart
from the manageability issue, why would ASM be
recommended over OCFS2? There are many new features being added to ASM, whereas there has not been much development in OCFS.
-Amit
http://askoracledba.wordpress.com/

Similar Messages

  • SAN vs ASM RAID - which to adopt

    Hi,
    we have a system using ASM and a SAN drive. The SAN has hardware RAID, while ASM also does the same thing, since, as I understand it, ASM is based on SAME (Stripe And Mirror Everything). Do you think we have multiple RAID layers in the system and should disable the hardware RAID? Also, I think the hardware RAID is RAID 4; please correct me if I am wrong.
    regards
    Nick

    RAID 1+0 or 0+1 is an implementation of striping and mirroring.
    RAID 5 is an implementation of striping with parity (the parity stripe is scattered among the disks; with RAID 4 the parity stripes are on specific disks).
    See: http://www.baarf.com
    Please note that the baarf website emphasizes the (write) performance penalty of using RAID levels with parity. Even with SANs this is still very true (it is inherent in the way parity is implemented).
    ASM normal redundancy is essentially a mirror implementation. The implementation of ASM normal redundancy differs subtly from the way RAID mirroring works.
    This means that RAID 1+0/0+1 at the SAN level is one way of mirroring, and ASM normal redundancy is another way of keeping mirror copies of blocks; combining RAID mirroring at the SAN level with normal redundancy at the ASM level means you end up with 4 copies on the same storage box.
    So I would say there is little benefit in having ASM normal redundancy on top of a mirrored stripe set.
    As advice on what to do: most ASM implementations use external redundancy, which means the redundancy of the SAN is used. I think this makes sense.
    Using normal redundancy makes sense when using local (non-RAID) disks, or when you have multiple SANs.
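    To make the distinction concrete, here is a minimal sketch (the LUN paths are hypothetical placeholders, not from the original post) of the two disk group styles discussed above: external redundancy, where the SAN's RAID provides the protection, versus normal redundancy, where ASM itself keeps two copies across failure groups.
    -- External redundancy: ASM only stripes, the SAN RAID provides the mirroring
    CREATE DISKGROUP data EXTERNAL REDUNDANCY
      DISK '/dev/mapper/san_lun1', '/dev/mapper/san_lun2';
    -- Normal redundancy: ASM mirrors extents across failure groups itself
    CREATE DISKGROUP data_mir NORMAL REDUNDANCY
      FAILGROUP fg1 DISK '/dev/mapper/san_lun1'
      FAILGROUP fg2 DISK '/dev/mapper/san_lun2';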

  • How can I use striping with ASM

    Hello buddy
    I am puzzled about striping.
    Do I need to create PV -> VG -> striped raw LV and set up ASM on top of it, or can I just use ASM striping?
    How can I use ASM striping instead of LVM striping?
    Thanks

    Hi,
    A quote from MOS (My Oracle Support):
    - ASM and RAID striping are complementary to each other. When a SAN or disk array provides striping, that can be used in a manner which is complementary to ASM.
    - Oracle ideally suggests that the RAID stripe size at the SAN layer should match the ASM stripe size (1MB by default). However, if a 1MB stripe at the storage level is not possible, then a stripe size of 512K/256K/128K should be OK. As long as the 1MB ASM stripe size is a multiple of the hardware stripe size, I/O is aligned at the hardware level. Otherwise, a single I/O can be split across multiple disks and cause multiple reads/writes and excessive I/O operations.
    - ASM mirroring has a small overhead on the server (especially on write performance), whereas external hardware mirroring performs the function on the storage controller.
    With external mirroring, you need to reserve disks as hot spares. With ASM, hot spares are not necessary, which makes more efficient use of the storage capacity.
    - ASM reduces the chance of misconfiguration and human error because of failure groups. With external RAID, you have to carefully plan your redundant controllers and paths, which requires higher admin overhead.
    Cheers
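    As a rough illustration of the stripe/AU alignment point above, this sketch (11.1+ attribute syntax; the device paths are hypothetical) creates a disk group with the default 1MB allocation unit made explicit, and then checks what is actually in use:
    CREATE DISKGROUP data EXTERNAL REDUNDANCY
      DISK '/dev/mapper/san_lun1', '/dev/mapper/san_lun2'
      ATTRIBUTE 'au_size' = '1M', 'compatible.asm' = '11.1';
    -- verify the allocation unit size in use
    SELECT name, allocation_unit_size FROM v$asm_diskgroup;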

  • ASM Disk Allocation

    Dear All,
    I need one more piece of advice from you gurus.
    I have 20 TB of SAN available. Our database is currently 1 TB in size and will grow to 20 TB very soon. We have the following env:
    OS: RHEL 5.5
    DB: Oracle Database 11g R2 Patch - 1
    RAC: 2-node RAC using ASM
    RAID: SAN is Raided as "1+0"
    The question:
    How do I allocate the 20 TB of storage to ASM? How many LUNs should I create? As the SAN is already RAIDed, I don't think I should use ASM redundancy.
    Please share your experience.
    Thanks
    P

    Hi,
    The maximum LUN size ASM can handle on a non-Exadata platform is 2TB.
    So I would choose a LUN size anywhere between 1 and 2TB.
    With 2TB LUNs you end up with a total of 10.
    You could use more (smaller) LUNs, but then you have to administer and oversee more of them. I think 10 is a pretty nice number.
    At most I would go down to 1TB LUNs and end up with 20 LUNs.
    However, keep in mind that when you want to grow, Oracle recommends adding LUNs of the same size.
    So if you don't want to grow in 2TB steps in the future, choose a smaller LUN size.
    As you stated, you can stay with external (no) redundancy in ASM.
    Regards
    Sebastian
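    A minimal sketch of what that could look like in practice (the multipath device names are hypothetical, and only the first few of the 10-20 LUNs are listed). The point is simply that equal-sized LUNs go into one external-redundancy disk group, and later growth is just an ADD DISK followed by an automatic rebalance:
    CREATE DISKGROUP data EXTERNAL REDUNDANCY
      DISK '/dev/mapper/lun01', '/dev/mapper/lun02', '/dev/mapper/lun03';  -- remaining LUNs listed the same way
    -- later growth: add another LUN of the same size and let ASM rebalance
    ALTER DISKGROUP data ADD DISK '/dev/mapper/lun11' REBALANCE POWER 4;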

  • RAID, ASM, and Block Size

    This was posted in the "Installation" thread, but I copied it here to see if I can get more responses. Thank you.
    Hello,
    I am about to set up a new Oracle 10.2 Database server. In the past, I used RAID5 since 1) it was a fairly small database, 2) there were not a lot of writes, 3) it offered high availability, and 4) it wasted less space compared to other RAID techniques.
    However, even though our database is still small (around 100GB), we are noticing that when we update our data, the time it takes is starting to grow to the point where an update that used to take about an hour now takes 10-12 hours or more. One thing we noticed is that if we created another tablespace with a block size of 16KB versus our normal tablespace with a block size of 8KB, we almost cut the update time in half.
    So, we decided that we should really start from scratch on a new server and tune it optimally. Here are some questions I have:
    1) Our server is a DELL PowerEdge 2850 with 4x146GB Hard Drives (584GB total). What is the best way to set up the disks? Should I use RAID 1+0 for everything? Should I use ASM? If I use ASM, how is the RAID configured? Do I use RAID0 for ASM since ASM handles mirroring and striping? How should I set up the directory structure? How about partitioning?
    2) I am installing this on Linux and when I tried on my old system to use 32K block size, it said I could only use 16K due to my OS. Is there a way to use a 32K block size with Linux? Should I use a 32K block size?
    Thanks!

    Hi
    RAID 0 does indeed offer best performance, however if any one drive of the striped set fails you will lose all your data. If you have not considered a backup strategy now would be the time to do so. For redundancy RAID 1 Mirror might be a better option as this will offer a safety net in case of a single drive failure. A RAID is not a backup and you should always consider a workable backup strategy.
    Purchase another two 1TB drives and you could consider RAID 10 - two stripes, mirrored.
    Not all your files will be large ones, as I'm guessing you'll be using this workstation for the usual mundane matters such as email etc. Selecting a larger block size with small file sizes usually decreases performance. You have to consider all applications and file sizes; only if large files dominate would a block size of 32K be the best choice.
    My 2p
    Tony

  • Install Recommendations (RAID, ASM, Block Size etc)

    Hello,
    I am about to set up a new Oracle 10.2 Database server. In the past, I used RAID5 since 1) it was a fairly small database, 2) there were not a lot of writes, 3) it offered high availability, and 4) it wasted less space compared to other RAID techniques.
    However, even though our database is still small (around 100GB), we are noticing that when we update our data, the time it takes is starting to grow to the point where an update that used to take about an hour now takes 10-12 hours or more. One thing we noticed is that if we created another tablespace with a block size of 16KB versus our normal tablespace with a block size of 8KB, we almost cut the update time in half.
    So, we decided that we should really start from scratch on a new server and tune it optimally. Here are some questions I have:
    1) Our server is a DELL PowerEdge 2850 with 4x146GB Hard Drives (584GB total). What is the best way to set up the disks? Should I use RAID 1+0 for everything? Should I use ASM? If I use ASM, how is the RAID configured? Do I use RAID0 for ASM since ASM handles mirroring and striping? How should I set up the directory structure? How about partitioning?
    2) I am installing this on Linux and when I tried on my old system to use 32K block size, it said I could only use 16K due to my OS. Is there a way to use a 32K block size with Linux? Should I use a 32K block size?
    Thanks!

    The way I usually handle databases of that size, if you don't feel like migrating to ASM redundancy, is to use RAID-10. RAID5 is HORRIBLY slow (your redo logs will hate you), and if your controller is any good, a RAID-10 will be the same speed as a RAID-0 on reads, and almost as fast on writes. Also, when you create your array, make the stripe blocks as close to 1MB as you can. Modern disks can usually cache 1MB pretty easily, and that will speed up the performance of your array by a lot.
    I just never got into ASM, not sure why. But I'd say build your array as a RAID-10 (you have the capacity) and you'll notice a huge difference.
    16k block size should be good enough. If you have recordsets that are that large, you might want to consider tweaking your multiblock read count.
    ~Jer
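    For reference, a hedged sketch of the two tuning knobs mentioned above (paths and sizes are made-up examples): a non-default 16K block size tablespace needs a matching buffer cache configured first, and large-scan behaviour is governed by db_file_multiblock_read_count.
    -- a 16K tablespace requires a 16K buffer cache to exist first
    ALTER SYSTEM SET db_16k_cache_size = 256M;
    CREATE TABLESPACE app_data_16k
      DATAFILE '/u02/oradata/ORCL/app_data_16k01.dbf' SIZE 10G
      BLOCKSIZE 16K;
    -- check / adjust the multiblock read count used for large scans
    SHOW PARAMETER db_file_multiblock_read_count
    ALTER SYSTEM SET db_file_multiblock_read_count = 64;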

  • RAID 10 and ASM

    Hi
    Can someone explain the difference between RAID 10 and ASM?
    From my understanding, both do striping of data.
    If we are implementing RAID 10, do we still need to implement ASM (and if yes, what is the advantage)?
    Thanks

    user10394804 wrote:
    Can someone explain the difference between RAID 10 and ASM?
    What is the difference between a car and the road? ASM = vehicle. RAID = road.
    In other words, ASM is a Storage (or Volume) Manager System that runs on different RAID implementations. The RAID implementation can be at the actual storage layer (hardware RAID), in which case RAID is external to ASM. Or ASM can itself be used to implement RAID (software RAID).
    From my understanding, both do striping of data.
    ASM automatically stripes data across the disks (e.g. RAID10 disks) in a diskgroup. RAID10 is a combination of striping and mirroring. So yes, both use striping - but ASM != RAID10. These are 2 very different layers in the storage tier.
    If we are implementing RAID 10, do we still need to implement ASM (and if yes, what is the advantage)?
    Yes. ASM manages the storage layer for you. It can perform dynamic load balancing. With ASM 11gR2, even more features are introduced that make ASM a very important part of tying the storage layer effectively and efficiently to the OS layer and the database layer.
    Oracle specifically recommends using ASM for RAC.
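    As an illustration of the "dynamic load balancing" point (the disk path below is a hypothetical placeholder): when a disk is added to a disk group, ASM redistributes extents automatically, and the rebalance can be watched from the ASM instance.
    ALTER DISKGROUP data ADD DISK '/dev/mapper/new_lun' REBALANCE POWER 4;
    -- progress of the automatic rebalance
    SELECT group_number, operation, state, power, sofar, est_work, est_minutes
      FROM v$asm_operation;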

  • ASM like RAID 1 between two storages

    In my production environment the Oracle instances run on the JFS2 file system. Soon we will have to relocate space for these files or switch to ASM. Our preference is to move to ASM, but first we need to complete some tests that we are conducting.
    Today, in the production environment, data from storage1 is replicated to storage2 via AIX/HACMP.
    Our tests with ASM have to cover the use of one set of disks in storage1 and another set in storage2.
    Below the details of the environment:
    In AIX 5.3 TL8+
    root@suorg06_BKP:/> lspv
    hdisk17 none None
    hdisk18 none None
    hdisk19 none None
    hdisk16 none None
    root@suorg06_BKP:/> fget_config -Av
    ---dar0---
    User array name = 'STCATORG01'
    dac0 ACTIVE dac1 ACTIVE
    Disk DAC LUN Logical Drive
    hdisk17 dac0 15 ASMTST_02
    hdisk16 dac0 14 ASMTST_01
    ---dar1---
    User array name = 'STCATORG02'
    dac4 ACTIVE dac5 ACTIVE
    Disk DAC LUN Logical Drive
    hdisk18 dac5 16 ASMTST_B01
    hdisk19 dac5 17 ASMTST_B02
    select
      lpad(name,15) as name,
      group_number,
      disk_number,
      mount_status,
      header_status,
      state,
      redundancy,
      lpad(path,15) as path,
      total_mb,
      free_mb,
      to_char(create_date,'dd/mm/yyyy') as create_date,
      to_char(mount_date,'dd/mm/yyyy') as mount_date
    from
      v$asm_disk
    order by
      disk_number;
    NAME GROUP_NUMBER DISK_NUMBER MOUNT_S HEADER_STATU STATE REDUNDA PATH TOTAL_MB FREE_MB
    0 0 CLOSED CANDIDATE NORMAL UNKNOWN /dev/rhdisk16 30720 0
    0 1 CLOSED CANDIDATE NORMAL UNKNOWN /dev/rhdisk17 30720 0
    0 2 CLOSED CANDIDATE NORMAL UNKNOWN /dev/rhdisk18 30720 0
    0 3 CLOSED CANDIDATE NORMAL UNKNOWN /dev/rhdisk19 30720 0
    select
      v$asm_diskgroup.group_number,
      lpad(v$asm_diskgroup.name,20) as name,
      v$asm_diskgroup.sector_size,
      v$asm_diskgroup.block_size,
      v$asm_diskgroup.allocation_unit_size,
      v$asm_diskgroup.state,
      v$asm_diskgroup.type,
      v$asm_diskgroup.total_mb,
      v$asm_diskgroup.free_mb,
      v$asm_diskgroup.offline_disks,
      v$asm_diskgroup.unbalanced,
      v$asm_diskgroup.usable_file_mb
    from
      v$asm_diskgroup
    order by
      v$asm_diskgroup.group_number;
    no rows selected
    SQL> CREATE DISKGROUP 'DB_DG_TESTE' NORMAL REDUNDANCY DISK '/dev/rhdisk16', '/dev/rhdisk18';
    Diskgroup created.
    select
      lpad(name,15) as name,
      group_number,
      disk_number,
      mount_status,
      header_status,
      state,
      redundancy,
      lpad(path,15) as path,
      total_mb,
      free_mb,
      to_char(create_date,'dd/mm/yyyy') as create_date,
      to_char(mount_date,'dd/mm/yyyy') as mount_date
    from
      v$asm_disk
    order by
      disk_number;
    NAME GROUP_NUMBER DISK_NUMBER MOUNT_S HEADER_STATU STATE REDUNDA PATH TOTAL_MB FREE_MB CREATE_DAT MOUNT_DATE
    DB_DG_TESTE_000 1 0 CACHED MEMBER NORMAL UNKNOWN /dev/rhdisk16 30720 30669 09/12/2008 09/12/2008
    0 1 CLOSED CANDIDATE NORMAL UNKNOWN /dev/rhdisk17 30720 0 09/12/2008 09/12/2008
    DB_DG_TESTE_000 1 1 CACHED MEMBER NORMAL UNKNOWN /dev/rhdisk18 30720 30669 09/12/2008 09/12/2008
    0 3 CLOSED CANDIDATE NORMAL UNKNOWN /dev/rhdisk19 30720 0 09/12/2008 09/12/2008
    select
      v$asm_diskgroup.group_number,
      lpad(v$asm_diskgroup.name,20) as name,
      v$asm_diskgroup.sector_size,
      v$asm_diskgroup.block_size,
      v$asm_diskgroup.allocation_unit_size,
      v$asm_diskgroup.state,
      v$asm_diskgroup.type,
      v$asm_diskgroup.total_mb,
      v$asm_diskgroup.free_mb,
      v$asm_diskgroup.offline_disks,
      v$asm_diskgroup.unbalanced,
      v$asm_diskgroup.usable_file_mb
    from
      v$asm_diskgroup
    order by
      v$asm_diskgroup.group_number;
    GROUP_NUMBER NAME SECTOR_SIZE BLOCK_SIZE ALLOCATION_UNIT_SIZE STATE TYPE TOTAL_MB FREE_MB OFFLINE_DISKS U USABLE_FILE_MB
    1 DB_DG_TESTE 512 4096 1048576 MOUNTED NORMAL 61440 61338 0 N 30669
    SQL> ALTER DISKGROUP 'DB_DG_TESTE' ADD DISK '/dev/rhdisk17', '/dev/rhdisk19';
    select
      lpad(name,15) as name,
      group_number,
      disk_number,
      mount_status,
      header_status,
      state,
      redundancy,
      lpad(path,15) as path,
      total_mb,
      free_mb,
      to_char(create_date,'dd/mm/yyyy') as create_date,
      to_char(mount_date,'dd/mm/yyyy') as mount_date
    from
      v$asm_disk
    order by
      disk_number;
    NAME GROUP_NUMBER DISK_NUMBER MOUNT_S HEADER_STATU STATE REDUNDA PATH TOTAL_MB FREE_MB CREATE_DAT MOUNT_DATE
    DB_DG_TESTE_000 1 0 CACHED MEMBER NORMAL UNKNOWN /dev/rhdisk16 30720 30681 09/12/2008 09/12/2008
    DB_DG_TESTE_000 1 1 CACHED MEMBER NORMAL UNKNOWN /dev/rhdisk18 30720 30681 09/12/2008 09/12/2008
    DB_DG_TESTE_000 1 2 CACHED MEMBER NORMAL UNKNOWN /dev/rhdisk17 30720 30682 09/12/2008 09/12/2008
    DB_DG_TESTE_000 1 3 CACHED MEMBER NORMAL UNKNOWN /dev/rhdisk19 30720 30681 09/12/2008 09/12/2008
    select
      v$asm_diskgroup.group_number,
      lpad(v$asm_diskgroup.name,20) as name,
      v$asm_diskgroup.sector_size,
      v$asm_diskgroup.block_size,
      v$asm_diskgroup.allocation_unit_size,
      v$asm_diskgroup.state,
      v$asm_diskgroup.type,
      v$asm_diskgroup.total_mb,
      v$asm_diskgroup.free_mb,
      v$asm_diskgroup.offline_disks,
      v$asm_diskgroup.unbalanced,
      v$asm_diskgroup.usable_file_mb
    from
      v$asm_diskgroup
    order by
      v$asm_diskgroup.group_number;
    GROUP_NUMBER NAME SECTOR_SIZE BLOCK_SIZE ALLOCATION_UNIT_SIZE STATE TYPE TOTAL_MB FREE_MB OFFLINE_DISKS U USABLE_FILE_MB
    1 DB_DG_TESTE 512 4096 1048576 MOUNTED NORMAL 122880 122725 0 N 46002
    At the end of the disk group creation you can see from the query that the space available in the disk group is 30669 MB, but after adding the two other disks the available size is 46002 MB.
    Wasn't the usable space expected to be approximately 50% of the total disk capacity?
    How should I create the disk group so that storage1 is mirrored against storage2 without this great loss of space?
    Edited by: scarlosantos on Dec 9, 2008 4:39 PM
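    A possible way to get the storage1/storage2 mirroring described above is to declare one failure group per array (disk paths taken from the fget_config listing above; verify against your own environment). Without explicit failure groups, ASM treats each disk as its own failure group, so mirror partners can land on disks in the same array. Note also that USABLE_FILE_MB subtracts REQUIRED_MIRROR_FREE_MB, the space reserved to restore redundancy after a disk failure, which appears to be why it shows (122725 - 30720) / 2, i.e. about 46002, rather than half of FREE_MB. A sketch:
    CREATE DISKGROUP db_dg_teste NORMAL REDUNDANCY
      FAILGROUP storage1 DISK '/dev/rhdisk16', '/dev/rhdisk17'
      FAILGROUP storage2 DISK '/dev/rhdisk18', '/dev/rhdisk19';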

    Maybe my phrasing was bad in the last post.
    You can do the RAID on IDE 3 by creating the array with either SATA 1 or 2. To install the driver, you must boot up with your Windows CD and hit F6 when prompted to install 3rd-party drivers for RAID/SCSI.
    You should have the SATA driver floppy disk with you, as it is required to install the drivers.
    After installing the drivers, exit the Windows installation and reboot; during the reboot press Ctrl+F to enter the Promise RAID array menu and you are ready to set up the RAID. Please read through the Serial ATA RAID manual for more info.

  • Redologs on ASM or RAID 10

    Hi Experts,
    I would like to know the best storage configuration for redologs. My OS is RHEL 6.3 and database release is 11.2.0.4 2-node RAC.
    1. Is it good and recommended practice to place the redologs on ASM?
    or
    2. RAID 10 is the best for redologs in terms of performance?
    Thanks, Suvesh

    Performance generally depends on your configuration, for instance, how many devices are involved during a read and write operation.
    Another factor that has a great influence is data caching. If your RAID 10 configuration provides better caching or striping performance than ASM, you could configure ASM disk groups with external redundancy built on devices that use a hardware RAID solution for best performance. I would, however, not recommend splitting a database between ASM for some database files and a regular file system on a RAID configuration for others.
    Performance depends on your hardware and setup, which is unknown and therefore your question cannot be answered. However, since ASM is the recommended solution anyway for all database files, I would consider RAID an optional solution that can add performance on top of your ASM configuration.
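    If the redo logs do end up in ASM, placing them is just a matter of pointing the log members at a disk group. A minimal sketch for a 2-node RAC (the disk group names +REDO1/+REDO2 are assumptions, not from the original post):
    -- one new group per thread, multiplexed across two disk groups
    ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 11 ('+REDO1', '+REDO2') SIZE 512M;
    ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 12 ('+REDO1', '+REDO2') SIZE 512M;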

  • ASM on RAID 5 drives

    hello,
    We are building a VLDB (15TB) 10.2.0.2 database using ASM. The network team has provided drive sets in a combination of RAID5 and 0+1 mirrors.
    What is the impact of running ASM on top of a RAID configuration? The network guys will not provide non-RAID drive sets.
    What is the best practice for applying ASM to RAID configurations?
    thanks
    james

    Thanks for your reply.
    I have another question - could you please verify this for me?
    We have a 2-disk mirror; each disk is 136GB on a single spindle.
    We partitioned the disks into 5 partitions:
    DISK0 1 2 3 4 5
    | 25 | 25 | 25 | 25 | 25 |
    DISK1 1 2 3 4 5
    | 25 | 25 | 25 | 25 | 25 |
    When I created my ASM disk groups I grouped
    DISK0 partition 1 and DISK1 partition 1 as ASMGRP1, and
    DISK0 partition 2 and DISK1 partition 2 as ASMGRP2.
    Is this the right way of creating disk groups? If I do that, will it affect ASM I/O performance?
    OR
    DISK0 1
    | 136gb |
    DISK2 1
    | 136gb |
    Or can I create one disk group from the whole disks as a single ASM disk group and put 2 databases in that single disk group?
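    For what it's worth, a sketch of the first layout being asked about (device names are hypothetical). If the mirroring is meant to be done by ASM rather than a hardware controller, each physical disk would be its own failure group; keep in mind that disk groups built from partitions of the same two spindles all compete for the same two sets of disk heads.
    CREATE DISKGROUP asmgrp1 NORMAL REDUNDANCY
      FAILGROUP disk0 DISK '/dev/sda1'
      FAILGROUP disk1 DISK '/dev/sdb1';
    CREATE DISKGROUP asmgrp2 NORMAL REDUNDANCY
      FAILGROUP disk0 DISK '/dev/sda2'
      FAILGROUP disk1 DISK '/dev/sdb2';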

  • ASM Disk Group RAID Levels

    This is the scenario that I am currently working on. Just need some input on whether it is feasible or not.
    We have a 2 node RAC running Oracle 10.2.0.3 on AIX 5L. Database size is ~2TB. The database mostly performs OLTP but also stores some historical data.
    There are two main applications using the database - one performs high reads with some small updates & inserts, while the other is very write intensive but does some reads as well.
    Currently there are three disk groups: one for the application tablespaces (dg_data), another for the system/sysaux/undo tablespaces (dg_system), and another for archived logs & redo log copies (dg_flash) - all using external redundancy. ASM best practices recommend no more than 2 disk groups. They also recommend disk groups with disks of similar characteristics, including RAID levels. However, the dg_data disk group has both RAID 5 and RAID 1+0 disks, which house tablespaces for both applications. Given that the applications have different requirements (heavy reads vs heavy writes), does it make sense to create separate disk groups for the 2 different RAID levels, or would using RAID 5 in dg_data satisfy both requirements?

    I am attempting to generate some statistics on the ASM disks' I/O activity before implementing the disk group separation, in order to have some metrics for comparison purposes. Enterprise Manager Grid Control displays the performance of disk groups and individual disks by showing the Disk Group I/O Cumulative Statistics. When comparing the results with the asmiostat output I am unable to correlate them. I know that asmiostat queries the v$asm_disk_stat view. Where does EM GC pull its information from?
    For example, I run the following query on the ASM instance:
    SQL> select group_number, disk_number, total_mb, free_mb, name, path, reads, writes, read_time, write_time, bytes_read, bytes_written
         from v$asm_disk_stat
         where group_number =
           (select group_number from v$asm_diskgroup
            where name = 'DG_FLASH')
    GROUP_NUMBER DISK_NUMBER TOTAL_MB FREE_MB NAME PATH READS WRITES READ_TIME WRITE_TIME BYTES_READ BYTES_WRITTEN
    1 0 8671 8432 DG_FLASH_0000 /dev/asm2 14379476 10338479 149205.75 19633.64 290,136,450,560.00 7.2165E+10
    1 1 8671 8431 DG_FLASH_0001 /dev/asm3 11470508 10278698 184597.5 19313.54 249,769,027,584.00 9.2911E+10
    1 2 8671 8432 DG_FLASH_0002 /dev/asm4 17274529 8743188 178547.56 38342.52 339,439,240,192.00 6.7165E+10
    The output from the same period on Grid Control is below
    MEMBER DISKS AVG RESPTIME AVG THROUGHPUT TOT I/O TOT RDS TOT WRTS RDERRS WRTERRS
    DG_FLASH_0000 5.58 2.58 33179503 21949607 11,229,896 0 0
    DG_FLASH_0001 8.26 1.83 25752100 13131695 12,620,405 0 0
    DG_FLASH_0002 8.11 1.86 28269693 18798823 9,470,870 0 0
    The statistics in the query are lower than those in the EM GC report. I also tried querying the fixed views (x$) but the results were even more confusing.
    What is the best method for comparing and gathering statistics on ASM activity?
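    Since V$ASM_DISK_STAT values are cumulative (since the disk group was mounted), one way to line them up with EM's averages is to capture two timestamped samples and work with the deltas. A rough sketch of such a sample query (join and column names as they appear in 10.2):
    SELECT SYSTIMESTAMP AS sample_time,
           g.name AS diskgroup, d.name AS disk,
           d.reads, d.writes, d.read_time, d.write_time,
           d.bytes_read, d.bytes_written
      FROM v$asm_disk_stat d
      JOIN v$asm_diskgroup_stat g ON g.group_number = d.group_number
     WHERE g.name = 'DG_FLASH';
    Run it twice a known interval apart; the difference in reads divided by the interval gives an I/O rate, and the difference in read_time divided by the difference in reads gives an average time per read that is more comparable to what EM reports.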

  • ASM vs RAID for 11gR2 RAC Environment

    Hi There!
    We are planning to install 11GR2 RAC with two nodes Cluster on LINUX in Our Environment.
    Operating System: OEL 5.4
    For hardware we have two Dell servers with 16GB RAM each, plus on the SAN side we have only 8 disks (173GB each) left for the RAC cluster setup. I am going to create two databases (LIVE/UAT) on this cluster setup. Currently our production DB size is 6GB and I assume that for the coming 5 years it will not grow beyond 100GB, and I will keep the UAT size fixed at 15GB with no changes. So how do I get the best ASM performance using all my resources?
    My question:
    1) Which is the best combination of ASM and RAID in our storage environment?
    2) How many disk groups should I create for both databases (UAT/LIVE)?
    3) How many disks should I allocate to each disk group, and with which RAID option? If you have any suggestions for LUNs, how should I create the LUNs across the disks I have?
    4) I know Oracle recommends two disk groups, DATA and FRA; is there any other suggestion for CRS, redo, and temp files?
    Thanks for your assistance.
    Hemesh.

    My first question was: Which RAID option (0, 1, 5, 0+1) should I choose with ASM?
    Well, it doesn't matter for ASM - at least in your configuration with 8 disks.
    RAID 0 is not an option - forget about it. RAID 1 (or, combined with more than two disks and an overlaid RAID 0, which makes a RAID 1+0) might be an option for write-intensive databases. RAID 5 is more for read-intensive workloads due to the RAID 5 write penalty, but offers "more" capacity at the cost of slower write speed.
    I recommend sticking with RAID 1 (thus mirroring pairs of disks) and exporting them to ASM, rather than creating one big RAID 1+0 over all of your disks and exporting the storage as one big chunk to ASM, for manageability. If you want to add storage later on, you are perfectly in line with Oracle's recommendation to have equal-size LUNs in ASM when each LUN is a pair of mirrored disks. If you create one big RAID 1+0 and later add two disks, you end up with one LUN of 600 GB and one of 170 GB... that's a big mismatch.
    But if I create two disk groups, is it good practice to offer them to both (UAT/LIVE) databases?
    Normally there is a separation between UAT and production at the storage and server level. In your case it might be "ok" to place everything in the same disk groups. This mainly depends on which database puts the most load on the disk subsystem.
    Ronny Egner
    My Blog: http://blog.ronnyegner-consulting.de
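    Following that recommendation, a rough sketch of how the mirrored pairs could be presented to ASM and shared by both databases (the multipath device names are hypothetical; each LUN is assumed to be one hardware RAID 1 pair):
    CREATE DISKGROUP data EXTERNAL REDUNDANCY
      DISK '/dev/mapper/mirror1', '/dev/mapper/mirror2', '/dev/mapper/mirror3';
    CREATE DISKGROUP fra EXTERNAL REDUNDANCY
      DISK '/dev/mapper/mirror4';
    -- in each database (LIVE and UAT), point file creation at the shared disk groups
    ALTER SYSTEM SET db_create_file_dest = '+DATA' SCOPE=SPFILE;
    ALTER SYSTEM SET db_recovery_file_dest_size = 100G SCOPE=SPFILE;
    ALTER SYSTEM SET db_recovery_file_dest = '+FRA' SCOPE=SPFILE;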

  • 6 x 73GB disks : ASM or RAID ?

    Hi,
    I am going to install Oracle db on a server with 6 x 73GB disks. What is the best configuration
    - ASM with no RAID,
    - ASM with RAID, or
    - RAID with no ASM.
    To be able to use ASM I need to install the OS & Oracle binaries on at least one disk, which leaves me with 5 disks for ASM.
    Also, is it safe to install the OS and Oracle binaries on one disk only?
    If I go for a non-ASM solution, should I use RAID10 or RAID5?
    reg,
    Amitoj.

    Hi,
    ASM is a file system managed by Oracle, as opposed to an OS file system,
    and RAID is a technology for data redundancy.
    RAID may be handled by a hardware RAID controller or by Oracle ASM.
    If you have a hardware RAID controller, you can use ASM or an OS file system; in the case of using ASM, select EXTERNAL REDUNDANCY when installing ASM.
    But if you don't have a hardware RAID controller, you can have ASM handle the redundancy by selecting it when installing ASM.
    paryab.

  • RAID level, ASM

    Hi all,
    From a performance perspective: which RAID levels are recommended for storing the OCR/voting disks, redo logs, control files, and datafiles in an 11gR2 RAC using ASM?
    - OCR/voting disks with NORMAL redundancy at the ASM level.
    - Redo logs, control files, datafiles, temp files with EXTERNAL redundancy at the ASM level.
    (we will use Redhat Enterprise Linux 5.5)
    Thank you,
    Diego

    Diego wrote:
    From a performance perspective: which RAID levels are recommended to store OCR/voting disks, redo logs, control files, datafiles in an 11gR2 RAC using ASM?
    A 2-way mirror with a quorum disk is needed for the OCR and voting disk. This means at minimum 3 disks.
    For the database - that depends entirely on what redundancy you need for the database layer.
    ASM automatically stripes across the disks of a diskgroup. You can then choose to mirror that in addition. Or you can use external mirroring (on the storage server/SAN). Or you can use RAID10 on the SAN and then use a single striped and mirrored LUN per diskgroup. Or you can use multiple such LUNs per diskgroup, which means ASM will stripe the striped set. Not a real issue, and covered in an Oracle support note.
    As ASM does not support RAID5/6/7, you can use that on the physical storage layer and then simply stripe in ASM.
    Or you can use 2 storage servers and use ASM to stripe the disks (in a diskgroup) per server. Then mirror these across storage server boundaries, thus introducing redundancy at physical storage server layer.
    Lots of possibilities and combinations. The best being whatever meets your requirements the best.
    Also keep in mind that the fabric layer also needs to be redundant. There is no use in having 2 storage servers, for example, and mirroring across them for redundancy, when connectivity to the storage servers is via a single non-redundant fabric layer switch, or when wiring a dual-port HBA/HCA into the same switch (cable failure is covered, but lose the switch and you lose all connectivity to the fabric layer).
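    To make the OCR/voting-disk point concrete, here is a hedged 11.2 sketch (device paths are hypothetical) of a normal-redundancy disk group where the third failure group acts as the quorum, giving the minimum of 3 disks/failure groups needed for the voting files:
    CREATE DISKGROUP ocrvote NORMAL REDUNDANCY
      FAILGROUP fg1 DISK '/dev/mapper/crs1'
      FAILGROUP fg2 DISK '/dev/mapper/crs2'
      QUORUM FAILGROUP fg3 DISK '/dev/mapper/crs3'
      ATTRIBUTE 'compatible.asm' = '11.2.0.0.0';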

  • ASM mirroring vs. hardware RAID

    We are planning a new installation of Oracle 10g Standard Edition with RAC.
    What is best to use: ASM mirroring or hardware RAID?
    Thank you,
    Marius

    I found this link http://www.revealnet.com/newsletter-v6/0905_D.htm which has an interesting comparison.
    We have an iSCSI.
    Porzer: I'm thinking the same, but I don't have experience with Oracle and I want to know from someone with more experience :)
    Thank you,
    Marius
