Redo Log - Storage Consideration

I have a question about redo log storage guidelines that I read in an article on Metalink.
That article recommends placing redo logs on a RAID level 1 device (mirroring),
and recommends NOT placing them on RAID 10 or RAID 5.
If you know, please explain in detail why that is?
Scientia potentia est

I haven't seen a RAID 0, RAID 1, or RAID 5 filesystem in the last 5 years. Most companies now use SAN or NAS.
That is not entirely true, is it, Robert? Most default SAN installations are set up as RAID 5 and are presented to the users as filesystem mounts.
I agree entirely re the benefits of using ASM
The reason redo logs are recommended to be on RAID 1 (or 1+0/10) and not RAID 5 is that redo logs are written differently from all other Oracle datafiles: they are written sequentially by the LGWR process. RAID 5 involves writing parity data to another disk, and therefore adds extra writes to what can already be a very intensive single-streamed process.
John
www.jhdba.wordpress.com
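To put rough numbers on the parity overhead (a back-of-the-envelope sketch, not from the Metalink article): a small write on RAID 5 is a read-modify-write cycle, while a mirrored write is just two parallel writes:

    RAID 5 small write = read data + read parity + write data + write parity = 4 I/Os
    RAID 1/10 write    = one write to each mirror side                       = 2 I/Os

So every LGWR write to RAID 5 costs roughly twice the disk operations, and the parity reads interrupt what would otherwise be a purely sequential stream.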

Similar Messages

  • Improving redo log writer performance

    I have a database on RAC (2 nodes)
    Oracle 10g
    Linux 3
    2 servers PowerEdge 2850
    I'm tuning my database with Spotlight. I already have this alert:
    "The Average Redo Write Time alarm is activated when the time taken to write redo log entries exceeds a threshold."
    The servers are not on RAID 5.
    How can I improve redo log writer performance?
    Unlike most other Oracle write I/Os, Oracle sessions must wait for redo log writes to complete before they can continue processing.
    Therefore, redo log devices should be placed on fast devices.
    Most modern disks should be able to process a redo log write in less than 20 milliseconds, and often much lower.
    To reduce redo write time see Improving redo log writer performance.
    See Also:
    Tuning Contention - Redo Log Files
    Tuning Disk I/O - Archive Writer
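    As a quick first check (a sketch; v$system_event is standard on 10g), you can read the average redo write latency straight from the wait interface:

        SELECT event,
               total_waits,
               ROUND(time_waited_micro / NULLIF(total_waits, 0) / 1000, 2) AS avg_ms
        FROM   v$system_event
        WHERE  event IN ('log file parallel write', 'log file sync');

    'log file parallel write' is the LGWR I/O itself; 'log file sync' is what sessions wait on at commit. If the average for the parallel write is well above a few milliseconds, the redo devices themselves are the problem.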

    Some comments on the section that was pulled from Wikipedia. There is some confusion in the market, as there are different types of solid state disks with different pros and cons. The first major point is that the quote pulled from Wikipedia addresses issues with flash hard disk drives. Flash disks are one type of solid state disk that would be a bad solution for redo acceleration (as I will attempt to describe below), though they could be useful for accelerating read-intensive applications. The type of solid state disk used for redo logs uses DDR RAM as the storage media. You may decide to discount my advice because I work for one of these SSD manufacturers, but I think if you do enough research you will see the point. There are many articles, and many more customers who have used SSD to accelerate Oracle.
    > Assuming that you are not CPU constrained, moving the online redo to high-speed solid-state disk can make a huge difference.

    Do you honestly think this is practical and usable advice, Don? There is a HUGE price difference between SSD and normal hard disks. Never mind the following disadvantages, quoting (http://en.wikipedia.org/wiki/Solid_state_disk):
    # Price - As of early 2007, flash memory prices are still considerably higher per gigabyte than those of comparable conventional hard drives - around $10 per GB compared to about $0.25 for mechanical drives.

    Comment: Prices for DDR RAM based systems are actually higher than this, with a typical list price around $1000 per GB. Your concern, however, is not price per capacity but price for performance. How many spindles will you have to spread your redo log across to get the performance that you need? How much impact are the redo logs having on your RAID cache effectiveness? Our system is obviously geared to the enterprise, where Oracle is supporting mission-critical databases and a huge return can be made on accelerating Oracle.
    # Capacity - The capacity of SSDs tends to be significantly smaller than the capacity of HDDs.

    Comment: This statement is true. Per hard disk drive versus per individual solid state disk system, you can typically get higher storage density with a hard disk drive. However, if your goal is redo log acceleration, storage capacity is not your bottleneck; write performance can be. Keep in mind that, just as with any storage media, you can deploy an array of solid state disks that provides terabytes of capacity (with either DDR or flash).
    # Lower recoverability - After mechanical failure the data is completely lost as the cell is destroyed, while if a normal HDD suffers mechanical failure the data is often recoverable using expert help.

    Comment: If you lose a hard drive holding your redo log, the last thing you are likely to do is have a disk restoration company partially restore your data. You ought to be getting data from your mirror or RAID to rebuild the failed disk. Similarly, with solid state disks (flash or DDR) we recommend host-based mirroring to provide enterprise levels of reliability. In our experience, a DDR based solid state disk has a failure rate equal to the odds of losing two hard disk drives in a RAID set.
    # Vulnerability against certain types of effects, including abrupt power loss (especially DRAM based SSDs), magnetic fields and electric/static charges, compared to normal HDDs (which store the data inside a Faraday cage).

    Comment: This statement is all FUD. For example, our DDR RAM based systems have redundant power supplies, N+1 redundant batteries, and four RAID-protected "hard disk drives" for data backup. The memory is ECC protected and Chipkill protected.
    # Slower than conventional disks on sequential I/O.

    Comment: Most flash drives will be slower on sequential I/O than a hard disk drive (to really understand this you should know there are different kinds of flash memory, which also affect flash performance). DDR RAM based systems, however, offer enormous performance benefits versus hard disk or flash based systems for sequential or random writes. DDR RAM systems can handle over 400,000 random write I/Os per second (the number is slightly higher for sequential access). We would be happy to share some Oracle ORION benchmark data to make the point. For redo logs on a heavily transactional system, the latency of the redo log storage can be the ultimate limit on the database.
    # Limited write cycles. Typical flash storage will typically wear out after 100,000-300,000 write cycles, while high endurance flash storage is often marketed with endurance of 1-5 million write cycles (many log files, file allocation tables, and other commonly used parts of the file system exceed this over the lifetime of a computer). Special file systems or firmware designs can mitigate this problem by spreading writes over the entire device, rather than rewriting files in place.

    Comment: This statement is mostly accurate but refers only to flash drives. DDR RAM based systems, such as those Don's books refer to, do not have this limitation.
    > Looking at many of your postings to Oracle Forums thus far, Don, it seems to me that you are less interested in providing actual practical help, and more interested in self-promotion - of your company and the Oracle books produced by it.
    > .. and that is not a very nice approach when people post real problems wanting real-world practical advice and suggestions.

    Comment: Contact us and we will see if we can prove to you that Don, and any number of other reputable Oracle consultants, recommend using DDR based solid state disk to solve redo log performance issues. In fact, if it looks like your system can see a serious performance increase, we would be happy to put you on our evaluation program so that you can try it out at no cost.

  • [Consideration] redo log groups and their members?

    hello Gurus,
    Well, the theory written in books may differ from the real situation; different companies, different configurations...
    How do we determine how many redo log groups there should be, and how many members are better for each group?
    What are the considerations?
    Regards,
    Nia..

    Hi,
    You can query v$log_history to check the log switch frequency.
    SELECT * FROM ( SELECT * FROM ( SELECT TO_CHAR(FIRST_TIME, 'DD/MM') AS "DAY"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '00', 1, 0)), '99') "00:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '01', 1, 0)), '99') "01:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '02', 1, 0)), '99') "02:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '03', 1, 0)), '99') "03:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '04', 1, 0)), '99') "04:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '05', 1, 0)), '99') "05:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '06', 1, 0)), '99') "06:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '07', 1, 0)), '99') "07:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '08', 1, 0)), '99') "08:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '09', 1, 0)), '99') "09:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '10', 1, 0)), '99') "10:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '11', 1, 0)), '99') "11:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '12', 1, 0)), '99') "12:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '13', 1, 0)), '99') "13:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '14', 1, 0)), '99') "14:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '15', 1, 0)), '99') "15:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '16', 1, 0)), '99') "16:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '17', 1, 0)), '99') "17:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '18', 1, 0)), '99') "18:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '19', 1, 0)), '99') "19:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '20', 1, 0)), '99') "20:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '21', 1, 0)), '99') "21:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '22', 1, 0)), '99') "22:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '23', 1, 0)), '99') "23:00"
    FROM V$LOG_HISTORY
    WHERE extract(year FROM FIRST_TIME) = extract(year FROM sysdate)
    GROUP BY TO_CHAR(FIRST_TIME, 'DD/MM')
    ) ORDER BY TO_DATE(extract(year FROM sysdate) || DAY, 'YYYY DD/MM') DESC
    ) WHERE ROWNUM < 8
    regards
    jaffy
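    To complement the switch-frequency history, it is also worth looking at the current group layout (a quick sketch using the standard v$log view):

        SELECT group#, thread#, members, bytes/1024/1024 AS size_mb, status
        FROM   v$log
        ORDER  BY group#;

    As a rough rule of thumb, start from at least three groups with two members each (on separate disks), and size the files so that peak load produces a few switches per hour rather than several per minute.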

  • Best way to move redo log from one disk group to another in ASM?

    Hi All,
    Our db is 10.2.0.3 RAC db. And database servers are window 2003 server.
    We need to move more than 50 redo logs (some regular and some standby), which are not redundant, from one disk group to another. Say we need to move from disk group 1 to disk group 2. Here are the options we are thinking about, but we are not sure which one is best from an ease and safety perspective.
    Thank you very much for your help in advance.
    Shirley
    Option 1:
    1)     shutdown immediate
    2)     copy log files from disk group 1 to disk group2 using RMAN (need to research on this)
    3)     startup mount
    4)     alter database rename file ….
    5)     alter database open
    6)     delete the redo files from disk group 1 in ASM (how?)
    Option 2:
    1)     create a set of redo log groups in disk group 2
    2)     drop the redo log groups in disk group 1 when they are inactive and have been archived
    3)     delete the redo files associated with those dropped groups from disk group 1 (how?) (according to the Oracle manual, when you drop a redo log group the operating system files are not deleted and you need to delete them manually) - see the sketch after Option 3
    Option 3:
    1)     create a set of redo members in disk group 2 for each redo log group in disk group 1
    2)     drop the redo log members in disk group 1
    3)     delete the redo files from disk group 1 associated with the dropped members
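    A minimal sketch of Option 2 (group numbers, thread, size, and diskgroup names here are assumptions, not taken from your system):

        -- create replacement groups in the target diskgroup
        ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 11 ('+DG2') SIZE 512M;
        ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 21 ('+DG2') SIZE 512M;
        -- cycle the logs until the old groups are INACTIVE and archived
        ALTER SYSTEM ARCHIVE LOG CURRENT;
        -- then drop the old groups
        ALTER DATABASE DROP LOGFILE GROUP 1;
        ALTER DATABASE DROP STANDBY LOGFILE GROUP 2;

    On the "(how?)" parts: if the files are Oracle-managed (OMF), ASM deletes them automatically when the group is dropped; otherwise they can be removed with asmcmd's rm command.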

    Absolutely not, they are not even remotely similar concepts.
    OMF: Oracle Managed Files. It is an RDBMS feature; no matter what your storage technology is, Oracle will take care of file naming and location. You only have to define the size of a file, and in the case of a tablespace on an OMF-configured database you only need to issue a command similar to this:
    CREATE TABLESPACE <TSName>; The OMF environment then creates an autoextensible datafile at the predefined location, with 100M by default as its initial size.
    On ASM, you only need to specify '+DGroupName' as the datafile or redo log file destination, and the file is then fully managed by ASM.
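    A minimal sketch of the parameters that drive this behaviour (diskgroup names are assumptions):

        ALTER SYSTEM SET db_create_file_dest         = '+DATA';
        ALTER SYSTEM SET db_create_online_log_dest_1 = '+DATA';
        ALTER SYSTEM SET db_create_online_log_dest_2 = '+FRA';
        CREATE TABLESPACE ts_demo;   -- datafile named and placed by Oracle/ASM

    With both online log destinations set, new redo log groups are automatically multiplexed across the two diskgroups.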
    EMC: http://www.emc.com - no further comments on it.
    ~ Madrid
    http://hrivera99.blogspot.com

  • Best practice - online redo logs and virtualization

    I have a 10.1.0.4 instance (soon to be migrated to 11gr2) running under Windows Server 2003.
    We use a non-standard disk distribution scheme -
    on the c: drive we have oracle_home as well as directories for control files and online redo logs.
    on the d: drive we have datafiles
    on the e: drive we have archive log files and another directory with online redo logs and another copy of control file
    my question is this:
    is it smart practice to have ANY online redo logs or control file on the same spindle with archive logs?
    Our setup works fairly well, but we are in the process of migrating the instance, first to an ESX server and SAN, and then to 11gR2 64-bit under Server 2008 64-bit. When we bring up our instance on the VM for testing and benchmark the ESX server (dual Xeon 3.4 GHz with 48 GB RAM running against a FalconStor NSS SAN with 15k SAS disks over iSCSI) against the production physical server (dual Xeon 2.0 GHz with 4 GB RAM using direct-attached SATA 7200 rpm drives), we find that some processes run faster on the ESX box and some run 40-100% slower. Running Statspack seems to identify lots of physical read waits as well as some waits for redo and control files.
    Is it possible that in addition to any overhead introduced by ESX and iSCSI (we are running Jumbo Frames over 1gb) we may have contention because the archive logs are on the same "spindle" (virtual) as the online redo and control files?
    We're looking at multiple avenues to bring the two servers in line from a performance standpoint - db configuration, memory allocation, a possible move to a 10 Gb network, a possible move to an SSD storage tray, possible application rewrites. But as the simplest low-hanging fruit: if these files should not be on the same spindle, that's an easy change to make and might eke out an improvement.
    Ideas?
    Mike

    Hi,
    "Old" Oracle standard is to use as many spindles as possible.
    It looks to me, you have only 1 disk with several partitions on it ??
    In my honest opinion you should anyway start by physically seperating OS from Oracle, so let the C: drive to the Windows OS
    Take another physical seperate D: drive to install you application.
    Use yet another set of physical drives, preferably in RAID10 setup, for your database and redo logs
    And finally yet another disk for the archive logs.
    We have recently configured a Windows 2008 server with an 11G Db, which pretty much follows the above setup.
    All non RAID10 disks are RAID1 ( mirror ) and we even have some SSD's for hot tables and redo-logs.
    The machine, or must I say the database, operates like a high speed train, very, very fast.
    Ofcourse keep in mind the number of cores ( not only for licensing ) and the amount of memory.
    Try to prevent the system from swapping, because that is a performance killer!
    Edit: And even if you put a virtual layer in between, try to seperate the virtual disks as much as possible over physical disks
    Success!
    FJFranken

  • Corruption in redo log

    Hi,
    We are using Oracle 7.3 and we had a system crash. After I booted the system and tried to open the database, I got a message that one of the redo logs (the current one) was corrupted, and hence the database cannot be opened. We are in noarchivelog mode, and since we are using RAID 5 we haven't multiplexed the files. I cannot drop the log (as I cannot open the database, I cannot switch the logfile, then drop and rebuild it). Is there any method to drop the log and rebuild it? (The final option is to restore the whole system through "UFSRESTORE", since the total storage of all HDDs is only 16 GB, and then import the schema.)
    thanks
    Arun

    Citizen_2 wrote:
    How can I force the archiving of redo log group 2?

    How could you archive a log group when you have lost all members of that group?
    Have you checked out this documentation: Recovering After the Loss of Online Redo Log Files (in the Backup and Recovery Advanced User's Guide)?
    More specifically, for a group whose status is CURRENT (the log the database is currently writing to), the guide says: attempt to clear the log; if that is impossible, you must restore a backup and perform incomplete recovery up to the most recent available redo log.
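    For reference, the clear attempt looks like this (group number taken from the quoted question; since clearing throws away that redo, take a full backup immediately afterwards):

        ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 2;

    If the group is still needed for crash recovery, the command fails and you are in the restore-and-recover scenario.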
    HTH!

  • Hoping for a quick response : EXP and Archived REDO log files

    I apologize in advance if this question has been asked and answered 100 times. I admit I didn't search; I don't have time. I'm leaving on vacation tomorrow, and I need to know if I'm correct about something to do with backup/restore.
    We have 10gR2 running a single instance on a single server. The application vendor has "embedded" Oracle with their application. The vendor's backup is a batch file using EXP, thus:
    exp system/xpwdxx@db full=y file=D:\Orant\admin\db\EXP\db_full.dmp log=D:\Orant\admin\db\EXP\db_full.txt direct=y compress=y
    This command is executed nightly at midnight. The files are then backed up by our nightly backup to offsite storage media.
    The database is running in autoarchive mode. The problem is, the archived redo files filled the drive they were being stored on, and it is the drive the database is on. I used OS commands to move 136 GB of archived redo logs onto other storage media to free the drive.
    My question: since the EXP runs at midnight, when there is likely NO activity, do I need to run in autoarchive mode? From what I have read, you cannot even apply archived redo log files to this type of backup strategy (IMP). Is that true? We are OK with losing changes since our last EXP. I have read a lot about restoring consistent vs. inconsistent, and just need to know: if my disk fails and I have to start with a clean install of Oracle and nothing else, can I IMP this EXP and get back up and running as of the last EXP? Or do I need the autoarchived redo log files back to July 2009 (136 GB of them)?
    Hoping for a quick response
    Best Regards, and thanks in advance
    Bruce Davis

    Bruce Davis wrote:
    Amardeep Sidhu
    Thank you for your quick reply. I am reading in the other responses that since I am using EXP without consistent=y, I might not even have a backup. The application vendor said that with this dmp file they can restore us to the most recent backup. I don't really care for this strategy as it is untested. I asked them to verify that they could restore us and they said they tested the dmp file and it was OK.
    Thank you for taking the time to reply.
    Best Regards
    Bruce

    The dump file is probably OK in the sense that it is not corrupted and can be used in an imp operation. That doesn't mean the data in it is transactionally consistent. And to use it at all, you have to have a database up and running. If the database is physically corrupted, you'll have to rebuild a new database from scratch before you can even think about using your dmp file.
    Vendors never understand databases. I once had a vendor tell me that Oracle's performance would be intolerable if there were more than 5 concurrent connections. Well, maybe in HIS product ..... Discussions terminated quickly after he made that statement.
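    If the vendor's batch file can be changed, the usual fix for the consistency concern is simply to add consistent=y to the command quoted above (a sketch; it makes exp read everything as of a single SCN, at the cost of extra undo usage while it runs, and if your exp version rejects it in combination with direct=y, drop direct=y):

        exp system/xpwdxx@db full=y consistent=y direct=y compress=y file=D:\Orant\admin\db\EXP\db_full.dmp log=D:\Orant\admin\db\EXP\db_full.txt

    Either way, an export only ever restores to the moment it was taken; archived redo logs cannot be applied on top of an imp.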

  • Fundamental questions on redo logs and rollbacks

    Hi all,
    Some basic questions, I really want to understand it very clearly.
    Suppose we have updated a few records in a table. We know that the blocks to be updated will be fetched into the buffer cache, updated with the new values, and eventually committed. The questions I have are:
    1) What exact information goes to the redo log? Is it a copy of the block before the change and a copy of the block after the change?
    2) What exactly goes to the rollback segment? Is it a copy of the block before the change (for an update), just the rowid for an inserted row, and a copy of the block for a deleted row?
    3) Whatever we do, is it the whole block that goes to redo or rollback? That is, if there are 10 rows in the block and we update one of them, does the whole block still go to redo or rollback?
    4) If we roll back, what goes where? Does anything go to redo when we roll back?
    Please explain.
    Thanks.

    Redo stores changes made in the database, and undo/rollback stores the reverse of those changes. Data blocks may be changed prior to a commit, and the changes are recorded in both locations.
    So, when a database is recovered, redo is applied to the backup datafiles, rolling every change forward, and then undo is applied to reverse any uncommitted transactions.
    Undo/rollback can also be used simply to roll back a transaction in an active instance. Redo is only applied during recovery (crash recovery or media recovery).
    I don't know if this is tracked via the storage of block images, or if it just stores the change itself.
    -cf
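    One detail worth adding: redo and undo both record change vectors (descriptions of the changed bytes), not whole block images, so updating one row in a 10-row block does not log the entire block. And yes, a rollback itself generates redo, because applying undo changes blocks, and those changes are redo-protected. A small way to watch this from your own session (a sketch; v$mystat and v$statname are standard views):

        -- run before and after an UPDATE ... ROLLBACK; and compare the deltas
        SELECT n.name, s.value
        FROM   v$mystat s JOIN v$statname n ON n.statistic# = s.statistic#
        WHERE  n.name IN ('redo size', 'undo change vector size');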

  • How to change redo log size in oracle 10g

    Hi Experts,
    Can anybody confirm how to change redo log size in oracle 10g?
    Amit

    Dear Amit,
    You can enlarge the existing online redo log files by adding new groups with files of a different size (origlog$/mirrlog$) and then carefully dropping the old groups once their associated files are inactive.
    Please refer to SAP Note 309526 (Enlarging redo log files) for the procedure.
    Steps to perform:
    STEP-1. Analyze the existing situation and prepare an action plan.
    A. You have to ensure that no more than one log switch per minute occurs during peak times.
    It may also be necessary to increase the size of the online redo logs until they are large enough.
    Too many log switches lead to too many checkpoints, which in turn lead to a high writing load in the I/O subsystem.
    Use ST04 -> Additional Functions -> Display GV$-Views. There you can select:
    GV$LOG_HISTORY -> for determining your existing log switch frequency
    GV$LOG -> lists the status (INACTIVE/CURRENT/ACTIVE), size, and sequence no. of the existing online redo log files
    GV$LOGFILE -> lists the existing online redo log files with their storage paths
    Document the existing online redo log configuration before enlarging the redo log files.
    This will be helpful if something goes wrong while performing the activity.
    B. Based on the above analysis, plan your new redo log groups and their members with the new optimal size.
    e.g.
    Group No.    Redo log file locations "/oracle/<SID>/"    Size
                 /origlogA           /mirrlogA
    15           log_g15m1.dbf       log_g15m2.dbf           100 MB
    17           log_g17m1.dbf       log_g17m2.dbf           100 MB
                 /origlogB           /mirrlogB
    16           log_g16m1.dbf       log_g16m2.dbf           100 MB
    18           log_g18m1.dbf       log_g18m2.dbf           100 MB
    Continue to next.....
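    To make step B concrete, a sketch of the corresponding commands (group numbers, paths, and sizes follow the example table above; <SID> is a placeholder):

        ALTER DATABASE ADD LOGFILE GROUP 15
          ('/oracle/<SID>/origlogA/log_g15m1.dbf',
           '/oracle/<SID>/mirrlogA/log_g15m2.dbf') SIZE 100M;
        -- ... likewise for groups 16, 17, and 18 ...
        ALTER SYSTEM SWITCH LOGFILE;          -- repeat until an old group goes INACTIVE
        ALTER DATABASE DROP LOGFILE GROUP 1;  -- then drop it and delete its files at OS level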

  • Redo log file thread

    Dear All,
    I am using Oracle 10.2.0.4.0 on RHEL 4.
    In my current setup we had a two-node RAC (tims1, tims2), db_name tims.
    One of the nodes failed (hard disk). The failed node's information has been removed from the OCR (all the cluster components: ONS, VIP, listener, instance).
    My question is: after removing all the cluster components, are the redo log files associated with the failed instance removed as well?
    Redo logfile Before Node Delete:
    SQL> select member from v$logfile;
    MEMBER
    /database/tims/redo02.log
    /database/tims/redo01.log
    /database/tims/redo03.log
    /database/tims/redo04.log
    Redo Logfile After Node Delete
    SQL> select member from v$logfile;
    MEMBER
    /database/tims/redo02.log
    /database/tims/redo01.log
    /database/tims/redo03.log
    /database/tims/redo04.log
    This means the redo logfiles have not been removed.
    Please help me: which redo threads do I have to remove?
    SQL> select group#,sequence#,thread# from v$log;
        GROUP#  SEQUENCE#    THREAD#
             1      28868          1
             2      28867          1
             3      29327          2
             4      29328          2
    Thanks &Regards
    Monoj Das

    monoj wrote:
    Dear All
    SQL> select group#,sequence#,thread# from v$log;
        GROUP#  SEQUENCE#    THREAD#
             1      28868          1
             2      28867          1
             3      29327          2
             4      29328          2
    According to my query above, thread# for groups 1 and 2 shows as 1, but that thread has been removed from the cluster. Is there any use for those redo logs?
    Thanks and Regards
    Monoj Das
    Please leave your theories aside and read the replies given in the thread. The logs are going to stay as they are (and should) on the shared storage. Why are you so bent on removing these log files anyway? Are you not willing to add the other instance back? A RAC db with just one instance is really not of much use.
    Aman....
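    For completeness: if the second instance really were being retired for good, the usual sequence would be to disable the thread and then drop its groups (a sketch; group numbers from the v$log output above):

        ALTER DATABASE DISABLE THREAD 2;
        ALTER DATABASE DROP LOGFILE GROUP 3;
        ALTER DATABASE DROP LOGFILE GROUP 4;

    But as noted above, if the plan is to re-add the instance, leave the thread and its logs alone.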

  • Where are BLOB Files stored when using redo log files.

    I am using archive log mode for my backups. I was wondering whether BLOB data gets stored in the redo log files or is archived somewhere else.
    Rob.

    BLOBs are just columns of tables and are stored in the table's tablespace by default, in their own LOB segments. Changes to BLOB columns are also written to the redo log, more or less like any other column.
    See a short example of default LOB storage in Re: CLOB Datatype [About Space allocation]
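    One nuance worth knowing: whether LOB changes are fully redo-logged depends on the LOB storage options. A sketch (table and column names are hypothetical):

        CREATE TABLE docs (
          id   NUMBER PRIMARY KEY,
          body BLOB
        )
        LOB (body) STORE AS (
          NOCACHE LOGGING   -- LOGGING: LOB changes go to redo, and so to the archives
        );

    With NOLOGGING instead, out-of-line LOB data largely bypasses redo, and those changes cannot be recovered from archived logs.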

  • Redo log best practice for performance - ASM 11g

    Hi All,
    I am new to the ASM world... can you please give your valuable suggestions on the topic below?
    What is the best practice for online redo log files? I read somewhere that Oracle recommends having only two disk groups: one for all the DB files, control files, and online redo log files, and another disk group for recovery files, like multiplexed redo files, archive log files, etc.
    Will there be any performance improvement from making a separate diskgroup for online redo logs (separate from datafiles, control files, etc.)?
    I am looking for an Oracle document on best practice for redo logs (performance) on ASM.
    Please share your valuable views on this.
    Regards.

    ASM is only a filesystem.
    What really counts for I/O performance is the storage design, i.e. RAID level used, array sharing, hard disk RPM, and so on.
    ASM itself is only a layer that reads and writes ASM disks, which means that if your storage design is OK, ASM will be OK. (Of course there are best practices for ASM, but storage must come first.)
    E.g., is there any performance improvement in making a separate diskgroup?
    It depends... and that leads to another question: are the LUNs on the same array?
    If yes, the performance will end up at the same point, no matter whether you have one diskgroup or many.
    Comment added:
    Comparing ASM to Filesystem in benchmarks (Doc ID 1153664.1)
    Your main concern should be how many IOPS, and how much latency and throughput, the storage can give you. Based on those values you will know whether your redo logs will be fine.
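    For reference, the two-diskgroup layout the original poster mentions is usually wired up through the OMF/FRA parameters (a sketch; the +DATA/+FRA names and the size are assumptions):

        ALTER SYSTEM SET db_create_file_dest        = '+DATA' SCOPE=BOTH;
        ALTER SYSTEM SET db_recovery_file_dest_size = 200G    SCOPE=BOTH;
        ALTER SYSTEM SET db_recovery_file_dest      = '+FRA'  SCOPE=BOTH;

    Note the size parameter must be set before the destination.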

  • Multiplexing redo logs and control files to a separate diskgroup

    General question this one...
    I've been using ASM for a few years now and have always installed a new system with 3 diskgroups
    +DATA - for datafiles, control files, redo logs
    +FRA - for archive logs, flash recovery area, RMAN backups
    Those I guess are the standards, but I've always created an extra (very small) diskgroup, called +ONLINE where I keep multiplexed copies of the redo logs and control files.
    My reasoning behind this is that if there are any issues with the +DATA diskgroup, the redo logs and control files can still be accessed.
    In the olden days (all of 5 years ago!), on local storage, this was important, but is it still important now? With all the striping and mirroring going on (both at the ASM and RAID levels), am I just being overly paranoid? Does this additional +ONLINE diskgroup actually hamper performance (with dual-write overheads that are not necessary)?
    Thoughts?

    Some of the decision will probably depend on your specific environment's data activity, volume, and throughput.
    Something to remember is that redo logs are sequential-write, which benefits from lower RAID overhead (RAID-10: 2 physical writes per write, vs RAID-5: 4 physical I/Os per write). RAID-10 is often not cost-effective for the data portion of a database. If your database is OLTP with a high volume of random reads/writes, you're potentially hurting redo throughput by creating contention on the disks shared by data and redo. Again, that depends entirely on what you're seeing in terms of wait events. A low-volume database would probably not experience any noticeable performance degradation.
    In my environment, I have RAID-5 and RAID-10 available, and since the RAID-10 requirement from a capacity perspective for redo is very low, it makes sense to create 2 diskgroups for online redo, separate from DATA, and separate from each other. This way, we don't need to be concerned with DATA transactions impacting REDO performance, and vice versa, and we still maintain redo redundancy.
    In my opinion, you can't be too paranoid. :)
    Good luck!
    K
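    If you do keep the +ONLINE diskgroup, adding the multiplexed members is straightforward (a sketch; group numbers assumed):

        ALTER DATABASE ADD LOGFILE MEMBER '+ONLINE' TO GROUP 1;
        ALTER DATABASE ADD LOGFILE MEMBER '+ONLINE' TO GROUP 2;
        -- control files are multiplexed via the control_files parameter,
        -- together with an RMAN restore of the controlfile into the new diskgroup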

  • Cold backup with online redo logs

    I am working on 10g on AIX with a single instance.
    This is just a general db backup & restore question, but there is something I am confused about.
    I am going to perform a cold backup of my ARCHIVELOG database.
    There is no mystery about why I perform a cold backup: it is a testing database, which can tolerate data loss and downtime during the backup.
    I read some guides. They all mentioned to backup all the datafiles and control files.
    During the restoration, I have to copy all the backed up datafiles and control files to the default location.
    Then Startup mount;
    The last step before open the database is recover database until cancel;
    As I understand it, I have to run RECOVER DATABASE because the online redo logs were not backed up, so we have to recover in order to reset the redo logs.
    My question: would I be able to skip RECOVER DATABASE and start the database directly if I had backed up the online redo logs and copied them to the default location during the restoration?
    However, I have read many documents which say that backing up the online redo logs is not suggested. Is that only the case for hot backups? Do you think that for my case a cold backup including the online redo logs is recommended?
    Thanks all

    jgarry wrote:
    Edit: And never forget, those test databases are some developer's production.

    Absolutely true, in my experience. Losing the work of a paid developer is just as bad as losing the work of a production system, and may even be worse because it may not be possible to re-enter the missing data into the system.
    I think a cold backup is only suitable on special occasions, for instance to relocate/copy the database to different storage media, or if the database doesn't change, or if losing changes is absolutely irrelevant. Otherwise, put the database into archivelog mode and do a hot backup. After that you will also have alternative options which can make restore and recovery of the database very easy and efficient, like flashback database, etc., but it will take substantial additional disk space.
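    On the original question itself: yes. A cold backup taken after a clean SHUTDOWN IMMEDIATE is consistent, and if the online redo logs are copied along with the datafiles and control files, the restored set opens without recovery. A sketch:

        SHUTDOWN IMMEDIATE
        -- OS copy: datafiles + control files + online redo logs
        STARTUP
        -- to restore later: copy everything back to the original paths, then
        STARTUP    -- opens cleanly; no RECOVER DATABASE and no RESETLOGS needed

    The usual advice against backing up online logs applies to hot backups, where a restored online log could overwrite current redo that you still need.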

  • Multiplexing Redo Log Files question

    If you are running RAC on ASM on a RAID system, is this required? We are using an HP AutoRAID, which mirrors at the block level, and the documentation about multiplexing redo log files says you do it to protect against media failure. The AutoRAID we are using gives us multiple levels of redundancy against media failure, so I was wondering whether multiplexing would add more overhead than is needed. Thanks for your input.

    ASM is quite complex and I'm not going to outline all the advantages of or reasons for ASM, but under ASM you can drop and add devices to maintain your capacity needs online without losing data, which you cannot do using RAID, which typically requires a re-initialization, for example, regardless of redundancy. Please see the documentation. ASM, like pretty much everything Oracle, will add complexity, and you will have to check your requirements. ASM is, however, pretty much the standard. If you use external RAID, make sure your storage is not using RAID 5 or 0. Regarding logical errors: you could, for example, overwrite or delete a file by mistake, in which case file redundancy does not protect you. If you are looking for reasons or ways not to use ASM, I'm sure you will find them, but what's the point?
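    The online add/drop mentioned above looks like this (a sketch; diskgroup and disk names are hypothetical):

        ALTER DISKGROUP data ADD DISK '/dev/rhdisk5';
        ALTER DISKGROUP data DROP DISK DATA_0003;
        -- ASM rebalances extents across the remaining disks in the background,
        -- while the database stays open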
