Multiplex Redo Logs and Control File

I want to set up an existing Oracle Express 10g instance to multiplex the redo log files and the control file.
The instance uses Oracle-Managed Files and the Flash Recovery Area.
With these options in use, what are the steps required to set up multiplexing?
I tried setting the DB_CREATE_ONLINE_LOG_DEST_1 and DB_CREATE_ONLINE_LOG_DEST_2 parameters, but this doesn't appear to have worked (I even bounced the db instance).
BTW, DB_CREATE_FILE_DEST is set to null and DB_RECOVERY_FILE_DEST is set to the flash recovery area.
Any help is much appreciated.
Regards, Sheila
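A likely reason the parameter change appeared to do nothing: the DB_CREATE_ONLINE_LOG_DEST_n parameters only govern redo log groups created after they are set; existing groups are never retrofitted. A minimal sketch of the usual approach, assuming hypothetical destination paths:

    -- Point OMF at two destinations for newly created online logs:
    ALTER SYSTEM SET db_create_online_log_dest_1 = '/u01/app/oracle/oradata/XE' SCOPE=BOTH;
    ALTER SYSTEM SET db_create_online_log_dest_2 = '/u02/app/oracle/oradata/XE' SCOPE=BOTH;

    -- Add new groups; OMF creates and names one member in each destination:
    ALTER DATABASE ADD LOGFILE SIZE 50M;
    ALTER DATABASE ADD LOGFILE SIZE 50M;

    -- Switch until an old single-member group is INACTIVE, then drop it:
    ALTER SYSTEM SWITCH LOGFILE;
    ALTER DATABASE DROP LOGFILE GROUP 1;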

Thanks for this. My instance originally had two log groups, so I've added a new member to each group in the same flash recovery area directory, but assigned a name myself. Is this why, when I query v$logfile, the is_recovery_dest_file column is set to NO? Is it OK to assign a name and directory, and if not, how do you add a new member and let Oracle-Managed Files name it?
Also, how can I check that the multiplexing is working (i.e. that the database is writing to both sets of files)?
Thanks again.
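On the IS_RECOVERY_DEST_FILE question: that column shows YES only for files Oracle itself created in the recovery area as Oracle-managed files; a member you named explicitly is treated as user-managed even if it sits inside the FRA directory, which would explain the NO. As far as I know, ADD LOGFILE MEMBER always requires an explicit filename, so to get OMF-named members you generally add whole new groups (as sketched above) rather than individual members. To check that both members are being written, a quick query (a sketch; run a couple of log switches first):

    SELECT l.group#, l.status AS group_status,
           f.member, f.status AS member_status, f.is_recovery_dest_file
      FROM v$log l JOIN v$logfile f ON f.group# = l.group#
     ORDER BY l.group#, f.member;

    -- A healthy, in-use member shows a blank (NULL) member_status;
    -- INVALID means LGWR has not yet written to it or cannot reach it.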

Similar Messages

  • Multiplexing redo logs and control files to a separate diskgroup

    General question this one...
    I've been using ASM for a few years now and have always installed a new system with 3 diskgroups
    +DATA - for datafiles, control files, redo logs
+FRA - for archive logs, flash recovery files, RMAN backups
    Those I guess are the standards, but I've always created an extra (very small) diskgroup, called +ONLINE where I keep multiplexed copies of the redo logs and control files.
    My reasoning behind this is that if there are any issues with the +DATA diskgroup, the redo logs and control files can still be accessed.
In the olden days (all those 5 years ago!), on local storage, this was important, but is it still important now? With all the striping and mirroring going on (both at ASM and RAID level), am I just being overly paranoid? Does this additional +ONLINE diskgroup actually hamper performance (with dual-write overheads that are not necessary)?
    Thoughts?

    Some of the decision will probably depend on your specific environment's data activity, volume, and throughput.
    Something to remember is that redo logs are sequential write, which benefit from a lower RAID overhead (RAID-10, 2 writes per IOP vs RAID-5, 4 writes per IOP). RAID-10 is often not cost-effective for the data portion of a database. If your database is OLTP with a high volume of random reads/writes, you're potentially hurting redo throughput by creating contention on the disks sharing data and redo. Again, that depends entirely on what you're seeing in terms of wait events. A low volume database would probably not experience any noticeable degraded performance.
    In my environment, I have RAID-5 and RAID-10 available, and since the RAID-10 requirement from a capacity perspective for redo is very low, it makes sense to create 2 diskgroups for online redo, separate from DATA, and separate from each other. This way, we don't need to be concerned with DATA transactions impacting REDO performance, and vice versa, and we still maintain redo redundancy.
    In my opinion, you can't be too paranoid. :)
    Good luck!
    K
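For reference, a sketch of the layout K describes, with hypothetical diskgroup names and sizes; each group keeps one member in each small redo diskgroup, so losing either diskgroup still leaves a complete set of members:

    ALTER DATABASE ADD LOGFILE GROUP 11 ('+REDO1','+REDO2') SIZE 1G;
    ALTER DATABASE ADD LOGFILE GROUP 12 ('+REDO1','+REDO2') SIZE 1G;
    ALTER DATABASE ADD LOGFILE GROUP 13 ('+REDO1','+REDO2') SIZE 1G;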

  • Location of Redo log and control files?

    Dear all,
I am checking the location of the redo log and control files, but found the redo log files (like log02a.dbf ...) in the same directory as the data files. However, I couldn't find any control files in the data file directories.
Where could the control files be located?
    Amy

select name
  from v$controlfile;

or

show parameter control_files

Khurram

  • Multiplexing Online redo logs, archive logs, and control files.

Currently I am only multiplexing my control files and online redo logs; my archive logs are only going to the FRA and then being backed up to tape.
We have to replace the disks that hold the FRA data, and HP says there is a chance we will have to rebuild the FRA.
As my archive logs are going to the FRA now, can I multiplex them to another disk group? And if all of the control files, online redo logs, and archive logs are multiplexed to another disk group, when ASM dismounts the FRA disk group due to an insufficient number of disks, will the database remain open and online?
If so, then I will just need to rebuild the ASM volumes and the FRA disk group and bring it to the mount state, correct?
    Thanks!

You can put your online redo logs and archive logs anywhere you want by making use of the init params db_create_online_log_dest_n and log_archive_dest_n. You will have to create new redo log groups in the new location and drop the ones in the FRA. The archive logs will simply land wherever you designate with the log_archive_dest_n parameters. Moving the control files off the FRA is a little trickier, because you will need to restore your control file to a non-FRA destination, then shut down your instance, edit the control_files parameter to reflect the changes, and restart.
I think you will be happier if you move everything off the FRA diskgroup before dismounting it, rather than expecting the db to automagically recover from the loss of files on the FRA.
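A rough sketch of those steps, with hypothetical diskgroup names; note that the control file move requires a bounce:

    -- Archive logs: add or repoint a destination outside the FRA:
    ALTER SYSTEM SET log_archive_dest_2 = 'LOCATION=+ARCH' SCOPE=BOTH;

    -- Online redo: create replacement groups outside the FRA, then drop
    -- the FRA-resident groups once they are INACTIVE:
    ALTER DATABASE ADD LOGFILE GROUP 21 ('+DATA','+ARCH') SIZE 512M;

    -- Control files: repoint the parameter, shut down, restore copies to
    -- the new locations (e.g. RMAN RESTORE CONTROLFILE TO '...'), restart:
    ALTER SYSTEM SET control_files = '+DATA/ctrl01.ctl','+ARCH/ctrl02.ctl' SCOPE=SPFILE;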

  • Multiplexing Redo Log and maximum protection mode.

Assume that the instance crashes while writing to the redo logs. As a result, the members of the active redo group are not synchronized; some of them hold more data. How will Oracle handle this when the instance starts? And there can be a case where, at startup, some of the members that held more redo before the crash are lost.
Now assume that we have a standby database in maximum protection mode. After LGWR has written to the local redo logs, and before writing to the standby redo logs, the primary instance crashes. In this case the standby site has lost the last transaction.
Is that correct? Thanks.

Assume that the instance crashes while writing to the redo logs. As a result, the members of the active redo group are not synchronized; some of them hold more data. How will Oracle handle this when the instance starts? And there can be a case where, at startup, some of the members that held more redo before the crash are lost.
Members of a particular group are written concurrently by LGWR, so all members of a log group will hold the same data. If any member of a particular group is lost or unreachable, Oracle will read from an available member during instance recovery.
    Multiplexing Redo Log Files
    http://docs.oracle.com/cd/B19306_01/server.102/b14231/onlineredo.htm#i1006249
To answer your second question: in this mode no transaction commits on the primary unless its redo is also written to at least one standby database; otherwise the primary will go down.
    Check below
    Maximum Protection
    http://docs.oracle.com/cd/B28359_01/server.111/b28294/protection.htm#CHDHFHJI
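A quick way to confirm the mode in effect and the standby redo logs it depends on (a sketch; v$standby_log is queried on the standby):

    SELECT protection_mode, protection_level FROM v$database;
    SELECT group#, thread#, status FROM v$standby_log;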

  • Best practice - online redo logs and virtualization

    I have a 10.1.0.4 instance (soon to be migrated to 11gr2) running under Windows Server 2003.
    We use a non-standard disk distribution scheme -
    on the c: drive we have oracle_home as well as directories for control files and online redo logs.
    on the d: drive we have datafiles
on the e: drive we have archive log files, another directory with online redo logs, and another copy of the control file
    my question is this:
is it smart practice to have ANY online redo logs or control files on the same spindle as archive logs?
Our setup works fairly well, but we are in the process of migrating the instance, first to an ESX server and SAN, and then to 11gR2 64-bit under Server 2008 64-bit. When we bring up our instance on the VM for testing and benchmark the ESX server (dual Xeon 3.4GHz with 48GB RAM running against a FalconStor NSS SAN with 15k SAS disks over iSCSI) against the production physical server (dual Xeon 2.0GHz with 4GB RAM using direct-attached 7200rpm SATA drives), we find that some processes run faster on the ESX box and some run 40-100% slower. Running Statspack seems to identify lots of physical read waits as well as some waits for redo and control files.
Is it possible that, in addition to any overhead introduced by ESX and iSCSI (we are running jumbo frames over 1Gb), we have contention because the archive logs are on the same "spindle" (virtual) as the online redo and control files?
We're looking at multiple avenues to bring the two servers in line from a performance standpoint: db configuration, memory allocation, a possible move to a 10Gb network, a possible move to an SSD storage tray, possible application rewrites. But as the simplest low-hanging fruit, if these files should not be on the same spindle, that's an easy change to make and could possibly eke out an improvement.
    Ideas?
    Mike

Hi,
The "old" Oracle standard is to use as many spindles as possible.
It looks to me like you have only one disk with several partitions on it?
In my honest opinion you should in any case start by physically separating the OS from Oracle, so leave the C: drive to the Windows OS.
Take another physically separate D: drive to install your application.
Use yet another set of physical drives, preferably in a RAID10 setup, for your database and redo logs.
And finally yet another disk for the archive logs.
We have recently configured a Windows 2008 server with an 11g DB, which pretty much follows the above setup.
All non-RAID10 disks are RAID1 (mirror), and we even have some SSDs for hot tables and redo logs.
The machine, or should I say the database, operates like a high-speed train: very, very fast.
Of course, keep in mind the number of cores (not only for licensing) and the amount of memory.
Try to prevent the system from swapping, because that is a performance killer!
Edit: And even if you put a virtual layer in between, try to separate the virtual disks as much as possible over physical disks.
Success!
FJFranken

  • Multiplexing redo log files

I am using a 9i R2 database.
I just want to clear up some doubts.
I have 3 redo log groups with one member each.
Now, for multiplexing the redo log files, I want to add one member to each group, so that each group will have 2 members, with the new member in a different path.
My current log members are:
    /oracle/files/a1.log (group 1)
    /oracle/files/b1.log (group 2)
    /oracle/files/c1.log (group 3)
    I want to add
    /oracle/sysfiles/a2.log (group 1)
    /oracle/sysfiles/b2.log (group 2)
    /oracle/sysfiles/c2.log (group 3)
A -- Is there any need to take the database into a particular mode (nomount, mount), or can I do it while the database is open?
I will issue:
ALTER DATABASE ADD LOGFILE MEMBER '/oracle/sysfiles/a2.log' TO GROUP 1;
first on an inactive group, then:
alter system switch logfile; (to make group 2 inactive... and then group 3)
B -- Must all members of a redo log group be the same size? If one member (a1.log) is 100 MB, is there no need to specify a size while adding the new member (a2.log)?
Can someone validate and correct my steps, or advise me on this?

Hi,
All the answers are in the Oracle documentation.
Members of the same multiplexed redo log group must be the same size (that is, all members within one group), while members of different groups can have different sizes. If group sizes differ, the result is irregular checkpoints, since you cannot guarantee that checkpoints (log switches) will occur at regular intervals. Just as you follow the rules of traffic, follow the instructions in the Oracle documentation. :-)
- Pavan Kumar N
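Putting the validated steps together, a sketch using the paths from the question:

    -- Members can be added while the database is OPEN; no special mode needed.
    ALTER DATABASE ADD LOGFILE MEMBER '/oracle/sysfiles/a2.log' TO GROUP 1;
    ALTER DATABASE ADD LOGFILE MEMBER '/oracle/sysfiles/b2.log' TO GROUP 2;
    ALTER DATABASE ADD LOGFILE MEMBER '/oracle/sysfiles/c2.log' TO GROUP 3;

    -- No SIZE clause here: a new member inherits the group's size (100 MB).
    -- New members show INVALID in v$logfile until LGWR first writes to them:
    ALTER SYSTEM SWITCH LOGFILE;
    SELECT group#, member, status FROM v$logfile ORDER BY group#;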

  • Multiplexing Redo Log Files question

If you are running RAC on ASM on a RAID system, is this required? We are using an HP autoraid which mirrors at the block level, and the documentation about multiplexing redo log files says that you do it to protect against media failure. The autoraid we are using gives us multiple levels of redundancy against media failure, so I was wondering if multiplexing would be adding more overhead than is needed. Thanks for your input.

ASM is quite complex and I'm not going to outline all the advantages of or reasons for ASM, but under ASM you can drop and add devices to maintain your capacity needs online without losing data, which you cannot do using RAID, which requires a re-initialize, for example, regardless of redundancy. Please see the documentation. ASM, like pretty much everything Oracle, adds complexity, and you will have to check your requirements; ASM is, however, pretty much the standard. If you use external RAID, make sure your storage is not using RAID 5 or 0. Regarding logical errors: you could, for example, overwrite or delete a file by mistake, in which case file redundancy does not protect you. If you are looking for reasons or ways not to use ASM, I'm sure you will find them, but what's the point?

  • The file structure online redo log, archived redo log and standby redo log

I have read some Oracle documentation on file structure and settings in a Data Guard environment, but I still have some doubts. What is the best file structure or settings in Oracle 10.2.0.4 on UNIX for a Data Guard environment with 4 primary databases and 4 physical standby databases? Based on the Oracle documents, there are 3 kinds of redo logs: online redo logs, archived redo logs, and standby redo logs. The basic settings are:
1. Online redo logs --- These must exist on the primary database and on a logical standby database, but they are not strictly necessary on a physical standby because a physical standby is not open and does not generate redo. However, if you don't set up online redo logs on the physical standby, how can the standby operate after a failover switches it to the primary role? In my standby databases, online redo logs have been set up.
2. Archived redo logs --- Obviously the primary database and the logical and physical standby databases all need these set up. The primary uses them to archive log files and ship them to the standby; the standby uses them to receive redo and apply it to the database.
3. Standby redo logs --- The documentation says a standby redo log is similar to an online redo log, except that a standby redo log is used to store redo data received from another database. A standby redo log is required if you want to implement the maximum protection or maximum availability levels of data protection, real-time apply, or cascaded destinations. So it seems a standby redo log should only be set up on the standby database, not on the primary. Is my understanding correct? Reviewing the current redo log settings in my environment, I found that standby redo log directories and files have been set up on both the primary and the standby databases. I would like to get more information and education from the experts: what is the best setting or structure on the primary and standby databases?

    FZheng:
Thanks for your input. It is clear that we need all 3 types of redo logs on both databases; that answers my question.
But I have another one. The Oracle documentation says that if you have configured a standby redo log on one or more standby databases in the configuration, you should ensure the size of the current standby redo log file on each standby database exactly matches the size of the current online redo log file on the primary database. It says that at log switch time, if there are no available standby redo log files that match the size of the new current online redo log file on the primary database, the primary database will shut down.
My current Data Guard environment is set up as follows: on the primary DB, the online redo log group size is 512M and the standby redo log group size is 500M; on the standby DB, the online redo log group size is 500M and the standby redo log group size is 750M.
This was set up by someone I don't know. Is this setting OK, or should I change the standby redo logs on the standby DB to 512M to exactly match the redo log size on the primary?
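If the sizes do need to be aligned, a sketch of recreating the standby redo logs at 512M to match the primary's online logs (group numbers are hypothetical; the usual rule of thumb is one more SRL group than ORL groups per thread):

    ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 10 SIZE 512M;
    ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 11 SIZE 512M;
    ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 12 SIZE 512M;
    ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 13 SIZE 512M;
    -- Mismatched old SRL groups can then be dropped while UNASSIGNED:
    ALTER DATABASE DROP STANDBY LOGFILE GROUP 4;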

  • Log_file_name_convert , db_recovery_area and multiplexed redo logs

Hi, I am confused here and need help.
I have multiplexed redo log members on both of the ASM diskgroups DATA and FRA.
    My log_file_name_convert='+DATA', '+DATA'
    and db_recovery_area ='+FRA'
    db_create_online_log_dest_1=+DAT
Are the above parameters right?

log_file_name_convert is simply a string-replacing function; it behaves the same way as DB_FILE_NAME_CONVERT.
All occurrences of the <first> string are replaced by the <second>, the <third> by the <fourth>, and so on. Yes, the strings should come in even numbers (2, 4, 6, 8, ...); there should be a pair for each source and destination diskgroup.
So if you had:
'DATA','REPLACE','FRA','REPFRA'
all occurrences of DATA would be replaced with REPLACE. But this would also mean ORADATAGRAM would be replaced with ORAREPLACEGRAM. If any ORLs/SRLs are created under DATA on the source, then with LOG_FILE_NAME_CONVERT set they will be created under the corresponding replacement string.
Can you please point out where the mistake is in my previous post?
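For illustration, a sketch with hypothetical standby diskgroup names; note that converting '+DATA' to itself, as in the post above, is effectively a no-op:

    -- Static parameter, so set it in the spfile and restart:
    ALTER SYSTEM SET log_file_name_convert = '+DATA','+DATA_STBY','+FRA','+FRA_STBY' SCOPE=SPFILE;

    -- A primary ORL named '+DATA/prod/onlinelog/group_1.261.777' would then
    -- map to '+DATA_STBY/prod/onlinelog/group_1.261.777' on the standby.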

  • Very high log file sequential read and control file sequential read waits?

I have a 10.2.0.4 database with 5 Streams capture processes running to replicate data to another database. However, I am seeing very high
log file sequential read and control file sequential read waits from the capture processes. This is causing slowness, as the database is spending so much time on these wait events. From the AWR report:
Elapsed: 20.12 (mins)
DB Time: 67.04 (mins)
and from the top 5 wait events:
Event                         Waits    Time(s)  Avg Wait(ms)  % Total Call Time  Wait Class
CPU time                               1,712                  42.6
log file sequential read      99,909   683      7             17.0               System I/O
log file sync                 49,702   426      9             10.6               Commit
control file sequential read  262,625  384      1             9.6                System I/O
db file sequential read       41,528   378      9             9.4                User I/O
Oracle Support hasn't been much help, other than wasting 10 of my days telling me to try this and try that.
Do you have Streams running in your environment? Are you experiencing these waits? Have you done anything to resolve them?
Thanks

    Welcome to the forums.
There is insufficient information in what you have posted to know whether your analysis of the situation is correct, or to know anything about your Streams environment.
    We don't know what you are replicating. Not size, not volume, not type of capture, not rules, etc.
    We don't know the distance over which it is being replicated ... 10 ft. or 10 light years.
    We don't have any AWR or ASH data to look at.
etc. etc. etc. If this is what you provided Oracle Support, it is no wonder they were unable to help you.
Diagnosing this problem, if one exists, requires someone on-site, or a very substantial body of data which you have not provided. The first step is to fill in the answers to all of the obvious first-level questions. Then we will likely come back with a second level of questioning.
But when you do ... do not post here. Your questions are not "Database General"; they are specific to Streams, and there is a Streams forum specifically for them.
    Thank you.

  • Why multiplex redo log groups?

    Hello,
Why should we multiplex redo log groups if we have only a file system which is already mirrored? Has anyone had an incident where, with only one redo log member per group placed on a mirrored file system, the member still got corrupted, and afterwards felt it better to have multiple members even if they reside on the same (mirrored) file system?
    Thanks
    Salman

EdStevens wrote:
The mirror won't protect you from an SA who deletes '/u01/oradata/redo01.log' because he is running out of space on /u01 and figures it's safe to delete a log file. Or similar types of errors. The redo and control files are simply too critical to put all your eggs in one basket when planning their protection.
Ansiktet wrote:
:) That's why you should not use the Oracle default .log extension for redo logs; .dbf or .dbl can be used instead. However, has this actually happened to anyone for real, an SA deleting Oracle files? If he isn't stupid, he should know that Oracle resides on the /u01 partition (for example) and should not delete files there without consulting the DBA.
EdStevens wrote:
Where do you think I came up with the example? That's why I use the older (pre-10g) default of .rdo for redo logs. And how about an SA (or maybe the kind of "fresher" we often see here) taking a look at a "log" file with Notepad? "Should" is the operative word there; there is no accounting for corporate cultures and attitudes.
But in the end, my example was to illustrate that not all problems with redo and control files are mitigated by disk mirroring.
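A sketch of the rename that advice implies, using hypothetical paths; the rename must be done while the database is mounted (not open), and the file must be moved at the OS level first:

    -- SHUTDOWN IMMEDIATE; STARTUP MOUNT;
    -- then at the OS prompt: mv /u01/oradata/redo01.log /u01/oradata/redo01.rdo
    ALTER DATABASE RENAME FILE '/u01/oradata/redo01.log' TO '/u01/oradata/redo01.rdo';
    -- ALTER DATABASE OPEN;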

  • How to create parameter and control file like filename + date

    Hello there
I am trying to create a parameter file and a control file copy with the following commands.
In SQL*Plus:
create pfile='/u03/oradata/WEBDB/backup/initWEBDB.ora' from spfile;
In RMAN:
copy current controlfile to '/u03/oradata/WEBDB/backup/cf_longterm.cpy';
How can I put the date at the end of the filenames, like
initWEBDB8jan06.ora and cf_longterm8jan06.cpy?
    Thanks in advance
    Lionel
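For the control file, RMAN can stamp the date into the name itself via FORMAT variables, a reasonable substitute for the COPY command above (a sketch; %T expands to YYYYMMDD):

    BACKUP AS COPY CURRENT CONTROLFILE
      FORMAT '/u03/oradata/WEBDB/backup/cf_longterm_%T.cpy';

CREATE PFILE has no such variables, so the usual workaround is to build the filename in a shell (or batch) wrapper with the OS date command before invoking SQL*Plus.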


  • How to create redlog and control file at ASM in linux RAC

    Hi Experts,
I will be maintaining an Oracle 10g database on ASM as RAC in Linux Red Hat.
I am new, with some questions.
Normally speaking, the Oracle recommendation for an Oracle 10g database is:
create 3 copies of the control file
create at least 2 redo logs with mirrored files in the system.
However, I checked and found:
the redo log files are in the FRA location +FLSdisk1, with no mirror
the control file is in the FRA location +FLSDISK1/
the database files are at +DATA1/
There are no mirrors for the redo logs.
Going into EM, I also could not find a place to enter a file name.
We use ASM to hold the database to support RAC.
Do I need to create the redo log files as:
ALTER DATABASE ADD LOGFILE GROUP 1 ('+FLSdisk1/sale/onlinelog/REDO01.LOG','+FLSdisk1/sale/onlinelog/REDO01_mirror.LOG') SIZE 1000M reuse;
My boss told me that ASM is reliable.
Do I need to create more directories to arrange the redo log and control files in ASM for a RAC system?
Is the FRA a good place to store the control file and redo log files?
Thanks
JIM

ASM is reliable, but a smart DBA is very careful. If ASM is doing the mirroring, this is like RAID doing the mirroring: what happens if you accidentally delete one copy? The other one disappears instantly. Not a good idea.
With respect to redo logs you need a minimum of three groups, two members, and one thread per instance. So a 2-node cluster should, at a minimum, have 12 physical files.
Not mirroring the redo logs, assuming multiple members, is not as critical.
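A sketch of what that means for Jim's setup, reusing his diskgroup names (group numbers, sizes, and control file names are hypothetical); one member per diskgroup in each group, one thread per instance:

    ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 5 ('+DATA1','+FLSdisk1') SIZE 512M;
    ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 6 ('+DATA1','+FLSdisk1') SIZE 512M;
    -- Control files are multiplexed by listing one per diskgroup:
    ALTER SYSTEM SET control_files = '+DATA1/sale/control01.ctl','+FLSdisk1/sale/control02.ctl' SCOPE=SPFILE;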

  • Redo logs and Flash recovery area

    Hi,
Is it good practice to place a copy of the (multiplexed) online redo logs in the flash recovery area? Wouldn't it be better to place a copy of the archived logs in the flash recovery area?

user492400 wrote:
Is it good practice to place a copy of the (multiplexed) online redo logs in the flash recovery area? Wouldn't it be better to place a copy of the archived logs in the flash recovery area?
It's not only the archive logs that should be placed in the FRA. The FRA is supposed to contain one copy of the archive logs, and the remaining nine destinations are given to you for multiplexing them. The idea of multiplexing the redo logs and placing them anywhere (not just in the FRA itself) is simply so that you never get into the situation of losing all the redo log files and having to recreate them, losing the data inside. So at least one copy of the log files should be there; where you put the others is up to you.
HTH
Aman....
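For completeness, a sketch of the OMF behaviour this relies on, with hypothetical diskgroup names: when DB_CREATE_FILE_DEST and DB_RECOVERY_FILE_DEST are both set and no DB_CREATE_ONLINE_LOG_DEST_n is set, each new log group gets one member in each location:

    ALTER SYSTEM SET db_create_file_dest = '+DATA' SCOPE=BOTH;
    ALTER SYSTEM SET db_recovery_file_dest = '+FRA' SCOPE=BOTH;
    ALTER DATABASE ADD LOGFILE SIZE 512M;

    SELECT member, is_recovery_dest_file FROM v$logfile;
    -- the FRA member reports YES; its twin in +DATA reports NO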
