Control file multiplexed RAID question

2 node RAC. OEL 5. 11.2.0.2
Is there any cause for concern over design issues when placing multiplexed control files on disks with different RAID types?
I have FRA, REDO, VOTE and DATA disk groups available to me. DATA is RAID 5; the rest are RAID 10. Are there any tales from the field of write-speed issues when multiplexing the files across a RAID 5 and a RAID 10?
Edited by: 961469 on Dec 11, 2012 6:10 AM
edited to include versions and env

Is there any cause for concern over design issues when placing multiplexed controlfiles on disks with different RAID types?
IMHO, I believe not. Of course, this depends on the I/O characteristics of each array.
As a best practice, never mix files stored on an array with heavy I/O with an array that has low I/O.
The control file is fine on both RAID 5 and RAID 10; no worry.
Read note:
I/O Tuning with Different RAID Configurations [ID 30286.1]
Swap space can be used on RAID devices without affecting Oracle.  
====================================================================================
RAID  Type of RAID        Control    Database     Redo Log     Archive Log
                          File       File         File         File
====================================================================================
0     Striping            Avoid      OK           Avoid        Avoid
1     Shadowing           OK         OK           Recommended  Recommended
0+1   Striping +          OK         Recommended  OK           Avoid
      Shadowing
3     Striping with       OK         Avoid        Avoid        Avoid
      Static Parity
5     Striping with       OK         Avoid        Avoid        Avoid
      Rotating Parity
------------------------------------------------------------------------------------
In this case the main concern is:
Something that should help can instead disrupt and cause downtime.
Unable To Open Database Due To Diskgroup Used To Multiplex The RedoLogs & Controlfiles & Archivelogs Got Corrupted. [ID 1382372.1]
Regards,
Levi Pereira

Similar Messages

  • How to multiplex control files with RAID

    I have installed an 8i DB server on an HP ProLiant server machine. RAID 2 is configured on the server machine with a Windows OS; two hard disks are mirrored.
    It is often recommended to multiplex control files and redo log files on multiple hard disks to protect against disk failure.
    How can I do this, as I have 3 logical drives but they are all on the same physical device?
    Multiplexing control files on different logical drives is the same as keeping them in the same default folder.
    Please help make things clear.

    1. RAID is all about availability and is not a substitute for duplexing of control files or online redo logs.
    2. It is better to have multiple physical drive (sets) on which to place copies of the control file and online redo logs. That way, any failure of the RAID or file system will still leave you with copies of these vital files.
    3. If you only have logical drives, it is best to place copies on various logical volumes. This way, you are (at least) protected from file system failure, corruption, or accidental delete.
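    To make point 3 concrete, here is a minimal sketch of moving to two control file copies on separate logical drives (the paths are hypothetical). On releases with an SPFILE (9i onward) the parameter can be staged with ALTER SYSTEM; on 8i, edit the CONTROL_FILES line in the init.ora directly instead:

    ```sql
    -- Stage the new parameter value; CONTROL_FILES is static, so SCOPE=SPFILE:
    ALTER SYSTEM SET control_files =
      'D:\oradata\orcl\control01.ctl',
      'E:\oradata\orcl\control02.ctl'
      SCOPE = SPFILE;

    SHUTDOWN IMMEDIATE
    -- Copy the existing control file to both new locations at the OS level,
    -- then restart so the instance opens with both copies:
    STARTUP
    ```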

  • Regarding the RESETLOGS and NORESETLOGS options while creating a control file

    Hi,
    I don't understand the need for the RESETLOGS option while creating a control file for a db in NOARCHIVELOG mode. I assume that RESETLOGS clears all the redo log contents.
    While taking a cold backup what I did was:
    1. Shutdown instance
    2. Copy all the files
    3. Startup
    Now I tried recovering the same database on a new machine (with a different path, btw), because of which I had to create a new control file. My question is: while restoring the database, do I need to create the control file with the NORESETLOGS or the RESETLOGS option?
    When I tried using the NORESETLOGS (NOARCHIVELOG) option I was able to recover the instance without any hassles,
    i.e.
    1. STARTUP NOMOUNT
    2. CREATE NEW CONTROL FILE USING NORESETLOGS (NOARCHIVELOG)
    3. RECOVER DATABASE
    4. ALTER DATABASE OPEN;
    While the same thing with the RESETLOGS (NOARCHIVELOG) option:
    1. STARTUP NOMOUNT
    2. CREATE NEW CONTROL FILE USING RESETLOGS (NOARCHIVELOG)
    3. RECOVER DATABASE USING BACKUP CONTROLFILE
    This step asked me for some archive logs, which were never generated since the db is in NOARCHIVELOG mode.
    I wonder why we require the RESETLOGS option, since a normal shutdown performed before the cold backup would have ensured that there is no redo information left in the redo logs.
    Please let me know if I am thinking about this the wrong way.
    Regards and Thanx in Advance,
    Raj

    If you had a db running in NOARCHIVELOG mode and had to clone the db and rename it, the CREATE CONTROLFILE statement
    CREATE CONTROLFILE REUSE DATABASE <db_name> needs to be changed to CREATE CONTROLFILE SET DATABASE <db_name>, in which case the db can only be opened with RESETLOGS. Hope this answers your question.
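    As a sketch (database names, paths and sizes here are hypothetical), the two variants of the statement look like this; SET DATABASE renames the database and therefore forces a RESETLOGS open:

    ```sql
    -- Keeping the existing name: NORESETLOGS is possible after a clean shutdown.
    CREATE CONTROLFILE REUSE DATABASE "ORCL" NORESETLOGS NOARCHIVELOG
      LOGFILE GROUP 1 '/u01/oradata/orcl/redo01.log' SIZE 50M,
              GROUP 2 '/u01/oradata/orcl/redo02.log' SIZE 50M
      DATAFILE '/u01/oradata/orcl/system01.dbf';

    -- Renaming the database (e.g. a clone): SET requires RESETLOGS.
    CREATE CONTROLFILE SET DATABASE "CLONE" RESETLOGS NOARCHIVELOG
      LOGFILE GROUP 1 '/u02/oradata/clone/redo01.log' SIZE 50M,
              GROUP 2 '/u02/oradata/clone/redo02.log' SIZE 50M
      DATAFILE '/u02/oradata/clone/system01.dbf';
    ```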

  • Control file parallel write

    Hi,
    From my statspack report, one of the top wait events is control file parallel write:

    Event                          Wait Time    Percentage    Avg. wait
    control file parallel write    11000        3.61%         11.68

    How can I tune the control file parallel write event?
    Right now for this instance I have the control file multiplexed onto
    3 different drives: L, M, N.
    Thanks

    If you are doing excessive log switches, you could be generating too many checkpoints. See how many log switches you have done in v$loghist. It is also possible to reduce the number of writes to the log files by adding the /*+ APPEND */ hint and the NOLOGGING attribute to insert statements, reducing the amount of redo generated.
    I have also combined update and delete statements to generate fewer writes to the log files.
    You can recreate the log files larger with:
    alter database drop logfile group 3;
    ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 3 '/oracle/oradata/CURACAO9/redo03.log' SIZE 500M REUSE;
    ALTER DATABASE ADD LOGFILE MEMBER '/oracle/oradata/CURACAO9/redo03b.log' REUSE TO GROUP 3;
    alter system switch logfile;
    alter system checkpoint;
    alter database drop logfile group 2; -- and do group 2 etc.

  • What is Control file ?

    Hi Gurus,
    Please explain me that what is Control file and what was the purpose of it in SAP Oracle DataBase.
    Thanks.

    Hi Kalyan
    <b>What Is a Control File?</b>
    Every Oracle Database has a control file, which is a small binary file that records the physical structure of the database. The control file includes:
    The database name
    Names and locations of associated datafiles and redo log files
    The timestamp of the database creation
    The current log sequence number
    Checkpoint information
    The control file must be available for writing by the Oracle Database server whenever the database is open. Without the control file, the database cannot be mounted and recovery is difficult.
    The control file of an Oracle Database is created at the same time as the database. By default, at least one copy of the control file is created during database creation. On some operating systems the default is to create multiple copies. You should create two or more copies of the control file during database creation. You can also create control files later, if you lose control files or want to change particular settings in the control files.
    <b>Guidelines for Control Files</b>
    This describes guidelines you can use to manage the control files for a database, and contains the following topics:
    Provide Filenames for the Control Files
    Multiplex Control Files on Different Disks
    Back Up Control Files
    Manage the Size of Control Files
    Role of Control File will remain the same even when you use Oracle with SAP. Just there will be some more entries in the file.
    Refer the link below as well:
    http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14231/control.htm#i1006143
    Hope this clears your doubt.
    Regards
      Sumit Jain
    [Reward with points if useful]
    Message was edited by:
            Sumit Jain

  • Multiplexing redo logs and control files to a separate diskgroup

    General question this one...
    I've been using ASM for a few years now and have always installed a new system with 3 diskgroups
    +DATA - for datafiles, control files, redo logs
    +FRA - for archive logs, flash recovery, RMAN backups
    Those I guess are the standards, but I've always created an extra (very small) diskgroup, called +ONLINE where I keep multiplexed copies of the redo logs and control files.
    My reasoning behind this is that if there are any issues with the +DATA diskgroup, the redo logs and control files can still be accessed.
    In the olden days (all those 5 years ago!), on local storage, this was important, but is it still important now? With all the striping and mirroring going on (both at ASM and RAID level), am I just being overly paranoid? Does this additional +ONLINE diskgroup actually hamper performance? (with dual-write overheads that are not necessary)
    Thoughts?

    Some of the decision will probably depend on your specific environment's data activity, volume, and throughput.
    Something to remember is that redo logs are sequential write, which benefit from a lower RAID overhead (RAID-10, 2 writes per IOP vs RAID-5, 4 writes per IOP). RAID-10 is often not cost-effective for the data portion of a database. If your database is OLTP with a high volume of random reads/writes, you're potentially hurting redo throughput by creating contention on the disks sharing data and redo. Again, that depends entirely on what you're seeing in terms of wait events. A low volume database would probably not experience any noticeable degraded performance.
    In my environment, I have RAID-5 and RAID-10 available, and since the RAID-10 requirement from a capacity perspective for redo is very low, it makes sense to create 2 diskgroups for online redo, separate from DATA, and separate from each other. This way, we don't need to be concerned with DATA transactions impacting REDO performance, and vice versa, and we still maintain redo redundancy.
    In my opinion, you can't be too paranoid. :)
    Good luck!
    K

  • How to multiplex control files in an ASM instance?

    Hi Folks
    I have an Oracle DB 10g R2 with ASM, in which the only control file sits in one ASM disk group.
    My question is: how do I multiplex the control file and redo logs?
    Very thanks,
    Wilson

    Hi, you can use the OMF feature of Oracle to move the control file into ASM.
    1. Create a backup control file: ALTER DATABASE BACKUP CONTROLFILE TO '/home/oracle/control.bac';
    2. Specify the locations:
    - ALTER SYSTEM SET db_create_online_log_dest_1='+ASMGROUP1'; -- RMAN will restore one mirror of the controlfile in each of these locations!
    - ALTER SYSTEM SET db_create_online_log_dest_2='+ASMGROUP1';
    - ALTER SYSTEM SET db_recovery_file_dest='+ASMGROUP1'; -- this will make RMAN restore the controlfile in the flash recovery area!
    3. Create a pfile from the spfile:
    - CREATE PFILE FROM SPFILE;
    4. Archive the current redo log:
    - ALTER SYSTEM ARCHIVE LOG ALL;
    and SHUTDOWN IMMEDIATE
    5. Edit the pfile by removing the entry for CONTROL_FILES!
    - and after this: CREATE SPFILE FROM PFILE;
    6. Use RMAN to restore the control file into ASM:
    RMAN> STARTUP NOMOUNT;
    RMAN> RESTORE CONTROLFILE FROM '/home/oracle/control.bac';
    RMAN> ALTER DATABASE MOUNT;
    RMAN> RECOVER DATABASE USING BACKUP CONTROLFILE;
    RMAN> ALTER DATABASE OPEN RESETLOGS;
    Hope this helps,
    Lutz
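    To confirm where the control file copies ended up after a procedure like this, a simple check (a sketch; the output will reflect your own diskgroups) is:

    ```sql
    -- Every control file copy the instance is currently using:
    SELECT name FROM v$controlfile;
    ```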

  • Flat File and Control Files Questions

    Greetings,
    I've worked with Oracle for about 10 years, but have little experience with using sql-loader.
    I have data from Visual FoxPro tables going into Oracle 10g via a Perl script. I am having issues and therefore have a couple questions.
    1) If the data from my foxpro table is basically everything in the table as in 'Select * from table-name', does the control file have to list every column that is in the FoxPro table?
    -- I have a case where a FoxPro table has 15 columns but we are trying to upload only 10 columns. The script is dynamic. It selects * from each FoxPro table and creates a Flat File for each on the fly. Then sql-loader uploads the data to Oracle. The Flat File for this one table has data from all 15 columns, but the Control File only lists 10 of the columns to be uploaded into Oracle.
    2) Do the column names in the control file 'have' to match both the column names in the FoxPro table and the Oracle table, or only the Oracle table?

    YankeeFan wrote:
    Greetings,
    I've worked with Oracle for about 10 years, but have little experience with using sql-loader.
    I have data from Visual FoxPro tables going into Oracle 10g via a Perl script. I am having issues and therefore have a couple questions.
    1) If the data from my foxpro table is basically everything in the table as in 'Select * from table-name', does the control file have to list every column that is in the FoxPro table?
    -- I have a case where a FoxPro table has 15 columns but we are trying to upload only 10 columns. The script is dynamic. It selects * from each FoxPro table and creates a Flat File for each on the fly. Then sql-loader uploads the data to Oracle. The Flat File for this one table has data from all 15 columns, but the Control File only lists 10 of the columns to be uploaded into Oracle.
    Yes - use the FILLER spec to ignore columns you do not care about - http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/ldr_field_list.htm#sthref946
    2) Do the column names in the control file 'have' to match both the column names in the FoxPro table and the Oracle table, or only the Oracle table?
    Only the Oracle table.
    HTH
    Srini
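    For illustration, here is a minimal SQL*Loader control-file sketch of the FILLER approach (the table and column names are hypothetical): fields marked FILLER are parsed from the flat file but never loaded into Oracle.

    ```
    LOAD DATA
    INFILE 'export.dat'
    INTO TABLE target_table
    FIELDS TERMINATED BY ','
    (
      cust_id,
      cust_name,
      -- present in the flat file, but not wanted in Oracle:
      legacy_code   FILLER,
      internal_flag FILLER,
      created_date  DATE "YYYY-MM-DD"
    )
    ```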

  • SQL*Loader control file question.

    I have a text file (t.txt) which contains record types AAA and AAB, used to load fixed-width data into a table (t): AAA_NO, AAA_TYPE, AAB_DESC.
    Control file (control_t) contents:
    load data infile '/path/t.txt'
    insert into table t
    when (1:3) = 'AAA'
    (AAA_NO position (4:14) CHAR,
    AAA_TYPE position (15:27) CHAR)
    Works perfectly, but I need to add another set of data from the same t.txt file with record type AAB. I attempted to add this into the same control file:
    into table t
    when (1:3) = 'AAB'
    (AAB_DESC position (28:128) CHAR)
    It fails, naturally. How would I include the additional record-type data in the same table after AAA_NO and AAA_TYPE have already been inserted? Do I need to include AAA_NO in the second insert (AAB_DESC)? Should I create another temp table to store only AAA_NO and AAB_DESC and then insert that data into table t after the loader is done? Or can this be completed in the same control file?

    Thanks again for the assistance; this is a tough one to fix. I am new to SQL*Loader.
    The temp table creation is causing some serious errors, so I am back to trying to get SQL*Loader to do the job. The apt.txt file contains records where each row starts with either 'APT' or 'ATT'. Here are the details of what I am trying to do.
    Control file:
    load data
    infile '/path/apt.txt'
    insert
    into table t_hld
    when (1:3) = 'APT'
    (apt_no position (4:14) CHAR,
    apt_type position (15:27) CHAR,
    apt_id position (28:31) CHAR)
    The next section is the problem, where I am inserting apt_sked into the same table t_hld as above, because it has a different record qualifier: it's ATT, not APT.
    insert
    into table t_hld
    when (1:3) = 'ATT'
    (apt_no position (4:14) CHAR,
    apt_sked position (16:126) CHAR)
    The positions of the fixed-width data are working; I can insert the apt_sked data into another temp table instead of t_hld and it works. It's just when I attempt to place the ATT apt_sked data into the t_hld table after the APT data has been loaded into it... I tried APPEND instead of INSERT, but that does not work.
    The APT_NOs of the data are all the same; it is the qualifier for the records (a primary-key attribute, though I do not have it established since it is a temp-table concept).
    I am stuck trying to get the data into the t_hld table. Everything works when I do not try to put the ATT apt_sked data into t_hld, and placing the ATT apt_sked data into a different temp table works perfectly, but I can't find a way to create an update to t_hld from this temp table without errors. So I am trying to go back to SQL*Loader to get this done. Any thoughts or questions?
    Thanks a billion!
    Shawn
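    For reference, here is a sketch (untested, using the names from the question) of how two INTO TABLE clauses for the same table are usually combined; APPEND applies to the whole load and lets the second clause add rows to the already-populated table:

    ```
    LOAD DATA
    INFILE '/path/apt.txt'
    APPEND
    INTO TABLE t_hld
    WHEN (1:3) = 'APT'
    (apt_no   POSITION (4:14)  CHAR,
     apt_type POSITION (15:27) CHAR,
     apt_id   POSITION (28:31) CHAR)
    INTO TABLE t_hld
    WHEN (1:3) = 'ATT'
    (apt_no   POSITION (4:14)  CHAR,
     apt_sked POSITION (16:126) CHAR)
    ```

    Note that each input record still becomes its own row; combining the APT and ATT attributes into a single row per apt_no would require a post-load UPDATE or MERGE.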

  • A question about restoring from a cold backup (control file backup not clear)

    Hi,
    I had another question about restoring from a cold backup. My database is in NOARCHIVELOG mode, and after taking a consistent cold backup, all I need to do is restore the backup, right? The reason I ask is that when I back up my control file to trace, I see statements like this:
    -- Commands to re-create incarnation table
    -- Below log names MUST be changed to existing filenames on
    -- disk. Any one log file from each branch can be used to
    -- re-create incarnation records.
    -- ALTER DATABASE REGISTER LOGFILE '/uo1/app1/arch1_1_647102958.dbf';
    -- Recovery is required if any of the datafiles are restored backups,
    -- or if the last shutdown was not normal or immediate.
    RECOVER DATABASE
    -- Database can now be opened normally.
    ALTER DATABASE OPEN;
    My database is in NOARCHIVELOG mode now, so I don't know why these statements (to register the logfile) are there in the backup of the control file. So when I restore the cold backup of this database, it will still work, correct? (I have only the CRD files in the cold backup; no archive log files.)
    thanks
    Nirav

    Thanks for your inputs! It is most useful to me.
    Regards
    Nirav

  • Multiplex Redo Logs and Control File

    I want to set up an existing Oracle Express 10g instance to multiplex the redo log files and the control file.
    Instance is using Oracle-Managed Files and the Flash Recovery Area.
    With these options being used what are the steps required to setup multiplexing?
    I tried setting the DB_CREATE_ONLINE_LOG_DEST_1 and DB_CREATE_ONLINE_LOG_DEST_2 parameters but this doesn't appear to have worked (I even bounced the db instance).
    BTW, the DB_CREATE_FILE_DEST is set to null and the DB_RECOVERY_FILE_DEST is set to the flash recovery area.
    Any help is much appreciated.
    Regards, Sheila

    Thanks for this. My instance originally had two log groups, so I've added a new member to each group in the same flash recovery area directory, but have assigned a name. Is this why, when I query v$logfile, the is_recovery_dest_file column is set to NO? Is it OK to assign a name & directory, and if not, how do you add a new member and allow Oracle-Managed Files to name them?
    Also, how can I check that the multiplexing is working (i.e. that the database is writing to both sets of files)?
    Thanks again.
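    A quick sanity check (a sketch; your group numbers and paths will differ) that every group has both members and that all of them are being written:

    ```sql
    -- Each group should list two members; a NULL status means healthy,
    -- while STALE or INVALID indicates a member that is not being written.
    SELECT group#, member, status, is_recovery_dest_file
    FROM   v$logfile
    ORDER  BY group#;
    ```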

  • Multiplexing Online redo logs, archive logs, and control files.

    Currently I am only multiplexing my control files and online redo logs, My archive logs are only going to the FRA and then being backed up to tape.
    We have to replace disks that hold the FRA data. HP says there is a chance we will have to rebuild the FRA.
    As my archive logs are going to the FRA now, can I multiplex them to another disk group? And if all of the control files, online redo logs and archive logs are multiplexed to another disk group, when ASM dismounts the FRA disk group due to an insufficient number of disks, will the database remain open and online?
    If so, then I will just need to rebuild the ASM volumes and the FRA disk group and bring it to the MOUNT state, correct?
    Thanks!

    You can place your online redo logs and archive logs anywhere you want by making use of the init params db_create_online_log_dest_n and log_archive_dest_n. You will have to create new redo log groups in the new location and drop the ones in the FRA. The archive logs will simply land wherever you designate with the log_archive_dest_n parameters. Moving the control files off the FRA is a little trickier, because you will need to restore your control file to a non-FRA destination, then shut down your instance, edit the control_files parameter to reflect the changes, and restart.
    I think you will be happier if you move everything off the FRA diskgroup before dismounting it, and not expecting the db to automagically recover from the loss of files on the FRA.
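    A sketch of multiplexing the archive logs to a second diskgroup (the diskgroup name +ARCH2 is hypothetical):

    ```sql
    -- Keep the FRA as one archive destination...
    ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=USE_DB_RECOVERY_FILE_DEST';
    -- ...and add a second copy on another diskgroup:
    ALTER SYSTEM SET log_archive_dest_2 = 'LOCATION=+ARCH2';
    ```

    With both destinations set, the loss of the FRA diskgroup still leaves a complete set of archive logs on +ARCH2.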

  • Checkpointing - control file contents question

    Some clarification is needed if possible...
    When you commit a transaction:
    - commit scn is recorded in the itl of the data block and undo segment header
    - lgwr records the committed scn (for all data blocks involved) to the redo log
    Checkpoint Event
    - (3 seconds or possibly less passes by) CKPT wakes up and signals DBWn to write dirty (modified and committed)
    blocks to disk
    - CKPT records the scn of those blocks in the control file (data file and redo thread sections) and the data file
    header (task of checkpoint when a log switch occurs)
    - Checkpoint position in the Redo Log is forwarded
    Control file contents question:
    When LGWR writes the commit scn to the redo log, who writes the scn to the control file? LGWR or CKPT?
    Also, when is the redo thread scn written?
    Matt

    Matt,
    This is my understanding of the stuff. Feel free to correct me.
    Checkpoint SCN, as I mentioned in my last reply, is the marker of the point up to which the data has been checkpointed to the datafiles. This marker tells the control file, in the case of a crash, where to start recovery of the datafile and how far it has to go in the redo stream. This is only available in the datafile header and in the control file; it doesn't get recorded in the redo log file/stream.
    I mentioned the checkpoint queue in my reply too. Though I couldn't find any reference directly linking it to the checkpoint SCN, I believe my theory is, if not totally, at least partially correct. The incremental checkpoint is what decides how many redo blocks need to be applied to the datafile if it is closed without a proper checkpoint. So this part is maintained in the datafile header itself, in the form of the checkpoint SCN. When it does not match the control file checkpoint SCN, which is always higher, recovery is reported.
    I hope it's somewhat correct. Do let me know your views too.
    Cheers
    Aman....
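    The relationship described above can be checked directly from the two views that expose the checkpoint SCN (a sketch; the values are instance-specific):

    ```sql
    -- Checkpoint SCN per datafile as recorded in the control file:
    SELECT file#, checkpoint_change# FROM v$datafile;

    -- Checkpoint SCN as recorded in each datafile header:
    SELECT file#, checkpoint_change# FROM v$datafile_header;
    ```

    If a datafile header shows a lower checkpoint SCN than the corresponding control file entry, media recovery of that file is required.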

  • Question about control files.

    Hi.
    For example, I have 1 control file on storage A and 1 on storage B. Storage B went offline for some reason. Will the control file recover automatically when storage B is brought back online, or does it have to be repaired manually?
    Thanks.
    Edited by: Web on Mar 6, 2012 6:04 PM

    Web wrote:
    Hi.
    For example, I have 1 control file on storage A and 1 on storage B. Storage B placed offline for some reasons. Will control file recover automaticall when storage B be taken online or it has to be repaired manually?
    Thanks.
    Will control file recover automatically?
    No ... it can be recovered manually. Either you have to create another control file, or you have to move the control file location to other online storage (using multiplexing) ...
    --neeraj

  • Multiplexing control file

    How do I multiplex the control file in Oracle 10g?

    Hi,
    There are two ways to multiplex the control file.
    If you are using a PFILE:
    1. Set the parameter: CONTROL_FILES = <ORACLE_BASE>\ORADATA\ORCL\CONTROL1.CTL, <ORACLE_BASE>\ORADATA\ORCL\CONTROL2.CTL
    If you are using an SPFILE:
    2. Use ALTER SYSTEM to change the parameter value. CONTROL_FILES is a static parameter, so use SCOPE=SPFILE and restart the instance.
    For example: ALTER SYSTEM SET CONTROL_FILES='<ORACLE_BASE>\ORADATA\ORCL\CONTROL1.CTL', '<ORACLE_BASE>\ORADATA\ORCL\CONTROL2.CTL' SCOPE=SPFILE;
    Thanks.
