Location of Redo log and control files?

Dear all,
I am checking the location of the redo log and control files, but found that the redo log files (like log02a.dbf ...) are in the same directory as the data files. However, I couldn't find any control files in the data file directories.
What could be the location of control files?
Amy

select name
  from v$controlfile;
or
show parameter control_files
Khurram
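For the redo log file locations the question also asks about, the corresponding view is v$logfile; a minimal query:
select group#, member
  from v$logfile;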

Similar Messages

  • Multiplexing redo logs and control files to a separate diskgroup

    General question this one...
    I've been using ASM for a few years now and have always installed a new system with 3 diskgroups
    +DATA - for datafiles, control files, redo logs
    +FRA - for archive logs, flash recovery, RMAN backups
    Those I guess are the standards, but I've always created an extra (very small) diskgroup, called +ONLINE, where I keep multiplexed copies of the redo logs and control files.
    My reasoning behind this is that if there are any issues with the +DATA diskgroup, the redo logs and control files can still be accessed.
    In the olden days (all of 5 years ago!), on local storage, this was important, but is it still important now? With all the striping and mirroring going on (both at the ASM and RAID level), am I just being overly paranoid? Does this additional +ONLINE diskgroup actually hamper performance (with unnecessary dual-write overhead)?
    Thoughts?

    Some of the decision will probably depend on your specific environment's data activity, volume, and throughput.
    Something to remember is that redo logs are sequential writes, which benefit from a lower RAID write overhead (RAID-10: 2 writes per IOP vs. RAID-5: 4 writes per IOP). RAID-10 is often not cost-effective for the data portion of a database. If your database is OLTP with a high volume of random reads/writes, you're potentially hurting redo throughput by creating contention on the disks shared between data and redo. Again, that depends entirely on what you're seeing in terms of wait events. A low-volume database would probably not experience any noticeable performance degradation.
    In my environment, I have RAID-5 and RAID-10 available, and since the RAID-10 requirement from a capacity perspective for redo is very low, it makes sense to create 2 diskgroups for online redo, separate from DATA, and separate from each other. This way, we don't need to be concerned with DATA transactions impacting REDO performance, and vice versa, and we still maintain redo redundancy.
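    To picture the original poster's +ONLINE approach: the extra member can simply be listed in the second diskgroup when a group is added. A minimal sketch (the +REDO1/+REDO2 diskgroup names are placeholders, not from the post):
    ALTER DATABASE ADD LOGFILE GROUP 4 ('+REDO1', '+REDO2') SIZE 512M;
    ASM/OMF generates the file names inside each diskgroup; one member lands in each, so the group survives the loss of either diskgroup.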
    In my opinion, you can't be too paranoid. :)
    Good luck!
    K

  • Multiplex Redo Logs and Control File

    I want to set up an existing Oracle Express 10g instance to multiplex the redo log files and the control file.
    The instance is using Oracle-Managed Files and the Flash Recovery Area.
    With these options being used what are the steps required to setup multiplexing?
    I tried setting the DB_CREATE_ONLINE_LOG_DEST_1 and DB_CREATE_ONLINE_LOG_DEST_2 parameters but this doesn't appear to have worked (I even bounced the db instance).
    BTW, the DB_CREATE_FILE_DEST is set to null and the DB_RECOVERY_FILE_DEST is set to the flash recovery area.
    Any help is much appreciated.
    Regards, Sheila

    Thanks for this. My instance originally had two log groups, so I've added a new member to each group in the same flash recovery area directory, but I assigned the name myself. Is this why, when I query v$logfile, is_recovery_dest_file is set to NO? Is it OK to assign a name and directory, and if not, how do you add a new member and let Oracle-Managed Files name it?
    Also, how can I check that the multiplexing is working (i.e. that the database is writing to both sets of files)?
    Thanks again.
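    A quick way to check both points, as a sketch: list the members per group and see which ones Oracle treats as recovery-area (OMF) files, and list the control files:
    select group#, member, is_recovery_dest_file
      from v$logfile
     order by group#;
    select name from v$controlfile;
    Oracle writes every listed member of a group in parallel; a member it cannot write is flagged in the STATUS column of v$logfile.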

  • Multiplexing Online redo logs, archive logs, and control files.

    Currently I am only multiplexing my control files and online redo logs. My archive logs are only going to the FRA and then being backed up to tape.
    We have to replace disks that hold the FRA data. HP says there is a chance we will have to rebuild the FRA.
    As my archive logs are going to the FRA now, can I multiplex them to another disk group? And if all of the control files, online redo logs and archive logs are multiplexed to another disk group, will the database remain open and online when ASM dismounts the FRA disk group due to an insufficient number of disks?
    If so, then I will just need to rebuild the ASM volumes and the FRA disk group and bring it to the mount state, correct?
    Thanks!

    You can keep your online redo logs and archive logs anywhere you want by making use of the init params db_create_online_log_dest_n and log_archive_dest_n. You will have to create new redo log groups in the new location and drop the ones in the FRA. The archive logs will simply land wherever you designate with the log_archive_dest_n parameters. Moving the control files off the FRA is a little trickier, because you will need to restore your controlfile to a non-FRA destination and then shut down your instance, edit the control_files parameter to reflect the change, and restart.
    I think you will be happier if you move everything off the FRA diskgroup before dismounting it, and not expecting the db to automagically recover from the loss of files on the FRA.
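    A sketch of the redo-log part of that advice, assuming a new diskgroup named +REDO (the name is hypothetical):
    ALTER SYSTEM SET db_create_online_log_dest_1 = '+REDO' SCOPE=BOTH;
    ALTER DATABASE ADD LOGFILE SIZE 512M;   -- repeat for each new group you need
    ALTER DATABASE DROP LOGFILE GROUP 1;    -- drop the FRA-resident groups once they are inactive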

  • Best practice - online redo logs and virtualization

    I have a 10.1.0.4 instance (soon to be migrated to 11gr2) running under Windows Server 2003.
    We use a non-standard disk distribution scheme -
    on the c: drive we have oracle_home as well as directories for control files and online redo logs.
    on the d: drive we have datafiles
    on the e: drive we have archive log files, another directory with online redo logs, and another copy of the control file
    my question is this:
    is it smart practice to have ANY online redo logs or control files on the same spindle as the archive logs?
    Our setup works fairly well, but we are in the process of migrating the instance, first to an ESX server and SAN, and then to 11gR2 64-bit under Server 2008 64-bit. When we bring up our instance on the VM for testing and benchmark the ESX server (dual Xeon 3.4GHz with 48GB RAM running against a FalconStor NSS SAN with 15k SAS disks over iSCSI) against the production physical server (dual Xeon 2.0GHz with 4GB RAM using direct-attached SATA 7200rpm drives), we find that some processes run faster on the ESX box and some run 40-100% slower. Running Statspack seems to identify lots of physical read waits as well as some waits for redo and control files.
    Is it possible that, in addition to any overhead introduced by ESX and iSCSI (we are running Jumbo Frames over 1Gb), we have contention because the archive logs are on the same "spindle" (virtual) as the online redo and control files?
    We're looking at multiple avenues to bring the 2 servers in line from a performance standpoint - db configuration, memory allocation, a possible move to a 10Gb network, a possible move to an SSD storage tray, possible application rewrites. But as the simplest low-hanging-fruit idea: if these files should not be on the same spindle, that's an easy change to make that might eke out an improvement.
    Ideas?
    Mike

    Hi,
    "Old" Oracle standard is to use as many spindles as possible.
    It looks to me like you have only 1 disk with several partitions on it?
    In my honest opinion you should start by physically separating the OS from Oracle, so leave the C: drive to the Windows OS.
    Use another physically separate D: drive to install your application.
    Use yet another set of physical drives, preferably in a RAID-10 setup, for your database and redo logs.
    And finally yet another disk for the archive logs.
    We have recently configured a Windows 2008 server with an 11g DB, which pretty much follows the above setup.
    All non-RAID-10 disks are RAID-1 (mirror), and we even have some SSDs for hot tables and redo logs.
    The machine, or should I say the database, operates like a high-speed train: very, very fast.
    Of course, keep in mind the number of cores (not only for licensing) and the amount of memory.
    Try to prevent the system from swapping, because that is a performance killer!
    Edit: And even if you put a virtualization layer in between, try to spread the virtual disks as much as possible over physical disks.
    Success!
    FJFranken
    Edited by: fjfranken on 7-okt-2011 7:19
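    On Mike's specific question of whether the archive logs share a spindle with the redo and control files, it can help to list the current placements in one pass; a quick sketch:
    select 'ONLINE LOG'   as file_type, member      as location from v$logfile
    union all
    select 'CONTROLFILE',               name                    from v$controlfile
    union all
    select 'ARCHIVE DEST',              destination             from v$archive_dest
     where status = 'VALID';
    If several of these resolve to the same drive letter, that is the co-location the question is worried about.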

  • Where is the location of tablespace file and control file

    Hi, all
    Where is the location of the tablespace files and the control file? Thanks.

    For DataFiles, query DBA_DATA_FILES or V$DATAFILE
    For TempFiles, query DBA_TEMP_FILES or V$TEMPFILE
    For Online Redo Logs, query V$LOGFILE
    For Archived Redo Logs, query V$ARCHIVED_LOG
    For Controlfiles, query V$CONTROLFILE
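    For example, the datafile locations per tablespace come from the first of those views:
    select tablespace_name, file_name
      from dba_data_files
     order by tablespace_name;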
    Hemant K Chitale
    http://hemantoracledba.blogspot.com

  • The file structure: online redo log, archived redo log and standby redo log

    I have read some Oracle documentation on file structure and settings in a Data Guard environment, but I still have some doubts. What is the best file structure or set of settings in Oracle 10.2.0.4 on UNIX for a Data Guard environment with 4 primary databases and 4 physical standby databases? Based on the Oracle documents, there are 3 kinds of redo logs: online redo logs, archived redo logs and standby redo logs. The basic settings are:
    1. Online redo logs --- These must exist on the primary database and on a logical standby database, but they are not strictly necessary on a physical standby, because a physical standby is not open and doesn't generate redo. However, if online redo logs are not set up on the physical standby, how can the standby work without them after a failover, when it becomes the primary? In my standby databases, online redo logs have been set up.
    2. Archived redo logs --- Obviously the primary database and the logical and physical standby databases all need these. The primary uses them to archive its log files and ship them to the standby; the standby uses them to receive the redo and apply it to the database.
    3. Standby redo logs --- The documentation says a standby redo log is similar to an online redo log, except that a standby redo log is used to store redo data received from another database. A standby redo log is required if you want to implement the maximum protection and maximum availability levels of data protection, real-time apply, and cascaded destinations. So it seems that standby redo logs should only be set up on the standby database, not on the primary. Is my understanding correct? When I review the current redo log settings in my environment, I find that standby redo log directories and files have been set up on both the primary and the standby databases. I would like to get more information and education from experts. What is the best setting or structure for the primary and standby databases?

    FZheng:
    Thanks for your input. It is clear that we need the 3 types of redo logs on both databases. You answered my question.
    But I have another one. The Oracle document says: if you have configured a standby redo log on one or more standby databases in the configuration, ensure the size of the current standby redo log file on each standby database exactly matches the size of the current online redo log file on the primary database. It also says: at log switch time, if there are no available standby redo log files that match the size of the new current online redo log file on the primary database, the primary database will shut down.
    My current Data Guard environment setting is: on the primary DB, the online redo log group size is 512M and the standby redo log group size is 500M. On the standby DB, the online redo log group size is 500M and the standby redo log group size is 750M.
    This was set up by someone I don't know. Is this setting OK, or should I change the standby redo logs on the standby DB to 512M to exactly match the redo log size on the primary?
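    To compare the current sizes on each database, a quick sketch:
    select group#, thread#, bytes/1024/1024 as size_mb from v$log;
    select group#, thread#, bytes/1024/1024 as size_mb from v$standby_log;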
    Edited by: 853153 on Jun 22, 2011 9:42 AM

  • Very high log file sequential read and control file sequential read waits?

    I have a 10.2.0.4 database with 5 Streams capture processes running to replicate data to another database. However, I am seeing very high
    log file sequential read and control file sequential read waits from the capture processes. This is causing slowness, as the database is wasting so much time on these wait events. From the AWR report:
    Elapsed: 20.12 (mins)
    DB Time: 67.04 (mins)
    and from the top 5 wait events:
    Event                          Waits    Time(s)  Avg Wait(ms)  % Total Call Time  Wait Class
    CPU time                                  1,712                 42.6
    log file sequential read       99,909       683        7        17.0               System I/O
    log file sync                  49,702       426        9        10.6               Commit
    control file sequential read  262,625       384        1         9.6               System I/O
    db file sequential read        41,528       378        9         9.4               User I/O
    Oracle Support hasn't been of much help, other than wasting 10 of my days telling me to try this and try that.
    If you have Streams running in your environment, are you experiencing these waits? Have you done anything to resolve them?
    Thanks
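    For reference, one way to attribute these waits to the capture sessions (a rough sketch, joining the capture view to the per-session wait statistics) is:
    select c.capture_name, e.event, e.total_waits, e.time_waited
      from v$streams_capture c, v$session_event e
     where e.sid = c.sid
       and e.event in ('log file sequential read', 'control file sequential read')
     order by e.time_waited desc;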

    Welcome to the forums.
    There is insufficient information in what you have posted to know whether your analysis of the situation is correct, or to know anything about your Streams environment.
    We don't know what you are replicating. Not size, not volume, not type of capture, not rules, etc.
    We don't know the distance over which it is being replicated ... 10 ft. or 10 light years.
    We don't have any AWR or ASH data to look at.
    etc. etc. etc. If this is what you provided Oracle Support it is no wonder they were unable to help you.
    Diagnosing this problem, if one exists, requires someone on-site or a very substantial body of data, which you have not provided. The first step is to fill in the answers to all of the obvious first-level questions. Then we will likely come back with a second level of questioning.
    But when you do ... do not post here. Your questions are not "Database General"; they are specific to Streams, and there is a Streams forum specifically for them.
    Thank you.

  • How to create parameter and control file like filename + date

    Hello there
    I am trying to create a parameter file and a control file copy with the following commands.
    In SQL*Plus:
    create pfile='/u03/oradata/WEBDB/backup/initWEBDB.ora' from spfile;
    In RMAN:
    copy current controlfile to '/u03/oradata/WEBDB/backup/cf_longterm.cpy';
    How can I put the date at the end of the filename, like
    initWEBDB8jan06.ora and cf_longterm8jan06.cpy
    Thanks in advance
    Lionel
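    For the SQL*Plus part, one possible approach (a sketch, assuming substitution variables are enabled with SET DEFINE ON; the exact date format may need tuning) is to build the name with a new_value column first:
    column fname new_value fname noprint
    select 'initWEBDB' || to_char(sysdate, 'fmDDMonYY') || '.ora' as fname from dual;
    create pfile='/u03/oradata/WEBDB/backup/&fname' from spfile;
    For the RMAN copy, the date string is usually generated by the calling shell script (e.g. with the OS date command) and written into the RMAN command file.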


  • How to create redo log and control files in ASM on Linux RAC

    Hi Experts,
    I am going to maintain an Oracle 10g database on ASM as RAC on Linux Red Hat.
    I am new to this and have some questions.
    Normally speaking, the Oracle recommendation for an Oracle 10g database is to
    create 3 copies of the control file and
    create at least 2 redo log groups with mirrored members in the system.
    However, I checked and found:
    the redo log files are in the FRA (+FLSdisk1), with no mirror
    the control file is in the FRA (+FLSDISK1)
    the database files are in +DATA1
    There is no mirror for the redo logs.
    In EM, I also could not find a place to enter the file name.
    We use ASM to hold the database to support RAC.
    Do I need to create the redo log files like this:
    ALTER DATABASE ADD LOGFILE GROUP 1 ('+FLSdisk1/sale/onlinelog/REDO01.LOG','+FLSdisk1/sale/onlinelog/REDO01_mirror.LOG') SIZE 1000M reuse;
    My boss told me that ASM is reliable.
    Do I need to create more directories to arrange the redo log and control files in ASM for a RAC system?
    Is the FRA a good place to store control files and redo log files?
    Thanks
    JIM
    Edited by: user589812 on Jul 3, 2009 3:03 PM

    ASM is reliable but a smart DBA is very careful. If ASM is doing mirroring this is like RAID doing mirroring. What happens if you accidentally delete one copy ... the other one disappears instantly. Not a good idea.
    With respect to redo logs, you need a minimum of three groups, two members, and one thread per instance. So a 2-node cluster should, at a minimum, have 12 physical files.
    Not mirroring the redo logs, assuming multiple members, is not as critical.
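    For the control-file side of Jim's question, a rough sketch of multiplexing into two diskgroups (using the diskgroup names from the post; the restore step depends on where an intact copy currently lives):
    -- point the spfile at both diskgroups; OMF will generate the file names
    ALTER SYSTEM SET control_files = '+DATA1', '+FLSdisk1' SCOPE=SPFILE;
    -- then SHUTDOWN IMMEDIATE, STARTUP NOMOUNT, and in RMAN:
    --   RESTORE CONTROLFILE FROM '<path of an existing controlfile copy>';
    -- finally ALTER DATABASE MOUNT and ALTER DATABASE OPEN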

  • How to configure logs and trace files

    Hello people,
    We have just implemented ESS-MSS; we have around 25,000 people using this service, and every 2 days the logs and trace files on the server fill up and the portal goes down.
    Please suggest how to solve this problem. How can I reduce the trace and log files? Is there any configuration or setting for this? Please suggest and explain how it can be done.
    Biren

    Hi,
    You can control which messages get logged depending on the severity.
    This can be configured using the Log Configurator; check this guide on how you can set the severity for different locations.
    Netweaver Portal Log Configuration & Viewing (Part 1)
    Regards,
    Praveen Gudapati

  • Disk array configurations with oracle redo logs and flash recovery area.

    Dear Oracle users,
    We are planning to buy a new server for Oracle Database 10g Standard Edition. We put the Oracle database files, redo logs, and flash recovery area on separate disk arrays. My question is: what is the best disk array configuration for the redo logs and flash recovery area, RAID 10 or RAID 1? Is it possible to duplicate the flash recovery area to another location (such as a network drive) at the same time? Since we only have a single disk array controller connected to the disk arrays, I am trying to avoid a single point of failure that would lose the archive logs and daily backups.
    thanks,
    Belinda

    Thank you so much for the suggestion. Could you please let me know the answer to my question about FRA redundancy?
    “Is it possible to duplicate the flash recovery area to another location (such as a network drive) at the same time? Since we only have a single disk array controller connected to the disk arrays, I am trying to avoid a single point of failure that would lose the archive logs and daily backups.”
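    On the archive-log half of that question, one hedged option is to keep an extra copy of the archive logs outside the FRA with an RMAN backup to a second location; in RMAN (the path below is just an example):
    BACKUP ARCHIVELOG ALL FORMAT '/mnt/netdrive/arch_%U';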

  • Database restore without temp, undo and control files.

    Hi All,
    You might find this question silly, but I don't know, so I am asking it here.
    I have a cold backup of the database. Now I want to create a clone of that database, but I have different paths for the DBFs, so I will create a new control file after restoring the database.
    Now, I know that I don't need the control files and tempfiles to be restored. I have 10 undo files in the backup, but on the new clone database I don't need all 10; I want only 5. So can I do the restoration without the undo, temp and control files and add undo and temp later? And if yes, can I add them at the mount stage?
    This is my first restore. Please guide me; it's very urgent.

    Nitin Joshi wrote:
    If the COLD backup does not include the online redo logs, an ALTER DATABASE OPEN RESETLOGS is required to create these online redo logs. Unfortunately, an OPEN RESETLOGS can only be done after an incomplete recovery or when using a backup control file.
    Therefore, we do a RECOVER with a CANCEL to simulate an incomplete recovery.
    Completely agree with you, Hemant. And the links you've provided, I've gone through many times. Excellent description.
    I just wanted to know, in the above (OP's) scenario, if he has a complete cold backup (including the online redo logs), does he really need OPEN RESETLOGS or any recovery?
    Regards!
    No, if you have a cold backup with the online redo log files, then I don't think you need to open the database with RESETLOGS. RESETLOGS is needed after an incomplete recovery, after a recovery using a backup controlfile, or when you don't have the redo logs.
    I completely agree with you that, in the given scenario with a cold backup, the undo tablespace would not be part of the recovery, and you can (see the sketch after this list):
    - offline drop the undo tablespace's datafile
    - create another undo tablespace and its undo datafile
    - point the spfile to that new undo tablespace
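    A sketch of those three steps (all file and tablespace names are hypothetical):
    ALTER DATABASE DATAFILE '/u01/oradata/CLONE/undotbs01.dbf' OFFLINE DROP;
    CREATE UNDO TABLESPACE undotbs2 DATAFILE '/u01/oradata/CLONE/undotbs02.dbf' SIZE 2G;
    ALTER SYSTEM SET undo_tablespace = 'UNDOTBS2' SCOPE=SPFILE;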
    I think Aman is speaking in the context of restoring and recovering an online database, where the undo tablespace plays a vital role in recovery: the undo blocks roll back the effects of uncommitted transactions previously applied during the roll-forward phase.
    Khurram

  • Physical standby with rman: location of redo logs

    guys,
    I use simple RMAN commands to create a physical standby database (I've attached the commands below). The problem is that on the physical standby, the location of the redo logs and standby redo logs differs from the primary.
    (We use redo logs and standby redo logs as we are running in maximum availability mode and want to be prepared for switchover/failover.)
    How do I back up/restore the database in a way that the redo logs end up in the same place as on the primary database?
    thanks for your help,
    heri
    backup on primary:
    backup incremental level = 0 format '/tmp/transfer/td_%s_%p.bck' database;
    restore on standby:
    duplicate target database for standby NOFILENAMECHECK;

    Ogan,
    Thanks a lot for your reply. I was not using the log_file_name_convert parameter, as I am not aware of how to use it in this scenario. The log files on the standby are generated with random names.
    on the primary the log files are as follows:
    /home/oracle/app/oracle/oradata/td/redo01.log
    /home/oracle/app/oracle/oradata/td/redo02.log
    /home/oracle/app/oracle/oradata/td/redo03.log
    /home/oracle/app/oracle/oradata/td/standby_redo01.log
    /home/oracle/app/oracle/oradata/td/standby_redo02.log
    /home/oracle/app/oracle/oradata/td/standby_redo03.log
    on the standby the logs look like:
    /home/oracle/app/oracle/product/11.2.0/dbhome/dbs/TDSTBY/onlinelog/o1_mf_1_6b1f9mvc_.log
    /home/oracle/app/oracle/product/11.2.0/dbhome/dbs/TDSTBY/onlinelog/o1_mf_2_6b1f9p36_.log
    /home/oracle/app/oracle/product/11.2.0/dbhome/dbs/TDSTBY/onlinelog/o1_mf_3_6b1f9rdj_.log
    /home/oracle/app/oracle/product/11.2.0/dbhome/dbs/TDSTBY/onlinelog/o1_mf_4_6b1f9v8r_.log
    /home/oracle/app/oracle/product/11.2.0/dbhome/dbs/TDSTBY/onlinelog/o1_mf_5_6b1f9xms_.log
    /home/oracle/app/oracle/product/11.2.0/dbhome/dbs/TDSTBY/onlinelog/o1_mf_6_6b1f9zxv_.log
    The filenames on the standby are not predictable for me in any way, so how would I use log_file_name_convert?
    Thanks a lot for your help!
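    The parameter maps directory prefixes rather than individual file names, so the unpredictable OMF names do not matter; a sketch, set on the standby before running the duplicate (the standby directory below is hypothetical):
    ALTER SYSTEM SET log_file_name_convert =
      '/home/oracle/app/oracle/oradata/td/', '/home/oracle/app/oracle/oradata/tdstby/'
      SCOPE=SPFILE;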

  • Useful logs and trace files

    Hello experts, for our NetWeaver AS administration I am in charge of periodically checking logs and trace files. I would like to know which are the most useful logs and trace files and what information each one holds. I am familiar with "DefaultTrace.trc", and as of today it is the only one I have used, but I believe I should also be looking at other logs and trace files.
    Any suggestions?

    Hi Pedro,
    If you are talking about a Java-only system, defaulttrace is the best log/trace to look at. There are other log files, like the application log, but maybe the best way to check your logs is using NWA (NetWeaver Administrator) at the following URL on your Java system:
    http://<hostname>:<port>/nwa
    From there you need to go to Monitoring -> Logs and Traces and then Predefined View/SAP logs.
    My other recommendation is to change the severity level to ERROR for all your Java components within the Visual Administrator -> Server Node -> Services -> Log Configurator -> Locations; otherwise it is possible that you see a lot of garbage in the default traces. Anyway, you can change the severity level per component, on demand, to investigate any possible problem.
    The work directory is very important, and maybe you can also check the file "dev_serverX", which will also give you information about any out-of-memory conditions and garbage collection activity if you have these values set for the server node using the config tool:
    -verbose:gc
    -XX:+PrintGCDetails
    -XX:+PrintGCTimeStamps
    You can find more information on here:
    http://help.sap.com/saphelp_nw70/helpdata/en/ac/e9d8a51c732e42bd0e7de54b9ff4e2/content.htm
    Hopefully this help you, let me know if you need more information,
    Zareh
