Secondary destination for Archived logs

Version: 10.2, 11.1, 11.2
We occasionally get 'archiver error' on our production DBs because LOG_ARCHIVE_DEST_1 fills up. How can I have a secondary location for archive logs in case my 'primary' location (LOG_ARCHIVE_DEST_1) becomes full?
I gather that LOG_ARCHIVE_DEST_2 is reserved for shipping archive logs to a Data Guard standby DB, in which case you specify the TNS entry of the standby using the SERVICE attribute.
Can I specify LOG_ARCHIVE_DEST_3 as my secondary location in case LOG_ARCHIVE_DEST_1 becomes full? Is that what LOG_ARCHIVE_DEST_n is meant for? Although the documentation says you can have up to 10 locations, I am confused about whether they are meant to store multiplexed copies of archive logs. That is not what I am looking for.

Hi again Tom,
I have one more question:
ALTER SYSTEM SET LOG_ARCHIVE_DEST_4 = 'LOCATION=/disk4/arch';
ALTER SYSTEM SET LOG_ARCHIVE_DEST_3 = 'LOCATION=/disk3/arch ALTERNATE=LOG_ARCHIVE_DEST_4';
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_4=ALTERNATE;
SQL> SELECT dest_name, status, destination FROM v$archive_dest;
DEST_NAME               STATUS    DESTINATION
LOG_ARCHIVE_DEST_1      VALID     /disk1/arch     -------------> Dest1
LOG_ARCHIVE_DEST_2      VALID     +RECOVERY       -------------> Dest2
LOG_ARCHIVE_DEST_3      VALID     /disk3/arch     -------------> Dest3
LOG_ARCHIVE_DEST_4      ALTERNATE /disk4/arch     -------------> Dest4

My understanding is (and I'm not terribly sure at the moment - I don't have a test system to hand; I haven't
set up a backup/recovery strategy in a while - I just restore backups from time to time (normally every 4 weeks)
to ensure that the database recovers as it should) - my understanding is that under the scheme above
DEST_3 will hold a copy of what's in DEST_1. DEST_4, on the other hand, will "step in" should DEST_3
fill up/fail, since it's declared as DEST_3's ALTERNATE.
As to DEST_2, I'm not sure - maybe something to do with Fast Recovery Area? I've Googled but can't
find anything - the trouble is that all the pages about this contain the word "recovery" and the "+"
sign doesn't appear to affect the search - does "+" mean something special to Google?
I don't have a system at the moment - if you do, why don't you test and see? On a test system, fill
up the file system for DEST_1 with rubbish and check to see what happens?
All of the above is to be taken with a pinch of salt - I don't have a system to hand and am not certain,
so CAVEAT EMPTOR
HTH,
Paul...
Edited by: Paulie on 21-Jul-2012 17:20

Similar Messages

  • How can I set destination for archived logs?

    I would like to know:
    how to set destination for archived logs?
    how to identify the init.ora that is used for my database?
    With RMAN using compressed backupset by default and running
    backup database;
    what does it back up exactly?

    Another thing I am wondering: when I make a backup with RMAN (backup database),
    it saves the backups in the autobackup directory of the flash_recovery_area, but it seems that it only saves the data files and the control files. Isn't there a way to save archived log files, control files and datafiles in a single backup?
    In fact I would like to make a full backup of everything with RMAN on Sunday and an incremental backup on all other days of the week. How can I accomplish this with a retention of 7 days?
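    A minimal RMAN sketch of that kind of schedule, assuming a weekly level 0 backup on Sunday and daily level 1 backups with a 7-day recovery window; the settings and commands below are illustrative, not taken from the thread:
    # one-time persistent settings
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    # Sunday: full (level 0) backup, archived logs included in the same run
    RUN {
      BACKUP INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG;
      DELETE NOPROMPT OBSOLETE;
    }
    # Monday to Saturday: incremental (level 1) backup plus archived logs
    RUN {
      BACKUP INCREMENTAL LEVEL 1 DATABASE PLUS ARCHIVELOG;
      DELETE NOPROMPT OBSOLETE;
    }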

  • "recover database until cancel" asks for archive log file that do not exist

    Hello,
    Oracle Release : Oracle 10.2.0.2.0
    Last week we performed a restore and then an Oracle recovery using the 'recover database until cancel' command (we didn't use a backup controlfile). It worked fine and we were able to restart the SAP instances. However, I still have questions about Oracle's behaviour when using this command.
    First we restored, an online backup.
    We tried to restart the database, but got ORA-01113,ORA-01110 errors :
    sr3usr.data1 needed media recovery.
    Then we performed the recovery :
    According to the Oracle documentation, "recover database until cancel" proceeds by prompting you with the suggested filenames of archived redo log files.
    The problem is that it prompts for an archive log file that does not exist.
    As you can see below, it asked for SMAarch1_10420_610186861.dbf, which has never been created. Therefore, I cancelled the recovery manually and restarted the database. We never got the message "media recovery complete".
    ORA-279 signalled during: ALTER DATABASE RECOVER    LOGFILE '/oracle/SMA/oraarch/SMAarch1_10417_61018686
    Fri Sep  7 14:09:45 2007
    ALTER DATABASE RECOVER    LOGFILE '/oracle/SMA/oraarch/SMAarch1_10418_610186861.dbf'
    Fri Sep  7 14:09:45 2007
    Media Recovery Log /oracle/SMA/oraarch/SMAarch1_10418_610186861.dbf
    ORA-279 signalled during: ALTER DATABASE RECOVER    LOGFILE '/oracle/SMA/oraarch/SMAarch1_10418_61018686
    Fri Sep  7 14:10:03 2007
    ALTER DATABASE RECOVER    LOGFILE '/oracle/SMA/oraarch/SMAarch1_10419_610186861.dbf'
    Fri Sep  7 14:10:03 2007
    Media Recovery Log /oracle/SMA/oraarch/SMAarch1_10419_610186861.dbf
    ORA-279 signalled during: ALTER DATABASE RECOVER    LOGFILE '/oracle/SMA/oraarch/SMAarch1_10419_61018686
    Fri Sep  7 14:10:13 2007
    ALTER DATABASE RECOVER    LOGFILE '/oracle/SMA/oraarch/SMAarch1_10420_610186861.dbf'
    Fri Sep  7 14:10:13 2007
    Media Recovery Log /oracle/SMA/oraarch/SMAarch1_10420_610186861.dbf
    Errors with log /oracle/SMA/oraarch/SMAarch1_10420_610186861.dbf
    ORA-308 signalled during: ALTER DATABASE RECOVER    LOGFILE '/oracle/SMA/oraarch/SMAarch1_10420_61018686
    Fri Sep  7 14:15:19 2007
    ALTER DATABASE RECOVER CANCEL
    Fri Sep  7 14:15:20 2007
    ORA-1013 signalled during: ALTER DATABASE RECOVER CANCEL ...
    Fri Sep  7 14:15:40 2007
    Shutting down instance: further logons disabled
    When restarting the database we could see that a recovery of the online redo log had been performed automatically. Is this the normal behaviour of a recovery using the "recover database until cancel" command?
    Started redo application at
    Thread 1: logseq 10416, block 482
    Fri Sep  7 14:24:55 2007
    Recovery of Online Redo Log: Thread 1 Group 4 Seq 10416 Reading mem 0
      Mem# 0 errs 0: /oracle/SMA/origlogB/log_g14m1.dbf
      Mem# 1 errs 0: /oracle/SMA/mirrlogB/log_g14m2.dbf
    Fri Sep  7 14:24:55 2007
    Completed redo application
    Fri Sep  7 14:24:55 2007
    Completed crash recovery at
    Thread 1: logseq 10416, block 525, scn 105140074
    0 data blocks read, 0 data blocks written, 43 redo blocks read
    Thank you very much for your help.
    Frod.

    Hi,
    Let me answer your query.
    =======================
    Your question: while performing the recovery, is it possible to locate which online redo log is needed, and then to apply the changes in those logs?
    1. When you have the current controlfile and need complete recovery (no data loss), then do not go for until cancel recovery.
    2. Oracle will apply all the redo logs (including the current redo log) while the recovery process is on.
    3. During the recovery you need to have all the redo logs which are listed in the view V$RECOVERY_LOG, plus all the unarchived and current redo logs. By querying V$RECOVERY_LOG you can find out which redo logs are required.
    4. If the required sequence is not in the archive destination, and the recovery process asks for that sequence, you can query V$LOG to see whether the requested sequence is part of the online redo logs. If yes, you can supply the path of the online redo log member to complete the recovery - see the sketch below.
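    A small sketch of those lookups, assuming you are connected as a user who can read the dynamic performance views (the column names are the standard ones; adjust as needed):
    -- archived logs the recovery still needs
    SELECT thread#, sequence#, archive_name
      FROM v$recovery_log;
    -- check whether a requested sequence is still in the online redo logs,
    -- and find the member file to feed to RECOVER
    SELECT l.thread#, l.sequence#, l.status, f.member
      FROM v$log l
      JOIN v$logfile f ON f.group# = l.group#
     ORDER BY l.thread#, l.sequence#;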
    Hope this information helps.
    Regards,
    Madhukar

  • How to calculate storage space for archive log files and database backups?

    Hi all,
    I have a 1.8 terabyte Oracle 9i database and need to plan how much additional disk space I will need for nightly backups and for archivelog files. Is there a script or formula available that can help me estimate how much disk space I will need to hold a day's worth of archived logs as well as a nightly export dump file and a full hot RMAN backup on disk?
    Thanks!

    I'm not sure how to estimate the size of your backups, especially if you use incrementals. However, the space required for archive logs will be equal to the amount of REDO your DB generates. I would count the number of log switches per day with a query like the following:
    select trunc(first_time), count(*)
    from v$log_history
    group by trunc(first_time);
    I would then take the average and multiply this count by the size of your redo log files (assuming they are all the same size).
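    If the log sizes vary, a sketch like the following (assuming V$ARCHIVED_LOG still holds rows for the period you care about) sums the actual archived volume per day instead of counting switches:
    -- approximate archived redo generated per day, in MB
    SELECT TRUNC(completion_time) AS day,
           ROUND(SUM(blocks * block_size) / 1024 / 1024) AS archived_mb
      FROM v$archived_log
     GROUP BY TRUNC(completion_time)
     ORDER BY TRUNC(completion_time);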

  • Validation failed for archived log

    Hi,
    oracle database version 11.2.0.4
    OS centOS 6.5
    Recently I have set up an RMAN backup script on the production database. As we are using Dbvisit for the standby database, we have a cron job which runs every 10 minutes; it generates an archive log and copies it to the standby side.
    But sometimes the backup failed because an expected archive log was not present at the location, so I put "crosscheck archivelog all" in the script and now the backup runs fine. But analysing the backup log file I keep getting
    "validation failed for archived log". The timestamps of the failed validations are from today and yesterday, even though the archives are present at the location and CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 14 DAYS is set.
    Guys, I am worried - I hope it isn't a big issue for me.
    Please suggest what is wrong.
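    For reference, a minimal sketch of the crosscheck step the poster describes adding to the script; the DELETE EXPIRED line is my own assumption, not something the poster mentioned, so only run it once you are sure the "expired" records really refer to files that are gone:
    # reconcile RMAN's archived-log records with what is actually on disk
    CROSSCHECK ARCHIVELOG ALL;
    # optional: drop records for archives that no longer exist anywhere
    DELETE NOPROMPT EXPIRED ARCHIVELOG ALL;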

    This forum is for Berkeley DB high availability. We do not have the expertise to help you with your Oracle database 11.2.0.4 issue. You'll need to submit this question to one of the Oracle database forums to get the help you are looking for.
    Paula Bingham

  • What is the meaning of stream capture waiting for archive log ?

    Today I found an Oracle alert showing that the Streams capture process is waiting for an archive log (in the "Other" wait class) and is using a lot of database time.
    What is it? And can I prevent it from happening?


  • Overheads for Archive Logs

    Hi,
    Not sure where I should address this but I would appreciate any helpful feedback.
    I would like to find out what are the overheads (in terms of size) for managing & storing archive logs for capacity planning purposes. Is this information documented anywhere? How can I find out?
    Thanks in advance.
    Thanks,
    Tony

    Since this number depends on your database, it is impossible to give a general answer. If you have a small database with few changes/deletes, you hardly need any space for archive logs. Oracle recommends that you size your online redo logs so that they switch roughly once per hour.

  • Shell script for archive log transfer

    hi
    I don't want to reinvent the wheel.
    I am looking for a shell script for log shipping to maintain a standby DB.
    What I want to do is get the last applied archived log number from the alert.log,
    then copy the files from the archive destination according to that value.
    Cheers

    If you don't want to re-invent the wheel you use Data Guard, no scripts.
    And your script should use the dictionary instead of some bs method of reading the alert log:
    v$archived_log has all the information!
    Also, as far as I know, the documentation describes manual standby.
    So apparently you not only don't want to reinvent the wheel, you want the script on a silver plate on your doorstep!
    Typical attitude of most DBAs here. Use OTN for a permanent vacation.
    Sybrand Bakker
    Senior Oracle DBA
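    If someone does go the manual-standby route despite the advice above, a sketch of the dictionary lookups being referred to might look like this (which views apply on the standby depends on your setup, so treat the second query as an assumption):
    -- on the primary: highest archived sequence per thread (what is available to ship)
    SELECT thread#, MAX(sequence#) AS last_archived
      FROM v$archived_log
     GROUP BY thread#;
    -- on the mounted standby: highest sequence already applied per thread
    SELECT thread#, MAX(sequence#) AS last_applied
      FROM v$log_history
     GROUP BY thread#;
    A real script would compare the two values and copy/apply only the gap.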

  • RMAN Backups for archive logs

    Hi,
    We are taking daily backups using BCVs and archive log backups to tape using RMAN. Please note we are using RMAN only for archive log backups.
    We are changing our backup strategy and I need the information below on RMAN.
    1) Is there any benefit I will be getting if I use RMAN for archive backups (like finding corruption in the logs, health of the archives, etc.)? Any reference note for this would be great.
    2) If I use multiplexing of archives using LOG_ARCHIVE_DEST & LOG_ARCHIVE_DUPLEX_DEST, I want RMAN to take the backup of the archives and delete them from one location only (say from LOG_ARCHIVE_DEST), and I want to maintain the archives in LOG_ARCHIVE_DUPLEX_DEST on my own (like a different purge strategy, to zip and keep for 7 days in that location).
    If I use the command "backup archivelog all delete input" it picks and removes randomly from either of these locations. Is there a way I can configure RMAN so that it will not touch my secondary location?
    Thanks in advance.
    Thanks,
    Varma

    >
    1) Is there any benefit I will be getting if I use RMAN for archive backups (like finding corruption in the logs, health of the archives, etc.)? Any reference note for this would be great.
    You can use the VALIDATE BACKUPSET command to check the health of a backup.
    2) If I use multiplexing of archives using LOG_ARCHIVE_DEST & LOG_ARCHIVE_DUPLEX_DEST, I want RMAN to take the backup of the archives and delete them from one location only (say from LOG_ARCHIVE_DEST), and I want to maintain the archives in LOG_ARCHIVE_DUPLEX_DEST on my own (like a different purge strategy, to zip and keep for 7 days in that location).
    Yes, you can do that.
    If I use the command "backup archivelog all delete input" it picks and removes randomly from either of these locations. Is there a way I can configure RMAN so that it will not touch my secondary location?
    You can allocate channels for that.
    regards
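    One approach sometimes used for this (not mentioned by the responder above, so treat it as a suggestion to test): restrict the archivelog backup to the primary destination with a LIKE pattern, so DELETE INPUT only removes the copies under that path. The path below is illustrative:
    # back up and purge only the copies in the primary archive destination;
    # the duplex destination is left for your own purge strategy
    BACKUP ARCHIVELOG LIKE '/arch_primary/%' DELETE INPUT;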

  • Oracle recommended location for archive logs in  oracle 10g rac

    Hello All,
    We would like to know the Oracle-recommended location for the archive logs in Oracle 10g RAC. We are using ASM.
    Thanks...

    user4487322 wrote:
    thanks. Is it the recommended setting if we go for a DR setup? I mean archive logs in ASM.

    If you can use Data Guard, the archive log copy to the standby system is handled by Oracle, and it supports ASM.
    Just remember, whatever your strategy, the archive logs must be in a SHARED location (where all nodes can read/write to it).

  • Specifying separate location for Archived logs in Persistent settings

    DB version:11gR1
    I am new to RMAN
    Currently my RMAN bkp script looks like
    run {
    recover copy of database with tag "INCR_BKP";
         backup check logical incremental level 1 format '/data_DISK1/bkp_dir/nhprod31/data/INCR_%d_%u'
              for recover of copy with tag "INCR_BKP" database;
         backup (archivelog all  format='/data_DISK2/bkp_dir/nhprod31/arch_logs/ARCH_%d_%T_%u_s%s_p%p' DELETE ALL INPUT TAG "arch_logs");
         backup format '/data_DISK1/bkp_dir/nhprod31/ctrl_file/RMAN_CTL_%s:%t:%p.bkp' current controlfile;
    }
    I want datafiles and archived logs to be stored in different locations, as above.
    But I want to remove those lengthy path locations from the script and just use
    backup database plus archivelog;
    For this, I need to set
    configure channel device type disk format = '/u01/backup/ora_%U.bak';
    Question 1.
    Is it possible to specify a separate archive log location using the CONFIGURE command?
    Question 2.
    If it is not possible, all the backup files will have the same naming convention (ora_%U.bak), so how will you identify the type of a backup file? Say I need the archive log files to be named with the log sequence number in them (like ARCH_%d_%T_%u_s%s_p%p). Shouldn't Oracle be including a provision for this?
    I am not even sure that each archive log file will be backed up as a single backup file. Oracle may consolidate all the archive logs into one backup piece. Please forgive my ignorance.
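    One hedged workaround, not from the thread and worth testing on 11gR1 before relying on it: keep the script short by giving the datafile and archivelog pieces their own FORMAT clauses inline in a single BACKUP command, instead of repeating long paths per backup piece:
    # datafile pieces go to one path, archivelog pieces to another
    BACKUP DATABASE FORMAT '/data_DISK1/bkp_dir/nhprod31/data/DATA_%d_%u'
      PLUS ARCHIVELOG FORMAT '/data_DISK2/bkp_dir/nhprod31/arch_logs/ARCH_%d_%T_%u_s%s_p%p';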

    783999 wrote:
    Try this for the datafile backup:
    RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '<PATH>/rbkp/databkp/DATA_L0_QA_%d_%s_%p';
    RMAN> show all;
    This worked for me.
    Regards,
    Bikram

    This won't work. Re-read the question carefully.

  • Query help for archive log generation details

    Hi All,
    Do you have a query to show the archive log generation details for today?
    Best regards,
    Rafi.

    Dear user13311731,
    You may use the query below, and I hope you will find it helpful:
    SELECT * FROM (
    SELECT * FROM (
    SELECT   TO_CHAR(FIRST_TIME, 'DD/MM') AS "DAY"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '00', 1, 0)), '999') "00:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '01', 1, 0)), '999') "01:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '02', 1, 0)), '999') "02:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '03', 1, 0)), '999') "03:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '04', 1, 0)), '999') "04:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '05', 1, 0)), '999') "05:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '06', 1, 0)), '999') "06:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '07', 1, 0)), '999') "07:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '08', 1, 0)), '999') "08:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '09', 1, 0)), '999') "09:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '10', 1, 0)), '999') "10:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '11', 1, 0)), '999') "11:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '12', 1, 0)), '999') "12:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '13', 1, 0)), '999') "13:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '14', 1, 0)), '999') "14:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '15', 1, 0)), '999') "15:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '16', 1, 0)), '999') "16:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '17', 1, 0)), '999') "17:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '18', 1, 0)), '999') "18:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '19', 1, 0)), '999') "19:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '20', 1, 0)), '999') "20:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '21', 1, 0)), '999') "21:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '22', 1, 0)), '999') "22:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '23', 1, 0)), '999') "23:00"
        FROM V$LOG_HISTORY
        WHERE extract(year FROM FIRST_TIME) = extract(year FROM sysdate)
    GROUP BY TO_CHAR(FIRST_TIME, 'DD/MM')
    ) ORDER BY TO_DATE(extract(year FROM sysdate) || DAY, 'YYYY DD/MM') DESC
    ) WHERE ROWNUM < 8;
    Hope that helps.
    Ogan

  • RMAN failure for "archive-log ... not found in controlfile"

    Recovery Manager: Release 8.1.6.0.0 - Production
    RMAN> connect target sys/XXXX@cldb;
    2> connect catalog rman/XXXX@rmandb1;
    3>
    4> resync catalog;
    5>
    6> # Backup the database
    7> run {
    8> allocate channel ch1 type disk;
    9> backup incremental level 0 format
    10> '/tools/cm/clearquest/orabackup/cldb.cqdbserver/backup/backup_%t_%s_%p' database;
    11> sql 'alter system archive log current';
    12> }
    13>
    14> # Backup the archived redo logs
    15> run {
    16> allocate channel ch1 type disk;
    17>
    18> backup incremental level 0 format
    19> '/tools/cm/clearquest/orabackup/cldb.cqdbserver/backup/archivelog_%t_%s_%p' archivelog all;
    20> sql 'alter system archive log current';
    21> }
    22>
    23> # Backup the control file
    24> run {
    25> allocate channel ch1 type disk;
    26> sql 'alter database backup controlfile to trace';
    27> copy current controlfile to
    28> '/tools/cm/clearquest/orabackup/cldb.cqdbserver/controlfile.bak';
    29> }
    30>
    31>
    RMAN-06005: connected to target database: CLDB (DBID=1292102281)
    RMAN-06008: connected to recovery catalog database
    RMAN-03022: compiling command: resync
    RMAN-03023: executing command: resync
    RMAN-08002: starting full resync of recovery catalog
    RMAN-08004: full resync complete
    RMAN-03022: compiling command: allocate
    RMAN-03023: executing command: allocate
    RMAN-08030: allocated channel: ch1
    RMAN-08500: channel ch1: sid=17 devtype=DISK
    RMAN-03022: compiling command: backup
    RMAN-03023: executing command: backup
    RMAN-08008: channel ch1: starting incremental level 0 datafile backupset
    RMAN-08502: set_count=816 set_stamp=422358010 creation_time=22-FEB-01
    RMAN-08010: channel ch1: specifying datafile(s) in backupset
    RMAN-08522: input datafile fno=00001 name=/opt/orabase/oradata/cldb/system01.dbf
    ---snip----
    more like this
    ---snip----
    RMAN-08013: channel ch1: piece 1 created
    RMAN-08503: piece handle=/tools/cm/clearquest/orabackup/cldb.cqdbserver/backup/backup_422358010_816_1 comment=NONE
    RMAN-08525: backup set complete, elapsed time: 00:01:46
    RMAN-03023: executing command: partial resync
    RMAN-08003: starting partial resync of recovery catalog
    RMAN-08005: partial resync complete
    RMAN-03022: compiling command: sql
    RMAN-06162: sql statement: alter system archive log current
    RMAN-03023: executing command: sql
    RMAN-08031: released channel: ch1
    RMAN-03022: compiling command: allocate
    RMAN-03023: executing command: allocate
    RMAN-08030: allocated channel: ch1
    RMAN-08500: channel ch1: sid=17 devtype=DISK
    RMAN-03022: compiling command: backup
    RMAN-03025: performing implicit partial resync of recovery catalog
    RMAN-03023: executing command: partial resync
    RMAN-08003: starting partial resync of recovery catalog
    RMAN-08005: partial resync complete
    RMAN-03023: executing command: backup
    RMAN-08009: channel ch1: starting archivelog backupset
    RMAN-08502: set_count=817 set_stamp=422358179 creation_time=22-FEB-01
    RMAN-08014: channel ch1: specifying archivelog(s) in backup set
    RMAN-08504: input archivelog thread=1 sequence=85 recid=2610 stamp=422352397
    RMAN-08504: input archivelog thread=1 sequence=86 recid=2611 stamp=422352397
    ----snip----
    more similar stuff here
    ----snip----
    RMAN-08504: input archivelog thread=1 sequence=140 recid=2732 stamp=422358121
    RMAN-08013: channel ch1: piece 1 created
    RMAN-08503: piece handle=/tools/cm/clearquest/orabackup/cldb.cqdbserver/backup/archivelog_422358179_817_1 comment=NONE
    RMAN-08525: backup set complete, elapsed time: 00:00:17
    RMAN-08009: channel ch1: starting archivelog backupset
    RMAN-08502: set_count=818 set_stamp=422358197 creation_time=22-FEB-01
    RMAN-08014: channel ch1: specifying archivelog(s) in backup set
    ----snip----
    more like this
    ----snip----
    RMAN-08504: input archivelog thread=1 sequence=164 recid=2756 stamp=422358123
    RMAN-08504: input archivelog thread=1 sequence=165 recid=2757 stamp=422358123
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03006: non-retryable error occurred during execution of command: backup
    RMAN-07004: unhandled exception during command execution on channel ch1
    RMAN-10035: exception raised in RPC: ORA-19571: archived-log recid 2011 stamp 419161314 not found in controlfile
    RMAN-10031: ORA-19571 occurred during call to DBMS_BACKUP_RESTORE.BACKUPARCHIVEDLOG
    I added the resync catalog command to the backup, but I am still getting this error. What is happening? I have control_file_record_keep_time = 14 in the parameter file. Do I need to change this to something greater?

    The error message points to an inconsistency between what RMAN "thinks"
    should be there and what is actually in the control file.
    There are a few things you can try:
    1. Compare V$LOG, V$LOGFILE, etc. Some of these views are based on the control file.
    2. Shut down and restart the instance. This may help in synchronizing the control file
    with internal structures.
    3. Trace the RMAN session using oradebug and even event 10046 to find out what it is doing.
    4. Dump the entire controlfile to trace using
    alter session set events 'immediate trace name controlf level 10';
    or dump the log history only:
    alter session set events 'immediate trace name loghist level 4';
    This will dump 2**level entries.
    5. Re-create the controlfile with CREATE CONTROLFILE.
    Regards,
    Sev
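    A further option worth mentioning, as an assumption on my part rather than from the reply above: letting RMAN reconcile its archived-log records with the controlfile, which in later releases is the usual remedy for ORA-19571 (on 8.1.6 the older CHANGE syntax applies, so test it first):
    # 8i-style syntax
    CHANGE ARCHIVELOG ALL CROSSCHECK;
    # 9i and later equivalent
    CROSSCHECK ARCHIVELOG ALL;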

  • HTML output for archive logs generated

    Hi All,
    Greetings of the day,
    I have a SQL query scheduled in cron which gives the number of archive logs generated in each hour. I have to modify the shell script to include HTML commands so that I get the output in HTML format.
    Any ideas on how will i do this?
    Thanks ,
    baskar.l

    Please take time to read the documentation. There is a chapter on "Generating HTML Reports from SQL*Plus" which also has examples.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14357/ch7.htm#CHDCECJG
    Edited by: Hemant K Chitale on May 21, 2009 5:08 PM
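    A minimal sketch of what that chapter describes; the spool file name and the query are placeholders, not from the thread:
    -- enable HTML markup and spool the report to a file
    SET MARKUP HTML ON SPOOL ON PREFORMAT OFF ENTMAP ON
    SPOOL archive_report.html
    SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
           COUNT(*)                               AS logs_generated
      FROM v$log_history
     GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
     ORDER BY 1;
    SPOOL OFF
    SET MARKUP HTML OFF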

  • Local file system for archive destination in RAC with ASM

    Hi gurus,
    I need some info.
    I have an ASM file system in a 2-node RAC.
    The client does not want to use a flash recovery area.
    Can we use a local file system as the archive destination rather than ASM,
    like /xyzlog1 for archives coming from node 1 and /xyzlog2 for archive logs coming from node 2?
    The important thing is that these two destinations are not shared between the nodes.
    OS is Solaris SPARC 10.
    Version is 10.2.0.2.

    There is huge space in the storage.
    Please tell me, in general, how do you do this?
    Do we take one disk from the storage, format it with a local file system and share it among the 2 nodes?
    If so, that mount point will have the same mount point name if we see it from the other node, right?
    In this scenario, if one instance is down, can the archives on the shared mount point belonging to the down node still be applied?

    Here, were you using an ASM shared location for ARCHIVES earlier?
    If so, you can add a CANDIDATE disk to your existing ARCHIVE disk group (shared).
    If not, you can create a new (shared) disk group from the LUNs' candidate disks according to your space requirements. Then you can point log_archive_dest_1 for both nodes to the single shared location (disk group), as sketched below.
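    A sketch of that last step, assuming a shared ASM disk group named +ARCHDG (the name is illustrative), set for both instances at once:
    -- point both RAC instances' first archive destination at the shared disk group
    ALTER SYSTEM SET log_archive_dest_1='LOCATION=+ARCHDG' SID='*' SCOPE=BOTH;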
