Last archive log

Hi All,
sorry if I cannot write English well; I am a beginner in Data Guard (DG).
I set up Data Guard: the primary on server1 and the DG (standby) on server2. I did not enable standby redo logs for the DG database.
When an archive log is created on server1, it is immediately copied to server2 automatically.
But, for example, when we have 10 archive logs on server1, we have 11 archive logs on server2. I don't know whether that last file is a real archive log or some other file type, but its name follows the same archive log file name format.
For example, when O1_MF_1_69_689MFJSW_.ARC (40 MB) is created, this file is copied to server2, but
on server2 O1_MF_1_70_689MFJSW_.ARC is also created. And I think that when the DG database is shut down, this last archive log,
in this case O1_MF_1_70_689MFJSW_.ARC, is deleted automatically. My question is: what is this last archive log that is created on server2? Is this normal or not?
Best Regards
Hassan

hi,
Can you paste the result of the query below? Run it on the primary.
SET PAGESIZE 124
COL DB_NAME FORMAT A8
COL HOSTNAME FORMAT A12
COL LOG_ARCHIVED FORMAT 999999
COL LOG_APPLIED FORMAT 999999
COL LOG_GAP FORMAT 999
COL APPLIED_TIME FORMAT A12
COL LAG_DAY FORMAT A8
SELECT DB_NAME, HOSTNAME, LOG_ARCHIVED, LOG_APPLIED, APPLIED_TIME,
       LOG_ARCHIVED - LOG_APPLIED LOG_GAP, LAG_DAY, LAG_TIME
FROM
(SELECT NAME DB_NAME FROM V$DATABASE),
(SELECT UPPER(SUBSTR(HOST_NAME, 1, DECODE(INSTR(HOST_NAME,'.'), 0, LENGTH(HOST_NAME),
        INSTR(HOST_NAME,'.') - 1))) HOSTNAME
   FROM V$INSTANCE),
(SELECT MAX(SEQUENCE#) LOG_ARCHIVED
   FROM V$ARCHIVED_LOG WHERE DEST_ID = 1 AND ARCHIVED = 'YES'),
(SELECT MAX(SEQUENCE#) LOG_APPLIED
   FROM V$ARCHIVED_LOG WHERE DEST_ID = 2 AND APPLIED = 'YES'),
(SELECT TO_CHAR(MAX(COMPLETION_TIME),'DD-MON/HH24:MI') APPLIED_TIME
   FROM V$ARCHIVED_LOG WHERE DEST_ID = 2 AND APPLIED = 'YES'),
(SELECT FLOOR((SYSDATE - MAX(COMPLETION_TIME)) * 24 * 60) LAG_TIME
   FROM V$ARCHIVED_LOG WHERE DEST_ID = 2 AND APPLIED = 'YES'),
(SELECT '+' || FLOOR((SYSDATE - MAX(COMPLETION_TIME)) * 24) LAG_DAY
   FROM V$ARCHIVED_LOG WHERE DEST_ID = 2 AND APPLIED = 'YES');
thanks,
baskar.l

Similar Messages

  • Archive log query

    Hi,
    I want to extract the last archived log and the time of archiving. I know it may be a stupid question, but how can I get the last row from v$loghist if I have to use a GROUP BY clause because of the MAX function?
    This is my query:
    select max(sequence#), to_char(first_time,'DD-MM-YYYY, HH24:MI:SS') from v$loghist group by first_time;
    Thank you.
    Tarek

    This should do it:
    SELECT
      sequence#,
      to_char(first_time,'DD-MM-YYYY, HH24:MI:SS')
    FROM
      v$loghist
    WHERE
      sequence# IN
      (SELECT
        max(sequence#)
      FROM
        v$loghist);

    SEQUENCE# TO_CHAR(FIRST_TIME,'
          229 14-01-2003, 09:19:43

    Hope this helps,
    L.

  • Could you tell me the last applied archive log on a disaster recovery server

    Hi DBA'S,
    we are using Oracle 9.2.0.8. My production database has a DR server (our DR is not a standby; it is created in a different way). A cron job syncs the database and transfers the new logs from the production server to the DR server every time. I want to know the last applied archive log on the DR server.
    I am using this query: select max(sequence#) from v$log_history; it shows the same result on both production and DR. And whenever we move archives older than 2 days to a backup directory, the sync fails.
    Please tell me the correct query to find out the last applied archived log on the DR server.
    Thanks&Regards
    Tirupathi

    > please tell me correct query to find out the last applied archived log in dr server
    As you are running a cron job to bring the DR in sync, I suppose v$log_history might give you the wrong results. In your case check:
    select sequence#, applied from v$archived_log where applied='YES';
    select max(sequence#) from v$archived_log where applied='YES';
    I would like to know why manual recovery (archive logs being applied manually) is being performed, and not automatic.
    In case you switch to automatic below query would be useful :-
    select al.thrd "Thread", almax "Last Seq Received", lhmax "Last Seq Applied"
        from (select thread# thrd, max(sequence#) almax
                from v$archived_log
               where resetlogs_change#=(select resetlogs_change# from v$database)
               group by thread#) al,
             (select thread# thrd, max(sequence#) lhmax
               from v$log_history
                where first_time=(select max(first_time) from v$log_history)
               group by thread#) lh
              where al.thrd = lh.thrd;
    Refer [http://aprakash.wordpress.com/2010/07/21/use-of-varchived_log-and-vlog_history-on-physical-standby/]
    HTH
    Anand

  • How to find out which archived logs are needed to recover a hot backup?

    I'm using Oracle 11gR2 (11.2.0.1.0).
    I have backed up a database when it is online using the following backup script through RMAN
    connect target /
    run {
    allocate channel d1 type disk;
    backup
    incremental level=0 cumulative
    filesperset 4
    format '/san/u01/app/backup/DB_%d_%T_%u_%c.rman'
    database
    }
    The backup set contains the backup of the datafiles and the control file. I have copied all the backup pieces to another server where I will restore/recover the database, but I don't know which archived logs are needed in order to restore/recover the database to a consistent state.
    I have not deleted any archived log.
    How can I find out which archived logs are needed to recover the hot backup to a consistent state? Can this be done by querying V$BACKUP_DATAFILE and V$ARCHIVED_LOG? If yes, which columns should I query?
    Thanks for any help.

    A few ways :
    1a. Get the timestamps when the BACKUP ... DATABASE began and ended.
    1b. Review the alert.log of the database that was backed up.
    1c. From the alert.log identify the first Archivelog that was generated after the begin of the BACKUP ... DATABASE and the first Archivelog that was generated after the end of the BACKUP .. DATABASE.
    1d. These (from 1c) are the minimal Archivelogs that you need to RECOVER with. You can choose to apply additional Archivelogs that were generated at the source database to continue to "roll forward".
    2a. Do a RESTORE DATABASE alone.
    2b. Query V$DATAFILE on the restored database for the lowest CHECKPOINT_CHANGE# and CHECKPOINT_TIME. Also query for the highest CHECKPOINT_CHANGE# and CHECKPOINT_TIME.
    2c. Go back to the source database and query V$ARCHIVED_LOG (FIRST_CHANGE#) to identify the first Archivelog that has a higher SCN (FIRST_CHANGE#) than the lowest CHECKPOINT_CHANGE# from 2b above. Also query for the first Archivelog that has a higher SCN (FIRST_CHANGE#) than the highest CHECKPOINT_CHANGE# from 2b above.
    2d. These (from 2c) are the minimal Archivelogs that you need to RECOVER with.
    (Why do you need to query V$ARCHIVED_LOG at the source? If you RESTORE a controlfile backup that was generated after the first archivelog switch following the end of the BACKUP ... DATABASE, you would be able to query V$ARCHIVED_LOG at the restored database as well. That is why it is important to force an archivelog (log switch) after a BACKUP ... DATABASE and then back up the controlfile after this -- i.e. last. That way, the controlfile that you have restored to the new server has all the information needed.)
    3. RESTORE DATABASE PREVIEW in RMAN if you have the archivelogs and subsequent controlfile in the backup itself !
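    As a rough sketch of steps 2b and 2c (the &low_scn / &high_scn substitution variables are illustrative placeholders for the values found in 2b, not part of the original post):
    -- 2b: on the restored database, after RESTORE DATABASE
    SELECT MIN(checkpoint_change#) AS low_scn,
           MAX(checkpoint_change#) AS high_scn
      FROM v$datafile;
    -- 2c: on the source database, list candidate archivelogs
    SELECT sequence#, first_change#, next_change#, name
      FROM v$archived_log
     WHERE next_change# > &low_scn   -- the first log needed ends after the lowest checkpoint
     ORDER BY sequence#;
    -- recovery needs these, at minimum up to and including the first log
    -- whose FIRST_CHANGE# exceeds &high_scn (per step 2c)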
    Hemant K Chitale

  • Archive logs not getting shipped when we do a switch

    Hi Team,
    I am facing a strange issue on one of our standbys.
    After the setup of the standby, when we enabled managed recovery (MRP), the archives got shipped smoothly. But after some time we tried to switch the log file and check whether shipping was still working.
    From the alert log of the DR we can only see the message "Media Recovery Waiting for thread 1 sequence ***". But the moment we cancel managed recovery and re-enable it, it ships smoothly again.
    So I need some guidance to debug this.
    Please advise.
    Thanks

    Hello;
    On something like this I would check BOTH the Primary and Standby alert logs.
    Here's a SWITCH I forced yesterday:
    Thu Sep 12 16:01:11 2013
    ALTER SYSTEM ARCHIVE LOG
    Thu Sep 12 16:01:11 2013
    Thread 1 advanced to log sequence 811 (LGWR switch)
      Current log# 1 seq# 811 mem# 0: /u01/app/oracle/oradata/PRIMARY/redo01.log
    Thu Sep 12 16:01:11 2013
    LNS: Standby redo logfile selected for thread 1 sequence 811 for destination LOG_ARCHIVE_DEST_2
    Notice the last line is logged on the Primary almost the very moment I do the switch. Does your database, in the Primary role, show that?
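    You can also cross-check shipping from the dictionary on the Primary; a minimal sketch (assuming DEST_ID 2 is your standby destination, as is typical):
    SELECT dest_id, thread#, sequence#, archived, applied, completion_time
      FROM v$archived_log
     WHERE dest_id = 2
     ORDER BY sequence#;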
    Best Regards
    mseberg

  • ORA-00339: archived log does not contain any redo

    Hi All,
    recently we faced an 'ORA-00339: archived log does not contain any redo' issue on the standby side.
    After searching on Google and Metalink (notes 30866.1 and 7197445.8), I found that this is a known issue for 10g and earlier versions; ours is 11.2.0.3.
    Error in Alert Log :
    Errors in file /oracle/ora_home/diag/diag/rdbms/dwprd/DWPRD/trace/DWPRD_pr0a_48412.trc:
    ORA-00339: archived log does not contain any redo
    ORA-00334: archived log: '/redolog2/redo/redolog3a.log'
    Errors in file /oracle/ora_home/diag/diag/rdbms/dwprd/DWPRD/trace/DWPRD_pr0a_48412.trc (incident=190009):
    ORA-00600: internal error code, arguments: [kdBlkCheckError], [1], [56702], [6114], [], [], [], [], [], [], [], []
    Incident details in: /oracle/ora_home/diag/diag/rdbms/dwprd/DWPRD/incident/incdir_190009/DWPRD_pr0a_48412_i190009.trc
    Use ADRCI or Support Workbench to package the incident.
    See Note 411.1 at My Oracle Support for error and packaging details.
    Slave exiting with ORA-10562 exception
    Errors in file /oracle/ora_home/diag/diag/rdbms/dwprd/DWPRD/trace/DWPRD_pr0a_48412.trc:
    ORA-10562: Error occurred while applying redo to data block (file# 1, block# 56702)
    ORA-10564: tablespace SYSTEM
    ORA-01110: data file 1: '/oradata1/database/DATAFILES/system01.dbf'
    ORA-10561: block type 'TRANSACTION MANAGED DATA BLOCK', data object# 2
    ORA-00600: internal error code, arguments: [kdBlkCheckError], [1], [56702], [6114], [], [], [], [], [], [], [], []
    Mon Apr 15 11:34:12 2013
    Dumping diagnostic data in directory=[cdmp_20130415113412], requested by (instance=1, osid=48412 (PR0A)), summary=[incident=190009].
    Thanks

    Hi,
    "The archived log is not the correct log.
    It is a copy of a log file that has never been used for redo generation, or was an online log being prepared to be the current log."
    "Restore the correct log file."
    Can you say what the last changes on your database and on the log files were?
    Did you copy your '/redolog2/redo/redolog3a.log' log file from somewhere else?
    Regards
    Mahir M. Quluzade

  • RMAN backup archive log

    Hi Guys,
    Can you advise on the syntax to perform an RMAN backup of the archive logs generated in the last 2 days?
    Should it be 1 or 2?
    thanks!
    1. BACKUP ARCHIVELOG UNTIL TIME 'SYSDATE-2';
    2. BACKUP ARCHIVELOG FROM TIME 'SYSDATE-2';

    What prevents you from trying both?
    I'm not trying to be difficult here, but why take the time to ask people in a forum, without even supplying a version number, instead of just finding out?
    It took me less than 60 seconds to cut-and-paste both of your command lines into RMAN and look at the output.
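    For the record, the two clauses select opposite ranges, so option 2 matches "logs generated in the last 2 days" (a quick reference, not a substitute for testing):
    BACKUP ARCHIVELOG FROM TIME 'SYSDATE-2';   -- archivelogs created within the last 2 days
    BACKUP ARCHIVELOG UNTIL TIME 'SYSDATE-2';  -- archivelogs created up to 2 days ago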
    Edited by: damorgan on Jan 19, 2013 4:11 PM

  • Problem with space management of archived log files

    Dear friends,
    I have a problem with space management of archived log files.
    My database is Oracle 10g Release 1 running in archivelog mode. I use OEM (web based) to configure all the backup and recovery settings.
    I configured the "Flash Recovery Area" to do backup and recovery automatically. My daily backup schedule is every night at 2:00am, and my backup setting is "disk settings" -- "compressed backup set". The following is the RMAN script:
    Daily Script:
    run {
    allocate channel oem_disk_backup device type disk;
    recover copy of database with tag 'ORA$OEM_LEVEL_0';
    backup incremental level 1 cumulative copies=1 for recover of copy with tag 'ORA$OEM_LEVEL_0' database;
    }
    The retention policy is the second choice, that is, "Retain backups that are necessary for a recovery to any time within the specified number of days (point-in-time recovery)". The recovery window is 1 day.
    I assigned enough space for the flash recovery area: my database size is about 2G, and I assigned 20G as the flash recovery area.
    Now here is the problem. According to the Oracle online manual, Oracle can manage the flash recovery area automatically; that is, when the space is full, it can delete the obsolete archived log files. But in fact it never works! Whenever the space is full, the database hangs. Besides, the status of the archived log files is very strange; for example, the "obsolete" status can change from "yes" to "no", and then from "no" back to "yes". I really have no idea about this! Even though I know Oracle usually keeps archived files somewhat longer than the retention policy requires, I really don't know why the obsolete status can change automatically. Although I could write a scheduled job to delete obsolete archived files every day, I just want to know the reason. My goal is to back up all the files on disk and let Oracle manage them automatically.
    Also, there is another problem, about archive mode. I have two Oracle 10g databases (Release 1); db1 is more than 20G, and db2 is about 2G. Both have the same backup and recovery policy, except that I assign a larger flash recovery area to db1. Both are in archive mode, and nearly nobody accesses either of them apart from the scheduled backup job and my occasional administration through OEM. The strange thing is that the smaller database, db2, generates many more archived log files than the bigger one, and the same goes for the size of the flashback logs for point-in-time recovery. (I enabled flashback logging for fast database point-in-time recovery; the flashback retention time is 24 hours.) I also found the memory utilization of the smaller database is higher: nearly all the time the smaller database's memory utilization stays above 99%, while the bigger one's stays around 97%. (I enabled "Automatic Shared Memory Management" on both databases.) But both databases' CPU and queue loads are very low, and I'm nearly sure no one hacked the databases. So I really have no idea why the same backup and recovery policy produces such different results, especially the smaller database producing more redo than the bigger one. Does anyone happen to know the reason, or how I should go about finding it?
    By the way, I found that the web-based OEM can't reflect the correct database status when the database shuts down abnormally. For example, if the database hangs because the flash recovery area is full, then after I assign more flash recovery area space and restart the database, OEM usually can't reflect the correct database status; I must restart OEM manually for it to show the current status correctly. Does anyone know in which situations I should restart OEM to reflect the correct database status?
    Sorry for the long message; I just wanted to describe things in detail to ease diagnosis.
    Any hint will be greatly appreciated!
    Sammy

    thank you very much. In fact, my site's Oracle never manages the archive files automatically, although I have tried my best. In the end I made a job that runs daily to check the archive files and delete them.
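    (For illustration, such a cleanup job is often just an RMAN script along these lines -- a sketch that assumes archivelogs are first backed up to disk; adjust the retention to taste:)
    DELETE NOPROMPT OBSOLETE;
    DELETE NOPROMPT ARCHIVELOG ALL
      BACKED UP 1 TIMES TO DEVICE TYPE DISK
      COMPLETED BEFORE 'SYSDATE-1';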
    thanks again.

  • Shell script for archive log transfer

    hi
    I don't want to reinvent the wheel.
    I am looking for a shell script for log shipping to maintain a standby db.
    What I want to do is get the last applied archived log number from alert.log
    and copy the files from the archive destination according to that value.
    Cheers

    If you don't want to re-invent the wheel you use Data Guard, no scripts.
    And your script should use the dictionary, instead of some bs method of reading the alert log.
    v$archived_log has all the information!
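    For example, a minimal sketch (assuming the shipped archives are registered in the target database's controlfile):
    SELECT MAX(sequence#) AS last_applied
      FROM v$archived_log
     WHERE applied = 'YES';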
    Also, as far as I know, the documentation describes manual standby.
    So apparently you not only don't want to reinvent the wheel, but you want the script on a silver plate on your doorstep!
    Typical attitude of most DBAs here. Use OTN for a permanent vacation.
    Sybrand Bakker
    Senior Oracle DBA

  • Archive logs are not transferred to STDBY database

    Hi,
    I have created a STDBY (standby) database (I am running release 9.2.0.7.0).
    I see that the archivelogs are not correctly transferred to the STDBY server.
    From Primary alert log, I see the following error:
    ARC1: Evaluating archive log 1 thread 1 sequence 16734
    ARC1: LGWR is actively archiving destination LOG_ARCHIVE_DEST_2
    ARC1: Destination LOG_ARCHIVE_DEST_2 archival not expedited
    ARC1: Beginning to archive log 1 thread 1 sequence 16734
    Creating archive destination LOG_ARCHIVE_DEST_1: '/backup/archivelogs/log1_16734.arc'
    ARC1: LGWR is actively archiving destination LOG_ARCHIVE_DEST_2
    Invoking non-expedited destination LOG_ARCHIVE_DEST_2 thread 1 sequence 16734 host STDBY_PROD
    ARC1: Completed archiving log 1 thread 1 sequence 16734
    Thu Nov 17 14:54:42 2011
    Errors in file /mnt/orclEBS/oracle/proddb/9.2.0/admin/PROD_ebslive/bdump/prod_arc0_5277.trc:
    ORA-03114: not connected to ORACLE
    Thu Nov 17 14:54:42 2011
    ARC0: FAL archive failed, see trace file.
    ARCH: FAL archive failed. Archiver continuing
    Thu Nov 17 14:54:42 2011
    ORACLE Instance PROD - Archival Error. Archiver continuing.
    ARCH: Connecting to console port...
    Thu Nov 17 14:54:42 2011
    ORA-16055: FAL request rejected
    ARCH: Connecting to console port...
    Thu Nov 17 14:54:42 2011
    Errors in file /mnt/orclEBS/oracle/proddb/9.2.0/admin/PROD_ebslive/bdump/prod_arc0_5277.trc:
    ORA-16055: FAL request rejected
    ARC0: Begin FAL archive (thread 1 sequence 16483 destination STDBY_PROD)
    Creating archive destination LOG_ARCHIVE_DEST_2: 'STDBY_PROD'
    Thu Nov 17 15:05:44 2011
    LGWR: I/O error 3114 archiving log 2 to 'STDBY_PROD'
    Thu Nov 17 15:05:44 2011
    Errors in file /mnt/orclEBS/oracle/proddb/9.2.0/admin/PROD_ebslive/bdump/prod_lgwr_5265.trc:
    ORA-03114: not connected to ORACLE
    Thu Nov 17 15:10:08 2011
    Errors in file /mnt/orclEBS/oracle/proddb/9.2.0/admin/PROD_ebslive/bdump/prod_arc0_5277.trc:
    ORA-03114: not connected to ORACLE
    Thu Nov 17 15:10:08 2011
    ARC0: FAL archive failed, see trace file.
    ARCH: FAL archive failed. Archiver continuing
    Thu Nov 17 15:10:08 2011
    ORACLE Instance PROD - Archival Error. Archiver continuing.
    ARCH: Connecting to console port...
    Thu Nov 17 15:10:08 2011
    ORA-16055: FAL request rejected
    ARCH: Connecting to console port...
    Thu Nov 17 15:10:08 2011
    Errors in file /mnt/orclEBS/oracle/proddb/9.2.0/admin/PROD_ebslive/bdump/prod_arc0_5277.trc:
    ORA-16055: FAL request rejected
    I see that both archiving destinations are VALID:
    SQL> select dest_id, dest_name, status from v$archive_dest_status where status != 'INACTIVE';
       DEST_ID DEST_NAME            STATUS
    ---------- -------------------- ---------
             1 LOG_ARCHIVE_DEST_1   VALID
             2 LOG_ARCHIVE_DEST_2   VALID
    SQL>
    TNSPING works properly in both directions, e.g. from PROD -> STDBY:
    $tnsping STDBY_PROD
    TNS Ping Utility for Linux: Version 9.2.0.7.0 - Production on 17-NOV-2011 16:56:40
    Copyright (c) 1997 Oracle Corporation. All rights reserved.
    Used parameter files:
    /mnt/orclEBS/oracle/proddb/9.2.0/network/admin/PROD_ebslive/sqlnet_ifile.ora
    Used TNSNAMES adapter to resolve the alias
    Attempting to contact (DESCRIPTION= (ADDRESS=(PROTOCOL=tcp) (HOST=EBSSTDBY) (PORT=1521)) (CONNECT_DATA=(SID=PROD)))
    OK (10 msec)
    Please help me to solve this issue.
    Thank you.

    A few notes :
    Troubleshooting Tips For Dataguard Switchover (9i and 10gR1) [ID 298986.1]
    Although it is about switchover, it does mandate an exclusive password file. If the switchover requires the password file, shouldn't ARCH shipping require it?
    Step By Step Guide To Create Physical Standby Database Using RMAN [ID 469493.1]
    also covers 9i with specific mentions of 9i differences from 10g. Passwordfile is not mentioned as different in 9i from 10g.
    Data Guard 9i - Net8 Configuration for a 2-node database environment. [ID 175122.1]
    has log_archive_dest_2='SERVICE=NODE2STDBY.ACME.COM'    so it connects to the service
    *However* the document "Introduction to Oracle 9i Data Guard Manager" from
    Data Guard 9i Introduction to Data Guard Manager GUI [ID 150217.1]
    page 59 has this : "The creation process will attempt to create a remote login password file for the standby database to
    enable remote connections to the database. (Note: A remote login password file is not necessary for
    Data Guard operation; it is only needed to allow remote connections to the database from clients,
    such as other Enterprise Manager tools.)"
    Similarly the section "3.2.6 Set Initialization Parameters on a Physical Standby Database"
    in the 9i DataGuard manual at http://download.oracle.com/docs/cd/B10501_01/server.920/a96653/create_ps.htm#62941
    only has
    remote_archive_enable=TRUE
    (this is an instance parameter)
    Similarly the section "5.8.2.1 Primary Database Initialization Parameters"
    at http://download.oracle.com/docs/cd/B10501_01/server.920/a96653/log_transport.htm#1067609
    explains
    The last parameter, REMOTE_ARCHIVE_ENABLE=SEND, allows the primary database to send redo data to the standby database, but prevents the primary database from receiving redo data from another system.
    with REMOTE_ARCHIVE_ENABLE=RECEIVE on the Standby
    to Receive and archive the incoming redo data from the primary database, but only while the database is running in the standby role
    So, I guess that REMOTE_ARCHIVE_ENABLE is the important one. Not REMOTE_LOGIN_PASSWORDFILE.
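    To make that concrete, the 9i parameter split described in the quoted documentation would look like this (a sketch):
    # primary init.ora
    remote_archive_enable = send      # may send redo to the standby, not receive it
    # standby init.ora
    remote_archive_enable = receive   # may receive and archive redo while in the standby role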
    In 9i : http://download.oracle.com/docs/cd/B10501_01/server.920/a96536/ch1174.htm#1023087
    In 10g it is deprecated : http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams176.htm#sthref541 so that LOG_ARCHIVE_CONFIG is used.
    (and no longer appears in the 11.2 Reference)
    Hemant K Chitale

  • RMAN: How to apply archive logs after recovering all physical files

    Hi;
    I am using RMAN on Oracle 10g; my test database has been corrupted. I had already taken a level 0 backup with this command:
    run {
    allocate channel c1 type disk;
    backup incremental level 0 tag = Test_Weekly_database format 'O:\rman\backup\Full_Weekly_%d_%s_%p_%t'(database);
    release channel c1;
    configure controlfile autobackup format for device type disk to 'O:\rman\backup\Auto_Ctrl_weekly_%F';
    allocate channel c1 type disk;
    sql 'alter system archive log current';
    BACKUP tag = Test_Weekly_Arch ARCHIVELOG UNTIL TIME 'SYSDATE-7' format 'O:\rman\backup\Archive_weekly_%d_%s_%p_%t';
    DELETE ARCHIVELOG UNTIL TIME 'SYSDATE-7';
    release channel c1;
    }
    After backing up, I inserted a few records into the TEST123 table and switched the current log file.
    Then my database was corrupted. Now I have the last level 0 backup (RMAN) and the archive log files at OS level.
    I am recovering my database with the following commands, but the archive logs are not being applied, and my inserted records are not present in the TEST123 table.
    Kindly guide me
    SQL> startup nomount
    CMD> RMAN target=/
    RMAN>set DBID 1168995671
    RMAN>RUN {
    SET CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO 'O:\rman\backup\Auto_Ctrl_weekly_%F';
    RESTORE CONTROLFILE from autobackup;
    }
    RMAN> ALTER DATABASE MOUNT;
    RMAN> RESTORE DATABASE CHECK READONLY;
    RMAN> RECOVER DATABASE NOREDO;
    RMAN> restore archivelog all;
    SQL> startup mount
    SQL> alter database backup controlfile to trace;
    SQL> shut immediate
    SQL> startup nomount
    SQL> CREATE CONTROLFILE REUSE DATABASE "ORCL" RESETLOGS ARCHIVELOG................;
    SQL> alter database open resetlogs;
    Database altered.
    SQL> select * from TEST123;
    No records found.
    regards;
    Asim

    Dear Khurram,
    Kindly advise where I am going wrong?
    C:\>RMAN target=/
    RMAN> set DBID 1168995671
    executing command: SET DBID
    RMAN> RUN {
    2> SET CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO 'O:\rman\backup\Auto_Ctrl_weekly_%F';
    3> RESTORE CONTROLFILE from autobackup;
    4> }
    executing command: SET CONTROLFILE AUTOBACKUP FORMAT
    Starting restore at 27-DEC-07
    using channel ORA_DISK_1
    recovery area destination: O:\rman\backup
    database name (or database unique name) used for search: ORCL
    channel ORA_DISK_1: no autobackups found in the recovery area
    channel ORA_DISK_1: looking for autobackup on day: 20071227
    channel ORA_DISK_1: autobackup found: O:\rman\backup\Auto_Ctrl_weekly_c-1168995671-20071227-04
    channel ORA_DISK_1: control file restore from autobackup complete
    output filename=D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\CONTROL01.CTL
    output filename=D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\CONTROL02.CTL
    output filename=D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\CONTROL03.CTL
    Finished restore at 27-DEC-07
    RMAN> ALTER DATABASE MOUNT;
    database mounted
    released channel: ORA_DISK_1
    RMAN> RESTORE DATABASE CHECK READONLY;
    Starting restore at 27-DEC-07
    Starting implicit crosscheck backup at 27-DEC-07
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: sid=155 devtype=DISK
    allocated channel: ORA_DISK_2
    channel ORA_DISK_2: sid=154 devtype=DISK
    Crosschecked 9 objects
    Finished implicit crosscheck backup at 27-DEC-07
    Starting implicit crosscheck copy at 27-DEC-07
    using channel ORA_DISK_1
    using channel ORA_DISK_2
    Finished implicit crosscheck copy at 27-DEC-07
    searching for all files in the recovery area
    cataloging files...
    no files cataloged
    using channel ORA_DISK_1
    using channel ORA_DISK_2
    channel ORA_DISK_1: starting datafile backupset restore
    channel ORA_DISK_1: specifying datafile(s) to restore from backup set
    restoring datafile 00001 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\SYSTEM01.DBF
    restoring datafile 00002 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\UNDOTBS01.DBF
    restoring datafile 00003 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\SYSAUX01.DBF
    restoring datafile 00004 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\USERS01.DBF
    restoring datafile 00005 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\TEST.DBF
    restoring datafile 00006 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\TEST2
    channel ORA_DISK_1: reading from backup piece O:\RMAN\BACKUP\FULL_WEEKLY_ORCL_3_1_642420573
    channel ORA_DISK_1: restored backup piece 1
    piece handle=O:\RMAN\BACKUP\FULL_WEEKLY_ORCL_3_1_642420573 tag=Test_WEEKLY_DATABASE
    channel ORA_DISK_1: restore complete, elapsed time: 00:00:46
    Finished restore at 27-DEC-07
    RMAN> restore archivelog all;
    archive log thread 1 sequence 1 is already on disk as file O:\ARCHIVE\ARC00001_0642356125.001
    archive log thread 1 sequence 2 is already on disk as file O:\ARCHIVE\ARC00002_0642356125.001
    archive log thread 1 sequence 3 is already on disk as file O:\ARCHIVE\ARC00003_0642356125.001
    archive log thread 1 sequence 4 is already on disk as file O:\ARCHIVE\ARC00004_0642356125.001
    archive log thread 1 sequence 5 is already on disk as file O:\ARCHIVE\ARC00005_0642356125.001
    archive log thread 1 sequence 6 is already on disk as file O:\ARCHIVE\ARC00006_0642356125.001
    archive log thread 1 sequence 7 is already on disk as file O:\ARCHIVE\ARC00007_0642356125.001
    archive log thread 1 sequence 8 is already on disk as file O:\ARCHIVE\ARC00008_0642356125.001
    archive log thread 1 sequence 9 is already on disk as file O:\ARCHIVE\ARC00009_0642356125.001
    archive log thread 1 sequence 10 is already on disk as file O:\ARCHIVE\ARC00010_0642356125.001
    archive log thread 1 sequence 11 is already on disk as file O:\ARCHIVE\ARC00011_0642356125.001
    archive log thread 1 sequence 12 is already on disk as file O:\ARCHIVE\ARC00012_0642356125.001
    channel ORA_DISK_1: starting archive log restore to default destination
    channel ORA_DISK_1: restoring archive log
    archive log thread=1 sequence=15
    channel ORA_DISK_1: restoring archive log
    archive log thread=1 sequence=16
    channel ORA_DISK_1: restoring archive log
    archive log thread=1 sequence=17
    channel ORA_DISK_1: restoring archive log
    archive log thread=1 sequence=18
    channel ORA_DISK_1: restoring archive log
    archive log thread=1 sequence=19
    channel ORA_DISK_1: restoring archive log
    archive log thread=1 sequence=20
    channel ORA_DISK_1: reading from backup piece O:\RMAN\BACKUP\ARCHIVE_WEEKLY_ORCL_5_1_642420630
    channel ORA_DISK_1: restored backup piece 1
    piece handle=O:\RMAN\BACKUP\ARCHIVE_WEEKLY_ORCL_5_1_642420630 tag=Test_WEEKLY_ARCH
    channel ORA_DISK_1: restore complete, elapsed time: 00:00:08
    Finished restore at 27-DEC-07
    RMAN> ALTER DATABASE OPEN;
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of alter db command at 12/27/2007 10:27:10
    ORA-01152: file 1 was not restored from a sufficiently old backup
    ORA-01110: data file 1: 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\SYSTEM01.DBF'
    Regards;
    Asim
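    For reference, the transcript above restores the database but never recovers it before opening; after a restore from a hot backup the datafiles stay fuzzy until archived redo is applied, which is what ORA-01152 is complaining about (and note that RECOVER DATABASE NOREDO in the earlier attempt explicitly skips applying redo). A minimal sketch of the usual sequence:
    RMAN> RESTORE DATABASE;
    RMAN> RECOVER DATABASE;                 -- applies the needed archivelogs
    RMAN> ALTER DATABASE OPEN RESETLOGS;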

  • How to determine which archive logs are needed for a flashback.

    Hi,
    Let's assume I have archive logs 1,2,3,4, then a "backup database plus archivelogs" in RMAN, and then archive logs 5+6. If I want to flashback my database to a point immediately after the backup, how do I determine which archive logs are needed?
    I would assume I'd only need archive logs 5 and/or 6 since I did a full backup plus archivelogs and the database would have been checkpointed at that time. I'd also assume archive logs 1,2,3,4 would be obsolete as they would have been flushed to the datafiles in the checkpoint.
    Are my assumptions correct? If not what queries can I run to determine what files are needed for a flashback using the latest checkpointed datafiles?
    Thanks.

    Thanks for the explanation; let me be more specific about my problem.
    I am trying to do a flashback on a failed primary database. The only reason I want to do a flashback is that Data Guard uses the flashback command to try to synchronize the failed database. Specifically, Data Guard is trying to run:
    FLASHBACK DATABASE TO SCN 865984
    But it fails, if I run it manually then I get:
    SQL> FLASHBACK DATABASE TO SCN 865984;
    FLASHBACK DATABASE TO SCN 865984
    ERROR at line 1:
    ORA-38754: FLASHBACK DATABASE not started; required redo log is not available
    ORA-38761: redo log sequence 5 in thread 1, incarnation 3 could not be accessed
    Looking at the last checkpoint I see:
    CHECKPOINT_CHANGE#
    865857
    Also looking at the archive logs:
    RECID      STAMP  THREAD#  SEQUENCE#  FIRST_CHANGE#  FIRST_TIM  NEXT_CHANGE#  RESETLOGS_CHANGE#  RESETLOGS
       25  766838550        1          1         863888  10-NOV-11        863892             863888  10-NOV-11
       26  766838867        1          2         863892  10-NOV-11        864133             863888  10-NOV-11
       27  766839225        1          3         864133  10-NOV-11        864289             863888  10-NOV-11
       28  766839340        1          4         864289  10-NOV-11        864336             863888  10-NOV-11
       29  766840698        1          5         864336  10-NOV-11        865640             863888  10-NOV-11
       30  766841128        1          6         865640  10-NOV-11        865833             863888  10-NOV-11
       31  766841168        1          7         865833  10-NOV-11        865857             863888  10-NOV-11
    How can I determine which archive logs are needed by a flashback command? I deleted the archive logs with an SCN less than the checkpoint #; I can restore them from backup, but I am trying to figure out how to query what is required for a flashback. Maybe this coincides with the point that flashback has nothing to do with the backups of the datafiles or the checkpoints?
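    (A sketch of the relevant dictionary views -- the flashback window, and the archived logs spanning the target SCN; 865984 is the SCN from the failed command above:)
    -- how far back the flashback logs can take the database
    SELECT oldest_flashback_scn, oldest_flashback_time
      FROM v$flashback_database_log;
    -- archived logs covering the target SCN, needed for the roll-forward
    -- that follows application of the flashback logs
    SELECT thread#, sequence#, first_change#, next_change#
      FROM v$archived_log
     WHERE 865984 BETWEEN first_change# AND next_change#;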

  • Archive log missing on standby: FAL[client]: Failed to request gap sequence

    My current environment is Oracle 10.2.0.4 with ASM 10.2.0.4 on a 2-node RAC in production, and a standby with the same setup. I'm also running on Oracle Linux 5. Almost daily now an archivelog doesn't make it to the standby, and Oracle doesn't seem to resolve the gap sequence from the primary. If I stop and restart recovery, it gets the logfile and continues recovery just fine. I have checked my fal_client and fal_server settings and they look good. The logs after this error do continue to be written to the standby, but the standby won't continue recovery until I stop and restart recovery and it fetches the missing log.
    The only thing I know is happening is that the firewall people are disconnecting any connections that are inactive for 60 minutes, and they recently did an upgrade that they claim didn't change anything :)  I don't know if this is causing the problem or not. Any thoughts on what might be happening?
    Error in standby alert.log:
    Tue Jun 29 23:15:35 2010
    RFS[258]: Possible network disconnect with primary database
    Tue Jun 29 23:15:36 2010
    Fetching gap sequence in thread 2, gap sequence 9206-9206
    Tue Jun 29 23:16:46 2010
    FAL[client]: Failed to request gap sequence
    GAP - thread 2 sequence 9206-9206
    DBID 661398854 branch 714087609
    FAL[client]: All defined FAL servers have been attempted.
    Error on primary alert.log:
    Tue Jun 29 23:00:07 2010
    ARC0: Creating remote archive destination LOG_ARCHIVE_DEST_2: 'WSSPRDB' (thread 1 sequence 9265)
    (WSSPRD1)
    ARC0: Transmitting activation ID 0x29c37469
    Tue Jun 29 23:00:07 2010
    Errors in file /u01/app/oracle/admin/WSSPRD/bdump/wssprd1_arc0_14024.trc:
    ORA-03135: connection lost contact
    FAL[server, ARC0]: FAL archive failed, see trace file.
    Tue Jun 29 23:00:07 2010
    Errors in file /u01/app/oracle/admin/WSSPRD/bdump/wssprd1_arc0_14024.trc:
    ORA-16055: FAL request rejected
    ARCH: FAL archive failed. Archiver continuing
    Tue Jun 29 23:00:07 2010
    ORACLE Instance WSSPRD1 - Archival Error. Archiver continuing.
    Tue Jun 29 23:00:41 2010
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    Tue Jun 29 23:00:41 2010
    FAL[server, ARC2]: Begin FAL archive (dbid 0 branch 714087609 thread 2 sequence 9206 dest WSSPRDB)
    FAL[server, ARC2]: FAL archive failed, see trace file.
    Tue Jun 29 23:00:43 2010
    Errors in file /u01/app/oracle/admin/WSSPRD/bdump/wssprd1_arc2_14028.trc:
    ORA-16055: FAL request rejected
    ARCH: FAL archive failed. Archiver continuing
    Tue Jun 29 23:00:43 2010
    ORACLE Instance WSSPRD1 - Archival Error. Archiver continuing.
    Tue Jun 29 23:01:16 2010
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    Tue Jun 29 23:15:01 2010
    Thread 1 advanced to log sequence 9267 (LGWR switch)
    I have checked the trace files that get spit out, but they aren't anything meaningful to me as to what's really happening. Snippet of the trace file:
    tkcrrwkx: Starting to process work request
    tkcrfgli: SRL header: 0
    tkcrfgli: SRL tail: 0
    tkcrfgli: ORL to arch: 4
    tkcrfgli: le# seq thr for bck tba flags
    tkcrfgli: 1 359 1 2 0 3 0x0008 ORL active cur
    tkcrfgli: 2 358 1 0 1 1 0x0000 ORL active
    tkcrfgli: 3 361 2 4 0 0 0x0008 ORL active cur
    tkcrfgli: 4 360 2 0 3 2 0x0000 ORL active
    tkcrfgli: 5 -- entry deleted --
    tkcrfgli: 6 -- entry deleted --
    tkcrfgli: 7 -- entry deleted --
    tkcrfgli: 8 -- entry deleted --
    tkcrfgli: 9 -- entry deleted --
    tkcrfgli: 191 -- entry deleted --
    tkcrfgli: 192 -- entry deleted --
    *** 2010-03-27 01:30:32.603 20998 kcrr.c
    tkcrrwkx: Request from LGWR to perform: <startup>
    tkcrrcrlc: Starting CRL ARCH check
    *** 2010-03-27 01:30:32.603 66085 kcrr.c
    Beginning controlfile transaction 0x0x7fffd0b53198 [kcrr.c:20395 (14011)]
    *** 2010-03-27 01:30:32.645 66173 kcrr.c
    Acquired controlfile transaction 0x0x7fffd0b53198 [kcrr.c:20395 (14024)]
    *** 2010-03-27 01:30:32.649 66394 kcrr.c
    Ending controlfile transaction 0x0x7fffd0b53198 [kcrr.c:20397]
    tkcrrasgn: Checking for 'no FAL', 'no SRL', and 'HB' ARCH process
    # HB NoF NoS CRL Name
    29 NO NO NO NO ARC0
    28 NO YES YES NO ARC1
    27 NO NO NO NO ARC2
    26 NO NO NO NO ARC3
    25 YES NO NO NO ARC4
    24 NO NO NO NO ARC5
    23 NO NO NO NO ARC6
    22 NO NO NO NO ARC7
    21 NO NO NO NO ARC8
    20 NO NO NO NO ARC9
    Thanks.
    Kristi

    It's the network that's messing up; it is unlikely to be the firewall timeout, as that waits for 60 minutes and you are switching every 15 minutes. There may be some other network glitch that needs to be rectified.
    In any case -- arch file missing / corrupt / halfway through -- the FAL setup should have refetched the problematic archive log automatically.
    As many have suggested already, the best way to resolve RFS issues, I believe, is to use real-time apply by configuring standby redo logs. It is very easy to configure, and you can opt for real-time apply even in the max-performance mode that you are using right now.
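    For example (a sketch; thread/group numbers and sizes are illustrative and should match your online redo logs, with one extra group per thread; give explicit file names if you are not using OMF):
    -- on the standby, after cancelling managed recovery
    ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 10 SIZE 512M;
    ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 GROUP 11 SIZE 512M;
    -- then restart apply in real-time mode
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;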
    Even though you are maintaining (I guess) a 1-1 mapping between primary and standby instances, you can provide both primary instances in fal_server (like fal_server=string1,string2). See if that helps.
    Lastly, check whether you are having a similar issue at other times as well that is getting rectified automatically as expected:
    col message for a80
    col time for a20
    select message, to_char(timestamp,'dd-mon-rr hh24:mi:ss') time
    from v$dataguard_status
    where severity in ('Error','Fatal')
    order by timestamp;
    Cheers.

  • Archive log gap logfiles are not shipped to the standby DB automatically

    We have a non-real-time standby database, which receives the archive files from the primary database server most of the time and applies the logfiles only at one point of time daily.
    Sometimes we need to shut down the standby DB server for a while (3-4 hours).
    The missed logfiles would then catch up after the standby down time.
    But since a storage incident last week, the primary DB server has stopped catching up the missed logfiles, and we see this message in the archive trace file:
    ABC: tkrsf_al_read: No mirror copies to re-read data
    Currently we find archive log gaps on the standby server and have to manually copy those logfiles over and register them.
    We saw some tips on the internet about changing the parameter "log_archive_max_processes", but that did not help us at all.
    Here is the parameter on the primary DB server:
    log_archive_dest_2 = SERVICE=Standby_server reopen=300

    > which will receive the archive file from the primary database server most of the time
    Most times from the primary; then what about the remaining times? So you copy manually and register?
    Then it's Data Guard, not a manual standby.
    > Error 1034 received logging on to the standby
    > Errors in file /******/***arc210536.trc:
    > ORA-01034: ORACLE not available
    > FAL[server, ARC2]: FAL archive failed, see trace file.
    These errors appear on the primary when the standby is down and the primary tries to connect to it, so they are not worth investigating.
    When you don't want to apply archives on the standby, there is no need to shut it down. Just set log_archive_dest_state_2='defer'.
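    For example:
    -- on the primary, before the standby outage
    ALTER SYSTEM SET log_archive_dest_state_2 = 'DEFER';
    -- when the standby is back
    ALTER SYSTEM SET log_archive_dest_state_2 = 'ENABLE';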
    Once you re-enable it, check what errors appear in the primary alert log file.
    How is your network bandwidth? Is it capable of carrying that much archive data?
    It may take some time when you pause and restart.
    Also use LGWR in log_archive_dest_2 for real-time apply, after creating standby redo logs.
    So post the alert log information once you enable the standby database.

  • Archive log missing: how to restart capture

    Hi Gurus,
    I configured hot-log distributed CDC on 10.2.0.4 databases.
    I made a mistake: on my source DB I deleted an archive log,
    and now the state of the capture process in V_$STREAMS_CAPTURE is "WAITING FOR REDO: LAST SCN MINED 930696".
    Now I'd like to restart the capture process from the next archive (the one just after the missing archive).
    How can this be done?
    tnk Fabio

    I'm sorry to tell you this, but it's not possible (just as it's not possible to recover a database with missing logs...).
    You will have to recreate the capture process and re-instantiate the replicated tables.
    Regards,
