Finding the relevant archive log number

Hi,
The setup is an Oracle 10g database running on Solaris 10, and it is in archive log mode. This is a development setup.
The archive log directory currently contains archives going back to the date of creation of the database. All schemas containing data have been dropped and recreated (refreshed via Data Pump / expdp) 3-4 times so far, but there is no record of the refresh dates.
I am under the assumption that the archive logs prior to the recreation of the schemas are not going to be used.
Is there any way to identify an archive log number, so that I can remove all the archive logs prior to this number?
Please share your advice on this.

Hi,
I am under the assumption that the archive logs prior to the recreation of the schemas are not going to be used.
AFAIK there is no direct way to identify such a log, but you do have one more option:
1. Take a database backup now and remove all the archive logs generated before that backup.
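For example, a minimal RMAN sketch of that approach (only a sketch; the one-day cutoff is an assumption, adjust it to whenever the new backup completed):
RMAN> backup database plus archivelog;
RMAN> delete archivelog all completed before 'sysdate - 1';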
Regards,
Taj

Similar Messages

  • Finding the order of archive logs in RAC to NON-RAC manual cloning

    Hi,
    I am in the process of automating a RAC to NON-RAC clone using the traditional user-managed method [ not using RMAN ].
    Here are the steps:
    On Source:
    Record the sequence number and thread of the instance
    put the database in begin backup mode
    copy the files
    disable the backup mode
    record the sequence number and thread of the instance
    prepare the order of the archive logs which will be needed for recovery ==> Here is where I need help
    On Target:
    Restore the backup
    Issue recovery using the order of archive logs identified ==> Failing as order is not correct.
    I am finding the archive logs based on FIRST_TIME and preparing the order, which is not correct. I think I should be using the SCN, but I am not sure what the criteria should be.
    Could you please let me know on what basis we can find the order of the archive logs which will be asked during the recovery ?
    Your quick response is appreciated. Thanks in advance for your help.
    Thanks
    Suneel

    Yes. When we execute RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL, it will prompt for the redo log sequences.
    But as I mentioned before, I am trying to script everything and cannot manually enter the sequence numbers required. I need to know which archives and threads it is going to ask for [ and in what order ] so that the recovery command can be scripted in shell.
    Please let me know if there is a way to find the order of archive logs which need to be applied to recover [ if this were NON-RAC, we could simply go by the sequence numbers, but since it is RAC, I need to know the thread number as well ].
    Thanks
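    A sketch of a query that might drive such an ordering (assuming the begin-backup SCN was recorded on the source; &begin_backup_scn below is just a placeholder). Ordering by FIRST_CHANGE# across both threads gives the order in which recovery will request the logs:
    SELECT thread#, sequence#, first_change#, next_change#, name
      FROM v$archived_log
     WHERE next_change# > &begin_backup_scn
     ORDER BY first_change#, thread#;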

  • Generated archive logs are not in sequence?

    Last Friday the latest archive log was ARC00024.ARC. When I came back the next day, the archive logs ARC00001.ARC and ARC00002.ARC were being generated by Oracle itself. I thought the archive log numbers should stay in sequence. What is happening?
    SQL> archive log list;
    Database log mode Archive Mode
    Automatic archival Enabled
    Archive destination C:\oracle\ora92\RDBMS
    Oldest online log sequence 1
    Next log sequence to archive 3
    Current log sequence 3
    SQL>
    FAN
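    One quick way to check whether a RESETLOGS occurred (which restarts the log sequence at 1) is a query like this sketch:
    SQL> select resetlogs_change#, resetlogs_time from v$database;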

    Khurram,
    It's our production instance and we haven't issued the RESETLOGS option, but when listing the archives they show different sequence numbers,
    and also while copying the archives RMAN doesn't copy them in sequence:
    -rw-r----- 1 xxx dba 69363859 May 28 19:16 2_10373.arc.gz
    -rw-r----- 1 xxx dba 43446622 May 28 19:16 1_10553.arc.gz
    -rw-r----- 1 xxx dba 52587365 May 28 19:16 1_10578.arc.gz
    -rw-r----- 1 xxx dba 45251820 May 28 19:16 1_10543.arc.gz
    -rw-r----- 1 xxx dba 60890256 May 28 19:17 1_10579.arc.gz
    -rw-r----- 1 xxx dba 46659008 May 28 19:17 1_10548.arc.gz
    -rw-r----- 1 xxx dba 116899466 May 28 19:17 2_10353.arc.gz
    -rw-r----- 1 xxx dba 77769517 May 28 19:17 1_10531.arc.gz
    -rw-r----- 1 xxx dba 66401923 May 28 19:18 1_10530.arc.gz
    -rw-r----- 1 xxx dba 45972697 May 28 19:18 1_10605.arc.gz
    -rw-r----- 1 xxx dba 55082543 May 28 19:18 1_10600.arc.gz
    -rw-r----- 1 xxxq dba 42682207 May 28 19:19 1_10547.arc.gz
    thanks,
    baskar.l

  • Shell script for archive log transfer

    Hi,
    I don't want to reinvent the wheel.
    I am looking for a shell script for log shipping to maintain a standby database.
    What I want to do is get the last applied archive log number from the alert.log,
    then copy the files from the archive destination according to this value.
    Cheers

    If you don't want to re-invent the wheel, you use Data Guard, no scripts.
    And your script should use the dictionary instead of some bs method of reading the alert log.
    v$archived_log has all the information!
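    For instance, a minimal sketch of the dictionary approach on the primary (&last_shipped_seq is a placeholder the script would track itself):
    SELECT sequence#, name
      FROM v$archived_log
     WHERE thread# = 1
       AND sequence# > &last_shipped_seq
     ORDER BY sequence#;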
    Also as far as I know, the documentation describes manual standby.
    So apparently you not only don't want to reinvent the wheel, but you want the script on a silver plate on your doorstep!
    Typical attitude of most DBAs here. Use OTN for a permanent vacation.
    Sybrand Bakker
    Senior Oracle DBA

  • Using Log Miner to find reason for extremely large archive logs.

    Hello everyone,
    I have an Oracle 10g RAC database that sometimes generates an extremely large number of archive logs. The database is in ARCHIVELOG mode.
    The usual volume of archive logs per day after compression is about 5 GB; sometimes that spikes to 15 GB and I cannot understand why.
    I am looking at gathering statistics based on the inflated redo logs via LogMiner.
    Looking at the structure of V$LOGMNR_CONTENTS - there are columns with promising names such as REDO_LENGTH, REDO_OFFSET, UNDO_LENGTH, UNDO_OFFSET.
    However all these columns are deprecated. http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14237/dynviews_1154.htm
    Is there a way of identifying operations that generate large redo logs?
    The documentation for LogMiner has some example user sessions but none show how to generate statistics on the connection between redo logs and sql statements.
    I see nothing that can help me in the following views:
    V$LOGMNR_DICTIONARY, V$LOGMNR_DICTIONARY_LOAD, V$LOGMNR_LATCH,V$LOGMNR_LOGS, V$LOGMNR_PARAMETERS, V$LOGMNR_PROCESS, V$LOGMNR_SESSION
    These views plus the following columns sound somewhat promising:
    V$LOGMNR_CONTENTS -> RBABLK, RBABYTE, UBAFIL, UBABLK, UBAREC, UBASQN, ABS_FILE#, REL_FILE#, DATA_BLK#, DATA_OBJ#, DATA_OBJD#
    V$LOGMNR_STATS -> NAME , VALUE
    However, I found nothing in the documentation on how to use them (especially not in the Database Reference or Database Utilities guides, the main documents I looked into). What should I read? Any strategies or ideas?
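    One hedged sketch of a LogMiner session that aggregates redo records per object and operation (the log file name is a placeholder; DICT_FROM_ONLINE_CATALOG assumes you mine on the source database while it is open):
    EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/arch/1_1234_789.arc', OPTIONS => DBMS_LOGMNR.NEW);
    EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
    SELECT seg_owner, seg_name, operation, COUNT(*) AS redo_records
      FROM v$logmnr_contents
     GROUP BY seg_owner, seg_name, operation
     ORDER BY redo_records DESC;
    EXECUTE DBMS_LOGMNR.END_LOGMNR;
    Counting records is only a rough proxy for volume, but it usually points at the objects behind a redo spike.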
    Kind regards:
    al_shopov

    To find sessions generating lots of redo, you can use either of the following
    methods. Both methods examine the amount of undo generated. When a transaction
    generates undo, it will automatically generate redo as well.
    The methods are:
    1) Query V$SESS_IO. This view contains the column BLOCK_CHANGES, which indicates
    how many blocks have been changed by the session. High values indicate a
    session generating lots of redo.
    The query you can use is:
    SELECT s.sid, s.serial#, s.username, s.program,
    i.block_changes
    FROM v$session s, v$sess_io i
    WHERE s.sid = i.sid
    ORDER BY 5 desc, 1, 2, 3, 4;
    Run the query multiple times and examine the delta between each occurrence
    of BLOCK_CHANGES. Large deltas indicate high redo generation by the session.
    2) Query V$TRANSACTION. This view contains information about the amount of
    undo blocks and undo records accessed by the transaction (as found in the
    USED_UBLK and USED_UREC columns).
    The query you can use is:
    SELECT s.sid, s.serial#, s.username, s.program,
    t.used_ublk, t.used_urec
    FROM v$session s, v$transaction t
    WHERE s.taddr = t.addr
    ORDER BY 5 desc, 6 desc, 1, 2, 3, 4;
    Run the query multiple times and examine the delta between each occurrence
    of USED_UBLK and USED_UREC. Large deltas indicate high redo generation by
    the session.
    hth
    Kezie

  • How to find out who deleted the archive logs

    Hi All,
    Recently some archive logs were deleted from one of our servers. Is there any way to find out which user has deleted the archive logs through OS or through database ?
    OS Version :-
    SunOS Generic_Virtual sun4u sparc SUNW,SPARC-Enterprise
    Database Version:-
    SQL*Plus: Release 9.2.0.8.0 - Production on Mon Apr 9 01:12:15 2012

    888132 wrote:
    Is there any way to find out which user has deleted the archive logs through the OS or through the database?
    As explained by others, the Oracle database keeps no record of the deletion if the files were removed at the OS level.
    But you can probably find the history of OS commands that were run using the shell's history command :). You can get the date and time.
    The following links can help:
    http://stackoverflow.com/questions/99755/how-do-i-get-the-command-buffer-in-solaris-10
    http://www.cyberciti.biz/faq/unix-linux-bash-history-display-date-time/
    http://www.linuxquestions.org/questions/solaris-opensolaris-20/in-solaris-command-line-how-to-get-the-previous-commands-573814/
    But I suggest you post in the Solaris forum to get more details, as this has nothing to do with the database (in this scenario).

  • How to find out which archived logs needed to recover a hot backup?

    I'm using Oracle 11gR2 (11.2.0.1.0).
    I have backed up a database when it is online using the following backup script through RMAN
    connect target /
    run {
    allocate channel d1 type disk;
    backup
    incremental level=0 cumulative
    filesperset 4
    format '/san/u01/app/backup/DB_%d_%T_%u_%c.rman'
    database
    }
    The backup set contains the backup of the datafiles and the control file. I have copied all the backup pieces to another server where I will restore/recover the database, but I don't know which archived logs are needed in order to restore/recover the database to a consistent state.
    I have not deleted any archived log.
    How can I find out which archived logs are needed to recover the hot backup to a consistent state? Can this be done by querying V$BACKUP_DATAFILE and V$ARCHIVED_LOG? If yes, which columns should I query?
    Thanks for any help.

    A few ways :
    1a. Get the timestamps when the BACKUP ... DATABASE began and ended.
    1b. Review the alert.log of the database that was backed up.
    1c. From the alert.log identify the first Archivelog that was generated after the begin of the BACKUP ... DATABASE and the first Archivelog that was generated after the end of the BACKUP .. DATABASE.
    1d. These (from 1c) are the minimal Archivelogs that you need to RECOVER with. You can choose to apply additional Archivelogs that were generated at the source database to continue to "roll-forward".
    2a. Do a RESTORE DATABASE alone.
    2b. Query V$DATAFILE on the restored database for the lowest CHECKPOINT_CHANGE# and CHECKPOINT_TIME. Also query for the highest CHECKPOINT_CHANGE# and CHECKPOINT_TIME.
    2c. Go back to the source database and query V$ARCHIVED_LOG (FIRST_CHANGE#) to identify the first Archivelog that has a higher SCN (FIRST_CHANGE#) than the lowest CHECKPOINT_CHANGE# from 2b above. Also query for the first Archivelog that has a higher SCN (FIRST_CHANGE#) than the highest CHECKPOINT_CHANGE# from 2b above.
    2d. These (from 2c) are the minimal Archivelogs that you need to RECOVER with.
    (Why do you need to query V$ARCHIVED_LOG at the source? If you RESTORE a controlfile backup that was generated after the first archivelog switch following the end of the BACKUP ... DATABASE, you would be able to query V$ARCHIVED_LOG at the restored database as well. That is why it is important to force an archivelog (log switch) after a BACKUP ... DATABASE and then back up the controlfile after this -- i.e. last. That way, the controlfile that you have restored to the new server has all the information needed.)
    3. RESTORE DATABASE PREVIEW in RMAN if you have the archivelogs and subsequent controlfile in the backup itself !
    Hemant K Chitale
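    A sketch of the queries behind method 2 above (&low_scn is a placeholder for the value read in step 2b; repeat for the high SCN):
    -- 2b: on the restored database
    SELECT MIN(checkpoint_change#) AS low_scn, MAX(checkpoint_change#) AS high_scn FROM v$datafile;
    -- 2c: on the source database, find the archivelog whose SCN range covers that value
    SELECT thread#, MIN(sequence#)
      FROM v$archived_log
     WHERE &low_scn BETWEEN first_change# AND next_change#
     GROUP BY thread#;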

  • How to find a session with high archive logs

    Is there any query to see which active session is generating a lot of archive log volume (and high RBS usage) in Oracle 8i and 9i?

    There is no direct option or view where you can get this information.
    However, you can find the sessions that are generating a lot of redo and undo. When a session generates a lot of redo and undo, its contribution towards the archive volume will be correspondingly high.
    You can query v$sess_io and v$session to find the session that is generating a lot of redo, i.e. where a lot of block changes are occurring.
    SELECT s.sid, s.serial#, s.username, s.program, i.block_changes
    FROM v$session s, v$sess_io i
    WHERE s.sid = i.sid
    ORDER BY 5 desc
    Also query v$transaction and v$session to find the session that is generating a lot of undo information.
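    Something along these lines (a sketch, mirroring the redo query above):
    SELECT s.sid, s.serial#, s.username, s.program, t.used_ublk, t.used_urec
      FROM v$session s, v$transaction t
     WHERE s.taddr = t.addr
     ORDER BY t.used_ublk DESC, t.used_urec DESC;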
    Jaffar

  • Unable to find archived log

    Hi
    I am restoring a hot backup taken through RMAN using the following commands:
    configure controlfile autobackup on;
    BACKUP DATABASE ;
    BACKUP ARCHIVELOG ALL DELETE INPUT;
    Now I am going to restore it using the following commands:
    restore spfile from autobackup;
    restore controlfile from autobackup;
    shutdown immediate;
    startup mount;
    restore database;
    RECOVER DATABASE;
    ALTER DATABASE OPEN RESETLOGS;
    It goes fine until restore database. At recover database I get the following errors:
    archived log for thread 1 with sequence 2461 is already on disk as file /u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_8fbs9bvt_.log
    archived log for thread 1 with sequence 2462 is already on disk as file /u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_2_8fbs9chb_.log
    unable to find archived log
    archived log thread=1 sequence=545
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of recover command at 09/11/2013 20:41:43
    RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 545 and starting SCN of 25891726
    I have checked the backup folder and there are only empty date-wise folders under the archivelog folder.
    If I write RMAN> ALTER DATABASE OPEN RESETLOGS; I get:
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of alter db command at 09/11/2013 20:43:01
    ORA-01190: control file or data file 1 is from before the last RESETLOGS
    ORA-01110: data file 1: '/u01/app/oracle/oradata/XE/system.dbf'
    If I write RMAN> recover database until sequence 545; I get
    Starting recover at 11-SEP-13
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=695 device type=DISK
    starting media recovery
    unable to find archived log
    archived log thread=1 sequence=545
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of recover command at 09/11/2013 21:09:34
    RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 545 and starting SCN of 25891726
    I don't mind if some data is lost. I will be really thankful if someone can help me get my database open.
    Habib

    The way you are trying to recover will try to recover up to the last known SCN. Try to do a point-in-time recovery up to a few minutes before the database was shut down or crashed.
    Try something like this:
    run{
    set until time "to_date('2013-09-11:00:00:00', 'yyyy-mm-dd:hh24:mi:ss')";
    restore spfile from autobackup;
    restore controlfile from autobackup;
    shutdown immediate;
    startup mount;
    restore database;
    RECOVER DATABASE;
    ALTER DATABASE OPEN RESETLOGS;
    }

  • Database generating a large number of archive logs

    Oracle 11g
    Windows Server 2008 R2
    My database was working fine; since last week I have noticed that the database is generating a large number of archive logs.
    The database size is 30 GB.
    Only one tablespace is 16 GB; the other tablespaces are not more than 2 GB.
    I cannot figure out why it is generating so many archive logs. Can anyone help me figure it out?
    The previous week, the only changes I made were:
    Drop index
    Create index
    Create a new table from an existing table.
    Nothing else was done.

    Hi,
    As you say, the workload has increased. See when the number of log switches goes high and take an AWR or Statspack report for that period, then check the DML operations. Use the query below to check the log switches:
    spool c:\log_hist.txt
    SET PAGESIZE 90
    SET LINESIZE 150
    set heading on
    column "00:00" format 9999
    column "01:00" format 9999
    column "02:00" format 9999
    column "03:00" format 9999
    column "04:00" format 9999
    column "05:00" format 9999
    column "06:00" format 9999
    column "07:00" format 9999
    column "08:00" format 9999
    column "09:00" format 9999
    column "10:00" format 9999
    column "11:00" format 9999
    column "12:00" format 9999
    column "13:00" format 9999
    column "14:00" format 9999
    column "15:00" format 9999
    column "16:00" format 9999
    column "17:00" format 9999
    column "18:00" format 9999
    column "19:00" format 9999
    column "20:00" format 9999
    column "21:00" format 9999
    column "22:00" format 9999
    column "23:00" format 9999
    SELECT * FROM (
    SELECT * FROM (
    SELECT TO_CHAR(FIRST_TIME, 'DD/MM') AS "DAY"
    , SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '00', 1, 0), '99')) "00:00"
    , SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '01', 1, 0), '99')) "01:00"
    , SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '02', 1, 0), '99')) "02:00"
    , SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '03', 1, 0), '99')) "03:00"
    , SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '04', 1, 0), '99')) "04:00"
    , SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '05', 1, 0), '99')) "05:00"
    , SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '06', 1, 0), '99')) "06:00"
    , SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '07', 1, 0), '99')) "07:00"
    , SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '08', 1, 0), '99')) "08:00"
    , SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '09', 1, 0), '99')) "09:00"
    , SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '10', 1, 0), '99')) "10:00"
    , SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '11', 1, 0), '99')) "11:00"
    , SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '12', 1, 0), '99')) "12:00"
    , SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '13', 1, 0), '99')) "13:00"
    , SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '14', 1, 0), '99')) "14:00"
    , SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '15', 1, 0), '99')) "15:00"
    , SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '16', 1, 0), '99')) "16:00"
    , SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '17', 1, 0), '99')) "17:00"
    , SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '18', 1, 0), '99')) "18:00"
    , SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '19', 1, 0), '99')) "19:00"
    , SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '20', 1, 0), '99')) "20:00"
    , SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '21', 1, 0), '99')) "21:00"
    , SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '22', 1, 0), '99')) "22:00"
    , SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '23', 1, 0), '99')) "23:00"
      FROM V$LOG_HISTORY
      WHERE extract(year FROM FIRST_TIME) = extract(year FROM sysdate)
      GROUP BY TO_CHAR(FIRST_TIME, 'DD/MM')
      ) ORDER BY TO_DATE(extract(year FROM sysdate) || DAY, 'YYYY DD/MM') DESC
      ) WHERE ROWNUM <8;
    spool off
    One common mistake is leaving debugging enabled. You can check whether any debugging is enabled in the application code (e.g. inserting a record for every operation for logging or support purposes).
    Regards
    Anand.

  • ORA-19599: block number 1985 is corrupt in archived log +FG/

    Hi Team,
    I couldn't take a backup of the RAC database archive logs. Please help me.
    RMAN> BACKUP VALIDATE DATABASE ARCHIVELOG ALL;
    Starting backup at 24-MAR-13
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting full datafile backupset
    channel ORA_DISK_1: specifying datafile(s) in backupset
    input datafile fno=00001 name=+DG1/kvxcprd/datafile/system.260.777756857
    input datafile fno=00003 name=+DG1/kvxcprd/datafile/sysaux.268.777756857
    input datafile fno=00002 name=+DG1/kvxcprd/datafile/undotbs1.263.777756857
    input datafile fno=00005 name=+DG1/kvxcprd/datafile/undotbs2.264.777756983
    input datafile fno=00004 name=+DG1/kvxcprd/datafile/users.267.777756857
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25
    channel ORA_DISK_1: starting archive log backupset
    channel ORA_DISK_1: specifying archive log(s) in backup set
    input archive log thread=2 sequence=17 recid=26 stamp=810915691
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03009: failure of backup command on ORA_DISK_1 channel at 03/24/2013 14:37:11
    ORA-19599: block number 1985 is corrupt in archived log +FG/kvxcprd/archivelog/2013_03_24/thread_2_seq_17.269.810915689
    RMAN> backup archivelog all;
    Starting backup at 24-MAR-13
    current log archived
    using channel ORA_DISK_1
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of backup command at 03/24/2013 14:39:52
    ORA-19563: header validation failed for file
    Thanks & Regards,

    The failing piece is the archived log named in the error:
    ORA-19599: block number 1985 is corrupt in archived log +FG/kvxcprd/archivelog/2013_03_24/thread_2_seq_17.269.810915689
    You can remove this log with the command:
    rman> delete archivelog '+FG/kvxcprd/archivelog/2013_03_24/thread_2_seq_17.269.810915689';
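    After removing it, one option (a sketch) is to force a log switch and re-run the archivelog backup to confirm the remaining logs are clean:
    SQL> alter system archive log current;
    RMAN> backup validate archivelog all;
    RMAN> backup archivelog all;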

  • RMAN- unable to find archive log

    Hi All,
    I am facing this problem while recovering my database. I have checked the archive file and it is present in the location where it should be. So what can the solution be?
    Please help...
    Thanks and Regards
    Amit Raghuvanshi

    Hi,
    The location is on disk and the error is:
    released channel: ORA_DISK_1
    allocated channel: dev2
    channel dev2: sid=12 devtype=DISK
    Starting recover at 08-AUG-07
    starting media recovery
    archive log thread 1 sequence 54266 is already on disk as file /erpp/erppdata/log01a.dbf
    archive log thread 1 sequence 54267 is already on disk as file /erpp/erppdata/log02a.dbf
    unable to find archive log
    archive log thread=1 sequence=54259
    released channel: dev2
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of recover command at 08/08/2007 11:33:57
    RMAN-06054: media recovery requesting unknown log: thread 1 scn 5965732883373
    Regards
    Amit Raghuvanshi
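    Since the file is reported as present on disk, it may simply be missing from the RMAN repository. A sketch of re-synchronising it (the path is a placeholder; substitute the real archived log file name):
    RMAN> crosscheck archivelog all;
    RMAN> catalog archivelog '/path/to/arch_1_54259.arc';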

  • How to find the thread 2 archive log when I recover from a 2-node RAC to a single node

    I backed up a 2-node RAC database and restored it to a single node.
    Control file created.
    SQL> recover database using backup controlfile;
    ORA-00279: change 12100176131169 generated at 07/06/2013 16:36:57 needed for
    thread 1
    ORA-00289: suggestion : /arch/hop1_566085708_1_212692.dat                 -- Oracle suggest
    ORA-00280: change 12100176131169 for thread 1 is in sequence #212692
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    arch/sch1_566085708_1_212692.dat                                            --- I input  
    ORA-00279: change 12100176131169 generated at  needed for thread 2
    ==> Oracle didn't suggest a thread 2 archive log file.
    How can I find it?

    You have to query V$ARCHIVED_LOG for THREAD# 2 on the source database to find the first archivelog where 12100176131169 is less than the NEXT_CHANGE#
    select SEQUENCE# from V$ARCHIVED_LOG where THREAD#=2 and 12100176131169 between FIRST_CHANGE# and NEXT_CHANGE#;
    Once you provide the Sequence# and file name for the first Thread 2 file, the RECOVER command will automatically generate the expected file name for the subsequent archivelogs (of both threads).
    Hemant K Chitale

  • Have I missed any archive log in the standby setup? I am unable to find it

    Dear All,
    I set up a standby in Oracle 10g.
    On my primary, the archive log list shows:
    primary>archive log list;
    Database log mode Archive Mode
    Automatic archival Enabled
    Archive destination D:\STANDBY\Archive
    Oldest online log sequence 30
    Next log sequence to archive 32
    Current log sequence 32
    On my standby, the archive log list shows:
    standby>archive log list;
    Database log mode Archive Mode
    Automatic archival Enabled
    Archive destination C:\PRIMARY\Archive
    Oldest online log sequence 31
    Next log sequence to archive 0
    Current log sequence 32
    Have I missed any archive log file?
    In the archive location of the primary I found the archive files from ARC00002_0633898426.001, ARC00003_0633898426.001 up to ARC00031_0633898426.001,
    and in the archive location of the standby I found the archive files from
    ARC00002_0633898426.001, ARC00003_0633898426.001 up to ARC00031_0633898426.001, plus one extra archive file, ARC00032_0633898426.001.
    I am confused about where the archive file ARC00032_0633898426.001 came from, since it exists only on the standby and not on the primary.
    Please clarify this...

    Please let me know about the above issue; I will be thankful to you.

  • Archive Logs NOT APPLIED but transferred

    Hi Gurus,
    I have configured primary and standby databases in the same Oracle Home. The OS version is OEL 5 and the database version is 10.2.0.1. The archive logs arrive at the standby site, but they are not getting applied in the standby database. I don't have OLAP installed in my database version. Would this cause this issue? I have attached my primary alert log details below for your reference:
    Thu Aug 30 23:55:37 2012
    Starting ORACLE instance (normal)
    Cannot determine all dependent dynamic libraries for /proc/self/exe
    Unable to find dynamic library libocr10.so in search paths
    RPATH = /ade/aime1_build2101/oracle/has/lib/:/ade/aime1_build2101/oracle/lib/:/ade/aime1_build2101/oracle/has/lib/:
    LD_LIBRARY_PATH is not set!
    The default library directories are /lib and /usr/lib
    Unable to find dynamic library libocrb10.so in search paths
    Unable to find dynamic library libocrutl10.so in search paths
    Unable to find dynamic library libocrutl10.so in search paths
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Picked latch-free SCN scheme 2
    Autotune of undo retention is turned on.
    IMODE=BR
    ILAT =18
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    ksdpec: called for event 13740 prior to event group initialization
    Starting up ORACLE RDBMS Version: 10.2.0.1.0.
    System parameters with non-default values:
    processes = 150
    sga_target = 289406976
    control_files = /home/oracle/oracle/product/10.2.0/db_1/oradata/newprim/control01.ctl, /home/oracle/oracle/product/10.2.0/db_1/oradata/newprim/control02.ctl, /home/oracle/oracle/product/10.2.0/db_1/oradata/newprim/control03.ctl
    db_file_name_convert = /home/oracle/oracle/product/10.2.0/db_1/oradata/newstand, /home/oracle/oracle/product/10.2.0/db_1/oradata/newprim
    log_file_name_convert = /home/oracle/oracle/product/10.2.0/db_1/oradata/newstand, /home/oracle/oracle/product/10.2.0/db_1/oradata/newprim, /home/oracle/oracle/product/10.2.0/db_1/flash_recovery_area/NEWSTAND/onlinelog, /home/oracle/oracle/product/10.2.0/db_1/flash_recovery_area/NEWPRIM/onlinelog
    db_block_size = 8192
    compatible = 10.2.0.1.0
    log_archive_config = DG_CONFIG=(newprim,newstand)
    log_archive_dest_1 = LOCATION=/home/oracle/oracle/product/10.2.0/db_1/oradata/newprim/arch/
    VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
    DB_UNIQUE_NAME=newprim
    log_archive_dest_2 = SERVICE=newstand LGWR ASYNC VALID_FOR=(online_logfiles,primary_role) DB_UNIQUE_NAME=newstand
    log_archive_dest_state_1 = enable
    log_archive_dest_state_2 = enable
    log_archive_max_processes= 30
    log_archive_format = %t_%s_%r.dbf
    fal_client = newprim
    fal_server = newstand
    db_file_multiblock_read_count= 16
    db_recovery_file_dest = /home/oracle/oracle/product/10.2.0/db_1/flash_recovery_area
    db_recovery_file_dest_size= 2147483648
    standby_file_management = AUTO
    undo_management = AUTO
    undo_tablespace = UNDOTBS1
    remote_login_passwordfile= EXCLUSIVE
    db_domain =
    dispatchers = (PROTOCOL=TCP) (SERVICE=newprimXDB)
    job_queue_processes = 10
    background_dump_dest = /home/oracle/oracle/product/10.2.0/db_1/admin/newprim/bdump
    user_dump_dest = /home/oracle/oracle/product/10.2.0/db_1/admin/newprim/udump
    core_dump_dest = /home/oracle/oracle/product/10.2.0/db_1/admin/newprim/cdump
    audit_file_dest = /home/oracle/oracle/product/10.2.0/db_1/admin/newprim/adump
    db_name = newprim
    db_unique_name = newprim
    open_cursors = 300
    pga_aggregate_target = 95420416
    PMON started with pid=2, OS id=28091
    PSP0 started with pid=3, OS id=28093
    MMAN started with pid=4, OS id=28095
    DBW0 started with pid=5, OS id=28097
    LGWR started with pid=6, OS id=28100
    CKPT started with pid=7, OS id=28102
    SMON started with pid=8, OS id=28104
    RECO started with pid=9, OS id=28106
    CJQ0 started with pid=10, OS id=28108
    MMON started with pid=11, OS id=28110
    MMNL started with pid=12, OS id=28112
    Thu Aug 30 23:55:38 2012
    starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
    starting up 1 shared server(s) ...
    Thu Aug 30 23:55:38 2012
    ALTER DATABASE MOUNT
    Thu Aug 30 23:55:42 2012
    Setting recovery target incarnation to 2
    Thu Aug 30 23:55:43 2012
    Successful mount of redo thread 1, with mount id 1090395834
    Thu Aug 30 23:55:43 2012
    Database mounted in Exclusive Mode
    Completed: ALTER DATABASE MOUNT
    Thu Aug 30 23:55:43 2012
    ALTER DATABASE OPEN
    Thu Aug 30 23:55:43 2012
    LGWR: STARTING ARCH PROCESSES
    ARC0 started with pid=16, OS id=28122
    ARC1 started with pid=17, OS id=28124
    ARC2 started with pid=18, OS id=28126
    ARC3 started with pid=19, OS id=28128
    ARC4 started with pid=20, OS id=28133
    ARC5 started with pid=21, OS id=28135
    ARC6 started with pid=22, OS id=28137
    ARC7 started with pid=23, OS id=28139
    ARC8 started with pid=24, OS id=28141
    ARC9 started with pid=25, OS id=28143
    ARCa started with pid=26, OS id=28145
    ARCb started with pid=27, OS id=28147
    ARCc started with pid=28, OS id=28149
    ARCd started with pid=29, OS id=28151
    ARCe started with pid=30, OS id=28153
    ARCf started with pid=31, OS id=28155
    ARCg started with pid=32, OS id=28157
    ARCh started with pid=33, OS id=28159
    ARCi started with pid=34, OS id=28161
    ARCj started with pid=35, OS id=28163
    ARCk started with pid=36, OS id=28165
    ARCl started with pid=37, OS id=28167
    ARCm started with pid=38, OS id=28169
    ARCn started with pid=39, OS id=28171
    ARCo started with pid=40, OS id=28173
    ARCp started with pid=41, OS id=28175
    ARCq started with pid=42, OS id=28177
    ARCr started with pid=43, OS id=28179
    ARCs started with pid=44, OS id=28181
    Thu Aug 30 23:55:44 2012
    ARC0: Archival started
    ARC1: Archival started
    ARC2: Archival started
    ARC3: Archival started
    ARC4: Archival started
    ARC5: Archival started
    ARC6: Archival started
    ARC7: Archival started
    ARC8: Archival started
    ARC9: Archival started
    ARCa: Archival started
    ARCb: Archival started
    ARCc: Archival started
    ARCd: Archival started
    ARCe: Archival started
    ARCf: Archival started
    ARCg: Archival started
    ARCh: Archival started
    ARCi: Archival started
    ARCj: Archival started
    ARCk: Archival started
    ARCl: Archival started
    ARCm: Archival started
    ARCn: Archival started
    ARCo: Archival started
    ARCp: Archival started
    ARCq: Archival started
    ARCr: Archival started
    ARCs: Archival started
    ARCt: Archival started
    LGWR: STARTING ARCH PROCESSES COMPLETE
    ARCt started with pid=45, OS id=28183
    LNS1 started with pid=46, OS id=28185
    Thu Aug 30 23:55:48 2012
    Thread 1 advanced to log sequence 68
    Thu Aug 30 23:55:48 2012
    ARCo: Becoming the 'no FAL' ARCH
    ARCo: Becoming the 'no SRL' ARCH
    Thu Aug 30 23:55:48 2012
    ARCp: Becoming the heartbeat ARCH
    Thu Aug 30 23:55:48 2012
    Thread 1 opened at log sequence 68
    Current log# 1 seq# 68 mem# 0: /home/oracle/oracle/product/10.2.0/db_1/oradata/newprim/redo01.log
    Successful open of redo thread 1
    Thu Aug 30 23:55:48 2012
    MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
    Thu Aug 30 23:55:48 2012
    SMON: enabling cache recovery
    Thu Aug 30 23:55:48 2012
    Successfully onlined Undo Tablespace 1.
    Thu Aug 30 23:55:48 2012
    SMON: enabling tx recovery
    Thu Aug 30 23:55:49 2012
    Database Characterset is WE8ISO8859P1
    replication_dependency_tracking turned off (no async multimaster replication found)
    Starting background process QMNC
    QMNC started with pid=47, OS id=28205
    Thu Aug 30 23:55:49 2012
    Error 1034 received logging on to the standby
    Thu Aug 30 23:55:49 2012
    Errors in file /home/oracle/oracle/product/10.2.0/db_1/admin/newprim/bdump/newprim_arc1_28124.trc:
    ORA-01034: ORACLE not available
    FAL[server, ARC1]: Error 1034 creating remote archivelog file 'newstand'
    FAL[server, ARC1]: FAL archive failed, see trace file.
    Thu Aug 30 23:55:49 2012
    Errors in file /home/oracle/oracle/product/10.2.0/db_1/admin/newprim/bdump/newprim_arc1_28124.trc:
    ORA-16055: FAL request rejected
    ARCH: FAL archive failed. Archiver continuing
    Thu Aug 30 23:55:49 2012
    ORACLE Instance newprim - Archival Error. Archiver continuing.
    Thu Aug 30 23:55:49 2012
    db_recovery_file_dest_size of 2048 MB is 9.77% used. This is a
    user-specified limit on the amount of space that will be used by this
    database for recovery-related files, and does not reflect the amount of
    space available in the underlying filesystem or ASM diskgroup.
    Thu Aug 30 23:55:50 2012
    Errors in file /home/oracle/oracle/product/10.2.0/db_1/admin/newprim/udump/newprim_ora_28120.trc:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-12663: Services required by client not available on the server
    ORA-36961: Oracle OLAP is not available.
    ORA-06512: at "SYS.OLAPIHISTORYRETENTION", line 1
    ORA-06512: at line 15
    Thu Aug 30 23:55:50 2012
    Completed: ALTER DATABASE OPEN
    Thu Aug 30 23:56:33 2012
    FAL[server]: Fail to queue the whole FAL gap
    GAP - thread 1 sequence 1-33
    DBID 1090398314 branch 792689455
    Kindly, guide me please..
    -Vimal.

    CKPT: The trace file details are added below for your reference:
    /home/oracle/oracle/product/10.2.0/db_1/admin/newprim/bdump/newprim_arc1_28124.trc
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning and Data Mining options
    ORACLE_HOME = /home/oracle/oracle/product/10.2.0/db_1
    System name:     Linux
    Node name:     localhost.localdomain
    Release:     2.6.18-8.el5PAE
    Version:     #1 SMP Tue Jun 5 23:39:57 EDT 2007
    Machine:     i686
    Instance name: newprim
    Redo thread mounted by this instance: 1
    Oracle process number: 17
    Unix process pid: 28124, image: [email protected] (ARC1)
    *** SERVICE NAME:() 2012-08-30 23:55:48.314
    *** SESSION ID:(155.1) 2012-08-30 23:55:48.314
    kcrrwkx: nothing to do (start)
    Redo shipping client performing standby login
    OCISessionBegin failed -1
    .. Detailed OCI error val is 1034 and errmsg is 'ORA-01034: ORACLE not available
    *** 2012-08-30 23:55:49.723 60679 kcrr.c
    Error 1034 received logging on to the standby
    Error 1034 connecting to destination LOG_ARCHIVE_DEST_2 standby host 'newstand'
    Error 1034 attaching to destination LOG_ARCHIVE_DEST_2 standby host 'newstand'
    ORA-01034: ORACLE not available
    *** 2012-08-30 23:55:49.723 58941 kcrr.c
    kcrrfail: dest:2 err:1034 force:0 blast:1
    kcrrwkx: unknown error:1034
    ORA-16055: FAL request rejected
    ARCH: Connecting to console port...
    ARCH: Connecting to console port...
    kcrrwkx: nothing to do (end)
    *** 2012-08-31 00:00:43.417
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 00:05:43.348
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 00:10:43.280
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 00:15:43.217
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 00:20:43.160
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 00:25:43.092
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 00:30:43.031
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 00:35:42.961
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 00:40:42.890
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 00:45:42.820
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 00:50:42.755
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 00:55:42.686
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 01:00:42.631
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 01:05:42.565
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 01:10:42.496
    kcrrwkx: nothing to do (start)
    Mahir: Yes, I have my 4 standby redo logs!
    I created the standby manually without using RMAN.
    Hemant: if it asks for even the first thread, then obviously nothing has been applied on the standby. By the way, I think this is not strictly called a 'GAP'..!
    Thanks.
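    Once the ORA-01034 connection problem to the standby is resolved, a couple of sketch queries (standard Data Guard views) run on the standby show where apply stands:
    SQL> select process, status, thread#, sequence# from v$managed_standby;
    SQL> select sequence#, applied from v$archived_log order by sequence#;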
