Archive Log Format Issues

Hi DBAs,
I have 2 archive destinations. My archive log format is ARC%S_%R.%T.
But in my first location, E:\app\Administrator\product\11.1.0\db_1\RDBMS, the format shows as ARC00025_0769191639.001,
while the second location shows E:\app\Administrator\flash_recovery_area\BASKAR\ARCHIVELOG\2011_12_08\O1_MF_1_25_7G15PVYX_.ARC.
SQL> select destination from v$archive_dest;
DESTINATION
E:\app\Administrator\product\11.1.0\db_1\RDBMS
USE_DB_RECOVERY_FILE_DEST
My question is: I am using only this format, ARC%S_%R.%T,
but it shows a different format in each location. May I know the reason behind this?
Thanks in Advance

If you are using an archive destination other than the FRA, the files are created according to LOG_ARCHIVE_FORMAT.
When the FRA is configured, it uses Oracle Managed File naming instead, which is why that destination shows O1_MF_1_25_7G15PVYX_.ARC.
From your query it is clear that two destinations are configured, so if you don't want the *.ARC* files you would have to disable the FRA.
However, using the FRA is recommended because it is easier to manage.
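For readers who want to confirm this on their own system, a minimal check (not from the original posts) is to compare the format parameter with the configured destinations; only standard views and parameters are used:
SQL> show parameter log_archive_format
SQL> select dest_id, destination, status from v$archive_dest where destination is not null;
The destination that resolves to USE_DB_RECOVERY_FILE_DEST keeps generating OMF-style names (O1_MF_...) regardless of LOG_ARCHIVE_FORMAT.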

Similar Messages

  • Standby creating archive log files issue!

    Hello Everyone,
    Working on Oracle 10g R2/Windows, I have created a Data Guard configuration with one standby database, but a strange issue is happening and I need someone to shed some light on it.
    By default, the archived logs created on the primary database should be sent to the standby database, but I found that the standby database has one extra archived log file.
    From the primary database:
    SQL> archive log list;
    Database log mode Archive Mode
    Automatic archival Enabled
    Archive destination C:\local_destination1_orcl
    Oldest online log sequence 1021
    Next log sequence to archive 1023
    Current log sequence 1023
    contents of C:\local_destination1_orcl
    1_1022_623851185.ARC
    from the standby database:
    SQL> archive log list
    Database log mode Archive Mode
    Automatic archival Enabled
    Archive destination C:\local_destination1_orcl
    Oldest online log sequence 1022
    Next log sequence to archive 0
    Current log sequence 1023
    contents of C:\local_destination1_orcl
    1_1022_623851185.ARC
    1_1023_623851185.ARC ---> this is the extra archive file created on the standby database; could someone let me know how to avoid this?
    Thanks for your help

    SELECT * FROM v$version;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
    PL/SQL Release 10.2.0.1.0 - Production
    CORE 10.2.0.1.0 Production
    TNS for 64-bit Windows: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 - Production
    The standby database is a physical standby database (not logical standby)
    Thanks again for your contribution, but I still do not understand why the standby creates archive files too.
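    For anyone comparing the two sites, a hedged check (my own addition, not from the original post) is to list what each database has archived and whether the standby has applied it:
    SQL> select thread#, max(sequence#) last_archived from v$archived_log group by thread#;
    SQL> select sequence#, applied from v$archived_log order by sequence#;
    Remote archiving on the standby and local archiving on the primary happen independently, so the two directories do not have to contain exactly the same set of files at any given moment.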

  • Archive log backup issue

    Hi,
    I am facing an issue with the archive log backup to an external autoloader tape drive (HP Data Protector software).
    The archive log backup is not successful.
    Kindly provide a suggestion to solve this issue. Please find the log below:
    BR0002I BRARCHIVE 7.00 (32)
    BR0262I Enter database user name[/password]:
    BR0169I Value 'util_file_online' of parameter/option 'backup_dev_type/-d' ignored for 'brarchive' - 'util_file' assumed
    BR0006I Start of offline redo log processing: adzulphz.sve 2009-01-28 12.12.11
    BR0252E Function fopen() failed for '/oracle/SFD/saparch/adzulphz.sve' at location main-6
    BR0253E errno 13: Permission denied
    BR0121E Processing of log file /oracle/SFD/saparch/adzulphz.sve failed
    BR0007I End of offline redo log processing: adzulphz.sve 2009-01-28 12.12.11
    BR0280I BRARCHIVE time stamp: 2009-01-28 12.12.11
    BR0005I BRARCHIVE terminated with errors
    [Major]
    From: OB2BAR_OMNISAP@sfwdqs "OMNISAP" Time: 01/28/09 12:12:11
    BRARCHIVE /usr/sap/SFD/SYS/exe/run/brarchive -a -c -u system/******** returned 3
    [Normal]
    From: BSM@sfwsol "Archive" Time: 1/28/2009 12:19:09 PM
    OB2BAR application on "sfwdqs" disconnected.
    [Normal]
    From: BMA@sfwsol "HP:Ultrium 3-SCSI_1_sfwsol" Time: 1/28/2009 12:19:38 PM
    Tape0:0:5:0C
    Medium header verification completed, 0 errors found
    [Normal]
    From: BMA@sfwsol "HP:Ultrium 3-SCSI_1_sfwsol" Time: 1/28/2009 12:19:58 PM
    By: UMA@sfwsol@Changer0:0:5:1
    Unloading medium to slot 4 from device Tape0:0:5:0C
    [Normal]
    From: BMA@sfwsol "HP:Ultrium 3-SCSI_1_sfwsol" Time: 1/28/2009 12:20:21 PM
    ABORTED Media Agent "HP:Ultrium 3-SCSI_1_sfwsol"
    [Normal]
    From: BSM@sfwsol "Archive" Time: 1/28/2009 12:20:21 PM
    Regards,
    Kumar

    Hi ,
    Please check the directory permissions for "/oracle/SFD/saparch".
    Please check permissions for <sid>adm and ora<sid> for the above directory.
    "Note 17163 - BRARCHIVE/BRBACKUP messages and codes" and also related notes may help you for addtional information.
    Regards
    Upender Reddy

  • Archive log format in ASM

    Hi Folks,
    I am copying the archive logs from an ASM instance using the command below:
    allocate channel t1 device type disk format '/backup/today/today/%h_%e.arc'; according to ML id 293234.1, %h is the thread number and %e is the sequence number.
    What is the value for the resetlogs identifier? %r doesn't work in an archive log copy; if I include %r it gives the following:
    output filename=/backup/today/today/1_%r_10572.arc recid=21022 stamp=688060687
    What is the exact value to be passed?
    thanks
    baskar.l

    Hi..
    Go through the link below, which shows all the format strings available in RMAN:
    [http://download.oracle.com/docs/cd/B19306_01/backup.102/b14194/rcmsynta033.htm#RCMRF195]
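    If the goal is just to get the resetlogs identifier into a copy name, one workaround (a sketch, not from the thread) is to look the value up once and hard-code it in the format string, since the database exposes it directly:
    SQL> select resetlogs_id from v$database_incarnation where status = 'CURRENT';
    The number returned (for example, 769191639 in the first post's file name) can then be placed literally in the RMAN format clause instead of %r.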

  • Archive log format

    Hi,
    I have given log_archive_format = %t_%s.dbf in my init.ora file.
    But when I am doing recovery it asks for the archives in a format like arch1_47677.dbf, appending "arch" at the start of the string. How can I get the format as 1_47677.dbf?
    Regards,
    Mushir

    Hi,
    My current format on production is below.
    log_archive_format string %t_%s.dbf
    It is creating archive logs in the 1_47673.dbf format.
    I am cloning the database onto another test node using recovery, and I keep log_archive_format the same as production on the test box as well. But during recovery it asks for the format arch1_47673.dbf, not 1_47673.dbf.
    Regards,
    Mushir
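    In case it helps others, a hedged check (not from Mushir's posts): the filename suggested during recovery is built from the destination and format parameters as the instance currently sees them, so it is worth confirming what the clone actually has in effect:
    SQL> select name, value from v$parameter where name in ('log_archive_format', 'log_archive_dest', 'log_archive_dest_1');
    If the destination string itself ends in something like ...arch with no trailing separator, the suggested name will appear to carry an arch prefix even though the format is %t_%s.dbf; that is only an assumption about this particular case.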

  • Extended Log Format Issue

    Hello,
    I am rather new to BEA, and I am experiencing a little bit of a problem here. I am trying to configure the access logs in order for them to generate output concerning Browser information. I have gone through and was successfully able to change the configuration to extended. The browser information is now displayed.
    I have one MAJOR problem. When using the common log format, the usernames are logged just fine. But when I changed to the extended format, I only get a value of '-'. I believe all of my configuration is correct according to W3C standards, as I am using 'cs-username' in the '#Fields' directive. I have been searching all over for information regarding how this field can be documented, but I cannot find anything.
    I have even gone out and tried to configure our apache logs to read this field, and yet I get the same '-' value there as well.
    If anyone could PLEASE help me in this matter, I would greatly appreciate it, as I am going down a river without a paddle on this one.
    Thanks in advance,
    Garret

    Hello,
    Have you read through the common log format doc and "Enabling and Configuring HTTP Access Logs"? Oddly, they don't mention cs-username in the supported field identifiers section. You could try the common log format field name auth_user, or a custom identifier. I would also raise a support case with BEA if you don't get anywhere, to make sure they do actually support this field.
    Hussein Badakhchani

  • RAC online and archive logs question

    Hello All,
    I set up a RAC database with instances prod1 and prod2 (10.2.0.4). Datafiles and online logs are on ASM.
    Do these results, queried from the two instances, look good? I am kind of concerned about group 3, which has the same name for both of its members.
    Also, archived logs are going to ASM; is this a good practice? I was reading an Oracle RMAN book and it mentioned that archived logs go to local disk.
    Is it possible to archive to local disk for online logs that are on ASM? Please advise. An early reply is appreciated. Thanks, San~
    PROD1 Instance
    SQL> select member from v$logfile;
    MEMBER
    +DATA/prod/onlinelog/group_2.264.706892209
    +FLASH/prod/onlinelog/group_2.259.706892211
    +DATA/prod/onlinelog/group_1.261.706892209
    +FLASH/prod/onlinelog/group_1.260.706892209
    +DATA/prod/onlinelog/group_3.258.706892235
    +FLASH/prod/onlinelog/group_3.258.706892235
    +DATA/prod/onlinelog/group_4.256.706892237
    +FLASH/prod/onlinelog/group_4.257.706892237
    8 rows selected.
    PROD2 Instance
    SQL> select member from v$logfile;
    MEMBER
    +DATA/prod/onlinelog/group_2.264.706892209
    +FLASH/prod/onlinelog/group_2.259.706892211
    +DATA/prod/onlinelog/group_1.261.706892209
    +FLASH/prod/onlinelog/group_1.260.706892209
    +DATA/prod/onlinelog/group_3.258.706892235
    +FLASH/prod/onlinelog/group_3.258.706892235
    +DATA/prod/onlinelog/group_4.256.706892237
    +FLASH/prod/onlinelog/group_4.257.706892237
    8 rows selected.
    ===
    SQL> archive log list
    Database log mode Archive Mode
    Automatic archival Enabled
    Archive destination USE_DB_RECOVERY_FILE_DEST
    Oldest online log sequence 3
    Next log sequence to archive 4
    Current log sequence 4
    ====
    Thanks
    San

    Hi San,
    sannidhi wrote:
    Also archived logs are going to the ASM, is this a good practice. I was reading Oracle RMAN book and it mentioned archived logs go to local disk.
    Is it possible to archive to local disk for online that are on ASM? Please advice. Early reply appreciated.. Thanks San~
    It is recommended to store archived log files on ASM / shared disk. Check your archive log format, which is supposed to guarantee uniqueness across all instances (it should include the thread number, %t).
    Yes, technically it is possible to archive to local disk, but it is not recommended: if you lose a local disk there will be gaps in the archived log files, and it also increases the administration effort.
    Regards,
    Thota
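    A quick way to check both points (a sketch of my own, using only the standard v$ views, not part of Thota's reply):
    SQL> select l.thread#, l.group#, lf.member from v$log l, v$logfile lf where l.group# = lf.group# order by l.thread#, l.group#;
    SQL> select value from v$parameter where name = 'log_archive_format';
    The two group_3 entries are simply the two members of the same group, one in +DATA and one in +FLASH; the matching trailing numbers are ASM file numbers that happen to coincide across the disk groups. The format should contain %t so the two instances never generate the same archived log name.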

  • How does one name an Archive Log file in ARC%S_%R.%T format

    Hello! I have been trying to enable archive log mode for an Oracle 10g database.
    In OEM, I went via the links Maintenance -> Recovery Settings -> Media Recovery.
    There, a text box named Log Archive Filename Format requires one to name the archive log file in the ARC%S_%R.%T format.
    I have tried several times to name the archive log, e.g. ARC001_001.001, but when I shut down and restart the database instance the error below appears:
    ORA-19905: log_archive_format must contain %s, %t and %r.
    This error has proved impossible to rectify and I am forced to uninstall then re-install Oracle 10g.
    I would like to have backups in archive log mode. Please give me the best way to name the archive log file, i.e. with an example name, so that I can have online backups.
    Thanks.

    Hi,
    If you set LOG_ARCHIVE_FORMAT to a value that does not contain the %s, %t and %r variables, it will fail with ORA-19905; the field expects a format pattern such as ARC%S_%R.%T, not a literal file name.
    Perform the steps below in order to enable archive log mode:
    SQL> create pfile='c:\temp\init.ora' from spfile;
    File created.
    SQL> shutdown immediate;
    Edit the init.ora file by adding the following lines:
    *.LOG_ARCHIVE_DEST_1='LOCATION=C:\db\archive1'
    *.LOG_ARCHIVE_FORMAT='%t_%s_%r.dbf'
    Start the DB with the modified pfile.
    - Pavan Kumar N
    Edited by: Pavan Kumar on May 2, 2010 2:17 PM
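    To complete the picture, here is a minimal sketch of the remaining steps that actually switch the database into archivelog mode; it assumes a single instance and the pfile prepared above, and it is not part of the original reply:
    SQL> startup mount pfile='c:\temp\init.ora';
    SQL> alter database archivelog;
    SQL> alter database open;
    SQL> archive log list;
    SQL> create spfile from pfile='c:\temp\init.ora';
    After this, archived logs are written to C:\db\archive1 using the %t_%s_%r.dbf pattern, and online backups become possible.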

  • Issue with backing up Archive logs

    Hi All,
    Please help me with the issues/confusions I am facing :
    1. Currently, the  "First active log file  = S0008351.LOG"  from "db2 get db cfg for SMR"
        In the log_dir, there should be logs >=S0008351.LOG
        But in my case, in addition to these logs, there are some old logs like S0008309.LOG, S0008318.LOG, S0008331.LOG  etc...
        How can I clear all these 'not-really-wanted' logs from the log_dir ?
    2. There is some issue with archive backup as a result the archive backups are not running fine.
        Since this is a very low activity system, there are not much logs generated.
        But the issue is :
        There are so many archive logs in the "log_archive" directory, I want to cleanup the directory now.
        The latest online backup is @ 26.07.2011 04:01:04
        First Log File      : S0008344.LOG
        Last Log File       : S0008346.LOG
        Inside log_archive there are archive logs from  S0008121.LOG   to   S0008304.LOG
    I won't really require these logs, correct?
    Please clear my confusions...

    Hi,
    >
    > 1. Currently, the  "First active log file  = S0008351.LOG"  from "db2 get db cfg for SMR"
    >     In the log_dir, there should be logs >=S0008351.LOG
    >     But in my case, in addition to these logs, there are some old logs like S0008309.LOG, S0008318.LOG, S0008331.LOG  etc...
    >     How can I clear all these 'not-really-wanted' logs from the log_dir ?
    >
    You should not delete logs from log_dir because those are the active online logs, and if you delete them the database will have problems starting.
    > 2. There is some issue with archive backup as a result the archive backups are not running fine.
    >     Since this is a very low activity system, there are not much logs generated.
    >     But the issue is :
    >     There are so many archive logs in the "log_archive" directory, I want to cleanup the directory now.
    >     The latest online backup is @ 26.07.2011 04:01:04
    >     First Log File      : S0008344.LOG
    >     Last Log File       : S0008346.LOG
    >   
    If your archive logs are backed up from log_archive directory then you can delete old logs.
    Thanks
    Sunny

  • About archive log issue

    DB version: 11.1.0.7
    When I issue the command "alter system archive log current", the alert log raises the message "Thread 1 cannot allocate new log, sequence 149, Private strand flush not complete".
    I think that is normal, because some dirty data in the log has not yet been written to the data files, so it can raise the "Private strand flush not complete" message.
    BUT in my view, when I issue "alter system checkpoint" and then, subsequently, "alter system archive log current", it should not raise any message, because the dirty data has already been written via "alter system checkpoint". Yet the message is still there in the alert log (Private strand flush not complete).
    How can I understand that? Thanks!

    To understand it, please check Doc 372557.1 Alert Log Messages: Private Strand Flush Not Complete
    and
    cannot allocate new log & Private strand flush not complete
    Edited by: Fran on 25-jun-2012 1:07

  • Format archive log

    Hi All
    I am using the
    "backup as compressed backupset incremental level 0 database format '/backup/%d-%I-%U-LVL0' plus archivelog delete all input;" command
    The archive log backups go to a different location than the one defined in "CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1". I want to create the archive log backups in a different location.
    How can I do this in the same command? Could I specify a second format clause for the archive logs?
    Best Regards

    Hi,
    As ebrain mentioned, you can specify different locations for db and archivelogs in the same rman command.
    Here is my case.
    C:\downloads>rman target / nocatalog
    Recovery Manager: Release 10.2.0.1.0 - Production on Tue Jun 16 15:12:15 2009
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    connected to target database: TEST (DBID=1987145012)
    using target database control file instead of recovery catalog
    RMAN> backup as compressed backupset incremental level 0 database
    2> format 'c:\oracle\%U' plus archivelog format 'c:\oracle\admin\%D_%U.arc' delete all input;
    Starting backup at 16-JUN-09
    current log archived
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting compressed archive log backupset
    channel ORA_DISK_1: specifying archive log(s) in backup set
    input archive log thread=1 sequence=1 recid=1 stamp=689004392
    input archive log thread=1 sequence=2 recid=2 stamp=689176817
    input archive log thread=1 sequence=3 recid=3 stamp=689417731
    input archive log thread=1 sequence=4 recid=4 stamp=689594593
    input archive log thread=1 sequence=5 recid=5 stamp=689699030
    input archive log thread=1 sequence=6 recid=6 stamp=689699680
    input archive log thread=1 sequence=7 recid=7 stamp=689699745
    channel ORA_DISK_1: starting piece 1 at 16-JUN-09
    channel ORA_DISK_1: finished piece 1 at 16-JUN-09
    piece handle=C:\ORACLE\ADMIN\16_06KHNUT2_1_1.ARC tag=TAG20090616T151545 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:26
    channel ORA_DISK_1: deleting archive log(s)
    archive log filename=C:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\TEST\ARCHIVELOG\2009_06_08\O1_MF_1_1_52TNS3CL_.ARC recid=1 stamp=689004392
    archive log filename=C:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\TEST\ARCHIVELOG\2009_06_10\O1_MF_1_2_52ZX5CMK_.ARC recid=2 stamp=689176817
    archive log filename=C:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\TEST\ARCHIVELOG\2009_06_13\O1_MF_1_3_5378FW00_.ARC recid=3 stamp=689417731
    archive log filename=C:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\TEST\ARCHIVELOG\2009_06_15\O1_MF_1_4_53DO4V6T_.ARC recid=4 stamp=689594593
    archive log filename=C:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\TEST\ARCHIVELOG\2009_06_16\O1_MF_1_5_53HV4O7B_.ARC recid=5 stamp=689699030
    archive log filename=C:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\TEST\ARCHIVELOG\2009_06_16\O1_MF_1_6_53HVRZY8_.ARC recid=6 stamp=689699680
    archive log filename=C:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\TEST\ARCHIVELOG\2009_06_16\O1_MF_1_7_53HVV0TC_.ARC recid=7 stamp=689699745
    Finished backup at 16-JUN-09
    Starting backup at 16-JUN-09
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting compressed incremental level 0 datafile backupset
    channel ORA_DISK_1: specifying datafile(s) in backupset
    input datafile fno=00001 name=C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\SYSTEM01.DBF
    input datafile fno=00003 name=C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\SYSAUX01.DBF
    input datafile fno=00002 name=C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\UNDOTBS01.DBF
    input datafile fno=00004 name=C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\USERS01.DBF
    channel ORA_DISK_1: starting piece 1 at 16-JUN-09
    channel ORA_DISK_1: finished piece 1 at 16-JUN-09
    piece handle=C:\ORACLE\07KHNUTU_1_1 tag=TAG20090616T151614 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:45
    channel ORA_DISK_1: starting compressed incremental level 0 datafile backupset
    channel ORA_DISK_1: specifying datafile(s) in backupset
    including current control file in backupset
    including current SPFILE in backupset
    channel ORA_DISK_1: starting piece 1 at 16-JUN-09
    channel ORA_DISK_1: finished piece 1 at 16-JUN-09
    piece handle=C:\ORACLE\08KHNUVC_1_1 tag=TAG20090616T151614 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:06
    Finished backup at 16-JUN-09
    Starting backup at 16-JUN-09
    current log archived
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting compressed archive log backupset
    channel ORA_DISK_1: specifying archive log(s) in backup set
    input archive log thread=1 sequence=8 recid=8 stamp=689699826
    channel ORA_DISK_1: starting piece 1 at 16-JUN-09
    channel ORA_DISK_1: finished piece 1 at 16-JUN-09
    piece handle=C:\ORACLE\ADMIN\16_09KHNUVJ_1_1.ARC tag=TAG20090616T151707 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
    channel ORA_DISK_1: deleting archive log(s)
    archive log filename=C:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\TEST\ARCHIVELOG\2009_06_16\O1_MF_1_8_53HVXLOH_.ARC recid=8 stamp=689699826
    Finished backup at 16-JUN-09
    RMAN>
    Please check and update.
    Thanks,
    Nirmal

  • RMAN BACKUPS AND ARCHIVED LOG ISSUES

    Product: RMAN
    Date written: 2004-02-17
    RMAN BACKUPS AND ARCHIVED LOG ISSUES
    =====================================
    Scenario #1:
    1) RMAN fails to delete all archived logs.
    The database creates archive files in two archive destinations.
    The following script is run to back up the database and then delete the archived redo log files:
    run {
    allocate channel c1 type 'sbt_tape';
    backup database;
    backup archivelog all delete input;
    }
    When CROSSCHECK is run to verify whether the archived redo log files were deleted, the following
    message appears:
    RMAN> change archivelog all crosscheck;
    RMAN-03022: compiling command: change
    RMAN-06158: validation succeeded for archived log
    RMAN-08514: archivelog filename=
    /oracle/arch/dest2/arcr_1_964.arc recid=19 stamp=368726072
    2) Cause
    This is not an error. RMAN deletes only the archived files in one of the several archive directories,
    so the archived log files in the remaining directories are left behind.
    3) Solution
    To force RMAN to delete the archived log files in every directory, allocate several channels and have
    each channel back up and delete the archived files in one archive destination.
    This can be implemented as follows:
    run {
    allocate channel t1 type 'sbt_tape';
    allocate channel t2 type 'sbt_tape';
    backup
    archivelog like '/oracle/arch/dest1/%' channel t1 delete input
    archivelog like '/oracle/arch/dest2/%' channel t2 delete input;
    }
    Scenario #2:
    1) A backup fails because RMAN cannot find an archived log.
    In this scenario, assume the database is backed up incrementally.
    Because RMAN can use incremental backups instead of archived redo logs during recovery, an OS utility
    is used to delete all archived redo logs after the backup.
    The next backup then hits the following error:
    RMAN-6089: archive log NAME not found or out of sync with catalog
    2) Cause
    This problem occurs when archived logs are deleted with an OS command. RMAN does not know that the
    archived logs have been removed. RMAN-6089 is raised when RMAN still believes the archived logs deleted
    by the OS command exist and tries to back them up.
    3) Solution
    The easiest solution is to use the DELETE INPUT option when backing up the archived logs, for example:
    run {
    allocate channel c1 type 'sbt_tape';
    backup archivelog all delete input;
    }
    The second easiest solution is to delete the archived logs with the OS utility and then run the
    following commands at the RMAN prompt:
    RMAN> allocate channel for maintenance type disk;
    RMAN> change archivelog all crosscheck;
    Oracle 8.0:
         RMAN> change archivelog '/disk/path/archivelog_name' validate;
    Oracle 8i:
    RMAN> change archivelog all crosscheck ;
    Oracle 9i:
    RMAN> crosscheck archivelog all ;
    If the catalog's COMPATIBLE parameter is set to 8.1.5 or lower, RMAN sets the status of every archived
    log it cannot find to "DELETED". If COMPATIBLE is 8.1.6 or higher, RMAN deletes the records from the
    repository.

    Very strange. I issue the following command in RMAN on both the primary and the standby machine, but they do not delete 1_55_758646076.dbf, and I can see in v$archived_log that "/home/oracle/app/oracle/dataguard/1_55_758646076.dbf" has already been applied.
    RMAN> connect target /
    RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
    old RMAN configuration parameters:
    CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
    new RMAN configuration parameters:
    CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
    new RMAN configuration parameters are successfully stored
    RMAN>
    ----------------------------------------------------------------------------------
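    One point worth adding (my own note, not stated in the thread): CONFIGURE ARCHIVELOG DELETION POLICY only marks logs as eligible for deletion; nothing is removed until a deletion command runs or the fast recovery area needs space. A hedged sketch of the explicit cleanup:
    RMAN> crosscheck archivelog all;
    RMAN> delete archivelog all;
    With the APPLIED ON ALL STANDBY policy in place, DELETE ARCHIVELOG ALL should skip any log that has not yet been applied on the standby.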

  • What order are Archive logs restored in when RMAN recover database issued

    Ok, you have a run block that has restored your level-0 RMAN backup.
    Your base datafiles are down on disc.
    You are about to start recovery to a point in time, let's say until this morning at 07:00am.
    run {   set until time "TO_DATE('2010/06/08_07:00:00','YYYY/MM/DD_HH24:MI:SS')";
    allocate channel d1 type disk;
    allocate channel d2 type disk;
    allocate channel d3 type disk;
    allocate channel d4 type disk;
    recover database;
    }
    So the above runs: it analyses the earliest SCN required for recovery, checks for incremental backups (none here), works out the archive log range
    required and starts to restore the archive logs. All as expected, and it works.
    My question: is there a particular order in which RMAN will restore the archive logs, and is the restore / recover process implemented as per the run block?
    i.e. Will all required archive logs based on the run block be restored and then the database recovered forward, or is there something in RMAN that says: restore these archive logs, now roll forwards, then restore some more?
    When we were doing this the order of the archive logs coming back seemed to be random but obviously constrained by the run block. Is this an area we need to tune to get recoveries faster for situations where incrementals are not available?
    Any inputs on experience welcome. I am now drilling into the documentation for any references there.
    Thanks

    Hi there, thanks for the response. I checked this and here are the numbers / time stamps from an example:
    This is from interpreting the output of the LIST BACKUP OF ARCHIVELOG commands.
    Backupset = 122672
    ==============
    Archive log sequence 120688 low time: 25th May 15:53:07 next time: 25th May 15:57:54
    Piece1 pieceNumber=123368 9th June 04:10:38 <-- catalogued by us.
    Piece2 pieceNumber=122673 25th May 16:05:18 <-- Original backup on production.
    Backupset = 122677
    ==============
    Archive log sequence 120683 low time: 25th May 15:27:50 Next time 25th May 15:32:24 <-- lower sequence number restored after above.
    Piece1 PieceNumber=123372 9th June 04:11:34 <-- Catalogued by us.
    Piece2 PieceNumber=122678 25th May 16:08:45 <-- Original backup on Production.
    So the above would show that with the CATALOG command you can influence the piece numbering, and therefore the restore order if, as you say, the piece number is the key. I will need to review production to see why they were backed up in a different order there. I would have thought it would use the backup set numbering and then the piece within the set / availability.
    Question: You mention archive logs are restored and applied and deleted in batches if the volume of archivelogs is large enough to be spread over multiple backup sets. What determines the batches in terms of size / number?
    Thanks for the inputs. That answers some questions.
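    For anyone investigating the same question, RMAN can show the planned restore without performing it; the sketch below uses the standard PREVIEW option and simply reuses the sequence range quoted above as an illustration:
    RMAN> restore archivelog from sequence 120683 until sequence 120688 preview;
    The preview lists the backup sets and pieces RMAN intends to read, which makes it easier to see how piece numbering drives the order of the restore.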

  • Is there anyway to read binary archive log file in a readable format

    please advise..
    thanks

    LogMiner is a tool provided by Oracle to turn the contents of archived logs and redo logs into useful information.
    Check here for more information
    http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14215/logminer.htm
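    As a concrete illustration, a minimal LogMiner session could look like the sketch below; the archived log path is hypothetical, and using the online catalog as the dictionary assumes the log comes from the same database that mines it:
    SQL> execute dbms_logmnr.add_logfile(logfilename => '/u01/arch/1_25_769191639.arc', options => dbms_logmnr.new);
    SQL> execute dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog);
    SQL> select scn, operation, sql_redo from v$logmnr_contents where rownum <= 20;
    SQL> execute dbms_logmnr.end_logmnr;
    While the session is active, v$logmnr_contents presents the redo as readable SQL (the sql_redo column).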

  • Capture process issue...archive log missing!!!!!

    Hi,
    The Oracle Streams capture process is alternating between the INITIALIZING and DICTIONARY INITIALIZATION states and does not proceed past this point to capture updates made on the table.
    We are accidentally missing some archived logs and have no backups of those archived logs.
    Now I am going to recreate the capture process.
    How can I start the capture process from a new SCN?
    And what is the better way to remove the archive log files from the central server, given that their SCNs are used by the capture processes?
    Thanks,
    Faziarain
    Edited by: [email protected] on Aug 12, 2009 12:27 AM

    When using dbms_streams_adm to add a capture, also perform a dbms_capture_adm.build. You will then see 'YES' in the dictionary_begin column of v$archived_log, which means that the first_change# of that archived log is the first suitable SCN for starting the capture.
    RMAN is the preferred way in 10g+ to remove the archives, as it is aware of Streams constraints. If you can't use RMAN to purge the archives, then you need to check the minimum required SCN in your system by script and act accordingly.
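    A hedged sketch of that check (my own illustration, not part of the original reply; it relies only on the standard v$archived_log and dba_capture views mentioned above):
    SQL> select name, first_change#, dictionary_begin from v$archived_log where dictionary_begin = 'YES' order by first_change#;
    SQL> select capture_name, required_checkpoint_scn, applied_scn from dba_capture;
    Archived logs whose next_change# is below the smallest required_checkpoint_scn should, in principle, no longer be needed by the capture process.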
    Since 10g I recommend using RMAN, but nevertheless, here is the script I wrote in 9i, back in the old days when RMAN would eagerly eat the archives that Streams still needed.
    #!/usr/bin/ksh
    # program : watch_arc.sh
    # purpose : check your archive directory and if actual percentage is > MAX_PERC
    #           then undertake the action coded by -a param
    # Author : Bernard Polarski
    # Date   :  01-08-2000
    #           12-09-2005      : added option -s MAX_SIZE
    #           20-11-2005      : added option -f to check if an archive is applied on data guard site before deleting it
    #           20-12-2005      : added option -z to check if an archive is still needed by logminer in a streams operation
    # set -xv
    #--------------------------- default values if not defined --------------
    # put here default values if you don't want to code then at run time
    MAX_PERC=85
    ARC_DIR=
    ACTION=
    LOG=/tmp/watch_arch.log
    EXT_ARC=
    PART=2
    #------------------------- Function section -----------------------------
    get_perc_occup()
    {
      cd $ARC_DIR
      if [ $MAX_SIZE -gt 0 ];then
           # size is given in mb, we calculate all in K
           TOTAL_DISK=`expr $MAX_SIZE \* 1024`
           USED=`du -ks . | tail -1| awk '{print $1}'`    # in Kb!
      else
        USED=`df -k . | tail -1| awk '{print $3}'`    # in Kb!
        if [ `uname -a | awk '{print $1}'` = HP-UX ] ;then
               TOTAL_DISK=`df -b . | cut -f2 -d: | awk '{print $1}'`
        elif [ `uname -s` = AIX ] ;then
               TOTAL_DISK=`df -k . | tail -1| awk '{print $2}'`
        elif [ `uname -s` = ReliantUNIX-N ] ;then
               TOTAL_DISK=`df -k . | tail -1| awk '{print $2}'`
        else
                 # works on Sun
                 TOTAL_DISK=`df -b . | sed  '/avail/d' | awk '{print $2}'`
        fi
      fi
      USED100=`expr $USED \* 100`
      USG_PERC=`expr $USED100 / $TOTAL_DISK`
      echo $USG_PERC
    }
    #------------------------ Main process ------------------------------------------
    usage()
    {
        cat <<EOF
                  Usage : watch_arc.sh -h
                          watch_arc.sh  -p <MAX_PERC> -e <EXTENTION> -l -d -m <TARGET_DIR> -r <PART>
                                        -t <ARCHIVE_DIR> -c <gzip|compress> -v <LOGFILE>
                                        -s <MAX_SIZE (meg)> -i <SID> -g -f
                  Note :
                           -c compress file after move using either compress or gzip (if available)
                              if -c is given without -m then file will be compressed in ARCHIVE DIR
                           -d Delete selected files
                           -e Extension of files to be processed
                           -f Check if log has been applied, requires -i <sid> and -g if v8
                           -g Version 8 (use svrmgrl instead of sqlplus /)
                           -i Oracle SID
                           -l List files that will be processed by -d or -m
                           -h help
                           -m move file to TARGET_DIR
                           -p Max percentage above which action is triggered.
                              Actions are of type -l, -d  or -m
                           -t ARCHIVE_DIR
                           -s Perform action if size of target dir is bigger than MAX_SIZE (meg)
                           -v report action performed in LOGFILE
                           -r Part of files that will be affected by action :
                                2=half, 3=a third, 4=a quarter .... [ default=2 ]
                           -z Check if log is still needed by logminer (used in streams),
                                    it requires -i <sid> and also -g for Oracle 8i
                  This program lists, deletes or moves half of all files whose extension is given [ or default 'arc' ]
                  It checks the size of the archive directory and, if the percentage occupancy is above the given limit,
                  it performs the action on the older half of the files.
            How to use this program :
                    run this file from the crontab, say, each hour.
         example
         1) Delete archives that share a common arch disk: when you are at 85% of 2500 MB, delete half of the files
         whose extension is 'arc', using the default affected portion (default is -r 2)
         0,30 * * * * /usr/local/bin/watch_arc.sh -e arc -t /arc/POLDEV -s 2500 -p 85 -d -v /var/tmp/watch_arc.POLDEV.log
         2) Delete archives that share a common disk with other DBs in /archive; act when 90% of 140G is reached, deleting
         a quarter of all files (-r 4) whose extension is 'dbf', but first connect as sysdba to the POLDEV db (-i) to check they are
         applied (-f is a Data Guard option)
         watch_arc.sh -e dbf -t /archive/standby/CITSPRD -s 140000 -p 90 -d -f -i POLDEV -r 4 -v /tmp/watch_arc.POLDEV.log
         3) Delete archives of DB POLDEV when it reaches 75%, affecting a third of the files, but connect to the DB to check that
         logminer does not need the archive (-z). This is useful in 9iR2 when using RMAN, as RMAN does not support delete input
         in connection with LogMiner.
         watch_arc.sh -e arc -t /archive/standby/CITSPRD  -p 75 -d -z -i POLDEV -r 3 -v /tmp/watch_arc.POLDEV.log
    EOF
    }
    #------------------------- Function section -----------------------------
    if [ "x-$1" = "x-" ];then
          usage
          exit
    fi
    MAX_SIZE=-1  # disable this feature if it is not specifically selected
    while getopts  c:e:p:m:r:s:i:t:v:dhlfgz ARG
      do
        case $ARG in
           e ) EXT_ARC=$OPTARG ;;
           f ) CHECK_APPLIED=YES ;;
           g ) VERSION8=TRUE;;
           i ) ORACLE_SID=$OPTARG;;
           h ) usage
               exit ;;
           c ) COMPRESS_PRG=$OPTARG ;;
           p ) MAX_PERC=$OPTARG ;;
           d ) ACTION=delete ;;
           l ) ACTION=list ;;
           m ) ACTION=move
               TARGET_DIR=$OPTARG
               if [ ! -d $TARGET_DIR ] ;then
                   echo "Dir $TARGET_DIR does not exits"
                   exit
               fi;;
           r)  PART=$OPTARG ;;
           s)  MAX_SIZE=$OPTARG ;;
           t)  ARC_DIR=$OPTARG ;;
           v)  VERBOSE=TRUE
               LOG=$OPTARG
               if [ ! -f $LOG ];then
                   > $LOG
               fi ;;
           z)  LOGMINER=TRUE;;
        esac
    done
    if [ "x-$ARC_DIR" = "x-" ];then
         echo "NO ARC_DIR : aborting"
         exit
    fi
    if [ "x-$EXT_ARC" = "x-" ];then
         echo "NO EXT_ARC : aborting"
         exit
    fi
    if [ "x-$ACTION" = "x-" ];then
         echo "NO ACTION : aborting"
         exit
    fi
    if [ ! "x-$COMPRESS_PRG" = "x-" ];then
       if [ ! "x-$ACTION" =  "x-move" ];then
             ACTION=compress
       fi
    fi
    if [ "$CHECK_APPLIED" = "YES" ];then
       if [ -n "$ORACLE_SID" ];then
             export PATH=$PATH:/usr/local/bin
             export ORAENV_ASK=NO
             export ORACLE_SID=$ORACLE_SID
             . /usr/local/bin/oraenv
       fi
       if [ "$VERSION8" = "TRUE" ];then
          ret=`svrmgrl <<EOF
    connect internal
    select max(sequence#) from v\\$log_history ;
    EOF`
    LAST_APPLIED=`echo $ret | sed 's/.*------ \([^ ][^ ]* \).*/\1/' | awk '{print $1}'`
       else
        ret=`sqlplus -s '/ as sysdba' <<EOF
    set pagesize 0 head off pause off
    select max(SEQUENCE#) FROM V\\$ARCHIVED_LOG where applied = 'YES';
    EOF`
       LAST_APPLIED=`echo $ret | awk '{print $1}'`
       fi
    elif [ "$LOGMINER" = "TRUE" ];then
       if [ -n "$ORACLE_SID" ];then
             export PATH=$PATH:/usr/local/bin
             export ORAENV_ASK=NO
             export ORACLE_SID=$ORACLE_SID
             . /usr/local/bin/oraenv
       fi
        var=`sqlplus -s '/ as sysdba' <<EOF
    set pagesize 0 head off pause off serveroutput on
    DECLARE
    hScn number := 0;
    lScn number := 0;
    sScn number;
    ascn number;
    alog varchar2(1000);
    begin
      select min(start_scn), min(applied_scn) into sScn, ascn from dba_capture ;
      DBMS_OUTPUT.ENABLE(2000);
      for cr in (select distinct(a.ckpt_scn)
                 from system.logmnr_restart_ckpt\\$ a
                 where a.ckpt_scn <= ascn and a.valid = 1
                   and exists (select * from system.logmnr_log\\$ l
                       where a.ckpt_scn between l.first_change# and l.next_change#)
                  order by a.ckpt_scn desc)
      loop
        if (hScn = 0) then
           hScn := cr.ckpt_scn;
        else
           lScn := cr.ckpt_scn;
           exit;
        end if;
      end loop;
      if lScn = 0 then
        lScn := sScn;
      end if;
       select min(sequence#) into alog from v\\$archived_log where lScn between first_change# and next_change#;
      dbms_output.put_line(alog);
    end;
    EOF`
      # if there is no mandatory archive to keep, instead of a number we just get the "PL/SQL procedure successfully completed" message
      ret=`echo $var | awk '{print $1}'`
      if [ ! "$ret" = "PL/SQL" ];then
         LAST_APPLIED=$ret
      else
         unset LOGMINER
      fi
    fi
    PERC_NOW=`get_perc_occup`
    if [ $PERC_NOW -gt $MAX_PERC ];then
         cd $ARC_DIR
         cpt=`ls -tr *.$EXT_ARC | wc -w`
         if [ ! "x-$cpt" = "x-" ];then
              MID=`expr $cpt / $PART`
              cpt=0
              ls -tr *.$EXT_ARC |while read ARC
                  do
                     cpt=`expr $cpt + 1`
                     if [ $cpt -gt $MID ];then
                          break
                     fi
                     if [ "$CHECK_APPLIED" = "YES" -o "$LOGMINER" = "TRUE" ];then
                        VAR=`echo $ARC | sed 's/.*_\([0-9][0-9]*\)\..*/\1/' | sed 's/[^0-9][^0-9].*//'`
                        if [ $VAR -gt $LAST_APPLIED ];then
                             continue
                        fi
                     fi
                     case $ACTION in
                          'compress' ) $COMPRESS_PRG $ARC_DIR/$ARC
                                     if [ "x-$VERBOSE" = "x-TRUE" ];then
                                           echo " `date +%d-%m-%Y' '%H:%M` : $ARC compressed using $COMPRESS_PRG" >> $LOG
                                     fi ;;
                          'delete' ) rm $ARC_DIR/$ARC
                                     if [ "x-$VERBOSE" = "x-TRUE" ];then
                                           echo " `date +%d-%m-%Y' '%H:%M` : $ARC deleted" >> $LOG
                                     fi ;;
                          'list'   )   ls -l $ARC_DIR/$ARC ;;
                          'move'   ) mv  $ARC_DIR/$ARC $TARGET_DIR
                                     if [ ! "x-$COMPRESS_PRG" = "x-" ];then
                                           $COMPRESS_PRG $TARGET_DIR/$ARC
                                           if [ "x-$VERBOSE" = "x-TRUE" ];then
                                                 echo " `date +%d-%m-%Y' '%H:%M` : $ARC moved to $TARGET_DIR and compressed" >> $LOG
                                           fi
                                     else
                                           if [ "x-$VERBOSE" = "x-TRUE" ];then
                                                 echo " `date +%d-%m-%Y' '%H:%M` : $ARC moved to $TARGET_DIR" >> $LOG
                                           fi
                                     fi ;;
                      esac
              done
          else
              echo "Warning : The filesystem is not full due to archive logs !"
              exit
          fi
    elif [ "x-$VERBOSE" = "x-TRUE" ];then
         echo "Nothing to do at `date +%d-%m-%Y' '%H:%M`" >> $LOG
    fi
