Archive log issue
Hi,
I am using Oracle 10.2.0.4 and my database is backed up with RMAN.
My problem is that my archive destination is full, and there is no free space in my file system either (and I cannot add another log archive destination).
How can I solve this problem?
Thanks & Regards
reddy
Edited by: tmadugula on Jun 9, 2010 11:50 AM
Hi,
Probably your database is hung; this is normal when the archive destination fills up. You can back up the archived logs with RMAN so that RMAN removes them at the end.
See the script below:
run {
allocate channel c1 type disk format 'F:\oracle\backup\PRD\PRD_LOG_%U' MAXPIECESIZE 4G;
backup as COMPRESSED BACKUPSET archivelog all delete all input;
delete noprompt obsolete;
}
exit
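If there is truly no space anywhere to write a backup, one emergency alternative (a sketch only; check it against your retention policy before running it) is to delete only those archived logs that RMAN has already backed up at least once:

```
RMAN> connect target /
RMAN> crosscheck archivelog all;
RMAN> delete noprompt archivelog all backed up 1 times to device type disk;
```

This frees the archive destination without taking a new backup; logs that were never backed up are left untouched.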
Wander(Brazil)
Similar Messages
-
RMAN BACKUPS AND ARCHIVED LOG ISSUES
Product : RMAN
Date written : 2004-02-17
RMAN BACKUPS AND ARCHIVED LOG ISSUES
=====================================
Scenario #1:
1) RMAN fails when deleting all archived logs.
The database creates archive files in two archive destinations.
The following script is run to delete the archived redo logfiles after the backup:
run {
allocate channel c1 type 'sbt_tape';
backup database;
backup archivelog all delete input;
}
When CROSSCHECK is run to verify whether the archived redo logfiles were deleted, the following message appears:
RMAN> change archivelog all crosscheck;
RMAN-03022: compiling command: change
RMAN-06158: validation succeeded for archived log
RMAN-08514: archivelog filename=
/oracle/arch/dest2/arcr_1_964.arc recid=19 stamp=368726072
2) Cause
This is not an error. RMAN deletes the archived files in only one of the several archive directories, so the archived log files in the remaining directories are left behind, undeleted.
3) Solution
To force RMAN to delete the archived log files in all directories, allocate multiple channels and have each channel back up and delete the archived files in one archive destination.
This can be implemented as follows:
run {
allocate channel t1 type 'sbt_tape';
allocate channel t2 type 'sbt_tape';
backup
archivelog like '/oracle/arch/dest1/%' channel t1 delete input
archivelog like '/oracle/arch/dest2/%' channel t2 delete input;
}
Scenario #2:
1) The backup fails because RMAN cannot find an archived log.
In this scenario, assume the database is backed up with incremental backups.
Because RMAN can use the incremental backups instead of archived redo logs during recovery, an OS utility is used to delete all archived redo logs after each backup.
However, the next backup then hits the following error:
RMAN-6089: archive log NAME not found or out of sync with catalog
2) Cause
This problem occurs when archived logs are deleted with an OS command; RMAN does not know that they have been deleted. RMAN-6089 is raised when RMAN still believes an archived log deleted by the OS command exists and tries to back it up.
3) Solution
The easiest solution is to use the DELETE INPUT option when backing up the archived logs.
For example:
run {
allocate channel c1 type 'sbt_tape';
backup archivelog all delete input;
}
The second easiest solution is to run the following commands at the RMAN prompt after deleting the archived logs with an OS utility:
RMAN>allocate channel for maintenance type disk;
RMAN>change archivelog all crosscheck;
Oracle 8.0:
RMAN> change archivelog '/disk/path/archivelog_name' validate;
Oracle 8i:
RMAN> change archivelog all crosscheck ;
Oracle 9i:
RMAN> crosscheck archivelog all ;
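For 9i and later, once the crosscheck has marked the missing files, their repository records can be removed in the same session; a sketch:

```
RMAN> crosscheck archivelog all;
RMAN> delete expired archivelog all;
```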
If the catalog's COMPATIBLE parameter is set to 8.1.5 or lower, RMAN sets the status of any archived log it cannot find to "DELETED". If COMPATIBLE is set to 8.1.6 or higher, RMAN deletes the record from the repository.
-
Very strange: I issue the following command in RMAN on both the primary and the standby machine, but it does not delete 1_55_758646076.dbf. I can see in v$archived_log that "/home/oracle/app/oracle/dataguard/1_55_758646076.dbf" has already been applied.
RMAN> connect target /
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
old RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
new RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
new RMAN configuration parameters are successfully stored
RMAN>
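A hedged note on the behaviour above: CONFIGURE ARCHIVELOG DELETION POLICY only marks which logs are eligible for deletion (by RMAN maintenance commands or by automatic flash recovery area cleanup); it does not remove any files by itself. A sketch of actually deleting the eligible logs:

```
RMAN> crosscheck archivelog all;
RMAN> delete noprompt archivelog all;  # honours the configured deletion policy
```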
---------------------------------------------------------------------------------- -
DB version: 11.1.0.7
When I issue the command "alter system archive log current", the alert log raises the error "Thread 1 cannot allocate new log, sequence 149, Private strand flush not complete".
I think this is normal, because some dirty data in the log buffer has NOT yet been written to the data files, so the "Private strand flush not complete" message can be raised.
BUT in my view, when I issue "alter system checkpoint" and then, subsequently, "alter system archive log current", no error should be raised, because the dirty data has already been written via "alter system checkpoint". Yet the error (Private strand flush not complete) is still in the alert log.
How should I understand this? Thanks!
To understand it, please check Doc 372557.1 Alert Log Messages: Private Strand Flush Not Complete
and
cannot allocate new log & Private strand flush not complete
Edited by: Fran on 25-jun-2012 1:07 -
Standby creating archives log files issue!
Hello Everyone,
Working on Oracle 10g R2/Windows, I have created a Data Guard configuration with one standby database, but a strange thing is happening, and I need someone to shed some light on it for me.
By default, archived logs created by the primary database should be sent to the standby database, but I found that the standby database has one extra archived log file.
From the primary database:
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination C:\local_destination1_orcl
Oldest online log sequence 1021
Next log sequence to archive 1023
Current log sequence 1023
contents of C:\local_destination1_orcl
1_1022_623851185.ARC
from the standby database:
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination C:\local_destination1_orcl
Oldest online log sequence 1022
Next log sequence to archive 0
Current log sequence 1023
contents of C:\local_destination1_orcl
1_1022_623851185.ARC
1_1023_623851185.ARC ---> this is the extra archive file created in the standby database; could someone let me know how to avoid this?
Thanks for your help
SQL> SELECT * FROM v$version;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit
PL/SQL Release 10.2.0.1.0 - Production
CORE 10.2.0.1.0 Production
TNS for 64-bit Windows: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
The standby database is a physical standby database (not logical standby)
Thanks again for your contribution, but I still do not understand why the standby creates archive files too. -
Hi,
I am facing an issue with the archive log backup on an external autoloader tape drive (HP Data Protector software).
The archivelog backup is not successful.
Kindly provide me a suggestion to solve this issue. Please find the log below:
BR0002I BRARCHIVE 7.00 (32)
BR0262I Enter database user name[/password]:
BR0169I Value 'util_file_online' of parameter/option 'backup_dev_type/-d' ignored for 'brarchive' - 'util_file' assumed
BR0006I Start of offline redo log processing: adzulphz.sve 2009-01-28 12.12.11
BR0252E Function fopen() failed for '/oracle/SFD/saparch/adzulphz.sve' at location main-6
BR0253E errno 13: Permission denied
BR0121E Processing of log file /oracle/SFD/saparch/adzulphz.sve failed
BR0007I End of offline redo log processing: adzulphz.sve 2009-01-28 12.12.11
BR0280I BRARCHIVE time stamp: 2009-01-28 12.12.11
BR0005I BRARCHIVE terminated with errors
[Major]
From: OB2BAR_OMNISAP@sfwdqs "OMNISAP" Time: 01/28/09 12:12:11
BRARCHIVE /usr/sap/SFD/SYS/exe/run/brarchive -a -c -u system/******** returned 3
[Normal]
From: BSM@sfwsol "Archive" Time: 1/28/2009 12:19:09 PM
OB2BAR application on "sfwdqs" disconnected.
[Normal]
From: BMA@sfwsol "HP:Ultrium 3-SCSI_1_sfwsol" Time: 1/28/2009 12:19:38 PM
Tape0:0:5:0C
Medium header verification completed, 0 errors found
[Normal]
From: BMA@sfwsol "HP:Ultrium 3-SCSI_1_sfwsol" Time: 1/28/2009 12:19:58 PM
By: UMA@sfwsol@Changer0:0:5:1
Unloading medium to slot 4 from device Tape0:0:5:0C
[Normal]
From: BMA@sfwsol "HP:Ultrium 3-SCSI_1_sfwsol" Time: 1/28/2009 12:20:21 PM
ABORTED Media Agent "HP:Ultrium 3-SCSI_1_sfwsol"
[Normal]
From: BSM@sfwsol "Archive" Time: 1/28/2009 12:20:21 PM
Regards,
Kumar
Hi,
Please check the directory permissions for "/oracle/SFD/saparch".
Please check permissions for <sid>adm and ora<sid> for the above directory.
"Note 17163 - BRARCHIVE/BRBACKUP messages and codes" and related notes may give you additional information.
Regards
Upender Reddy -
Issue with backing up Archive logs
Hi All,
Please help me with the issues/confusions I am facing :
1. Currently, the "First active log file = S0008351.LOG" from "db2 get db cfg for SMR"
In the log_dir, there should be logs >=S0008351.LOG
But in my case, in addition to these logs, there are some old logs like S0008309.LOG, S0008318.LOG, S0008331.LOG etc...
How can I clear all these 'not-really-wanted' logs from the log_dir ?
2. There is some issue with archive backup as a result the archive backups are not running fine.
Since this is a very low activity system, there are not much logs generated.
But the issue is :
There are so many archive logs in the "log_archive" directory, I want to cleanup the directory now.
The latest online backup is @ 26.07.2011 04:01:04
First Log File : S0008344.LOG
Last Log File : S0008346.LOG
Inside log_archive there are archive logs from S0008121.LOG to S0008304.LOG
I won't really require these logs, correct?
Please clear my confusions...
Hi,
>
> 1. Currently, the "First active log file = S0008351.LOG" from "db2 get db cfg for SMR"
> In the log_dir, there should be logs >=S0008351.LOG
> But in my case, in addition to these logs, there are some old logs like S0008309.LOG, S0008318.LOG, S0008331.LOG etc...
> How can I clear all these 'not-really-wanted' logs from the log_dir ?
>
You should not delete logs from log_dir, because these are online redo logs; if you delete them, the database will have problems starting.
> 2. There is some issue with archive backup as a result the archive backups are not running fine.
> Since this is a very low activity system, there are not much logs generated.
> But the issue is :
> There are so many archive logs in the "log_archive" directory, I want to cleanup the directory now.
> The latest online backup is @ 26.07.2011 04:01:04
> First Log File : S0008344.LOG
> Last Log File : S0008346.LOG
>
If the archive logs in the log_archive directory have been backed up, then you can delete the old ones.
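A hedged sketch for the log_dir side of the question: if the old files in log_dir really are below the first active log file (verify it first with "db2 get db cfg"), DB2 provides a supported way to remove them rather than deleting by hand; PRUNE LOGFILE only touches inactive files in the active log path:

```
db2 connect to SMR
db2 "prune logfile prior to S0008351.LOG"
```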
Thanks
Sunny -
What order are Archive logs restored in when RMAN recover database issued
Ok, you have a run block that has restored your level-0 RMAN backup.
Your base datafiles are down on disc.
You are about to start recovery to point in time, lets say until this morning at 07:00am.
run { set until time "TO_DATE('2010/06/08_07:00:00','YYYY/MM/DD_HH24:MI:SS')";
allocate channel d1 type disk;
allocate channel d2 type disk;
allocate channel d3 type disk;
allocate channel d4 type disk;
recover database;
}
So the above runs: it analyses the earliest SCN required for recovery, checks for incremental backups (none here), works out the archivelog range
required, and starts to restore the archive logs. All as expected, and it works.
My question: Is there a particular order in which RMAN will restore the archive logs, and is the restore / recover process implemented as per the run block?
i.e. Will all required archive logs based on the run block be restored and then the database rolled forward? Or is there something in RMAN that says: restore these archive logs, now roll forward, then restore some more?
When we were doing this, the order of the archive logs coming back seemed to be random, though obviously constrained by the run block. Is this an area we need to tune to get recoveries faster in situations where incrementals are not available?
Any inputs from experience welcome. I am now drilling into the documentation for any references there.
Thanks
Hi there, thanks for the response. I checked this and here are the numbers / time stamps from an example:
This is from interpreting the list backup of archivelog commands.
Backupset = 122672
==============
Archive log sequence 120688 low time: 25th May 15:53:07 next time: 25th May 15:57:54
Piece1 pieceNumber=123368 9th June 04:10:38 <-- catalogued by us.
Piece2 pieceNumber=122673 25th May 16:05:18 <-- Original backup on production.
Backupset = 122677
==============
Archive log sequence 120683 low time: 25th May 15:27:50 Next time 25th May 15:32:24 <-- lower sequence number restored after above.
Piece1 PieceNumber=123372 9th June 04:11:34 <-- Catalogued by us.
Piece2 PieceNumber=122678 25th May 16:08:45 <-- Original backup on Production.
So the above shows that with the CATALOG command you can influence the piece numbering, and therefore the restore order if, as you say, the piece number is the key. I will need to review production as to why they were backed up in a different order there. I would have thought they would use the backupset numbering and then the piece within the set / availability.
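For reference, the re-cataloguing step mentioned above ("catalogued by us") is typically done with CATALOG START WITH; a sketch, where the directory name is hypothetical:

```
RMAN> catalog start with '/restore_area/PRD/';   # hypothetical directory holding the copied pieces
RMAN> list backup of archivelog all;
```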
Question: You mention archive logs are restored and applied and deleted in batches if the volume of archivelogs is large enough to be spread over multiple backup sets. What determines the batches in terms of size / number?
Thanks for inputs. Answers some questions. -
Hi DBAs,
I have 2 archive destinations. My archive log format is ARC%S_%R.%T.
But in my 1st location, E:\app\Administrator\product\11.1.0\db_1\RDBMS, the format shows ARC00025_0769191639.001
and the 2nd location shows E:\app\Administrator\flash_recovery_area\BASKAR\ARCHIVELOG\2011_12_08\O1_MF_1_25_7G15PVYX_.ARC
SQL> select destination from v$archive_dest;
DESTINATION
E:\app\Administrator\product\11.1.0\db_1\RDBMS
USE_DB_RECOVERY_FILE_DEST
My question is that I am using only this format, ARC%S_%R.%T,
but it shows a different format in each location. May I know the reason behind this?
Thanks in Advance
If you are using an archive destination other than the FRA, archives there are created as per LOG_ARCHIVE_FORMAT;
when the FRA is configured, the archive format inside the FRA is an Oracle-managed name such as O1_MF_1_25_7G15PVYX_.ARC.
From your query it is clear that two destinations are configured. So if you don't want the *.ARC* files, you have to disable the FRA.
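If you do decide to disable the FRA destination, a sketch (this can fail with ORA-38775 while Flashback Database is enabled, so check that first):

```sql
SQL> ALTER SYSTEM SET db_recovery_file_dest='' SCOPE=BOTH;
```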
But it is recommended to use the FRA, as it is easier to manage. -
Capture process issue...archive log missing!!!!!
Hi,
The Oracle Streams capture process is alternating between the INITIALIZING and DICTIONARY INITIALIZATION states and is not proceeding beyond this state to capture updates made on the table.
We have accidentally lost archivelogs and have no backups of them.
Now I am going to recreate the capture process.
How can I start the capture process from a new SCN?
And what is the better way to remove the archive log files from the central server, given that an SCN is used by the capture processes?
Thanks,
Faziarain
Edited by: [email protected] on Aug 12, 2009 12:27 AM
When using dbms_streams_adm to add a capture, also perform a dbms_capture_adm.build. You will then see in v$archived_log, in the column dictionary_begin, a 'yes', which means that the first_change# of that archivelog is the first SCN suitable for starting capture.
'rman' is the preferred way in 10g+ to remove the archives, as it is aware of Streams constraints. If you can't use rman to purge the archives, then you need to check the minimum required SCN in your system by script and act accordingly.
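A sketch of such a check in 10g+, using the standard DBA_CAPTURE view; any archivelog whose next_change# is at or below this SCN should normally no longer be needed by capture (verify before deleting anything):

```sql
SQL> SELECT MIN(required_checkpoint_scn) FROM dba_capture;
```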
Since 10g, I recommend using rman, but nevertheless, here is the script I wrote in 9i in the old days, when rman was eating the archives needed by Streams with appetite.
#!/usr/bin/ksh
# program : watch_arc.sh
# purpose : check your archive directory and if actual percentage is > MAX_PERC
# then undertake the action coded by -a param
# Author : Bernard Polarski
# Date : 01-08-2000
# 12-09-2005 : added option -s MAX_SIZE
# 20-11-2005 : added option -f to check if an archive is applied on data guard site before deleting it
# 20-12-2005 : added option -z to check if an archive is still needed by logminer in a streams operation
# set -xv
#--------------------------- default values if not defined --------------
# put here default values if you don't want to code them at run time
MAX_PERC=85
ARC_DIR=
ACTION=
LOG=/tmp/watch_arch.log
EXT_ARC=
PART=2
#------------------------- Function section -----------------------------
get_perc_occup()
{
cd $ARC_DIR
if [ $MAX_SIZE -gt 0 ];then
# size is given in MB, we calculate everything in KB
TOTAL_DISK=`expr $MAX_SIZE \* 1024`
USED=`du -ks . | tail -1| awk '{print $1}'` # in Kb!
else
USED=`df -k . | tail -1| awk '{print $3}'` # in Kb!
if [ `uname -a | awk '{print $1}'` = HP-UX ] ;then
TOTAL_DISK=`df -b . | cut -f2 -d: | awk '{print $1}'`
elif [ `uname -s` = AIX ] ;then
TOTAL_DISK=`df -k . | tail -1| awk '{print $2}'`
elif [ `uname -s` = ReliantUNIX-N ] ;then
TOTAL_DISK=`df -k . | tail -1| awk '{print $2}'`
else
# works on Sun
TOTAL_DISK=`df -b . | sed '/avail/d' | awk '{print $2}'`
fi
fi
USED100=`expr $USED \* 100`
USG_PERC=`expr $USED100 / $TOTAL_DISK`
echo $USG_PERC
}
#------------------------ Main process ------------------------------------------
usage()
{
cat <<EOF
Usage : watch_arc.sh -h
watch_arc.sh -p <MAX_PERC> -e <EXTENSION> -l -d -m <TARGET_DIR> -r <PART>
-t <ARCHIVE_DIR> -c <gzip|compress> -v <LOGFILE>
-s <MAX_SIZE (meg)> -i <SID> -g -f
Note :
-c compress file after move using either compress or gzip (if available)
if -c is given without -m then file will be compressed in ARCHIVE DIR
-d Delete selected files
-e Extension of files to be processed
-f Check if log has been applied, requires -i <sid> and -g if v8
-g Version 8 (use svrmgrl instead of sqlplus)
-i Oracle SID
-l List files that will be processed by -d or -m
-h help
-m move file to TARGET_DIR
-p Max percentage above which action is triggered.
Actions are of type -l, -d or -m
-t ARCHIVE_DIR
-s Perform action if size of target dir is bigger than MAX_SIZE (meg)
-v report action performed in LOGFILE
-r Part of files that will be affected by action :
2=half, 3=a third, 4=a quarter .... [ default=2 ]
-z Check if log is still needed by logminer (used in streams),
requires -i <sid> and also -g for Oracle 8i
This program lists, deletes or moves part of all files whose extension is given [ default 'arc' ].
It checks the size of the archive directory and if the percentage occupancy is above the given limit
then it performs the action on the older half of the files.
How to use this prg :
run this file from the crontab, say, each hour.
example
1) Delete archives sharing a common arch disk; when you are at 85% of 2500 meg, delete half of the files
whose extension is 'arc', using the default affected part (default is -r 2)
0,30 * * * * /usr/local/bin/watch_arc.sh -e arc -t /arc/POLDEV -s 2500 -p 85 -d -v /var/tmp/watch_arc.POLDEV.log
2) Delete archives sharing a common disk with other DBs in /archive; act when at 90% of 140G, affecting
a quarter of all files (-r 4) whose extension is 'dbf', but connect first as sysdba in the POLDEV db (-i) to check they are
applied (-f is a dataguard option)
watch_arc.sh -e dbf -t /archive/standby/CITSPRD -s 140000 -p 90 -d -f -i POLDEV -r 4 -v /tmp/watch_arc.POLDEV.log
3) Delete archives of DB POLDEV when it reaches 75%, affecting a third of the files, but connect to the DB to check that
logminer does not still need the archive (-z). This is useful in 9iR2 when using Rman, as rman does not support delete input
in connection with Logminer.
watch_arc.sh -e arc -t /archive/standby/CITSPRD -p 75 -d -z -i POLDEV -r 3 -v /tmp/watch_arc.POLDEV.log
EOF
}
#------------------------- Function section -----------------------------
if [ "x-$1" = "x-" ];then
usage
exit
fi
MAX_SIZE=-1 # disable this feature if it is not specifically selected
while getopts c:e:p:m:r:s:i:t:v:dhlfgz ARG
do
case $ARG in
e ) EXT_ARC=$OPTARG ;;
f ) CHECK_APPLIED=YES ;;
g ) VERSION8=TRUE;;
i ) ORACLE_SID=$OPTARG;;
h ) usage
exit ;;
c ) COMPRESS_PRG=$OPTARG ;;
p ) MAX_PERC=$OPTARG ;;
d ) ACTION=delete ;;
l ) ACTION=list ;;
m ) ACTION=move
TARGET_DIR=$OPTARG
if [ ! -d $TARGET_DIR ] ;then
echo "Dir $TARGET_DIR does not exits"
exit
fi;;
r) PART=$OPTARG ;;
s) MAX_SIZE=$OPTARG ;;
t) ARC_DIR=$OPTARG ;;
v) VERBOSE=TRUE
LOG=$OPTARG
if [ ! -f $LOG ];then
> $LOG
fi ;;
z) LOGMINER=TRUE;;
esac
done
if [ "x-$ARC_DIR" = "x-" ];then
echo "NO ARC_DIR : aborting"
exit
fi
if [ "x-$EXT_ARC" = "x-" ];then
echo "NO EXT_ARC : aborting"
exit
fi
if [ "x-$ACTION" = "x-" ];then
echo "NO ACTION : aborting"
exit
fi
if [ ! "x-$COMPRESS_PRG" = "x-" ];then
if [ ! "x-$ACTION" = "x-move" ];then
ACTION=compress
fi
fi
if [ "$CHECK_APPLIED" = "YES" ];then
if [ -n "$ORACLE_SID" ];then
export PATH=$PATH:/usr/local/bin
export ORAENV_ASK=NO
export ORACLE_SID=$ORACLE_SID
. /usr/local/bin/oraenv
fi
if [ "$VERSION8" = "TRUE" ];then
ret=`svrmgrl <<EOF
connect internal
select max(sequence#) from v\\$log_history ;
EOF`
LAST_APPLIED=`echo $ret | sed 's/.*------ \([^ ][^ ]* \).*/\1/' | awk '{print $1}'`
else
ret=`sqlplus -s '/ as sysdba' <<EOF
set pagesize 0 head off pause off
select max(SEQUENCE#) FROM V\\$ARCHIVED_LOG where applied = 'YES';
EOF`
LAST_APPLIED=`echo $ret | awk '{print $1}'`
fi
elif [ "$LOGMINER" = "TRUE" ];then
if [ -n "$ORACLE_SID" ];then
export PATH=$PATH:/usr/local/bin
export ORAENV_ASK=NO
export ORACLE_SID=$ORACLE_SID
. /usr/local/bin/oraenv
fi
var=`sqlplus -s '/ as sysdba' <<EOF
set pagesize 0 head off pause off serveroutput on
DECLARE
hScn number := 0;
lScn number := 0;
sScn number;
ascn number;
alog varchar2(1000);
begin
select min(start_scn), min(applied_scn) into sScn, ascn from dba_capture ;
DBMS_OUTPUT.ENABLE(2000);
for cr in (select distinct(a.ckpt_scn)
from system.logmnr_restart_ckpt\\$ a
where a.ckpt_scn <= ascn and a.valid = 1
and exists (select * from system.logmnr_log\\$ l
where a.ckpt_scn between l.first_change# and l.next_change#)
order by a.ckpt_scn desc)
loop
if (hScn = 0) then
hScn := cr.ckpt_scn;
else
lScn := cr.ckpt_scn;
exit;
end if;
end loop;
if lScn = 0 then
lScn := sScn;
end if;
select min(sequence#) into alog from v\\$archived_log where lScn between first_change# and next_change#;
dbms_output.put_line(alog);
end;
EOF`
# if there are no mandatory keep archives, instead of a number we just get "PL/SQL successful"
ret=`echo $var | awk '{print $1}'`
if [ ! "$ret" = "PL/SQL" ];then
LAST_APPLIED=$ret
else
unset LOGMINER
fi
fi
PERC_NOW=`get_perc_occup`
if [ $PERC_NOW -gt $MAX_PERC ];then
cd $ARC_DIR
cpt=`ls -tr *.$EXT_ARC | wc -w`
if [ ! "x-$cpt" = "x-" ];then
MID=`expr $cpt / $PART`
cpt=0
ls -tr *.$EXT_ARC |while read ARC
do
cpt=`expr $cpt + 1`
if [ $cpt -gt $MID ];then
break
fi
if [ "$CHECK_APPLIED" = "YES" -o "$LOGMINER" = "TRUE" ];then
VAR=`echo $ARC | sed 's/.*_\([0-9][0-9]*\)\..*/\1/' | sed 's/[^0-9][^0-9].*//'`
if [ $VAR -gt $LAST_APPLIED ];then
continue
fi
fi
case $ACTION in
'compress' ) $COMPRESS_PRG $ARC_DIR/$ARC
if [ "x-$VERBOSE" = "x-TRUE" ];then
echo " `date +%d-%m-%Y' '%H:%M` : $ARC compressed using $COMPRESS_PRG" >> $LOG
fi ;;
'delete' ) rm $ARC_DIR/$ARC
if [ "x-$VERBOSE" = "x-TRUE" ];then
echo " `date +%d-%m-%Y' '%H:%M` : $ARC deleted" >> $LOG
fi ;;
'list' ) ls -l $ARC_DIR/$ARC ;;
'move' ) mv $ARC_DIR/$ARC $TARGET_DIR
if [ ! "x-$COMPRESS_PRG" = "x-" ];then
$COMPRESS_PRG $TARGET_DIR/$ARC
if [ "x-$VERBOSE" = "x-TRUE" ];then
echo " `date +%d-%m-%Y' '%H:%M` : $ARC moved to $TARGET_DIR and compressed" >> $LOG
fi
else
if [ "x-$VERBOSE" = "x-TRUE" ];then
echo " `date +%d-%m-%Y' '%H:%M` : $ARC moved to $TARGET_DIR" >> $LOG
fi
fi ;;
esac
done
else
echo "Warning : The filesystem is full, but not because of archive logs !"
exit
fi
elif [ "x-$VERBOSE" = "x-TRUE" ];then
echo "Nothing to do at `date +%d-%m-%Y' '%H:%M`" >> $LOG
fi -
ARCHIVE LOGS CREATED in WRONG FOLDER
Hello,
I'm facing an issue with the archive logs.
In my DB the archive log parameters are:
log_archive_dest_1 string LOCATION=/u03/archive/SIEB MANDATORY REOPEN=30
db_create_file_dest string /u01/oradata/SIEB/dbf
db_create_online_log_dest_1 string /u01/oradata/SIEB/rdo
But the archive logs are created in
/u01/app/oracle/product/9.2.0.6/dbs
Listed Below :
bash-2.05$ ls -lrt *.arc
-rw-r----- 1 oracle dba 9424384 Jan 9 09:30 SIEB_302843.arc
-rw-r----- 1 oracle dba 7678464 Jan 9 10:00 SIEB_302844.arc
-rw-r----- 1 oracle dba 1536 Jan 9 10:00 SIEB_302845.arc
-rw-r----- 1 oracle dba 20480 Jan 9 10:00 SIEB_302846.arc
-rw-r----- 1 oracle dba 10010624 Jan 9 10:30 SIEB_302847.arc
-rw-r----- 1 oracle dba 104858112 Jan 9 10:58 SIEB_302848.arc
bash-2.05$
Does anyone have an idea why this happens?
Is this a bug?
Thxs
But in another DB I've
log_archive_dest string
log_archive_dest_1 string LOCATION=/u03/archive/SIEB MANDATORY REOPEN=30
and my archivelogs are in
oracle@srvsdbs7p01:/u03/archive/SIEB/ [SIEB] ls -lrt /u03/archive/SIEB
total 297696
-rw-r----- 1 oracle dba 10010624 Jan 9 10:30 SIEB_302847.arc
-rw-r----- 1 oracle dba 21573632 Jan 9 11:00 SIEB_302848.arc
-rw-r----- 1 oracle dba 101450240 Jan 9 11:30 SIEB_302849.arc
-rw-r----- 1 oracle dba 6308864 Jan 9 12:00 SIEB_302850.arc
-rw-r----- 1 oracle dba 12936704 Jan 9 12:30 SIEB_302851.arc
oracle@srvsdbs7p01:/u03/archive/SIEB/ [SIEB] -
Archive Logs NOT APPLIED but transferred
Hi Gurus,
I have configured primary & standby databases in the same Oracle Home. The OS version is OEL 5 and the database version is 10.2.0.1. I get the archive logs on the standby site, but they are not getting applied in the standby database. I don't have OLAP installed in my database version. Could this be causing the issue? However, I have attached my primary alert log details below for your reference:
Thu Aug 30 23:55:37 2012
Starting ORACLE instance (normal)
Cannot determine all dependent dynamic libraries for /proc/self/exe
Unable to find dynamic library libocr10.so in search paths
RPATH = /ade/aime1_build2101/oracle/has/lib/:/ade/aime1_build2101/oracle/lib/:/ade/aime1_build2101/oracle/has/lib/:
LD_LIBRARY_PATH is not set!
The default library directories are /lib and /usr/lib
Unable to find dynamic library libocrb10.so in search paths
Unable to find dynamic library libocrutl10.so in search paths
Unable to find dynamic library libocrutl10.so in search paths
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Picked latch-free SCN scheme 2
Autotune of undo retention is turned on.
IMODE=BR
ILAT =18
LICENSE_MAX_USERS = 0
SYS auditing is disabled
ksdpec: called for event 13740 prior to event group initialization
Starting up ORACLE RDBMS Version: 10.2.0.1.0.
System parameters with non-default values:
processes = 150
sga_target = 289406976
control_files = /home/oracle/oracle/product/10.2.0/db_1/oradata/newprim/control01.ctl, /home/oracle/oracle/product/10.2.0/db_1/oradata/newprim/control02.ctl, /home/oracle/oracle/product/10.2.0/db_1/oradata/newprim/control03.ctl
db_file_name_convert = /home/oracle/oracle/product/10.2.0/db_1/oradata/newstand, /home/oracle/oracle/product/10.2.0/db_1/oradata/newprim
log_file_name_convert = /home/oracle/oracle/product/10.2.0/db_1/oradata/newstand, /home/oracle/oracle/product/10.2.0/db_1/oradata/newprim, /home/oracle/oracle/product/10.2.0/db_1/flash_recovery_area/NEWSTAND/onlinelog, /home/oracle/oracle/product/10.2.0/db_1/flash_recovery_area/NEWPRIM/onlinelog
db_block_size = 8192
compatible = 10.2.0.1.0
log_archive_config = DG_CONFIG=(newprim,newstand)
log_archive_dest_1 = LOCATION=/home/oracle/oracle/product/10.2.0/db_1/oradata/newprim/arch/
VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=newprim
log_archive_dest_2 = SERVICE=newstand LGWR ASYNC VALID_FOR=(online_logfiles,primary_role) DB_UNIQUE_NAME=newstand
log_archive_dest_state_1 = enable
log_archive_dest_state_2 = enable
log_archive_max_processes= 30
log_archive_format = %t_%s_%r.dbf
fal_client = newprim
fal_server = newstand
db_file_multiblock_read_count= 16
db_recovery_file_dest = /home/oracle/oracle/product/10.2.0/db_1/flash_recovery_area
db_recovery_file_dest_size= 2147483648
standby_file_management = AUTO
undo_management = AUTO
undo_tablespace = UNDOTBS1
remote_login_passwordfile= EXCLUSIVE
db_domain =
dispatchers = (PROTOCOL=TCP) (SERVICE=newprimXDB)
job_queue_processes = 10
background_dump_dest = /home/oracle/oracle/product/10.2.0/db_1/admin/newprim/bdump
user_dump_dest = /home/oracle/oracle/product/10.2.0/db_1/admin/newprim/udump
core_dump_dest = /home/oracle/oracle/product/10.2.0/db_1/admin/newprim/cdump
audit_file_dest = /home/oracle/oracle/product/10.2.0/db_1/admin/newprim/adump
db_name = newprim
db_unique_name = newprim
open_cursors = 300
pga_aggregate_target = 95420416
PMON started with pid=2, OS id=28091
PSP0 started with pid=3, OS id=28093
MMAN started with pid=4, OS id=28095
DBW0 started with pid=5, OS id=28097
LGWR started with pid=6, OS id=28100
CKPT started with pid=7, OS id=28102
SMON started with pid=8, OS id=28104
RECO started with pid=9, OS id=28106
CJQ0 started with pid=10, OS id=28108
MMON started with pid=11, OS id=28110
MMNL started with pid=12, OS id=28112
Thu Aug 30 23:55:38 2012
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
starting up 1 shared server(s) ...
Thu Aug 30 23:55:38 2012
ALTER DATABASE MOUNT
Thu Aug 30 23:55:42 2012
Setting recovery target incarnation to 2
Thu Aug 30 23:55:43 2012
Successful mount of redo thread 1, with mount id 1090395834
Thu Aug 30 23:55:43 2012
Database mounted in Exclusive Mode
Completed: ALTER DATABASE MOUNT
Thu Aug 30 23:55:43 2012
ALTER DATABASE OPEN
Thu Aug 30 23:55:43 2012
LGWR: STARTING ARCH PROCESSES
ARC0 started with pid=16, OS id=28122
ARC1 started with pid=17, OS id=28124
ARC2 started with pid=18, OS id=28126
ARC3 started with pid=19, OS id=28128
ARC4 started with pid=20, OS id=28133
ARC5 started with pid=21, OS id=28135
ARC6 started with pid=22, OS id=28137
ARC7 started with pid=23, OS id=28139
ARC8 started with pid=24, OS id=28141
ARC9 started with pid=25, OS id=28143
ARCa started with pid=26, OS id=28145
ARCb started with pid=27, OS id=28147
ARCc started with pid=28, OS id=28149
ARCd started with pid=29, OS id=28151
ARCe started with pid=30, OS id=28153
ARCf started with pid=31, OS id=28155
ARCg started with pid=32, OS id=28157
ARCh started with pid=33, OS id=28159
ARCi started with pid=34, OS id=28161
ARCj started with pid=35, OS id=28163
ARCk started with pid=36, OS id=28165
ARCl started with pid=37, OS id=28167
ARCm started with pid=38, OS id=28169
ARCn started with pid=39, OS id=28171
ARCo started with pid=40, OS id=28173
ARCp started with pid=41, OS id=28175
ARCq started with pid=42, OS id=28177
ARCr started with pid=43, OS id=28179
ARCs started with pid=44, OS id=28181
Thu Aug 30 23:55:44 2012
ARC0: Archival started
ARC1: Archival started
ARC2: Archival started
ARC3: Archival started
ARC4: Archival started
ARC5: Archival started
ARC6: Archival started
ARC7: Archival started
ARC8: Archival started
ARC9: Archival started
ARCa: Archival started
ARCb: Archival started
ARCc: Archival started
ARCd: Archival started
ARCe: Archival started
ARCf: Archival started
ARCg: Archival started
ARCh: Archival started
ARCi: Archival started
ARCj: Archival started
ARCk: Archival started
ARCl: Archival started
ARCm: Archival started
ARCn: Archival started
ARCo: Archival started
ARCp: Archival started
ARCq: Archival started
ARCr: Archival started
ARCs: Archival started
ARCt: Archival started
LGWR: STARTING ARCH PROCESSES COMPLETE
ARCt started with pid=45, OS id=28183
LNS1 started with pid=46, OS id=28185
Thu Aug 30 23:55:48 2012
Thread 1 advanced to log sequence 68
Thu Aug 30 23:55:48 2012
ARCo: Becoming the 'no FAL' ARCH
ARCo: Becoming the 'no SRL' ARCH
Thu Aug 30 23:55:48 2012
ARCp: Becoming the heartbeat ARCH
Thu Aug 30 23:55:48 2012
Thread 1 opened at log sequence 68
Current log# 1 seq# 68 mem# 0: /home/oracle/oracle/product/10.2.0/db_1/oradata/newprim/redo01.log
Successful open of redo thread 1
Thu Aug 30 23:55:48 2012
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
Thu Aug 30 23:55:48 2012
SMON: enabling cache recovery
Thu Aug 30 23:55:48 2012
Successfully onlined Undo Tablespace 1.
Thu Aug 30 23:55:48 2012
SMON: enabling tx recovery
Thu Aug 30 23:55:49 2012
Database Characterset is WE8ISO8859P1
replication_dependency_tracking turned off (no async multimaster replication found)
Starting background process QMNC
QMNC started with pid=47, OS id=28205
Thu Aug 30 23:55:49 2012
Error 1034 received logging on to the standby
Thu Aug 30 23:55:49 2012
Errors in file /home/oracle/oracle/product/10.2.0/db_1/admin/newprim/bdump/newprim_arc1_28124.trc:
ORA-01034: ORACLE not available
FAL[server, ARC1]: Error 1034 creating remote archivelog file 'newstand'
FAL[server, ARC1]: FAL archive failed, see trace file.
Thu Aug 30 23:55:49 2012
Errors in file /home/oracle/oracle/product/10.2.0/db_1/admin/newprim/bdump/newprim_arc1_28124.trc:
ORA-16055: FAL request rejected
ARCH: FAL archive failed. Archiver continuing
Thu Aug 30 23:55:49 2012
ORACLE Instance newprim - Archival Error. Archiver continuing.
Thu Aug 30 23:55:49 2012
db_recovery_file_dest_size of 2048 MB is 9.77% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Thu Aug 30 23:55:50 2012
Errors in file /home/oracle/oracle/product/10.2.0/db_1/admin/newprim/udump/newprim_ora_28120.trc:
ORA-00604: error occurred at recursive SQL level 1
ORA-12663: Services required by client not available on the server
ORA-36961: Oracle OLAP is not available.
ORA-06512: at "SYS.OLAPIHISTORYRETENTION", line 1
ORA-06512: at line 15
Thu Aug 30 23:55:50 2012
Completed: ALTER DATABASE OPEN
Thu Aug 30 23:56:33 2012
FAL[server]: Fail to queue the whole FAL gap
GAP - thread 1 sequence 1-33
DBID 1090398314 branch 792689455
Kindly, guide me please..
-Vimal.
CKPT: The trace file details are added below for your reference;
/home/oracle/oracle/product/10.2.0/db_1/admin/newprim/bdump/newprim_arc1_28124.trc
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning and Data Mining options
ORACLE_HOME = /home/oracle/oracle/product/10.2.0/db_1
System name: Linux
Node name: localhost.localdomain
Release: 2.6.18-8.el5PAE
Version: #1 SMP Tue Jun 5 23:39:57 EDT 2007
Machine: i686
Instance name: newprim
Redo thread mounted by this instance: 1
Oracle process number: 17
Unix process pid: 28124, image: [email protected] (ARC1)
*** SERVICE NAME:() 2012-08-30 23:55:48.314
*** SESSION ID:(155.1) 2012-08-30 23:55:48.314
kcrrwkx: nothing to do (start)
Redo shipping client performing standby login
OCISessionBegin failed -1
.. Detailed OCI error val is 1034 and errmsg is 'ORA-01034: ORACLE not available
*** 2012-08-30 23:55:49.723 60679 kcrr.c
Error 1034 received logging on to the standby
Error 1034 connecting to destination LOG_ARCHIVE_DEST_2 standby host 'newstand'
Error 1034 attaching to destination LOG_ARCHIVE_DEST_2 standby host 'newstand'
ORA-01034: ORACLE not available
*** 2012-08-30 23:55:49.723 58941 kcrr.c
kcrrfail: dest:2 err:1034 force:0 blast:1
kcrrwkx: unknown error:1034
ORA-16055: FAL request rejected
ARCH: Connecting to console port...
ARCH: Connecting to console port...
kcrrwkx: nothing to do (end)
*** 2012-08-31 00:00:43.417
kcrrwkx: nothing to do (start)
*** 2012-08-31 00:05:43.348
kcrrwkx: nothing to do (start)
*** 2012-08-31 00:10:43.280
kcrrwkx: nothing to do (start)
*** 2012-08-31 00:15:43.217
kcrrwkx: nothing to do (start)
*** 2012-08-31 00:20:43.160
kcrrwkx: nothing to do (start)
*** 2012-08-31 00:25:43.092
kcrrwkx: nothing to do (start)
*** 2012-08-31 00:30:43.031
kcrrwkx: nothing to do (start)
*** 2012-08-31 00:35:42.961
kcrrwkx: nothing to do (start)
*** 2012-08-31 00:40:42.890
kcrrwkx: nothing to do (start)
*** 2012-08-31 00:45:42.820
kcrrwkx: nothing to do (start)
*** 2012-08-31 00:50:42.755
kcrrwkx: nothing to do (start)
*** 2012-08-31 00:55:42.686
kcrrwkx: nothing to do (start)
*** 2012-08-31 01:00:42.631
kcrrwkx: nothing to do (start)
*** 2012-08-31 01:05:42.565
kcrrwkx: nothing to do (start)
*** 2012-08-31 01:10:42.496
kcrrwkx: nothing to do (start)
Mahir: Yes I have my 4 standby redo logs!
I created the standby manually without using RMAN.
Hemant: if it asks even for the first sequence, then obviously nothing has been applied on the Standby yet. In that case it is not really called a 'GAP', I think.
Thanks. -
*HOW TO DELETE THE ARCHIVE LOGS ON THE STANDBY*
I have set the RMAN CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY; on my physical standby server.
My archivelog files are not deleted on standby.
I have set the CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default on the Primary server.
I've checked the archivelogs in the FRA and they are not being deleted on the STANDBY. Do I have to do something for the configuration to take effect, like run an RMAN backup?
I've done a lot of research and I'm getting mixed answers. Please help. Thanks in advance.
Setting the policy alone will not delete the archive logs on the Standby. (I found a thread where the Data Guard product manager says "The deletion policy on both sides will do what you want".) However, I still like to clean them off with RMAN.
I would use RMAN to delete them so that it honors that policy and you are protected in case of a gap, a transport issue, etc.
There are many ways to do this. You can simply run RMAN and have it clean out the archive.
Example :
#!/bin/bash
# Name: db_rman_arch_standby.sh
# Purpose: Database rman backup
# Usage : db_rman_arch_standby <DBNAME>
if [ "$1" ]
then DBNAME=$1
else
echo "`basename $0` : Syntax error : use db_rman_arch_standby <DBNAME>"
exit 1
fi
. /u01/app/oracle/dba_tool/env/${DBNAME}.env
echo ${DBNAME}
MAILHEADER="Archive_cleanup_on_STANDBY_${DBNAME}"
echo "Starting RMAN..."
$ORACLE_HOME/bin/rman target / catalog <user>/<password>@<catalog> << EOF > /tmp/rmandbarchstandby.out 2>&1
delete noprompt ARCHIVELOG UNTIL TIME 'SYSDATE-8';
exit
EOF
echo `date`
echo
echo 'End of archive cleanup on STANDBY'
mailx -s ${MAILHEADER} $MAILTO < /tmp/rmandbarchstandby.out
# End of Script
This uses (calls) an ENV file so the crontab has an environment.
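The "source an env file so cron has an environment" step is worth getting right, since a cron job starts with almost nothing set. A minimal sketch of that pattern (the function name and paths are ours, not part of mseberg's script):

```shell
# Sketch: load a per-database env file, as the cron script above does,
# and fail fast if the essentials are missing. load_db_env is a
# hypothetical helper name.
load_db_env() {
  ENV_FILE="$1"
  [ -r "$ENV_FILE" ] || { echo "env file $ENV_FILE not found" >&2; return 1; }
  . "$ENV_FILE"
  # cron provides almost no environment, so verify what the job needs
  [ -n "$ORACLE_SID" ]  || { echo "ORACLE_SID not set" >&2; return 1; }
  [ -n "$ORACLE_HOME" ] || { echo "ORACLE_HOME not set" >&2; return 1; }
}
# usage: load_db_env /u01/app/oracle/dba_tool/env/STANDBY.env
```

Failing fast here beats having RMAN die later with a cryptic error because ORACLE_HOME was empty.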
Example ( STANDBY.env )
ORACLE_BASE=/u01/app/oracle
ULIMIT=unlimited
ORACLE_SID=STANDBY
ORACLE_HOME=$ORACLE_BASE/product/11.2.0.2
ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
LIBPATH=$LD_LIBRARY_PATH:/usr/lib
TNS_ADMIN=$ORACLE_HOME/network/admin
PATH=$ORACLE_HOME/bin:$ORACLE_BASE/dba_tool/bin:/bin:/usr/bin:/usr/ccs/bin:/etc:/usr/sbin:/usr/ucb:$HOME/bin:/usr/bin/X11:/sbin:/usr/lbin:/GNU/bin/make:/u01/app/oracle/dba_tool/bin:/home/oracle/utils/SCRIPTS:/usr/local/bin:.
#export TERM=linux=80x25   # wrong - use vt100 below instead
export TERM=vt100
export ORACLE_BASE ORACLE_SID ORACLE_TERM ULIMIT
export ORACLE_HOME
export LIBPATH LD_LIBRARY_PATH ORA_NLS33
export TNS_ADMIN
export PATH
export MAILTO=??   # put your email here
Note: use the env command in Unix to check your settings.
There are probably ten other/better ways to do this, but this works.
other options ( you decide )
Configure RMAN to purge archivelogs after applied on standby [ID 728053.1]
http://www.oracle.com/technetwork/database/features/availability/rman-dataguard-10g-wp-1-129486.pdf
Maintenance Of Archivelogs On Standby Databases [ID 464668.1]
Tip: I don't care myself, but in some of the other forums people seem to mind if you use all caps in the subject. They say it's shouting. My take is, if somebody is shouting at me I'm probably going to just move away.
Best Regards
mseberg
Edited by: mseberg on May 8, 2012 11:53 AM
Edited by: mseberg on May 8, 2012 11:56 AM -
Error while taking archive log backup
Dear all,
We are getting the below mentioned error while taking the archive log backup
============================================================================
BR0208I Volume with name RRPA02 required in device /dev/rmt0.1
BR0210I Please mount BRARCHIVE volume, if you have not already done so
BR0280I BRARCHIVE time stamp: 2010-05-27 16.43.41
BR0256I Enter 'c[ont]' to continue, 's[top]' to cancel BRARCHIVE:
c
BR0280I BRARCHIVE time stamp: 2010-05-27 16.43.46
BR0257I Your reply: 'c'
BR0259I Program execution will be continued...
BR0280I BRARCHIVE time stamp: 2010-05-27 16.43.46
BR0226I Rewinding tape volume in device /dev/rmt0 ...
BR0351I Restoring /oracle/RRP/sapreorg/.tape.hdr0
BR0355I from /dev/rmt0.1 ...
BR0278W Command output of 'LANG=C cd /oracle/RRP/sapreorg && LANG=C cpio -iuvB .tape.hdr0 < /dev/rmt0.1':
Can't read input
===========================================================================
We are able to take offline, online backups but we are facing the above mentioned problem while taking archive log backup
We are on ECC 6 / Oracle / AIX
The kernel is latest
The drive is working fine and there is no problem with the tapes as we have tried using diffrent tapes
Can this be a permissions issue?
I ran saproot.sh, but somehow it sets the owner to sidadm and the group to sapsys on some of the br* files.
I tried changing the permissions to oraSID:dba, but the error is still the same.
Any suggestions?
This means you have not initialized the media but are trying to take backups.
First, check how many media you have entered in your tape count parameter for archive log backups (just go to initSID.sap and check).
Then increase/reduce them according to your archive backup plan >> initialize all the tapes according to their names (the same names you entered in initSID.sap) >> stick a physical label on each tape according to its name >> schedule archive backups.
It will not ask you for initialization, since you already initialized the tapes in the second step.
Suggestion: use 7 media per week (one tape per day).
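The "check initSID.sap" step can be scripted as a rough sanity check. This is only a sketch: it assumes the common single-line form `volume_archive = (RRPA01, RRPA02, ...)`, and the function name and profile path are ours:

```shell
# Sketch: count the archive volumes declared in an init<SID>.sap profile.
# Assumes the single-line form: volume_archive = (RRPA01, RRPA02, ...)
# count_archive_volumes is a hypothetical helper name.
count_archive_volumes() {
  grep '^volume_archive' "$1" | \
    sed 's/.*(\(.*\)).*/\1/' | \
    tr ',' '\n' | wc -l | tr -d ' '
}
# usage: count_archive_volumes /oracle/RRP/102_64/dbs/initRRP.sap  (path hypothetical)
```

If the count does not match the number of physical tapes you labeled, BRARCHIVE will keep prompting for volumes it cannot find.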
Regards,
Nick Loy -
Archive logs not getting shipped when we do a swtich
Hi Team.
I am facing an strange issue on one of our standby.
After the setup of the standby, when we enabled managed recovery (MRP) the archives got shipped smoothly. After some time we tried a log file switch to check whether it was still working.
From the alert log of the DR we can only see this message: "Media Recovery Waiting for thread 1 sequence ***". But the moment we cancel managed recovery and re-enable it, the logs ship smoothly again.
So need some guidance to debug the same .
Please advise .
Thanks
Hello;
On something like this I would check BOTH the Primary and Standby alert logs.
Here's a SWITCH I forced yesterday:
Thu Sep 12 16:01:11 2013
ALTER SYSTEM ARCHIVE LOG
Thu Sep 12 16:01:11 2013
Thread 1 advanced to log sequence 811 (LGWR switch)
Current log# 1 seq# 811 mem# 0: /u01/app/oracle/oradata/PRIMARY/redo01.log
Thu Sep 12 16:01:11 2013
LNS: Standby redo logfile selected for thread 1 sequence 811 for destination LOG_ARCHIVE_DEST_2
Notice the last line is logged on the Primary almost the very moment I do the switch. Does your Database in Primary mode show that?
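A quick way to answer that is to grep the primary's alert log for that LNS line right after forcing a switch. A sketch only; the function name and log path are ours:

```shell
# Sketch: after ALTER SYSTEM SWITCH LOGFILE on the primary, confirm the
# alert log shows redo transport selecting a standby redo logfile.
# check_lns_selected is a hypothetical helper name.
check_lns_selected() {
  if grep -q 'LNS: Standby redo logfile selected' "$1"; then
    echo "transport OK: standby redo logfile selected"
  else
    echo "no LNS selection logged - check LOG_ARCHIVE_DEST_2 and the standby"
  fi
}
# usage: check_lns_selected /u01/app/oracle/admin/PRIMARY/bdump/alert_PRIMARY.log
```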
Best Regards
mseberg -
Will RMAN delete archive log files on a Standby server?
Environment:
Oracle 11.2.0.3 EE on Solaris 10.5
I am currently NOT using an RMAN repository (coming soon).
I have a Primary database sending log files to a Standby.
My Retention Policy is set to 'RECOVERY WINDOW OF 8 DAYS'.
Question: Will RMAN delete the archive log files on the Standby server after they become obsolete based on the Retention Policy or do I need to remove them manually via O/S command?
Does the fact that I'm NOT using an RMAN Repository at the moment make a difference?
Couldn't find the answer in the docs.
Thanks very much!!
-garyHello again Gary;
Sorry for the delay.
Why is what you suggested better?
No, it's not better, but I prefer to manage the archives myself. This method works, period.
Does that fact (running a backup every 4 hours) make my archivelog deletion policy irrelevant?
No. The policy is important.
Having the Primary set to :
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY
But set to "NONE" on the Standby.
This means the worst thing that can happen is RMAN will bark when you try to delete something. (This is a good thing.)
How do I prevent the archive backup process from backing up an archive log file before it gets shipped to the standby?
This should be a non-issue: the archive log itself does not move; the redo is transported and applied. There's SQL to monitor both (transport and apply).
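The monitoring queries alluded to here look roughly like this, using the standard dynamic performance views (run the first on the standby, the second on the primary; column lists are trimmed for readability):

```sql
-- On the standby: what is each recovery/transport process doing?
SELECT process, status, thread#, sequence#
FROM   v$managed_standby;

-- On the primary: last sequence archived vs. applied per destination.
SELECT dest_id, archived_seq#, applied_seq#
FROM   v$archive_dest_status
WHERE  status = 'VALID';
```

If archived_seq# runs ahead of applied_seq#, transport is working and the lag is on the apply side.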
For Data Guard I would consider getting a copy of
"Oracle Data Guard 11g Handbook" - Larry Carpenter (AKA Dr. Paranoid ) ISBN 978-0-07-162111-2
Best Oracle book I've read in 10 years. Covers a ton of ground clearly.
Also Data Guard forum here :
Data Guard
Best Regards
mseberg
Edited by: mseberg on Apr 10, 2012 4:39 PM