Local capture only seems to process archived logs - not online logs
In Oracle 10g R2 I have set up a local capture and apply on a logical standby database. I am configured to capture table-level changes with LOGFILE_ASSIGNMENT = IMPLICIT, but the capture process only finds changes once the local redo logs are archived. I have read that local capture will try to read the online redo if it can, and only then resort to the archive log. I don't see any reason the online redo couldn't be used (except if LOGFILE_ASSIGNMENT = EXPLICIT). I want the capture to read the online redo rather than waiting for an archive. Can you help me? The status of the capture shows "WAITING FOR REDO" until archive time.
-Nick
Anybody else ever experience this?
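A few diagnostic queries can narrow down why the capture is waiting. This is a sketch only (run as the Streams administrator; adjust names for your environment): the views shown are the standard Streams ones, and the output should reveal the capture state, how log files are being assigned, and which logs LogMiner has actually registered.

```sql
-- Current state of the capture process and the last message SCN it saw
SELECT capture_name, state, capture_message_number
FROM   v$streams_capture;

-- Confirm the log file assignment mode and status for this capture
SELECT capture_name, logfile_assignment, status
FROM   dba_capture;

-- Which logs LogMiner has registered for the capture, in SCN order
SELECT name, first_change#, next_change#, dictionary_begin
FROM   dba_registered_archived_log
ORDER  BY first_change#;
```

If LOGFILE_ASSIGNMENT shows IMPLICIT but only archived logs ever appear in the registered list, that matches the behavior described above.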
Similar Messages
-
Two copies archive logs with only one defined
Hi,
On an 11g database, I only have the flash recovery area defined. When I switched into archive log mode, I expected only one copy of the archive logs, produced in the defined USE_DB_RECOVERY_FILE_DEST location, but another copy is generated as well under the $ORACLE_HOME/dbs directory. How can that be explained, and how do I DISABLE the second copy from being produced?
Thanks for any help
Zhuang Li
PS: more info
In spfile:
orcl.__java_pool_size=50331648
orcl.__large_pool_size=16777216
orcl.__oracle_base='/usr/oracle11g'#ORACLE_BASE set from environment
orcl.__pga_aggregate_target=1828716544
orcl.__sga_target=1056964608
orcl.__shared_io_pool_size=0
orcl.__shared_pool_size=654311424
orcl.__streams_pool_size=0
*.audit_file_dest='/usr/oracle11g/admin/orcl/adump'
*.audit_trail='db'
*.compatible='11.1.0.0.0'
*.control_file_record_keep_time=30
*.control_files='/db/orcl1/control01.ctl','/db/orcl1/control02.ctl','/db/orcl1/control03.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_name='orcl'
*.db_recovery_file_dest='/usr/oracle11g/flash_recovery_area'
*.db_recovery_file_dest_size=6442450944
*.diagnostic_dest='/usr/oracle11g'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=orclXDB)'
*.job_queue_processes=5
*.open_cursors=300
*.pga_aggregate_target=1824522240
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_max_size=1258291200#internally adjusted
*.sga_target=1056964608
*.undo_tablespace='UNDOTBS1'
==============================
SQL> select destination from V$ARCHIVE_DEST;
DESTINATION
/usr/oracle11g/R1/dbs/arch
USE_DB_RECOVERY_FILE_DEST
10 rows selected.
SQL> SQL> archive log lis
SP2-0718: illegal ARCHIVE LOG option
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 2549
Next log sequence to archive 2551
Current log sequence 2551
SQL>
===================
SQL> show parameter archive
NAME TYPE VALUE
archive_lag_target integer 0
log_archive_config string
log_archive_dest string
log_archive_dest_1 string
log_archive_dest_10 string
log_archive_dest_2 string
log_archive_dest_3 string
log_archive_dest_4 string
log_archive_dest_5 string
log_archive_dest_6 string
log_archive_dest_7 string
log_archive_dest_8 string
log_archive_dest_9 string
log_archive_dest_state_1 string enable
log_archive_dest_state_10 string enable
log_archive_dest_state_2 string enable
log_archive_dest_state_3 string enable
log_archive_dest_state_4 string enable
log_archive_dest_state_5 string enable
log_archive_dest_state_6 string enable
log_archive_dest_state_7 string enable
log_archive_dest_state_8 string enable
log_archive_dest_state_9 string enable
log_archive_duplex_dest string
log_archive_format string %t_%s_%r.dbf
log_archive_local_first boolean TRUE
log_archive_max_processes integer 4
log_archive_min_succeed_dest integer 1
log_archive_start boolean FALSE
log_archive_trace integer 0
standby_archive_dest string ?/dbs/arch

This is the way the 11g install sets up the archive destination initially after creating the database with DBCA.
What you can do is go to the Recovery Settings page in Enterprise Manager. You will notice that archive log destination number 1 is set to /usr/oracle11g/R1/dbs/arch while number 10 is set to USE_DB_RECOVERY_FILE_DEST.
Remove the entry for number 1 (leave it blank) and apply the settings. This will force Oracle to archive only to the flash recovery area.
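The same change can be made from SQL*Plus instead of Enterprise Manager. A minimal sketch, assuming destination 1 is the one holding the ?/dbs/arch path as in the output above:

```sql
-- Clear the explicit destination; the flash recovery area
-- (db_recovery_file_dest) then remains the only archive target.
ALTER SYSTEM SET log_archive_dest_1='' SCOPE=BOTH;

-- Verify which destinations are still active
SELECT dest_id, destination, status
FROM   v$archive_dest
WHERE  destination IS NOT NULL;
```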
-
Archive Log vs Full Backup Concept
Hi,
I just need some clarification on how backups and archive logs work. Let's say that starting at 1PM I have archive logs 1,2,3,4,5 and then I perform a full backup at 6PM.
Then I resume generating archive logs at 6PM to get logs 6,7,8,9,10. I then stop at 11PM.
If my understanding is correct, the archive logs should allow me to restore Oracle to any point in time between 1PM and 11PM. But if I only have the full backup, then I can only restore to a single point, which is 6PM. Is my understanding correct?
Do the archive logs get applied to the datafiles only when the backup occurs, or only when a restore occurs? It doesn't seem like the archive logs get applied on the fly.
Thanks in advance.

thelok wrote:
Thanks for the great explanation! So I can do a point-in-time restore from any time since the datafiles were last written (or from when I have the last set of backed-up datafiles plus the archive logs). From what you are saying, I can force the datafiles to be written from the redo logs (by forcing a checkpoint with "alter system archive log current" or "backup database plus archivelog"), and then I can delete all the archive logs that have an SCN less than the checkpoint SCN of the datafiles. Is this true? This would be for the purpose of preserving disk space.

Hi,
See this example. I hope this explain your doubt.
# My current date is 06-11-2011 17:15
# I do not have a backup of this database
# My retention policy is to keep 1 backup
# I start by listing the archive logs.
RMAN> list archivelog all;
using target database control file instead of recovery catalog
List of Archived Log Copies
Key Thrd Seq S Low Time Name
29 1 8 A 29-10-2011 12:01:58 +HR/dbhr/archivelog/2011_10_31/thread_1_seq_8.399.766018837
30 1 9 A 31-10-2011 23:00:30 +HR/dbhr/archivelog/2011_11_03/thread_1_seq_9.409.766278025
31 1 10 A 03-11-2011 23:00:23 +HR/dbhr/archivelog/2011_11_04/thread_1_seq_10.391.766366105
32 1 11 A 04-11-2011 23:28:23 +HR/dbhr/archivelog/2011_11_06/thread_1_seq_11.411.766516065
33 1 12 A 05-11-2011 23:28:49 +HR/dbhr/archivelog/2011_11_06/thread_1_seq_12.413.766516349
## See: I have archive logs from time "29-10-2011 12:01:58" until "05-11-2011 23:28:49", but I don't have any backup of the database.
# So I perform a backup of the database, including archive logs.
RMAN> backup database plus archivelog delete input;
Starting backup at 06-11-2011 17:15:21
## Note above: RMAN forces archiving of the current log; the archive log generated here would be usable only with a previous backup.
## That is not my case... I don't have a backup of the database yet.
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=159 devtype=DISK
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=8 recid=29 stamp=766018840
input archive log thread=1 sequence=9 recid=30 stamp=766278027
input archive log thread=1 sequence=10 recid=31 stamp=766366111
input archive log thread=1 sequence=11 recid=32 stamp=766516067
input archive log thread=1 sequence=12 recid=33 stamp=766516350
input archive log thread=1 sequence=13 recid=34 stamp=766516521
channel ORA_DISK_1: starting piece 1 at 06-11-2011 17:15:23
channel ORA_DISK_1: finished piece 1 at 06-11-2011 17:15:38
piece handle=+FRA/dbhr/backupset/2011_11_06/annnf0_tag20111106t171521_0.268.766516525 tag=TAG20111106T171521 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:16
channel ORA_DISK_1: deleting archive log(s)
archive log filename=+HR/dbhr/archivelog/2011_10_31/thread_1_seq_8.399.766018837 recid=29 stamp=766018840
archive log filename=+HR/dbhr/archivelog/2011_11_03/thread_1_seq_9.409.766278025 recid=30 stamp=766278027
archive log filename=+HR/dbhr/archivelog/2011_11_04/thread_1_seq_10.391.766366105 recid=31 stamp=766366111
archive log filename=+HR/dbhr/archivelog/2011_11_06/thread_1_seq_11.411.766516065 recid=32 stamp=766516067
archive log filename=+HR/dbhr/archivelog/2011_11_06/thread_1_seq_12.413.766516349 recid=33 stamp=766516350
archive log filename=+HR/dbhr/archivelog/2011_11_06/thread_1_seq_13.414.766516521 recid=34 stamp=766516521
Finished backup at 06-11-2011 17:15:38
## RMAN finishes the backup of the archive logs and starts the backup of the database.
## My database backup starts at "06-11-2011 17:15:38"
Starting backup at 06-11-2011 17:15:38
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 name=+HR/dbhr/datafile/system.386.765556627
input datafile fno=00003 name=+HR/dbhr/datafile/sysaux.396.765556627
input datafile fno=00002 name=+HR/dbhr/datafile/undotbs1.393.765556627
input datafile fno=00004 name=+HR/dbhr/datafile/users.397.765557979
input datafile fno=00005 name=+BFILES/dbhr/datafile/bfiles.257.765542997
channel ORA_DISK_1: starting piece 1 at 06-11-2011 17:15:39
channel ORA_DISK_1: finished piece 1 at 06-11-2011 17:16:03
piece handle=+FRA/dbhr/backupset/2011_11_06/nnndf0_tag20111106t171539_0.269.766516539 tag=TAG20111106T171539 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:24
Finished backup at 06-11-2011 17:16:03
## And it finishes at "06-11-2011 17:16:03", so I can recover my database from this time onwards.
## I will need the archive logs (transactions) generated during the backup of the database.
## Note that during the backup some blocks are copied before others; the datafiles are in an inconsistent SCN state.
## To make them consistent I need to apply the archive log that has all the transactions recorded.
## Starting another backup of archived log generated during backup.
Starting backup at 06-11-2011 17:16:04
## So RMAN automatically forces another "checkpoint" after the backup finishes,
## archiving the current log, because this archive log has all the transactions needed to bring the database to a consistent state.
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=14 recid=35 stamp=766516564
channel ORA_DISK_1: starting piece 1 at 06-11-2011 17:16:05
channel ORA_DISK_1: finished piece 1 at 06-11-2011 17:16:06
piece handle=+FRA/dbhr/backupset/2011_11_06/annnf0_tag20111106t171604_0.272.766516565 tag=TAG20111106T171604 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
channel ORA_DISK_1: deleting archive log(s)
archive log filename=+HR/dbhr/archivelog/2011_11_06/thread_1_seq_14.414.766516565 recid=35 stamp=766516564
Finished backup at 06-11-2011 17:16:06
## Note: I can recover my database from time "06-11-2011 17:16:03" (end of the full backup)
## until "06-11-2011 17:16:04" (last archive log generated); that is my recovery window in this scenario.
## Listing Backup I have:
## Archive Logs in backupset before backup full start - *BP Key: 40*
## Backup Full database in backupset - *BP Key: 41*
## Archive Logs in backupset after backup full stop - *BP Key: 42*
RMAN> list backup;
List of Backup Sets
===================
BS Key Size Device Type Elapsed Time Completion Time
40 196.73M DISK 00:00:15 06-11-2011 17:15:37
*BP Key: 40* Status: AVAILABLE Compressed: NO Tag: TAG20111106T171521
Piece Name: +FRA/dbhr/backupset/2011_11_06/annnf0_tag20111106t171521_0.268.766516525
List of Archived Logs in backup set 40
Thrd Seq Low SCN Low Time Next SCN Next Time
1 8 766216 29-10-2011 12:01:58 855033 31-10-2011 23:00:30
1 9 855033 31-10-2011 23:00:30 896458 03-11-2011 23:00:23
1 10 896458 03-11-2011 23:00:23 937172 04-11-2011 23:28:23
1 11 937172 04-11-2011 23:28:23 976938 05-11-2011 23:28:49
1 12 976938 05-11-2011 23:28:49 1023057 06-11-2011 17:12:28
1 13 1023057 06-11-2011 17:12:28 1023411 06-11-2011 17:15:21
BS Key Type LV Size Device Type Elapsed Time Completion Time
41 Full 565.66M DISK 00:00:18 06-11-2011 17:15:57
*BP Key: 41* Status: AVAILABLE Compressed: NO Tag: TAG20111106T171539
Piece Name: +FRA/dbhr/backupset/2011_11_06/nnndf0_tag20111106t171539_0.269.766516539
List of Datafiles in backup set 41
File LV Type Ckp SCN Ckp Time Name
1 Full 1023422 06-11-2011 17:15:39 +HR/dbhr/datafile/system.386.765556627
2 Full 1023422 06-11-2011 17:15:39 +HR/dbhr/datafile/undotbs1.393.765556627
3 Full 1023422 06-11-2011 17:15:39 +HR/dbhr/datafile/sysaux.396.765556627
4 Full 1023422 06-11-2011 17:15:39 +HR/dbhr/datafile/users.397.765557979
5 Full 1023422 06-11-2011 17:15:39 +BFILES/dbhr/datafile/bfiles.257.765542997
BS Key Size Device Type Elapsed Time Completion Time
42 3.00K DISK 00:00:02 06-11-2011 17:16:06
*BP Key: 42* Status: AVAILABLE Compressed: NO Tag: TAG20111106T171604
Piece Name: +FRA/dbhr/backupset/2011_11_06/annnf0_tag20111106t171604_0.272.766516565
List of Archived Logs in backup set 42
Thrd Seq Low SCN Low Time Next SCN Next Time
1 14 1023411 06-11-2011 17:15:21 1023433 06-11-2011 17:16:04
## Here is the point I am trying to explain:
## since I have no backup of the database older than my last backup, all archive logs generated before that full backup are useless.
## Deleting what is obsolete in my environment, RMAN chooses backup set 40 (i.e. all archive logs generated before my full backup)
RMAN> delete obsolete;
RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 1
using channel ORA_DISK_1
Deleting the following obsolete backups and copies:
Type Key Completion Time Filename/Handle
*Backup Set 40* 06-11-2011 17:15:37
Backup Piece 40 06-11-2011 17:15:37 +FRA/dbhr/backupset/2011_11_06/annnf0_tag20111106t171521_0.268.766516525
Do you really want to delete the above objects (enter YES or NO)? yes
deleted backup piece
backup piece handle=+FRA/dbhr/backupset/2011_11_06/annnf0_tag20111106t171521_0.268.766516525 recid=40 stamp=766516523
Deleted 1 objects

In the above example I could have run "delete archivelog all" before starting the backup, because those logs would not be needed, but to show the example I followed this unnecessary path (back up the archive logs and delete them afterwards).
Regards,
Levi Pereira
Edited by: Levi Pereira on Nov 7, 2011 1:02 AM -
Oracle 10g - switch off archive logs
Hi,
I understand that in Oracle 10g the following parameter is no longer available:
LOG_ARCHIVE_START=FALSE
Can I confirm that the only way to disable archive logging now is to execute the following command while in mount state:
alter database noarchivelog;
There's no other parameter we can specify in the pfile to disable it permanently?
Thanks

You are correct. No other parameters.
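For completeness, the full command sequence around that mount-stage statement looks like this (a sketch; run as SYSDBA):

```sql
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE NOARCHIVELOG;
ALTER DATABASE OPEN;
-- confirm the new log mode
ARCHIVE LOG LIST
```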
-
Capture process issue...archive log missing!!!!!
Hi,
The Oracle Streams capture process is alternating between the INITIALIZING and DICTIONARY INITIALIZATION states and does not proceed beyond this to capture updates made to the table.
We have accidentally lost archive logs and have no backups of them.
Now I am going to recreate the capture process again.
How can I start the capture process from a new SCN?
And what is the better way to remove archive log files from the central server, given that their
SCNs are used by the capture processes?
Thanks,
Faziarain
Edited by: [email protected] on Aug 12, 2009 12:27 AM

When using dbms_streams_adm to add a capture, also perform a dbms_capture_adm.build. You will see in v$archived_log, in the column dictionary_begin, a 'YES', which means that the first_change# of that archive log is the first SCN suitable for starting capture.
'rman' is the preferred way in 10g+ to remove the archives, as it is aware of Streams constraints. If you can't use rman to purge the archives, then you need to check the minimum required SCN on your system by script and act accordingly.
Since 10g I recommend using rman, but nevertheless, here is the script I wrote for 9i in the old days, when rman would happily eat the archives still needed by Streams.
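The two checks mentioned above can be sketched in SQL (run as the Streams administrator; this is an illustration, not a drop-in fix):

```sql
-- Write a fresh copy of the data dictionary into the redo stream
EXEC DBMS_CAPTURE_ADM.BUILD;

-- First SCN usable for a new capture: the first_change# of the
-- archive log that begins a dictionary build
SELECT MIN(first_change#)
FROM   v$archived_log
WHERE  dictionary_begin = 'YES';

-- Lowest SCN still required by existing capture processes;
-- archives entirely below this point are safe to purge
SELECT MIN(required_checkpoint_scn)
FROM   dba_capture;
```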
#!/usr/bin/ksh
# program : watch_arc.sh
# purpose : check your archive directory and if actual percentage is > MAX_PERC
# then undertake the action coded by -a param
# Author : Bernard Polarski
# Date : 01-08-2000
# 12-09-2005 : added option -s MAX_SIZE
# 20-11-2005 : added option -f to check if an archive is applied on data guard site before deleting it
# 20-12-2005 : added option -z to check if an archive is still needed by logminer in a streams operation
# set -xv
#--------------------------- default values if not defined --------------
# put here default values if you don't want to code then at run time
MAX_PERC=85
ARC_DIR=
ACTION=
LOG=/tmp/watch_arch.log
EXT_ARC=
PART=2
#------------------------- Function section -----------------------------
get_perc_occup()
{
cd $ARC_DIR
if [ $MAX_SIZE -gt 0 ];then
# size is given in mb, we calculate all in K
TOTAL_DISK=`expr $MAX_SIZE \* 1024`
USED=`du -ks . | tail -1| awk '{print $1}'` # in Kb!
else
USED=`df -k . | tail -1| awk '{print $3}'` # in Kb!
if [ `uname -a | awk '{print $1}'` = HP-UX ] ;then
TOTAL_DISK=`df -b . | cut -f2 -d: | awk '{print $1}'`
elif [ `uname -s` = AIX ] ;then
TOTAL_DISK=`df -k . | tail -1| awk '{print $2}'`
elif [ `uname -s` = ReliantUNIX-N ] ;then
TOTAL_DISK=`df -k . | tail -1| awk '{print $2}'`
else
# works on Sun
TOTAL_DISK=`df -b . | sed '/avail/d' | awk '{print $2}'`
fi
fi
USED100=`expr $USED \* 100`
USG_PERC=`expr $USED100 / $TOTAL_DISK`
echo $USG_PERC
}
#------------------------ Main process ------------------------------------------
usage()
{
cat <<EOF
Usage : watch_arc.sh -h
watch_arc.sh -p <MAX_PERC> -e <EXTENTION> -l -d -m <TARGET_DIR> -r <PART>
-t <ARCHIVE_DIR> -c <gzip|compress> -v <LOGFILE>
-s <MAX_SIZE (meg)> -i <SID> -g -f
Note :
-c compress file after move using either compress or gzip (if available)
if -c is given without -m then file will be compressed in ARCHIVE DIR
-d Delete selected files
-e Extention of files to be processed
-f Check if log has been applied; requires -i <sid> and -g if v8
-g Version 8 (use svrmgrl instead of sqlplus / as sysdba)
-i Oracle SID
-l List file that will be processing using -d or -m
-h help
-m move file to TARGET_DIR
-p Max percentage above which action is triggered.
Actions are of type -l, -d or -m
-t ARCHIVE_DIR
-s Perform action if size of target dir is bigger than MAX_SIZE (meg)
-v report action performed in LOGFILE
-r Part of files that will be affected by action :
2=half, 3=a third, 4=a quarter .... [ default=2 ]
-z Check if log is still needed by logminer (used in streams),
it requires -i <sid> and also -g for Oracle 8i
This program lists, deletes or moves half of all files whose extension is given [ default 'arc' ].
It checks the size of the archive directory and, if the percentage occupancy is above the given limit,
it performs the action on the older half of the files.
How to use this program :
run this file from the crontab, say, each hour.
example
1) Delete archives sharing a common arch disk: when you are at 85% of 2500 MB, delete half of the files
whose extension is 'arc', using the default affected portion (default is -r 2)
0,30 * * * * /usr/local/bin/watch_arc.sh -e arc -t /arc/POLDEV -s 2500 -p 85 -d -v /var/tmp/watch_arc.POLDEV.log
2) Delete archives sharing a common disk with other DBs in /archive; act at 90% of 140G, deleting
a quarter of all files (-r 4) whose extension is 'dbf', but connect first as sysdba to the POLDEV db (-i) to check they are
applied (-f is a Data Guard option)
watch_arc.sh -e dbf -t /archive/standby/CITSPRD -s 140000 -p 90 -d -f -i POLDEV -r 4 -v /tmp/watch_arc.POLDEV.log
3) Delete archives of DB POLDEV when it reaches 75%, affecting a third of the files, but connect to the DB to check that
logminer does not still need the archive (-z). This is useful in 9iR2 when using rman, as rman does not support 'delete input'
in connection with logminer.
watch_arc.sh -e arc -t /archive/standby/CITSPRD -p 75 -d -z -i POLDEV -r 3 -v /tmp/watch_arc.POLDEV.log
EOF
}
#------------------------- Function section -----------------------------
if [ "x-$1" = "x-" ];then
usage
exit
fi
MAX_SIZE=-1 # disable this feature if it is not specifically selected
while getopts c:e:p:m:r:s:i:t:v:dhlfgz ARG
do
case $ARG in
e ) EXT_ARC=$OPTARG ;;
f ) CHECK_APPLIED=YES ;;
g ) VERSION8=TRUE;;
i ) ORACLE_SID=$OPTARG;;
h ) usage
exit ;;
c ) COMPRESS_PRG=$OPTARG ;;
p ) MAX_PERC=$OPTARG ;;
d ) ACTION=delete ;;
l ) ACTION=list ;;
m ) ACTION=move
TARGET_DIR=$OPTARG
if [ ! -d $TARGET_DIR ] ;then
echo "Dir $TARGET_DIR does not exist"
exit
fi;;
r) PART=$OPTARG ;;
s) MAX_SIZE=$OPTARG ;;
t) ARC_DIR=$OPTARG ;;
v) VERBOSE=TRUE
LOG=$OPTARG
if [ ! -f $LOG ];then
> $LOG
fi ;;
z) LOGMINER=TRUE;;
esac
done
if [ "x-$ARC_DIR" = "x-" ];then
echo "NO ARC_DIR : aborting"
exit
fi
if [ "x-$EXT_ARC" = "x-" ];then
echo "NO EXT_ARC : aborting"
exit
fi
if [ "x-$ACTION" = "x-" ];then
echo "NO ACTION : aborting"
exit
fi
if [ ! "x-$COMPRESS_PRG" = "x-" ];then
if [ ! "x-$ACTION" = "x-move" ];then
ACTION=compress
fi
fi
if [ "$CHECK_APPLIED" = "YES" ];then
if [ -n "$ORACLE_SID" ];then
export PATH=$PATH:/usr/local/bin
export ORAENV_ASK=NO
export ORACLE_SID=$ORACLE_SID
. /usr/local/bin/oraenv
fi
if [ "$VERSION8" = "TRUE" ];then
ret=`svrmgrl <<EOF
connect internal
select max(sequence#) from v\\$log_history ;
EOF`
LAST_APPLIED=`echo $ret | sed 's/.*------ \([^ ][^ ]* \).*/\1/' | awk '{print $1}'`
else
ret=`sqlplus -s '/ as sysdba' <<EOF
set pagesize 0 head off pause off
select max(SEQUENCE#) FROM V\\$ARCHIVED_LOG where applied = 'YES';
EOF`
LAST_APPLIED=`echo $ret | awk '{print $1}'`
fi
elif [ "$LOGMINER" = "TRUE" ];then
if [ -n "$ORACLE_SID" ];then
export PATH=$PATH:/usr/local/bin
export ORAENV_ASK=NO
export ORACLE_SID=$ORACLE_SID
. /usr/local/bin/oraenv
fi
var=`sqlplus -s '/ as sysdba' <<EOF
set pagesize 0 head off pause off serveroutput on
DECLARE
hScn number := 0;
lScn number := 0;
sScn number;
ascn number;
alog varchar2(1000);
begin
select min(start_scn), min(applied_scn) into sScn, ascn from dba_capture ;
DBMS_OUTPUT.ENABLE(2000);
for cr in (select distinct(a.ckpt_scn)
from system.logmnr_restart_ckpt\\$ a
where a.ckpt_scn <= ascn and a.valid = 1
and exists (select * from system.logmnr_log\\$ l
where a.ckpt_scn between l.first_change# and l.next_change#)
order by a.ckpt_scn desc)
loop
if (hScn = 0) then
hScn := cr.ckpt_scn;
else
lScn := cr.ckpt_scn;
exit;
end if;
end loop;
if lScn = 0 then
lScn := sScn;
end if;
select min(sequence#) into alog from v\\$archived_log where lScn between first_change# and next_change#;
dbms_output.put_line(alog);
end;
EOF`
# if there are no archives that must be kept, instead of a number we just get "PL/SQL procedure successfully completed"
ret=`echo $var | awk '{print $1}'`
if [ ! "$ret" = "PL/SQL" ];then
LAST_APPLIED=$ret
else
unset LOGMINER
fi
fi
PERC_NOW=`get_perc_occup`
if [ $PERC_NOW -gt $MAX_PERC ];then
cd $ARC_DIR
cpt=`ls -tr *.$EXT_ARC | wc -w`
if [ ! "x-$cpt" = "x-" ];then
MID=`expr $cpt / $PART`
cpt=0
ls -tr *.$EXT_ARC |while read ARC
do
cpt=`expr $cpt + 1`
if [ $cpt -gt $MID ];then
break
fi
if [ "$CHECK_APPLIED" = "YES" -o "$LOGMINER" = "TRUE" ];then
VAR=`echo $ARC | sed 's/.*_\([0-9][0-9]*\)\..*/\1/' | sed 's/[^0-9][^0-9].*//'`
if [ $VAR -gt $LAST_APPLIED ];then
continue
fi
fi
case $ACTION in
'compress' ) $COMPRESS_PRG $ARC_DIR/$ARC
if [ "x-$VERBOSE" = "x-TRUE" ];then
echo " `date +%d-%m-%Y' '%H:%M` : $ARC compressed using $COMPRESS_PRG" >> $LOG
fi ;;
'delete' ) rm $ARC_DIR/$ARC
if [ "x-$VERBOSE" = "x-TRUE" ];then
echo " `date +%d-%m-%Y' '%H:%M` : $ARC deleted" >> $LOG
fi ;;
'list' ) ls -l $ARC_DIR/$ARC ;;
'move' ) mv $ARC_DIR/$ARC $TARGET_DIR
if [ ! "x-$COMPRESS_PRG" = "x-" ];then
$COMPRESS_PRG $TARGET_DIR/$ARC
if [ "x-$VERBOSE" = "x-TRUE" ];then
echo " `date +%d-%m-%Y' '%H:%M` : $ARC moved to $TARGET_DIR and compressed" >> $LOG
fi
else
if [ "x-$VERBOSE" = "x-TRUE" ];then
echo " `date +%d-%m-%Y' '%H:%M` : $ARC moved to $TARGET_DIR" >> $LOG
fi
fi ;;
esac
done
else
echo "Warning : The filesystem is not full due to archive logs !"
exit
fi
elif [ "x-$VERBOSE" = "x-TRUE" ];then
echo "Nothing to do at `date +%d-%m-%Y' '%H:%M`" >> $LOG
fi
-
CAPTURE process error - missing Archive log
Hi -
I am getting a "cannot open archived log 'xxxx.arc'" message when I try to start a newly created capture process. The archive files have been moved by the DBAs.
Is there a way to make the capture process start from a new archive log?
I tried
exec DBMS_CAPTURE_ADM.ALTER_CAPTURE ( capture_name => 'STRMADMIN_SCH_CAPTURE', start_scn =>9668840362577);
I got the new scn from DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();.
But I still get the same error.
Any ideas ?
Thanks,
Sadeepa

If you are on 9i, I know that trying to reset the SCN that way won't work. You have to drop and recreate the capture process. You can leave all the rules and rule sets in place, but I think you have to prepare all of the tables again.
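A minimal sketch of that drop-and-recreate approach (the capture, queue and rule-set names here are hypothetical; substitute your own, and note the existing rule sets are deliberately kept):

```sql
BEGIN
  -- Drop only the capture; FALSE keeps its rule sets for reuse
  DBMS_CAPTURE_ADM.DROP_CAPTURE(
    capture_name          => 'STRMADMIN_SCH_CAPTURE',
    drop_unused_rule_sets => FALSE);

  -- Recreate against the surviving rule set; a NULL first_scn lets
  -- Oracle derive it from the most recent dictionary build
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name    => 'STRMADMIN.STREAMS_QUEUE',
    capture_name  => 'STRMADMIN_SCH_CAPTURE',
    rule_set_name => 'STRMADMIN.RULESET$_1',
    first_scn     => NULL);
END;
/
```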
-
Error running Archived-Log Downstream Capture Process
I have created an archived-log downstream capture process with reference to the following link:
http://download.oracle.com/docs/cd/B28359_01/server.111/b28321/strms_ccap.htm#i1011654
After starting the capture process, I get the following error in the trace
============================================================================
Trace file /home/oracle/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_cp01_13572.trc
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORACLE_HOME = /home/oracle/app/oracle/product/11.2.0/dbhome_1
System name: Linux
Node name: localhost.localdomain
Release: 2.6.18-194.el5
Version: #1 SMP Fri Apr 2 14:58:14 EDT 2010
Machine: x86_64
Instance name: orcl
Redo thread mounted by this instance: 1
Oracle process number: 37
Unix process pid: 13572, image: [email protected] (CP01)
*** 2011-08-20 14:21:38.899
*** SESSION ID:(146.2274) 2011-08-20 14:21:38.899
*** CLIENT ID:() 2011-08-20 14:21:38.899
*** SERVICE NAME:(SYS$USERS) 2011-08-20 14:21:38.899
*** MODULE NAME:(STREAMS) 2011-08-20 14:21:38.899
*** ACTION NAME:(STREAMS Capture) 2011-08-20 14:21:38.899
knlcCopyPartialCapCtx(), setting default poll freq to 0
knlcUpdateMetaData(), before copy IgnoreUnsuperrTable:
source:
Ignore Unsupported Error Table: 0 entries
target:
Ignore Unsupported Error Table: 0 entries
knlcUpdateMetaData(), after copy IgnoreUnsuperrTable:
source:
Ignore Unsupported Error Table: 0 entries
target:
Ignore Unsupported Error Table: 0 entries
knlcfrectx_Init: rs=STRMADMIN.RULESET$_66, nrs=., cuid=0, cuid_prv=0, flags=0x0
knlcObtainRuleSetNullLock: rule set name "STRMADMIN"."RULESET$_66"
knlcObtainRuleSetNullLock: rule set name
knlcmaInitCapPrc+
knlcmaGetSubsInfo+
knlqgetsubinfo
subscriber name EMP_DEQ
subscriber dblinke name
subscriber name APPLY_EMP
subscriber dblinke name
knlcmaTerm+
knlcmaTermSrvs+
knlcmaTermSrvs-
knlcmaTerm-
knlcCCAInit()+, err = 26802
knlcnShouldAbort: examining error stack
ORA-26802: Queue "STRMADMIN"."STREAMS_QUEUE" has messages.
knlcnShouldAbort: examing error 26802
knlcnShouldAbort: returning FALSE
knlcCCAInit: no combined capture and apply optimization err = 26802
knlzglr_GetLogonRoles: usr = 91,
knlqqicbk - AQ access privilege checks:
userid=91, username=STRMADMIN
agent=STRM05_CAPTURE
knlqeqi()
knlcRecInit:
Combined Capture and Apply Optimization is OFF
Apply-state checkpoint mode is OFF
last_enqueued, last_acked
0x0000.00000000 [0] 0x0000.00000000 [0]
captured_scn, applied_scn, logminer_start, enqueue_filter
0x0000.0004688c [288908] 0x0000.0004688c [288908] 0x0000.0004688c [288908] 0x0000.0004688c [288908]
flags=0
Starting persistent Logminer Session : 13
krvxats retval : 0
CKPT_FREE event=FALSE CCA=FALSE Checkptfreq=1000 AV/CDC flags=0
krvxssp retval : 0
krvxsda retval : 0
krvxcfi retval : 0
#1: krvxcfi retval : 0
#2: krvxcfi retval : 0
About to call krvxpsr : startscn: 0x0000.0004688c
state before krvxpsr: 0
dbms_logrep_util.get_checkpoint_scns(): logminer sid = 13 applied_scn = 288908
dbms_logrep_util.get_checkpoint_scns(): prev_ckpt_scn = 0 curr_ckpt_scn = 0
*** 2011-08-20 14:21:41.810
Begin knlcDumpCapCtx:*******************************************
Error 1304 : ORA-01304: subordinate process error. Check alert and trace logs
Capture Name: STRM05_CAPTURE : Instantiation#: 65
*** 2011-08-20 14:21:41.810
++++ Begin KNST dump for Sid: 146 Serial#: 2274
Init Time: 08/20/2011 14:21:38
++++Begin KNSTCAP dump for : STRM05_CAPTURE
Capture#: 1 Logminer_Id: 13 State: DICTIONARY INITIALIZATION [ 08/20/2011 14:21:38]
Capture_Message_Number: 0x0000.00000000 [0]
Capture_Message_Create_Time: 01/01/1988 00:00:00
Enqueue_Message_Number: 0x0000.00000000 [0]
Enqueue_Message_Create_Time: 01/01/1988 00:00:00
Total_Messages_Captured: 0
Total_Messages_Created: 0 [ 01/01/1988 00:00:00]
Total_Messages_Enqueued: 0 [ 01/01/1988 00:00:00]
Total_Full_Evaluations: 0
Elapsed_Capture_Time: 0 Elapsed_Rule_Time: 0
Elapsed_Enqueue_Time: 0 Elapsed_Lcr_Time: 0
Elapsed_Redo_Wait_Time: 0 Elapsed_Pause_Time: 0
Apply_Name :
Apply_DBLink :
Apply_Messages_Sent: 0
++++End KNSTCAP dump
++++ End KNST DUMP
+++ Begin DBA_CAPTURE dump for: STRM05_CAPTURE
Capture_Type: DOWNSTREAM
Version:
Source_Database: ORCL2.LOCALDOMAIN
Use_Database_Link: NO
Logminer_Id: 13 Logfile_Assignment: EXPLICIT
Status: ENABLED
First_Scn: 0x0000.0004688c [288908]
Start_Scn: 0x0000.0004688c [288908]
Captured_Scn: 0x0000.0004688c [288908]
Applied_Scn: 0x0000.0004688c [288908]
Last_Enqueued_Scn: 0x0000.00000000 [0]
Capture_User: STRMADMIN
Queue: STRMADMIN.STREAMS_QUEUE
Rule_Set_Name[+]: "STRMADMIN"."RULESET$_66"
Checkpoint_Retention_Time: 60
+++ End DBA_CAPTURE dump
+++ Begin DBA_CAPTURE_PARAMETERS dump for: STRM05_CAPTURE
PARALLELISM = 1 Set_by_User: NO
STARTUP_SECONDS = 0 Set_by_User: NO
TRACE_LEVEL = 7 Set_by_User: YES
TIME_LIMIT = -1 Set_by_User: NO
MESSAGE_LIMIT = -1 Set_by_User: NO
MAXIMUM_SCN = 0xffff.ffffffff [281474976710655] Set_by_User: NO
WRITE_ALERT_LOG = TRUE Set_by_User: NO
DISABLE_ON_LIMIT = FALSE Set_by_User: NO
DOWNSTREAM_REAL_TIME_MINE = FALSE Set_by_User: NO
MESSAGE_TRACKING_FREQUENCY = 2000000 Set_by_User: NO
SKIP_AUTOFILTERED_TABLE_DDL = TRUE Set_by_User: NO
SPLIT_THRESHOLD = 1800 Set_by_User: NO
MERGE_THRESHOLD = 60 Set_by_User: NO
+++ End DBA_CAPTURE_PARAMETERS dump
+++ Begin DBA_CAPTURE_EXTRA_ATTRIBUTES dump for: STRM05_CAPTURE
USERNAME Include:YES Row_Attribute: YES DDL_Attribute: YES
+++ End DBA_CAPTURE_EXTRA_ATTRIBUTES dump
++ LogMiner Session Dump Begin::
SessionId: 13 SessionName: STRM05_CAPTURE
Start SCN: 0x0000.00000000 [0]
End SCN: 0x0000.00046c2d [289837]
Processed SCN: 0x0000.0004689e [288926]
Prepared SCN: 0x0000.000468d4 [288980]
Read SCN: 0x0000.000468e2 [288994]
Spill SCN: 0x0000.00000000 [0]
Resume SCN: 0x0000.00000000 [0]
Branch SCN: 0x0000.00000000 [0]
Branch Time: 01/01/1988 00:00:00
ResetLog SCN: 0x0000.00000001 [1]
ResetLog Time: 08/18/2011 16:46:59
DB ID: 740348291 Global DB Name: ORCL2.LOCALDOMAIN
krvxvtm: Enabled threads: 1
Current Thread Id: 1, Thread State 0x01
Current Log Seqn: 107, Current Thrd Scn: 0x0000.000468e2 [288994]
Current Session State: 0x20005, Current LM Compat: 0xb200000
Flags: 0x3f2802d8, Real Time Apply is Off
+++ Additional Capture Information:
Capture Flags: 4425
Logminer Start SCN: 0x0000.0004688c [288908]
Enqueue Filter SCN: 0x0000.0004688c [288908]
Low SCN: 0x0000.00000000 [0]
Capture From Date: 01/01/1988 00:00:00
Capture To Date: 01/01/1988 00:00:00
Restart Capture Flag: NO
Ping Pending: NO
Buffered Txn Count: 0
-- Xid Hash entry --
-- LOB Hash entry --
-- No TRIM LCR --
Unsupported Reason: Unknown
--- LCR Dump not possible ---
End knlcDumpCapCtx:*********************************************
*** 2011-08-20 14:21:41.810
knluSetStatus()+{
*** 2011-08-20 14:21:44.917
knlcapUpdate()+{
Updated streams$_capture_process
finished knlcapUpdate()+ }
finished knluSetStatus()+ }
knluGetObjNum()+
knlsmRaiseAlert: keltpost retval is 0
kadso = 0 0
KSV 1304 error in slave process
*** 2011-08-20 14:21:44.923
ORA-01304: subordinate process error. Check alert and trace logs
knlz_UsrrolDes()
knstdso: state object 0xb644b568, action 2
knstdso: releasing so 0xb644b568 for session 146, type 0
knldso: state object 0xa6d0dea0, action 2 memory 0x0
kadso = 0 0
knldso: releasing so 0xa6d0dea0
OPIRIP: Uncaught error 447. Error stack:
ORA-00447: fatal error in background process
ORA-01304: subordinate process error. Check alert and trace logs
Any suggestions???
Output of above query
==============================
CAPTURE_NAME STATUS ERROR_MESSAGE
STRM05_CAPTURE ABORTED ORA-01304: subordinate process error. Check alert and trace logs
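The status query itself is not shown in this excerpt; a query of roughly this shape against DBA_CAPTURE (a sketch only, column names per the data dictionary view) would produce the output above:

```sql
-- Sketch: DBA_CAPTURE exposes an aborted capture's error message.
SELECT capture_name, status, error_message
  FROM dba_capture
 WHERE capture_name = 'STRM05_CAPTURE';
```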
Alert log.xml
=======================
<msg time='2011-08-25T16:58:01.865+05:30' org_id='oracle' comp_id='rdbms'
client_id='' type='UNKNOWN' level='16'
host_id='localhost.localdomain' host_addr='127.0.0.1' module='STREAMS'
pid='30921'>
<txt>Errors in file /home/oracle/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_cp01_30921.trc:
ORA-01304: subordinate process error. Check alert and trace logs
</txt>
</msg>
The orcl_cp01_30921.trc trace file contains the same output posted in the first message. -
Source DB on RAC, Archived Log Downstream Capture: Logs could not be shipped
I don't have much experience in Oracle RAC.
We are implementing Oracle Streams using Archived-Log Downstream capture. Source and Target DBs are 11gR2.
The source DB is in RAC (uses scan listeners).
To prevent users from accessing the source DB, the DBA of the source DB shut down the listener on port 1521 (changed the port number to 0000 in some file). One more listener, on port 1523, was up and running, and we used it to create the DB link between the two databases.
However, because the listener on port 1521 was down, the archived logs from the source DB could not be shipped to the shared drive. According to the source DB DBA, the two RAC instances use this listener/port to communicate with each other.
As a result, when we ran the DBMS_CAPTURE_ADM.CREATE_CAPTURE procedure from the target DB, the LogMiner data dictionary that had been extracted from the source DB into the redo logs was not available to the target DB, and the Streams implementation failed.
It seems that for the archived logs to be shipped from the source DB to the shared drive, we need the listener on port 1521 up and running. (Correct me if I am wrong.)
My question is:
Is there a way to shut down one listener to prevent users from accessing the DB while keeping another listener up so that the archived logs can be shipped to the shared drive? If so, can you please give the details/an example?
We asked the same question to the DBA of the source DB and we were told that it could not be done.
Thanks in advance.
Make sure that the dblink "using" clause references a service name that uses a listener that is up and operational. There is no requirement that the listener be on port 1521, either for Streams or for shipping logs.
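As a sketch of that suggestion (the service, user, and link names here are placeholders, not from the thread), the database link only needs a service registered with any operational listener:

```sql
-- Hypothetical names: srcdb_link / strmadmin / srcdb_svc are placeholders.
-- The service in the USING clause must be registered with a listener
-- that is up; it does not have to listen on port 1521.
CREATE DATABASE LINK srcdb_link
  CONNECT TO strmadmin IDENTIFIED BY strmadminpw
  USING 'srcdb_svc';

-- Verify connectivity before creating the downstream capture:
SELECT 1 FROM dual@srcdb_link;
```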
Chapter 4 of the 2Day+ Data Replication and Integration manual has instructions for configuring downstream capture in Tutorial: Configuring Two-Database Replication with a Downstream Capture Process
http://docs.oracle.com/cd/E11882_01/server.112/e17516/tdpii_repcont.htm#BABIJCDG -
I want to set up RMAN not to delete any archive log files that will still be used by GoldenGate. Once GoldenGate has finished with an archive log file, RMAN can back it up and delete it. It's my understanding that I can issue the command "REGISTER EXTRACT <ext_name>, LOGRETENTION" to enable this functionality. Is this the only thing I need to do to enable it?
Hello,
Yes, this is the right way when using classic capture.
Use the command: REGISTER EXTRACT extract_name LOGRETENTION.
This creates an artificial Oracle Streams capture group that prevents RMAN from deleting archived logs that are still pending processing by the GoldenGate capture process.
You can see this integration by running SELECT * FROM DBA_CAPTURE; after executing the register command.
Then, when RMAN tries to delete an archive file still pending processing for GG, this warning appears in the RMAN logs:
Error: RMAN-08317
Text: WARNING: archived log not deleted, needed for standby or upstream capture process.
So this is a good manageability feature; I think it is new in GG 11.1.
Tip: to avoid RMAN backing up a pending archive log multiple times, there is an option: BACKUP ARCHIVELOG ALL NOT BACKED UP 1 TIMES.
If you remove a capture process that is registered with the database, you need to use this command to remove the Streams capture group:
UNREGISTER EXTRACT extract_name LOGRETENTION;
Then, if you query DBA_CAPTURE, the artificial Streams group is gone.
I hope this helps.
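Put together, the tip above could look like this in an RMAN session (a sketch; the warning text is what RMAN reports when a registered capture still needs a log):

```
# Back up each archived log only once, so logs still pending for the
# GoldenGate capture are not backed up repeatedly:
RMAN> BACKUP ARCHIVELOG ALL NOT BACKED UP 1 TIMES;

# A DELETE against a log still needed by the capture group is skipped with:
#   RMAN-08317: WARNING: archived log not deleted,
#   needed for standby or upstream capture process
RMAN> DELETE NOPROMPT ARCHIVELOG ALL;
```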
Regards
Arturo -
Archive log miss : How to restart capture
Hi Gurus,
I configured distributed HotLog CDC on 10.2.0.4 databases.
I made a mistake: in my source DB I deleted an archive log,
and now the state of Capture process in V_$STREAM_CAPTURE is "WAITING FOR REDO: LAST SCN MINED 930696".
Now I'd like to restart the capture process from the next archive (just after the missed archive)
How is it possible?
tnk Fabio
I'm sorry to tell you, but it's not possible. (Just as it's not possible to recover a database with missing logs...)
You will have to recreate the capture process and re-instantiate the replicated tables.
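Recreating the capture would follow the usual DBMS_CAPTURE_ADM steps; a rough sketch (the capture and table names are placeholders, not from the thread):

```sql
-- Drop the broken capture process:
BEGIN
  DBMS_CAPTURE_ADM.STOP_CAPTURE(capture_name => 'MY_CAPTURE');
  DBMS_CAPTURE_ADM.DROP_CAPTURE(capture_name => 'MY_CAPTURE');
END;
/

-- After recreating the capture, re-instantiate each replicated table,
-- e.g. prepare it for instantiation again:
BEGIN
  DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(table_name => 'SCOTT.EMP');
END;
/
```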
Regards, -
Question about only new archive logs backed up in backup
Hi,
We take two online backups daily. We are running the database in ARCHIVELOG mode, configured as a PRIMARY with a PHYSICAL STANDBY. Until now, we were including all archive logs in the backup, but this was consuming a lot of disk space.
So, based on a search in this forum, I am planning to back up only the new archive logs generated since the last backup, using the following command.
BACKUP ARCHIVELOG all not backed up 1 times format '$dir/archivelogs_%s_%t' FORCE;
I am not sure how this impacts restore and recovery when we back up only the new archive logs.
We restore the database and then always perform incomplete recovery up to the latest SCN captured in the backup, using the following commands.
RESTORE DATABASE;
RECOVER DATABASE UNTIL SCN $BACKUP_LAST_SCN;
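One way to gain confidence in the "not backed up 1 times" approach before relying on it is to preview the restore; this reports which backup sets and archived log backups the restore and recovery would use, without actually restoring anything (standard RMAN commands; a sketch):

```
# Report what RESTORE/RECOVER would read, without running them:
RMAN> RESTORE DATABASE PREVIEW SUMMARY;
```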
Do you see any problem/risk of implementing this solution going ahead?
Please provide your thoughts/inputs on this.
Thanks.
Shardul
Hi,
We are not deleting archive logs from their original location after backup; we keep the latest 6 days of archive logs there. But we are planning to include only new, not-yet-backed-up archive logs in the backup image because of the disk-space problem.
For your reference, below are our RMAN commands for the database backup. We are taking a full database backup.
run {
ALLOCATE CHANNEL C1 TYPE DISK;
delete noprompt archivelog all completed before 'sysdate-5';
SQL 'ALTER SYSTEM ARCHIVE LOG CURRENT';
BACKUP INCREMENTAL LEVEL=0 CUMULATIVE format '$dir/level0_%u' DATABASE include current controlfile
for standby force;
SQL 'ALTER SYSTEM ARCHIVE LOG CURRENT';
BACKUP ARCHIVELOG all not backed up 1 times format '$dir/archivelogs_%s_%t' FORCE;
BACKUP CURRENT CONTROLFILE format '$dir/control_primary' FORCE;
With this policy, do you see any problem when we restore the database as PRIMARY or PHYSICAL STANDBY on a server? We are using Oracle 10.2.0.3. -
How do I retrieve call history from my iPhone 4S? It only seems to log for about a month and I need an earlier date?
If you have a backup of your files, you can get your call history back, as long as you sync your iPhone with iTunes frequently.
First, enable the "Prevent iPhone from syncing automatically" option in iTunes, so that the iPhone is not synced again (which would overwrite the backup).
Now, connect your iPhone to the PC and open iTunes.
iTunes > Preferences > Devices
All the backup data in iTunes will be displayed.
Then click the File menu and select Devices > Restore from Backup.
Your iPhone will restart, and you will find your call history back.
This method can only recover the most recent history; to get back older entries, you would need to turn to a third-party program for help.
I have a primary database that needs to import a large amount of data and database objects. 1.) Do I shut down the standby? 2.) Turn off archive log mode? 3.) Perform the import? 4.) Rebuild the standby? Or is there a better way or best practice?
Instead of rebuilding the (whole) standby, you take an incremental (from SCN) backup from the Primary and restore it on the Standby. That way, if, for example
a. Only two out of 12 tablespaces are affected by the import, the incremental backup would effectively be only the blocks changed in those two tablespaces (and some other changes in system and undo) {provided that there are no other changes in the other ten tablespaces}
b. if the size of the import is only 15% of the database, the incremental backup to restore to the standby is small
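The roll-forward described here is the standard "incremental from SCN" technique; a rough sketch (the SCN value and file paths are placeholders):

```
-- On the standby, find the SCN to roll forward from:
SQL> SELECT current_scn FROM v$database;

-- On the primary, back up only the blocks changed since that SCN:
RMAN> BACKUP INCREMENTAL FROM SCN 1234567 DATABASE
        FORMAT '/tmp/standby_fwd_%U';

-- Copy the pieces to the standby host, then catalog and apply them:
RMAN> CATALOG START WITH '/tmp/standby_fwd';
RMAN> RECOVER DATABASE NOREDO;
```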
Hemant K Chitale -
Archive a FCP Project to DVD (with used captures only)
Is there a way to back up a project with the USED captures only?
So: the FCP file with the used captures, deleting all the unused captures.
It would be a great option for reuse or corrections in the future!
Backing up to DVD is probably the worst type of archive, not just because of the limited data space available but because the disks deteriorate at a vastly increased speed compared to tape.
Even hard drives, if used for archiving and just left on a shelf, can fail because they are not being used!
Tape is still the best and most reliable.
We've been experimenting with Bluray data backups but of course, always have a tape back up on top of that! -
Is RMAN only way to take backup and delete Archive Logs?
On primary db:
===============
OS: Windows 2003 server 32-bit
Oracle: Oracle 10g (10.2.0.1.0) R2 32-bit
On Physical standby db:
===============
OS: Windows 2003 server 32-bit
Oracle: Oracle 10g (10.2.0.1.0) R2 32-bit
Data Guard has just been configured. Archive logs are properly being shipped to the standby DB. Is RMAN the only way to back up and delete archive logs from both the primary and the standby DB? What other command can be used?
Thanks
No, but anything else is far more work. Without using RMAN you would have to find another way to keep track of applied logs.
RMAN makes this very simple:
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
Using another method would be like using the handle of your screwdriver as a hammer.
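With that deletion policy configured, routine deletes become safe; a sketch of the effect:

```
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;

# With the policy in place, this deletes only logs already applied on
# the standby and skips the rest:
RMAN> DELETE NOPROMPT ARCHIVELOG ALL;
```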
See "RMAN Recipes for Oracle Database 11g" ISBN 978-1-5959-851-1
Best Regards
mseberg