Is the problem that archive logs cannot be written fast enough?
Oracle 10g:
I see very very long checkpoints in alert logs:
Mon Mar 23 12:35:57 2009
Beginning log switch checkpoint up to RBA [0x2001.2.10], SCN: 2667813010
Mon Mar 23 12:35:57 2009
Thread 1 advanced to log sequence 8193 (LGWR switch)
Current log# 3 seq# 8193 mem# 0: /u05/oradata/perf/redo03a.log
Current log# 3 seq# 8193 mem# 1: /u06/oradata/perf/redo03b.log
Mon Mar 23 12:41:02 2009
Completed checkpoint up to RBA [0x2001.2.10], SCN: 2667813010
Our redo logs are 2 GB each, and we have six of them.
I am not sure why the checkpoints take so long. Also, does this affect requests trying to read from or write to the database?
I am seeing roughly 20x higher response times on our customer transactions while the backup runs, and one of the things I see is long checkpoints. We run the backup at night when traffic is relatively low, so I am not sure the checkpoints alone would explain it. Something is wrong, and I am not sure whether it's the checkpoints, the redo logs, or something else.
Everything indicates to me that this is a big database.
My understanding, possibly flawed, is that with a backup technique of splitting off the third mirror, the tablespaces should be put into backup mode before the split and taken out of backup mode after it. (I believe this should be visible as entries in the alert log.)
While tablespaces are in backup mode, extra redo is generated, so it is important to keep that window as short as possible.
If the data paths have been properly separated, there should be only a small blip in performance while the mirror split is occurring,
and possibly a drop while the third mirror is being silvered back in.
The fact that you are on a three-way-mirror backup design means you need to have documented the design and infrastructure of that setup, and
to ensure the backup is documented, understood, and monitored. This might also help pinpoint where in the process response times are increasing.
You might also try to monitor whether your disk access wait times are increasing during backups, or whether you have increased network latency.
(At absolute worst, in my opinion, bad operation of this form of backup could lead to an unrecoverable database .... some system/backup admins might not appreciate this ....)
What could be happening, however, is that while the third mirror is being backed up elsewhere, that backup is contending with your database's access to storage.
This is all infrastructure dependent.
On a slightly different topic: what I meant was that the /uNN mounts may be filesystems set up by a volume manager, and may be on the same physical disks. Understanding the
three-way-mirror backup takes priority over this.
In summary, I suggest you seek out all that is known about the design of your backup and its supporting documents. In your position I think I certainly would.
(It is past my bedtime and I may have veered off topic).
It is probably worth noting how often you switch redo logs and whether that rate gets more frequent at certain times of day
(i.e. graph when your redo logs were created and at what size).
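One way to get that picture from the data dictionary (a sketch; v$log_history is a standard view, and the 7-day window is arbitrary):

```sql
-- Redo log switches per hour over the last 7 days.
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*)                               AS switches
FROM   v$log_history
WHERE  first_time > SYSDATE - 7
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY 1;
```

An hour with markedly more switches than its neighbours is where to look first, e.g. your backup window.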
Use Database Control / Grid Control to view AWR reports especially watching I/O quantity and waits overnight.
Before I forget, it may be worth ensuring FAST_START_MTTR_TARGET is set to a non-zero value, e.g. 300 seconds. This affects incremental checkpointing.
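For example (a sketch; 300 seconds is only an illustrative target, tune it to your own recovery requirements):

```sql
-- Check the current setting, then enable incremental checkpointing
-- with a 300-second mean-time-to-recover target (illustrative value).
SHOW PARAMETER fast_start_mttr_target
ALTER SYSTEM SET fast_start_mttr_target = 300 SCOPE = BOTH;
```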
Hope some of this helps - bigdelboy
Similar Messages
-
Hi
I am restoring a hot backup taken through RMAN using following commands:
configure controlfile autobackup on;
BACKUP DATABASE ;
BACKUP ARCHIVELOG ALL DELETE INPUT;
Now I am going to restore that using following commands:
restore spfile from autobackup;
restore controlfile from autobackup;
shutdown immediate;
startup mount;
restore database;
RECOVER DATABASE;
ALTER DATABASE OPEN RESETLOGS;
But it goes fine till restore database. At recover database I get following errors:
archived log for thread 1 with sequence 2461 is already on disk as file /u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_8fbs9bvt_.log
archived log for thread 1 with sequence 2462 is already on disk as file /u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_2_8fbs9chb_.log
unable to find archived log
archived log thread=1 sequence=545
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 09/11/2013 20:41:43
RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 545 and starting SCN of 25891726
I have checked the backup folder and there are only empty date wise folders under archivedlog folders.
If I write RMAN> ALTER DATABASE OPEN RESETLOGS; I get:
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of alter db command at 09/11/2013 20:43:01
ORA-01190: control file or data file 1 is from before the last RESETLOGS
ORA-01110: data file 1: '/u01/app/oracle/oradata/XE/system.dbf'
If I write RMAN> recover database until sequence 545; I get
Starting recover at 11-SEP-13
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=695 device type=DISK
starting media recovery
unable to find archived log
archived log thread=1 sequence=545
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 09/11/2013 21:09:34
RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 545 and starting SCN of 25891726
I don't mind if some data is lost. Will be really thankful if someone can help me get my database open;
Habib
The way you are trying to recover will attempt to recover up to the last known SCN. Try a point-in-time recovery up to a few minutes before the database was shut down or crashed.
Try something like this
run{
set until time "to_date('2013-09-11:00:00:00', 'yyyy-mm-dd:hh24:mi:ss')";
restore spfile from autobackup;
restore controlfile from autobackup;
shutdown immediate;
startup mount;
restore database;
RECOVER DATABASE;
ALTER DATABASE OPEN RESETLOGS;
}
-
HELP: flarcreate fails with ERROR: unable to write archive
Greetings,
Having trouble trying to create flar file.
I am using the following syntax, but the -X option seems to be ignored, and I can't find what the format for that file should be.
appdev03 # flarcreate -n appdev03 -S -H -a "author" -X /tmp/exclude /tools/appdev03.flar
Full Flash
Checking integrity...
Integrity OK.
WARNING: fdo: Ignoring duplicate filter entry. Choosen entry will be: /opt/orca/var -
WARNING: fdo: Ignoring duplicate filter entry. Choosen entry will be: /opt/orca/var -
Running precreation scripts...
Precreation scripts done.
Creating the archive...
cpio: "var/tmp/21502_00000001" ?
cpio: "var/tmp/28914_00000001" ?
cpio: "var/tmp/28939_00000001" ?
cpio: "var/tmp/26624_00000001" ?
cpio: "var/tmp/26627_00000001" ?
cpio: "var/tmp/04733_00000001" ?
cpio: "var/tmp/29298_00000001" ?
cpio: "var/tmp/13627_00000001" ?
cpio: "var/tmp/13628_00000001" ?
cpio: "var/tmp/02531_00000001" ?
cpio: "var/tmp/02532_00000001" ?
cpio: "var/tmp/04735_00000001" ?
cpio: "var/tmp/xsauthn" ?
cpio: "var/tmp/27491_00000001" ?
cpio: "var/tmp/08868_00000001" ?
cpio: "var/tmp/08869_00000001" ?
cpio: "var/tmp/20269_00000001" ?
cpio: "var/tmp/20270_00000001" ?
cpio: "var/tmp/27492_00000001" ?
cpio: "var/tmp/xsauthz" ?
cpio: "var/tmp/16103_00000001" ?
cpio: "var/tmp/16104_00000001" ?
cpio: "var/tmp/29964_00000001" ?
cpio: "var/tmp/03913_00000001" ?
cpio: "var/tmp/09305_00000001" ?
cpio: "var/tmp/10454_00000001" ?
cpio: "var/tmp/10505_00000001" ?
cpio: "var/tmp/07839_00000001" ?
cpio: "var/tmp/08129_00000001" ?
cpio: "var/tmp/00574_00000001" ?
cpio: "var/tmp/00694_00000001" ?
cpio: "var/tmp/05611_00000001" ?
cpio: "var/tmp/05612_00000001" ?
cpio: "var/tmp/01281_00000001" ?
cpio: "var/tmp/06672_00000001" ?
cpio: "var/tmp/06674_00000001" ?
cpio: "var/tmp/14242_00000001" ?
cpio: "var/tmp/14243_00000001" ?
cpio: "var/tmp/13904_00000001" ?
cpio: "var/tmp/13924_00000001" ?
cpio: "var/tmp/25133_00000001" ?
cpio: "var/tmp/25159_00000001" ?
cpio: "var/tmp/27648_00000001" ?
cpio: "var/tmp/27649_00000001" ?
cpio: "var/tmp/29284_00000001" ?
cpio: "var/tmp/29287_00000001" ?
cpio: "var/tmp/12637_00000001" ?
cpio: "var/tmp/12640_00000001" ?
cpio: "var/tmp/13947_00000001" ?
cpio: "var/tmp/03944_00000001" ?
cpio: "var/tmp/03945_00000001" ?
cpio: "var/tmp/13950_00000001" ?
cpio: Error with fstatat() of "opt/appworx/log/RmiServer1106141450.log", errno 2, No such file or directory
cpio: Error with fstatat() of "opt/appworx/log/RmiServer1106141440.log", errno 2, No such file or directory
22708784 blocks
54 error(s)
Unable to write archive file.
ERROR: Unable to write archive.
my -X file /tmp/exclude is as follows, yet it seems ignored:
appdev03 # more /tmp/exclude
/var/tmp/.oracle
/var/tmp/[0-9*]
/var/tmp/xsauthz
/var/crash
/tlg
/tacp
/depot
/tools
/toolsdb
/development
/opt/orca/var
/usr/local/CAcrypto/cacrypto_solaris.tar
/usr/local/CAcrypto/cacrypto_sun4_solaris.tar
/usr/local/CAcrypto/cacrypto.tar
/var/opt/oracle/jre/jre.tar
/var/opt/oracle/oraInventory.09252003.tar
/var/opt/oracle/oraInventory.10212003.tar
/var/opt/oracle/oraInventory.11032003.tar
/var/opt/oracle/jre.tar
/opt/ca/harvest/harvest.tar
/opt/ca/harvest/tomcat55/apache-tomcat-5.5.20.tar
/opt/ca/harvest/tomcat55/apache-tomcat-5.5.20-compat.tar
/opt/ca/caiptodbc/odbc511.tar
/opt/appworx/UWClient/web/images.tar
/opt/appworx/jre.tar
/opt/appworx/apache.tar
/opt/appworx/apache_dev.tar
/opt/appworx61/jre.tar
/opt/appworx61/apache.tar
/opt/appworx61/AWjars.tar
/opt/appworx61/AWutil.tar
/opt/appworx61/install/v61/apache.tar
/opt/appworx61/install/v61/awjars.tar
/opt/appworx61/install/v61/awutil.tar
/opt/appworx61/install/v61/jre.tar
/opt/appworx/log/RmiServer1106141450.log
/opt/appworx/log/RmiServer1106141440.log
/emc/2312fcode_2_00_6.tar
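One thing worth checking (an assumption on my part: I believe the -X exclude file takes literal paths, not shell patterns such as /var/tmp/[0-9*]) is to expand the patterns yourself when building the file, along these lines:

```shell
# Hypothetical workaround: build the exclude list with the globs already
# expanded to literal paths, then append the static entries and re-run.
ls -d /var/tmp/[0-9]* /var/tmp/.oracle /var/tmp/xsauth? 2>/dev/null > /tmp/exclude
echo /var/crash >> /tmp/exclude
# ...append the remaining static paths from your current list, then:
flarcreate -n appdev03 -S -H -a "author" -X /tmp/exclude /tools/appdev03.flar
```

If the cpio "?" lines disappear, the pattern entries were the problem.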
The file is there, but I am not sure of its accuracy:
appdev03 # flar -i appdev03
files_archived_method=cpio
creation_date=20110621221025
creation_master=appdev03
content_name=appdev03
creation_node=appdev03
creation_hardware_class=sun4u
creation_platform=SUNW,Netra-T12
creation_processor=sparc
creation_release=5.9
creation_os_name=SunOS
creation_os_version=Generic_118558-39
files_compressed_method=none
content_author=author
content_architectures=sun4u
type=FULL
The system is Solaris 9 04/03 with Generic_118558-39, on a V1280.
There is plenty of space in the directory being written to, and I can touch a file there as well.
Any advice is appreciated, thanks.
Jeff
What report is it?
Did you try to search Metalink?
How to handle 'ORA-01652: unable to extend the temp segment by 128 in tablespace' error messages? (Doc ID 1359238.1)
R12 Journal Entries Report (XLAJELINESRPT) Has Performance Issue Or Fails With Error: "java.sql.SQLException: ORA-01652: unable to extend temp segment by 128 in tablespace TEMP1" (Doc ID 1141673.1)
This forum may be the right one for your question - General EBS Discussion -
RMAN- unable to find archive log
Hi All,
I am facing this problem while recovering my database. I have checked the archive file, and it is present in the location where it should be. What can be the solution?
Please help...
Thanks and Regards
Amit Raghuvanshi
Hi,
The location is on disk, and the error is:
released channel: ORA_DISK_1
allocated channel: dev2
channel dev2: sid=12 devtype=DISK
Starting recover at 08-AUG-07
starting media recovery
archive log thread 1 sequence 54266 is already on disk as file /erpp/erppdata/log01a.dbf
archive log thread 1 sequence 54267 is already on disk as file /erpp/erppdata/log02a.dbf
unable to find archive log
archive log thread=1 sequence=54259
released channel: dev2
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 08/08/2007 11:33:57
RMAN-06054: media recovery requesting unknown log: thread 1 scn 5965732883373
Regards
Amit Raghuvanshi -
Oracle writes archive log files continuously
Hi all,
I don't know why my Oracle database has this problem. Online logs are written to archive log files continuously (on about a 3-minute cycle). My archive log file size is 300 MB. I have done a STARTUP FORCE on the database; it works, but archive log files are still being written heavily. This is the alert log:
>
Sat Jan 1 14:23:19 2011
Successfully onlined Undo Tablespace 5.
Sat Jan 1 14:23:19 2011
SMON: enabling tx recovery
Sat Jan 1 14:23:19 2011
Database Characterset is AL32UTF8
Opening with internal Resource Manager plan
where NUMA PG = 1, CPUs = 16
replication_dependency_tracking turned off (no async multimaster replication found)
Sat Jan 1 14:23:40 2011
WARNING: AQ_TM_PROCESSES is set to 0. System operation might be adversely affected.
Sat Jan 1 14:24:32 2011
db_recovery_file_dest_size of 204800 MB is 28.64% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Sat Jan 1 14:24:40 2011
Completed: ALTER DATABASE OPEN
Sat Jan 1 14:27:05 2011
Warning: PROCESSES may be too low for current load
shared servers=360, want 7 more but starting only 3 more
Warning: PROCESSES may be too low for current load
shared servers=363, want 9 more but starting only 0 more
Sat Jan 1 14:27:39 2011
Warning: PROCESSES may be too low for current load
shared servers=363, want 9 more but starting only 1 more
Warning: PROCESSES may be too low for current load
shared servers=364, want 9 more but starting only 0 more
Sat Jan 1 14:28:58 2011
Thread 1 advanced to log sequence 9463 (LGWR switch)
Current log# 3 seq# 9463 mem# 0: /u01/oradata/TNORA3/redo03a.log
Current log# 3 seq# 9463 mem# 1: /u02/oradata/TNORA3/redo03b.log
Sat Jan 1 14:30:20 2011
Errors in file /opt/app/oracle/admin/TNORA3/bdump/tnora_j000_17762.trc:
ORA-00604: error occurred at recursive SQL level 1
ORA-00018: maximum number of sessions exceeded
Sat Jan 1 14:39:47 2011
Thread 1 advanced to log sequence 9464 (LGWR switch)
Current log# 1 seq# 9464 mem# 0: /u01/oradata/TNORA3/redo01a.log
Current log# 1 seq# 9464 mem# 1: /u02/oradata/TNORA3/redo01b.log
Sat Jan 1 14:42:51 2011
Errors in file /opt/app/oracle/admin/TNORA3/bdump/tnora_s008_17165.trc:
ORA-07445: exception encountered: core dump [_intel_fast_memcpy.J()+80] [SIGSEGV] [Address not mapped to object] [0x2B8988CE2018] [] []
Sat Jan 1 14:42:57 2011
Thread 1 advanced to log sequence 9465 (LGWR switch)
Current log# 2 seq# 9465 mem# 0: /u01/oradata/TNORA3/redo02a.log
Current log# 2 seq# 9465 mem# 1: /u02/oradata/TNORA3/redo02b.log
Sat Jan 1 14:43:11 2011
found dead shared server 'S008', pid = (42, 1)
Sat Jan 1 14:45:39 2011
Thread 1 advanced to log sequence 9466 (LGWR switch)
Current log# 3 seq# 9466 mem# 0: /u01/oradata/TNORA3/redo03a.log
Current log# 3 seq# 9466 mem# 1: /u02/oradata/TNORA3/redo03b.log
Sat Jan 1 14:48:47 2011
Thread 1 cannot allocate new log, sequence 9467
Checkpoint not complete
Current log# 3 seq# 9466 mem# 0: /u01/oradata/TNORA3/redo03a.log
Current log# 3 seq# 9466 mem# 1: /u02/oradata/TNORA3/redo03b.log
Sat Jan 1 14:48:50 2011
Thread 1 advanced to log sequence 9467 (LGWR switch)
Current log# 1 seq# 9467 mem# 0: /u01/oradata/TNORA3/redo01a.log
Current log# 1 seq# 9467 mem# 1: /u02/oradata/TNORA3/redo01b.log
Sat Jan 1 14:52:11 2011
Thread 1 advanced to log sequence 9468 (LGWR switch)
Current log# 2 seq# 9468 mem# 0: /u01/oradata/TNORA3/redo02a.log
Current log# 2 seq# 9468 mem# 1: /u02/oradata/TNORA3/redo02b.log
Sat Jan 1 14:55:12 2011
Thread 1 advanced to log sequence 9469 (LGWR switch)
Current log# 3 seq# 9469 mem# 0: /u01/oradata/TNORA3/redo03a.log
Current log# 3 seq# 9469 mem# 1: /u02/oradata/TNORA3/redo03b.log
Sat Jan 1 14:58:12 2011
Thread 1 advanced to log sequence 9470 (LGWR switch)
Current log# 1 seq# 9470 mem# 0: /u01/oradata/TNORA3/redo01a.log
Current log# 1 seq# 9470 mem# 1: /u02/oradata/TNORA3/redo01b.log
Sat Jan 1 15:02:00 2011
Thread 1 advanced to log sequence 9471 (LGWR switch)
Current log# 2 seq# 9471 mem# 0: /u01/oradata/TNORA3/redo02a.log
Current log# 2 seq# 9471 mem# 1: /u02/oradata/TNORA3/redo02b.log
Sat Jan 1 15:05:16 2011
Thread 1 advanced to log sequence 9472 (LGWR switch)
Current log# 3 seq# 9472 mem# 0: /u01/oradata/TNORA3/redo03a.log
Current log# 3 seq# 9472 mem# 1: /u02/oradata/TNORA3/redo03b.log
Sat Jan 1 15:08:30 2011
Thread 1 advanced to log sequence 9473 (LGWR switch)
Current log# 1 seq# 9473 mem# 0: /u01/oradata/TNORA3/redo01a.log
Current log# 1 seq# 9473 mem# 1: /u02/oradata/TNORA3/redo01b.log
Sat Jan 1 15:11:12 2011
Thread 1 cannot allocate new log, sequence 9474
Checkpoint not complete
Current log# 1 seq# 9473 mem# 0: /u01/oradata/TNORA3/redo01a.log
Current log# 1 seq# 9473 mem# 1: /u02/oradata/TNORA3/redo01b.log
Sat Jan 1 15:11:14 2011
Thread 1 advanced to log sequence 9474 (LGWR switch)
Current log# 2 seq# 9474 mem# 0: /u01/oradata/TNORA3/redo02a.log
Current log# 2 seq# 9474 mem# 1: /u02/oradata/TNORA3/redo02b.log
Sat Jan 1 15:14:15 2011
Thread 1 advanced to log sequence 9475 (LGWR switch)
Current log# 3 seq# 9475 mem# 0: /u01/oradata/TNORA3/redo03a.log
Current log# 3 seq# 9475 mem# 1: /u02/oradata/TNORA3/redo03b.log
>
Please help me.
This is the content of tail -100 /opt/app/oracle/admin/TNORA3/bdump/tnora_s008_17165.trc | more:
KCBS: Tot bufs in set segwise
KCBS: nbseg[0] is 1568
KCBS: nbseg[1] is 1568
KCBS: nbseg[2] is 1569
KCBS: nbseg[3] is 1568
KCBS: nbseg[4] is 1568
KCBS: nbseg[5] is 1568
KCBS: nbseg[6] is 1569
KCBS: nbseg[7] is 1568
KCBS: nbseg[8] is 1568
KCBS: nbseg[9] is 1568
KCBS: nbseg[10] is 1569
KCBS: nbseg[11] is 1568
KCBS: nbseg[12] is 1568
KCBS: nbseg[13] is 1568
KCBS: nbseg[14] is 1569
KCBS: nbseg[15] is 1568
KCBS: nbseg[16] is 1568
KCBS: nbseg[17] is 1568
KCBS: nbseg[18] is 1569
KCBS: nbseg[19] is 1568
KCBS: Act cnt = 15713
KCBS: bufcnt = 31365, nb_kcbsds = 31365
KCBS: fbufcnt = 445
KCBS: Tot bufs in set segwise
KCBS: nbseg[0] is 1568
KCBS: nbseg[1] is 1568
KCBS: nbseg[2] is 1569
KCBS: nbseg[3] is 1568
KCBS: nbseg[4] is 1568
KCBS: nbseg[5] is 1568
KCBS: nbseg[6] is 1569
KCBS: nbseg[7] is 1568
KCBS: nbseg[8] is 1568
KCBS: nbseg[9] is 1568
KCBS: nbseg[10] is 1569
KCBS: nbseg[11] is 1568
KCBS: nbseg[12] is 1568
KCBS: nbseg[13] is 1568
KCBS: nbseg[14] is 1569
KCBS: nbseg[15] is 1568
KCBS: nbseg[16] is 1568
KCBS: nbseg[17] is 1568
KCBS: nbseg[18] is 1569
KCBS: nbseg[19] is 1568
KCBS: Act cnt = 15713
KCBS: bufcnt = 31365, nb_kcbsds = 31365
KCBS: fbufcnt = 445
KCBS: Tot bufs in set segwise
KCBS: nbseg[0] is 1568
KCBS: nbseg[1] is 1568
KCBS: nbseg[2] is 1568
KCBS: nbseg[3] is 1569
KCBS: nbseg[4] is 1568
KCBS: nbseg[5] is 1568
KCBS: nbseg[6] is 1568
KCBS: nbseg[7] is 1569
KCBS: nbseg[8] is 1568
KCBS: nbseg[9] is 1568
KCBS: nbseg[10] is 1568
KCBS: nbseg[11] is 1569
KCBS: nbseg[12] is 1568
KCBS: nbseg[13] is 1568
KCBS: nbseg[14] is 1568
KCBS: nbseg[15] is 1569
KCBS: nbseg[16] is 1568
KCBS: nbseg[17] is 1568
KCBS: nbseg[18] is 1568
KCBS: nbseg[19] is 1569
KCBS: Act cnt = 15713
KCBS: bufcnt = 31365, nb_kcbsds = 31365
KCBS: fbufcnt = 444
KCBS: Tot bufs in set segwise
KCBS: nbseg[0] is 1568
KCBS: nbseg[1] is 1568
KCBS: nbseg[2] is 1568
KCBS: nbseg[3] is 1569
KCBS: nbseg[4] is 1568
KCBS: nbseg[5] is 1568
KCBS: nbseg[6] is 1568
KCBS: nbseg[7] is 1569
KCBS: nbseg[8] is 1568
KCBS: nbseg[9] is 1568
KCBS: nbseg[10] is 1568
KCBS: nbseg[11] is 1569
KCBS: nbseg[12] is 1568
KCBS: nbseg[13] is 1568
KCBS: nbseg[14] is 1568
KCBS: nbseg[15] is 1569
KCBS: nbseg[16] is 1568
KCBS: nbseg[17] is 1568
KCBS: nbseg[18] is 1568
KCBS: nbseg[19] is 1569
KCBS: Act cnt = 15713
KSOLS: Begin dumping all object level stats elements
KSOLS: Done dumping all elements. Exiting.
Dump event group for SESSION
Unable to dump event group - no SESSION state object
Dump event group for SYSTEM
ssexhd: crashing the process...
Shadow_Core_Dump = partial -
Unable to delete archive log.
Hi,
Our database server's archive log destination is full, but after running
backup archivelog all delete input;
it takes the archive log backup but does not delete the archive logs.
Could you suggest what the reason might be?
RMAN> backup archivelog all delete input;
Starting backup at 17-DEC-10
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=155 recid=422 stamp=737092752
input archive log thread=1 sequence=156 recid=425 stamp=737156432
input archive log thread=1 sequence=157 recid=428 stamp=737257293
input archive log thread=1 sequence=158 recid=431 stamp=737322402
input archive log thread=1 sequence=159 recid=434 stamp=737389991
input archive log thread=1 sequence=160 recid=437 stamp=737408597
input archive log thread=1 sequence=161 recid=440 stamp=737476660
input archive log thread=1 sequence=162 recid=443 stamp=737542384
input archive log thread=1 sequence=163 recid=446 stamp=737634615
input archive log thread=1 sequence=164 recid=449 stamp=737658567
input archive log thread=1 sequence=165 recid=452 stamp=737726432
input archive log thread=1 sequence=166 recid=455 stamp=737827094
input archive log thread=1 sequence=167 recid=456 stamp=737860748
input archive log thread=1 sequence=168 recid=464 stamp=737980097
input archive log thread=1 sequence=169 recid=461 stamp=737980094
input archive log thread=1 sequence=170 recid=467 stamp=737980099
input archive log thread=1 sequence=171 recid=470 stamp=737980425
input archive log thread=1 sequence=172 recid=472 stamp=737981508
input archive log thread=1 sequence=173 recid=474 stamp=737985385
channel ORA_DISK_1: starting piece 1 at 17-DEC-10
channel ORA_DISK_1: finished piece 1 at 17-DEC-10
piece handle=D:\RMAN\BACKUP\0MLVPGRB_1_1 comment=NONE
channel ORA_DISK_1: starting piece 2 at 17-DEC-10
channel ORA_DISK_1: finished piece 2 at 17-DEC-10
piece handle=D:\RMAN\BACKUP\0MLVPGRB_2_1 comment=NONE
channel ORA_DISK_1: starting piece 3 at 17-DEC-10
channel ORA_DISK_1: finished piece 3 at 17-DEC-10
piece handle=D:\RMAN\BACKUP\0MLVPGRB_3_1 comment=NONE
channel ORA_DISK_1: starting piece 4 at 17-DEC-10
channel ORA_DISK_1: finished piece 4 at 17-DEC-10
piece handle=D:\RMAN\BACKUP\0MLVPGRB_4_1 comment=NONE
channel ORA_DISK_1: starting piece 5 at 17-DEC-10
channel ORA_DISK_1: finished piece 5 at 17-DEC-10
piece handle=D:\RMAN\BACKUP\0MLVPGRB_5_1 comment=NONE
channel ORA_DISK_1: starting piece 6 at 17-DEC-10
channel ORA_DISK_1: finished piece 6 at 17-DEC-10
piece handle=D:\RMAN\BACKUP\0MLVPGRB_6_1 comment=NONE
channel ORA_DISK_1: starting piece 7 at 17-DEC-10
channel ORA_DISK_1: finished piece 7 at 17-DEC-10
piece handle=D:\RMAN\BACKUP\0MLVPGRB_7_1 comment=NONE
channel ORA_DISK_1: starting piece 8 at 17-DEC-10
channel ORA_DISK_1: finished piece 8 at 17-DEC-10
piece handle=D:\RMAN\BACKUP\0MLVPGRB_8_1 comment=NONE
channel ORA_DISK_1: starting piece 9 at 17-DEC-10
channel ORA_DISK_1: finished piece 9 at 17-DEC-10
piece handle=D:\RMAN\BACKUP\0MLVPGRB_9_1 comment=NONE
channel ORA_DISK_1: starting piece 10 at 17-DEC-10
channel ORA_DISK_1: finished piece 10 at 17-DEC-10
piece handle=D:\RMAN\BACKUP\0MLVPGRB_10_1 comment=NONE
channel ORA_DISK_1: starting piece 11 at 17-DEC-10
channel ORA_DISK_1: finished piece 11 at 17-DEC-10
piece handle=D:\RMAN\BACKUP\0MLVPGRB_11_1 comment=NONE
channel ORA_DISK_1: starting piece 12 at 17-DEC-10
channel ORA_DISK_1: finished piece 12 at 17-DEC-10
piece handle=D:\RMAN\BACKUP\0MLVPGRB_12_1 comment=NONE
channel ORA_DISK_1: starting piece 13 at 17-DEC-10
channel ORA_DISK_1: finished piece 13 at 17-DEC-10
piece handle=D:\RMAN\BACKUP\0MLVPGRB_13_1 comment=NONE
channel ORA_DISK_1: starting piece 14 at 17-DEC-10
channel ORA_DISK_1: finished piece 14 at 17-DEC-10
piece handle=D:\RMAN\BACKUP\0MLVPGRB_14_1 comment=NONE
channel ORA_DISK_1: starting piece 15 at 17-DEC-10
channel ORA_DISK_1: finished piece 15 at 17-DEC-10
piece handle=D:\RMAN\BACKUP\0MLVPGRB_15_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:03:41
channel ORA_DISK_1: deleting archive log(s)
archive log filename=D:\ORANT\ARCLOG3\REDO03.LOGARC00155.001 recid=422 stamp=737092752
archive log filename=D:\ORANT\ARCLOG3\REDO03.LOGARC00156.001 recid=425 stamp=737156432
archive log filename=D:\ORANT\ARCLOG3\REDO03.LOGARC00157.001 recid=428 stamp=737257293
archive log filename=D:\ORANT\ARCLOG3\REDO03.LOGARC00158.001 recid=431 stamp=737322402
archive log filename=D:\ORANT\ARCLOG3\REDO03.LOGARC00159.001 recid=434 stamp=737389991
archive log filename=D:\ORANT\ARCLOG3\REDO03.LOGARC00160.001 recid=437 stamp=737408597
archive log filename=D:\ORANT\ARCLOG3\REDO03.LOGARC00161.001 recid=440 stamp=737476660
archive log filename=D:\ORANT\ARCLOG3\REDO03.LOGARC00162.001 recid=443 stamp=737542384
archive log filename=D:\ORANT\ARCLOG3\REDO03.LOGARC00163.001 recid=446 stamp=737634615
archive log filename=D:\ORANT\ARCLOG3\REDO03.LOGARC00164.001 recid=449 stamp=737658567
archive log filename=D:\ORANT\ARCLOG3\REDO03.LOGARC00165.001 recid=452 stamp=737726432
archive log filename=D:\ORANT\ARCLOG3\REDO03.LOGARC00166.001 recid=455 stamp=737827094
archive log filename=D:\ORANT\ARCLOG1\REDO01.LOGARC00167.001 recid=456 stamp=737860748
archive log filename=D:\ORANT\ARCLOG3\REDO03.LOGARC00168.001 recid=464 stamp=737980097
archive log filename=D:\ORANT\ARCLOG3\REDO03.LOGARC00169.001 recid=461 stamp=737980094
archive log filename=D:\ORANT\ARCLOG3\REDO03.LOGARC00170.001 recid=467 stamp=737980099
archive log filename=D:\ORANT\ARCLOG3\REDO03.LOGARC00171.001 recid=470 stamp=737980425
archive log filename=D:\ORANT\ARCLOG2\REDO02.LOGARC00172.001 recid=472 stamp=737981508
archive log filename=D:\ORANT\ARCLOG1\REDO01.LOGARC00173.001 recid=474 stamp=737985385
Finished backup at 17-DEC-10
Starting Control File and SPFILE Autobackup at 17-DEC-10
piece handle=D:\RMAN\BACKUP\C-1738882432-20101217-02 comment=NONE
Finished Control File and SPFILE Autobackup at 17-DEC-10 -
Deleting the old Archive Log Files from the Hard Disk
Hello All,
I want to delete the old archive files of Oracle. These files are very old and are consuming a huge amount of disk space.
Is there any way I can delete these files from the hard disk?
Can I directly delete them using an operating system command?
Will it make any difference to the normal functioning of my database?
If I need to delete these files, do I need to bring the database down?
Please guide me.
I need to do this to free some space on the hard disk.
Thanks in advance.
Himanshu
Hi.
Keep the archived logs from the time of the last backup up to the current time on disk, together with the backup itself, and keep older archived logs together with older warm/cold backup files on tape for as long as there is enough capacity, or as your recovery strategy dictates. This way you might be able to roll a recovery forward without restoring these files from tape, which is often faster.
Older archived logs can be deleted manually, with a scheduled OS script, or automatically with RMAN.
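As an RMAN sketch of that scheduled cleanup (the 7-day window and the disk device type are illustrative; align them with your backup retention policy):

```sql
-- RMAN: remove archive logs older than 7 days that have been
-- backed up to disk at least once (illustrative policy).
DELETE ARCHIVELOG ALL
  BACKED UP 1 TIMES TO DEVICE TYPE DISK
  COMPLETED BEFORE 'SYSDATE - 7';
```

The BACKED UP clause is the safety net: logs never backed up are left alone.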
Good luck
rgds
Kjell Ove -
I have a primary database that needs to import a large amount of data and database objects. 1) Do I shut down the standby? 2) Turn off archivelog mode? 3) Perform the import? 4) Rebuild the standby? Or is there a better way or best practice?
Instead of rebuilding the (whole) standby, you can take an incremental (FROM SCN) backup from the primary and restore it on the standby. That way, for example:
a. If only two out of 12 tablespaces are affected by the import, the incremental backup would effectively contain only the blocks changed in those two tablespaces (plus some changes in SYSTEM and UNDO), provided there are no other changes in the other ten tablespaces.
b. If the size of the import is only 15% of the database, the incremental backup to restore to the standby is correspondingly small.
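A minimal sketch of that roll-forward (the SCN 1234567 and the /tmp path are placeholders, not values from the thread):

```sql
-- On the standby, find how far it has applied:
--   SELECT current_scn FROM v$database;
-- Then in RMAN on the primary (1234567 is a placeholder SCN):
BACKUP INCREMENTAL FROM SCN 1234567 DATABASE
  FORMAT '/tmp/stby_roll_%U' TAG 'STBY_ROLLFWD';
-- Copy the pieces to the standby host, CATALOG them there, then:
--   RECOVER DATABASE NOREDO;
```

The NOREDO option applies the incremental blocks without needing the intervening archive logs.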
Hemant K Chitale -
How to monitor the successful archive log shipment to the standby database
Hello,
From the primary database, which data dictionary views do I query to learn whether the standby database has received and applied the archive logs from the primary? Both databases are 11.2.0.2 on Linux. Another question: which views on the primary database contain an indication, sent by the standby, that the standby is up and functional? My purpose is to query only the primary database for information that tells me the standby is alive and functioning. Thank you in advance.
You can troubleshoot the standby with views such as:
v$managed_standby (standby)
v$dataguard_status
v$dataguard_stats
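For checks run purely on the primary, something along these lines (standard views; DEST_ID 2 as the standby destination is an assumption, check your log_archive_dest_n settings):

```sql
-- Has the standby destination received recent logs, and is it healthy?
SELECT dest_id, status, archived_seq#, applied_seq#, error
FROM   v$archive_dest_status
WHERE  dest_id = 2;

-- Which archived logs does the primary believe are applied on the standby?
SELECT sequence#, applied
FROM   v$archived_log
WHERE  dest_id = 2
ORDER  BY sequence#;
```

A growing gap between archived_seq# and applied_seq#, or a non-VALID status, is the usual first sign of trouble.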
Some new views were introduced in 11g; check the link below too:
http://docs.oracle.com/cd/B28359_01/server.111/b28294/views.htm#i79129 -
I want to set up RMAN so it does not delete any archive log files that will still be used by GoldenGate. Once GoldenGate is finished with an archive log file, RMAN can back it up and delete it. My understanding is that I can issue the command "REGISTER EXTRACT <ext_name>, LOGRETENTION" to enable this functionality. Is that the only thing I need to do?
Hello,
Yes, this is the right way when using classic capture.
Use the command: REGISTER EXTRACT extract_name LOGRETENTION.
It creates an (artificial) Oracle Streams capture group that prevents RMAN from deleting archive logs that are still pending for the GoldenGate capture process.
You can see this integration by running SELECT * FROM DBA_CAPTURE; after executing the register command.
Then, when RMAN tries to delete an archive file still pending for GG, this warning appears in the RMAN logs:
Error: RMAN 8317 (RMAN-08317 RMAN-8317)
Text: WARNING: archived log not deleted, needed for standby or upstream capture process.
This is a good manageability feature; I think it is new in GG 11.1.
Tip: to avoid RMAN backing up a pending archive log multiple times, there is an option: BACKUP ARCHIVELOG NOT BACKED UP 1 TIMES.
If you remove a capture process that is registered with the database, you need to use this command to remove the Streams capture group:
UNREGISTER EXTRACT extract_name LOGRETENTION;
Then, if you query DBA_CAPTURE, the artificial Streams group is gone.
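The steps above can be sketched as follows (the extract name ext1 and user ggadmin are illustrative, not from the thread):

```sql
-- In GGSCI (classic capture):
--   DBLOGIN USERID ggadmin PASSWORD ...
--   REGISTER EXTRACT ext1, LOGRETENTION
-- Then, from SQL*Plus, the artificial Streams capture group is visible:
SELECT capture_name, status, required_checkpoint_scn
FROM   dba_capture;
```

RMAN will not delete archive logs needed beyond required_checkpoint_scn for that capture.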
I hope this helps.
Regards
Arturo -
Which log file locates the problem?
Everywhere on the Internet it says to check your log file to see what causes any problem (kernel errors, apps unexpectedly quitting, etc.) with the OS. The problem is that there are so many log files; which do I need to review to find the problem?
Crash logs are located in ~/Library/Logs/CrashReporter and /Library/Logs/CrashReporter. They are best viewed via the Console.app. Launch it, select File->Open Quickly, and navigate to each one. Additionally, review the Console and System logs for other pertinent information, using the Console.app.
-
Understanding a kernel log and fixing the problem
Is anyone able to understand the kernel panic log below and help me fix the problem? I'm on an Intel Core 2 Duo iMac (late 2006) running 10.4.11. Here's the log:
Mon Sep 22 19:53:13 2008
panic(cpu 0 caller 0x001A49CB): Unresolved kernel trap (CPU 0, Type 14=page fault), registers:
CR0: 0x8001003b, CR2: 0x00000002, CR3: 0x00f17000, CR4: 0x000006e0
EAX: 0x000000ff, EBX: 0x00000001, ECX: 0x00000000, EDX: 0x25b73913
CR2: 0x00000002, EBP: 0x140e3d58, ESI: 0x125e2000, EDI: 0x02e0a800
EFL: 0x00010246, EIP: 0x00a0ddac, CS: 0x00000008, DS: 0x00000010
Backtrace, Format - Frame : Return Address (4 potential args on stack)
0x140e3a58 : 0x128d0d (0x3cc65c 0x140e3a7c 0x131f95 0x0)
0x140e3a98 : 0x1a49cb (0x3d2a94 0x0 0xe 0x3d22b8)
0x140e3ba8 : 0x19b3a4 (0x140e3bc0 0x287 0x140e3bf8 0x140a17)
0x140e3d58 : 0xa0fc3b (0x25b738c8 0x25b738e0 0x5f 0x2827c00)
0x140e3e78 : 0xa110cb (0x0 0x23 0x140e3ed8 0x1a2e7e)
0x140e3ed8 : 0x9ba88b (0x125e2000 0x0 0x0 0x0)
0x140e3f08 : 0x39b96f (0x2b68400 0x2b45d40 0x1 0x268dfe4)
0x140e3f58 : 0x39ab41 (0x2b45d40 0x136064 0x0 0x268dfe4)
0x140e3f88 : 0x39a877 (0x2c7c640 0x2c7c640 0x134db9 0x136064)
0x140e3fc8 : 0x19b21c (0x2c7c640 0x0 0x19e0b5 0x27e37a0) Backtrace terminated-invalid frame pointer 0x0
Kernel loadable modules in backtrace (with dependencies):
com.apple.driver.AirPortBrcm43xx(244.46.9)@0x9b3000
dependency: com.apple.iokit.IONetworkingFamily(1.5.1)@0x969000
dependency: com.apple.iokit.IOPCIFamily(2.2)@0x566000
dependency: com.apple.iokit.IO80211Family(163.1)@0x994000
Kernel version:
Darwin Kernel Version 8.11.1: Wed Oct 10 18:23:28 PDT 2007; root:xnu-792.25.20~1/RELEASE_I386
Thanks!
Understanding crash logs isn't easy, and it's hard (sometimes impossible) to decipher the cause of the problem. Take a look at Apple's Crash Reporter document at http://developer.apple.com/technotes/tn2004/tn2123.html. The log does reference AirPort. Are you using AirPort?
Kernel panics are usually caused by a hardware problem, frequently RAM, a USB device, or a FireWire device. When trying to troubleshoot problems, disconnect all external devices except your monitor, keyboard, and mouse. Do you experience the same problems?
There may be a solution in one of these links.
Mac OS X Kernel Panic FAQ
Resolving Kernel Panics
Avoiding and eliminating Kernel panics
12-Step Program to Isolate Freezes and/or Kernel Panics
Cheers, Tom -
Unable to write a log file from EJB
Hi, I have a stateless EJB deployed on OC4J 10.1.3, and it is trying to create a log file at the location given in a properties file. When it tries to create the file, it gets "Access is denied" for that particular folder. I have changed the folder to another location, but the result is the same. I am able to create a file in the same folder using a simple Java class.
Here is the stack trace:
javax.ejb.CreateException: D:\Kernel7.3\GW_EJB\log (Access is denied)
    at com.xxxxx.fcubs.gw.ejb.GWEJBBean.ejbCreate(GWEJBBean.java:140)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:585)
    at com.evermind.server.ejb.interceptor.joinpoint.EJBJoinPointImpl.invoke(EJBJoinPointImpl.java:35)
    at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:69)
    at com.evermind.server.ejb.interceptor.system.DMSInterceptor.invoke(DMSInterceptor.java:52)
    at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:69)
    at com.evermind.server.ejb.interceptor.system.SetContextActionInterceptor.invoke(SetContextActionInterceptor.java:34)
    at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:69)
    at com.evermind.server.ejb.LifecycleManager$LifecycleCallback.invokeLifecycleMethod(LifecycleManager.java:619)
    at com.evermind.server.ejb.LifecycleManager$LifecycleCallback.invokeLifecycleMethod(LifecycleManager.java:606)
    at com.evermind.server.ejb.LifecycleManager.postConstruct(LifecycleManager.java:89)
    at com.evermind.server.ejb.StatelessSessionBeanPool.createContextImpl(StatelessSessionBeanPool.java:41)
    at com.evermind.server.ejb.BeanPool.createContext(BeanPool.java:405)
    at com.evermind.server.ejb.BeanPool.allocateContext(BeanPool.java:232)
    at com.evermind.server.ejb.StatelessSessionEJBHome.getContextInstance(StatelessSessionEJBHome.java:51)
    at com.evermind.server.ejb.StatelessSessionEJBObject.OC4J_invokeMethod(StatelessSessionEJBObject.java:83)
    at GWEJBRemote_StatelessSessionBeanWrapper2.processMsg(GWEJBRemote_StatelessSessionBeanWrapper2.java:66)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:585)
    at com.evermind.server.rmi.RmiMethodCall.run(RmiMethodCall.java:53)
    at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:303)
    at java.lang.Thread.run(Thread.java:595)
javax.ejb.EJBException: Exception while creating bean/context instance for bean GW_EJB_Bean; nested exception is: javax.ejb.CreateException: D:\Kernel7.3\GW_EJB\log (Access is denied)
    at com.evermind.server.rmi.RMICall.EXCEPTION_ORIGINATES_FROM_THE_REMOTE_SERVER(RMICall.java:110)
    at com.evermind.server.rmi.RMICall.throwRecordedException(RMICall.java:128)
    at com.evermind.server.rmi.RMIClientConnection.obtainRemoteMethodResponse(RMIClientConnection.java:472)
    at com.evermind.server.rmi.RMIClientConnection.invokeMethod(RMIClientConnection.java:416)
    at com.evermind.server.rmi.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:63)
    at com.evermind.server.rmi.RecoverableRemoteInvocationHandler.invoke(RecoverableRemoteInvocationHandler.java:28)
    at com.evermind.server.ejb.StatelessSessionRemoteInvocationHandler.invoke(StatelessSessionRemoteInvocationHandler.java:43)
    at __Proxy1.processMsg(Unknown Source)
    at GW_EJB_Client.callEJB(GW_EJB_Client.java:68)
    at GW_EJB_Client.main(GW_EJB_Client.java:22)
Caused by: javax.ejb.CreateException: D:\Kernel7.3\GW_EJB\log (Access is denied)
    at com.xxxxx.fcubs.gw.ejb.GWEJBBean.ejbCreate(GWEJBBean.java:140)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:585)
    at com.evermind.server.ejb.interceptor.joinpoint.EJBJoinPointImpl.invoke(EJBJoinPointImpl.java:35)
    at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:69)
    at com.evermind.server.ejb.interceptor.system.DMSInterceptor.invoke(DMSInterceptor.java:52)
    at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:69)
    at com.evermind.server.ejb.interceptor.system.SetContextActionInterceptor.invoke(SetContextActionInterceptor.java:34)
    at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:69)
    at com.evermind.server.ejb.LifecycleManager$LifecycleCallback.invokeLifecycleMethod(LifecycleManager.java:619)
    at com.evermind.server.ejb.LifecycleManager$LifecycleCallback.invokeLifecycleMethod(LifecycleManager.java:606)
    at com.evermind.server.ejb.LifecycleManager.postConstruct(LifecycleManager.java:89)
    at com.evermind.server.ejb.StatelessSessionBeanPool.createContextImpl(StatelessSessionBeanPool.java:41)
    at com.evermind.server.ejb.BeanPool.createContext(BeanPool.java:405)
    at com.evermind.server.ejb.BeanPool.allocateContext(BeanPool.java:232)
    at com.evermind.server.ejb.StatelessSessionEJBHome.getContextInstance(StatelessSessionEJBHome.java:51)
    at com.evermind.server.ejb.StatelessSessionEJBObject.OC4J_invokeMethod(StatelessSessionEJBObject.java:83)
    at GWEJBRemote_StatelessSessionBeanWrapper2.processMsg(GWEJBRemote_StatelessSessionBeanWrapper2.java:66)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:585)
    at com.evermind.server.rmi.RmiMethodCall.run(RmiMethodCall.java:53)
    at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:303)
    at java.lang.Thread.run(Thread.java:595)
public void ejbCreate() throws CreateException {
    try {
        initializeprop();
    } catch (Exception ex) {
        throw new CreateException(ex.getMessage());
    }
}
On line 140 I am throwing a CreateException.
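For what it's worth, on Windows a "(Access is denied)" error from opening a file often means the path refers to an existing directory (D:\Kernel7.3\GW_EJB\log looks like a directory name) rather than a file, or that the account running the OC4J service lacks write permission there. A minimal sketch of a pre-flight check you could call before opening the log file; the class and method names here are my own illustration, not part of your code:

```java
import java.io.File;
import java.io.IOException;

public class LogDirCheck {
    // Describes whether the given path is usable as a log directory.
    // The path would come from the properties file in the real bean.
    static String checkLogDir(String path) {
        File dir = new File(path);
        if (!dir.exists()) {
            return "missing: " + path;
        }
        if (!dir.isDirectory()) {
            // Opening a stream on a directory, or colliding with a plain
            // file named "log", surfaces as "Access is denied" on Windows.
            return "not a directory: " + path;
        }
        if (!dir.canWrite()) {
            return "not writable: " + path;
        }
        return "ok";
    }

    public static void main(String[] args) throws IOException {
        // Exercise the check with a plain file and with a real directory.
        File tmp = File.createTempFile("gwejb", null);
        System.out.println(checkLogDir(tmp.getAbsolutePath()));
        System.out.println(checkLogDir(tmp.getParent()));
        tmp.delete();
    }
}
```

Logging the result of such a check inside ejbCreate() would tell you immediately whether the problem is the path itself or the service account's permissions.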
The initializeprop() method is used to initialize the logger properties, etc. -
Unable to delete archive logs from primary
Dear DBAs,
I made the necessary configuration in RMAN at both sites (primary and physical standby) as below:
on primary:
configure archivelog deletion policy to applied on standby;
alter system set "_log_deletion_policy"=ALL scope=spfile;
log_archive_dest_10=LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=ora10cs1
log_archive_dest_1=service="(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)
(HOST=LBLX-ORA10-SCS1)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=ora10scs1_XPT)(INSTANCE_NAME=ora10cs1)(SERVER=dedicated)))",
ARCH SYNC NOAFFIRM delay=0 OPTIONAL max_failure=0 max_connections=1 reopen=300 db_unique_name="ora10scs1" register net_timeout=180 valid_for=
(online_logfile,primary_role)
at standby:
configure archivelog deletion policy to none;
By the way, I'm using Oracle 10gR2 patch 4 on RHEL 5.
So far the database won't delete the archive logs automatically, and the current available space in the flash recovery area is only 2MB.
Please advise.
Thanks in advance.
Hi,
I have a different issue; I am not sure I can combine it on the same thread.
From the docs, this command appears to be useful, but I am not sure if it can be applied to my environment.
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY
We have some databases with a Data Guard configuration on Sun. We take backups only on the primary, using RMAN online backups with archive logs on alternate days. But the archive logs on the standby have to be deleted manually every day (I mean we have a script scheduled for it).
Note: we use "delete input" on the primary backup every day, after the logs are backed up by RMAN. Anyway, logs that are not yet applied on the standby will not be deleted, as RMAN is intelligent enough to keep them safe.
In the past we attempted to delete them using an RMAN script with the command shown below.
delete archivelog all completed before 'sysdate-2/24';
This causes a problem every time with the primary database backup, as it expects archive logs on the primary that were deleted by RMAN on the standby. We had to run CROSSCHECK ARCHIVELOG ALL every time to resolve the issue. There seems to be a mismatch of the archive log status between primary and standby.
Please let me know whether the above "CONFIGURE ARCHIVELOG DELETION POLICY" command should be set on both primary and standby, or give an idea/steps for how I can use this.
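For what it's worth, here is a sketch of how the pieces discussed in this thread fit together in 10gR2 RMAN; treat it as an outline only, and adjust the retention window ('SYSDATE-2/24' below) to your environment:

```sql
-- On the primary: protect archive logs not yet applied on the standby
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;

-- On the standby: an explicit cleanup that respects the deletion policy
DELETE ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-2/24';

-- If logs were removed outside RMAN, resync the status before the next backup
CROSSCHECK ARCHIVELOG ALL;
DELETE EXPIRED ARCHIVELOG ALL;
```

With the policy set, DELETE should skip logs the standby still needs, which is meant to avoid the primary/standby status mismatch described above.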
Regards,
Vasu.
Edited by: vasu77 on Aug 31, 2009 4:02 PM
Edited by: vasu77 on Aug 31, 2009 4:05 PM
Edited by: vasu77 on Aug 31, 2009 4:10 PM -
I have an iPod touch 5th gen and am unable to unlock the screen. What could be the problem?
I have an iPod touch 5th gen and am unable to unlock my screen. What could be the problem?
Try:
- iOS: Not responding or does not turn on
- Also try DFU mode after trying recovery mode
How to put iPod touch / iPhone into DFU mode « Karthik's scribblings
- If not successful and you can't fully turn the iOS device off, let the battery fully drain. After charging for at least an hour, try the above again.
- Try another cable
- Try on another computer
- If still not successful, that usually indicates a hardware problem, and an appointment at the Genius Bar of an Apple Store is in order.
Apple Retail Store - Genius Bar