Skip archive log on logical standby
Hi,
I want to skip archive logs from number 1150 to 1161 on a logical standby database.
I know we can skip DDL and DML on a logical standby.
How can I achieve this?
(Oracle 10g Enterprise Edition)
Hello;
I do not believe this is an option. The closest to this would be "applying modifications to specific tables"
See :
9.4.3 Using DBMS_LOGSTDBY.SKIP to Prevent Changes to Specific Schema Objects
Data Guard Concepts and Administration 10g Release 2 (10.2) B14239-05
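For reference, a skip rule of that kind looks like the following sketch (SCOTT.EMP is a placeholder object; note it skips changes to a table, not a range of archive logs, and SQL Apply must be stopped first):

```sql
-- Sketch: skip all DML for one object on the logical standby.
-- SCOTT.EMP is a placeholder name.
ALTER DATABASE STOP LOGICAL STANDBY APPLY;
EXECUTE DBMS_LOGSTDBY.SKIP('DML', 'SCOTT', 'EMP');
ALTER DATABASE START LOGICAL STANDBY APPLY;
```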
While this is not the answer you want, skipping archive logs would create a gap and cause many other issues you don't want.
Best Regards
mseberg
Similar Messages
-
*HOW TO DELETE THE ARCHIVE LOGS ON THE STANDBY*
I have set the RMAN CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY; on my physical standby server.
My archivelog files are not deleted on standby.
I have set the CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default on the Primary server.
I've checked the archive logs in the FRA and they are not being deleted on the STANDBY. Do I have to do something for the configuration to take effect, like run an RMAN backup?
I've done a lot of research and I'm getting mixed answers. Please help. Thanks in advance.
Setting the policy will not delete the archive logs on the standby. (I found a thread where the Data Guard product manager says "The deletion policy on both sides will do what you want".) However, I still
like to clean them off with RMAN.
I would use RMAN to delete them so that it can use that policy and you are protected in case of a gap, transport issue, etc.
There are many ways to do this. You can simply run RMAN and have it clean out the Archive.
Example :
#!/bin/bash
# Name: db_rman_arch_standby.sh
# Purpose: Database rman backup
# Usage : db_rman_arch_standby <DBNAME>
if [ "$1" ]
then DBNAME=$1
else
echo "$(basename $0) : Syntax error : use db_rman_arch_standby <DBNAME>"
exit 1
fi
. /u01/app/oracle/dba_tool/env/${DBNAME}.env
echo ${DBNAME}
MAILHEADER="Archive_cleanup_on_STANDBY_${DBNAME}"
echo "Starting RMAN..."
$ORACLE_HOME/bin/rman target / catalog <user>/<password>@<catalog> > /tmp/rmandbarchstandby.out << EOF
delete noprompt ARCHIVELOG UNTIL TIME 'SYSDATE-8';
exit
EOF
echo `date`
echo
echo 'End of archive cleanup on STANDBY'
mailx -s ${MAILHEADER} $MAILTO < /tmp/rmandbarchstandby.out
# End of Script
This script sources an environment file (ENV) so the crontab has an environment.
Example ( STANDBY.env )
ORACLE_BASE=/u01/app/oracle
ULIMIT=unlimited
ORACLE_SID=STANDBY
ORACLE_HOME=$ORACLE_BASE/product/11.2.0.2
ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
LIBPATH=$LD_LIBRARY_PATH:/usr/lib
TNS_ADMIN=$ORACLE_HOME/network/admin
PATH=$ORACLE_HOME/bin:$ORACLE_BASE/dba_tool/bin:/bin:/usr/bin:/usr/ccs/bin:/etc:/usr/sbin:/usr/ucb:$HOME/bin:/usr/bin/X11:/sbin:/usr/lbin:/GNU/bin/make:/u01/app/oracle/dba_tool/bin:/home/oracle/utils/SCRIPTS:/usr/local/bin:.
export TERM=vt100
export ORACLE_BASE ORACLE_SID ORACLE_TERM ULIMIT
export ORACLE_HOME
export LIBPATH LD_LIBRARY_PATH ORA_NLS33
export TNS_ADMIN
export PATH
export MAILTO=?? your email here
Note: use the env command in Unix to get your settings.
There are probably ten other/better ways to do this, but this works.
other options ( you decide )
Configure RMAN to purge archivelogs after applied on standby [ID 728053.1]
http://www.oracle.com/technetwork/database/features/availability/rman-dataguard-10g-wp-1-129486.pdf
Maintenance Of Archivelogs On Standby Databases [ID 464668.1]
Tip: I don't care myself, but in some of the other forums people seem to mind if you use all caps in the subject. They say it's shouting. My take is, if somebody is shouting at me I'm probably going to just move away.
Best Regards
mseberg
-
Archive destination on Logical Standby
Dear colleagues,
A physical standby database writes archive logs to the '/oradata3/iclgstdb/archivelog' destination because the parameter standby_archive_dest is set to '/oradata3/iclgstdb/archivelog':
SQL> sho parameter standby
NAME                     TYPE    VALUE
standby_archive_dest     string  /oradata3/iclgstdb/archivelog
standby_file_management  string  AUTO
When I switch the physical to a logical standby then archivelogs are written in '$ORACLE_HOME/dbs/arch' destination:
SQL> ARCHIVE LOG LIST
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /ora/OraHome11203/dbs/arch
Oldest online log sequence 1
Next log sequence to archive 2
Current log sequence 2
But I don't see this destination in v$parameter view.
How can I change "Archive destination /ora/OraHome11203/dbs/arch" to '/oradata3/iclgstdb/archivelog'?
Hi,
The answer given above by our DBA friend is correct... just want to add a few things:
We need to update both parameters, log_archive_dest_1 and log_archive_dest_2, on the primary and standby.
On the primary,
log_archive_dest_1 would be a local directory or ASM...
log_archive_dest_2 would be the standby database service name, and archives will be generated at the standby location...
Correct answer:
"If your version of Oracle is 11, then the STANDBY_ARCHIVE_DEST parameter is deprecated.
Set LOG_ARCHIVE_DEST_1 to '/oradata3/iclgstdb/archivelog' on the standby."
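A sketch of that change on the standby (syntax only; VALID_FOR is shown as one common choice, adjust it to your configuration):

```sql
-- Sketch: point local archiving on the standby at the desired
-- directory. Run on the standby; the path comes from the question above.
ALTER SYSTEM SET log_archive_dest_1 =
  'LOCATION=/oradata3/iclgstdb/archivelog VALID_FOR=(ALL_LOGFILES,ALL_ROLES)'
  SCOPE=BOTH;
```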
Regards,
GRB -
How to skip DELETE command on logical standby?
IBM AIX 5.3
Oracle DB version 10.2.0.3
Can I skip just DELETE or UPDATE on the standby instead of all DML on a particular object using dbms_logstdby.skip?
E.g. this will skip all DML on the test object:
exec dbms_logstdby.skip(statement => 'DML',schema_name => 'SCOTT', object_name => 'TEST');
Can I use 'DELETE' as a statement type in dbms_logstdby.skip to just skip delete on object.
exec dbms_logstdby.skip(statement => 'DELETE',schema_name => 'SCOTT', object_name => 'TEST');
You cannot skip all deletions on a particular table with a statement such as this.
You can use SKIP_TRANSACTION to skip a transaction or even multiple transactions but the Oracle documentation states:
SKIP_TRANSACTION is an inherently dangerous operation. Do not invoke this procedure unless you have examined the transaction in question through the V$LOGMNR_CONTENTS view and have taken compensating actions at the logical standby database.
SKIP_TRANSACTION Procedure
Specifies transactions that should not be applied on the logical standby database. Be careful in using this procedure, because not applying specific transactions may cause data corruption at the logical standby database.
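A hedged sketch of the procedure's use (the three transaction-ID values are placeholders you would first look up in V$LOGMNR_CONTENTS):

```sql
-- Sketch: skip a single transaction identified by its XID components
-- (XIDUSN, XIDSLT, XIDSQN); 1, 2, 3 are placeholder values.
ALTER DATABASE STOP LOGICAL STANDBY APPLY;
EXECUTE DBMS_LOGSTDBY.SKIP_TRANSACTION(1, 2, 3);
ALTER DATABASE START LOGICAL STANDBY APPLY;
```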
Regards
Tim Boles -
Standby logs in logical standby
I am currently running a logical standby database in an Oracle 10gR2/Linux environment.
The primary and standby databases both seem to be running fine, however I am concerned that there seems to be an excessively large number of standby log files marked as 'CURRENT' - 127 files spanning more than 36 hrs at the time of writing.
According to the alert log of the standby database, more than 35 standby log files were deleted last time a series of files was deleted.
Can anybody suggest why there would be such a large number of standby files marked as 'CURRENT'? Is it possible to find out why these files are required, and therefore clear any potential blockage?
Thanks
Hi eceramm,
Thanks for your reply.
The last 100 lines from the logical standby alert log are:
LOGMINER: End mining logfile: /opt/oracle/oradata/UATDR/onlinelog/o1_mf_4_3vpq26xd_.log
Tue May 6 08:08:41 2008
Thread 1 advanced to log sequence 8186
Current log# 1 seq# 8186 mem# 0: /opt/oracle/oradata/UATDR/onlinelog/o1_mf_1_3vpp35lp_.log
Current log# 1 seq# 8186 mem# 1: /opt/oracle/flash_recovery_area/UATDR/onlinelog/o1_mf_1_3vpp36s4_.log
Tue May 6 08:12:35 2008
RFS[4]: Successfully opened standby log 4: '/opt/oracle/oradata/UATDR/onlinelog/o1_mf_4_3vpq26xd_.log'
Tue May 6 08:12:35 2008
RFS LogMiner: Client enabled and ready for notification
Tue May 6 08:12:36 2008
RFS LogMiner: Registered logfile [opt/oracle/arch/standby/UATDR/1_6351_647020607.dbf] to LogMiner session id [1]
Tue May 6 08:12:36 2008
LOGMINER: Begin mining logfile: /opt/oracle/oradata/UATDR/onlinelog/o1_mf_4_3vpq26xd_.log
Tue May 6 08:12:36 2008
LOGMINER: End mining logfile: /opt/oracle/oradata/UATDR/onlinelog/o1_mf_4_3vpq26xd_.log
Tue May 6 08:38:42 2008
Thread 1 advanced to log sequence 8187
Current log# 3 seq# 8187 mem# 0: /opt/oracle/oradata/UATDR/onlinelog/o1_mf_3_3vpp39z0_.log
Current log# 3 seq# 8187 mem# 1: /opt/oracle/flash_recovery_area/UATDR/onlinelog/o1_mf_3_3vpp3bto_.log
Tue May 6 08:42:35 2008
RFS[6]: Successfully opened standby log 4: '/opt/oracle/oradata/UATDR/onlinelog/o1_mf_4_3vpq26xd_.log'
Tue May 6 08:42:35 2008
RFS LogMiner: Client enabled and ready for notification
Tue May 6 08:42:35 2008
RFS LogMiner: Registered logfile [opt/oracle/arch/standby/UATDR/1_6352_647020607.dbf] to LogMiner session id [1]
Tue May 6 08:42:35 2008
LOGMINER: Begin mining logfile: /opt/oracle/oradata/UATDR/onlinelog/o1_mf_4_3vpq26xd_.log
Tue May 6 08:42:35 2008
LOGMINER: End mining logfile: /opt/oracle/oradata/UATDR/onlinelog/o1_mf_4_3vpq26xd_.log
Tue May 6 09:08:42 2008
Thread 1 advanced to log sequence 8188
Current log# 2 seq# 8188 mem# 0: /opt/oracle/oradata/UATDR/onlinelog/o1_mf_2_3vpp37sc_.log
Current log# 2 seq# 8188 mem# 1: /opt/oracle/flash_recovery_area/UATDR/onlinelog/o1_mf_2_3vpp38pk_.log
Tue May 6 09:12:37 2008
RFS[7]: Successfully opened standby log 4: '/opt/oracle/oradata/UATDR/onlinelog/o1_mf_4_3vpq26xd_.log'
Tue May 6 09:12:37 2008
RFS LogMiner: Client enabled and ready for notification
Tue May 6 09:12:37 2008
RFS LogMiner: Registered logfile [opt/oracle/arch/standby/UATDR/1_6353_647020607.dbf] to LogMiner session id [1]
Tue May 6 09:12:37 2008
LOGMINER: Begin mining logfile: /opt/oracle/oradata/UATDR/onlinelog/o1_mf_4_3vpq26xd_.log
Tue May 6 09:12:37 2008
LOGMINER: End mining logfile: /opt/oracle/oradata/UATDR/onlinelog/o1_mf_4_3vpq26xd_.log
Tue May 6 09:38:43 2008
Thread 1 advanced to log sequence 8189
Current log# 1 seq# 8189 mem# 0: /opt/oracle/oradata/UATDR/onlinelog/o1_mf_1_3vpp35lp_.log
Current log# 1 seq# 8189 mem# 1: /opt/oracle/flash_recovery_area/UATDR/onlinelog/o1_mf_1_3vpp36s4_.log
Tue May 6 09:42:35 2008
RFS[4]: Successfully opened standby log 4: '/opt/oracle/oradata/UATDR/onlinelog/o1_mf_4_3vpq26xd_.log'
Tue May 6 09:42:35 2008
RFS LogMiner: Client enabled and ready for notification
Tue May 6 09:42:35 2008
LOGMINER: Begin mining logfile: /opt/oracle/oradata/UATDR/onlinelog/o1_mf_4_3vpq26xd_.log
Tue May 6 09:42:35 2008
RFS LogMiner: Registered logfile [opt/oracle/arch/standby/UATDR/1_6354_647020607.dbf] to LogMiner session id [1]
Tue May 6 09:42:35 2008
LOGMINER: End mining logfile: /opt/oracle/oradata/UATDR/onlinelog/o1_mf_4_3vpq26xd_.log
Tue May 6 10:08:41 2008
Thread 1 advanced to log sequence 8190
Current log# 3 seq# 8190 mem# 0: /opt/oracle/oradata/UATDR/onlinelog/o1_mf_3_3vpp39z0_.log
Current log# 3 seq# 8190 mem# 1: /opt/oracle/flash_recovery_area/UATDR/onlinelog/o1_mf_3_3vpp3bto_.log
Tue May 6 10:12:37 2008
RFS[6]: Successfully opened standby log 4: '/opt/oracle/oradata/UATDR/onlinelog/o1_mf_4_3vpq26xd_.log'
Tue May 6 10:12:37 2008
RFS LogMiner: Client enabled and ready for notification
Tue May 6 10:12:37 2008
RFS LogMiner: Registered logfile [opt/oracle/arch/standby/UATDR/1_6355_647020607.dbf] to LogMiner session id [1]
Tue May 6 10:12:37 2008
LOGMINER: Begin mining logfile: /opt/oracle/oradata/UATDR/onlinelog/o1_mf_4_3vpq26xd_.log
Tue May 6 10:12:37 2008
LOGMINER: End mining logfile: /opt/oracle/oradata/UATDR/onlinelog/o1_mf_4_3vpq26xd_.log
Tue May 6 10:38:43 2008
Thread 1 advanced to log sequence 8191
Current log# 2 seq# 8191 mem# 0: /opt/oracle/oradata/UATDR/onlinelog/o1_mf_2_3vpp37sc_.log
Current log# 2 seq# 8191 mem# 1: /opt/oracle/flash_recovery_area/UATDR/onlinelog/o1_mf_2_3vpp38pk_.log
Tue May 6 10:42:35 2008
RFS[7]: Successfully opened standby log 4: '/opt/oracle/oradata/UATDR/onlinelog/o1_mf_4_3vpq26xd_.log'
Tue May 6 10:42:35 2008
RFS LogMiner: Client enabled and ready for notification
Tue May 6 10:42:35 2008
RFS LogMiner: Registered logfile [opt/oracle/arch/standby/UATDR/1_6356_647020607.dbf] to LogMiner session id [1]
Tue May 6 10:42:35 2008
LOGMINER: Begin mining logfile: /opt/oracle/oradata/UATDR/onlinelog/o1_mf_4_3vpq26xd_.log
Tue May 6 10:42:35 2008
LOGMINER: End mining logfile: /opt/oracle/oradata/UATDR/onlinelog/o1_mf_4_3vpq26xd_.log
Tue May 6 11:08:41 2008
Thread 1 advanced to log sequence 8192
Current log# 1 seq# 8192 mem# 0: /opt/oracle/oradata/UATDR/onlinelog/o1_mf_1_3vpp35lp_.log
Current log# 1 seq# 8192 mem# 1: /opt/oracle/flash_recovery_area/UATDR/onlinelog/o1_mf_1_3vpp36s4_.log
Tue May 6 11:12:36 2008
RFS[4]: Successfully opened standby log 4: '/opt/oracle/oradata/UATDR/onlinelog/o1_mf_4_3vpq26xd_.log'
Tue May 6 11:12:36 2008
RFS LogMiner: Client enabled and ready for notification
Tue May 6 11:12:36 2008
LOGMINER: Begin mining logfile: /opt/oracle/oradata/UATDR/onlinelog/o1_mf_4_3vpq26xd_.log
Tue May 6 11:12:36 2008
RFS LogMiner: Registered logfile [opt/oracle/arch/standby/UATDR/1_6357_647020607.dbf] to LogMiner session id [1]
Tue May 6 11:12:36 2008
LOGMINER: End mining logfile: /opt/oracle/oradata/UATDR/onlinelog/o1_mf_4_3vpq26xd_.log
Thanks for your interest
Gavin -
Archived log missed in standby database
Hi,
OS; Windows 2003 server
Oracle: 10.2.0.4
Data Guard: Max Performance
Data Guard missed some of the archive log files, but the latest log files are applying. The standby database is not in sync with the primary.
SELECT LOCAL.THREAD#, LOCAL.SEQUENCE#
FROM (SELECT THREAD#, SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=1) LOCAL
WHERE LOCAL.SEQUENCE# NOT IN
  (SELECT SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=2 AND THREAD# = LOCAL.THREAD#);
I ran the query above and found some files are missing on the standby.
select status, type, database_mode, recovery_mode,protection_mode, srl, synchronization_status,synchronized from V$ARCHIVE_DEST_STATUS where dest_id=2;
STATUS  TYPE      DATABASE_MODE    RECOVERY_MODE  PROTECTION_MODE      SRL  SYNCHRONIZATION_STATUS  SYN
VALID   PHYSICAL  MOUNTED-STANDBY  MANAGED        MAXIMUM PERFORMANCE  NO   CHECK CONFIGURATION     NO
Anyone can tell me how to apply those missed archive log files.
Thanks in advance
Deccan Charger wrote:
I got below error.
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION
ERROR at line 1:
ORA-01153: an incompatible media recovery is active
You essentially need to do the following.
1) Stop managed recovery on the standby.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
2) Resolve the archive log gap - if you have configured FAL_SERVER and FAL_CLIENT, Oracle should do this when you follow step 3 below; as you've manually copied the missed logs, you should be OK.
3) restart managed recovery using the command shown above.
You can monitor archive log catchup using the alert.log or your original query.
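If FAL does not fetch the gap on its own, manually copied logs can be registered so managed recovery sees them; a sketch (the file path is a placeholder):

```sql
-- Sketch: register a manually copied archive log on the standby.
-- The path is a placeholder for the copied file.
ALTER DATABASE REGISTER PHYSICAL LOGFILE '/arch/standby/1_9206.arc';
```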
Niall Litchfield
http://www.orawin.info/
-
How: Script archive log transfer to standby db
Hi,
I’m implementing disaster recovery right now. For some special reason, the only option for me is to implement non-managed standby (manual recovery) database.
The following is what I’m trying to do using shell script:
1. Compress archive logs and copy them from Primary site to Standby site every hour. ( I have a very low network )
2. Decompress archive logs at standby site
3. Check if there are missed archive logs. If no, then do the manual recovery
Did I miss something above? I'm not skilled in building shell scripts - are there any sample scripts I can follow? Thanks.
Nabil
Hi,
Take a look at data guard packages. There is a package just for this purpose: Bipul Kumar notes:
http://www.dba-oracle.com/t_oracledataguard_174_unskip_table_.htm
"the time lag between the log transfer and the log apply service can be built using the DELAY attribute of the log_archive_dest_n initialization parameter on the primary database. This delay timer starts when the archived log is completely transferred to the standby site. The default value of the DELAY attribute is 30 minutes, but this value can be overridden as shown in the following example:
LOG_ARCHIVE_DEST_3='SERVICE=logdbstdby DELAY=60';"
1. Compress archive logs and copy them from Primary site to Standby site every hour.
Me, I use tar (or compress) and rcp, but I don't know the details of your environment. Jon Emmons has some good notes:
http://www.lifeaftercoffee.com/2006/12/05/archiving-directories-and-files-with-tar/
2. Decompress archive logs at standby site
See the man pages for uncompress. I do it through a named pipe to simplify the process:
http://www.dba-oracle.com/linux/conditional_statements.htm
3. Check if there are missed archive logs.
I keep my standby database in recovery mode, and as soon as the incoming logs are uncompressed, they are applied automatically.
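A minimal sketch of steps 1 and 2 above, assuming gzip is available and using a local copy to stand in for the network transfer (the paths, the .arc suffix, and the directory layout are all assumptions; a real script would use scp/rcp and then run manual recovery on the standby):

```shell
#!/bin/bash
# Sketch only: compress archive logs and "ship" them to a standby
# directory, then decompress them there. Replace the local copy with
# scp/rcp for a real remote standby, and follow with manual recovery
# (RECOVER STANDBY DATABASE) on the standby side.

ship_logs() {
  local src="$1" dest="$2" f
  for f in "$src"/*.arc; do
    [ -e "$f" ] || continue                       # no logs to ship
    gzip -c "$f" > "$dest/$(basename "$f").gz"    # compress (step 1)
  done
  for f in "$dest"/*.arc.gz; do
    [ -e "$f" ] || continue
    gunzip -f "$f"                                # decompress at standby (step 2)
  done
}

# demo on temporary directories, so the logic can be checked
# without a database
SRC=$(mktemp -d); DEST=$(mktemp -d)
echo "redo data" > "$SRC/1_100.arc"
ship_logs "$SRC" "$DEST"
ls "$DEST"    # lists 1_100.arc
```

The demo at the bottom runs the round trip on throwaway directories; in a cron job the two directories would be the primary's archive destination and the standby's staging area.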
Again, if you don't feel comfortable writing your own, consider using the data guard packages.
Hope this helps. . .
Donald K. Burleson
Oracle Press author -
Archive log missing on standby: FAL[client]: Failed to request gap sequence
My current environment is Oracle 10.2.0.4 with ASM 10.2.0.4 on a 2-node RAC in production, and a standby that is the same setup. I'm also running on Oracle Linux 5. Almost daily now an archive log doesn't make it to the standby, and Oracle doesn't seem to resolve the gap sequence from the primary. If I stop and restart recovery, it gets the logfile and continues recovery just fine. I have checked my fal_client and fal_server settings and they look good. The logs after this error do continue to get written to the standby, but the standby won't continue recovery until I stop and restart recovery and it fetches the missing log.
The only thing I know that's happening is that the firewall people are disconnecting any connections that are inactive for 60 minutes, and they recently did an upgrade that they claim didn't change anything :) I don't know if this is causing the problem or not. Any thoughts on what might be happening?
Error in standby alert.log:
Tue Jun 29 23:15:35 2010
RFS[258]: Possible network disconnect with primary database
Tue Jun 29 23:15:36 2010
Fetching gap sequence in thread 2, gap sequence 9206-9206
Tue Jun 29 23:16:46 2010
FAL[client]: Failed to request gap sequence
GAP - thread 2 sequence 9206-9206
DBID 661398854 branch 714087609
FAL[client]: All defined FAL servers have been attempted.
Error on primary alert.log:
Tue Jun 29 23:00:07 2010
ARC0: Creating remote archive destination LOG_ARCHIVE_DEST_2: 'WSSPRDB' (thread 1 sequence 9265)
(WSSPRD1)
ARC0: Transmitting activation ID 0x29c37469
Tue Jun 29 23:00:07 2010
Errors in file /u01/app/oracle/admin/WSSPRD/bdump/wssprd1_arc0_14024.trc:
ORA-03135: connection lost contact
FAL[server, ARC0]: FAL archive failed, see trace file.
Tue Jun 29 23:00:07 2010
Errors in file /u01/app/oracle/admin/WSSPRD/bdump/wssprd1_arc0_14024.trc:
ORA-16055: FAL request rejected
ARCH: FAL archive failed. Archiver continuing
Tue Jun 29 23:00:07 2010
ORACLE Instance WSSPRD1 - Archival Error. Archiver continuing.
Tue Jun 29 23:00:41 2010
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
Tue Jun 29 23:00:41 2010
FAL[server, ARC2]: Begin FAL archive (dbid 0 branch 714087609 thread 2 sequence 9206 dest WSSPRDB)
FAL[server, ARC2]: FAL archive failed, see trace file.
Tue Jun 29 23:00:43 2010
Errors in file /u01/app/oracle/admin/WSSPRD/bdump/wssprd1_arc2_14028.trc:
ORA-16055: FAL request rejected
ARCH: FAL archive failed. Archiver continuing
Tue Jun 29 23:00:43 2010
ORACLE Instance WSSPRD1 - Archival Error. Archiver continuing.
Tue Jun 29 23:01:16 2010
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
Tue Jun 29 23:15:01 2010
Thread 1 advanced to log sequence 9267 (LGWR switch)
I have checked the trace files that get spit out, but they aren't anything meaningful to me as to what's really happening. Snippet of the trace file:
tkcrrwkx: Starting to process work request
tkcrfgli: SRL header: 0
tkcrfgli: SRL tail: 0
tkcrfgli: ORL to arch: 4
tkcrfgli: le# seq thr for bck tba flags
tkcrfgli: 1 359 1 2 0 3 0x0008 ORL active cur
tkcrfgli: 2 358 1 0 1 1 0x0000 ORL active
tkcrfgli: 3 361 2 4 0 0 0x0008 ORL active cur
tkcrfgli: 4 360 2 0 3 2 0x0000 ORL active
tkcrfgli: 5 -- entry deleted --
tkcrfgli: 6 -- entry deleted --
tkcrfgli: 7 -- entry deleted --
tkcrfgli: 8 -- entry deleted --
tkcrfgli: 9 -- entry deleted --
tkcrfgli: 191 -- entry deleted --
tkcrfgli: 192 -- entry deleted --
*** 2010-03-27 01:30:32.603 20998 kcrr.c
tkcrrwkx: Request from LGWR to perform: <startup>
tkcrrcrlc: Starting CRL ARCH check
*** 2010-03-27 01:30:32.603 66085 kcrr.c
Beginning controlfile transaction 0x0x7fffd0b53198 [kcrr.c:20395 (14011)]
*** 2010-03-27 01:30:32.645 66173 kcrr.c
Acquired controlfile transaction 0x0x7fffd0b53198 [kcrr.c:20395 (14024)]
*** 2010-03-27 01:30:32.649 66394 kcrr.c
Ending controlfile transaction 0x0x7fffd0b53198 [kcrr.c:20397]
tkcrrasgn: Checking for 'no FAL', 'no SRL', and 'HB' ARCH process
# HB NoF NoS CRL Name
29 NO NO NO NO ARC0
28 NO YES YES NO ARC1
27 NO NO NO NO ARC2
26 NO NO NO NO ARC3
25 YES NO NO NO ARC4
24 NO NO NO NO ARC5
23 NO NO NO NO ARC6
22 NO NO NO NO ARC7
21 NO NO NO NO ARC8
20 NO NO NO NO ARC9
Thanks.
Kristi
It's the network that's messing up; it's unlikely to be the firewall timeout, as that waits for 60 minutes and you are switching every 15 minutes. There may be some other network glitch that needs to be rectified.
In any case - archive file missing / corrupt / halfway through - the FAL setting should have refetched the problematic archive log automatically.
As many had suggested already, the best way to resolve RFS issues I believe is to use real-time apply by configuring standby redo logs. It's very easy to configure it and you can opt for real-time apply even in max-performance mode that you are using right now.
Even though you are maintaining (I guess) 1-1 between primary & standby instances, you can provide both primary instances in fal_server (like fal_server=string1,string2). See if that helps.
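A sketch of that setting on the standby (WSSPRD1 is the instance named in the logs above; WSSPRD2 is an assumed second instance name):

```sql
-- Sketch: list both primary instances as FAL servers.
-- WSSPRD2 is a placeholder for the second instance's TNS name.
ALTER SYSTEM SET fal_server='WSSPRD1,WSSPRD2' SCOPE=BOTH;
```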
Lastly, check whether you are having a similar issue at other times as well that might be getting rectified automatically as expected.
col message for a80
col time for a20
select message, to_char(timestamp,'dd-mon-rr hh24:mi:ss') time
from v$dataguard_status
where severity in ('Error','Fatal')
order by timestamp;
Cheers. -
How to delete archive logs on the standby database....in 9i
Hello,
We are planning to setup a data guard (Maximum performance configuration ) between two Oracle 9i databases on two different servers.
The archive logs on the primary server are deleted via an RMAN job based on a policy; I'm just wondering how I should delete the archive logs that are shipped to the standby.
Is putting a cron job on the standby to delete archive logs that are, say, 2 days old the proper approach, or is there a built-in Data Guard option that would somehow allow archive logs that are no longer needed or are two days old to be deleted automatically?
thanks,
C.
From 10g there is an option in the RMAN deletion policy to purge archives once they have been applied on the standby. Check this note:
*Configure RMAN to purge archivelogs after applied on standby [ID 728053.1]*
Since you are still on 9i, you need to schedule an RMAN job or shell script to delete the archives.
Before deleting archives:
1) check whether all the archives have been applied
2) then remove all the archives completed before 'sysdate-2':
RMAN> delete archivelog all completed before 'sysdate-2';
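On 9i the cron-based cleanup can be sketched like this, assuming file age alone is the criterion (the .arc naming and the directory are placeholders; in practice you would first confirm the logs were applied, e.g. by checking v$archived_log.applied):

```shell
#!/bin/bash
# Sketch for 9i, where the APPLIED ON STANDBY deletion policy is not
# available: remove archive logs older than N days from the standby
# archive directory. Paths and file naming are placeholders.

purge_old_archives() {
  local dir="$1" days="$2"
  # -mtime +N matches files last modified more than N*24h ago
  find "$dir" -name '*.arc' -mtime +"$days" -print -delete
}

# demo on a temporary directory, so the logic can be checked locally
DIR=$(mktemp -d)
touch "$DIR/1_10.arc"                   # "new" log, kept
touch -t 201001010000 "$DIR/1_01.arc"   # "old" log, purged
purge_old_archives "$DIR" 2
ls "$DIR"    # lists only 1_10.arc
```

In a cron entry the function would run against the real standby archive destination instead of the demo directory.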
As per your requirement. -
Skipping dependent Tables in Logical Standby
Hello DBAs
I need your expertise here. Let me explain the scenario. Suppose a table is skipped in a logical standby. This table is referred to by other tables and there are dependencies on it. Now my question is: what happens when a transaction that depends on this table is committed at the primary?
Does the transaction go through even though that table is not replicated. What happens to data integrity ?
I appreciate your help. Thanks.
Have a go at
[The Documentation...|http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/manage_ls.htm#SBYDB00800] -
Determining the last archive applied on Logical Standby
Hi,
I am trying to determine the last log received and applied on my Logical Standby
SQL> select thread#, max(sequence#) "Last Standby Seq Received"
from v$archived_log val, v$database vdb
where val.resetlogs_change# = vdb.resetlogs_change#
group by thread# order by 1;
THREAD# Last Standby Seq Received
1 14
SQL> select thread#, max(sequence#) "Last Standby Seq Applied"
from v$archived_log val, v$database vdb
where val.resetlogs_change# = vdb.resetlogs_change#
and applied='YES'
group by thread# order by 1;
Does not return anything
These statements work ok on a physical standby.
I know Sql Apply is enabled on my logical standby - my broker configuration is enabled and for the logical standby database, it is showing the Intended State as APPLY-ON with no Transport or Apply Lag on any of the databases
Q. How do I determine the last seq applied on my logical standby ?
thanks,
Jim
Hello;
I have this from my notes:
SELECT
L.SEQUENCE#,
L.FIRST_TIME,
(CASE WHEN L.NEXT_CHANGE# < P.READ_SCN THEN 'YES' WHEN L.FIRST_CHANGE# < P.APPLIED_SCN THEN 'CURRENT' ELSE 'NO' END) APPLIED
FROM
DBA_LOGSTDBY_LOG L, DBA_LOGSTDBY_PROGRESS P
ORDER BY SEQUENCE#;
Best Regards
mseberg -
Archive log generation in standby
Dear all,
DB: 11.1.0.7
We are configuring a physical standby for our production system. We have the same file system and configuration on both servers. The primary archive destination is d:/arch and the standby server also has d:/arch. Archive logs are properly shipped to the standby and the data is intact. The problem is that archive log generation is fine in the primary archive destination, but no archive logs are getting generated in the standby archive location, yet archive logs are being applied to the standby database.
Is this normal? Will archive logs not be generated on the standby?
Please guide
Kai
No archive logs should be generated on the standby side. Why do you think they should be? If you are talking about the parameter standby_archive_dest: if you set this parameter, Oracle will copy the applied log to this directory, not create a new one.
In 11g Oracle recommends not using this parameter. Instead, Oracle recommends setting log_archive_dest_1 and log_archive_dest_3 similar to this:
ALTER SYSTEM SET log_archive_dest_1 = 'location="USE_DB_RECOVERY_FILE_DEST", valid_for=(ALL_LOGFILES,ALL_ROLES)'
ALTER SYSTEM SET log_archive_dest_3 = 'SERVICE=<primary_tns> LGWR ASYNC db_unique_name=<prim_db_unique_name> valid_for=(online_logfile,primary_role)'
/ -
Hi All,
I have a question regarding oracle archive log configuration .
My DB is : Ora10gR2
Unix : HPUX
To support Data guard functionality DBA has put DB in ARCHIVE LOG mode with forced logging mode =YES.
1* select LOG_MODE,FORCE_LOGGING from v$database
SQL> /
LOG_MODE FORCE_LOGGING
ARCHIVELOG YES
Now I have a table called PARAMETER where I need to load around 700 million records. Since the DB is in forced logging mode, this will create a lot of log information and the load will take a long time as well.
Is there any option to keep a table in NOLOGGING mode, even if the DB is in forced logging mode?
Thanks
Hi,
No, there is no option to keep a table in NOLOGGING mode if the DB is in force logging mode.
Regards
Anurag -
Physical Standby archive log gap....
An archive log gap occurred. The reason: the logs were deleted by an RMAN backup before they could be shipped to the standby location. So I restored the archives on the primary database site. These old logs from the gap are not getting shipped to the standby site, but the new ones currently being generated are getting shipped.
Can someone tell me what action I have to take to resolve the gap? And how do I find out what is preventing this shipping from happening?
Or shall I manually ship these gap archive logs to the standby site?
1) Yep, running 9i... but still it's not shipping... Are the FAL_CLIENT & FAL_SERVER parameters defined at the standby level?
If not, define them at the standby level. Those parameters will help to get the missing (gap) archives from the primary database.
2) If shipped manually, do I have to register the archive logs?
Just copy from primary to standby; you don't need to register any gap - that was in 8i, when there was no background MRP (media recovery process). If the standby database is in automatic media recovery, it will automatically apply all the archived logs.
Jaffar -
Standby database Archive log destination confusion
Hi All,
I need your help here..
This is the first time that this situation has arisen. We had sync issues in the Oracle 10g standby database prior to this archive log destination confusion, so we rebuilt the standby to overcome the sync issue. But ever since then, the archive logs on the standby database have been going to two different locations.
The spfile entries are provided below:
*.log_archive_dest_1='LOCATION=/m99/oradata/MARDB/archive/'
*.standby_archive_dest='/m99/oradata/MARDB/standby'
Prior to rebuilding the standby database, the archive logs were going to the /m99/oradata/MARDB/archive/ location, which is the correct location. But now the archive logs are going to both /m99/oradata/MARDB/archive/ and /m99/oradata/MARDB/standby, with the majority of them going to the /m99/oradata/MARDB/standby location. This is pretty unusual.
The archives in the production are moving to /m99/oradata/MARDB/archive/ location itself.
Could you kindly help me overcome this issue.
Regards,
Dan
Hi Anurag,
Thank you for the update.
Prior to rebuilding the standby database, standby_archive_dest was set as it is now. No modifications were made to the archive destination locations.
The primary and standby databases are on different servers and Data Guard is used to transfer the files.
I wanted to highlight one more point here: the archive locations are similar to the ones I mentioned for the other standby databases, but there the archive logs are going only to the /archive location and not to the /standby location.