Archive log generation in standby
Dear all,
DB: 11.1.0.7
We are configuring a physical standby for our production system. Both servers have the same file
system layout and configuration: the primary archive destination is d:/arch and the standby
server also has d:/arch. Redo is properly shipped to the standby and the data is
intact. The problem: archive logs are generated properly in the
primary archive destination, but no archive logs are being
generated in the standby archive location, even though archive logs are being
applied to the standby database.
Is this normal? Will archive logs not be generated on the standby?
Please guide
Kai
No archive logs should be generated on the standby side; why do you think they should be? If you are talking about the parameter standby_archive_dest: when you set this parameter, Oracle copies the received logs into that directory, it does not create new ones.
In 11g Oracle recommends not using this parameter. Instead, Oracle recommends setting log_archive_dest_1 and log_archive_dest_3 similar to this:
ALTER SYSTEM SET log_archive_dest_1 = 'location="USE_DB_RECOVERY_FILE_DEST", valid_for=(ALL_LOGFILES,ALL_ROLES)'
ALTER SYSTEM SET log_archive_dest_3 = 'SERVICE=<primary_tns> LGWR ASYNC db_unique_name=<prim_db_unique_name> valid_for=(online_logfile,primary_role)'
/
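To reassure yourself that the standby really is receiving and applying redo, a quick sanity check on the standby could look like the following. This is a generic sketch using the standard V$ARCHIVED_LOG view, not something from the original post:

```sql
-- On the standby: which log sequences arrived, and were they applied?
SELECT thread#, sequence#, applied, first_time
  FROM v$archived_log
 ORDER BY thread#, sequence#;

-- Highest applied sequence per thread
SELECT thread#, MAX(sequence#) AS last_applied
  FROM v$archived_log
 WHERE applied = 'YES'
 GROUP BY thread#;
```

If APPLIED shows 'YES' for the latest sequences, the configuration is working as designed even though the standby is not archiving its own logs.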
Similar Messages
-
Hi,
I am working on an Oracle 10g RAC database on HP-UX, in the standby environment.
Instance name :
R1
R2
R3
For the above three instances, I need to find the hourly archive log generation at the standby site.
Hours 1 2 3
R1
R2
R3
Total
Share the query...
Set the parameter archive_lag_target to the required value. It is a dynamic parameter and is specified in seconds.
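The per-instance hourly breakdown the poster asked for can be sketched from V$ARCHIVED_LOG; on RAC, THREAD# identifies the instance, so R1/R2/R3 would map to threads 1/2/3. This is a generic sketch (the column aliases are mine), and the last statement shows how archive_lag_target itself is set:

```sql
-- Archive logs generated per thread (instance) per hour over the last day
SELECT thread#,
       TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*)                               AS logs
  FROM v$archived_log
 WHERE first_time > SYSDATE - 1
 GROUP BY thread#, TO_CHAR(first_time, 'YYYY-MM-DD HH24')
 ORDER BY hour, thread#;

-- Force a log switch at least every 30 minutes (value is in seconds)
ALTER SYSTEM SET archive_lag_target = 1800 SCOPE=BOTH SID='*';
```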
-
*HOW TO DELETE THE ARCHIVE LOGS ON THE STANDBY*
I have set the RMAN CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY; on my physical standby server.
My archivelog files are not deleted on standby.
I have set the CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default on the Primary server.
I've checked the archivelogs with the FRA and they are not beign deleted on the STANDBY. Do I have to do something for the configuation to take effect? Like run a RMAN backup?
I've done a lot of research and I'm getting mixed answers. Please help. Thanks in advance.
Setting the policy will not delete the archive logs on the standby. (I found a thread where the Data Guard product manager says "The deletion policy on both sides will do what you want".) However I still
like to clean them off with RMAN.
I would use RMAN to delete them so that it uses that policy and you are protected in case of a gap, transport issue, etc.
There are many ways to do this. You can simply run RMAN and have it clean out the archives.
Example :
#!/bin/bash
# Name: db_rman_arch_standby.sh
# Purpose: Delete applied archive logs on the standby with RMAN
# Usage : db_rman_arch_standby.sh <DBNAME>
if [ "$1" ]
then DBNAME=$1
else
echo "$(basename $0) : Syntax error : use db_rman_arch_standby.sh <DBNAME>"
exit 1
fi
. /u01/app/oracle/dba_tool/env/${DBNAME}.env
echo ${DBNAME}
MAILHEADER="Archive_cleanup_on_STANDBY_${DBNAME}"
echo "Starting RMAN..."
$ORACLE_HOME/bin/rman target / catalog <user>/<password>@<catalog> << EOF > /tmp/rmandbarchstandby.out
delete noprompt ARCHIVELOG UNTIL TIME 'SYSDATE-8';
exit
EOF
echo `date`
echo
echo 'End of archive cleanup on STANDBY'
mailx -s ${MAILHEADER} $MAILTO < /tmp/rmandbarchstandby.out
# End of Script
This uses (calls) an ENV file so the crontab has an environment.
Example ( STANDBY.env )
ORACLE_BASE=/u01/app/oracle
ULIMIT=unlimited
ORACLE_SID=STANDBY
ORACLE_HOME=$ORACLE_BASE/product/11.2.0.2
ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
LIBPATH=$LD_LIBRARY_PATH:/usr/lib
TNS_ADMIN=$ORACLE_HOME/network/admin
PATH=$ORACLE_HOME/bin:$ORACLE_BASE/dba_tool/bin:/bin:/usr/bin:/usr/ccs/bin:/etc:/usr/sbin:/usr/ucb:$HOME/bin:/usr/bin/X11:/sbin:/usr/lbin:/GNU/bin/make:/u01/app/oracle/dba_tool/bin:/home/oracle/utils/SCRIPTS:/usr/local/bin:.
#export TERM=linux=80x25 wrong wrong wrong wrong wrong
export TERM=vt100
export ORACLE_BASE ORACLE_SID ORACLE_TERM ULIMIT
export ORACLE_HOME
export LIBPATH LD_LIBRARY_PATH ORA_NLS33
export TNS_ADMIN
export PATH
export MAILTO=??   # your email here
Note: use the env command in Unix to get your settings.
There are probably ten other/better ways to do this, but this works.
other options ( you decide )
Configure RMAN to purge archivelogs after applied on standby [ID 728053.1]
http://www.oracle.com/technetwork/database/features/availability/rman-dataguard-10g-wp-1-129486.pdf
Maintenance Of Archivelogs On Standby Databases [ID 464668.1]
Tip: I don't care myself, but in some of the other forums people seem to mind if you use all caps in the subject. They say it's shouting. My take is if somebody is shouting at me I'm probably going to just move away.
Best Regards
mseberg
Edited by: mseberg on May 8, 2012 11:53 AM
Edited by: mseberg on May 8, 2012 11:56 AM -
How to control too much of archived log generation
Hi ,
This is one of the interview questions,
I have replied to this. Just like to know what is answer of this.
How we can control the excessive archived log generation ?
Thanks,
796843 wrote:
Hi ,
This is one of the interview questions,
I have replied to this. Just like to know what is answer of this.
How we can control the excessive archived log generation ?
Thanks,
Do not do any DML, since only DML generates REDO -
Archive Log Generation in EBusiness Suite
Hi,
I am responsible for EBusiness suite 11.5.10.2 AIX Production server. Until last week (for the past 1.5 years), there were excessive archive log generation (200 MB for every 10 mins) which has been reduced to (200 MB for every 4.5 hours).
I am unable to understand this behavior. The number of users still remain the same and the usage is as usual.
Is there a way I can check what has gone wrong? I could not see any errors also in the alert.log
Please suggest what can be done.
(I have raised this issue in Metalink Forum also and awaiting a response)
Thanks
Log/archive log generation is directly related to the level of activity on the database, so it is almost certain that the level of activity has dropped significantly.
If possible, can you run this query and post the result:
select trunc(FIRST_TIME), count(SEQUENCE#) from v$archived_log
where to_char(trunc(FIRST_TIME),'MONYYYY') = 'SEP2007'
group by trunc(first_time)
order by 1
--Adams -
Query help for archive log generation details
Hi All,
Do you have a query to know the archive log generation details for today.
Best regards,
Rafi.
Dear user13311731,
You may use the query below and I hope you will find it helpful:
SELECT * FROM (
SELECT * FROM (
SELECT TO_CHAR(FIRST_TIME, 'DD/MM') AS "DAY"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '00', 1, 0)), '999') "00:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '01', 1, 0)), '999') "01:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '02', 1, 0)), '999') "02:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '03', 1, 0)), '999') "03:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '04', 1, 0)), '999') "04:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '05', 1, 0)), '999') "05:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '06', 1, 0)), '999') "06:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '07', 1, 0)), '999') "07:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '08', 1, 0)), '999') "08:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '09', 1, 0)), '999') "09:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '10', 1, 0)), '999') "10:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '11', 1, 0)), '999') "11:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '12', 1, 0)), '999') "12:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '13', 1, 0)), '999') "13:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '14', 1, 0)), '999') "14:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '15', 1, 0)), '999') "15:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '16', 1, 0)), '999') "16:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '17', 1, 0)), '999') "17:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '18', 1, 0)), '999') "18:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '19', 1, 0)), '999') "19:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '20', 1, 0)), '999') "20:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '21', 1, 0)), '999') "21:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '22', 1, 0)), '999') "22:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '23', 1, 0)), '999') "23:00"
FROM V$LOG_HISTORY
WHERE extract(year FROM FIRST_TIME) = extract(year FROM sysdate)
GROUP BY TO_CHAR(FIRST_TIME, 'DD/MM')
) ORDER BY TO_DATE(extract(year FROM sysdate) || ' ' || DAY, 'YYYY DD/MM') DESC
) WHERE ROWNUM < 8;
Hope That Helps.
Ogan -
Growth of Archive log generation
Hi,
In my case the the rate of archive log generation has been increased, so I want to know the query
to find out the rate of archive log generation per hour.
Regards
Syed
Hi Syed;
What is your DB version? Also ebs and os?
I use below query for my issue:
select to_char(first_time,'MM-DD') day, to_char(sum(decode(to_char(first_time,'hh24'),'00',1,0)),'99') "00",
to_char(sum(decode(to_char(first_time,'hh24'),'01',1,0)),'99') "01",
to_char(sum(decode(to_char(first_time,'hh24'),'02',1,0)),'99') "02",
to_char(sum(decode(to_char(first_time,'hh24'),'03',1,0)),'99') "03",
to_char(sum(decode(to_char(first_time,'hh24'),'04',1,0)),'99') "04",
to_char(sum(decode(to_char(first_time,'hh24'),'05',1,0)),'99') "05",
to_char(sum(decode(to_char(first_time,'hh24'),'06',1,0)),'99') "06",
to_char(sum(decode(to_char(first_time,'hh24'),'07',1,0)),'99') "07",
to_char(sum(decode(to_char(first_time,'hh24'),'08',1,0)),'99') "08",
to_char(sum(decode(to_char(first_time,'hh24'),'09',1,0)),'99') "09",
to_char(sum(decode(to_char(first_time,'hh24'),'10',1,0)),'99') "10",
to_char(sum(decode(to_char(first_time,'hh24'),'11',1,0)),'99') "11",
to_char(sum(decode(to_char(first_time,'hh24'),'12',1,0)),'99') "12",
to_char(sum(decode(to_char(first_time,'hh24'),'13',1,0)),'99') "13",
to_char(sum(decode(to_char(first_time,'hh24'),'14',1,0)),'99') "14",
to_char(sum(decode(to_char(first_time,'hh24'),'15',1,0)),'99') "15",
to_char(sum(decode(to_char(first_time,'hh24'),'16',1,0)),'99') "16",
to_char(sum(decode(to_char(first_time,'hh24'),'17',1,0)),'99') "17",
to_char(sum(decode(to_char(first_time,'hh24'),'18',1,0)),'99') "18",
to_char(sum(decode(to_char(first_time,'hh24'),'19',1,0)),'99') "19",
to_char(sum(decode(to_char(first_time,'hh24'),'20',1,0)),'99') "20",
to_char(sum(decode(to_char(first_time,'hh24'),'21',1,0)),'99') "21",
to_char(sum(decode(to_char(first_time,'hh24'),'22',1,0)),'99') "22",
to_char(sum(decode(to_char(first_time,'hh24'),'23',1,0)),'99') "23"
from v$log_history group by to_char(first_time,'MM-DD')
Regards
Helios -
Hi,
Database Version: Oracle 11.1.0.6
Platform: Enterprise Linux 5
Can someone please tell me the troubleshooting steps in a situation where there is a heavy inflow of archive log generation, I mean around 3 files of around 50MB every minute, eating away the space on disk.
1) How to find out what activity is causing such heavy archive log generation? I can run the query below to find out the currently running SQL queries with status:
select a.username, b.sql_text,a.status from v$session a inner join v$sqlarea b on a.sql_id=b.sql_id;
But is there any other query or a better way to find out current db activity in this situation.
Tried using DBMS_LOGMNR (Log Miner) but failed because (i) utl_file_dir is not set in the init parameter file (so mining the archive log file on production is presently ruled out, as I cannot take an outage)
(ii) the "alter database add supplemental log data (all) columns" statement takes forever because of the locks (so I cannot mine the generated archive log file on another machine due to the DBID mismatch).
2) How to deal with this situation? I read here on the OTN discussion board that increasing the number of redo log groups or redo log members helps when there is a lot of DML activity on the application side, but I didn't understand how that would help control the heavy archive log generation.
Edited by: user10313587 on Feb 11, 2011 8:43 AM
Edited by: user10313587 on Feb 11, 2011 8:44 AM
Hi,
Other than logminer, which will tell you exactly what the redo is by definition, you can run something like the following:
select value/((sysdate-logon_time) * 1440) redo_per_minute,s.sid,serial#,logon_time,value
from v$statname sn,
v$sesstat ss,
v$session s
where s.sid = ss.sid
and sn.statistic# = ss.statistic#
and name = 'redo size'
and value > 0
Then trace the "high" sessions above and it should jump out at you. If not, then run logmnr with something like...
set serveroutput on size unlimited
begin
dbms_logmnr.add_logfile(logfilename => '&log_file_name');
dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog + dbms_logmnr.no_rowid_in_stmt);
FOR cur IN (SELECT *
FROM v$logmnr_contents) loop
dbms_output.put_line(cur.sql_redo);
end loop;
dbms_logmnr.end_logmnr;  -- release the LogMiner session
end;
/
Note you don't need utl_file_dir for Log Miner if you use the online catalog.
HTH,
Steve -
Archive log generation in every 7 minute interval
One of the HP-UX 11.11 hosts runs two databases, uiivc and uiivc1. There is heavy archive log generation every 7 minutes in both databases. The redo log size is 100MB, configured with 2 members in each of three groups for these databases. The database version is 9.2.0.8. Can anyone help me find out how to monitor the redo log contents that are filling up so frequently, generating more archived redo and filling up the mount point?
Current settings are
fast_start_mttr_target integer 300
log_buffer integer 5242880
Regards
Manoj
You can try to find the sessions which are generating lots of redo; check Metalink doc ID 167492.1:
1) Query V$SESS_IO. This view contains the column BLOCK_CHANGES, which indicates
how many blocks have been changed by the session. High values indicate a
session generating lots of redo.
The query you can use is:
SQL> SELECT s.sid, s.serial#, s.username, s.program,
            i.block_changes
       FROM v$session s, v$sess_io i
      WHERE s.sid = i.sid
      ORDER BY 5 desc, 1, 2, 3, 4;
Run the query multiple times and examine the delta between each occurrence
of BLOCK_CHANGES. Large deltas indicate high redo generation by the session.
2) Query V$TRANSACTION. This view contains information about the amount of
undo blocks and undo records accessed by the transaction (as found in the
USED_UBLK and USED_UREC columns).
The query you can use is:
SQL> SELECT s.sid, s.serial#, s.username, s.program,
            t.used_ublk, t.used_urec
       FROM v$session s, v$transaction t
      WHERE s.taddr = t.addr
      ORDER BY 5 desc, 6 desc, 1, 2, 3, 4;
Run the query multiple times and examine the delta between each occurrence
of USED_UBLK and USED_UREC. Large deltas indicate high redo generation by
the session. -
Archived log missed in standby database
Hi,
OS; Windows 2003 server
Oracle: 10.2.0.4
Data Guard: Max Performance
Data Guard missed some of the archive log files, but the latest log files are being applied. The standby database is not in sync with the primary.
SELECT LOCAL.THREAD#, LOCAL.SEQUENCE# FROM (SELECT THREAD#, SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=1) LOCAL WHERE LOCAL.SEQUENCE# NOT IN (SELECT SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=2 AND THREAD# = LOCAL.THREAD#);
I ran the query above and found that some files are missing on the standby.
select status, type, database_mode, recovery_mode,protection_mode, srl, synchronization_status,synchronized from V$ARCHIVE_DEST_STATUS where dest_id=2;
STATUS TYPE DATABASE_MODE RECOVERY_MODE PROTECTION_MODE SRL SYNCHRONIZATION_STATUS SYN
VALID PHYSICAL MOUNTED-STANDBY MANAGED MAXIMUM PERFORMANCE NO CHECK CONFIGURATION NO
Anyone can tell me how to apply those missed archive log files.
Thanks in advance
Deccan Charger wrote:
I got below error.
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION
ERROR at line 1:
ORA-01153: an incompatible media recovery is active
You need to essentially do the following.
1) Stop managed recovery on the standby.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
2) Resolve the archive log gap - if you have configured FAL_SERVER and FAL_CLIENT, Oracle should do this when you follow step 3 below; as you've manually copied the missed logs you should be OK.
3) restart managed recovery using the command shown above.
You can monitor archive log catchup using the alert.log or your original query.
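As a generic sketch (standard Data Guard views and syntax; the file path below is hypothetical), checking for and manually resolving a gap on the standby looks like:

```sql
-- On the standby: any detected archive log gap is reported here
SELECT thread#, low_sequence#, high_sequence# FROM v$archive_gap;

-- After copying a missed archive over by hand, register it so MRP can apply it
ALTER DATABASE REGISTER LOGFILE '/u01/arch/arch_2_9206.arc';
```

Once the registered log is applied, restarting managed recovery as shown above lets the standby catch up on its own.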
Niall Litchfield
http://www.orawin.info/
Edited by: Niall Litchfield on May 4, 2010 2:29 PM
missed tag -
How: Script archive log transfer to standby db
Hi,
I’m implementing disaster recovery right now. For some special reason, the only option for me is to implement non-managed standby (manual recovery) database.
The following is what I’m trying to do using shell script:
1. Compress archive logs and copy them from the Primary site to the Standby site every hour. (I have a very slow network.)
2. Decompress archive logs at standby site
3. Check if there are missed archive logs. If no, then do the manual recovery
Did I miss something above? And I'm not skilled in building shell scripts; are there any sample scripts I can follow? Thanks.
Nabil
Message was edited by:
11iuserHi,
Take a look at data guard packages. There is a package just for this purpose: Bipul Kumar notes:
http://www.dba-oracle.com/t_oracledataguard_174_unskip_table_.htm
"the time lag between the log transfer and the log apply service can be built using the DELAY attribute of the log_archive_dest_n initialization parameter on the primary database. This delay timer starts when the archived log is completely transferred to the standby site. The default value of the DELAY attribute is 30 minutes, but this value can be overridden as shown in the following example:
LOG_ARCHIVE_DEST_3='SERVICE=logdbstdby DELAY=60';"
1. Compress archive logs and copy them from Primary site to Standby site every hour.
Me, I use tar (or compress) and rcp, but I don't know the details of your environment. Jon Emmons has some good notes:
http://www.lifeaftercoffee.com/2006/12/05/archiving-directories-and-files-with-tar/
2. Decompress archive logs at standby site
See the man pages for uncompress. I do it through a named pipe to simplify the process:
http://www.dba-oracle.com/linux/conditional_statements.htm
3. Check if there are missed archive logs.
I keep my standby data in recovery mode, and as soon as the incoming logs are uncompressed, they are applied automatically.
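The hourly compress-and-ship step could be sketched in shell like this. This is a generic sketch, not the poster's setup: the directory layout, the .arc suffix, and the ship_logs name are all made up, and in real use you would rcp/scp the .gz files to the standby host instead of a local staging directory.

```shell
#!/bin/sh
# Sketch: compress archived logs into a staging directory for shipping.
ship_logs() {
  src=$1; dst=$2
  mkdir -p "$dst"
  for f in "$src"/*.arc; do
    [ -e "$f" ] || continue               # no archived logs yet
    base=$(basename "$f")
    [ -e "$dst/$base.gz" ] && continue    # skip logs already shipped
    gzip -c "$f" > "$dst/$base.gz"        # compress for the slow link
  done
}

# Demo with throwaway directories (paths are made up for illustration):
demo_src=$(mktemp -d); demo_dst=$(mktemp -d)
echo "redo bytes" > "$demo_src/prod_arch_1.arc"
ship_logs "$demo_src" "$demo_dst"
ls "$demo_dst"        # prod_arch_1.arc.gz
```

On the standby side you would gunzip the shipped files into the expected archive directory and then run the manual recovery (e.g. RECOVER AUTOMATIC STANDBY DATABASE in SQL*Plus), covering steps 2 and 3 above.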
Again, if you don't feel comfortable writing your own, consider using the data guard packages.
Hope this helps. . .
Donald K. Burleson
Oracle Press author -
Archive log missing on standby: FAL[client]: Failed to request gap sequence
My current environment is Oracle 10.2.0.4 with ASM 10.2.0.4 on a 2-node RAC in production and a standby that is the same setup. I'm also running on Oracle Linux 5. Almost daily now an archive log doesn't make it to the standby and Oracle doesn't seem to resolve the gap sequence from the primary. If I stop and restart recovery, it gets the log file and continues recovery just fine. I have checked my fal_client and fal_server settings and they look good. The logs after this error do continue to get written to the standby, but the standby won't continue recovery until I stop and restart recovery and it fetches the missing log.
The only thing I know that's happening is that the firewall people are disconnecting any connections that are inactive for 60 minutes, and they recently did an upgrade that they claim didn't change anything :) I don't know if this is causing the problem or not. Any thoughts on what might be happening?
Error in standby alert.log:
Tue Jun 29 23:15:35 2010
RFS[258]: Possible network disconnect with primary database
Tue Jun 29 23:15:36 2010
Fetching gap sequence in thread 2, gap sequence 9206-9206
Tue Jun 29 23:16:46 2010
FAL[client]: Failed to request gap sequence
GAP - thread 2 sequence 9206-9206
DBID 661398854 branch 714087609
FAL[client]: All defined FAL servers have been attempted.
Error on primary alert.log:
Tue Jun 29 23:00:07 2010
ARC0: Creating remote archive destination LOG_ARCHIVE_DEST_2: 'WSSPRDB' (thread 1 sequence 9265)
(WSSPRD1)
ARC0: Transmitting activation ID 0x29c37469
Tue Jun 29 23:00:07 2010
Errors in file /u01/app/oracle/admin/WSSPRD/bdump/wssprd1_arc0_14024.trc:
ORA-03135: connection lost contact
FAL[server, ARC0]: FAL archive failed, see trace file.
Tue Jun 29 23:00:07 2010
Errors in file /u01/app/oracle/admin/WSSPRD/bdump/wssprd1_arc0_14024.trc:
ORA-16055: FAL request rejected
ARCH: FAL archive failed. Archiver continuing
Tue Jun 29 23:00:07 2010
ORACLE Instance WSSPRD1 - Archival Error. Archiver continuing.
Tue Jun 29 23:00:41 2010
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
Tue Jun 29 23:00:41 2010
FAL[server, ARC2]: Begin FAL archive (dbid 0 branch 714087609 thread 2 sequence 9206 dest WSSPRDB)
FAL[server, ARC2]: FAL archive failed, see trace file.
Tue Jun 29 23:00:43 2010
Errors in file /u01/app/oracle/admin/WSSPRD/bdump/wssprd1_arc2_14028.trc:
ORA-16055: FAL request rejected
ARCH: FAL archive failed. Archiver continuing
Tue Jun 29 23:00:43 2010
ORACLE Instance WSSPRD1 - Archival Error. Archiver continuing.
Tue Jun 29 23:01:16 2010
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
Tue Jun 29 23:15:01 2010
Thread 1 advanced to log sequence 9267 (LGWR switch)
I have checked the trace files that get spit out, but they aren't meaningful to me as to what's really happening. Snippet of the trace file:
tkcrrwkx: Starting to process work request
tkcrfgli: SRL header: 0
tkcrfgli: SRL tail: 0
tkcrfgli: ORL to arch: 4
tkcrfgli: le# seq thr for bck tba flags
tkcrfgli: 1 359 1 2 0 3 0x0008 ORL active cur
tkcrfgli: 2 358 1 0 1 1 0x0000 ORL active
tkcrfgli: 3 361 2 4 0 0 0x0008 ORL active cur
tkcrfgli: 4 360 2 0 3 2 0x0000 ORL active
tkcrfgli: 5 -- entry deleted --
tkcrfgli: 6 -- entry deleted --
tkcrfgli: 7 -- entry deleted --
tkcrfgli: 8 -- entry deleted --
tkcrfgli: 9 -- entry deleted --
tkcrfgli: 191 -- entry deleted --
tkcrfgli: 192 -- entry deleted --
*** 2010-03-27 01:30:32.603 20998 kcrr.c
tkcrrwkx: Request from LGWR to perform: <startup>
tkcrrcrlc: Starting CRL ARCH check
*** 2010-03-27 01:30:32.603 66085 kcrr.c
Beginning controlfile transaction 0x0x7fffd0b53198 [kcrr.c:20395 (14011)]
*** 2010-03-27 01:30:32.645 66173 kcrr.c
Acquired controlfile transaction 0x0x7fffd0b53198 [kcrr.c:20395 (14024)]
*** 2010-03-27 01:30:32.649 66394 kcrr.c
Ending controlfile transaction 0x0x7fffd0b53198 [kcrr.c:20397]
tkcrrasgn: Checking for 'no FAL', 'no SRL', and 'HB' ARCH process
# HB NoF NoS CRL Name
29 NO NO NO NO ARC0
28 NO YES YES NO ARC1
27 NO NO NO NO ARC2
26 NO NO NO NO ARC3
25 YES NO NO NO ARC4
24 NO NO NO NO ARC5
23 NO NO NO NO ARC6
22 NO NO NO NO ARC7
21 NO NO NO NO ARC8
20 NO NO NO NO ARC9
Thanks.
Kristi
It's the network that's messing up; it is unlikely to be the firewall timeout, as that waits for 60 minutes and you are switching every 15 minutes. There may be some other network glitch that needs to be rectified.
In any case - with an archive file missing, corrupt, or halfway through - the FAL setting should have refetched the problematic archive log automatically.
As many have suggested already, the best way to resolve RFS issues I believe is to use real-time apply by configuring standby redo logs. It's very easy to configure, and you can opt for real-time apply even in the max-performance mode that you are using right now.
Even though you are maintaining (I guess) 1-1 between primary & standby instances, you can provide both primary instances in fal_server (like fal_server=string1,string2). See if that helps.
Lastly, check if you are having similar issues at other times as well that might be getting rectified automatically as expected.
col message for a80
col time for a20
select message, to_char(timestamp,'dd-mon-rr hh24:mi:ss') time
from v$dataguard_status
where severity in ('Error','Fatal')
order by timestamp;
Cheers. -
How to delete archive logs on the standby database....in 9i
Hello,
We are planning to setup a data guard (Maximum performance configuration ) between two Oracle 9i databases on two different servers.
The archive logs on the primary servers are deleted via an RMAN job based on a policy; I'm just wondering how I should delete the archive logs that are shipped to the standby.
Is putting a cron job on the standby to delete archive logs that are, say, 2 days old the proper approach, or is there a built-in Data Guard option that would somehow allow archive logs that are no longer needed, or are two days old, to be deleted automatically?
thanks,
C.
From 10g there is an option to purge on a deletion policy once archives have been applied. Check this note:
*Configure RMAN to purge archivelogs after applied on standby [ID 728053.1]*
Since it is still 9i, you need to schedule an RMAN job or a shell script to delete archives.
Before deleting archives:
1) check whether all the archives have been applied
2) then you can remove all the archives completed before 'sysdate-2':
RMAN> delete archivelog all completed before 'sysdate-2';
As per your requirement. -
Skip archive log on logical standby
hi ,
I want to skip archive logs number 1150 to 1161 on a logical standby database.
I know we can skip DDL and DML on a logical standby.
How can I achieve this?
(Oracle 10g Enterprise Edition)
Hello;
I do not believe this is an option. The closest to this would be "applying modifications to specific tables"
See :
9.4.3 Using DBMS_LOGSTDBY.SKIP to Prevent Changes to Specific Schema Objects
Data Guard Concepts and Administration 10g Release 2 (10.2) B14239-05
While this is not the answer you want, skipping archives would create a gap and cause many other issues you don't want.
Best Regards
mseberg -
Hi all,
In my production environment, archive logs are sometimes generated at 5-6 logs a minute, even though very few users are connected to the database right now.
-rw-r----- 1 oraprod dba 10484224 Jan 12 14:10 prod_arch_4810.arc
-rw-r----- 1 oraprod dba 10484224 Jan 12 14:10 prod_arch_4811.arc
-rw-r----- 1 oraprod dba 10483712 Jan 12 14:10 prod_arch_4812.arc
-rw-r----- 1 oraprod dba 10484224 Jan 12 14:10 prod_arch_4813.arc
-rw-r----- 1 oraprod dba 10484224 Jan 12 14:10 prod_arch_4814.arc
-rw-r----- 1 oraprod dba 10484224 Jan 12 14:10 prod_arch_4815.arc
Why is this happening?
Any comments or ideas to resolve this?
Yusuf
Whenever you create a thread, it is always advisable to specify your current OS and DB versions.
You could be generating this redo information through your scheduled tasks or through current user activity; a small number of concurrent users doesn't mean they aren't generating a lot of transactions. Check v$undostat, v$rollstat, v$transaction, and v$session to monitor user and transaction activity.
10M for a redo log size is, IMO, very little for the transaction requirements of most databases. Your database currently generates transaction information at a rate of about 50M/min. With 100M redo log files you would be generating one archive log around every two minutes, instead of the current 5 archive logs per minute.
Since your database is highly transactional, make sure you have enough free space to store the generated archive log files; you will be generating about 3G/hr.
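Resizing redo logs means adding new, larger groups and dropping the old ones. A generic sketch follows; the group numbers, file paths, and the 100M size are assumptions for illustration, and each old group can only be dropped once it is INACTIVE and has been archived:

```sql
-- Add new 100M groups (file paths are hypothetical)
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/prod/redo04.log') SIZE 100M;
ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/oradata/prod/redo05.log') SIZE 100M;

-- Switch until an old group goes INACTIVE, then drop it
ALTER SYSTEM SWITCH LOGFILE;
SELECT group#, status FROM v$log;          -- wait for status = 'INACTIVE'
ALTER DATABASE DROP LOGFILE GROUP 1;
```

Repeat the switch-and-drop cycle for each remaining 10M group.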
~ Madrid