Archive log (DBA)
Hi
I am looking for a SQL script that generates a 100MB archive log file in 2 minutes.
Just for fun, something like this?
SQL> create table titi as select * from all_objects where 1=2;
Table created.
SQL> set serveroutput on
SQL> declare
       v_redo_size1 number;
       v_redo_size2 number;
       v_start      number;
       v_end        number;
     begin
       select a.value, dbms_utility.get_time
         into v_redo_size1, v_start
         from v$mystat a, v$statname b
        where a.statistic# = b.statistic#
          and b.name like 'redo size%';

       for i in 1..15 loop
         insert into titi select * from all_objects;
       end loop;

       select a.value, dbms_utility.get_time
         into v_redo_size2, v_end
         from v$mystat a, v$statname b
        where a.statistic# = b.statistic#
          and b.name like 'redo size%';

       dbms_output.put_line('Redo size generated '||trunc((v_redo_size2-v_redo_size1)/1024/1024)||' Mb');
       dbms_output.put_line('Elapse time '||trunc((v_end-v_start)/100)||' sec.');
     end;
     /
Redo size generated 106 Mb
Elapse time 91 sec.
PL/SQL procedure successfully completed.
SQL>

Nicolas.
Similar Messages
-
ARCHIVE LOGS CREATED in WRONG FOLDER
Hello,
I'm facing an issue with the Archive logs.
In my Db the parameters for Archive logs are
log_archive_dest_1 string LOCATION=/u03/archive/SIEB MANDATORY REOPEN=30
db_create_file_dest string /u01/oradata/SIEB/dbf
db_create_online_log_dest_1 string /u01/oradata/SIEB/rdo
But the archive logs are created in
/u01/app/oracle/product/9.2.0.6/dbs
Listed Below :
bash-2.05$ ls -lrt *.arc
-rw-r----- 1 oracle dba 9424384 Jan 9 09:30 SIEB_302843.arc
-rw-r----- 1 oracle dba 7678464 Jan 9 10:00 SIEB_302844.arc
-rw-r----- 1 oracle dba 1536 Jan 9 10:00 SIEB_302845.arc
-rw-r----- 1 oracle dba 20480 Jan 9 10:00 SIEB_302846.arc
-rw-r----- 1 oracle dba 10010624 Jan 9 10:30 SIEB_302847.arc
-rw-r----- 1 oracle dba 104858112 Jan 9 10:58 SIEB_302848.arc
bash-2.05$
Does anyone have an idea why this happens?
Is this a bug?
Thanks

But in another DB I've
log_archive_dest string
log_archive_dest_1 string LOCATION=/u03/archive/SIEB MANDATORY REOPEN=30
and my archivelogs are in
oracle@srvsdbs7p01:/u03/archive/SIEB/ [SIEB] ls -lrt /u03/archive/SIEB
total 297696
-rw-r----- 1 oracle dba 10010624 Jan 9 10:30 SIEB_302847.arc
-rw-r----- 1 oracle dba 21573632 Jan 9 11:00 SIEB_302848.arc
-rw-r----- 1 oracle dba 101450240 Jan 9 11:30 SIEB_302849.arc
-rw-r----- 1 oracle dba 6308864 Jan 9 12:00 SIEB_302850.arc
-rw-r----- 1 oracle dba 12936704 Jan 9 12:30 SIEB_302851.arc
oracle@srvsdbs7p01:/u03/archive/SIEB/ [SIEB] -
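For the first database, it may be worth checking what the archiver actually sees: one common cause of logs landing in the default $ORACLE_HOME/dbs location is a destination that is invalid at the instance, or set in the pfile while the instance was started with a different parameter file. A quick check (a sketch only; these views exist in 9.2):

```sql
-- Which destinations is the archiver really using, and are they valid?
select dest_id, status, destination, error
  from v$archive_dest
 where destination is not null;

-- Where did the most recent archived logs actually go?
select name, completion_time
  from v$archived_log
 where completion_time > sysdate - 1;
```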
Error while taking archive log backup
Dear all,
We are getting the below mentioned error while taking the archive log backup
============================================================================
BR0208I Volume with name RRPA02 required in device /dev/rmt0.1
BR0210I Please mount BRARCHIVE volume, if you have not already done so
BR0280I BRARCHIVE time stamp: 2010-05-27 16.43.41
BR0256I Enter 'c[ont]' to continue, 's[top]' to cancel BRARCHIVE:
c
BR0280I BRARCHIVE time stamp: 2010-05-27 16.43.46
BR0257I Your reply: 'c'
BR0259I Program execution will be continued...
BR0280I BRARCHIVE time stamp: 2010-05-27 16.43.46
BR0226I Rewinding tape volume in device /dev/rmt0 ...
BR0351I Restoring /oracle/RRP/sapreorg/.tape.hdr0
BR0355I from /dev/rmt0.1 ...
BR0278W Command output of 'LANG=C cd /oracle/RRP/sapreorg && LANG=C cpio -iuvB .tape.hdr0 < /dev/rmt0.1':
Can't read input
===========================================================================
We are able to take offline and online backups, but we face the above-mentioned problem while taking the archive log backup.
We are on ECC 6 / Oracle / AIX
The kernel is latest
The drive is working fine and there is no problem with the tapes, as we have tried different tapes.
can this be a permissions issue?
I ran saproot.sh but somehow it is setting owner as sidadm and group as sapsys to some of the br* files
I tried by changing the permissions to oraSID : dba but still the error is the same
Any suggestions?

Means you have not initialized the media but are trying to take backups.
First, check how many tapes you have entered in the tape count parameter for archive log backups (just check initSID.sap).
Then increase/reduce that according to your archive backup plan >> initialize all the tapes according to their names (the same names you set in initSID.sap) >> stick a physical label on each tape according to its name >> schedule archive backups.
It will not ask you for initialization, as you already initialized the tapes in the second step.
Suggestion: use 7 tapes per week (one tape per day)
Regards,
Nick Loy -
Shell script for archive log transfer
hi
I don't want to reinvent the wheel.
I am looking for shell script for log shipping to provide standby db.
What I want to do is, get the last applied archived log number from alert.log
Copy the files from archive destination according to this value.
Cheers

If you don't want to reinvent the wheel, use Data Guard; no scripts needed.
And your script should use the dictionary instead of scraping the alert log.
v$archived_log has all information!
Also as far as I know, the documentation describes manual standby.
So apparently you not only don't want to reinvent the wheel, you want the script on a silver platter on your doorstep!
Typical attitude of most DBAs here. Use OTN for a permanent vacation.
Sybrand Bakker
Senior Oracle DBA -
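Following the point above about using the dictionary: the last-applied sequence would come from v$archived_log on the standby, and only the copy step needs shell. A minimal sketch of that copy step; `ship_candidates` is a hypothetical helper name, and it assumes archive logs named like SIEB_302843.arc, with the sequence number between the last underscore and the extension:

```shell
# ship_candidates DIR LAST_SEQ
# Lists archive logs in DIR whose sequence number (the digits between
# the last "_" and ".arc") is greater than LAST_SEQ.  The caller would
# then copy these files to the standby host.
ship_candidates() {
  dir=$1; last=$2
  for f in "$dir"/*.arc; do
    [ -e "$f" ] || continue            # skip if the glob matched nothing
    seq=$(basename "$f" .arc)          # e.g. SIEB_302843
    seq=${seq##*_}                     # e.g. 302843
    if [ "$seq" -gt "$last" ]; then
      echo "$f"
    fi
  done
}
```

In practice the echoed file names would be piped to scp/rcp toward the standby archive destination.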
Missing archived logs in v$archived_log
I want to monitor whether system accounts such as SYSTEM, SYS, or other DBA accounts
did any DML in my application schema. What are the minimal audit statements
needed to accomplish this? I know that using Oracle Database Vault one can set a realm to restrict this, but I do
not have Database Vault; I am just looking to audit this using Oracle's database auditing.
My organization also has Oracle Audit Vault, but database auditing events have to be set in the database
first for the collection agent to pass this information to the Oracle Audit Vault server.
Thanks a lot.

Duplicate post
Please ignore
Missing archived logs in v$archived_log
Edited by: sb92075 on Dec 19, 2009 4:49 PM -
Two entries for each archive log in v$archived_log
Hi,
I have noticed that there are two entries for each archive log. Why is this so?
I have fired following command.
==================
set pages 300
set lines 120
ALTER SESSION SET nls_date_format='DD-MON-YYYY HH24:MI:SS';
SELECT sequence#, first_time, next_time
FROM v$archived_log
ORDER BY sequence#;
==================
output is as follows.
==================
1436 24-FEB-2012 00:04:09 24-FEB-2012 08:24:21
1436 24-FEB-2012 00:04:09 24-FEB-2012 08:24:21
1437 24-FEB-2012 08:24:21 24-FEB-2012 15:45:01
1437 24-FEB-2012 08:24:21 24-FEB-2012 15:45:01
1438 24-FEB-2012 15:45:01 24-FEB-2012 15:45:04
1438 24-FEB-2012 15:45:01 24-FEB-2012 15:45:04
1439 24-FEB-2012 15:45:04 24-FEB-2012 15:45:57
1439 24-FEB-2012 15:45:04 24-FEB-2012 15:45:57
1440 24-FEB-2012 15:45:57 24-FEB-2012 17:26:41
1440 24-FEB-2012 15:45:57 24-FEB-2012 17:26:41
1441 24-FEB-2012 17:26:41 24-FEB-2012 18:40:07
1441 24-FEB-2012 17:26:41 24-FEB-2012 18:40:07
1442 24-FEB-2012 18:40:07 24-FEB-2012 19:36:17
1442 24-FEB-2012 18:40:07 24-FEB-2012 19:36:17
1443 24-FEB-2012 19:36:17 24-FEB-2012 19:36:18
1443 24-FEB-2012 19:36:17 24-FEB-2012 19:36:18
==================
Regards
DBA

I have noticed that there are two entries for each archive log. Why is this so?

Mseberg already mentioned it; a little more detail below.
Check the NAME column in v$archived_log:
One row refers to the local destination LOG_ARCHIVE_DEST_1.
The other row refers to your standby/DR destination, but it shows only the service descriptor instead of a full archive log file name.
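One quick way to confirm this is to count rows per destination; with one local and one remote destination the same sequence range should appear under two DEST_IDs (a sketch):

```sql
select dest_id, count(*) as logs, min(sequence#), max(sequence#)
  from v$archived_log
 group by dest_id;
```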
select dest_id,name from v$archived_log where name is not null and completion_time like '%24%FEB%'
DEST_ID NAME
1 +ORAARCHIVE/prod1/archivelogs/arch_0001_0671689302_0000240097.arc
2 (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=sldb1srv)(POR
T=9101)))(CONNECT_DATA=(SERVICE_NAME=prod_sldb1srv_XPT)(INSTANCE_N
AME=prod1)(SERVER=dedicated)))

Edited by: CKPT on Feb 24, 2012 8:26 PM -
Newbie DBA
DB: Oracle 10 g OS: windows
I have 2 databases, A (production) and B (standby/DR). I am manually copying the archive log files from A to B and applying them.
SQL> recover standby database;
ORA-00279: change 23515270208 generated at 10/13/2010 10:59:39 needed for
thread 1
ORA-00289: suggestion :
H:\ORACLE\ORA10G\DPM\ARCHP\DPMLOG74498_0665930822_001.ARC
ORA-00280: change 23515270208 for thread 1 is in sequence #74498
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
auto
ORA-00279: change 23515555165 generated at 10/13/2010 11:43:46 needed for
thread 1
ORA-00289: suggestion :
H:\ORACLE\ORA10G\DPM\ARCHP\DPMLOG74499_0665930822_001.ARC
ORA-00280: change 23515555165 for thread 1 is in sequence #74499
ORA-00278: log file 'H:\ORACLE\ORA10G\DPM\ARCHP\DPMLOG74498_0665930822_001.ARC'
no longer needed for this recovery
ORA-00279: change 23515638382 generated at 10/13/2010 12:18:50 needed for
thread 1
ORA-00289: suggestion :
H:\ORACLE\ORA10G\DPM\ARCHP\DPMLOG74500_0665930822_001.ARC
ORA-00280: change 23515638382 for thread 1 is in sequence #74500
ORA-00278: log file 'H:\ORACLE\ORA10G\DPM\ARCHP\DPMLOG74499_0665930822_001.ARC'
no longer needed for this recovery
ORA-00279: change 23515659690 generated at 10/13/2010 12:18:55 needed for
thread 1
ORA-00289: suggestion :
H:\ORACLE\ORA10G\DPM\ARCHP\DPMLOG74501_0665930822_001.ARC
ORA-00280: change 23515659690 for thread 1 is in sequence #74501
ORA-00278: log file 'H:\ORACLE\ORA10G\DPM\ARCHP\DPMLOG74500_0665930822_001.ARC'
no longer needed for this recovery
ORA-00279: change 23516026341 generated at 10/13/2010 12:19:25 needed for
thread 1
ORA-00289: suggestion :
H:\ORACLE\ORA10G\DPM\ARCHP\DPMLOG74502_0665930822_001.ARC
ORA-00280: change 23516026341 for thread 1 is in sequence #74502
ORA-00278: log file 'H:\ORACLE\ORA10G\DPM\ARCHP\DPMLOG74501_0665930822_001.ARC'
no longer needed for this recovery
ORA-00279: change 23516535747 generated at 10/13/2010 12:20:02 needed for
thread 1
ORA-00289: suggestion :
H:\ORACLE\ORA10G\DPM\ARCHP\DPMLOG74503_0665930822_001.ARC
ORA-00280: change 23516535747 for thread 1 is in sequence #74503
ORA-00278: log file 'H:\ORACLE\ORA10G\DPM\ARCHP\DPMLOG74502_0665930822_001.ARC'
no longer needed for this recovery
ORA-00279: change 23516855012 generated at 10/13/2010 12:20:30 needed for
thread 1
ORA-00289: suggestion :
H:\ORACLE\ORA10G\DPM\ARCHP\DPMLOG74504_0665930822_001.ARC
ORA-00280: change 23516855012 for thread 1 is in sequence #74504
ORA-00278: log file 'H:\ORACLE\ORA10G\DPM\ARCHP\DPMLOG74503_0665930822_001.ARC'
no longer needed for this recovery
ORA-00279: change 23516876618 generated at 10/13/2010 12:20:42 needed for
thread 1
ORA-00289: suggestion :
H:\ORACLE\ORA10G\DPM\ARCHP\DPMLOG74505_0665930822_001.ARC
ORA-00280: change 23516876618 for thread 1 is in sequence #74505
ORA-00278: log file 'H:\ORACLE\ORA10G\DPM\ARCHP\DPMLOG74504_0665930822_001.ARC'
no longer needed for this recovery
ORA-00308: cannot open archived log
'H:\ORACLE\ORA10G\DPM\ARCHP\DPMLOG74505_0665930822_001.ARC'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
SQL>

Please tell me if there is anything wrong?
Edited by: Renjith Madhavan on Oct 13, 2010 3:00 PM

Renjith Madhavan wrote:
I was just curious if
ORA-00278: log file 'H:\ORACLE\ORA10G\DPM\ARCHP\DPMLOG74499_0665930822_001.ARC'
no longer needed for this recovery
is an error or not. Does it mean that all ORA- messages are not errors? I was told that the archived logs were applied successfully and there was no need to worry.

Have a look at the error message documentation:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14219/e0.htm#sthref202
"+*ORA-00278: log file "string" no longer needed for this recovery *+
+*Cause*: The specified redo log file is no longer needed for the current recovery.+
+*Action*: No action required. The archived redo log file may be removed from its current location to conserve disk space, if needed. However, the redo log file may still be required for another recovery session in the future.+"
Nicolas. -
Question about import in archive log mode
Hello.
I am a developer, and I have been asked to write a script that imports a schema into a database (release 9.2.0.7). The import will run once a day. I have seen that in my development environment the import creates 54 archived log files (approx. 10M each); that means more than half a GB a day, which seems too much to me.
I cannot see why all those archived logs are useful. Would the following be a good way to proceed?
1. Forcing an archive log just before the import (I do not know how to do that) so that a backup could be made of the state before the import.
2. Disabling archive log mode during the import and enabling it just after the import (I do not know how to do that).
3. Forcing a new archive log just after the import (I do not know how to do that).
Thanks in advance.

540M is not that much. One would question why you need to import every day; there must be better ways to do this.
The three steps you propose look like an awful scenario, as it would require the database to switch from archivelog to noarchivelog and vice versa. This would require the database to close twice a day.
The scenario is also incomplete, as one would need to take a backup after the import.
If you can convince your DBA to close the database twice a day, he should write the script, which he can easily derive from the docs.
But likely he will visit Billy Verreynne to borrow his lead pipe, and rightly so ;)
Sybrand Bakker
Senior Oracle DBA

Thanks for the answer.
A few things:
- Sorry for my ignorance; I have no experience with database backups, and I do not understand why I need a backup just after the import.
- That database is not critical; it is just for a team who will test several applications on it, so the database only needs to be open during office hours.
- I have no DBA. The client has a DBA for several databases, but neither I nor my boss has any contact with him/her. -
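For completeness, the individual commands the original poster asked about look like this in SQL*Plus (a sketch only; as the reply notes, toggling archiving requires closing the database, which is why the whole approach is questionable):

```sql
-- Steps 1 and 3: force the current redo log to be archived
alter system archive log current;

-- Step 2: disabling/enabling archive log mode (requires a bounce)
shutdown immediate
startup mount
alter database noarchivelog;
alter database open;
-- ... run the import ...
shutdown immediate
startup mount
alter database archivelog;
alter database open;
```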
Urgent: Huge diff in total redo log size and archive log size
Dear DBAs
I have a concern regarding size of redo log and archive log generated.
Is the equation below is correct?
total size of redo generated by all sessions = total size of archive log files generated
I am experiencing a situation where, when I compare the total redo generated by all sessions with the size of the archive logs generated, there is a huge difference.
My total redo size across all sessions is 780MB, while my archive log directory has consumed 23GB.
Before I started measuring, I cleared the archive directory and monitored from a specific point in time.
Environment: Oracle 9i Release 2
How I tracked the sizing information is below
logon as SYS user and run the following statements
DROP TABLE REDOSTAT CASCADE CONSTRAINTS;
CREATE TABLE REDOSTAT (
AUDSID NUMBER,
SID NUMBER,
SERIAL# NUMBER,
SESSION_ID CHAR(27 BYTE),
STATUS VARCHAR2(8 BYTE),
DB_USERNAME VARCHAR2(30 BYTE),
SCHEMANAME VARCHAR2(30 BYTE),
OSUSER VARCHAR2(30 BYTE),
PROCESS VARCHAR2(12 BYTE),
MACHINE VARCHAR2(64 BYTE),
TERMINAL VARCHAR2(16 BYTE),
PROGRAM VARCHAR2(64 BYTE),
DBCONN_TYPE VARCHAR2(10 BYTE),
LOGON_TIME DATE,
LOGOUT_TIME DATE,
REDO_SIZE NUMBER
)
TABLESPACE SYSTEM
NOLOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;
GRANT SELECT ON REDOSTAT TO PUBLIC;
CREATE OR REPLACE TRIGGER TR_SESS_LOGOFF
BEFORE LOGOFF
ON DATABASE
DECLARE
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
INSERT INTO SYS.REDOSTAT
(AUDSID, SID, SERIAL#, SESSION_ID, STATUS, DB_USERNAME, SCHEMANAME, OSUSER, PROCESS, MACHINE, TERMINAL, PROGRAM, DBCONN_TYPE, LOGON_TIME, LOGOUT_TIME, REDO_SIZE)
SELECT A.AUDSID, A.SID, A.SERIAL#, SYS_CONTEXT ('USERENV', 'SESSIONID'), A.STATUS, USERNAME DB_USERNAME, SCHEMANAME, OSUSER, PROCESS, MACHINE, TERMINAL, PROGRAM, TYPE DBCONN_TYPE,
LOGON_TIME, SYSDATE LOGOUT_TIME, B.VALUE REDO_SIZE
FROM V$SESSION A, V$MYSTAT B, V$STATNAME C
WHERE
A.SID = B.SID
AND
B.STATISTIC# = C.STATISTIC#
AND
C.NAME = 'redo size'
AND
A.AUDSID = sys_context ('USERENV', 'SESSIONID');
COMMIT;
END TR_SESS_LOGOFF;
/
Now, the total sum of REDO_SIZE (B.VALUE) is far less than the archive log size, and this is at a time when no user other than myself is logged in.
Is there anything wrong with the query for collecting redo information, or are there background processes that do not report redo information on a per-session basis?
I have seen a similar implementation to the above at many sites.
Kindly suggest a mechanism by which I can trace how much redo (or archive log) each user generates per session. I want to track which users/processes are generating high redo.
If I cannot find a solution I will raise an SR with Oracle.
Thanks
[V]

You can query v$sess_io (column BLOCK_CHANGES) to find out which session is generating how much redo.
The following query gives you the session redo statistics:
select a.sid,b.name,sum(a.value) from v$sesstat a,v$statname b
where a.statistic# = b.statistic#
and b.name like '%redo%'
and a.value > 0
group by a.sid,b.name
If you want, you can only look for redo size for all the current sessions.
Jaffar -
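Combining the two suggestions above into one statement, this lists current sessions ordered by redo generated (a sketch; column names as in v$sesstat, v$statname, v$session, and v$sess_io):

```sql
select s.sid, s.username, st.value as redo_bytes, io.block_changes
  from v$sesstat  st,
       v$statname sn,
       v$session  s,
       v$sess_io  io
 where st.statistic# = sn.statistic#
   and sn.name = 'redo size'
   and s.sid = st.sid
   and io.sid = s.sid
   and st.value > 0
 order by st.value desc;
```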
Why is the flashback log size smaller than the archived log?
hi, all. Why is the flashback log size smaller than the archived log?
Lonion wrote:
hi, all. why is the flashback log size smaller than the archived log?

The two are different things.
Flashback log size depends on the parameter DB_FLASHBACK_RETENTION_TARGET, i.e. how far back you want to be able to flash back.
An archive log file is a dump of an online redo log file; it can be the size of the online redo log or smaller, depending on how full the log was when the switch occurred.
Some more information:-
Flashback log files can be created only under the Flash Recovery Area (that must be configured before enabling the Flashback Database functionality). RVWR creates flashback log files into a directory named “FLASHBACK” under FRA. The size of every generated flashback log file is again under Oracle’s control. According to current Oracle environment – during normal database activity flashback log files have size of 8200192 bytes. It is very close value to the current redo log buffer size. The size of a generated flashback log file can differs during shutdown and startup database activities. Flashback log file sizes can differ during high intensive write activity as well.
Source:- http://dba-blog.blogspot.in/2006/05/flashback-database-feature.html
Edited by: CKPT on Jun 14, 2012 7:34 PM -
Archive log mode in oracle 10g
Hi,
I would like to check the archive log mode in Oracle 10g, and I use this code in SQL*Plus:
select log_mode from v$database
But it displayed "2", not NOARCHIVELOG or ARCHIVELOG.
It displayed a number, not a string.
How can I find this out?
Thanks

Hi Paul
Because I am a newbie Oracle DBA, I am running into many difficulties.
You are very kind to help me.
So I have some more questions:
1. when I executed this code, it always reported error:
$ tmp=`${ORACLE_HOME}/bin/sqlplus -s / as sysdba << EOF
set heading off feedback off;
exit
EOF`
tmp='ERROR:
ORA-01031: insufficient privileges
SP2-0306: Invalid option.
Usage: CONN[ECT] [logon] [AS {SYSDBA|SYSOPER}]
where <logon> ::= <username>[<password>][@<connect_identifier>] | /
SP2-0306: Invalid option.
Usage: CONN[ECT] [logon] [AS {SYSDBA|SYSOPER}]
where <logon> ::= <username>[<password>][@<connect_identifier>] | /
SP2-0157: unable to CONNECT to ORACLE after 3 attempts, exiting SQL*Plus'
so when I updated like this:
tmp=`${ORACLE_HOME}/bin/sqlplus -s sys/syspass@db02 as sysdba <<EOF
set heading off feedback off;
exit
EOF`
It run correctly.
2. With Paul's guidance:
Do not execute Oracle commands as root; execute them as the oracle user. This works for me:
$ tmp=`${ORACLE_HOME}/bin/sqlplus -s / as sysdba << EOF
set heading off feedback off
alter database backup controlfile to '${CONTROLFILE_DIR}/<file name>';
alter database backup controlfile to trace;
exit
EOF`
Of course CONTROLFILE_DIR must be set to a directory with write permission for oracle user.
For example: I have a Unix account, unix/unix,
and a SYS Oracle account, oracle/oracle.
I log in with the Unix account (unix/unix) and call a script file that contains the above code.
tmp=`${ORACLE_HOME}/bin/sqlplus -s oracle/oracle@db02 as sysdba <<EOF
set heading off feedback off
alter database backup controlfile to '${CONTROLFILE_DIR}/backup_control.ctl';
alter database backup controlfile to trace;
exit
EOF`
Unix reports the following: Linux error 13: Permission denied.
The CONTROLFILE_DIR directory has read, write, and execute permission for the unix/unix account.
"Of course CONTROLFILE_DIR must be set to a directory with write permission for oracle user." You mean I have to create a Unix user matching the Oracle user so that the Oracle user has permission to write?
Please guide me in more detail.
Thanks for your attention.
Message was edited by:
user481034 -
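Back to the original question in this thread: V$DATABASE.LOG_MODE is a VARCHAR2 column, so the "2" was almost certainly not a result at all but the SQL*Plus continuation prompt, shown because the statement was entered without a terminator. A sketch:

```sql
-- With a terminator, SQL*Plus runs the statement instead of prompting "2":
select log_mode from v$database;

-- Or, connected as SYSDBA, simply:
archive log list
```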
How: Script archive log transfer to standby db
Hi,
I'm implementing disaster recovery right now. For a special reason, the only option for me is to implement a non-managed standby (manual recovery) database.
The following is what I’m trying to do using shell script:
1. Compress archive logs and copy them from Primary site to Standby site every hour. ( I have a very low network )
2. Decompress archive logs at standby site
3. Check if there are missed archive logs. If no, then do the manual recovery
Did I miss anything above? Also, I'm not skilled at building shell scripts; are there any sample scripts I can follow? Thanks.
Nabil
Message was edited by:
11iuser

Hi,
Take a look at data guard packages. There is a package just for this purpose: Bipul Kumar notes:
http://www.dba-oracle.com/t_oracledataguard_174_unskip_table_.htm
"the time lag between the log transfer and the log apply service can be built using the DELAY attribute of the log_archive_dest_n initialization parameter on the primary database. This delay timer starts when the archived log is completely transferred to the standby site. The default value of the DELAY attribute is 30 minutes, but this value can be overridden as shown in the following example:
LOG_ARCHIVE_DEST_3=’SERVICE=logdbstdby DELAY=60’;"
1. Compress archive logs and copy them from Primary site to Standby site every hour.

Me, I use tar (or compress) and rcp, but I don't know the details of your environment. Jon Emmons has some good notes:
http://www.lifeaftercoffee.com/2006/12/05/archiving-directories-and-files-with-tar/
2. Decompress archive logs at standby site

See the man pages for uncompress. I do it through a named pipe to simplify the process:
http://www.dba-oracle.com/linux/conditional_statements.htm
3. Check if there are missed archive logs.

I keep my standby data in recovery mode, and as soon as the incoming logs are uncompressed, they are applied automatically.
Again, if you don't feel comfortable writing your own, consider using the data guard packages.
Hope this helps. . .
Donald K. Burleson
Oracle Press author -
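For steps 1 and 2 of the list above, here is a minimal local sketch of the compress-and-ship loop. `ship_logs` is a hypothetical helper; the `*.arc` pattern and the choice of gzip are assumptions, and in a real setup the destination would be a remote copy (rcp/scp) as described above rather than a local directory:

```shell
# ship_logs SRC DST : gzip-compress any *.arc in SRC that is not yet
# present (compressed) in DST.  The source file is left in place so the
# primary's own backup/cleanup policy stays in control of deletion.
ship_logs() {
  src=$1; dst=$2
  for f in "$src"/*.arc; do
    [ -e "$f" ] || continue           # skip if the glob matched nothing
    base=$(basename "$f")
    if [ ! -e "$dst/$base.gz" ]; then # only ship logs we have not sent yet
      gzip -c "$f" > "$dst/$base.gz"
    fi
  done
}
```

On the standby side the matching step is a gunzip of anything new, followed by the manual recovery.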
Archive log gap is created is standby when ever audit trail is set to DB
Hi
I am a new DBA. I am facing a problem on a production server: whenever the audit_trail parameter is set to db, an archive log gap is created at the standby site.
My database version is 10.2.0.4
OS is Windows 2003 R2
The audit_trail parameter is set to db only at the primary site. After setting the parameter to db, bouncing the database, and switching the logfile, an archive log gap is created at the standby. I am using LGWR log transport.
Is there any relation between audit_trail and log transport?
Please note that the archive log locations at both sites have sufficient disk space and the drives are working fine. Also, my primary and standby are connected over a WAN.
Please help me with this. Any help will be highly appreciated.
Here a trace file which may be helpful to give any opinion.
Dump file d:\oracle\admin\sbiofac\bdump\sbiofac_lns1_6480.trc
Tue Jun 05 13:46:02 2012
ORACLE V10.2.0.4.0 - Production vsnsta=0
vsnsql=14 vsnxtr=3
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Windows Server 2003 Version V5.2 Service Pack 2
CPU : 2 - type 586, 1 Physical Cores
Process Affinity : 0x00000000
Memory (Avail/Total): Ph:16504M/18420M, Ph+PgF:41103M/45775M, VA:311M/2047M
Instance name: sbiofac
Redo thread mounted by this instance: 1
Oracle process number: 21
Windows thread id: 6480, image: ORACLE.EXE (LNS1)
*** SERVICE NAME:() 2012-06-05 13:46:02.703
*** SESSION ID:(534.1) 2012-06-05 13:46:02.703
*** 2012-06-05 13:46:02.703 58902 kcrr.c
LNS1: initializing for LGWR communication
LNS1: connecting to KSR channel
Success
LNS1: subscribing to KSR channel
Success
*** 2012-06-05 13:46:02.750 58955 kcrr.c
LNS1: initialized successfully ASYNC=1
Destination is specified with ASYNC=61440
*** 2012-06-05 13:46:02.875 73045 kcrr.c
Sending online log thread 1 seq 2217 [logfile 1] to standby
Redo shipping client performing standby login
*** 2012-06-05 13:46:03.656 66535 kcrr.c
Logged on to standby successfully
Client logon and security negotiation successful!
Archiving to destination sbiofacdr ASYNC blocks=20480
Allocate ASYNC blocks: Previous blocks=0 New blocks=20480
Log file opened [logno 1]
*** 2012-06-05 13:46:44.046
Error 272 writing standby archive log file at host 'sbiofacdr'
ORA-00272: error writing archive log
*** 2012-06-05 13:46:44.078 62692 kcrr.c
LGWR: I/O error 272 archiving log 1 to 'sbiofacdr'
*** 2012-06-05 13:46:44.078 60970 kcrr.c
kcrrfail: dest:2 err:272 force:0 blast:1
*** 2012-06-05 13:47:37.031
*** 2012-06-05 13:47:37.031 73045 kcrr.c
Sending online log thread 1 seq 2218 [logfile 2] to standby
*** 2012-06-05 13:47:37.046 73221 kcrr.c
Shutting down [due to no more ASYNC destination]
Redo Push Server: Freeing ASYNC PGA buffer
LNS1: Doing a channel reset for next time around...OK
Great details thanks!!
Are the SDU/TDU settings configured in the Oracle Net files on both primary and standby? I will see if I have an example.
The parameters appear fine.
There was an Oracle document 386417.1 on this; I have not double-checked if it's still available. ( CHECK - Oracle 9 but worth a glance )
Will Check and post here.
I have these listed too. ( Will check all three and see if they still exist )
When to modify, when not to modify the Session data unit (SDU) [ID 99715.1] ( CHECK - still there but very old )
SQL*Net Packet Sizes (SDU & TDU Parameters) [ID 44694.1] ( CHECK - Best by far WOULD REVIEW FIRST )
Any chance your firewall limit the Packet size?
Best Regards
mseberg
Edited by: mseberg on Jun 6, 2012 12:36 PM
Edited by: mseberg on Jun 6, 2012 12:43 PM
Additional document
The relation between MTU (Maximum Transmission Unit) and SDU (Session Data Unit) [ID 274483.1]
Edited by: mseberg on Jun 6, 2012 12:50 PM
Still later
Not sure if this helps, but I played around with this on Oracle 11 a little; here is that example:
# listener.ora Network Configuration File: /u01/app/oracle/product/11.2.0/network/admin/listener.ora
# Generated by Oracle configuration tools.
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = yourdomain.com)(PORT = 1521))
    )
  )

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = STANDBY)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0)
      (SDU = 32767)
      (GLOBAL_DBNAME = STANDBY_DGMGRL.yourdomain.com)
    )
  )

ADR_BASE_LISTENER = /u01/app/oracle
INBOUND_CONNECT_TIMEOUT_LISTENER = 120

Edited by: mseberg on Jun 6, 2012 12:57 PM
Also of interest
Redo transport best practices for 10gR2:
http://www.oracle.com/technetwork/database/features/availability/maa-wp-10gr2-dataguardnetworkbestpr-134557.pdf
Edited by: mseberg on Jun 6, 2012 1:11 PM -
Generated archive logs are not in sequence?
Last Friday the latest archive log was ARC00024.ARC. When I came back, the archive logs ARC00001.ARC and ARC00002.ARC had been generated by Oracle itself. I thought the archive log numbers should be sequential. What is happening?
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination C:\oracle\ora92\RDBMS
Oldest online log sequence 1
Next log sequence to archive 3
Current log sequence 3
SQL>
FAN
Edited by: user623471 on Jun 7, 2009 7:35 PM

khurram,
It's our production instance and we haven't issued the resetlogs option, but when listing the archives they show different sequence numbers...
and also, when copying the archives, RMAN does not copy them in sequence:
-rw-r----- 1 xxx dba 69363859 May 28 19:16 2_10373.arc.gz
-rw-r----- 1 xxx dba 43446622 May 28 19:16 1_10553.arc.gz
-rw-r----- 1 xxx dba 52587365 May 28 19:16 1_10578.arc.gz
-rw-r----- 1 xxx dba 45251820 May 28 19:16 1_10543.arc.gz
-rw-r----- 1 xxx dba 60890256 May 28 19:17 1_10579.arc.gz
-rw-r----- 1 xxx dba 46659008 May 28 19:17 1_10548.arc.gz
-rw-r----- 1 xxx dba 116899466 May 28 19:17 2_10353.arc.gz
-rw-r----- 1 xxx dba 77769517 May 28 19:17 1_10531.arc.gz
-rw-r----- 1 xxx dba 66401923 May 28 19:18 1_10530.arc.gz
-rw-r----- 1 xxx dba 45972697 May 28 19:18 1_10605.arc.gz
-rw-r----- 1 xxx dba 55082543 May 28 19:18 1_10600.arc.gz
-rw-r----- 1 xxxq dba 42682207 May 28 19:19 1_10547.arc.gz
thanks,
baskar.l -
Hello All,
I'm confused when it comes to archive logs. I did research the subject. The post at https://forums.oracle.com/thread/363272 sounds somewhat like the problem I'm having. Our database goes down after a certain amount of time. We fix the problem by issuing the commands:
rman target /
backup archivelog all delete input;
As far as I can tell, we are in archive log mode.
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 6456
Next log sequence to archive 6458
Current log sequence 6458
This post mentioned something about automatic archive on. How do you check if automatic archive is on? Or is this way off base?
Is there a way to back up the archive logs automatically, without manually running the following?
rman target /
backup archivelog all delete input;
Thank you in advance. I really need to solve this problem soon. Any suggestions or explanations are greatly appreciated!
Sharon

Sharon wrote:
John,
I appreciate the information. Unfortunately, I am the programmer that became the DBA after attending Oracle DBA classes once. I am learning as I go. There are no implementation consultants. Thank you again
for the information.
Sharon
Me again, Sharon. I appreciate your situation: I drifted into Oracle DBA work (decades ago...) without any proper training or mentor. Your classes should have covered the basics of backup and recovery; the ones I run (in-class and on-line) certainly do. But for the immediate future, I would suggest expanding your one-line backup routine. Create a text file rman_script.txt with this content:
backup as compressed backupset database ;
backup as compressed backupset archivelog all delete all input;
backup current controlfile ;
delete obsolete;
And then run this every day:
rman target / @rman_script.txt
There is much more, but that should protect your database while you complete your studies.
Best of luck.
John Watson
Oracle Certified Master DBA
http://skillbuilders.com
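To answer the "automatically" part of the question: once rman_script.txt works when run by hand, it can be scheduled from the oracle OS user's crontab. A sketch only; the SID, all paths, and the log file name are placeholders for your environment:

```
# crontab of the oracle user: run the RMAN backup script nightly at 01:00
0 1 * * * ORACLE_SID=ORCL ORACLE_HOME=/u01/app/oracle/product/11.2.0 /u01/app/oracle/product/11.2.0/bin/rman target / @/home/oracle/rman_script.txt >> /home/oracle/rman_backup.log 2>&1
```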