Trouble querying v$archived_log
Hi Gurus.
I am trying to use the v$archived_log view to check the last applied (it must be applied) archivelog to our standby.
This works with the following:
SELECT SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG WHERE APPLIED = 'YES';
The above provides me with a list of applied logs:
SEQUENCE# APP
1234 YES
1235 YES
etc etc
However, I would like to query the same view and extract the sequence number of the last applied log.
I would like just the last number so that I can append to a text file and compare with the Primary database.
Can someone please assist?
I am trying this:
SELECT MAX (SEQUENCE#), APPLIED FROM V$ARCHIVED_LOG WHERE APPLIED = 'YES';
However this returns an error:
ORA-00937: not a single-group group function
Thanks and regards,
DA
Linux Red Hat 4
10.2.0.4
Hi,
SELECT MAX(SEQUENCE#) FROM V$ARCHIVED_LOG WHERE APPLIED = 'YES'; -- drop APPLIED from the select list, or add GROUP BY APPLIED
Correction: chirnar & SB are correct...
Edited by: CKPT on Nov 1, 2010 9:41 PM
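For what it's worth, a minimal SQL*Plus sketch of the whole task: capture just the number and append it to a text file (the file path here is made up):

```sql
-- run on the standby; SPOOL ... APPEND needs SQL*Plus 10g or later
SET HEADING OFF FEEDBACK OFF PAGESIZE 0
SPOOL /tmp/last_applied.txt APPEND
SELECT MAX(sequence#) FROM v$archived_log WHERE applied = 'YES';
SPOOL OFF
```

The spooled file then holds one sequence number per run, ready to diff against the same query on the primary.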
Similar Messages
-
Trouble querying after creating a linked server from SQL Server 2008 to Excel 2007
I created a linked server from SQL Server 2008 Management Studio Express to an EXCEL 2007 workbook using:
sp_addlinkedserver @server='LSERVER_EX0', @srvproduct='EXCELDATA', @provider='Microsoft.ACE.OLEDB.12.0', @datasrc='C:\Temp\abc.xlsx', @provstr='EXCEL 12.0'
The linked server LSERVER_EX0 was created, but I cannot see any tables (Excel sheets), and when I ran the following to query the tables,
sp_tables_ex 'LSERVER_EX0'
I got:
Cannot obtain the schema rowset "DBSCHEMA_TABLES" for OLE DB provider "Microsoft.ACE.OLEDB.12.0" for linked server "LSERVER_EX0". The provider supports the interface, but returns a failure code when it is used.
Any hint why?
On another note, I was able to import the Excel sheets using the Import and Export Data wizard, but I cannot control the column data type and size that way.
Open Management Studio, go to "Server Objects" -> "Linked Servers" -> "Providers", select the provider you use, right-click it and, in the provider options, check "Allow inprocess".
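If checking the provider option in the GUI is not convenient, the same setting can be flipped from T-SQL with the sp_MSset_oledb_prop helper procedure (an undocumented but widely used procedure; a sketch, run as sysadmin):

```sql
-- enable "Allow inprocess" for the ACE OLE DB provider
EXEC master.dbo.sp_MSset_oledb_prop
     N'Microsoft.ACE.OLEDB.12.0',
     N'AllowInProcess', 1;
```

After that, re-run sp_tables_ex 'LSERVER_EX0' to see whether the sheets are now listed.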
-
V$archived_log and gv$archived_log
2 node RAC on 10.2.0.4
Archived Logs stored in Local file systems of each node (not in ASM)
To determine the amount of redo generated on each nodes I googled and got the below query.
SELECT ROUND(SUM(blocks*block_size)/1024/1024/1024) arc_size,
TRUNC(first_time) arc_date
FROM v$archived_log
WHERE dest_id=1
GROUP BY TRUNC(first_time)
ORDER BY 2 DESC;
When I checked the actual size of the archive logs on each node, I realized that the above query returns the sum of the archive logs of both nodes (accurately, though).
When I queried gv$archived_log, it returned
the size of the archive logs from both nodes multiplied by 2 (inaccurate).
Why can't v$archived_log just show the archivelog size of its own instance? Are there any other RAC-related v$ views which behave similarly?
Edited by: Herbaceous on May 26, 2011 8:13 AM
Hi,
Why can't v$archived_log just show the archivelog size of its own instance? Are there any other RAC-related v$ views which behave similarly?
To understand it we need to understand the difference between an instance and a database. A database (the files) can be opened by many instances.
Database:
CONTROLFILE
DATAFILE
ONLINELOG
ARCHIVELOG
SPFILE
Instances:
PARAMETERS
MEMORY STRUCTURE
Each instance has its own REDO/UNDO threads, but that does not mean those files belong only to that instance (node).
So an archivelog does not belong to a particular instance (node); it is generated by one of the instances and belongs to the database. That is why it is recommended to place archivelogs on shared storage, like the ONLINELOGs.
You do not need gv$ to query database files, because they are the same on all instances:
gv$controlfile = (v$controlfile * number of instances)
gv$datafile = (v$datafile * number of instances)
gv$log = (v$log * number of instances)
gv$archived_log = (v$archived_log * number of instances)
So, I recommend running your query filtered by THREAD#, but the values can be inaccurate because THREAD# 2 can archive the redo of THREAD# 1.
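A sketch of the original query with a THREAD# breakdown added (the same caveat about one thread archiving another's redo applies):

```sql
SELECT thread#,
       TRUNC(first_time) arc_date,
       ROUND(SUM(blocks * block_size) / 1024 / 1024 / 1024) arc_gb
FROM   v$archived_log
WHERE  dest_id = 1
GROUP  BY thread#, TRUNC(first_time)
ORDER  BY arc_date DESC, thread#;
```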
You can also use this query to estimate archivelog size by thread.
SELECT distinct(to_char((bytes*0.000001),'9990.999')) size_mb FROM v$log;
column ord noprint
column date_ heading 'Date' format A15
column no heading '#Arch files' format 9999999
column no_size heading 'Size Mb' format 9999999
compute avg of no on report
compute avg of no_size on report
break on report
select MAX(first_time) ord, THREAD#, to_char(first_time,'DD-MON-YYYY') date_,
count(recid) no, count(recid) * <logfile_size> no_size
from v$log_history
group by to_char(first_time,'DD-MON-YYYY') ,THREAD#
order by ord
clear breaks
clear computes
clear columns
See this tech note on MOS:
*Determine How Much Disk Space is Needed for the Archive Files [ID 122555.1]*
Regards,
Levi Pereira
Edited by: Levi Pereira on May 26, 2011 4:49 PM
-
XSD with xs:any and then querying in the xs:any part
Hi,
I'm having troubles querying for elements in an xs:any part of a schema. Trouble as in: can't get to the data anymore using simple xquery/xpath expressions.
I have a XSD with an xs:any element:
(the actual XSD is bigger, this is a smaller, test version)
<?xml version="1.0" encoding="windows-1252"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
targetNamespace="http://localhost/anydata.xsd"
xmlns:xdb="http://xmlns.oracle.com/xdb"
xmlns="http://localhost/anydata.xsd"
elementFormDefault="qualified"
attributeFormDefault="unqualified"
version="1.0">
<xs:element name="anydata" type="anydataType" xdb:defaultTable="ANYDATA"/>
<xs:complexType name="anydataType">
<xs:sequence>
<xs:any namespace="##any" minOccurs="0" maxOccurs="unbounded" processContents="lax" />
</xs:sequence>
<xs:attribute name="refId" type="xs:string" use="required" />
</xs:complexType>
</xs:schema>
I register the XSD and create the default table (as binary XML) with the following code (after uploading anydata.xsd via FTP to /public/tmp):
set serveroutput on
VAR schemaURL VARCHAR2(256)
VAR schemaPath VARCHAR2(256)
BEGIN
:schemaURL := 'http://localhost/anydata.xsd';
:schemaPath := '/public/tmp/anydata.xsd';
END;
/
-- Delete schema if already there, delete with cascade
BEGIN
DBMS_XMLSchema.deleteSchema(
schemaurl=>:schemaURL,
delete_option=>DBMS_XMLSchema.Delete_Cascade_Force);
END;
/
BEGIN
dbms_xmlschema.registerSchema(
SCHEMAURL => 'http://localhost/anydata.xsd',
SCHEMADOC => xdbURIType('/public/tmp/anydata.xsd').getClob(),
LOCAL => TRUE,
GENTYPES => FALSE,
GENBEAN => FALSE,
GENTABLES => TRUE,
FORCE => FALSE,
OPTIONS => DBMS_XMLSCHEMA.REGISTER_BINARYXML,
OWNER => USER);
END;
/
Inserting some data with an <a>a</a> element:
INSERT INTO anydata VALUES (XMLType('<?xml version="1.0" encoding="ISO-8859-1"?><anydata xmlns="http://localhost/anydata.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://localhost/anydata.xsd http://localhost/anydata.xsd" refId="a12ab"><a>a</a></anydata>'));
No problems here:
SQL> select * from anydata;
SYS_NC_ROWINFO$
<?xml version="1.0" encoding="ISO-8859-1"?>
<anydata xmlns="http://localhost/anydata.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://local
host/anydata.xsd http://localhost/anydata.xsd" refId="a12ab">
<a>a</a>
</anydata>
Selecting the refId is no problem:
SQL> SELECT * FROM XMLTable('
2 declare default element namespace "http://localhost/anydata.xsd"; (: :)
3 for $f in ora:view("anydata")
4 return $f/anydata/@refId'
5 ) xtab;
COLUMN_VALUE
a12ab
But getting the <a> value... seems to be impossible, whatever I try:
SQL> SELECT * FROM XMLTable('
2 declare default element namespace "http://localhost/anydata.xsd"; (: :)
3 for $f in ora:view("anydata")
4 return $f/anydata/a'
5 ) xtab;
SELECT * FROM XMLTable('
ERROR at line 1:
ORA-19276: XPST0005 - XPath step specifies an invalid element/attribute name: (a ='http://localhost/anydata.xsd')
So it is probably something to do with <a> not being part of the schema. I've also tried the following, to see if I can apply the namespace to only the appropriate elements:
SQL> SELECT * FROM XMLTable('
2 declare namespace ad="http://localhost/anydata.xsd"; (: :)
3 for $f in ora:view("anydata")
4 return $f/ad:anydata/a'
5 ) xtab;
SELECT * FROM XMLTable('
ERROR at line 1:
ORA-19276: XPST0005 - XPath step specifies an invalid element/attribute name: (a)
That doesn't help either.
Anyone ever seen this and solved it?
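One thing that may be worth trying: since the wildcard content is not described in the compiled schema, a wildcard XPath step might get past the static type check where the named step fails. An untested sketch, in the same style as the queries above:

```sql
SELECT * FROM XMLTable('
  declare default element namespace "http://localhost/anydata.xsd"; (: :)
  for $f in ora:view("anydata")
  return $f/anydata/*'
) xtab;
```

Whether this sidesteps ORA-19276 depends on the version; the thread below reports a patch as the actual fix.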
Message was edited by: Tijink
Hi,
I've patched my system with the following patch (id 7009297):
ORACLE 11G 11.1.0.6 PATCH 3 BUG FOR WINDOWS X-64 BIT AMD
And the result
SQL> SELECT * FROM XMLTable('
2 declare namespace ad="http://localhost/anydata.xsd"; (: :)
3 for $f in ora:view("anydata")
4 return $f/ad:anydata/ad:a'
5 ) xtab;
<a xmlns="http://localhost/anydata.xsd">a</a>
So, it works. Thanks for the help!
-
How can I determine what is the minimum SCN number I need to restore up to.
Say if I have a full database backup, I know I have file inconsistency, but I want to know what is the minimum time or SCN number a need to roll forward to in order to be able to open the database?
For example: I do a database restore.
restore database ;
RMAN> sql 'alter database open read only';
sql statement: alter database open read only
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of sql command on default channel at 03/16/2009 15:00:04
RMAN-11003: failure during parse/execution of SQL statement: alter database open read only
ORA-16004: backup database requires recovery
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/u01/oradata/p1/system01.dbf'
I need to apply archive log files. All the references I can find for ORA-01194 state the solution is to "apply more logs until the file is consistent". But how many logs? Or, more appropriately, up to what time or SCN? How does one determine what time or SCN is required to get all files consistent?
I thought this query might provide the answer, but it doesn't
select max(checkpoint_change#)
from v$datafile_header
MAX(CHECKPOINT_CHANGE#)
7985876903
--It applies a bit more redo, but not enough to make my datafiles consistent.
recover database until SCN=7985876903 ;
Starting recover at 03/16/09 15:04:54
using channel ORA_DISK_1
using channel ORA_DISK_2
using channel ORA_DISK_3
using channel ORA_DISK_4
using channel ORA_DISK_5
using channel ORA_DISK_6
using channel ORA_DISK_7
using channel ORA_DISK_8
starting media recovery
channel ORA_DISK_1: starting archive log restore to default destination
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=18436
channel ORA_DISK_1: reading from backup piece /temp-oracle/backup/hot/p1/20090315/hourly.arch_P1_47353_681538638_1
channel ORA_DISK_1: restored backup piece 1
piece handle=/temp-oracle/backup/hot/p1/20090315/hourly.arch_P1_47353_681538638_1 tag=TAG20090315T041716
channel ORA_DISK_1: restore complete, elapsed time: 00:02:26
archive log filename=/u01/app/oracle/flash_recovery_area/P1/archivelog/2009_03_16/o1_mf_1_18436_4vxd81yc_.arc thread=1 se quence=18436
Oracle Error:
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/u01/oradata/p1/system01.dbf'
I've discovered I need to apply archive logs until this query reports all datafiles as FUZZY=NO, but this only works by guessing at some time period to roll forward to, then checking the FUZZY column, and trying again. Is there a way to know that I have to roll forward to a specific SCN in order for all my datafiles to be consistent?
select file#
, status
, checkpoint_change#
, checkpoint_time
, FUZZY
, RECOVER
,LAST_DEALLOC_SCN
from v$datafile_header
order by checkpoint_time
Thanks,
Jason
The minimum point in time is the time when the last backup piece for datafiles in that backup was completed.
Your alert.log should show the redo log sequence number at that time.
You can query V$ARCHIVED_LOG and get the FIRST_CHANGE# of the first archivedlog generated after that backup piece completed.
A LIST BACKUP; in RMAN should also show you the SCNs at the time of the backups.
You can also use TIMESTAMP_TO_SCN -- eg
select timestamp_to_scn(to_timestamp('15-MAR-09 09:24:01','DD-MON-RR HH24:MI:SS')) from dual;
will return an approximation of the SCN.
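On the FUZZY question specifically: the absolute fuzzy SCN that the FUZZY column is derived from is exposed in the undocumented fixed view x$kcvfh (column FHAFS). A common sketch -- column names should be verified on your version before relying on it:

```sql
-- run as SYS on the mounted, restored database; FHAFS = 0 means "not fuzzy"
SELECT GREATEST((SELECT MAX(checkpoint_change#) FROM v$datafile_header),
                (SELECT MAX(fhafs) FROM x$kcvfh)) min_scn_to_recover_past
FROM   dual;
```

Recovering past that SCN should leave every datafile consistent, without guessing at time periods.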
Hemant K Chitale
http://hemantoracledba.blogspot.com
Edited by: Hemant K Chitale on Mar 17, 2009 9:41 AM
added the LIST BACKUP command from RMAN.
-
How do I determine what a 10.10.00 memory error means on a LaserJet 3600
Printer: HP 3600n
Error message: 10.10.00 memory error
What is this error and what can be done to resolve it?
-
How to find out which archived logs needed to recover a hot backup?
I'm using Oracle 11gR2 (11.2.0.1.0).
I have backed up a database when it is online using the following backup script through RMAN
connect target /
run {
allocate channel d1 type disk;
backup
incremental level=0 cumulative
filesperset 4
format '/san/u01/app/backup/DB_%d_%T_%u_%c.rman'
database;
}
The backup set contains the backup of the datafiles and the control file. I have copied all the backup pieces to another server where I will restore/recover the database, but I don't know which archived logs are needed in order to restore/recover the database to a consistent state.
I have not deleted any archived log.
How can I find out which archived logs are needed to recover the hot backup to a consistent state? Can this be done by querying V$BACKUP_DATAFILE and V$ARCHIVED_LOG? If yes, which columns should I query?
Thanks for any help.
A few ways:
1a. Get the timestamps when the BACKUP ... DATABASE began and ended.
1b. Review the alert.log of the database that was backed up.
1c. From the alert.log identify the first Archivelog that was generated after the begin of the BACKUP ... DATABASE and the first Archivelog that was generated after the end of the BACKUP .. DATABASE.
1d. These (from 1c) are the minimal Archivelogs that you need to RECOVER with. You can choose to apply additional Archivelogs that were generated at the source database to continue to "roll forward".
2a. Do a RESTORE DATABASE alone.
2b. Query V$DATAFILE on the restored database for the lowest CHECKPOINT_CHANGE# and CHECKPOINT_TIME. Also query for the highest CHECKPOINT_CHANGE# and CHECKPOINT_TIME.
2c. Go back to the source database and query V$ARCHIVED_LOG (FIRST_CHANGE#) to identify the first Archivelog that has a higher SCN (FIRST_CHANGE#) than the lowest CHECKPOINT_CHANGE# from 2b above. Also query for the first Archivelog that has a higher SCN (FIRST_CHANGE#) than the highest CHECKPOINT_CHANGE# from 2b above.
2d. These (from 2c) are the minimal Archivelogs that you need to RECOVER with.
(Why do you need to query V$ARCHIVED_LOG at the source? If you RESTORE a controlfile backup that was generated after the first archivelog switch after the end of the BACKUP ... DATABASE, you would be able to query V$ARCHIVED_LOG at the restored database as well. That is why it is important to force an archivelog (log switch) after a BACKUP ... DATABASE and then back up the controlfile after this -- i.e. last. That way, the controlfile that you have restored to the new server has all the information needed.)
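Putting steps 2b/2c into SQL, a sketch using V$BACKUP_DATAFILE and V$ARCHIVED_LOG (adjust the level filter to match your backup):

```sql
-- on the source: checkpoint range recorded for the level-0 backup
SELECT MIN(checkpoint_change#) low_ckp,
       MAX(checkpoint_change#) high_ckp
FROM   v$backup_datafile
WHERE  incremental_level = 0;

-- first archivelog whose redo range passes the highest checkpoint
SELECT MIN(sequence#)
FROM   v$archived_log
WHERE  next_change# > (SELECT MAX(checkpoint_change#)
                       FROM   v$backup_datafile
                       WHERE  incremental_level = 0);
```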
3. RESTORE DATABASE PREVIEW in RMAN if you have the archivelogs and subsequent controlfile in the backup itself !
Hemant K Chitale -
Unable to start logical standby
Hi All,
I'm having some problems trying to start my logical standby. I managed to successfully create a physical standby and ensured that it was collecting and applying archive logs. Running Oracle 10g on 11g Grid ASM.
I then converted it to a logical standby and I'm getting this problem:
SYS@Logical > ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE
ERROR at line 1:
ORA-00254: error in archive control string ''
ORA-06512: at "SYS.DBMS_INTERNAL_LOGSTDBY", line 615
ORA-06512: at line 1
The following is my parameters on the logical db
log_archive_dest_1 string LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=Primary
log_archive_dest_3 string LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=Logical
*log_archive_dest_2 points to another physical that is working ok.
standby_archive_dest string LOCATION=USE_DB_RECOVERY_FILE_DEST
db_recovery_file_dest string +FLASH_RECOVERY_AREA
This is the only line relevant in my alert log
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE
ORA-254 signalled during: ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE...
These are the parameters on my primary db
log_archive_dest_1 string LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=Primary
log_archive_dest_3 string SERVICE=logical LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=logical
log_archive_dest_4 string LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=primary
SYS@Primary > select SUPPLEMENTAL_LOG_DATA_PK, SUPPLEMENTAL_LOG_DATA_UI,FORCE_LOGGING from v$database;
SUP SUP FOR
YES YES YES
Can anyone tell me what I'm doing wrong? I've looked on Google and Metalink and it keeps coming down to me putting in an archive parameter wrong. I don't think it's on the primary, or it would have errored during the creation of the physical standby.
Any help would be appreciated.
Thanks.
Hey Damorgan,
I should have made it clear, sorry. That isn't the syntax I used to create the parameters, that is a screen grab of the parameter with me editing it to make it easier to read. So apologies for the confusion.
To test the physical standby, I switched an archive log on the primary and then queried v$archived_log to make sure the logs were applied, and compared the values between the primary and the physical. I did several log switches and I could see them replicate on the physical.
Do you have any other suggestions on how to check for a physical standby? -
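Besides comparing sequence numbers, a couple of standard health checks run on the standby (a sketch; both views exist in 10g):

```sql
-- apply/transport processes and where they currently are
SELECT process, status, thread#, sequence#
FROM   v$managed_standby;

-- how far behind the standby is
SELECT name, value
FROM   v$dataguard_stats
WHERE  name IN ('apply lag', 'transport lag');
```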
Dataguard physical standby archive log question
Hi all,
I will try to keep this simple..
I have a 4 node RAC primary shipping logs to a 2 node physical standby.
On the primary, when I run 'alter system archive log current' on an instance, I only see 1 log being applied on the standby (that is, by querying v$archived_log).
If I run the following on the standby:
select thread#,sequence#,substr(name,43,70)"NAME",registrar,applied,status,first_time from v$archived_log where first_time
in
(select max(first_time) from v$archived_log group by thread#)
order by thread#
I get:
THREAD# SEQUENCE# NAME REGISTR APPLIED S FIRST_TIME
1 602 thread_1_seq_602.2603.721918617 RFS YES A 17-jun-2010 12:56:58
2 314 thread_2_seq_314.2609.721918627 RFS NO A 17-jun-2010 12:56:59
3 311 thread_3_seq_311.2604.721918621 RFS NO A 17-jun-2010 12:57:00
4 319 thread_4_seq_319.2606.721918625 RFS NO A 17-jun-2010 12:57:00
Why do we only see the max(sequence#) having been applied and not all of them?
This is the same no matter how many times I archive the current log files on any of the instances on the primary and also the standby does not have any gaps.
Hope this is clear..
any ideas?
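For comparing apply progress across instances, a per-thread rollup may be easier to read than the max(first_time) query above (a sketch, run on the standby):

```sql
SELECT thread#, MAX(sequence#) last_applied_seq
FROM   v$archived_log
WHERE  applied = 'YES'
GROUP  BY thread#
ORDER  BY thread#;
```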
jd
OK, output from gv$archived_log on standby BEFORE 'alter system archive log current' on primary:
THREAD# SEQUENCE# NAME REGISTR APPLIED S FIRST_TIME
1 679 thread_1_seq_679.1267.722001505 RFS NO A 18-jun-2010 11:58:22
1 679 thread_1_seq_679.1267.722001505 RFS NO A 18-jun-2010 11:58:22
2 390 thread_2_seq_390.1314.722001507 RFS NO A 18-jun-2010 11:58:23
2 390 thread_2_seq_390.1314.722001507 RFS NO A 18-jun-2010 11:58:23
3 386 thread_3_seq_386.1266.722001505 RFS YES A 18-jun-2010 11:58:22
3 386 thread_3_seq_386.1266.722001505 RFS YES A 18-jun-2010 11:58:22
4 393 thread_4_seq_393.1269.722001507 RFS NO A 18-jun-2010 11:58:23
4 393 thread_4_seq_393.1269.722001507 RFS NO A 18-jun-2010 11:58:23
Output from v$archived_log on standby AFTER 'alter system archive log current' on primary
THREAD# SEQUENCE# NAME REGISTR APPLIED S FIRST_TIME
1 680 thread_1_seq_680.1333.722004227 RFS NO A 18-jun-2010 11:58:29
1 680 thread_1_seq_680.1333.722004227 RFS NO A 18-jun-2010 11:58:29
2 391 thread_2_seq_391.1332.722004227 RFS NO A 18-jun-2010 11:58:30
2 391 thread_2_seq_391.1332.722004227 RFS NO A 18-jun-2010 11:58:30
3 387 thread_3_seq_387.1271.722004225 RFS NO A 18-jun-2010 11:58:28
3 387 thread_3_seq_387.1271.722004225 RFS NO A 18-jun-2010 11:58:28
4 394 thread_4_seq_394.1270.722004225 RFS YES A 18-jun-2010 11:58:29
4 394 thread_4_seq_394.1270.722004225 RFS YES A 18-jun-2010 11:58:29
as a reminder we have a 4 node RAC system shipping logs to a 2 node RAC standby. There are no gaps but only one log is ever registered as being applied.
Why is that? Why aren't all logs registered as being applied?
-
How to find what archivelogs are needed for recovery
From Hemants sir's blog:
http://hemantoracledba.blogspot.com/2010/03/misinterpreting-restore-database.html
"You can query V$ARCHIVED_LOG for FIRST_CHANGE# and FIRST_TIME besides SEQUENCE#. That way, you can match SCN (FIRST_CHANGE#), Time and Sequence to determine which ArchiveLogs are needed. The RMAN LIST BACKUP command shows you the Checkpoint SCN for all datafiles in a backup, so you need ArchiveLogs from the point of the earliest Checkpoint SCN in that backup set."
How can I find the earliest checkpoint SCN from the backup sets?
Any queries to find the earliest SCN in a backup set (RC views)?
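Rather than eyeballing LIST BACKUP output, the controlfile views can be queried directly; a sketch joining V$BACKUP_SET to V$BACKUP_DATAFILE (swap in RC_BACKUP_SET / RC_BACKUP_DATAFILE when connected to a recovery catalog):

```sql
SELECT bs.recid                      backup_set,
       bs.completion_time,
       MIN(bdf.checkpoint_change#)   earliest_ckp_scn
FROM   v$backup_set      bs
JOIN   v$backup_datafile bdf
       ON  bdf.set_stamp = bs.set_stamp
       AND bdf.set_count = bs.set_count
GROUP  BY bs.recid, bs.completion_time
ORDER  BY bs.completion_time;
```

Recovery must reach at least past the earliest_ckp_scn of the backup set being restored.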
Edited by: user9097501 on Aug 10, 2010 6:38 AM
If I have to refresh the database from the backup of 9-AUG, is the earliest SCN from which recovery would be needed 60279026593?
RMAN> list backup of datafile 1;
List of Backup Sets
===================
BS Key Type LV Size Device Type Elapsed Time Completion Time
43210 Incr 0 13.25G DISK 03:55:03 25-DEC-2009 15:52
List of Datafiles in backup set 43210
File LV Type Ckp SCN Ckp Time Name
1 0 Incr 46646244284 25-DEC-2009 11:57 +DBDATA/dnbib/datafile/system.455.675750017
Backup Set Copy #1 of backup set 43210
Device Type Elapsed Time Completion Time Compressed Tag
DISK 03:55:03 25-DEC-2009 15:51 YES FORCLONE
List of Backup Pieces for backup set 43210 Copy #1
BP Key Pc# Status Piece Name
51423 1 AVAILABLE /backup/bkup_for_clone_GOCPRD_20091225_44019_1.bak
51424 2 AVAILABLE /backup/bkup_for_clone_GOCPRD_20091225_44019_2.bak
51425 3 AVAILABLE /backup/bkup_for_clone_GOCPRD_20091225_44019_3.bak
51426 4 AVAILABLE /backup/bkup_for_clone_GOCPRD_20091225_44019_4.bak
51427 5 AVAILABLE /backup/bkup_for_clone_GOCPRD_20091225_44019_5.bak
51428 6 AVAILABLE /backup/bkup_for_clone_GOCPRD_20091225_44019_6.bak
51429 7 AVAILABLE /backup/bkup_for_clone_GOCPRD_20091225_44019_7.bak
BS Key Type LV Size Device Type Elapsed Time Completion Time
43367 Incr 0 15.33G DISK 03:39:25 19-FEB-2010 23:43
List of Datafiles in backup set 43367
File LV Type Ckp SCN Ckp Time Name
1 0 Incr 49260361949 19-FEB-2010 20:03 +DBDATA/dnbib/datafile/system.455.675750017
Backup Set Copy #1 of backup set 43367
Device Type Elapsed Time Completion Time Compressed Tag
DISK 03:39:25 19-FEB-2010 23:42 YES FORCLONE
List of Backup Pieces for backup set 43367 Copy #1
BP Key Pc# Status Piece Name
51819 1 EXPIRED /dnbusr1/dnbinas/dnbi_clone/backup/na1stg/bkup_for_clone_GOCPRD_20100219_44170_1.bak
51820 2 EXPIRED /dnbusr1/dnbinas/dnbi_clone/backup/na1stg/bkup_for_clone_GOCPRD_20100219_44170_2.bak
51821 3 EXPIRED /dnbusr1/dnbinas/dnbi_clone/backup/na1stg/bkup_for_clone_GOCPRD_20100219_44170_3.bak
51822 4 EXPIRED /dnbusr1/dnbinas/dnbi_clone/backup/na1stg/bkup_for_clone_GOCPRD_20100219_44170_4.bak
51823 5 EXPIRED /dnbusr1/dnbinas/dnbi_clone/backup/na1stg/bkup_for_clone_GOCPRD_20100219_44170_5.bak
51824 6 EXPIRED /dnbusr1/dnbinas/dnbi_clone/backup/na1stg/bkup_for_clone_GOCPRD_20100219_44170_6.bak
51825 7 EXPIRED /dnbusr1/dnbinas/dnbi_clone/backup/na1stg/bkup_for_clone_GOCPRD_20100219_44170_7.bak
51826 8 EXPIRED /dnbusr1/dnbinas/dnbi_clone/backup/na1stg/bkup_for_clone_GOCPRD_20100219_44170_8.bak
BS Key Type LV Size Device Type Elapsed Time Completion Time
43518 Incr 0 13.70G DISK 04:26:50 03-APR-2010 12:12
List of Datafiles in backup set 43518
File LV Type Ckp SCN Ckp Time Name
1 0 Incr 51275194891 03-APR-2010 07:46 +DBDATA/dnbib/datafile/system.455.675750017
Backup Set Copy #1 of backup set 43518
Device Type Elapsed Time Completion Time Compressed Tag
DISK 04:26:50 03-APR-2010 12:12 YES FORCLONE
List of Backup Pieces for backup set 43518 Copy #1
BP Key Pc# Status Piece Name
52481 1 EXPIRED /dnbusr1/dnbinas/dnbi_clone/backup/GOCPRD/bkup_for_clone_GOCPRD_20100403_44346_1.bak
52482 2 EXPIRED /dnbusr1/dnbinas/dnbi_clone/backup/GOCPRD/bkup_for_clone_GOCPRD_20100403_44346_2.bak
52483 3 EXPIRED /dnbusr1/dnbinas/dnbi_clone/backup/GOCPRD/bkup_for_clone_GOCPRD_20100403_44346_3.bak
52484 4 EXPIRED /dnbusr1/dnbinas/dnbi_clone/backup/GOCPRD/bkup_for_clone_GOCPRD_20100403_44346_4.bak
52485 5 EXPIRED /dnbusr1/dnbinas/dnbi_clone/backup/GOCPRD/bkup_for_clone_GOCPRD_20100403_44346_5.bak
52486 6 EXPIRED /dnbusr1/dnbinas/dnbi_clone/backup/GOCPRD/bkup_for_clone_GOCPRD_20100403_44346_6.bak
52487 7 EXPIRED /dnbusr1/dnbinas/dnbi_clone/backup/GOCPRD/bkup_for_clone_GOCPRD_20100403_44346_7.bak
BS Key Type LV Size Device Type Elapsed Time Completion Time
43599 Incr 0 16.56G DISK 04:36:51 11-APR-2010 03:35
List of Datafiles in backup set 43599
File LV Type Ckp SCN Ckp Time Name
1 0 Incr 51583056741 10-APR-2010 22:58 +DBDATA/dnbib/datafile/system.455.675750017
Backup Set Copy #1 of backup set 43599
Device Type Elapsed Time Completion Time Compressed Tag
DISK 04:36:51 11-APR-2010 03:35 YES FORCLONE
List of Backup Pieces for backup set 43599 Copy #1
BP Key Pc# Status Piece Name
52841 1 EXPIRED /backup/bkup_for_clone_GOCPRD_20100410_44440_1.bak
52842 2 EXPIRED /backup/bkup_for_clone_GOCPRD_20100410_44440_2.bak
52843 3 EXPIRED /backup/bkup_for_clone_GOCPRD_20100410_44440_3.bak
52844 4 EXPIRED /backup/bkup_for_clone_GOCPRD_20100411_44440_4.bak
52845 5 EXPIRED /backup/bkup_for_clone_GOCPRD_20100411_44440_5.bak
52846 6 EXPIRED /backup/bkup_for_clone_GOCPRD_20100411_44440_6.bak
52847 7 EXPIRED /backup/bkup_for_clone_GOCPRD_20100411_44440_7.bak
52848 8 EXPIRED /backup/bkup_for_clone_GOCPRD_20100411_44440_8.bak
52849 9 EXPIRED /backup/bkup_for_clone_GOCPRD_20100411_44440_9.bak
BS Key Type LV Size Device Type Elapsed Time Completion Time
43637 Incr 0 16.37G DISK 05:13:41 18-APR-2010 04:09
Keep: LOGS Until: FOREVER
List of Datafiles in backup set 43637
File LV Type Ckp SCN Ckp Time Name
1 0 Incr 51953576598 17-APR-2010 22:56 +DBDATA/dnbib/datafile/system.455.675750017
Backup Set Copy #1 of backup set 43637
Device Type Elapsed Time Completion Time Compressed Tag
DISK 05:13:41 18-APR-2010 04:09 YES FORCLONE_17APR
List of Backup Pieces for backup set 43637 Copy #1
BP Key Pc# Status Piece Name
53112 1 EXPIRED /backup/bkup_for_clone_GOCPRD_20100417_44486_1.bak
53113 2 EXPIRED /backup/bkup_for_clone_GOCPRD_20100417_44486_2.bak
53114 3 EXPIRED /backup/bkup_for_clone_GOCPRD_20100417_44486_3.bak
53115 4 EXPIRED /backup/bkup_for_clone_GOCPRD_20100418_44486_4.bak
53116 5 EXPIRED /backup/bkup_for_clone_GOCPRD_20100418_44486_5.bak
53117 6 EXPIRED /backup/bkup_for_clone_GOCPRD_20100418_44486_6.bak
53118 7 EXPIRED /backup/bkup_for_clone_GOCPRD_20100418_44486_7.bak
53119 8 EXPIRED /backup/bkup_for_clone_GOCPRD_20100418_44486_8.bak
53120 9 EXPIRED /backup/bkup_for_clone_GOCPRD_20100418_44486_9.bak
BS Key Type LV Size Device Type Elapsed Time Completion Time
43773 Incr 0 18.31G DISK 03:45:59 27-JUN-2010 16:23
List of Datafiles in backup set 43773
File LV Type Ckp SCN Ckp Time Name
1 0 Incr 58235899541 27-JUN-2010 12:37 +DBDATA/dnbib/datafile/system.455.675750017
Backup Set Copy #1 of backup set 43773
Device Type Elapsed Time Completion Time Compressed Tag
DISK 03:45:59 27-JUN-2010 16:23 YES FOR_DUP
List of Backup Pieces for backup set 43773 Copy #1
BP Key Pc# Status Piece Name
53784 1 AVAILABLE /backup/bkup_for_dup_GOCPRD_20100627_44623_1.bak
53785 2 AVAILABLE /backup/bkup_for_dup_GOCPRD_20100627_44623_2.bak
53786 3 AVAILABLE /backup/bkup_for_dup_GOCPRD_20100627_44623_3.bak
53787 4 AVAILABLE /backup/bkup_for_dup_GOCPRD_20100627_44623_4.bak
53788 5 AVAILABLE /backup/bkup_for_dup_GOCPRD_20100627_44623_5.bak
53789 6 AVAILABLE /backup/bkup_for_dup_GOCPRD_20100627_44623_6.bak
53790 7 AVAILABLE /backup/bkup_for_dup_GOCPRD_20100627_44623_7.bak
53791 8 AVAILABLE /backup/bkup_for_dup_GOCPRD_20100627_44623_8.bak
53792 9 AVAILABLE /backup/bkup_for_dup_GOCPRD_20100627_44623_9.bak
53793 10 AVAILABLE /backup/bkup_for_dup_GOCPRD_20100627_44623_10.bak
BS Key Type LV Size Device Type Elapsed Time Completion Time
43964 Incr 0 10.59G DISK 03:17:50 05-AUG-2010 03:05
List of Datafiles in backup set 43964
File LV Type Ckp SCN Ckp Time Name
1 0 Incr 60108165915 04-AUG-2010 23:47 +DBDATA/dnbib/datafile/system.455.675750017
Backup Set Copy #1 of backup set 43964
Device Type Elapsed Time Completion Time Compressed Tag
DISK 03:17:50 05-AUG-2010 03:05 YES FOR_DUP
List of Backup Pieces for backup set 43964 Copy #1
BP Key Pc# Status Piece Name
54470 1 AVAILABLE /backup/bkup_for_dup_GOCPRD_20100804_44966_1.bak
54471 2 AVAILABLE /backup/bkup_for_dup_GOCPRD_20100805_44966_2.bak
54472 3 AVAILABLE /backup/bkup_for_dup_GOCPRD_20100805_44966_3.bak
54473 4 AVAILABLE /backup/bkup_for_dup_GOCPRD_20100805_44966_4.bak
54474 5 AVAILABLE /backup/bkup_for_dup_GOCPRD_20100805_44966_5.bak
54475 6 AVAILABLE /backup/bkup_for_dup_GOCPRD_20100805_44966_6.bak
BS Key Type LV Size Device Type Elapsed Time Completion Time
44020 Incr 0 19.75G DISK 04:13:42 09-AUG-2010 03:15
List of Datafiles in backup set 44020
File LV Type Ckp SCN Ckp Time Name
1 0 Incr 60279026593 08-AUG-2010 23:01 +DBDATA/dnbib/datafile/system.455.675750017
Backup Set Copy #1 of backup set 44020
Device Type Elapsed Time Completion Time Compressed Tag
DISK 04:13:42 09-AUG-2010 03:15 YES FOR_DUP
List of Backup Pieces for backup set 44020 Copy #1
BP Key Pc# Status Piece Name
54990 1 AVAILABLE /backup/bkup_for_dup_GOCPRD_20100808_45009_1.bak
54991 2 AVAILABLE /backup/bkup_for_dup_GOCPRD_20100808_45009_2.bak
54992 3 AVAILABLE /backup/bkup_for_dup_GOCPRD_20100808_45009_3.bak
54993 4 AVAILABLE /backup/bkup_for_dup_GOCPRD_20100808_45009_4.bak
54994 5 AVAILABLE /backup/bkup_for_dup_GOCPRD_20100809_45009_5.bak
54995 6 AVAILABLE /backup/bkup_for_dup_GOCPRD_20100809_45009_6.bak
54996 7 AVAILABLE /backup/bkup_for_dup_GOCPRD_20100809_45009_7.bak
54997 8 AVAILABLE /backup/bkup_for_dup_GOCPRD_20100809_45009_8.bak
54998 9 AVAILABLE /backup/bkup_for_dup_GOCPRD_20100809_45009_9.bak
54999 10 AVAILABLE /backup/bkup_for_dup_GOCPRD_20100809_45009_10.bak
RMAN>
-
Data Guard -- v$archive_log applied column shows wrong info
I'm playing with 10g Data Guard. Both the primary and the physical standby are in Maximum Availability mode. When I query the v$archived_log APPLIED column for dest_id=2 (which is the physical standby), it shows NO for some files, even though the alert logs on both primary and standby show the files were transferred. Even on the physical standby, v$archived_log shows the log as applied (YES). My question is: why does the primary database's v$archived_log show NO?
I am trying to setup crontab so that once I see value YES in primary's v$archived_log for dest_id = 2 then I can backup archived log file and delete it from primary database machine. But my perl script won't work because primary v$archived_log shows value NO for applied column for dest_id = 2.
Thanks.Hi OrionNet,
I think I am looking at the wrong column and also on the wrong database for what I need to do. Let me explain what I am trying to achieve.
I have a shell script to check whether archived logs are shipped from the primary to the standby AND whether the standby successfully applied them. My shell script was looking at the primary database using the following query:
select sequence#, archived, applied
from v$archived_log
where dest_id = 2 -- running on Primary BUT looking at standby archived log destination
order by sequence# ;
SEQUENCE# ARCHIVED APPLIED
=====================
58 YES YES
59 YES YES
*60* YES NO
61 YES YES
After reading the v$archived_log reference entry in the manual (http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/dynviews_1016.htm#REFRN30011):
APPLIED Indicates whether the archivelog has been applied to its corresponding standby database (YES) or not (NO). The value is always NO for local destinations.
This column is meaningful at the physical standby site for the ARCHIVED_LOG entries with REGISTRAR='RFS' (which means this log is shipped from the primary to the standby database). If REGISTRAR='RFS' and APPLIED is NO, then the log has arrived at the standby but has not yet been applied. If REGISTRAR='RFS' and APPLIED is YES, the log has arrived and been applied at the standby database.
You can use this field to identify archivelogs that can be backed up and removed from disk.
I think I should use the following query on the standby database, not on the primary database:
select sequence#, registrar, applied
from v$archived_log
where dest_id = 1 -- query running on standby so dest_id = 1 which is standby archive log destination
and registrar = 'RFS'
order by sequence# ;
SEQUENCE# REGISTRAR APPLIED
=====================
58 RFS YES
59 RFS YES
*60* RFS YES
61 RFS YES
So my shell script should connect to the standby database from the primary machine and evaluate which archived logs can be deleted from the primary after they are backed up.
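Run on the standby, the whole check collapses to a single aggregate. Note that selecting only MAX(sequence#), without the non-aggregated APPLIED column, also avoids the ORA-00937 from the original question. A minimal sketch:

```sql
-- On the standby: highest log sequence received via RFS and already applied.
-- No GROUP BY is needed because only the aggregate is selected.
SELECT MAX(sequence#) AS last_applied_seq
  FROM v$archived_log
 WHERE applied   = 'YES'
   AND registrar = 'RFS';
```

The single number returned can then be appended to a text file and compared against the primary's current sequence.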
Now I'll generate some gaps on Standby and check query again to make sure what I understand and expect is correct.
Hope I am clear now. Thanks for your help. My bad, I didn't read the manual correctly the first time. -
V$archived_log not in sync in Data Guard
Hi,
My question is about something I observed in my Data Guard setup.
A few days ago I did an SCN-based incremental restore to sync the primary and standby instances; the problem is that v$archived_log was not updated. Let me show you:
Primary:
select max(sequence#), to_char(next_time,'DD-MON-YY:HH24:MI:SS'), applied from v$archived_log group by to_char(next_time,'DD-MON-YY:HH24:MI:SS'), applied order by 1 desc
923426 17-FEB-13:07:16:38 NO
SQL> select max(sequence#) from v$archived_log where applied='YES';
MAX(SEQUENCE#)
915252
Standby:
select max(sequence#), to_char(next_time,'DD-MON-YY:HH24:MI:SS'), applied from v$archived_log group by to_char(next_time,'DD-MON-YY:HH24:MI:SS'), applied order by 1 desc
923426 17-FEB-13:07:16:38 YES
When I do automatic recovery, the reported gap is:
Fetching gap sequence in thread 1, gap sequence 923427-923526
My question is: why do I get different values from these queries?
Thanks a lot!
Edited by: 970676 on Feb 21, 2013 4:31 PM
Hello;
If I understand the question correctly, it's because of how you query v$archived_log. The MAX(sequence#) hurts the query because it does not let you see the different values of DEST_ID.
Try changing the query to include DEST_ID, and restrict one of the date columns (for example NEXT_TIME > SYSDATE - 1) to see a smaller range.
clear screen
set linesize 100
column STANDBY format a20
column applied format a10
SELECT
name as STANDBY,
SEQUENCE#,
applied,
completion_time
from
v$archived_log
WHERE
DEST_ID = 2 AND NEXT_TIME > SYSDATE - 1;
Then try this query from your Primary:
http://www.visi.com/~mseberg/data_guard/monitor_data_guard_transport.html
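One way to see why the two MAX() values disagree is to break the aggregate out per destination, as suggested above. A sketch (DEST_ID values are site-specific):

```sql
-- Per-destination summary over the last day: highest sequence per
-- (dest_id, applied) combination, so local and remote rows are both visible.
SELECT dest_id,
       applied,
       MAX(sequence#) AS max_seq
  FROM v$archived_log
 WHERE next_time > SYSDATE - 1
 GROUP BY dest_id, applied
 ORDER BY dest_id, applied;
```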
Best Regards
mseberg
Edited by: mseberg on Feb 21, 2013 6:49 PM -
Where is the location of tablespace file and control file
Hi, all
Where are the tablespace datafiles and the control files located? Thanks.
For datafiles, query DBA_DATA_FILES or V$DATAFILE
For tempfiles, query DBA_TEMP_FILES or V$TEMPFILE
For online redo logs, query V$LOGFILE
For archived redo logs, query V$ARCHIVED_LOG
For controlfiles, query V$CONTROLFILE
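For example, the corresponding lookups might look like this (run as a privileged user; the paths returned are instance-specific):

```sql
-- Datafiles
SELECT file_name FROM dba_data_files;
-- Tempfiles
SELECT file_name FROM dba_temp_files;
-- Online redo log members
SELECT member FROM v$logfile;
-- Archived redo logs
SELECT name FROM v$archived_log;
-- Controlfiles
SELECT name FROM v$controlfile;
```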
Hemant K Chitale
http://hemantoracledba.blogspot.com -
Change standby protection mode
hello
I have set up Data Guard with Oracle 10g on Windows XP.
When I change the protection mode to maximum availability it works fine, but when I change to maximum protection the database shuts down.
I tried again and it shut down again; when I reverted to maximum availability it works. Please help.
Another question: can the primary database and the standby database have the same name?
Many thanks.
Hi,
Is data actually being applied on the standby database? In maximum protection the primary will only commit once the redo data is available on a standby db. If the standby is unreachable the primary would essentially stop processing.
On the standby, try querying v$archived_log to see if the logs are actually being received. Also check the alert log on the primary to see if there are any errors with the archiving process. -
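A quick sketch of that check on the standby (REGISTRAR = 'RFS' restricts the output to logs shipped from the primary):

```sql
-- On the standby: recent logs received from the primary and whether
-- each one has been applied yet.
SELECT sequence#, applied, completion_time
  FROM v$archived_log
 WHERE registrar = 'RFS'
 ORDER BY sequence#;
```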
Hi
We have an Oracle 11.1 database with 2 physical standby databases configured. Archives are applied normally on the standby databases, and when I query v$archived_log there it shows them as applied.
However, when I query the same view on the primary database, the APPLIED column shows 'NO'.
Can someone please explain the reason for this behavior?
Thank you.
Indicates whether the archivelog has been applied to its corresponding standby database (YES) or not (NO). The value is always NO for local destinations.
This column is meaningful at the physical standby site for the ARCHIVED_LOG entries with REGISTRAR='RFS' (which means this log is shipped from the primary to the standby database). If REGISTRAR='RFS' and APPLIED is NO, then the log has arrived at the standby but has not yet been applied. If REGISTRAR='RFS' and APPLIED is YES, the log has arrived and been applied at the standby database.
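Because APPLIED is only meaningful on the standby side, a primary-side check is often better done through V$ARCHIVE_DEST_STATUS, which reports what each destination has archived and applied. A sketch (dest_id = 2 is an assumed standby destination):

```sql
-- On the primary: last sequence archived to, and applied at, the destination.
SELECT dest_id, status, archived_seq#, applied_seq#
  FROM v$archive_dest_status
 WHERE dest_id = 2;  -- hypothetical standby destination id
```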