Data Guard Archive log size
Hi Experts,
I would like to know whether there are any views showing the size of the archive log files transferred to and applied on the physical standby database. I want to see how much space they take up in a day.
Thanks
Shaan
SQL> desc v$archived_log
Name Null? Type
RECID NUMBER
STAMP NUMBER
NAME VARCHAR2(257)
DEST_ID NUMBER
THREAD# NUMBER
SEQUENCE# NUMBER
RESETLOGS_CHANGE# NUMBER
RESETLOGS_TIME DATE
FIRST_CHANGE# NUMBER
FIRST_TIME DATE
NEXT_CHANGE# NUMBER
NEXT_TIME DATE
BLOCKS NUMBER
BLOCK_SIZE NUMBER
CREATOR VARCHAR2(7)
REGISTRAR VARCHAR2(7)
STANDBY_DEST VARCHAR2(3)
ARCHIVED VARCHAR2(3)
APPLIED VARCHAR2(3)
DELETED VARCHAR2(3)
STATUS VARCHAR2(1)
COMPLETION_TIME DATE
DICTIONARY_BEGIN VARCHAR2(3)
DICTIONARY_END VARCHAR2(3)
END_OF_REDO VARCHAR2(3)
BACKUP_COUNT NUMBER
ARCHIVAL_THREAD# NUMBER
ACTIVATION# NUMBER
Refer to blocks and block_size
Other than that, you can look this up in the documentation on v$archived_log.
Blocks and block_size: multiply the two and you have the size of each archived log in bytes.
Sybrand Bakker
Senior Oracle DBA
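For what it's worth, the advice above can be sketched as a daily summary query. This is a sketch: it assumes DEST_ID = 2 is the standby destination, which you should verify against V$ARCHIVE_DEST in your own configuration.

```sql
-- Approximate archive volume per day for the standby destination
-- (DEST_ID = 2 is an assumption; verify against V$ARCHIVE_DEST)
SELECT TRUNC(completion_time)                        AS day,
       ROUND(SUM(blocks * block_size) / 1024 / 1024) AS size_mb
FROM   v$archived_log
WHERE  dest_id = 2
GROUP  BY TRUNC(completion_time)
ORDER  BY 1;
```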
Similar Messages
-
Hi,
I am using 9i Data Guard. I am trying to set up an automatic procedure to remove archive logs on the standby site once they have been applied, but apart from manual deletion there is no option for this in the Oracle Data Guard settings.
Does anyone have a solution for it?
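In 9i there is no built-in purge option, so a scheduled manual delete is the usual workaround. For reference, later releases (10g onwards) added an RMAN deletion policy that covers exactly this case; a sketch, run from RMAN rather than SQL*Plus, assuming 10gR2+ syntax:

```sql
-- 10g+ only: mark an archived log deletable once applied on the standby
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
-- then purge periodically:
DELETE ARCHIVELOG ALL;
```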
Thanks
user3076922 wrote:
Hi
The standby database is configured with the broker and applying redo in real time; however, I want to change this to archive-log apply mode without losing the broker configuration. Is that possible? If the broker cannot do archive-log apply, can I remove the broker and set up the standby to use archive-log apply with plain Data Guard?
Regards
Hi
I think mseberg answered correctly: you can enable/disable log apply by changing the state of the standby database with DGMGRL, as mseberg wrote.
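A sketch of the DGMGRL state change mseberg referred to ('standby_db' is a placeholder for the database name in your broker configuration; the state names assume 11g, where they are APPLY-ON/APPLY-OFF):

```sql
-- From DGMGRL: stop and restart redo apply without touching the broker config
EDIT DATABASE 'standby_db' SET STATE = 'APPLY-OFF';
EDIT DATABASE 'standby_db' SET STATE = 'APPLY-ON';
```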
or you can disable recover standby database with following script from SQL*Plus.
SQL> alter database recover managed standby database cancel;
Regards
Mahir M. Quluzade
www.mahir-quluzade.com -
Data Guard Archive Log Latency Between Primary and Physical Standby
How can I get the time it takes (latency) for the primary instance to get an archive log over to the physical standby instance and get it archived and applied? I have been looking at the V$ARCHIVED_LOG view on each side, but the column COMPLETION_TIME always shows a date "MM/DD/YY" with no timestamp. I thought the DATE datatype included both date and time. Any ideas on how I can get the latency info I'm looking for?
Thanks
Steve
> the COLUMN "COMPLETION_TIME" always shows a date "MM/DD/YY" and no timestamp.
Did you try using TO_CHAR? e.g.
to_char(completion_time,'dd/mm/yyyy hh24:mi:ss') -
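Building on the TO_CHAR suggestion, a rough per-sequence transport latency can be sketched from V$ARCHIVED_LOG alone. This assumes DEST_ID 1 is the local destination and DEST_ID 2 the standby (check V$ARCHIVE_DEST); it measures transport completion, while the APPLIED column separately marks apply.

```sql
-- Seconds between the primary's local archival and the standby copy's completion
SELECT a.sequence#,
       TO_CHAR(a.completion_time, 'dd/mm/yyyy hh24:mi:ss') AS primary_time,
       TO_CHAR(b.completion_time, 'dd/mm/yyyy hh24:mi:ss') AS standby_time,
       ROUND((b.completion_time - a.completion_time) * 86400) AS lag_seconds
FROM   v$archived_log a,
       v$archived_log b
WHERE  a.sequence# = b.sequence#
AND    a.thread#   = b.thread#
AND    a.dest_id   = 1
AND    b.dest_id   = 2
ORDER  BY a.sequence#;
```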
Hi....
I have set up Data Guard and everything is fine; archives are being transferred to the standby.
Also, during configuration I created standby redo log groups 4, 5 and 6 and copied them to the standby.
But with real-time apply the standby is not using standby redo log groups 4, 5 and 6; when I query v$log it only shows groups 1, 2 and 3.
It should be using the standby redo logs for maximum availability.
Please help.
Thanks in advance.
There was a similar question here just a few days ago:
Data Guard - redo log files -
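For the thread above: V$LOG only lists the online redo log groups, so seeing groups 1-3 there is expected. A quick sketch to confirm the standby redo logs are actually receiving redo:

```sql
-- Run on the standby: STATUS = 'ACTIVE' means RFS is writing to that
-- standby redo log; standby redo logs appear in V$STANDBY_LOG, not V$LOG
SELECT group#, thread#, sequence#, status
FROM   v$standby_log;
```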
Urgent: Huge diff in total redo log size and archive log size
Dear DBAs
I have a concern regarding size of redo log and archive log generated.
Is the equation below correct?
total size of redo generated by all sessions = total size of archive log files generated
I am experiencing a situation where, when I look at the total size of redo generated by all sessions and the size of the archive logs generated, there is a huge difference.
The total redo size across all sessions is 780 MB, whereas the archive log directory has consumed 23 GB.
Before I started measuring I cleared the archive directory and monitored from a specific point in time.
Environment: Oracle 9i Release 2
How I tracked the sizing information is below
logon as SYS user and run the following statements
DROP TABLE REDOSTAT CASCADE CONSTRAINTS;
CREATE TABLE REDOSTAT
(
AUDSID NUMBER,
SID NUMBER,
SERIAL# NUMBER,
SESSION_ID CHAR(27 BYTE),
STATUS VARCHAR2(8 BYTE),
DB_USERNAME VARCHAR2(30 BYTE),
SCHEMANAME VARCHAR2(30 BYTE),
OSUSER VARCHAR2(30 BYTE),
PROCESS VARCHAR2(12 BYTE),
MACHINE VARCHAR2(64 BYTE),
TERMINAL VARCHAR2(16 BYTE),
PROGRAM VARCHAR2(64 BYTE),
DBCONN_TYPE VARCHAR2(10 BYTE),
LOGON_TIME DATE,
LOGOUT_TIME DATE,
REDO_SIZE NUMBER
)
TABLESPACE SYSTEM
NOLOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;
GRANT SELECT ON REDOSTAT TO PUBLIC;
CREATE OR REPLACE TRIGGER TR_SESS_LOGOFF
BEFORE LOGOFF
ON DATABASE
DECLARE
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
INSERT INTO SYS.REDOSTAT
(AUDSID, SID, SERIAL#, SESSION_ID, STATUS, DB_USERNAME, SCHEMANAME, OSUSER, PROCESS, MACHINE, TERMINAL, PROGRAM, DBCONN_TYPE, LOGON_TIME, LOGOUT_TIME, REDO_SIZE)
SELECT A.AUDSID, A.SID, A.SERIAL#, SYS_CONTEXT ('USERENV', 'SESSIONID'), A.STATUS, USERNAME DB_USERNAME, SCHEMANAME, OSUSER, PROCESS, MACHINE, TERMINAL, PROGRAM, TYPE DBCONN_TYPE,
LOGON_TIME, SYSDATE LOGOUT_TIME, B.VALUE REDO_SIZE
FROM V$SESSION A, V$MYSTAT B, V$STATNAME C
WHERE
A.SID = B.SID
AND
B.STATISTIC# = C.STATISTIC#
AND
C.NAME = 'redo size'
AND
A.AUDSID = sys_context ('USERENV', 'SESSIONID');
COMMIT;
END TR_SESS_LOGOFF;
/
Now, the total sum of REDO_SIZE (B.VALUE) is far less than the archive log size, even at a time when no other user is logged in except myself.
Is there anything wrong with the query for collecting redo information, or are there hidden processes that don't report redo on a session basis?
I have seen similar implementations at many sites.
Kindly suggest a mechanism by which I can trace how much redo (or archive log) each user generates on a session basis. I want to track which users/processes are causing high redo generation.
If I can't find a solution I will raise an SR with Oracle.
Thanks
[V]
You can query v$sess_io (column BLOCK_CHANGES) to find out which session is generating how much redo. Also note that background processes, and sessions that terminate abnormally, never fire the logoff trigger, which accounts for part of the difference you see.
The following query gives you the session redo statistics:
select a.sid,b.name,sum(a.value) from v$sesstat a,v$statname b
where a.statistic# = b.statistic#
and b.name like '%redo%'
and a.value > 0
group by a.sid,b.name
If you want, you can restrict it to just the 'redo size' statistic for all current sessions.
Jaffar -
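To cover the "which user" part of the question, the per-session query above can be joined to V$SESSION; a sketch:

```sql
-- Redo generated so far by each connected session, largest first
SELECT s.username, s.sid, s.program, st.value AS redo_bytes
FROM   v$sesstat  st,
       v$statname n,
       v$session  s
WHERE  st.statistic# = n.statistic#
AND    n.name        = 'redo size'
AND    st.sid        = s.sid
AND    st.value      > 0
ORDER  BY st.value DESC;
```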
Does anyone have any recommendations on how to read data from archive logs? When I use LogMiner, I am getting only bind variables for DML operations, but I need the actual data from the archive logs.
Any thoughts
Thanks
-Prasad
LogMiner is as close to the command issued as possible. Depending on the Oracle version you will be able to see DML, or DML and DDL. From 9i onwards Oracle can translate the DML against the data dictionary into the actual SQL command; in the first 8i release only DML was visible.
~ Madrid -
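A sketch of the dictionary translation Madrid mentions, assuming 9i+; the log file path and the SCOTT schema filter are placeholders. With DICT_FROM_ONLINE_CATALOG, V$LOGMNR_CONTENTS shows reconstructed SQL with literal values rather than bind placeholders:

```sql
-- From SQL*Plus as a suitably privileged user
EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/arch/1_1234.arc', DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
SELECT sql_redo, sql_undo FROM v$logmnr_contents WHERE seg_owner = 'SCOTT';
EXECUTE DBMS_LOGMNR.END_LOGMNR;
```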
Data Warehouse Archive logging questions
Hi all,
I'd like some opinions/advice on archive logging and OWB 10.2 with a 10.2 database.
Do you use archive logging on your non-production OWB instances? I have a development system that only has "on demand" backups done and the archive logs fill frequently. In this scenario, should I disable archive logging? I realize that this limits my recovery options to cold backups but on a development environment, this seems sufficient for me. Would I be messing up any OWB features by turning off archive logging?
For production instances, how large do you make your archive log (as a percentage of your total DW size perhaps)?
How do you manage them? With Flash recovery areas? Manually? RMAN or other tools?
Thanks in Advance,
Mike
Usually, I don't set any DW tables to log. Since it's a data warehouse, I believe it's better to take cold backups. In some cases, ETL mappings may work like backup procedures themselves.
In OWB, select the object you need (table or index) to create. Right-click it, select Configuration -> Performance Parameters -> Logging Mode -> NOLOGGING
Flash Recovery: I don't think it's going to help you, since most of your data manipulation is based on batch jobs.
RMAN: If you want to make hot backups, this is something that can really help you manage backup procedures.
Manually: Maybe... Why not?
I don't take hot backups of DW databases; I prefer cold backups. In a recovery scenario, you restore the cold backup and, if it's 3 days old, re-run the ETL mappings for the last 3 days.
Regards,
Marcos -
How to delete the data in archived log files
hi
How can I delete the entries in archived log files, and what is the disadvantage of deleting archived log entries?
There is no documented way to delete data stored in archived log files: you can only remove the archived log files themselves if needed.
-
Oracle8i Data Guard with log shipping
Is it true that:
in Oracle8i, with Data Guard, there will be zero data loss if the online redo logs have been mirrored at the DR site, and in the event of a disaster the last unfinished redo log can be used to recover the database?
What product is used to apply the redo logs?
I know Oracle9i claims this is possible, but when will Oracle9i be available on the Sun platform?
>
Thomas Schulz wrote:
> Here are my questions:
>
> 1. Is it correct that I have to restore the last successfully restored log (if not the first) from the previous session with "recover_start", before I can restore the next log with "recover_replace" in a new session?
Yes, that's correct. As soon as you leave the restore session, you have to provide an 'overlap' of log information so that the recovery can continue.
> 2. Can't I mix the restoring of incremental and log backups in this way: log001, incremental data, log002, ...? In my tests I was able to restore the incremental data direct after the complete data, but not between the log backups.
No, that's not possible. After you have recovered some log information, the incremental backup cannot be applied as a "delta" to the data area anymore, as the data area has already changed.
> 3. Can I avoid the state change to OFFLINE after a log restore?
Of course - don't use recover_cancel
As soon as you stop the recovery, the database is stopped - no way around this.
There are some 3rd party tools available for this, like LIBELLE.
KR Lars -
AIX Data Guard Isolating Log Traffic
Anyone have experience in configuration of Data Guard on AIX? We would like to isolate log traffic to a separate circuit and are struggling with configuration to accomplish this. Would appreciate some suggestions.
Allan Vath
[email protected]
We have successfully configured Data Guard between our two sites, but the network traffic for transporting the logs was significant. We purchased a separate data circuit for log transport but are struggling with a configuration that forces log file shipment to use the dedicated circuit. This is more of a network/AIX issue, but I hoped someone had implemented the same type of traffic isolation.
Allan -
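One common approach, sketched here with placeholder names: give the standby a dedicated TNS alias that resolves to the IP bound to the separate circuit, and point the archive destination at that alias. The routing itself is then an OS/network matter (e.g. static routes on AIX), not an Oracle one.

```sql
-- tnsnames.ora on the primary (host name is a placeholder that resolves
-- to the interface on the dedicated circuit):
--   STBY_DG = (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=stby-dg)(PORT=1521))
--              (CONNECT_DATA=(SERVICE_NAME=stbydb)))
ALTER SYSTEM SET log_archive_dest_2 = 'SERVICE=STBY_DG LGWR ASYNC' SCOPE=BOTH;
```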
Data Guard archive deleted from primary database before transferring to standby
Dear all
I have a physical standby database. And Primary database on RAC node 1 and node 2.
Accidentally, an archive log was deleted from the primary DB before it was shipped to the standby DB.
I see this error, which shows the gap:
FAL[client]: Failed to request gap sequence
GAP - thread 1 sequence 1_7090 1_7090
Well, I feel sorry for you after all these guys gave you such a hard time :^)
You should be using at least an RMAN Archive Log deletion policy to manage your archive log files.
See "Deletion Policy for Archived Redo Log Files In Flash Recovery Areas" at http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/rman.htm#SBYDB00750 and http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/rman.htm#i1031870 for 10gR2. 10g does require that you use a Flash Recovery Area (FRA, later renamed to Fast Recovery Area). Read the paper at http://www.oracle.com/technology/deploy/availability/pdf/RMAN_DataGuard_10g_wp.pdf to sort out not having to use MANDATORY (which we do not recommend).
Now, if you are at 11g (R1 or R2) you don't even have to be using the FRA nor mandatory to have it work. See http://www.oracle.com/pls/db111/to_URL?remark=ranked&urlname=http:%2F%2Fdownload.oracle.com%2Fdocs%2Fcd%2FB28359_01%2Fbackup.111%2Fb28270%2Frcmconfb.htm%23BRADV89439.
If you do not use RMAN to back up the archive logs, at least this deletion policy will prevent this from happening again, since you will no longer be deleting archive logs manually.
Now, it may be too late for this, I am behind in my mail, but you could get the standby past this gap by using an incremental backup of the primary from the last SCN that was applied at the standby. See the manual at http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/scenarios.htm#CIHIAADC for more information.
Larry -
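A minimal sketch of the SCN-based roll-forward Larry mentions (the SCN value and backup path are illustrative; syntax per 10gR2 RMAN):

```sql
-- 1. On the standby, find the SCN to start from:
SELECT current_scn FROM v$database;
-- 2. On the primary, in RMAN:
--      BACKUP INCREMENTAL FROM SCN 123456 DATABASE FORMAT '/backup/stby_%U';
-- 3. Move the pieces to the standby, CATALOG them in RMAN,
--    then RECOVER DATABASE NOREDO.
```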
Are there any performance implications for archive log in DWH?
Is it recommended to enable archive logging in a DWH?
Generally speaking, it is still recommended to enable archive logging in a DWH environment. With NOARCHIVELOG your only viable backup strategy is a cold backup, which requires a database shutdown.
Turning on archive logging does have some performance impact on massive loading. However, you can use NOLOGGING with direct-path/parallel loading to counter that; remember to take an immediate backup before/after loading.
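A sketch of the NOLOGGING direct-path load mentioned above (table names are illustrative). Note the loaded blocks are not protected by redo, so a backup right after the load is essential:

```sql
ALTER TABLE sales_fact NOLOGGING;
-- direct-path insert: minimal redo even in ARCHIVELOG mode
INSERT /*+ APPEND */ INTO sales_fact
SELECT * FROM staging_sales;
COMMIT;
```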
Data Guard Archive Latency with Physical Standby
I'm trying to get a latency number that tells me how long it took ARCH to move an archive file from the Primary instance to the physical standby instance and "apply it". I see lots of numbers but I can't make heads or tails of them. Does a metric such as this exist in a log file, a V$ view, or can it possibly be calculated or extrapolated?
Thanks
Steve ([email protected])
This is how I calculated the same thing in a test environment.
1. stop standby listener
2. shutdown standby
3. perform a data load (import)
4. start the standby listener. This will cause arch files to be shipped to the standby. Monitor the files being shipped and time it
5. start the standby db in recovery mode
6. repeatedly execute the following query while timing the log apply services:
SELECT SEQUENCE#,APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#; -
Total combined size of Archived logs
DB version : 11.2
Platform : AIX
How can I determine the total size of archive logs for a particular DB?
Googling and OTN search didn't provide much details
Didn't get the solution from the following thread either as it digressed from the subject
Re: archive log size
The redo log size for our DB is 100 mb.
SQL> select count(*) from v$archived_log where status = 'A' and name is not null;
COUNT(*)
22
So, I can multiply 22 * 100 = 2200 MB, but there have been some manual switches and those files will be smaller. This is why I am looking for an accurate way to determine the total size of the archive logs.
Hello;
V$ARCHIVED_LOG contains BLOCKS ( Size of the archived log (in blocks) ) and BLOCK_SIZE ( which is the same as the logical block size of the online log from which the archived log was copied )
So with a little help in the query you should be able to get it.
Archivelog size each day
select
trunc(COMPLETION_TIME) TIME,
SUM(BLOCKS * BLOCK_SIZE)/1024/1024 SIZE_MB
from
V$ARCHIVED_LOG
group by
trunc (COMPLETION_TIME) order by 1;
Since COMPLETION_TIME is a DATE you can add another SUM to the query to get the exact total you want for the exact date range you want.
Archivelog size each hour
alter session set nls_date_format = 'YYYY-MM-DD HH24';
select
trunc(COMPLETION_TIME,'HH24') TIME,
SUM(BLOCKS * BLOCK_SIZE)/1024/1024 SIZE_MB
from
V$ARCHIVED_LOG
group by
trunc (COMPLETION_TIME,'HH24') order by 1;
Another example
SELECT To_char(completion_time,'YYYYMMDD') run_date,
Round(Sum(blocks * block_size + block_size) / 1024 / 1024 / 1024) redo_gb
FROM v$archived_log
GROUP BY To_char(completion_time,'YYYYMMDD')
ORDER BY 2
/
Best Regards
mseberg
Archived redo log size much less than online redo logs
Hi,
My database is around 27 GB and the redo logs are 50M each, but the archive logs are 11M or 13M, and the logs are switching every 5-10 min. Why?
Regards
Azer Imamaliyev
Azer_OCA11g wrote:
1) Almost all archive log sizes are 11 or 13M, but sometimes 30 or 37M.
2)
select to_char(completion_time, 'dd.mm.yyyy HH24:MI:SS')
from v$archived_log
order by completion_time desc;
10.02.2012 11:00:26
10.02.2012 10:50:23
10.02.2012 10:40:05
10.02.2012 10:29:34
10.02.2012 10:28:26
10.02.2012 10:18:07
10.02.2012 10:05:04
10.02.2012 09:55:03
10.02.2012 09:40:54
10.02.2012 09:28:06
10.02.2012 09:13:44
10.02.2012 09:00:17
10.02.2012 08:45:04
10.02.2012 08:25:04
10.02.2012 08:07:12
10.02.2012 07:50:06
10.02.2012 07:25:05
10.02.2012 07:04:50
10.02.2012 06:45:04
10.02.2012 06:20:04
10.02.2012 06:00:12
3) There aren't any serious changes at the DB level; these messages have appeared in the alert log almost since the DB was created.
Two simple thoughts:
1) Are you running with archive log compression - add the "compressed" column to the query above to see if the archived log files are compressed
2) The difference may simply be a reflection of the number and sizes of the public and private redo threads you have enabled - when anticipating a log file switch Oracle leaves enough space to cater for threads that need to be flushed into the log file, and then doesn't necessarily have to use it.
Here's a query (if you can run as SYS) to show you your allocation of public and private threads
select
PTR_KCRF_PVT_STRAND ,
FIRST_BUF_KCRFA ,
LAST_BUF_KCRFA ,
TOTAL_BUFS_KCRFA ,
STRAND_SIZE_KCRFA ,
indx
from
x$kcrfstrand
;
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
Author: Oracle Core