Excess archive log generation
Hello to All,
A lot of archive logs have been generated by the production database for the last 15 days: nearly 30 archives per hour, where previously only one or two were generated per hour. The log file size is 300M and the database has 3 log groups. Now I want to know which application or which user is generating this many archives. How can I find the reason for this much archive generation?
Thanks...
Can you tell us which Oracle version you are using?
In the meantime, you can query v$sess_io to find the sessions that are generating too much redo.
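A minimal sketch of that v$sess_io check (column names as in the standard v$session and v$sess_io views; run it a few times and compare the deltas):

```sql
-- Sessions ranked by block changes: large, fast-growing counts suggest heavy redo generators.
SELECT s.sid, s.serial#, s.username, s.program, i.block_changes
  FROM v$session s, v$sess_io i
 WHERE s.sid = i.sid
 ORDER BY i.block_changes DESC;
```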
Jaffar
Similar Messages
-
How to control too much of archived log generation
Hi ,
This is one of the interview questions I was asked. I have answered it; I would just like to know the correct answer.
How can we control excessive archived log generation?
Thanks,
796843 wrote:
Hi,
This is one of the interview questions I was asked. I have answered it; I would just like to know the correct answer.
How can we control excessive archived log generation?
Thanks,
Do not do any DML: redo is generated almost entirely by DML, so the real levers are reducing or batching the DML, and using direct-path NOLOGGING loads where recoverability allows. -
Archive Log Generation in EBusiness Suite
Hi,
I am responsible for an E-Business Suite 11.5.10.2 production server on AIX. For the past 1.5 years there was heavy archive log generation (200 MB every 10 minutes); since last week it has dropped to 200 MB every 4.5 hours.
I am unable to understand this behavior. The number of users remains the same and usage is as usual.
Is there a way I can check what has gone wrong? I could not see any errors in the alert.log either.
Please suggest what can be done.
(I have raised this issue in Metalink Forum also and awaiting a response)
Thanks
Archive log generation is directly related to the level of activity on the database, so it is almost certain that the level of activity has dropped significantly.
If possible, can you run this query and post the result:
select trunc(FIRST_TIME), count(SEQUENCE#) from v$archived_log
where to_char(trunc(FIRST_TIME),'MONYYYY') = 'SEP2007'
group by trunc(first_time)
order by 1
--Adams -
How to reduce excessive redo log generation in Oracle 10G
Hi All,
Please let me know whether there is any way to reduce excessive redo log generation in Oracle DB 10.2.0.3.
Previously only about 15 archive log files were generated per day, but nowadays it has increased to 40-45.
below is the size of redo log file members:
BYTES/1024/1024 MEMBER
200 /u05/applprod/prdnlog/redolog1a.dbf
200 /u06/applprod/prdnlog/redolog1b.dbf
200 /u05/applprod/prdnlog/redolog2a.dbf
200 /u06/applprod/prdnlog/redolog2b.dbf
200 /u05/applprod/prdnlog/redolog3a.dbf
200 /u06/applprod/prdnlog/redolog3b.dbf
Here is some content of the alert log for your reference, showing how frequently log switches are occurring:
Beginning log switch checkpoint up to RBA [0x441f.2.10], SCN: 4871839752
Thread 1 advanced to log sequence 17439
Current log# 3 seq# 17439 mem# 0: /u05/applprod/prdnlog/redolog3a.dbf
Current log# 3 seq# 17439 mem# 1: /u06/applprod/prdnlog/redolog3b.dbf
Tue Jul 13 14:46:17 2010
Completed checkpoint up to RBA [0x441f.2.10], SCN: 4871839752
Tue Jul 13 14:46:38 2010
Beginning log switch checkpoint up to RBA [0x4420.2.10], SCN: 4871846489
Thread 1 advanced to log sequence 17440
Current log# 1 seq# 17440 mem# 0: /u05/applprod/prdnlog/redolog1a.dbf
Current log# 1 seq# 17440 mem# 1: /u06/applprod/prdnlog/redolog1b.dbf
Tue Jul 13 14:46:52 2010
Completed checkpoint up to RBA [0x4420.2.10], SCN: 4871846489
Tue Jul 13 14:53:33 2010
Beginning log switch checkpoint up to RBA [0x4421.2.10], SCN: 4871897354
Thread 1 advanced to log sequence 17441
Current log# 2 seq# 17441 mem# 0: /u05/applprod/prdnlog/redolog2a.dbf
Current log# 2 seq# 17441 mem# 1: /u06/applprod/prdnlog/redolog2b.dbf
Tue Jul 13 14:53:37 2010
Completed checkpoint up to RBA [0x4421.2.10], SCN: 4871897354
Tue Jul 13 14:55:37 2010
Incremental checkpoint up to RBA [0x4421.4b45c.0], current log tail at RBA [0x4421.4b5c5.0]
Tue Jul 13 15:15:37 2010
Incremental checkpoint up to RBA [0x4421.4d0c1.0], current log tail at RBA [0x4421.4d377.0]
Tue Jul 13 15:35:38 2010
Incremental checkpoint up to RBA [0x4421.545e2.0], current log tail at RBA [0x4421.54ad9.0]
Tue Jul 13 15:55:39 2010
Incremental checkpoint up to RBA [0x4421.55eda.0], current log tail at RBA [0x4421.56aa5.0]
Tue Jul 13 16:15:41 2010
Incremental checkpoint up to RBA [0x4421.58bc6.0], current log tail at RBA [0x4421.596de.0]
Tue Jul 13 16:35:41 2010
Incremental checkpoint up to RBA [0x4421.5a7ae.0], current log tail at RBA [0x4421.5aae2.0]
Tue Jul 13 16:42:28 2010
Beginning log switch checkpoint up to RBA [0x4422.2.10], SCN: 4872672366
Thread 1 advanced to log sequence 17442
Current log# 3 seq# 17442 mem# 0: /u05/applprod/prdnlog/redolog3a.dbf
Current log# 3 seq# 17442 mem# 1: /u06/applprod/prdnlog/redolog3b.dbf
Thanks in advance.
Hi,
Use the script below to find out at which hours the most archives are generated, then check what runs during those hours, e.g. whether MV refreshes or bulk deletes (a "delete from table" of many rows) are going on:
select
to_char(first_time,'DD-MM-YY') day,
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'00',1,0)),'999') "00",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'01',1,0)),'999') "01",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'02',1,0)),'999') "02",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'03',1,0)),'999') "03",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'04',1,0)),'999') "04",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'05',1,0)),'999') "05",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'06',1,0)),'999') "06",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'07',1,0)),'999') "07",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'08',1,0)),'999') "08",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'09',1,0)),'999') "09",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'10',1,0)),'999') "10",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'11',1,0)),'999') "11",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'12',1,0)),'999') "12",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'13',1,0)),'999') "13",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'14',1,0)),'999') "14",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'15',1,0)),'999') "15",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'16',1,0)),'999') "16",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'17',1,0)),'999') "17",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'18',1,0)),'999') "18",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'19',1,0)),'999') "19",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'20',1,0)),'999') "20",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'21',1,0)),'999') "21",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'22',1,0)),'999') "22",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'23',1,0)),'999') "23",
COUNT(*) TOT
from v$log_history
group by to_char(first_time,'DD-MM-YY')
order by day;
thanks,
baskar.l -
Archive log generation in standby
Dear all,
DB: 11.1.0.7
We are configuring a physical standby for our production system. We have the same file system and configuration on both servers: the primary archive destination is d:/arch and the standby server also has d:/arch. Archive logs are properly shipped to the standby and the data is intact. The problem: archive log generation is fine in the primary archive destination, but no archive logs are being generated in the standby archive location, even though archive logs are being applied to the standby database.
Is this normal? Will archive logs not be generated on the standby?
Please guide
Kai
No archive logs should be generated on the standby side; why do you think they should be? If you are talking about the parameter standby_archive_dest: when you set it, Oracle copies each applied log into that directory rather than creating new ones.
In 11g Oracle recommends not using this parameter. Instead, Oracle recommends setting log_archive_dest_1 and log_archive_dest_3 similar to this:
ALTER SYSTEM SET log_archive_dest_1 = 'location="USE_DB_RECOVERY_FILE_DEST", valid_for=(ALL_LOGFILES,ALL_ROLES)'
ALTER SYSTEM SET log_archive_dest_3 = 'SERVICE=<primary_tns> LGWR ASYNC db_unique_name=<prim_db_unique_name> valid_for=(online_logfile,primary_role)'
/ -
Urgent: Excessive archive logs 10g 10.2.0.4
Hi,
I am experiencing excessive archive logging.
This has been happening for the past 3 days. Usually archive logs were taking about 250 MB.
Yesterday they jumped to 129 GB, and today 30 GB, until the hard drive ran out of space.
Now I obviously have
"Archiver is unable to archive a redo log because the output device is full or unavailable" message.
The database is servicing a low-transaction application and is awaiting deployment into production.
Not sure where to start.
Any advice would be appreciated.
I can provide any other information necessary to troubleshoot.
Thanks in advance.
Run this; it should point out the sessions with transactions currently in flight:
SELECT s.sid, s.serial#, s.username, s.program,
t.used_ublk, t.used_urec, vsql.sql_text
FROM v$session s, v$transaction t, v$sqlarea vsql
WHERE s.taddr = t.addr
and s.sql_id = vsql.sql_id (+)
ORDER BY 5 desc, 6 desc, 1, 2, 3, 4;
This assumes you can log in, perhaps after moving some of your archives elsewhere to free up space and resume DB operations.
Also, reach out to developers/users (if you can) and see if anyone is testing something or loading tables, etc.
Edited by: DBA_Mike on Apr 14, 2009 10:10 AM -
Query help for archive log generation details
Hi All,
Do you have a query to know the archive log generation details for today.
Best regards,
Rafi.
Dear user13311731,
You may use the query below; I hope you will find it helpful:
SELECT * FROM (
SELECT * FROM (
SELECT TO_CHAR(FIRST_TIME, 'DD/MM') AS "DAY"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '00', 1, 0)), '999') "00:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '01', 1, 0)), '999') "01:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '02', 1, 0)), '999') "02:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '03', 1, 0)), '999') "03:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '04', 1, 0)), '999') "04:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '05', 1, 0)), '999') "05:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '06', 1, 0)), '999') "06:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '07', 1, 0)), '999') "07:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '08', 1, 0)), '999') "08:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '09', 1, 0)), '999') "09:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '10', 1, 0)), '999') "10:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '11', 1, 0)), '999') "11:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '12', 1, 0)), '999') "12:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '13', 1, 0)), '999') "13:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '14', 1, 0)), '999') "14:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '15', 1, 0)), '999') "15:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '16', 1, 0)), '999') "16:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '17', 1, 0)), '999') "17:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '18', 1, 0)), '999') "18:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '19', 1, 0)), '999') "19:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '20', 1, 0)), '999') "20:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '21', 1, 0)), '999') "21:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '22', 1, 0)), '999') "22:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '23', 1, 0)), '999') "23:00"
FROM V$LOG_HISTORY
WHERE extract(year FROM FIRST_TIME) = extract(year FROM sysdate)
GROUP BY TO_CHAR(FIRST_TIME, 'DD/MM')
) ORDER BY TO_DATE(extract(year FROM sysdate) || DAY, 'YYYY DD/MM') DESC
) WHERE ROWNUM < 8;
Hope that helps.
Ogan -
Growth of Archive log generation
Hi,
In my case the rate of archive log generation has increased, so I would like to know a query
to find the rate of archive log generation per hour.
Regards
Syed
Hi Syed;
What is your DB version? Also your EBS and OS versions?
I use the query below for this kind of issue:
select to_char(first_time,'MM-DD') day, to_char(sum(decode(to_char(first_time,'hh24'),'00',1,0)),'99') "00",
to_char(sum(decode(to_char(first_time,'hh24'),'01',1,0)),'99') "01",
to_char(sum(decode(to_char(first_time,'hh24'),'02',1,0)),'99') "02",
to_char(sum(decode(to_char(first_time,'hh24'),'03',1,0)),'99') "03",
to_char(sum(decode(to_char(first_time,'hh24'),'04',1,0)),'99') "04",
to_char(sum(decode(to_char(first_time,'hh24'),'05',1,0)),'99') "05",
to_char(sum(decode(to_char(first_time,'hh24'),'06',1,0)),'99') "06",
to_char(sum(decode(to_char(first_time,'hh24'),'07',1,0)),'99') "07",
to_char(sum(decode(to_char(first_time,'hh24'),'08',1,0)),'99') "08",
to_char(sum(decode(to_char(first_time,'hh24'),'09',1,0)),'99') "09",
to_char(sum(decode(to_char(first_time,'hh24'),'10',1,0)),'99') "10",
to_char(sum(decode(to_char(first_time,'hh24'),'11',1,0)),'99') "11",
to_char(sum(decode(to_char(first_time,'hh24'),'12',1,0)),'99') "12",
to_char(sum(decode(to_char(first_time,'hh24'),'13',1,0)),'99') "13",
to_char(sum(decode(to_char(first_time,'hh24'),'14',1,0)),'99') "14",
to_char(sum(decode(to_char(first_time,'hh24'),'15',1,0)),'99') "15",
to_char(sum(decode(to_char(first_time,'hh24'),'16',1,0)),'99') "16",
to_char(sum(decode(to_char(first_time,'hh24'),'17',1,0)),'99') "17",
to_char(sum(decode(to_char(first_time,'hh24'),'18',1,0)),'99') "18",
to_char(sum(decode(to_char(first_time,'hh24'),'19',1,0)),'99') "19",
to_char(sum(decode(to_char(first_time,'hh24'),'20',1,0)),'99') "20",
to_char(sum(decode(to_char(first_time,'hh24'),'21',1,0)),'99') "21",
to_char(sum(decode(to_char(first_time,'hh24'),'22',1,0)),'99') "22",
to_char(sum(decode(to_char(first_time,'hh24'),'23',1,0)),'99') "23"
from v$log_history group by to_char(first_time,'MM-DD')
Regards
Helios -
Hi,
I am working on an Oracle 10g RAC database on HP-UX, in the standby environment.
Instance names:
R1
R2
R3
For the above three instances, I need to find the hourly archive log generation at the standby site, e.g.:
Hours 1 2 3
R1
R2
R3
Total
Share the query...
Set the parameter ARCHIVE_LAG_TARGET to the required value; it is a dynamic parameter and is specified in seconds.
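The hourly-per-instance counts the poster asked for can be sketched like this (an assumption-laden sketch: it presumes gv$archived_log is queryable, e.g. on the primary or a mounted standby, and that THREAD# 1-3 map to instances R1-R3):

```sql
-- Archived logs per instance (thread) per hour over the last 24 hours.
SELECT thread#,
       TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hr,
       COUNT(*) AS logs
  FROM gv$archived_log
 WHERE first_time > SYSDATE - 1
 GROUP BY thread#, TO_CHAR(first_time, 'YYYY-MM-DD HH24')
 ORDER BY 2, 1;
```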
-
Hi,
Database Version: Oracle 11.1.0.6
Platform: Enterprise Linux 5
Can someone please tell me the troubleshooting steps for a situation where there is a heavy inflow of archive log generation, say around 3 files of around 50 MB every minute, eating away the space on disk.
1) How to find out what activity is causing such heavy archive log generation? I can run the query below to find the currently running SQL statements with their status:
select a.username, b.sql_text,a.status from v$session a inner join v$sqlarea b on a.sql_id=b.sql_id;
But is there any other query, or a better way, to find the current DB activity in this situation?
I tried using DBMS_LOGMNR (Log Miner) but failed because (i) utl_file_dir is not set in the init parameter file (so mining the archive log file on production is presently ruled out, as I cannot take an outage);
(ii) "alter database add supplemental log data (all) columns" takes forever because of the locks (so I cannot mine the generated archive log file on another machine, due to the DBID mismatch).
2) How to deal with this situation? I read here on the OTN discussion board that increasing the number of redo log groups or redo log members helps when there is a lot of DML activity on the application side, but I did not understand how that would help control the heavy archive log generation.
Edited by: user10313587 on Feb 11, 2011 8:43 AM
Edited by: user10313587 on Feb 11, 2011 8:44 AM
Hi,
Other than logminer, which will tell you exactly what the redo is by definition, you can run something like the following:
select value/((sysdate-logon_time) * 1440) redo_per_minute,s.sid,serial#,logon_time,value
from v$statname sn,
v$sesstat ss,
v$session s
where s.sid = ss.sid
and sn.statistic# = ss.statistic#
and name = 'redo size'
and value > 0
Then trace the "high" sessions above and it should jump out at you. If not, then run LogMiner with something like:
set serveroutput on size unlimited
begin
dbms_logmnr.add_logfile(logfilename => '&log_file_name');
dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog + dbms_logmnr.no_rowid_in_stmt);
FOR cur IN (SELECT *
FROM v$logmnr_contents) loop
dbms_output.put_line(cur.sql_redo);
end loop;
dbms_logmnr.end_logmnr;
end;
/
Note you don't need utl_file_dir for LogMiner if you use the online catalog.
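For the "trace the high sessions" step, a hedged sketch using DBMS_MONITOR (the SID and serial# would come from the redo-per-minute query above; 123/456 are placeholders):

```sql
-- Enable SQL trace for one suspect session (sid=123, serial#=456 are placeholders).
BEGIN
  DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123,
                                    serial_num => 456,
                                    waits      => TRUE,
                                    binds      => FALSE);
END;
/
-- Let it run for a while, then turn tracing off and read the trace file.
BEGIN
  DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 456);
END;
/
```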
HTH,
Steve -
Archive log generation in every 7 minute interval
One HP-UX 11.11 host runs two databases, uiivc and uiivc1. There is heavy archive log generation every 7 minutes in both databases. The redo log size is 100 MB, configured with 2 members in each of three groups for both databases. The database version is 9.2.0.8. Can anyone help me find out how to monitor the redo log contents that are filling up so frequently, causing so much archived redo to be generated (filling up the mount point)?
Current settings are
fast_start_mttr_target integer 300
log_buffer integer 5242880
Regards
Manoj
You can try to find the sessions that are generating a lot of redo; check MetaLink doc ID 167492.1:
1) Query V$SESS_IO. This view contains the column BLOCK_CHANGES, which indicates
how many blocks have been changed by the session. High values indicate a
session generating lots of redo.
The query you can use is:
SELECT s.sid, s.serial#, s.username, s.program,
       i.block_changes
  FROM v$session s, v$sess_io i
 WHERE s.sid = i.sid
 ORDER BY 5 desc, 1, 2, 3, 4;
Run the query multiple times and examine the delta between each occurrence
of BLOCK_CHANGES. Large deltas indicate high redo generation by the session.
2) Query V$TRANSACTION. This view contains information about the amount of
undo blocks and undo records accessed by the transaction (as found in the
USED_UBLK and USED_UREC columns).
The query you can use is:
SELECT s.sid, s.serial#, s.username, s.program,
       t.used_ublk, t.used_urec
  FROM v$session s, v$transaction t
 WHERE s.taddr = t.addr
 ORDER BY 5 desc, 6 desc, 1, 2, 3, 4;
Run the query multiple times and examine the delta between each occurrence
of USED_UBLK and USED_UREC. Large deltas indicate high redo generation by
the session. -
Hi DBAS,
We are using Oracle 10g (10.2.0.4). Sometimes excessive archives are generated in my database. How can we identify which DML operations were executed at a particular time in the database? Is there any query to find the DML?
Thanks
Tirupathi
tmadugula wrote:
Hi DBAS,
We are using Oracle 10g (10.2.0.4). Sometimes excessive archives are generated in my database. How can we identify which DML operations were executed at a particular time in the database? Is there any query to find the DML?
Thanks
Tirupathi
Firstly, it depends on your system activity (transactions); investigate it using log mining. Additionally, your online redo logs may be very small. What size are they? Also, did you set ARCHIVE_LAG_TARGET? If yes, what is its value? -
Hai all,
In my production environment, archive logs are sometimes generated at 5-6 logs a minute, while very few users are connected to the database.
-rw-r----- 1 oraprod dba 10484224 Jan 12 14:10 prod_arch_4810.arc
-rw-r----- 1 oraprod dba 10484224 Jan 12 14:10 prod_arch_4811.arc
-rw-r----- 1 oraprod dba 10483712 Jan 12 14:10 prod_arch_4812.arc
-rw-r----- 1 oraprod dba 10484224 Jan 12 14:10 prod_arch_4813.arc
-rw-r----- 1 oraprod dba 10484224 Jan 12 14:10 prod_arch_4814.arc
-rw-r----- 1 oraprod dba 10484224 Jan 12 14:10 prod_arch_4815.arc
Why is this happening?
Any comments or ideas to resolve this?
Yusuf
Whenever you create a thread, it is always advisable to specify your current OS and DB versions.
You could be generating this redo through scheduled tasks or through current user activity; a small number of concurrent users doesn't mean they aren't generating a lot of transactions. Check v$undostat, v$rollstat, v$transaction, and v$session to monitor user and transaction activity.
10M is, IMO, a very small redo log size for the transaction requirements of most databases. Your database is currently generating redo at a rate of about 50M/min. With 100M redo log files you would generate one archive log roughly every two minutes, instead of the current 5 archive logs per minute.
Since your database is highly transactional, make sure you have enough free space to store the generated archive log files: you will be generating about 3G/hr.
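Moving to larger logs can be sketched like this (the group numbers, paths and 100M size are illustrative, not taken from this system, and a group can only be dropped once it is INACTIVE and archived):

```sql
-- Add new, larger groups alongside the existing 10M ones.
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/prod/redo04.log') SIZE 100M;
ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/oradata/prod/redo05.log') SIZE 100M;
ALTER DATABASE ADD LOGFILE GROUP 6 ('/u01/oradata/prod/redo06.log') SIZE 100M;
-- Switch until an old group goes INACTIVE, verify, then drop it.
ALTER SYSTEM SWITCH LOGFILE;
SELECT group#, status FROM v$log;
ALTER DATABASE DROP LOGFILE GROUP 1;  -- only when its STATUS is 'INACTIVE'
```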
~ Madrid -
Hello All,
we are on 11.1.0.6 on AIX
On the database side, we are finding "excessive redo log generation" at one specific time only (between 11 and 12 midnight).
How can we find what exactly is going on at that time and what is causing so much redo generation?
Thanks
Thanks for the update.
Yes, I am licensed.
I have taken an AWR report; can you please suggest which part of the report gives an idea of the root cause?
@faran,
Nothing is scheduled at that time (backup scripts etc.).
By "excessive" I mean: we have 4 groups, each with 2 members of 200 MB each.
Yesterday, between the times mentioned, generation was the highest of the whole day: 38 archives were generated in that particular window alone.
Edited by: DOA on 7 Sep, 2012 1:48 AM -
Duplexing the Archive Log - is there a potential performance hit
Good Afternoon Oracle People -
I apologize if this question is silly or out of place.
Basically, we are looking at options for implementing a (cheap) DR solution for our Oracle Database.
Bottom line objective is to have a second copy of our production system (not running, offline) with a usable archive log to recover from at a remote site with a similar set of disk technologies etc.
The sites are linked by a 100 Mbps link, so any access to the secondary site is relatively fast.
What I was thinking of doing is creating an iSCSI target on the destination SAN and adding this as a disk into the production database. Then, I was going to go in and define a duplex archive log destination to this iSCSI target.
My fear is that if the archiver waits for the write to that destination, database access will slow down. Is this a valid concern? Or does Oracle treat the duplex as a replicated (slower) copy?
Again, sorry if this question is stupid; I have tried Google but couldn't find anything, and it isn't all that clear from the documentation.
Kind Regards,
Aleksei
Or does Oracle treat the Duplex as a replicated (slower) copy?
Oracle treats it the way you tell Oracle to treat it: please take a look at the LOG_ARCHIVE_DEST_n attributes such as MANDATORY and OPTIONAL, MAX_FAILURE, etc.
There are two major things to consider:
->Is the throughput of the destination enough to handle max archive log generation?
You need to take a look at the size of archive logs and how frequently they are
generated during peak load.
->What happens when the destination is not available?
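For the second point, a hedged sketch of a non-blocking duplex destination (the path is illustrative; OPTIONAL plus REOPEN keeps the archiver from stalling the database when the iSCSI target is unreachable, at the cost of possibly missing archives at the DR site):

```sql
-- Secondary archive destination that is allowed to fail without halting archiving.
-- SCOPE=BOTH assumes an spfile is in use.
ALTER SYSTEM SET log_archive_dest_2 =
  'LOCATION=/mnt/dr_iscsi/arch OPTIONAL REOPEN=300' SCOPE=BOTH;
ALTER SYSTEM SET log_archive_dest_state_2 = 'ENABLE' SCOPE=BOTH;
```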
HTH,
Iordan Iotzov