Use of redo log
If the database is in NOARCHIVELOG mode, then what is the use of the redo log?
In that case, is any redo generated when an insert occurs?
SQL*Plus: Release 9.2.0.6.0 - Production on Thu Mar 13 18:57:20 2008
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Connected to:
Oracle9i Enterprise Edition Release 9.2.0.6.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.6.0 - Production
SQL> set autotrace on
SQL> create table redo(redo varchar2(10));
Table created.
SQL> insert into redo values ('&redo');
Enter value for redo: Maran
old 1: insert into redo values ('&redo')
new 1: insert into redo values ('Maran')
1 row created.
Execution Plan
0 INSERT STATEMENT Optimizer=CHOOSE
Statistics
2 recursive calls
7 db block gets
2 consistent gets
0 physical reads
588 redo size
618 bytes sent via SQL*Net to client
533 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
2 sorts (memory)
0 sorts (disk)
1 rows processed
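Note the 588 bytes of redo above: redo is generated even in NOARCHIVELOG mode; the archiving mode only controls whether filled logs are saved before reuse. Another way to see the same thing, a minimal sketch (assumes SELECT privilege on v$mystat and v$statname):
-- Redo generated so far by the current session, in bytes:
select n.name, s.value
from v$mystat s join v$statname n on n.statistic# = s.statistic#
where n.name = 'redo size';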
Maran Viswarayar
Similar Messages
-
Can we use online redo log to recover lost datafile in NOARCHIVELOG mode?
I am working on the OCA exam and confused about these 2 sample questions (similar questions with totally different answers).
Please give me a hint about the difference between these 2 questions.
** If the database is in NOARCHIVELOG mode, and one of the datafiles for tablespace USERS is lost, what kind of recovery is possible? (answer: B)
A. All transactions except those in the USERS tablespace are recoverable up to the loss of the datafile.
B. Recovery is possible only up to the point in time of the last full database backup.
C. The USERS tablespace is recoverable from the online redo log file as long as none of the redo log files have been reused since the last backup.
D. Tablespace point in time recovery is available as long as a full backup of the USERS tablespace exists.
** The database of your company is running in NOARCHIVELOG mode. You perform a complete backup of the database every night. On Monday morning, you lose the USER1.dbf file belonging to the USERS tablespace. Your database has four redo log groups, and there have been two log switches since Sunday night's backup.
Which is true? (answer: B)
A. The database cannot be recovered.
B. The database can be recovered up to the last commit.
C. The database can be recovered only up to the last completed backup.
D. The database can be recovered by performing an incomplete recovery.
E. The database can be recovered by restoring only the USER1.dbf datafile from the most recent backup.
I think Gaurav is correct: you can recover to the last commit even in NOARCHIVELOG mode, as long as none of the changes in the redo logs have been overwritten. So the answer should be B for question 2.
Here is my test:
SQL> select log_mode from v$database;
LOG_MODE
NOARCHIVELOG
SQL> select tablespace_name, file_name from dba_data_files;
TABLESPACE_NAME
FILE_NAME
USERS
C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORA101RC\USERS01.DBF
SYSAUX
C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORA101RC\SYSAUX01.DBF
UNDOTBS1
C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORA101RC\UNDOTBS01.DBF
SYSTEM
C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORA101RC\SYSTEM01.DBF
DATA
C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORA101RC\DATA01.DBF
SQL> create table names
2 ( name varchar(16))
3 tablespace users;
Table created.
So the segment NAMES is created in the datafile USERS01.DBF.
At this point I shut down and mount the DB, then:
RMAN> backup database;
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:29
Finished backup at 06-OCT-07
SQL> alter database open;
SQL> insert into names values ('pippo');
1 row created.
SQL> commit;
Commit complete.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
At this point I delete datafile users01 and restart:
SQL> startup
ORACLE instance started.
Total System Global Area 167772160 bytes
Fixed Size 1247900 bytes
Variable Size 67110244 bytes
Database Buffers 96468992 bytes
Redo Buffers 2945024 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORA101RC\USERS01.DBF'
Restoring the backup taken before inserting the value 'pippo' into table NAMES:
RMAN> restore database;
Starting restore at 06-OCT-07
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORA101RC\SYSTEM01.DBF
restoring datafile 00002 to C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORA101RC\UNDOTBS01.DBF
restoring datafile 00003 to C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORA101RC\SYSAUX01.DBF
restoring datafile 00004 to C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORA101RC\USERS01.DBF
restoring datafile 00005 to C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORA101RC\DATA01.DBF
channel ORA_DISK_1: reading from backup piece C:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\0AITR52K_1_1
channel ORA_DISK_1: restored backup piece 1
piece handle=C:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\0AITR52K_1_1 tag=TAG20071006T181337
channel ORA_DISK_1: restore complete, elapsed time: 00:02:07
Finished restore at 06-OCT-07
RMAN> recover database;
Starting recover at 06-OCT-07
using channel ORA_DISK_1
starting media recovery
media recovery complete, elapsed time: 00:00:05
Finished recover at 06-OCT-07
SQL> alter database open;
Database altered.
SQL> select * from names;
NAME
pippo
SQL>
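The recovery above succeeded because all the redo generated since the backup was still in the online logs. A quick sanity check before relying on this, a minimal sketch:
-- In NOARCHIVELOG mode, recovery to the last commit only works if no
-- redo log needed since the backup has been reused; compare the sequences:
select group#, sequence#, first_change#, status from v$log order by sequence#;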
enrico -
Is it possible to use Archive Redo log file
Hi friends,
My database is running in archive log mode. I took a cold backup on Sunday, and I back up the archived log files every evening.
On Wednesday my database crashed, which means I lost all the control files, redo log files, datafiles, etc.
I have archived log file backups up to Tuesday night, and the other files (control files, datafiles, etc.) from Sunday.
1) Is it possible to recover the database up to Tuesday? If yes, HOW do I use the archived log files?
(Since the SCNs of the restored control file and datafiles match, the RECOVER DATABASE command reports that media recovery is not required.)
We don't have the current control file; we lost it in the media crash.
Dear friend,
In this scenario you lost the control file.
1> If you have an old copy of the control file which reflects the current structure of the database, and all the archived log files, then you can recover the database with point-in-time recovery (using a backup controlfile).
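A minimal sketch of that approach (assumes Sunday's control file and datafiles have been restored; the CANCEL point is after the last available Tuesday-night archived log):
SQL> startup mount;
SQL> recover database using backup controlfile until cancel;
-- apply archived logs as prompted, then type CANCEL
SQL> alter database open resetlogs;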
suresh -
Select from .. as of - using archived redo logs - 10g
Hi,
I was under the impression I could issue a "select ... as of" statement back in time if I have the archived redo logs.
I've been searching for a while and can't find an answer.
My undo_management=AUTO, the database is 10.2.0.1, and the retention is the default of 900 seconds as I've never changed it.
I want to query a table as of 24 hours ago, so I have all the archived redo logs from the last 48 hours in the correct directory.
When I issue the following query:
select * from supplier_codes AS OF TIMESTAMP
TO_TIMESTAMP('2009-08-11 10:01:00', 'YYYY-MM-DD HH24:MI:SS')
I get a 'snapshot too old' ORA-01555 error. I guess that is because my retention is only 900 seconds, but I thought the database would query the archived redo logs, or have I got that totally wrong?!
My undo tablespace is set to AUTOEXTEND ON and MAXSIZE UNLIMITED, so there should be no space issues.
Any help would be greatly appreciated!
Thanks
Robert
If you want to go back 24 hours, you need the undo for those changes: Flashback Query reads undo data, not the archived redo logs.
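A minimal sketch of widening that window (the 86400-second value is just an example; the undo tablespace must be able to hold that much undo):
-- Flashback Query is served from undo, so raise the retention target:
alter system set undo_retention = 86400;  -- 24 hours
-- What retention the system has actually been auto-tuning to, in seconds:
select max(tuned_undoretention) from v$undostat;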
See e.g. the app dev guide - fundamentals, chapter on Flashback features: [doc search|http://www.oracle.com/pls/db102/ranked?word=flashback&remark=federated_search]. -
Number of bytes used by redo logs
What SQL command will tell me the number of bytes used by the redo logs in 9i?
Help yourself
select * from dict_columns where table_name like '%LOG%' and column_name like '%BYTES%'
Note there is also a DICT view, with a list of all dictionary views.
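For example, that dictionary search leads to V$LOG; a minimal sketch:
-- Size in bytes of each online redo log group:
select group#, thread#, bytes, members, status from v$log;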
And then of course there are the fine manuals, you obviously don't want to read.
Sybrand Bakker
Senior Oracle DBA -
Hi....
I have set up Data Guard and everything is fine; archived logs are being transferred to the standby.
Also, during configuration, I created standby redo log groups 4, 5 and 6 and copied them to the standby.
But with real-time apply, the standby is not using standby redo log groups 4, 5, 6; when I query v$log it shows only groups 1, 2, 3.
It should be using the standby redo logs for maximum availability.
Please help....
Thanks in advance.
There was a similar question here just a few days ago:
Data Guard - redo log files -
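One detail worth checking: standby redo logs never show up in v$log; a minimal sketch of where to look instead:
-- On the standby; ACTIVE status while redo is arriving means they are in use:
select group#, thread#, sequence#, status from v$standby_log;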
Online Redo logs instead of Standby Redo logs
RDBMS Version: 11.2.0.3/Platform : RHEL 6.3
To migrate a 3 TB database to a new DB server, we are going to use RMAN DUPLICATE.
Step1. Take a full backup of the DB + standby control file at the primary site and transfer the backup files to the standby site.
Step2. At the standby site, run RMAN DUPLICATE TARGET DATABASE FOR STANDBY.
After the above step, we don't want to create standby redo logs, because the newly restored DB on the standby server is going to be the new prod DB that the application will point to.
So, can I skip the standby redo log creation part and create online redo logs instead?
As mentioned earlier, our objective is not to create a proper Data Guard standby DB setup. We just want to clone our DB to another server using RMAN DUPLICATE.
Tom wrote:
RDBMS Version: 11.2.0.3/Platform : RHEL 6.3
To migrate a 3 TB database to a new DB server, we are going to use RMAN DUPLICATE.
Step1. Take a full backup of the DB + standby control file at the primary site and transfer the backup files to the standby site.
Step2. At the standby site, run RMAN DUPLICATE TARGET DATABASE FOR STANDBY.
After the above step, we don't want to create standby redo logs, because the newly restored DB on the standby server is going to be the new prod DB that the application will point to.
So, can I skip the standby redo log creation part and create online redo logs instead?
As mentioned earlier, our objective is not to create a proper Data Guard standby DB setup. We just want to clone our DB to another server using RMAN DUPLICATE.
Hi,
Take full backup of DB + Standby Control
We just want to clone our DB to another server using RMAN Duplicate
If you only want a clone of the production database, why are you taking a standby controlfile?
If you don't want to create a standby database, why use the DUPLICATE command with the FOR STANDBY option?
You can use the DUPLICATE command to clone the database without the FOR STANDBY option.
If instead you do want to create a standby database and later perform a switchover, then yes, you can use online redo logs in maximum performance mode, and you can create standby redo logs on both databases; those redo logs are used only while the database role is standby.
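A minimal sketch of a plain backup-based clone, per the advice above (the database name and backup path are hypothetical; assumes the auxiliary instance is started NOMOUNT with its own parameter file):
RMAN> connect auxiliary /
RMAN> duplicate database to newprod
2>   backup location '/backup/prod'
3>   nofilenamecheck;
With no FOR STANDBY clause, ordinary online redo logs are created for the new database.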
Regards
Mahir M. Quluzade -
Purpose of ONLINE REDO LOG FILES - Media or Instance recovery or BOTH ?
Hi
Currently studying this topic for the 1z0-031 exam and am a little confused.
My books (from an instructor-led class) say:
- redo logs are a means to provide redo for transactions in the event of a DATABASE recovery
- the redo log buffer gets flushed to the redo log files to provide a recovery mechanism in case of MEDIA FAILURE
Then it says
-Online redo log files are used in a situation such as an INSTANCE FAILURE to recover uncommitted data which has not yet been written to the data files
- online redo log files are used for RECOVERY only.
Am I misunderstanding? Or are redo log files for both MEDIA and INSTANCE recovery, or just INSTANCE?
confused....
Amanjit
Online redo log files are used, in a sense, for both media and instance recovery. If your database is in NOARCHIVELOG mode then you can only use the redo log files for instance recovery. But if you are running in ARCHIVELOG mode, the redo log files are archived and will allow you to recover from media failure.
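A quick way to see which case applies to your database, a minimal sketch:
-- NOARCHIVELOG: online redo covers instance (crash) recovery only.
-- ARCHIVELOG: archived redo additionally supports media recovery.
SQL> archive log list
SQL> select log_mode from v$database;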
-
Can I read/exploit the redo log files (outside recovery activities)?
My purpose would be to use the redo log files in case of trouble/complaint, in order to see who did what and when.
Are the redo log files encrypted, or could they be encrypted?
Thanks in advance
Can I read/exploit the redo log files (outside recovery activities)?
My purpose would be to use the redo log files in case of trouble/complaint, in order to see who did what and when.
Are the redo log files encrypted, or could they be encrypted?
You can read redo log files using LogMiner. You do not open the file as you would in Textpad or Notepad; LogMiner is a tool provided by Oracle. Whether the content is encrypted depends on your settings: if you use Advanced Security and encrypt the data as it is stored, it will be encrypted; otherwise it is in Oracle's normal binary format.
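A minimal LogMiner sketch (the log file path is hypothetical; requires EXECUTE on DBMS_LOGMNR):
begin
  dbms_logmnr.add_logfile(
    logfilename => '/u01/oradata/redo01.log',
    options     => dbms_logmnr.new);
  dbms_logmnr.start_logmnr(
    options => dbms_logmnr.dict_from_online_catalog);
end;
/
-- Who did what and when:
select username, timestamp, operation, sql_redo from v$logmnr_contents;
exec dbms_logmnr.end_logmnr;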
Regards. -
Improving redo log writer performance
I have a database on RAC (2 nodes)
Oracle 10g
Linux 3
2 servers PowerEdge 2850
I'm tuning my database with Spotlight. I already have this alert:
"The Average Redo Write Time alarm is activated when the time taken to write redo log entries exceeds a threshold."
The servers are not on RAID 5.
How can I improve redo log writer performance?
Unlike most other Oracle write I/Os, Oracle sessions must wait for redo log writes to complete before they can continue processing.
Therefore, redo log devices should be placed on fast devices.
Most modern disks should be able to process a redo log write in less than 20 milliseconds, and often much lower.
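A quick way to measure what you are actually getting, a minimal sketch:
-- Average latency of LGWR's disk writes and of commit waits, in ms:
select event, total_waits,
       round(time_waited_micro / nullif(total_waits, 0) / 1000, 2) as avg_ms
from v$system_event
where event in ('log file sync', 'log file parallel write');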
To reduce redo write time see Improving redo log writer performance.
See Also:
Tuning Contention - Redo Log Files
Tuning Disk I/O - Archive Writer
Some comments on the section that was pulled from Wikipedia. There is some confusion in the market, as there are different types of solid state disks with different pros and cons. The first major point is that the quote pulled from Wikipedia addresses issues with flash hard disk drives. Flash disks are one type of solid state disk; they would be a bad solution for redo acceleration (as I will attempt to describe below), though they could be useful for accelerating read-intensive applications. The type of solid state disk used for redo logs uses DDR RAM as the storage media. You may decide to discount my advice because I work with one of these SSD manufacturers, but I think if you do enough research you will see the point. There are many articles, and many more customers, who have used SSD to accelerate Oracle.
> Assuming that you are not CPU constrained, moving the online redo to high-speed solid-state disk can make a huge difference.
Do you honestly think this is practical and usable advice, Don? There is a HUGE price difference between SSD and normal hard disks. Never mind the following disadvantages. Quoting (http://en.wikipedia.org/wiki/Solid_state_disk):
# Price - As of early 2007, flash memory prices are still considerably higher per gigabyte than those of comparable conventional hard drives - around $10 per GB compared to about $0.25 for mechanical drives.
Comment: Prices for DDR RAM based systems are actually higher than this, with a typical list price around $1000 per GB. Your concern, however, is not price per capacity but price for performance. How many spindles will you have to spread your redo log across to get the performance that you need? How much impact are the redo logs having on your RAID cache effectiveness? Our system is obviously geared to the enterprise, where Oracle is supporting mission-critical databases and a huge return can be made on accelerating Oracle.
# Capacity - The capacity of SSDs tends to be significantly smaller than the capacity of HDDs.
Comment: This statement is true. Per hard disk drive versus per individual solid state disk system, you can typically get higher density of storage with a hard disk drive. However, if your goal is redo log acceleration, storage capacity is not your bottleneck. Write performance, however, can be. Keep in mind, just as with any storage media, you can deploy an array of solid state disks that provide terabytes of capacity (with either DDR or flash).
# Lower recoverability - After mechanical failure the data is completely lost as the cell is destroyed, while if a normal HDD suffers mechanical failure the data is often recoverable using expert help.
Comment: If you lose a hard drive for your redo log, the last thing you are likely to do is have a disk restoration company partially restore your data. You ought to be getting data from your mirror or RAID to rebuild the failed disk. Similarly, with solid state disks (flash or DDR) we recommend host-based mirroring to provide enterprise levels of reliability. In our experience, a DDR based solid state disk has a failure rate equal to the odds of losing two hard disk drives in a RAID set.
# Vulnerability against certain types of effects, including abrupt power loss (especially DRAM based SSDs), magnetic fields and electric/static charges compared to normal HDDs (which store the data inside a Faraday cage).
Comment: This statement is all FUD. For example, our DDR RAM based systems have redundant power supplies, N+1 redundant batteries, and four RAID-protected "hard disk drives" for data backup. The memory is ECC protected and Chipkill protected.
# Slower than conventional disks on sequential I/O
Comment: Most flash drives will be slower on sequential I/O than a hard disk drive (to really understand this you should know there are different kinds of flash memory, which also affect flash performance). DDR RAM based systems, however, offer enormous performance benefits versus hard disk or flash based systems for sequential or random writes. DDR RAM systems can handle over 400,000 random write I/Os per second (the number is slightly higher for sequential access). We would be happy to share some Oracle ORION benchmark data to make the point. For redo logs on a heavily transactional system, the latency of the redo log storage can be the ultimate limit on the database.
# Limited write cycles. Typical flash storage will typically wear out after 100,000-300,000 write cycles, while high-endurance flash storage is often marketed with endurance of 1-5 million write cycles (many log files, file allocation tables, and other commonly used parts of the file system exceed this over the lifetime of a computer). Special file systems or firmware designs can mitigate this problem by spreading writes over the entire device, rather than rewriting files in place.
Comment: This statement is mostly accurate but refers only to flash drives. DDR RAM based systems, such as those Don's books refer to, do not have this limitation.
> Looking at many of your postings to Oracle Forums thus far Don, it seems to me that you are less interested in providing actual practical help, and more interested in self-promotion - of your company and the Oracle books produced by it.
> .. and that is not a very nice approach when people post real problems wanting real world practical advice and suggestions.
Comment: Contact us and we will see if we can prove to you that Don, and any number of other reputable Oracle consultants, recommend using DDR based solid state disk to solve redo log performance issues. In fact, if it looks like your system can see a serious performance increase, we would be happy to put you on our evaluation program to try it out, so that you can do it at no cost from us. -
Error verifying REDO LOG COLLECTOR
Dear all,
I get this error when I try to verify the REDO collector:
$avorcldb verify -src 10.52.128.176:3000:TEST -colltype REDO
Introducir Nombre de Usuario de Origen: srcuser
Introducir Contraseña de Origen:
ERROR: el nombre de base de datos global para la base de datos origen debe incluir el dominio para utilizar el recopilador REDO LOG
ERROR: defina los parámetros init.ora anteriores para los valores recomendados/necesarios
The message language is Spanish... sorry... the translation is:
ERROR: The global database name for the source database must include the domain in order to use the REDO LOG collector.
ERROR: Set the above init.ora parameters to the recommended/required values.
I think the error is because of the DB_DOMAIN parameter of the source database. I've included it and it doesn't work.
Thank you!!
Of course, the problem looks easy but I still can't solve it...
I created the source user on the source database, following the instructions in the Administrator's Guide.
After creating this user I granted him the privileges from the zarsspriv.sql script, and so on. Then I had to add a database; to do that, I had to enter the username and password of the user I created, and I had no problems...
Then I needed to add the collectors for the source database. When I add these collectors I don't have to enter the username/password, because I entered it when I registered the source database. Well... I add the DBAUD and OSAUD collectors fine, but when I try to add the REDO collector I receive the error...
The strangest thing is that I can verify the collectors fine:
avorcldb verify -src neptuno:3000:testl -colltype REDO
Enter Source user name: srcuser
Enter Source password: ******
source neptuno verified for REDO Log Audit Collector collector
...but when I launch the add_collector command for the REDO collector I have the problem.
I'm using Oracle 10g, not 11g.
Thanks -
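The error above asks for a fully qualified global database name. A minimal sketch of checking and setting it (the domain example.com is hypothetical):
-- The REDO collector requires db_name.db_domain as the global name:
select * from global_name;
alter database rename global_name to test.example.com;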
Use of standby redo log files in primary database
Hi All,
What is the exact use of setting up standby redo log files in the primary database in a Data Guard setup?
Any good documents?
A standby redo log is required for the maximum protection and maximum availability modes, and the LGWR ASYNC transport mode is recommended for all databases. Data Guard can recover and apply more redo data from a standby redo log than from archived redo log files alone.
You should plan the standby redo log configuration and create all required log groups and group members when you create the standby database. For increased availability, consider multiplexing the standby redo log files, similar to the way that online redo log files are multiplexed.
Refer to the link below and perform the following steps to configure the standby redo log:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/create_ps.htm#i1225703
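A minimal sketch of that configuration step (paths and sizes are hypothetical; standby redo logs should match the size of the online redo logs):
alter database add standby logfile group 4 ('/u01/oradata/srl04.log') size 50m;
alter database add standby logfile group 5 ('/u01/oradata/srl05.log') size 50m;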
If the real-time apply feature is enabled, log apply services can apply redo data as it is received, without waiting for the current standby redo log file to be archived. This results in faster switchover and failover times because the standby redo log files have been applied already to the standby database by the time the failover or switchover begins.
Refer to the link:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/log_apply.htm#i1023371 -
Can I use old dbf, ctl and redo.log files on re-installation? URGENT
Hello,
DB: 10.2.0.1.0
For various reasons I am reinstalling the Oracle DB on our test server.
Prior to the install I removed all files under ORACLE_HOME.
But I kept my old datafiles, redo logs & control files.
Is it OK to define the same database file locations where my old files existed?
We have the dbf, redo log, and ctl files in that location.
I wonder, if I use the same files, whether I can skip creating new datafiles and importing data. This is our test database.
D
If all database files are intact then you can reinstall the Oracle software and just start the pre-existing database. "All the files" would include the spfile or a copy of the init.ora.
In fact, that is one way of doing a version upgrade: overlaying $ORACLE_HOME with the new version. We like to install in a new home, since we usually run multiple versions at one time, and then start the existing databases using the new release, running the database upgrade scripts one at a time once the release proves stable.
HTH -- Mark D Powell -- -
Sizing the redo log files using the optimal_logfile_size column.
Regards
I have a specific question regarding logfile size. I have deployed a test database and I was exploring how to choose an optimal redo log size for performance tuning, using the OPTIMAL_LOGFILE_SIZE column of v$instance_recovery. My main goal is to reduce the redo bytes required for instance recovery. So far I have not been able to optimize the redo log file size. Here are the steps I followed:
In order to use the advisory from v$instance_recovery I had to set the fast_start_mttr_target parameter, which is not set by default, so I did these steps:
1)SQL> sho parameter fast_start_mttr_target;
NAME TYPE VALUE
fast_start_mttr_target integer 0
2) Setting fast_start_mttr_target requires zeroing the following checkpoint parameters:
SQL> show parameter log_checkpoint;
NAME TYPE VALUE
log_checkpoint_interval integer 0
log_checkpoint_timeout integer 1800
log_checkpoints_to_alert boolean FALSE
SQL> select ISSES_MODIFIABLE,ISSYS_MODIFIABLE,ISINSTANCE_MODIFIABLE,ISMODIFIED from v$parameter where name like'log_checkpoint_timeout';
ISSES_MODIFIABL ISSYS_MODIFIABLE ISINSTANCE_MODI ISMODIFIED
FALSE IMMEDIATE TRUE FALSE
SQL> alter system set log_checkpoint_timeout=0 scope=both;
System altered.
SQL> show parameter log_checkpoint_timeout;
NAME TYPE VALUE
log_checkpoint_timeout integer 0
3) Now setting fast_start_mttr_target
SQL> select ISSES_MODIFIABLE,ISSYS_MODIFIABLE,ISINSTANCE_MODIFIABLE,ISMODIFIED from v$parameter where name like'fast_start_mttr_target';
ISSES_MODIFIABL ISSYS_MODIFIABLE ISINSTANCE_MODI ISMODIFIED
FALSE IMMEDIATE TRUE FALSE
Setting fast_start_mttr_target to 1200 (= 20 minutes) per the Oracle recommendation.
Querying the v$instance_recovery view
4) SQL> select ACTUAL_REDO_BLKS,TARGET_REDO_BLKS,TARGET_MTTR,ESTIMATED_MTTR, OPTIMAL_LOGFILE_SIZE,CKPT_BLOCK_WRITES from v$instance_recovery;
ACTUAL_REDO_BLKS TARGET_REDO_BLKS TARGET_MTTR ESTIMATED_MTTR OPTIMAL_LOGFILE_SIZE CKPT_BLOCK_WRITES
276 165888 93 59 361 16040
Here TARGET_MTTR was 93, so I set fast_start_mttr_target to 120:
SQL> alter system set fast_start_mttr_target=120 scope=both;
System altered.
Now the logfile size suggested by v$instance_recovery is 290 MB:
SQL> select ACTUAL_REDO_BLKS,TARGET_REDO_BLKS,TARGET_MTTR,ESTIMATED_MTTR, OPTIMAL_LOGFILE_SIZE,CKPT_BLOCK_WRITES from v$instance_recovery;
ACTUAL_REDO_BLKS TARGET_REDO_BLKS TARGET_MTTR ESTIMATED_MTTR OPTIMAL_LOGFILE_SIZE CKPT_BLOCK_WRITES
59 165888 93 59 290 16080
After altering the logfile size to 290 MB, as shown below in the v$log view:
SQL> select GROUP#,THREAD#,SEQUENCE#,BYTES from v$log;
GROUP# THREAD# SEQUENCE# BYTES
1 1 24 304087040
2 1 0 304087040
3 1 0 304087040
4 1 0 304087040
5) After altering the size I observed an anomaly: the redo log blocks to be applied for recovery increased from 59 to 696, and v$instance_recovery is now suggesting a logfile size of 276 MB. Have I misunderstood something?
SQL> select ACTUAL_REDO_BLKS,TARGET_REDO_BLKS,TARGET_MTTR,ESTIMATED_MTTR, OPTIMAL_LOGFILE_SIZE,CKPT_BLOCK_WRITES from v$instance_recovery;
ACTUAL_REDO_BLKS TARGET_REDO_BLKS TARGET_MTTR ESTIMATED_MTTR OPTIMAL_LOGFILE_SIZE CKPT_BLOCK_WRITES
696 646947 120 59 276 18474
Please clarify the above output. I am unable to optimize the logfile size and have not been able to achieve the goal of reducing the redo log blocks to be applied for recovery; any help is appreciated in this regard.
sunny_123 wrote:
Sir, Oracle says that fast_start_mttr_target can be set up to 3600 (= 1 hour), as suggested by the following Oracle document:
http://docs.oracle.com/cd/B10500_01/server.920/a96533/instreco.htm
I set mine to 1200 (= 20 minutes). Later I adjusted it to 120 (= 2 minutes), as TARGET_MTTR suggested it should be around 100 (if the fast_start_mttr_target value is too high or too low, the effective value is shown in TARGET_MTTR of v$instance_recovery).
Just to add: you are reading the 9.2 documentation, and a lot has changed since then. For example, in 9.2 the FSMTTR parameter was introduced and explicitly required to be set and monitored by the DBA because of the additional checkpoint writes it might cause. From 10g onwards this parameter is maintained automatically by Oracle. Also, 9i has long been desupported, followed by 10g, so it's better to start reading the latest 11g documentation, or at least 10.2.
Aman.... -
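For reference, online redo logs cannot be resized in place; a minimal sketch of the usual approach (the group number, path and size are hypothetical):
-- Add new groups at the desired size:
alter database add logfile group 6 ('/u01/oradata/redo06.log') size 290m;
-- Switch until the old group is neither CURRENT nor ACTIVE, then drop it:
alter system switch logfile;
alter system checkpoint;
alter database drop logfile group 1;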
Where are BLOB Files stored when using redo log files.
I am using archive log mode for my backups. I was wondering whether BLOBs get stored in the redo log files or are archived somewhere else.
Rob.
BLOBs are just columns of some tables; by default they are stored in the table's tablespace, in their own segments. Changes to BLOB columns are also recorded in the redo log, more or less like any other column.
See a short example of default LOB storage in Re: CLOB Datatype [About Space allocation]
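A minimal sketch of where redo generation for a LOB is controlled (the table and column names are hypothetical):
-- LOGGING (the default) means LOB changes generate redo and are archived
-- like any other change; NOLOGGING would skip full redo for the LOB data.
create table docs (
  id  number,
  doc blob
) lob (doc) store as (tablespace users nocache logging);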