How to corrupt a redo log ... :) ?
Hi. Could anybody tell me how (if it is possible) to corrupt a redo log? I would like to run a test and I need such a corruption to reproduce it. So, any suggestions?
Thanks, Paul T.
Open the file in any editor and add some entries.
If you want to corrupt data or control files, anything is possible :)
But this is not a question that should be posted.
The files in use, meaning the database files of the running instance, cannot be edited; only after shutting down the instance can you sabotage the DB.
Similar Messages
-
Hi Experts,
I want to remove a wrong redo log file from a 10g R2 database on Windows.
How do I do that without losing data?
My steps are:
1. alter system switch logfile;
2. select * from v$log;
Based on which ARC and STATUS values from the above SQL can I drop a redo log file:
no archive and ACTIVE status?
Also, which account should I use for the above action?
For example, if the SYSTEM account added the redo log file, can I
only drop it as SYSTEM? How about SYS?
Thanks for help with detailed steps.
Jim
Edited by: user589812 on Dec 23, 2008 4:35 PM
Jim,
Check this link out for how to drop a redo log file
Make sure a redo log group is archived (if archiving is enabled) before dropping it. To see whether this has happened, use the V$LOG view.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/onlineredo.htm#i1006489
SELECT GROUP#, ARCHIVED, STATUS FROM V$LOG;
GROUP# ARC STATUS
1 YES ACTIVE
2 NO CURRENT
3 YES INACTIVE
4 YES INACTIVE
Drop a redo log group with the SQL statement ALTER DATABASE with the DROP LOGFILE clause.
The following statement drops redo log group number 3:
ALTER DATABASE DROP LOGFILE GROUP 3; -
Hi,
We are using Oracle 7.3 and we had a system crash. After I booted the system and tried to open the database, I got a message that one of the redo logs (the current one) was corrupted and hence the database cannot be opened. We are in NOARCHIVELOG mode, and since we are using RAID 5 we haven't multiplexed the files. I cannot drop the log (as I cannot open the database, I cannot switch the logfile, drop it, and rebuild it again). Is there any method to drop the log and rebuild it? (The final option is to restore the whole system through "ufsrestore", since the total storage of all HDDs is only 16 GB, and then import the schema.)
thanks
Arun
Citizen_2 wrote:
How can I force the archiving of redo log group 2?
How could you archive a log group when you have lost all members of that group?
Have you checked out this documentation: [Recovering After the Loss of Online Redo Log Files|file://///castle/1862/Home%20Directories/Vaillancourt/Oracle%2010gR2%20Doc/backup.102/b14191/recoscen008.htm#sthref1872]
More specifically:
If the group is: CURRENT
Then: it is the log that the database is currently writing to.
And you should: attempt to clear the log; if that is impossible, you must restore a backup and perform incomplete recovery up to the most recent available redo log.
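As a sketch of the "clear the log" option from the quoted documentation (assuming the damaged group is group 2 — take the actual group number from your own V$LOG output, and note that CLEAR UNARCHIVED LOGFILE is only available on releases that support it):

```sql
-- Mount the database without opening it.
STARTUP MOUNT

-- Clear the corrupt group. UNARCHIVED is required when the group was
-- never archived. Warning: this makes existing backups unusable for
-- complete recovery past this point.
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 2;

-- Open the database, then take a fresh full backup immediately.
ALTER DATABASE OPEN;
```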
HTH! -
Logmnr/capture error b'coz of corruption of redo log block
Hi,
We all know that the capture process reads the redo entries from redo log files or archived log files; therefore we need to have the DB in ARCHIVELOG mode.
In the alert log file, I found an error saying:
Creating archive destination LOG_ARCHIVE_DEST_1: 'E:\ORACLE\ORADATA\REPF\ARCHIVE\LOCATION01\1_36.ARC'
ARC0: Log corruption near block 66922 change 0 time ?
ARC0: All Archive destinations made inactive due to error 354
Fri Apr 04 12:57:44 2003
Errors in file e:\oracle\admin\repf\bdump\trishul_arc0_1724.trc:
ORA-00354: corrupt redo log block header
ORA-00353: log corruption near block 66922 change 0 time 04/04/2003 11:05:40
ORA-00312: online log 2 thread 1: 'E:\ORACLE\ORADATA\REPF\ARCHIVE\REDO02.LOG'
ORA-00312: online log 2 thread 1: 'E:\ORACLE\ORADATA\REPF\REDO02.LOG'
As a normal practice we do have multiplexing of redo log files at different locations, but even the second copy of the redo log is of no use to recover it. This explains why the redo log could not be archived: it can't be read. The same is true for the LogMiner process; it could not read the redo log file and it failed. Now, we have a way to recover from this situation (as far as the DB is concerned, not Streams replication), since the shutdown after this error was IMMEDIATE, causing a checkpoint, so rollback/rollforward is not required during startup (no instance recovery). We can put the DB in NOARCHIVELOG mode, drop that particular group, create a new one, and turn the DB back to ARCHIVELOG mode. This will certainly serve the purpose as far as the consistency of the DB is concerned.
Here is the catch for Streams replication. The redo log that got corrupted must contain a few transactions that have not been archived, each with a corresponding SCN. The capture process reads the information sequentially in SCN order. A few transactions are now missed, and the capture process can't jump to a later SCN, skipping SCNs in between. So we have to re-instantiate the objects on the other system, which has no errors, and start working on it. My worry is what will happen to those missed transactions on the other database: it is an absolute loss of data. In development I can manage that, but in a real production environment this is a critical situation. How do I recover from this situation and get the corrupted information back from the redo log?
I have not dropped any of the log groups yet, because I would like to recover from this situation without loss of data.
Thanx, & regards,
Kamlesh Chaudhary
Content of trace files :
Dump file e:\oracle\admin\repf\bdump\trishul_arc0_1724.trc
Fri Apr 04 12:57:31 2003
ORACLE V9.2.0.2.1 - Production vsnsta=0
vsnsql=12 vsnxtr=3
Windows 2000 Version 5.0 Service Pack 2, CPU type 586
Oracle9i Enterprise Edition Release 9.2.0.2.1 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.2.0 - Production
Windows 2000 Version 5.0 Service Pack 2, CPU type 586
Instance name: trishul
Redo thread mounted by this instance: 1
Oracle process number: 16
Windows thread id: 1724, image: ORACLE.EXE
*** SESSION ID:(13.1) 2003-04-04 12:57:31.000
- Created archivelog as 'E:\ORACLE\ORADATA\REPF\ARCHIVE\LOCATION02\1_36.ARC'
- Created archivelog as 'E:\ORACLE\ORADATA\REPF\ARCHIVE\LOCATION01\1_36.ARC'
*** 2003-04-04 12:57:44.000
ARC0: All Archive destinations made inactive due to error 354
*** 2003-04-04 12:57:44.000
kcrrfail: dest:2 err:354 force:0
*** 2003-04-04 12:57:44.000
kcrrfail: dest:1 err:354 force:0
ORA-00354: corrupt redo log block header
ORA-00353: log corruption near block 66922 change 0 time 04/04/2003 11:05:40
ORA-00312: online log 2 thread 1: 'E:\ORACLE\ORADATA\REPF\ARCHIVE\REDO02.LOG'
ORA-00312: online log 2 thread 1: 'E:\ORACLE\ORADATA\REPF\REDO02.LOG'
*** 2003-04-04 12:57:44.000
ARC0: Archiving not possible: error count exceeded
ORA-16038: log 2 sequence# 36 cannot be archived
ORA-00354: corrupt redo log block header
ORA-00312: online log 2 thread 1: 'E:\ORACLE\ORADATA\REPF\REDO02.LOG'
ORA-00312: online log 2 thread 1: 'E:\ORACLE\ORADATA\REPF\ARCHIVE\REDO02.LOG'
ORA-16014: log 2 sequence# 36 not archived, no available destinations
ORA-00312: online log 2 thread 1: 'E:\ORACLE\ORADATA\REPF\REDO02.LOG'
ORA-00312: online log 2 thread 1: 'E:\ORACLE\ORADATA\REPF\ARCHIVE\REDO02.LOG'
(the three lines above are repeated several more times in the trace)
Dump file e:\oracle\admin\repf\udump\trishul_cp01_2048.trc
Fri Apr 04 12:57:27 2003
ORACLE V9.2.0.2.1 - Production vsnsta=0
vsnsql=12 vsnxtr=3
Windows 2000 Version 5.0 Service Pack 2, CPU type 586
Oracle9i Enterprise Edition Release 9.2.0.2.1 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.2.0 - Production
Windows 2000 Version 5.0 Service Pack 2, CPU type 586
Instance name: trishul
Redo thread mounted by this instance: 1
Oracle process number: 30
Windows thread id: 2048, image: ORACLE.EXE (CP01)
*** 2003-04-04 12:57:28.000
*** SESSION ID:(27.42) 2003-04-04 12:57:27.000
TLCR process death detected. Shutting down TLCR
error 1280 in STREAMS process
ORA-01280: Fatal LogMiner Error.
OPIRIP: Uncaught error 447. Error stack:
ORA-00447: fatal error in background process
ORA-01280: Fatal LogMiner Error
I have a similar problem: I am using a Streams environment and have got this
"ORA-00353: log corruption near block" error in the alert.log file
while capturing the changes on the primary database, and the Capture
process aborted after that.
Were those transactions lost, or, after I started the Capture process
again, were they captured and sent to the target database?
Has anyone solved that problem?
Can you help me with it? -
How to damage online redo log for simulation
Dear All,
Kindly please inform me: is there any way to damage an online redo log from the database level (not via an OS command like dd)?
I need it for a test case to enable db_block_checking and db_block_checksum (setting both parameters to TRUE).
Will those parameters help against redo log corruption? That's why I want to prove it.
Thanks
Anthony
user12215770 wrote:
My purpose is to verify that db_block_checking and db_block_checksum can avoid redo corruption (corruption caused by a process in the database).
Redo corruption could also occur due to other issues, as http://docs.oracle.com/cd/E11882_01/server.112/e25513/initparams049.htm#REFRN10030 says:
>
Checksums allow Oracle to detect corruption caused by underlying disks, storage systems, or I/O systems. If set to FULL, DB_BLOCK_CHECKSUM also catches in-memory corruptions and stops them from making it to the disk.
>
You could try to use the ORADEBUG POKE command to write directly into the SGA, if you know how to find the log buffer blocks ... About ORADEBUG, please read http://www.juliandyke.com/Diagnostics/Tools/ORADEBUG/ORADEBUG.html. -
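Following up on the ORADEBUG suggestion above, a heavily hedged sketch for test systems only — the address and value below are purely illustrative, and you would first have to locate a real log buffer address on your own instance (e.g. from an SGA dump):

```sql
-- Run in SQL*Plus connected AS SYSDBA. POKE writes raw bytes into the
-- SGA: oradebug poke <address> <length> <value>. Never do this on a
-- database you care about.
ORADEBUG SETMYPID
ORADEBUG POKE 0x0ABCD123 4 0xDEADBEEF
```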
How to reduce excessive redo log generation in Oracle 10G
Hi All,
Please let me know if there is any way to reduce excessive redo log generation in Oracle DB 10.2.0.3.
Previously only about 15 archive log files were generated per day, but nowadays it has increased to 40 to 45.
below is the size of redo log file members:
L.BYTES/1024/1024 MEMBER
200 /u05/applprod/prdnlog/redolog1a.dbf
200 /u06/applprod/prdnlog/redolog1b.dbf
200 /u05/applprod/prdnlog/redolog2a.dbf
200 /u06/applprod/prdnlog/redolog2b.dbf
200 /u05/applprod/prdnlog/redolog3a.dbf
200 /u06/applprod/prdnlog/redolog3b.dbf
Here is some content of the alert log for your reference, showing how frequently log switches are occurring:
Beginning log switch checkpoint up to RBA [0x441f.2.10], SCN: 4871839752
Thread 1 advanced to log sequence 17439
Current log# 3 seq# 17439 mem# 0: /u05/applprod/prdnlog/redolog3a.dbf
Current log# 3 seq# 17439 mem# 1: /u06/applprod/prdnlog/redolog3b.dbf
Tue Jul 13 14:46:17 2010
Completed checkpoint up to RBA [0x441f.2.10], SCN: 4871839752
Tue Jul 13 14:46:38 2010
Beginning log switch checkpoint up to RBA [0x4420.2.10], SCN: 4871846489
Thread 1 advanced to log sequence 17440
Current log# 1 seq# 17440 mem# 0: /u05/applprod/prdnlog/redolog1a.dbf
Current log# 1 seq# 17440 mem# 1: /u06/applprod/prdnlog/redolog1b.dbf
Tue Jul 13 14:46:52 2010
Completed checkpoint up to RBA [0x4420.2.10], SCN: 4871846489
Tue Jul 13 14:53:33 2010
Beginning log switch checkpoint up to RBA [0x4421.2.10], SCN: 4871897354
Thread 1 advanced to log sequence 17441
Current log# 2 seq# 17441 mem# 0: /u05/applprod/prdnlog/redolog2a.dbf
Current log# 2 seq# 17441 mem# 1: /u06/applprod/prdnlog/redolog2b.dbf
Tue Jul 13 14:53:37 2010
Completed checkpoint up to RBA [0x4421.2.10], SCN: 4871897354
Tue Jul 13 14:55:37 2010
Incremental checkpoint up to RBA [0x4421.4b45c.0], current log tail at RBA [0x4421.4b5c5.0]
Tue Jul 13 15:15:37 2010
Incremental checkpoint up to RBA [0x4421.4d0c1.0], current log tail at RBA [0x4421.4d377.0]
Tue Jul 13 15:35:38 2010
Incremental checkpoint up to RBA [0x4421.545e2.0], current log tail at RBA [0x4421.54ad9.0]
Tue Jul 13 15:55:39 2010
Incremental checkpoint up to RBA [0x4421.55eda.0], current log tail at RBA [0x4421.56aa5.0]
Tue Jul 13 16:15:41 2010
Incremental checkpoint up to RBA [0x4421.58bc6.0], current log tail at RBA [0x4421.596de.0]
Tue Jul 13 16:35:41 2010
Incremental checkpoint up to RBA [0x4421.5a7ae.0], current log tail at RBA [0x4421.5aae2.0]
Tue Jul 13 16:42:28 2010
Beginning log switch checkpoint up to RBA [0x4422.2.10], SCN: 4872672366
Thread 1 advanced to log sequence 17442
Current log# 3 seq# 17442 mem# 0: /u05/applprod/prdnlog/redolog3a.dbf
Current log# 3 seq# 17442 mem# 1: /u06/applprod/prdnlog/redolog3b.dbf
Thanks in advance
hi,
Use the below script to find out at what hour the generation of archives is highest, and in that hour check, for example, whether MVs are refreshing, or whether any programs are doing large deletes (delete from a table without a where clause, etc.).
select
to_char(first_time,'DD-MM-YY') day,
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'00',1,0)),'999') "00",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'01',1,0)),'999') "01",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'02',1,0)),'999') "02",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'03',1,0)),'999') "03",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'04',1,0)),'999') "04",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'05',1,0)),'999') "05",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'06',1,0)),'999') "06",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'07',1,0)),'999') "07",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'08',1,0)),'999') "08",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'09',1,0)),'999') "09",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'10',1,0)),'999') "10",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'11',1,0)),'999') "11",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'12',1,0)),'999') "12",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'13',1,0)),'999') "13",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'14',1,0)),'999') "14",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'15',1,0)),'999') "15",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'16',1,0)),'999') "16",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'17',1,0)),'999') "17",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'18',1,0)),'999') "18",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'19',1,0)),'999') "19",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'20',1,0)),'999') "20",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'21',1,0)),'999') "21",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'22',1,0)),'999') "22",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'23',1,0)),'999') "23",
COUNT(*) TOT
from v$log_history
group by to_char(first_time,'DD-MM-YY')
order by day
thanks,
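In addition to the hourly archive profile from the script above, a sketch for spotting the sessions generating the most redo right now (a join of standard v$ views; 'redo size' is the cumulative redo generated per session since logon):

```sql
SELECT s.sid, s.username, s.program, st.value AS redo_bytes
FROM   v$sesstat  st
JOIN   v$statname sn ON sn.statistic# = st.statistic#
JOIN   v$session  s  ON s.sid = st.sid
WHERE  sn.name = 'redo size'      -- cumulative redo per session
ORDER  BY st.value DESC;
```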
baskar.l -
How to change the redo log file location.... ?
I want all my redo log files to be created in \u10 instead of current /u01?
How to do it? NOARCHIVELOG mode database on Oracle 10g R2.
Thank you,
Smith
Hi..
I want all my redo log files to be created in \u10 instead of current /u01?
I think it should be /u10 :)...
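Since online redo logs cannot simply be renamed while the database is open, the usual approach is to add new groups in the target location and drop the old ones (a sketch; paths, sizes, and group numbers are illustrative):

```sql
-- 1. Add replacement groups on /u10.
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u10/oradata/db1/redo04.log') SIZE 50M;
ALTER DATABASE ADD LOGFILE GROUP 5 ('/u10/oradata/db1/redo05.log') SIZE 50M;
ALTER DATABASE ADD LOGFILE GROUP 6 ('/u10/oradata/db1/redo06.log') SIZE 50M;

-- 2. Switch and checkpoint until no old group is CURRENT or ACTIVE
--    (check v$log between switches).
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;

-- 3. Drop the old groups, then delete their files on /u01 at OS level.
ALTER DATABASE DROP LOGFILE GROUP 1;
ALTER DATABASE DROP LOGFILE GROUP 2;
ALTER DATABASE DROP LOGFILE GROUP 3;
```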
Anand
Edited by: Anand... on Nov 5, 2009 5:32 AM Removed the misinformation about downtime -
How does LGWR write redo log files, I am puzzled!
The document says:
The LGWR concurrently writes the same information to all online redo log files in a group.
my understanding of the sentence is the following; for example:
group a includes file(a1, a2)
group b includes file(b1, b2)
LGWR write file sequence: write a1, a2 concurrently; afterwards write b1, b2 concurrently.
my questions are the following:
1. Is my understanding right?
2. If my understanding is right, I think the separate log files in a group should be saved on different disks; if not, it can't guarantee correct recovery.
Is my opinion right?
thanks everyone!
Hi,
>>That is multiplexing... you should always have the members of a log group on more than 1 disk
Exactly. You can keep multiple copies of the online redo log file to safeguard against damage to these files. When multiplexing online redo log files, LGWR concurrently writes the same redo log information to multiple identical online redo log files, thereby eliminating a single point of redo log failure. In addition, when multiplexing redo log files, it is preferable to keep the members of a group on different disks, so that one disk failure will not affect the continuing operation of the database. If LGWR can write to at least one member of the group, database operation proceeds as normal.
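A minimal sketch of adding a second member to an existing group on a different disk (the path and group number are illustrative):

```sql
-- Put the new member on a different disk than the existing one, so a
-- single disk failure cannot take out the whole group.
ALTER DATABASE ADD LOGFILE MEMBER '/disk2/oradata/db1/redo01_b.log'
  TO GROUP 1;
```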
Cheers
Legatti -
How to find the configuration status of REDO log files?
I am in the process of moving the redo log files.
Before that I want to find out whether it is set up as duplex or any other configurations.
How to find the REDO log configurations.
Oracle 10g R2
Thank you,
Smith
Example:
Non-duplexed redo log - if the number of members is 1, then it is not duplexed:
SQL> select group#, members from v$log;
GROUP# MEMBERS
1 1
2 1
3 1
SQL> select group#, member from v$logfile;
GROUP# MEMBER
1 /u01/app/oracle/oradata/db1/redo01.log
2 /u01/app/oracle/oradata/db1/redo02.log
3 /u01/app/oracle/oradata/db1/redo03.log
SQL> alter database add logfile member '/u01/app/oracle/oradata/db1/redo01_2.log' to group 1;
Database altered.
SQL> alter database add logfile member '/u01/app/oracle/oradata/db1/redo02_2.log' to group 2;
Database altered.
SQL> alter database add logfile member '/u01/app/oracle/oradata/db1/redo03_2.log' to group 3;
Database altered.
Duplexed redo log:
SQL> select group#, members from v$log;
GROUP# MEMBERS
1 2
2 2
3 2
SQL> select group#, member from v$logfile;
GROUP# MEMBER
1 /u01/app/oracle/oradata/db1/redo01.log
2 /u01/app/oracle/oradata/db1/redo02.log
3 /u01/app/oracle/oradata/db1/redo03.log
1 /u01/app/oracle/oradata/db1/redo01_2.log
2 /u01/app/oracle/oradata/db1/redo02_2.log
3 /u01/app/oracle/oradata/db1/redo03_2.log -
Hi All
Can anyone let me know how to increase the redo log size in SAP R/3 on Oracle?
We are getting the warning "Checkpoint not complete", and when I searched, note 79341 says to increase the size of the redo logs.
Kindly let me know how to increase the redo logs.
Regards!!
Hi,
Online redo log files should be set up according to the following rules:
1. Avoid log switches with a frequency lower than one minute.
2. Always set up the online redo log files with the same size.
3. Switch on Oracle mirroring.
Refer SAP Note 309526.
Note: Setting up Oracle mirroring may slightly degrade the performance of database commit times, but you should implement it to avoid major problems caused by corruption or loss of an online redo log file. To keep performance degradation to a minimum, store the online redo log files on disks without any other major I/O load.
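Redo log files cannot be resized in place; the increase is done by adding new, larger groups and dropping the old ones once they are inactive (a sketch; paths, sizes, and group numbers are illustrative):

```sql
-- Add new 100 MB groups alongside the existing 50 MB ones.
ALTER DATABASE ADD LOGFILE GROUP 5 ('/oracle/oradata/db1/redo05.dbf') SIZE 100M;
ALTER DATABASE ADD LOGFILE GROUP 6 ('/oracle/oradata/db1/redo06.dbf') SIZE 100M;

-- Switch until the old groups show INACTIVE in v$log, then drop them
-- and remove their files at the OS level.
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;
ALTER DATABASE DROP LOGFILE GROUP 1;
```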
Minimal size of online redo log files in bytes: if it is currently 52428800 (50 MB), then
go up to 100 MB if log switches are happening frequently. -
Problem Description: alter database open
ERROR at line 1:
ORA-00368: checksum error in redo log block
ORA-00353: log corruption near block 75 change 898387731 time 11/07/2009 17:15:29
ORA-00312: online log 2 thread 1: '/opt/oracle/archivelogs/online_logs/redo02.log'
Elapsed: 00:00:01.71
SQL> select CONTROLFILE_CHANGE#, ARCHIVELOG_CHANGE# from v$database;
CONTROLFILE_CHANGE# ARCHIVELOG_CHANGE#
898387746 898387674
Elapsed: 00:00:00.00
SQL> select distinct checkpoint_change# from v$datafile_header;
CHECKPOINT_CHANGE#
898385334
Elapsed: 00:00:00.01
I'm wondering if I should clear the logfile, or restore and recover the control file to change# 898385334 or a higher number. Please advise. I filled out a severity 1 request, but I haven't heard back; it has been nearly 3 hours. I'm thinking I need to restore and recover the control file, but I would like some expert advice.
If it matters:
Oracle 11.1.0.6.0
the corruption was caused by a full disk (Oracle completely filled one archivelog destination while another had 600GB+ free)
the backups were done with RMAN
a full restore and recover will take considerable time (probably a week or more by the time I get the data off of tape)
I have all the archivelogs
I have a recent control file backup
and I'd be happy to post any other information.
Flashback is not enabled. I think it was enabled when we were running this on a win32 platform, but it's not anymore.
I thought the logfiles were duplexed when the database was set up, but unfortunately they aren't. It also turns out that the datafiles are fuzzy; they need recovery from the corrupted online redo log, probably past the corruption point. Once the database is online again, I'll duplex or multiplex the logfiles before letting the customer use it again.
The database is about 1.3TB. A full backup normally takes two days to complete, plus it would a few days to free up the space on the backup server. However, I have some much faster disk space now.
Oracle support has gotten back to me, but their support system is suffering from partial outages. We tried applying all the logfiles, then an open resetlogs, but it didn't work, due to datafiles being fuzzy. His recommendation was to do a full restore and recover, but that will probably take more than a week. Most of the data has been archived to tape, and I've been generating 100's of GB (in their bzip2 compressed backup form) of archivelogs working on restoring the database from a previous crash.
He said we could also try to force the database open, do a full export, create the database again, then import the data, and said it had a pretty good chance of working. Unless I come across some other option, we'll try that first, since by morning, I'll have somewhere around 1.8TB of extra disk space to hold the data. -
Dear All,
How can I check the health of the redo log files? We have a 200 MB undo tablespace on our production server; is it enough for huge transactions? Can I check how often my redo log data has been overwritten?
Further, in which situation should we add online redo log groups, and in which situation should we add log members?
My rollback segment is using the SYSTEM tablespace; is that recommended?
What is the recommendation: 1 redo log group with one member, or 1 redo log group with multiple redo log members?
Thanks Mr. Nicolas for your informative guidance.
Can I check how often my redo log file data has been overwritten?
Check v$loghist.
We have 218 records in v$loghist, meaning the data has been overwritten 218 times; I think that's not good. Can you guide me on how to rectify this?
In which situation should we add online redo log groups?
In case "checkpoint not complete" is reported in the alert.log.
How do I find the checkpoint entry in the alert.log?
In which situation should we add log members?
That is redo log multiplexing: at least two members for each redo log group.
OK, can we do multiplexing for members, or just for groups?
My rollback segment is using the SYSTEM tablespace; is that recommended?
No.
OK, can we change the rollback segments' tablespace?
1 redo log group with one member, or 1 redo log group with multiple members?
A minimum of two redo log groups with two members each.
Beyond that, it depends on your DB activity.
We have just one member for each group and we have three groups, so what's your recommendation: should we add one member to each group?
Offline redo log history list.....??
I'm looking for a way to determine the archived redo log frequency for past days, eventually for a longer period of time, and also the list of redo logs archived with their creation timestamps. As far as I've been able to check, this is probably maintained in V$LOG_HISTORY or V$ARCHIVED_LOG.
Any suggestion for best way ?
DB: Oracle 10
Tks
Hi,
You can use the following statement to check the log switching frequency. I found this statement somewhere on the internet but don't remember the source; apologies for that.
set lines 200
select to_date(to_char(first_time,'DD-MM-YYYY'),'DD-MM-YYYY') day,
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'00',1,0)),'999') "00",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'01',1,0)),'999') "01",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'02',1,0)),'999') "02",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'03',1,0)),'999') "03",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'04',1,0)),'999') "04",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'05',1,0)),'999') "05",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'06',1,0)),'999') "06",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'07',1,0)),'999') "07",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'08',1,0)),'999') "08",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'09',1,0)),'999') "09",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'10',1,0)),'999') "10",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'11',1,0)),'999') "11",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'12',1,0)),'999') "12",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'13',1,0)),'999') "13",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'14',1,0)),'999') "14",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'15',1,0)),'999') "15",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'16',1,0)),'999') "16",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'17',1,0)),'999') "17",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'18',1,0)),'999') "18",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'19',1,0)),'999') "19",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'20',1,0)),'999') "20",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'21',1,0)),'999') "21",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'22',1,0)),'999') "22",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'23',1,0)),'999') "23",
sum(1) "TOTAL_IN_DAY"
from v$log_history
group by to_date(to_char(first_time,'DD-MM-YYYY'),'DD-MM-YYYY')
order by day
/
Salman
I'm new to Oracle and trying to understand the functionality of redo logs. Can someone post information about redo logs and how to recover using them?
regards
chanaka
843833 wrote:
I'm new to Oracle and trying to understand the functionality of redo logs. Can someone post information about redo logs and how to recover using them?
regards
chanaka
Overview of the Online Redo Log
http://download.oracle.com/docs/cd/E11882_01/server.112/e16508/physical.htm#CNCPT11302
Oracle uses them to perform roll forward, which means reapplying the changes to the blocks after, for example, a crashed instance restarts.
HTH
Aman.... -
Improving redo log writer performance
I have a database on RAC (2 nodes)
Oracle 10g
Linux 3
2 servers PowerEdge 2850
I'm tuning my database with Spotlight. I already have this alert:
"The Average Redo Write Time alarm is activated when the time taken to write redo log entries exceeds a threshold. "
The servers are not in RAID 5.
How can I improve redo log writer performance?
Unlike most other Oracle write I/Os, Oracle sessions must wait for redo log writes to complete before they can continue processing.
Therefore, redo log devices should be placed on fast devices.
Most modern disks should be able to process a redo log write in less than 20 milliseconds, and often much lower.
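To see where you stand against that 20-millisecond figure, the average waits can be read from v$system_event (time_waited is in centiseconds, hence the *10 to get milliseconds):

```sql
-- 'log file parallel write' is LGWR's own I/O time;
-- 'log file sync' is what sessions actually wait on at COMMIT.
SELECT event,
       total_waits,
       ROUND(time_waited * 10 / NULLIF(total_waits, 0), 2) AS avg_ms
FROM   v$system_event
WHERE  event IN ('log file parallel write', 'log file sync');
```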
To reduce redo write time see Improving redo log writer performance.
See Also:
Tuning Contention - Redo Log Files
Tuning Disk I/O - Archive Writer
Some comments on the section that was pulled from Wikipedia. There is some confusion in the market, as there are different types of solid state disks with different pros and cons. The first major point is that the quote pulled from Wikipedia addresses issues with flash hard disk drives. Flash disks are one type of solid state disk that would be a bad solution for redo acceleration (as I will attempt to describe below), though they could be useful for accelerating read-intensive applications. The type of solid state disk used for redo logs uses DDR RAM as the storage medium. You may decide to discount my advice because I work for one of these SSD manufacturers, but I think if you do enough research you will see the point. There are many articles, and many more customers, on using SSD to accelerate Oracle.
> Assuming that you are not CPU constrained,
moving the online redo to
high-speed solid-state disk can make a huge difference.
Do you honestly think this is practical and usable
advice Don? There is HUGE price difference between
SSD and and normal hard disks. Never mind the
following disadvantages. Quoting
(http://en.wikipedia.org/wiki/Solid_state_disk):
# Price - As of early 2007, flash memory prices are
still considerably higher
per gigabyte than those of comparable conventional
hard drives - around $10
per GB compared to about $0.25 for mechanical
drives.
Comment: Prices for DDR RAM based systems are actually higher than this, with a typical list price around $1000 per GB. Your concern, however, is not price per capacity but price for performance. How many spindles will you have to spread your redo log across to get the performance that you need? How much impact are the redo logs having on your RAID cache effectiveness? Our system is obviously geared to the enterprise, where Oracle is supporting mission-critical databases and a huge return can be made on accelerating Oracle.
Capacity - The capacity of SSDs tends to be
significantly smaller than the
capacity of HDDs.
Comment: This statement is true. Per hard disk drive versus per individual solid state disk system, you can typically get a higher density of storage with a hard disk drive. However, if your goal is redo log acceleration, storage capacity is not your bottleneck; write performance can be. Keep in mind that, just as with any storage medium, you can deploy an array of solid state disks that provides terabytes of capacity (with either DDR or flash).
Lower recoverability - After mechanical failure the
data is completely lost as
the cell is destroyed, while if normal HDD suffers
mechanical failure the data
is often recoverable using expert help.
Comment: If you lose a hard drive holding your redo log, the last thing you are likely to do is have a disk restoration company partially restore your data. You ought to be getting the data from your mirror or RAID to rebuild the failed disk. Similarly, with solid state disks (flash or DDR) we recommend host-based mirroring to provide enterprise levels of reliability. In our experience, a DDR based solid state disk has a failure rate equal to the odds of losing two hard disk drives in a RAID set.
Vulnerability against certain types of effects,
including abrupt power loss
(especially DRAM based SSDs), magnetic fields and
electric/static charges
compared to normal HDDs (which store the data inside
a Faraday cage).
Comment: This statement is all FUD. For example, our DDR RAM based systems have redundant power supplies, N+1 redundant batteries, and four RAID-protected "hard disk drives" for data backup. The memory is ECC protected and Chipkill protected.
Slower than conventional disks on sequential I/O
Comment: Most flash drives will be slower on sequential I/O than a hard disk drive (to really understand this you should know there are different kinds of flash memory, which also affect flash performance). DDR RAM based systems, however, offer enormous performance benefits versus hard disk or flash based systems for sequential or random writes. DDR RAM systems can handle over 400,000 random write I/Os per second (the number is slightly higher for sequential access). We would be happy to share some Oracle ORION benchmark data with you to make the point. For redo logs on a heavily transactional system, the latency of the redo log storage can be the ultimate limit on the database.
Limited write cycles. Typical Flash storage will
typically wear out after
100,000-300,000 write cycles, while high endurance
Flash storage is often
marketed with endurance of 1-5 million write cycles
(many log files, file
allocation tables, and other commonly used parts of
the file system exceed
this over the lifetime of a computer). Special file
systems or firmware
designs can mitigate this problem by spreading
writes over the entire device,
rather than rewriting files in place.
Comment: This statement is mostly accurate but refers only to flash drives. DDR RAM based systems, such as those Don's books refer to, do not have this limitation.
>
Looking at many of your postings to Oracle Forums
thus far Don, it seems to me that you are less
interested in providing actual practical help, and
more interested in self-promotion - of your company
and the Oracle books produced by it.
.. and that is not a very nice approach when people
post real problems wanting real world practical
advice and suggestions.
Comment: Contact us and we will see if we can prove to you that Don, and any number of other reputable Oracle consultants, recommend using DDR based solid state disk to solve redo log performance issues. In fact, if it looks like your system could see a serious performance increase, we would be happy to put you on our evaluation program so that you can try it out at no cost from us.