Redo log moved
Hello everyone,
We have Oracle 10g running on Red Hat 3, with a database on a partition. As this partition got full, I foolishly tried to make some space by moving the redo log files (we have 3 groups with 1 file each) to another partition. The system was restarted, but Oracle didn't start properly: we got "ORA-01033: ORACLE initialization or shutdown in progress". The files were moved back to their original location, but the sequence numbers in the files differ from the one expected: the system expects [...29] and we have [...26], [...27] and [...28] in the respective files.

It is a test database, so consistency is not a problem. I tried "recover ... until cancel", but then I got "ORA-01194: file (system01.dbf) needs more recovery to be consistent". I have read other similar threads, but none of them gave a solution. Is there any way I can solve this, either by changing the value of the expected sequence number (how?) or by ignoring this inconsistency? We would like to be able to use our current data, but we are not concerned about being able to roll it back; it would be OK to have something like a "reincarnation". By the way, we had archiving off when this happened.
Any suggestions will be very appreciated.
Best regards
Hi,
as a last resort, you can recreate the controlfiles. That could get you past the "datafile needs more recovery" error after incomplete recovery, when there are no archivelogs and the redologs are messed up. Be warned: if you are using RMAN backups with the controlfile (i.e. /nocatalog), you will lose the RMAN metadata stored in the controlfile.
To get a list of controlfiles, use
SQL> select name from v$controlfile;
With database in mount state, issue
SQL> alter database backup controlfile to trace;
That will create a trace file in the background dump destination (where the alert log is). In the trace file you will find the commands for recreating the controlfile. Go for the RESETLOGS case.
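If you are not sure which directory that is, it can be checked from SQL*Plus (a minimal sketch; the parameter name is the 10g one):

```sql
-- Shows the background dump destination, where trace files
-- (and the alert log) are written
SHOW PARAMETER background_dump_dest
```

The newest .trc file in that directory after running the backup-to-trace command is normally the one you want.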
Shut down the instance. Remove (or better, rename) all control files (the CREATE CONTROLFILE statement cannot overwrite existing controlfiles).
Start up the database in nomount state:
sqlplus / as sysdba
SQL> startup nomount;
Run the command from the trace file to re-create controlfile, it looks something like the following (BUT USE YOUR OWN FROM THE TRACE FILE!):
SQL>
CREATE CONTROLFILE REUSE DATABASE "ORCL" RESETLOGS NOARCHIVELOG
MAXLOGFILES 5
MAXLOGMEMBERS 3
MAXDATAFILES 14
MAXINSTANCES 1
MAXLOGHISTORY 226
LOGFILE
GROUP 1 'E:\ORACLE\ORADATA\ORCL\REDO01.LOG' SIZE 100M,
GROUP 2 'E:\ORACLE\ORADATA\ORCL\REDO02.LOG' SIZE 100M,
GROUP 3 'E:\ORACLE\ORADATA\ORCL\REDO03.LOG' SIZE 100M
DATAFILE
'E:\ORACLE\ORADATA\ORCL\SYSTEM01.DBF',
'E:\ORACLE\ORADATA\ORCL\UNDOTBS01.DBF',
'E:\ORACLE\ORADATA\ORCL\EXAMPLE01.DBF',
'E:\ORACLE\ORADATA\ORCL\INDX01.DBF',
'E:\ORACLE\ORADATA\ORCL\TOOLS01.DBF',
'E:\ORACLE\ORADATA\ORCL\USERS01.DBF',
'E:\ORACLE\ORADATA\ORCL\OEM_REPOSITORY.DBF',
'E:\ORACLE\ORADATA\ORCL\CWMLITE01.DBF',
'E:\ORACLE\ORADATA\ORCL\DRSYS01.DBF',
'E:\ORACLE\ORADATA\ORCL\ODM01.DBF',
'E:\ORACLE\ORADATA\ORCL\XDB01.DBF',
'E:\ORACLE\ORADATA\ORCL\USERS02.DBF',
'E:\ORACLE\ORADATA\ORCL\USERS03.DBF',
'E:\ORACLE\ORADATA\ORCL\USERS04.DBF'
CHARACTER SET WE8MSWIN1252;
Then, skip the recover statements (as they were leading nowhere) and issue:
SQL> ALTER DATABASE OPEN RESETLOGS;
If it opens, add the tempfiles to the temporary tablespaces; the commands are at the end of the generated trace file:
SQL> ALTER TABLESPACE TEMP add tempfile '....' reuse;
Good luck with that,
Martin
Similar Messages
-
Moving the online redo log files to different location
We just installed few more drives into our sandbox system and I want to move the online redo log files for better performance. We've got the SAPARCH directory moved to a different location.
Does anyone know how/where I can change the parameters so the redo log files point at different drives? It's not in the init<SID>.ora file...
Regards,
Sumit

Hi Sumit,
The following link contains information about moving the redo logs:
http://www.stanford.edu/dept/itss/docs/oracle/9i/server.920/a96521/onlineredo.htm
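In short, that link describes renaming the files while the database is mounted; a minimal sketch, assuming example paths (substitute your own):

```sql
SHUTDOWN IMMEDIATE
-- copy/move the redo log files to the new drives at the OS level, then:
STARTUP MOUNT
ALTER DATABASE RENAME FILE 'D:\oradata\SID\redo01.log'
                        TO 'E:\oradata\SID\redo01.log';
ALTER DATABASE OPEN;
```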
Best regards,
Alwin -
Moving ORACLE_HOME , Datafiles, controlfiles,redo log file locations
Version: 10.2.0.1.0
One of our test DBs' software location (ORACLE_HOME) was wrongly installed in /home/oracle. We have been using this for a year now. Now we are thinking of moving the ORACLE_HOME to a new location, /u02. Because of another disk maintenance activity, we have to move all datafiles, redo log files, control files and tempfiles to a different location as well.
This database is not in ARCHIVELOG mode(luckily).
If I do a fresh installation of 10.2.0.1.0 in /u02, I cannot use the old installation's system01.dbf, sysaux01.dbf and undotbs01.dbf files for this fresh installation. Right?
How do I go about doing this whole move?

This database is not in ARCHIVELOG mode (luckily).
Why?
If I do a fresh installation of 10.2.0.1.0 in /u02, I cannot use the old installation's system01.dbf, sysaux01.dbf and undotbs01.dbf files for this fresh installation. Right?
No.
Issue ALTER DATABASE BACKUP CONTROLFILE TO TRACE. Then shut down the database. Copy all the files, including the generated trace. Change the control_files parameter in the init<SID>.ora file and copy it to the new home. Also tailor the backup trace file for the change in location of the logfiles and datafiles. Then:
$export ORACLE_SID=<SID>
$sqlplus /nolog
SQL>conn sys as sysdba
password:<Enter>
SQL>startup nomount pfile=NEW_HOME/dbs/init<SID>.ora
SQL>@location_of_trace_file_generated/tracefilename.trc
Add temporary tablespace and make it default temporary tablespace for the database.
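That last step could look something like this (a sketch with placeholder path and size; if the temporary tablespace already exists in the dictionary, an ALTER TABLESPACE ... ADD TEMPFILE is enough):

```sql
CREATE TEMPORARY TABLESPACE temp
  TEMPFILE '/u02/oradata/SID/temp01.dbf' SIZE 512M AUTOEXTEND ON;
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;
```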
Your database will be up and running.
Regards.
Edited by: orant575 on Jul 10, 2009 4:37 PM -
Redo log in case of NOARCHIVELOG Mode.
================================================
This post is now available at .. Redo log files in case of NOARCHIVELOG Mode.
================================================
The question is related to the Oracle architecture.
The database requires a minimum of two redo log files to guarantee that one is always available for writing while the other is being archived. This makes perfect sense when the DB is running in ARCHIVELOG mode, but why does it also force the database to have two redo log files even when the DB is running in NOARCHIVELOG mode?
Any particular reason..
I am looking for the reasons, not answers about what the redo log is and what information it holds.
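For anyone following along, the rotation is easy to watch; a minimal sketch (the reasoning in the comment is the usual explanation, not something specific to this thread):

```sql
-- LGWR writes to the CURRENT group. Even in NOARCHIVELOG mode a
-- group cannot be overwritten until its changes have been
-- checkpointed to the datafiles, so a second group is required
-- for LGWR to keep writing in the meantime.
SELECT group#, sequence#, status, archived FROM v$log;
```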
Edited by: pgoel on Mar 12, 2011 4:04 PM

======================================
SORRY, WRONG FORUM, moving it to the correct forum
======================================

Edited by: pgoel on Mar 12, 2011 4:01 PM -
Online redo logs on a physical standby?
A question on REDO logs on physical standby databases. (10.2.0.4 db on Windows 32bit)
My PRIMARY has 3 ONLINE REDO groups, 2 members each, in ..ORADATA\LOCP10G
My PHYSICAL STANDBY has 4 STANDBY REDO groups, 2 members each, in ..ORADATA\SBY10G
I have shipping occurring from the primary in LGWR, ASYNC mode - max availability
However I notice the STANDBY also has ONLINE REDO logs, same as the PRIMARY, in the ..ORADATA\SBY10G folder
According to the 10g Dataguard docs, section 2.5.1:
"Physical standby databases do not use an online redo log, because physical standby databases are not opened for read/write I/O."
I have tried to drop these on the STANDBY when not in apply mode, but I get the following:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
Database altered.
SQL> ALTER DATABASE DROP LOGFILE GROUP 3;
ALTER DATABASE DROP LOGFILE GROUP 3
ERROR at line 1:
ORA-01275: Operation DROP LOGFILE is not allowed if standby file management is
automatic.
I also deleted them while the STANDBY instance was idle, but it recreated them when moved to MOUNT mode.
So my question is: why is my PHYSICAL standby recreating and using these, if the docs say they shouldn't?
I saw the same error mentioned here: prob. with DataGuard
Is this a case of the STANDBY needing at least a notion of where the REDO logs will need to be should a failover occur, and if the files are already there, the standby database CONTROLFILE will hold onto them, as they are not doing any harm anyway?
Or, is this a product of having management=AUTOMATIC - i.e. the database will create these 'automatically'?
Ta
bt

According to the 10g Dataguard docs, section 2.5.1:
"Physical standby databases do not use an online redo log, because physical standby databases are not opened for read/write I/O."

Yes, those are used when the database is open.
You should not perform any changes on the standby. Even if those online redo log files exist, what difficulty have you seen?
They will be used whenever you perform a switchover/failover. So there is nothing to worry about here.
Is this a case of the STANDBY needing at least a notion of where the REDO logs will need to be should a failover occur, and if the files are already there, the standby database CONTROLFILE will hold onto them, as they are not doing any harm anyway?

If you think of it that way, it would be Oracle functionality itself doing the harm. When they are not used (the standby is not open), what is the harm in that?
standby_file_management: for example, if you add a datafile, that information will be in the archives/redos; once they are applied on the standby, the file will be added automatically when the parameter is set to AUTO. If it is MANUAL, an unnamed file is created in the $ORACLE_HOME/dbs location; later you have to rename that file and perform recovery.
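If you really did want to drop the standby's online log groups despite the advice above, the ORA-01275 suggests toggling the parameter first; a hedged sketch (group number as in the error earlier in the thread):

```sql
ALTER SYSTEM SET standby_file_management = 'MANUAL';
ALTER DATABASE DROP LOGFILE GROUP 3;
-- set it back so datafile additions keep propagating automatically
ALTER SYSTEM SET standby_file_management = 'AUTO';
```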
check this http://docs.oracle.com/cd/B14117_01/server.101/b10755/initparams206.htm
HTH. -
Multiplexing Online redo logs, archive logs, and control files.
Currently I am only multiplexing my control files and online redo logs, My archive logs are only going to the FRA and then being backed up to tape.
We have to replace disks that hold the FRA data. HP says there is a chance we will have to rebuild the FRA.
As my archive logs are going to the FRA now, can I multiplex them to another disk group? And if all of the control files, online redo logs and archive logs are multiplexed to another disk group, when ASM dismounts the FRA disk group due to insufficient number of disks, will the database remain open and on line.
If so then I will just need to rebuild the ASM volumes, and the FRA disk group and bring it to the mount state, correct?
Thanks!

You can place your online redo logs and archive logs anywhere you want by making use of the init params db_create_online_log_dest_n and log_archive_dest_n. You will have to create new redo log groups in the new location and drop the ones in the FRA. The archive logs will simply land wherever you designate with the log_archive_dest_n parameters. Moving the control files off the FRA is a little trickier because you will need to restore your controlfile to a non-FRA destination, then shut down your instance, edit the control_files parameter to reflect the changes, and restart.
I think you will be happier if you move everything off the FRA diskgroup before dismounting it, and not expecting the db to automagically recover from the loss of files on the FRA. -
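The redo-log rotation described above might be sketched like this (disk group names are placeholders; repeat the switch/drop until every FRA-resident group is INACTIVE and dropped):

```sql
-- send future archive logs to a non-FRA destination
ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=+DATA2';

-- add replacement online log groups outside the FRA
ALTER DATABASE ADD LOGFILE GROUP 4 ('+DATA2') SIZE 512M;
ALTER DATABASE ADD LOGFILE GROUP 5 ('+DATA2') SIZE 512M;

-- cycle the logs, then drop the old FRA-resident groups
ALTER SYSTEM SWITCH LOGFILE;
ALTER DATABASE DROP LOGFILE GROUP 1;
```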
Redo log backup to disk failed
Hi,
My archive log backup to disk has failed:
BR0002I BRARCHIVE 7.00 (13)
BR0006I Start of offline redo log processing: aeamsbyx.svd 2009-05-04 15.15.07
BR0477I Oracle pfile E:\oracle\DV1\102\database\initDV1.ora created from spfile E:\oracle\DV1\102\database\spfileDV1.ora
BR0101I Parameters
Name Value
oracle_sid DV1
oracle_home E:\oracle\DV1\102
oracle_profile E:\oracle\DV1\102\database\initDV1.ora
sapdata_home E:\oracle\DV1
sap_profile E:\oracle\DV1\102\database\initDV1.sap
backup_dev_type disk
archive_copy_dir W:\oracle\DV1\sapbackup
compress no
disk_copy_cmd copy
cpio_disk_flags -pdcu
archive_dupl_del only
system_info SAPServiceDV1 SAP2DQSRV Windows 5.2 Build 3790 Service Pack 1 Intel
oracle_info DV1 10.2.0.2.0 8192 21092 71120290
sap_info 46C SAPR3 DV1 W1372789206 R3_ORA 0020109603
make_info NTintel OCI_10103_SHARE Apr 5 2006
command_line brarchive -u / -c force -p initDV1.sap -sd
BR0013W No offline redo log files found for processing
BR0007I End of offline redo log processing: aeamsbyx.svd 2009-05-04 15.15.11
BR0280I BRARCHIVE time stamp: 2009-05-04 15.15.11
BR0004I BRARCHIVE completed successfully with warnings
I have checked the target directory; nothing is backed up. I have gone through a few SAP notes (10170, 17163, 132551, 490976 and 646681) but nothing helped.
Another question: in DB13 Calendar --> Schedule an action pattern, I can back up at most one month of redo logs. But I have three months of redo log files there. How can I back up those files?
Our environment is SAP R/3 4.6C, windows 2003 and Oracle 10.2.0.2.0
Please some one help me on this.
Thanks and Regards
Satya

Update your BRTools. They are very old.
Check that your DB is in archive log mode. If not, enable it.
Testing the backup:
- run an online backup
- run "sqlplus / as sysdba"
- SQL> alter system switch logfile; ... this switches the current online log... a new log will be written to oraarch.
- run an archive log backup
... now you should have a complete DB backup with at least 1 archive log.
Now you can delete old redologs from oraarch.
If this doesn't work and your database is in archive-log-mode:
- shutdown sap and oracle
- MOVE all redologs from oraarch to another location manually... no files should remain in oraarch
- run a offline backup
If the offline backup ran successfully, you can delete the previously moved redologs. The backup is consistent and those redologs are no longer required.
- start oracle and sap
Oracle should now write new redologs to oraarch. Test the online backup!
Edited by: Thomas Rudolph on May 6, 2009 10:16 PM
Edited by: Thomas Rudolph on May 6, 2009 10:17 PM -
Hi,
I need to move the online redo logs to the folder where the data files reside. It will be easier for cold backup in NOARCHIVELOG mode. At the same time I would like the flash recovery area unattached.
The Beta version was perfect for that purpose. I already moved SPFILE.
Can somebody explain how to move only those two files to C:\oraclexe\oradata\XE?
Konstantin

Relocating and Renaming Redo Log Members
Thank you Forbrich and others from the forum! I have solved the problem thanks to your help.
There are some slight differences between the Oracle® Database Administrator's Guide 10g Release 2 (10.2), Part Number B14231-01, and APEX for Win XP.
1. SQL Statements
SQL*Plus: Release 10.2.0.1.0 - Production on Thu May 25 19:37:04 2006
Copyright (c) 1982, 2005, Oracle. All rights reserved.
SQL> connect / AS SYSDBA
Connected.
SQL> SHUTDOWN
Database closed.
Database dismounted.
ORACLE instance shut down.
/* Then using OS (Windows Explorer) copy two files to the new folder. */
SQL> STARTUP MOUNT XE
ORACLE instance started.
Total System Global Area 314572800 bytes
Fixed Size 1287184 bytes
Variable Size 243272688 bytes
Database Buffers 67108864 bytes
Redo Buffers 2904064 bytes
Database mounted.
SQL> ALTER DATABASE RENAME FILE 'C:\ORACLEXE\APP\ORACLE\FLASH_RECOVERY_AREA\XE\ONLINELOG\O1_MF_2_25T0KHLK_.LOG','C:\ORACLEXE\APP\ORACLE\FLASH_RECOVERY_AREA\XE\ONLINELOG\O1_MF_1_25T0KF00_.LOG' TO 'C:\ORACLEXE\ORADATA\XE\O1_MF_2_25T0KHLK_.LOG','C:\ORACLEXE\ORADATA\XE\O1_MF_1_25T0KF00_.LOG';
Database altered.
SQL> ALTER DATABASE OPEN;
Database altered.
SQL> EXIT
2. APEX for Win XP automatically erases redo log files.
Everything works OK
Konstantin
Message was edited by:
konstantin.gudjev -
SQL> alter database add logfile
2 group 4 ( 'C:\oracle\oradata\orcl\logfilebackup\redo01.log') size 10m
3 /
Database altered.
SQL> shutdown
Database closed.
Database dismounted.
ORACLE instance shut down.

From the DOS prompt I delete redo01.log:
C:\del C:\oracle\oradata\orcl\redo01.log
C:\>
SQL> startup
ORACLE instance started.
Total System Global Area 171966464 bytes
Fixed Size 787988 bytes
Variable Size 145488364 bytes
Database Buffers 25165824 bytes
Redo Buffers 524288 bytes
Database mounted.
ORA-00313: open failed for members of log group 1 of thread 1
ORA-00312: online log 1 thread 1: 'C:\ORACLE\ORADATA\ORCL\REDO01.LOG'

redo01.log is deleted, and I multiplexed the redo log in my first step, so I can recover it by copying from there:
SQL> shutdown
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
C:\copy c:\oracle\oradata\orcl\controlfilebackup\redo01.log c:\oracle\oradata\
1 file(s) copied.
SQL> startup
ORACLE instance started.
Total System Global Area 171966464 bytes
Fixed Size 787988 bytes
Variable Size 145488364 bytes
Database Buffers 25165824 bytes
Redo Buffers 524288 bytes
Database mounted.
ORA-00341: log 1 of thread 1, wrong log # in header
ORA-00312: online log 1 thread 1: 'C:\ORACLE\ORADATA\ORCL\REDO01.LOG'

Why "wrong log #"? Then why do we multiplex the log file at all? There is no advantage to multiplexed log files if moving one back from its multiplexed location to its original location does not work. I will read it later, Paul, but the database is not coming up. I tested the experiment again on a freshly created database, but no use:
SQL> alter database add logfile member 'C:\oracle\oradata\orcl1\orcl1\logfilebackup\redo01.log' TO GROUP 1;
Database altered.
SQL> shutdown
Database closed.
Database dismounted.
ORACLE instance shut down.
deleted the log file redo01.log from "C:\oracle\oradata\orcl1\orcl1"
SQL> startup
ORACLE instance started.
Total System Global Area 171966464 bytes
Fixed Size 787988 bytes
Variable Size 145488364 bytes
Database Buffers 25165824 bytes
Redo Buffers 524288 bytes
Database mounted.
ORA-00313: open failed for members of log group 1 of thread 1
ORA-00312: online log 1 thread 1: 'C:\ORACLE\ORADATA\ORCL1\ORCL1\REDO01.LOG'
ORA-00312: online log 1 thread 1:
'C:\ORACLE\ORADATA\ORCL1\ORCL1\LOGFILEBACKUP\REDO01.LOG'
SQL> shutdown
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
copy the multiplexed log file redo01.log from "C:\oracle\oradata\orcl1\orcl1\logfilebackup"
to "C:\oracle\oradata\orcl1\orcl1"
SQL> startup
ORACLE instance started.
Total System Global Area 171966464 bytes
Fixed Size 787988 bytes
Variable Size 145488364 bytes
Database Buffers 25165824 bytes
Redo Buffers 524288 bytes
Database mounted.
ORA-00322: log 1 of thread 1 is not current copy
ORA-00312: online log 1 thread 1: 'C:\ORACLE\ORADATA\ORCL1\ORCL1\REDO01.LOG'
ORA-00312: online log 1 thread 1:
'C:\ORACLE\ORADATA\ORCL1\ORCL1\LOGFILEBACKUP\REDO01.LOG'

What did I do wrong?
Re: ORACLE/LINUX HAS SERIOUS BUG! | no O_SYNC on redo logs
To StE.
Michael,
how did you trace the LGWR process?
I tried ">gdb oracle PID" and attached to the LGWR process.
But how did you obtain the trace file as a result?
Regards,
Mark
Has anyone reported this to Oracle? I haven't heard about this
bug, but it may be in the pipeline for the next patch.
Mark Malakanov (guest) wrote:
: StE,
: you are right.
: Oracle MUST write into current redo log file after every
: commit.
: A confirmation "Commited" MUST be appeared after a write
: operation finished and sync'ed.
: Unfortunately this rule does not supported in Oracle8.0.5.1 for
: Linux.
: I made strace against LGWR process. It really open log file
: without O_SYNC key.
: But I tried to set a Sync attribute with chattr command against
: logfiles. Also I tried to do mount with sync option against
: whole filesystem. This ways brings system to open files as
: O_SYNC key used.
: All of the above doesn't help. Oracle loses the last committed
: transactions. Only very bad performance.
: Also I make the simple test in C. Open file without O_SYNC,
: Write 5Mb, fsync and message. The message appears only after
: fsync operation is finished. I understood - Linux is not the
: cause of error.
: I found no fsync calls in strace's log file.
: I think that is why transactions are lost.
: Mark
: StE (guest) wrote:
: : Jay Walters (guest) wrote:
: : : It is expected behavior to not write to the datafiles until
: a
: : : checkpoint is performed. Did you see this problem for
: : : logfiles? Or just on data files? I am looking into moving
: to
: : : ORACLE on Linux but need data integrity...
: : : Can anybody confirm if this is really a problem as it seems
: : : like a show stopper.
: : Using strace I've observed the RDBMS opening the online redo
: : logs without O_SYNC and, on a commit, checkpoint, or log
: change,
: : I cannot see fsync being called on the file handle.
: : Maybe this is a bug in strace, or a blip of stupidity on my
: : part, but someone testing has managed to lose a committed
: : transaction after deliberately powering off their machine.
: That
: : shouldn't happen.
: : -michael
Improving redo log writer performance
I have a database on RAC (2 nodes)
Oracle 10g
Linux 3
2 servers PowerEdge 2850
I'm tuning my database with Spotlight. I already have this alert:
"The Average Redo Write Time alarm is activated when the time taken to write redo log entries exceeds a threshold."
The servers are not in RAID5.
How can I improve redo log writer performance?
Unlike most other Oracle write I/Os, Oracle sessions must wait for redo log writes to complete before they can continue processing.
Therefore, redo log devices should be placed on fast devices.
Most modern disks should be able to process a redo log write in less than 20 milliseconds, and often much lower.
To reduce redo write time see Improving redo log writer performance.
See Also:
Tuning Contention - Redo Log Files
Tuning Disk I/O - Archive Writer

Some comments on the section that was pulled from Wikipedia. There is some confusion in the market, as there are different types of solid state disks with different pros and cons. The first major point is that the quote pulled from Wikipedia addresses issues with flash hard disk drives. Flash disks are one type of solid state disk that would be a bad solution for redo acceleration (as I will attempt to describe below), though they could be useful for accelerating read-intensive applications. The type of solid state disk used for redo logs uses DDR RAM as the storage medium. You may decide to discount my advice because I work for one of these SSD manufacturers, but I think if you do enough research you will see the point. There are many articles, and many more customers, who have used SSD to accelerate Oracle.
> Assuming that you are not CPU constrained, moving the online redo to high-speed solid-state disk can make a huge difference.

Do you honestly think this is practical and usable advice, Don? There is a HUGE price difference between SSD and normal hard disks. Never mind the following disadvantages. Quoting (http://en.wikipedia.org/wiki/Solid_state_disk):

# Price - As of early 2007, flash memory prices are still considerably higher per gigabyte than those of comparable conventional hard drives - around $10 per GB compared to about $0.25 for mechanical drives.

Comment: Prices for DDR RAM based systems are actually higher than this, with a typical list price around $1000 per GB. Your concern, however, is not price per capacity but price for performance. How many spindles will you have to spread your redo log across to get the performance that you need? How much impact are the redo logs having on your RAID cache effectiveness? Our system is obviously geared to the enterprise, where Oracle is supporting mission-critical databases and a huge return can be made on accelerating Oracle.
# Capacity - The capacity of SSDs tends to be significantly smaller than the capacity of HDDs.

Comment: This statement is true. Per hard disk drive versus per individual solid state disk system, you can typically get higher storage density with a hard disk drive. However, if your goal is redo log acceleration, storage capacity is not your bottleneck. Write performance, however, can be. Keep in mind that, just as with any storage media, you can deploy an array of solid state disks that provide terabytes of capacity (with either DDR or flash).
# Lower recoverability - After mechanical failure the data is completely lost as the cell is destroyed, while if a normal HDD suffers mechanical failure the data is often recoverable using expert help.

Comment: If you lose a hard drive holding your redo log, the last thing you are likely to do is have a disk restoration company partially restore your data. You ought to be getting data from your mirror or RAID to rebuild the failed disk. Similarly, with solid state disks (flash or DDR) we recommend host-based mirroring to provide enterprise levels of reliability. In our experience, a DDR based solid state disk has a failure rate equal to the odds of losing two hard disk drives in a RAID set.
# Vulnerability against certain types of effects, including abrupt power loss (especially DRAM based SSDs), magnetic fields and electric/static charges compared to normal HDDs (which store the data inside a Faraday cage).

Comment: This statement is all FUD. For example, our DDR RAM based systems have redundant power supplies, N+1 redundant batteries, and four RAID protected "hard disk drives" for data backup. The memory is ECC protected and Chipkill protected.
# Slower than conventional disks on sequential I/O

Comment: Most flash drives will be slower on sequential I/O than a hard disk drive (to really understand this you should know there are different kinds of flash memory, which also impact flash performance). DDR RAM based systems, however, offer enormous performance benefits versus hard disk or flash based systems for sequential or random writes. DDR RAM systems can handle over 400,000 random write I/Os per second (the number is slightly higher for sequential access). We would be happy to share some Oracle ORION benchmark data to make the point. For redo logs on a heavily transactional system, the latency of the redo log storage can be the ultimate limit on the database.
# Limited write cycles. Typical flash storage will typically wear out after 100,000-300,000 write cycles, while high endurance flash storage is often marketed with endurance of 1-5 million write cycles (many log files, file allocation tables, and other commonly used parts of the file system exceed this over the lifetime of a computer). Special file systems or firmware designs can mitigate this problem by spreading writes over the entire device, rather than rewriting files in place.

Comment: This statement is mostly accurate but refers only to flash drives. DDR RAM based systems, such as those Don's books refer to, do not have this limitation.
> Looking at many of your postings to Oracle Forums thus far Don, it seems to me that you are less interested in providing actual practical help, and more interested in self-promotion - of your company and the Oracle books produced by it.
> .. and that is not a very nice approach when people post real problems wanting real world practical advice and suggestions.

Comment: Contact us and we will see if we can prove to you that Don, and any number of other reputable Oracle consultants, recommend using DDR based solid state disk to solve redo log performance issues. In fact, if it looks like your system could see a serious performance increase, we would be happy to put you on our evaluation program to try it out, so that you can do it at no cost from us. -
Recursive call with commit not written to redo log
In my DBA training I was led to the belief that a commit caused the log writer to write the redo log buffer to the redo log file, but I find this is not true if the commit is inside recursive code.
I believe this is intentional, as a way of reducing I/O, but it does raise data integrity problems.
Apparently if you have some PL/SQL (can be sql*plus code or procedure) with a loop containing a commit,
the commit does not actually ensure that the transaction is written to the Redo log.
Instead Oracle only ensures all is written to the redo log when control is returned to the application/sqlplus.
You can see this by checking the redo writes in v$sysstat.
It will be less than the number of expected commits.
Thus the old rule of expecting all committed transactions to be there after a database recovery is not necessarily true.
Does anyone know how to force the commit to ensure redo is written inside PL/SQL, or perhaps a setting in the calling environment?
Thanks

Thanks for your input.
The trouble is that I believe if you stopped in a debugger the log writer would catch up -
Or if you killed your instance in the middle of this test you wouldn't be sure how many commits the db thought it did
ie. the db would recover to the last known commit in the redo logs
- maybe I should turn on tracing ?
Since my question I have a found a site that seems to back up the results I am getting
http://www.ixora.com.au/notes/redo_write_triggers.htm
see the note under point 3
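For what it's worth, from 10g Release 2 onwards (not the 9.2 used for the test below) the durability of an individual commit can be requested explicitly, which should force a redo write per iteration; a hedged sketch against a hypothetical table t:

```sql
BEGIN
  FOR i IN 1 .. 100000 LOOP
    INSERT INTO t VALUES (i);
    -- ask LGWR to flush immediately and wait for the write,
    -- even inside PL/SQL
    COMMIT WRITE IMMEDIATE WAIT;
  END LOOP;
END;
/
```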
Have a look at the stats below and you will see
redo writes 19026
user commits 100057
How I actually tested was to run the utlbstat script, then run some PL/SQL that
- mimicked a business transaction (4 select lookup validations, 2 inserts, and 1 insert-or-update, plus a commit)
- looped 100000 times
then ran utlestat.sql
i.e. test script
@C:\oracle\ora92\rdbms\admin\utlbstat.sql
connect test/test
@c:\mig\Load_test.sql
@C:\oracle\ora92\rdbms\admin\utlestat.sql
Statistic Total Per Transact Per Logon Per Second
CPU used by this session 37433 .37 935.83 79.31
CPU used when call started 37434 .37 935.85 79.31
CR blocks created 62 0 1.55 .13
DBWR checkpoint buffers wri 37992 .38 949.8 80.49
DBWR checkpoints 6 0 .15 .01
DBWR transaction table writ 470 0 11.75 1
DBWR undo block writes 22627 .23 565.68 47.94
SQL*Net roundtrips to/from 4875 .05 121.88 10.33
background checkpoints comp 5 0 .13 .01
background checkpoints star 6 0 .15 .01
background timeouts 547 .01 13.68 1.16
branch node splits 4 0 .1 .01
buffer is not pinned count 4217 .04 105.43 8.93
buffer is pinned count 649 .01 16.23 1.38
bytes received via SQL*Net 1027466 10.27 25686.65 2176.83
bytes sent via SQL*Net to c 5237709 52.35 130942.73 11096.84
calls to get snapshot scn: 1514482 15.14 37862.05 3208.65
calls to kcmgas 303700 3.04 7592.5 643.43
calls to kcmgcs 215 0 5.38 .46
change write time 4419 .04 110.48 9.36
cleanout - number of ktugct 1875 .02 46.88 3.97
cluster key scan block gets 101 0 2.53 .21
cluster key scans 49 0 1.23 .1
commit cleanout failures: b 27 0 .68 .06
commit cleanouts 1305175 13.04 32629.38 2765.2
commit cleanouts successful 1305148 13.04 32628.7 2765.14
commit txn count during cle 3718 .04 92.95 7.88
consistent changes 752 .01 18.8 1.59
consistent gets 1514852 15.14 37871.3 3209.43
consistent gets - examinati 1005941 10.05 25148.53 2131.23
data blocks consistent read 752 .01 18.8 1.59
db block changes 3465329 34.63 86633.23 7341.8
db block gets 3589136 35.87 89728.4 7604.1
deferred (CURRENT) block cl 1068723 10.68 26718.08 2264.24
enqueue releases 805858 8.05 20146.45 1707.33
enqueue requests 805852 8.05 20146.3 1707.31
execute count 1004701 10.04 25117.53 2128.6
free buffer requested 36371 .36 909.28 77.06
hot buffers moved to head o 3801 .04 95.03 8.05
immediate (CURRENT) block c 3894 .04 97.35 8.25
index fast full scans (full 448 0 11.2 .95
index fetch by key 201128 2.01 5028.2 426.12
index scans kdiixs1 501268 5.01 12531.7 1062.01
leaf node splits 1750 .02 43.75 3.71
logons cumulative 2 0 .05 0
messages received 19465 .19 486.63 41.24
messages sent 19465 .19 486.63 41.24
no work - consistent read g 3420 .03 85.5 7.25
opened cursors cumulative 201103 2.01 5027.58 426.07
opened cursors current -3 0 -.08 -.01
parse count (hard) 4 0 .1 .01
parse count (total) 201103 2.01 5027.58 426.07
parse time cpu 2069 .02 51.73 4.38
parse time elapsed 2260 .02 56.5 4.79
physical reads 6600 .07 165 13.98
physical reads direct 75 0 1.88 .16
physical writes 38067 .38 951.68 80.65
physical writes direct 75 0 1.88 .16
physical writes non checkpo 34966 .35 874.15 74.08
prefetched blocks 2 0 .05 0
process last non-idle time 1029203858 10286.18 25730096.45 2180516.65
recursive calls 3703781 37.02 92594.53 7846.99
recursive cpu usage 35210 .35 880.25 74.6
redo blocks written 1112273 11.12 27806.83 2356.51
redo buffer allocation retr 21 0 .53 .04
redo entries 1843462 18.42 46086.55 3905.64
redo log space requests 17 0 .43 .04
redo log space wait time 313 0 7.83 .66
redo size 546896692 5465.85 13672417.3 1158679.43
redo synch time 677 .01 16.93 1.43
redo synch writes 63 0 1.58 .13
redo wastage 4630680 46.28 115767 9810.76
redo write time 64354 .64 1608.85 136.34
redo writer latching time 42 0 1.05 .09
redo writes 19026 .19 475.65 40.31
rollback changes - undo rec 10 0 .25 .02
rollbacks only - consistent 122 0 3.05 .26
rows fetched via callback 1040 .01 26 2.2
session connect time 1029203858 10286.18 25730096.45 2180516.65
session logical reads 5103988 51.01 127599.7 10813.53
session pga memory -263960 -2.64 -6599 -559.24
session pga memory max -788248 -7.88 -19706.2 -1670.02
session uga memory -107904 -1.08 -2697.6 -228.61
session uga memory max 153920 1.54 3848 326.1
shared hash latch upgrades 501328 5.01 12533.2 1062.14
sorts (memory) 1467 .01 36.68 3.11
sorts (rows) 38796 .39 969.9 82.19
switch current to new buffe 347 0 8.68 .74
table fetch by rowid 1738 .02 43.45 3.68
table scan blocks gotten 424 0 10.6 .9
table scan rows gotten 4164 .04 104.1 8.82
table scans (short tables) 451 0 11.28 .96
transaction rollbacks 5 0 .13 .01
user calls 5912 .06 147.8 12.53
user commits 100057 1 2501.43 211.99
user rollbacks 56 0 1.4 .12
workarea executions - optim 1676 .02 41.9 3.55
write clones created in bac 5 0 .13 .01
write clones created in for 745 .01 18.63 1.58
99 rows selected.
Configure archiver to read from both redo log members
Hi,
We have moved our data warehouse to a new SAN, and the LUNs holding the redo logs are being hit heavily. Is it possible on AIX to configure the archiver so that it reads from both members of a group? Current iostat data shows reads of the redo logs happening only against the first member of each group, and none against the second member.
Thanks,
Tom Cullen
See a recent discussion on this issue:
Redo log Doubt
Hemant K Chitale
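Before changing anything, it may help to verify the read pattern from the OS side over an archiving window. Below is a minimal sketch that sums Kb_read per disk from AIX `iostat -d` output; the column layout (`Disks: % tm_act Kbps tps Kb_read Kb_wrtn`) and the hdisk names are assumptions, so adjust them to your system:

```python
# Aggregate Kb_read per hdisk from AIX `iostat -d` output, to check
# whether redo log reads hit only the LUN holding the first log member.

def kb_read_per_disk(iostat_text):
    """Return {disk_name: total_kb_read} parsed from `iostat -d` output."""
    totals = {}
    for line in iostat_text.splitlines():
        fields = line.split()
        # Data lines have 6 columns and start with a device name like hdisk4.
        if len(fields) == 6 and fields[0].startswith("hdisk"):
            totals[fields[0]] = totals.get(fields[0], 0) + int(fields[4])
    return totals

# Hypothetical sample: hdisk4 holds the first redo members, hdisk5 the second.
sample = """Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk4           12.0    850.0     95.0     512000      4096
hdisk5            0.3     40.0      5.0          0      4096
"""
print(kb_read_per_disk(sample))  # {'hdisk4': 512000, 'hdisk5': 0}
```

If the second member's LUN shows essentially no reads, that matches the usual behavior: as far as I know, the archiver reads from a single valid member of each group rather than distributing reads across members.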
How do I find the actual amount of data a redo log contains? Each of my redo log groups is 1 GB. We have moved the redo logs to a newer disk, so I want to know how much data a redo log actually carries, and how we can analyze/tune the redo logs to improve performance.
Thanks
Hi,
>>How to find the actual data redolog contains?
I think you can look at the ACTUAL_REDO_BLKS column of the V$INSTANCE_RECOVERY dynamic performance view to see the current number of redo blocks that would need to be read in case of recovery.
>>how much data it carries and how we can analyze/tune redolog to increase performance.
Oracle 10g introduced a redo logfile sizing advisor that recommends a redo log size that limits excessive log switches, incomplete and excessive checkpoints, log archiving issues, DBWR performance problems, and excessive disk I/O. You can get this redo sizing advice on the redo log groups page of Oracle Enterprise Manager Database Control.
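As a back-of-envelope check alongside the advisor, a common rule of thumb is to size the logs so that a switch occurs roughly every 20 minutes at the observed redo rate. A small sketch of that arithmetic (the 280 KB/s rate is an assumed example, and the 512-byte redo block size holds on most platforms but is worth confirming for yours):

```python
# Back-of-envelope redo log sizing and conversion helpers.

REDO_BLOCK_SIZE = 512  # bytes per redo block on most platforms (assumption)

def redo_blocks_to_mb(actual_redo_blks):
    """Convert V$INSTANCE_RECOVERY.ACTUAL_REDO_BLKS into megabytes."""
    return actual_redo_blks * REDO_BLOCK_SIZE / (1024 * 1024)

def suggested_log_size_mb(redo_bytes_per_sec, switch_interval_sec=1200):
    """Log size (MB) so that one log fills in roughly switch_interval_sec."""
    return redo_bytes_per_sec * switch_interval_sec / (1024 * 1024)

# Example: ~280 KB/s of redo with a 20-minute target switch interval.
print(round(suggested_log_size_mb(280_000)))  # 320 (MB)
```

You can read your own redo rate off the "redo size" statistic (per second) in statspack/AWR, or from V$SYSSTAT deltas.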
Cheers
Too many redo log files...
Hi,
I have a very light application on Oracle 9.2.0.7 on 32-bit Linux that is generating 400 logfiles a day, and I can't find out why all that redo is being generated!
The only notable object in that application is a big audit table used only for INSERTs (about 1,000 per hour). This table was created with the NOLOGGING option.
Redo logs: 4 groups of 40 MB each.
The INSERT statement uses a sequence to generate a unique key. Could this sequence be causing the heavy logfile generation?
Thanks,
Paulo.
Here is the statspack:
STATSPACK report for
DB Name DB Id Instance Inst Num Release Cluster Host
DB 378381468 DB 1 9.2.0.7.0 NO host
Snap Id Snap Time Sessions Curs/Sess Comment
Begin Snap: 12 28-Jun-07 11:05:11 26 1,198.7
End Snap: 13 28-Jun-07 12:05:24 29 1,077.2
Elapsed: 60.22 (mins)
Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 512M Std Block Size: 8K
Shared Pool Size: 512M Log Buffer: 5,120K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 281,252.38 2,073.48
Logical reads: 73,113.76 539.02
Block changes: 3,133.29 23.10
Physical reads: 3.24 0.02
Physical writes: 21.39 0.16
User calls: 26.12 0.19
Parses: 145.64 1.07
Hard parses: 0.81 0.01
Sorts: 138.33 1.02
Logons: 0.69 0.01
Executes: 443.27 3.27
Transactions: 135.64
% Blocks changed per Read: 4.29 Recursive Call %: 98.97
Rollback per transaction %: 0.13 Rows per Sort: 17.26
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.99 Redo NoWait %: 100.00
Buffer Hit %: 100.00 In-memory Sort %: 99.99
Library Hit %: 99.66 Soft Parse %: 99.44
Execute to Parse %: 67.14 Latch Hit %: 99.93
Parse CPU to Parse Elapsd %: 55.03 % Non-Parse CPU: 99.22
Shared Pool Statistics Begin End
Memory Usage %: 91.06 91.23
% SQL with executions>1: 44.54 39.78
% Memory for SQL w/exec>1: 43.09 33.89
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
CPU time 3,577 84.73
log file parallel write 854,726 359 8.51
row cache lock 56,780 104 2.47
process startup 172 91 2.16
SQL*Net message from dblink 5,001 22 .53
Wait Events for DB: DB Instance: DB Snaps: 12 -13
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
log file parallel write 854,726 0 359 0 1.7
row cache lock 56,780 0 104 2 0.1
process startup 172 4 91 530 0.0
SQL*Net message from dblink 5,001 0 22 4 0.0
log file sync 3,015 3 19 6 0.0
enqueue 471 1 9 20 0.0
buffer busy waits 20,290 0 8 0 0.0
db file sequential read 3,853 0 6 2 0.0
SQL*Net more data from dblin 88,584 0 5 0 0.2
control file parallel write 1,704 0 5 3 0.0
latch free 1,404 748 4 3 0.0
single-task message 134 0 4 27 0.0
LGWR wait for redo copy 8,230 1 2 0 0.0
log file switch completion 60 0 2 32 0.0
log file sequential read 1,333 0 2 1 0.0
control file sequential read 4,530 0 1 0 0.0
db file scattered read 246 0 0 1 0.0
SQL*Net more data to client 7,292 0 0 0 0.0
SQL*Net break/reset to clien 72 0 0 1 0.0
db file parallel write 4,568 0 0 0 0.0
log file single write 62 0 0 0 0.0
async disk IO 3,410 0 0 0 0.0
SQL*Net message to dblink 5,001 0 0 0 0.0
direct path read (lob) 84 0 0 0 0.0
direct path read 318 0 0 0 0.0
direct path write 312 0 0 0 0.0
buffer deadlock 115 115 0 0 0.0
SQL*Net message from client 86,475 0 27,758 321 0.2
jobq slave wait 4,594 4,532 13,455 2929 0.0
SQL*Net more data from clien 602 0 1 2 0.0
SQL*Net message to client 86,481 0 0 0 0.2
Background Wait Events for DB: DB Instance: DB Snaps: 12 -13
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
log file parallel write 854,744 0 359 0 1.7
control file parallel write 1,704 0 5 3 0.0
LGWR wait for redo copy 8,230 1 2 0 0.0
log file sequential read 1,333 0 2 1 0.0
control file sequential read 1,849 0 1 1 0.0
db file parallel write 4,567 0 0 0 0.0
latch free 74 0 0 0 0.0
rdbms ipc reply 65 0 0 0 0.0
log file single write 62 0 0 0 0.0
async disk IO 3,410 0 0 0 0.0
db file sequential read 1 0 0 8 0.0
buffer busy waits 5 0 0 0 0.0
direct path read 248 0 0 0 0.0
direct path write 248 0 0 0 0.0
rdbms ipc message 868,357 6,776 30,095 35 1.8
pmon timer 1,204 1,204 3,529 2931 0.0
smon timer 154 0 3,514 22816 0.0
Instance Activity Stats for DB: DB Instance: DB Snaps: 12 -13
Statistic Total per Second per Trans
active txn count during cleanout 2,844 0.8 0.0
background checkpoints completed 31 0.0 0.0
background checkpoints started 31 0.0 0.0
background timeouts 7,956 2.2 0.0
branch node splits 15 0.0 0.0
buffer is not pinned count 324,721,116 89,875.8 662.6
buffer is pinned count 308,901,876 85,497.3 630.3
bytes received via SQL*Net from c 8,048,130 2,227.6 16.4
bytes received via SQL*Net from d 181,575,342 50,256.1 370.5
bytes sent via SQL*Net to client 33,964,494 9,400.6 69.3
bytes sent via SQL*Net to dblink 933,170 258.3 1.9
calls to get snapshot scn: kcmgss 9,900,434 2,740.2 20.2
calls to kcmgas 985,222 272.7 2.0
calls to kcmgcs 11,669 3.2 0.0
change write time 9,910 2.7 0.0
cleanout - number of ktugct calls 18,903 5.2 0.0
cleanouts and rollbacks - consist 33 0.0 0.0
cleanouts only - consistent read 932 0.3 0.0
cluster key scan block gets 289,955 80.3 0.6
cluster key scans 101,840 28.2 0.2
commit cleanout failures: block l 0 0.0 0.0
commit cleanout failures: buffer 113 0.0 0.0
commit cleanout failures: callbac 96 0.0 0.0
commit cleanout failures: cannot 3,095 0.9 0.0
commit cleanouts 1,966,376 544.3 4.0
commit cleanouts successfully com 1,963,072 543.3 4.0
commit txn count during cleanout 309,283 85.6 0.6
consistent changes 5,245,452 1,451.8 10.7
consistent gets 242,967,989 67,248.3 495.8
consistent gets - examination 135,768,580 37,577.8 277.0
CPU used by this session 357,659 99.0 0.7
CPU used when call started 344,951 95.5 0.7
CR blocks created 768 0.2 0.0
current blocks converted for CR 0 0.0 0.0
cursor authentications 886 0.3 0.0
data blocks consistent reads - un 1,760 0.5 0.0
db block changes 11,320,580 3,133.3 23.1
db block gets 21,192,200 5,865.5 43.2
DBWR buffers scanned 0 0.0 0.0
DBWR checkpoint buffers written 69,649 19.3 0.1
DBWR checkpoints 31 0.0 0.0
DBWR free buffers found 0 0.0 0.0
DBWR lru scans 0 0.0 0.0
DBWR make free requests 0 0.0 0.0
DBWR revisited being-written buff 0 0.0 0.0
DBWR summed scan depth 0 0.0 0.0
DBWR transaction table writes 2,070 0.6 0.0
DBWR undo block writes 44,323 12.3 0.1
deferred (CURRENT) block cleanout 745,333 206.3 1.5
dirty buffers inspected 1 0.0 0.0
enqueue conversions 8,193 2.3 0.0
enqueue deadlocks 1 0.0 0.0
enqueue releases 2,002,960 554.4 4.1
enqueue requests 2,002,963 554.4 4.1
enqueue timeouts 3 0.0 0.0
enqueue waits 451 0.1 0.0
Instance Activity Stats for DB: DB Instance: DB Snaps: 12 -13
Statistic Total per Second per Trans
exchange deadlocks 115 0.0 0.0
execute count 1,601,528 443.3 3.3
free buffer inspected 30 0.0 0.0
free buffer requested 1,196,628 331.2 2.4
hot buffers moved to head of LRU 26,707 7.4 0.1
immediate (CR) block cleanout app 965 0.3 0.0
immediate (CURRENT) block cleanou 10,817 3.0 0.0
index fast full scans (full) 0 0.0 0.0
index fetch by key 131,028,270 36,265.8 267.4
index scans kdiixs1 17,868,907 4,945.7 36.5
leaf node splits 4,528 1.3 0.0
leaf node 90-10 splits 3,017 0.8 0.0
logons cumulative 2,499 0.7 0.0
messages received 859,631 237.9 1.8
messages sent 859,631 237.9 1.8
no buffer to keep pinned count 21,253 5.9 0.0
no work - consistent read gets 87,667,752 24,264.5 178.9
opened cursors cumulative 528,984 146.4 1.1
OS Involuntary context switches 0 0.0 0.0
OS Page faults 0 0.0 0.0
OS Page reclaims 0 0.0 0.0
OS System time used 0 0.0 0.0
OS User time used 0 0.0 0.0
OS Voluntary context switches 0 0.0 0.0
parse count (failures) 7 0.0 0.0
parse count (hard) 2,928 0.8 0.0
parse count (total) 526,209 145.6 1.1
parse time cpu 2,778 0.8 0.0
parse time elapsed 5,048 1.4 0.0
physical reads 11,690 3.2 0.0
physical reads direct 6,698 1.9 0.0
physical reads direct (lob) 102 0.0 0.0
physical writes 77,270 21.4 0.2
physical writes direct 7,620 2.1 0.0
physical writes direct (lob) 0 0.0 0.0
physical writes non checkpoint 33,360 9.2 0.1
pinned buffers inspected 0 0.0 0.0
prefetched blocks 799 0.2 0.0
prefetched blocks aged out before 0 0.0 0.0
process last non-idle time 3,630 1.0 0.0
recursive calls 9,053,277 2,505.8 18.5
recursive cpu usage 255,973 70.9 0.5
redo blocks written 2,572,625 712.1 5.3
redo buffer allocation retries 50 0.0 0.0
redo entries 3,074,994 851.1 6.3
redo log space requests 60 0.0 0.0
redo log space wait time 193 0.1 0.0
redo ordering marks 0 0.0 0.0
redo size 1,016,164,852 281,252.4 2,073.5
redo synch time 1,956 0.5 0.0
redo synch writes 5,317 1.5 0.0
redo wastage 259,689,040 71,876.3 529.9
redo write time 37,488 10.4 0.1
redo writer latching time 242 0.1 0.0
redo writes 854,744 236.6 1.7
rollback changes - undo records a 1,098 0.3 0.0
Instance Activity Stats for DB: DB Instance: DB Snaps: 12 -13
Statistic Total per Second per Trans
rollbacks only - consistent read 747 0.2 0.0
rows fetched via callback 117,908,375 32,634.5 240.6
session connect time 0 0.0 0.0
session cursor cache count 16 0.0 0.0
session cursor cache hits 484,372 134.1 1.0
session logical reads 264,160,020 73,113.8 539.0
session pga memory 16,473,320 4,559.5 33.6
session pga memory max 16,914,080 4,681.5 34.5
session uga memory 17,216,514,728 4,765,157.7 35,130.3
session uga memory max 1,865,036,296 516,201.6 3,805.6
shared hash latch upgrades - no w 17,251,803 4,774.9 35.2
shared hash latch upgrades - wait 24,671 6.8 0.1
sorts (disk) 32 0.0 0.0
sorts (memory) 499,747 138.3 1.0
sorts (rows) 8,626,333 2,387.6 17.6
SQL*Net roundtrips to/from client 80,069 22.2 0.2
SQL*Net roundtrips to/from dblink 5,001 1.4 0.0
summed dirty queue length 0 0.0 0.0
switch current to new buffer 1 0.0 0.0
table fetch by rowid 238,882,317 66,117.4 487.4
table fetch continued row 4,436,670 1,228.0 9.1
table scan blocks gotten 5,066,302 1,402.2 10.3
table scan rows gotten 134,679,712 37,276.4 274.8
table scans (direct read) 0 0.0 0.0
table scans (long tables) 447 0.1 0.0
table scans (short tables) 152,382 42.2 0.3
transaction rollbacks 530 0.2 0.0
transaction tables consistent rea 0 0.0 0.0
transaction tables consistent rea 0 0.0 0.0
user calls 94,382 26.1 0.2
user commits 489,423 135.5 1.0
user rollbacks 653 0.2 0.0
write clones created in backgroun 11 0.0 0.0
write clones created in foregroun 878 0.2 0.0
Tablespace IO Stats for DB: DB Instance: DB Snaps: 12 -13
->ordered by IOs (Reads + Writes) desc
Tablespace
Av Av Av Av Buffer Av Buf
Reads Reads/s Rd(ms) Blks/Rd Writes Writes/s Waits Wt(ms)
T1_UNDO
31 0 0.0 1.0 46,535 13 344 0.4
T1
31 0 0.0 1.0 13,754 4 3,657 0.4
T2
3,308 1 0.8 1.1 2,973 1 0 0.0
T3
31 0 0.0 1.0 5,710 2 16,240 0.4
T4
555 0 4.0 1.0 600 0 0 0.0
SYSTEM
429 0 3.9 2.5 280 0 49 0.2
TEMP
134 0 0.4 48.1 238 0 0 0.0
T1_16K
31 0 0.0 1.0 31 0 0 0.0
T2_16K
31 0 0.0 1.0 31 0 0 0.0
Buffer Pool Statistics for DB: DB Instance: DB Snaps: 12 -13
-> Standard block size Pools D: default, K: keep, R: recycle
-> Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k
Free Write Buffer
Number of Cache Buffer Physical Physical Buffer Complete Busy
P Buffers Hit % Gets Reads Writes Waits Waits Waits
D 49,625 100.0 263,975,320 4,909 69,666 0 0 20,290
16k 7,056 100.0 30 0 0 0 0 0
Instance Recovery Stats for DB: DB Instance: DB Snaps: 12 -13
-> B: Begin snapshot, E: End snapshot
Targt Estd Log File Log Ckpt Log Ckpt
MTTR MTTR Recovery Actual Target Size Timeout Interval
(s) (s) Estd IOs Redo Blks Redo Blks Redo Blks Redo Blks Redo Blks
B 0 0 10518 10000 73728 186265 10000
E 0 0 13189 10000 73728 219498 10000
Buffer Pool Advisory for DB: DB Instance: DB End Snap: 13
-> Only rows with estimated physical reads >0 are displayed
-> ordered by Block Size, Buffers For Estimate
Size for Size Buffers for Est Physical Estimated
P Estimate (M) Factr Estimate Read Factor Physical Reads
D 32 .1 3,970 205.60 4,726,309,734
D 64 .2 7,940 111.86 2,571,419,284
D 96 .2 11,910 59.99 1,379,092,849
D 128 .3 15,880 32.24 741,224,090
D 160 .4 19,850 16.05 369,050,333
D 192 .5 23,820 1.28 29,352,221
D 224 .6 27,790 1.05 24,077,507
D 256 .6 31,760 1.03 23,723,389
D 288 .7 35,730 1.02 23,518,434
D 320 .8 39,700 1.01 23,328,106
D 352 .9 43,670 1.01 23,193,257
D 384 1.0 47,640 1.00 23,064,957
D 400 1.0 49,625 1.00 22,987,576
D 416 1.0 51,610 1.00 22,927,325
D 448 1.1 55,580 0.99 22,824,032
D 480 1.2 59,550 0.99 22,713,509
D 512 1.3 63,520 0.99 22,649,147
D 544 1.4 67,490 0.98 22,605,489
D 576 1.4 71,460 0.98 22,525,897
D 608 1.5 75,430 0.97 22,407,418
D 640 1.6 79,400 0.96 22,022,381
16k 16 .1 1,008 1.00 139,218,299
16k 32 .3 2,016 1.00 139,211,699
16k 48 .4 3,024 1.00 139,207,678
16k 64 .6 4,032 1.00 139,202,581
16k 80 .7 5,040 1.00 139,198,339
16k 96 .9 6,048 1.00 139,193,448
16k 112 1.0 7,056 1.00 139,188,446
16k 128 1.1 8,064 1.00 139,183,808
16k 144 1.3 9,072 1.00 139,179,598
16k 160 1.4 10,080 1.00 139,175,656
16k 176 1.6 11,088 1.00 139,170,607
16k 192 1.7 12,096 1.00 139,166,491
16k 208 1.9 13,104 1.00 139,162,487
16k 224 2.0 14,112 1.00 139,158,197
16k 240 2.1 15,120 1.00 139,153,797
16k 256 2.3 16,128 1.00 139,149,365
16k 272 2.4 17,136 1.00 139,144,252
16k 288 2.6 18,144 1.00 139,140,121
16k 304 2.7 19,152 1.00 139,135,435
16k 320 2.9 20,160 1.00 139,130,845
Buffer wait Statistics for DB: DB Instance: DB Snaps: 12 -13
-> ordered by wait time desc, waits desc
Tot Wait Avg
Class Waits Time (s) Time (ms)
data block 19,912 8 0
undo header 343 0 0
segment header 34 0 0
undo block 1 0 0
Enqueue activity for DB: DB Instance: DB Snaps: 12 -13
-> Enqueue stats gathered prior to 9i should not be compared with 9i data
-> ordered by Wait Time desc, Waits desc
Avg Wt Wait
Eq Requests Succ Gets Failed Gets Waits Time (ms) Time (s)
TM 981,781 981,773 0 7 1,365.43 10
TX 983,944 983,906 0 412 .59 0
HW 4,645 4,645 0 32 .09 0
Rollback Segment Stats for DB: DB Instance: DB Snaps: 12 -13
->A high value for "Pct Waits" suggests more rollback segments may be required
->RBS stats may not be accurate between begin and end snaps when using Auto Undo
managment, as RBS may be dynamically created and dropped as needed
Trans Table Pct Undo Bytes
RBS No Gets Waits Written Wraps Shrinks Extends
0 155.0 0.00 0 0 0 0
1 202,561.0 0.00 31,178,710 40 2 3
2 191,044.0 0.00 30,067,156 23 2 6
3 195,891.0 0.00 30,470,548 39 1 3
4 203,928.0 0.00 31,822,638 38 2 5
5 196,386.0 0.00 -4,264,350,168 38 1 3
6 204,125.0 0.00 32,081,200 24 1 7
7 192,169.0 0.00 33,732,012 45 3 6
8 195,819.0 0.00 30,503,550 40 2 2
9 202,905.0 0.00 31,595,438 40 2 4
10 195,796.0 0.00 30,566,652 29 4 9
Rollback Segment Storage for DB: DB Instance: DB Snaps: 12 -13
->Optimal Size should be larger than Avg Active
RBS No Segment Size Avg Active Optimal Size Maximum Size
0 385,024 0 385,024
1 12,705,792 944,176 2,213,732,352
2 11,657,216 1,548,937 2,214,715,392
3 13,754,368 832,465 243,392,512
4 13,754,368 946,902 235,069,440
5 12,705,792 964,352 2,195,374,080
6 20,045,824 1,232,438 2,416,041,984
7 12,705,792 977,490 3,822,182,400
8 10,608,640 875,068 243,392,512
9 11,657,216 878,119 243,392,512
10 18,997,248 1,034,104 2,281,889,792
Undo Segment Summary for DB: DB Instance: DB Snaps: 12 -13
-> Undo segment block stats:
-> uS - unexpired Stolen, uR - unexpired Released, uU - unexpired reUsed
-> eS - expired Stolen, eR - expired Released, eU - expired reUsed
Undo Undo Num Max Qry Max Tx Snapshot Out of uS/uR/uU/
TS# Blocks Trans Len (s) Concurcy Too Old Space eS/eR/eU
1 44,441 ########## 47 2 0 0 0/0/0/0/0/0
Undo Segment Stats for DB: DB Instance: DB Snaps: 12 -13
-> ordered by Time desc
Undo Num Max Qry Max Tx Snap Out of uS/uR/uU/
End Time Blocks Trans Len (s) Concy Too Old Space eS/eR/eU
28-Jun 11:56 7,111 ######## 47 1 0 0 0/0/0/0/0/0
28-Jun 11:46 10,782 ######## 18 2 0 0 0/0/0/0/0/0
28-Jun 11:36 6,170 ######## 42 1 0 0 0/0/0/0/0/0
28-Jun 11:26 4,966 ######## 13 1 0 0 0/0/0/0/0/0
28-Jun 11:16 6,602 ######## 40 1 0 0 0/0/0/0/0/0
28-Jun 11:06 8,810 ######## 10 1 0 0 0/0/0/0/0/0
Latch Activity for DB: DB Instance: DB Snaps: 12 -13
->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
willing-to-wait latch get requests
->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
->"Pct Misses" for both should be very close to 0.0
Pct Avg Wait Pct
Get Get Slps Time NoWait NoWait
Latch Requests Miss /Miss (s) Requests Miss
active checkpoint queue 9,585 0.0 0.0 0 0
alert log latch 158 0.0 0 0
archive control 220 0.0 0 0
archive process latch 220 0.5 1.0 0 0
cache buffer handles 264,718 0.0 0.0 0 0
cache buffers chains 416,051,175 0.0 0.0 4 401,018 0.0
cache buffers lru chain 1,285,963 0.0 0.0 0 1,206,550 0.0
channel handle pool latc 4,927 0.0 0 0
channel operations paren 10,788 0.0 0 0
checkpoint queue latch 528,319 0.0 0.0 0 69,506 0.0
child cursor hash table 35,371 0.0 0 0
Consistent RBA 854,833 0.0 0.0 0 0
dml lock allocation 1,963,007 0.9 0.0 0 0
dummy allocation 4,995 0.0 0 0
enqueue hash chains 4,014,593 0.5 0.0 0 0
enqueues 94,666 0.0 0.0 0 0
event group latch 2,340 0.0 0 0
FAL request queue 72 0.0 0 0
FIB s.o chain latch 310 0.0 0 0
FOB s.o list latch 6,769 0.0 0 0
global tx hash mapping 10,388 0.0 0 0
hash table column usage 16 0.0 0 479 0.0
job workq parent latch 0 0 316 0.0
job_queue_processes para 116 0.0 0 0
ktm global data 200 0.0 0 0
lgwr LWN SCN 855,008 0.0 0.0 0 0
library cache 5,836,900 0.4 0.0 0 8,926 0.6
library cache load lock 468 0.0 0 0
library cache pin 3,510,695 0.0 0.0 0 0
library cache pin alloca 1,402,523 0.0 0.0 0 0
list of block allocation 6,115 0.0 0 0
loader state object free 620 0.0 0 0
message pool operations 262 0.0 0 0
messages 2,664,950 0.4 0.0 0 0
mostly latch-free SCN 856,000 0.1 0.0 0 0
multiblock read objects 3,184 0.0 0 0
ncodef allocation latch 57 0.0 0 0
object stats modificatio 8 0.0 0 0
post/wait queue 6,183 0.0 0 3,082 0.0
process allocation 4,677 0.0 0 2,340 0.0
process group creation 4,677 0.0 0 0
redo allocation 4,784,936 0.5 0.0 0 0
redo copy 0 0 3,081,261 0.3
redo writing 2,576,299 0.0 0.2 0 0
row cache enqueue latch 3,017,144 0.0 0.0 0 0
row cache objects 5,049,552 0.8 0.0 0 92 0.0
sequence cache 984,824 0.0 0.1 0 0
session allocation 110,417 0.0 0.0 0 0
session idle bit 205,319 0.0 0 0
session switching 57 0.0 0 0
Latch Activity for DB: DB Instance: DB Snaps: 12 -13
->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
willing-to-wait latch get requests
->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
->"Pct Misses" for both should be very close to 0.0
Pct Avg Wait Pct
Get Get Slps Time NoWait NoWait
Latch Requests Miss /Miss (s) Requests Miss
session timer 1,204 0.0 0 0
shared pool 2,409,725 0.1 0.1 0 0
simulator hash latch 7,439,429 0.0 0.0 0 0
simulator lru latch 202 0.0 0 128,961 0.2
sort extent pool 1,053 0.0 0 0
SQL memory manager worka 67 0.0 0 0
temp lob duration state 187 0.0 0 0
transaction allocation 7,290 0.0 0 0
transaction branch alloc 5,668 0.0 0 0
undo global data 3,002,808 0.4 0.0 0 0
user lock 8,642 0.0 0 0
Latch Sleep breakdown for DB: DB Instance: DB Snaps: 12 -13
-> ordered by misses desc
Get Spin &
Latch Name Requests Misses Sleeps Sleeps 1->4
cache buffers chains 416,051,175 197,296 750 196776/298/2
15/7/0
row cache objects 5,049,552 42,368 38 42330/38/0/0
/0
redo allocation 4,784,936 24,766 77 24697/61/8/0
/0
library cache 5,836,900 23,477 276 23207/264/6/
0/0
enqueue hash chains 4,014,593 21,061 26 21035/26/0/0
/0
dml lock allocation 1,963,007 17,887 16 17872/14/1/0
/0
undo global data 3,002,808 12,350 8 12342/8/0/0/
0
messages 2,664,950 10,131 5 10126/5/0/0/
0
shared pool 2,409,725 1,362 189 1175/185/2/0
/0
row cache enqueue latch 3,017,144 470 7 463/7/0/0/0
mostly latch-free SCN 856,000 434 1 433/1/0/0/0
library cache pin 3,510,695 345 4 341/4/0/0/0
sequence cache 984,824 53 4 49/4/0/0/0
library cache pin allocati 1,402,523 35 1 34/1/0/0/0
redo writing 2,576,299 5 1 4/1/0/0/0
archive process latch 220 1 1 0/1/0/0/0
Latch Miss Sources for DB: DB Instance: DB Snaps: 12 -13
-> only latches with sleeps are shown
-> ordered by name, sleeps desc
NoWait Waiter
Latch Name Where Misses Sleeps Sleeps
archive process latch kcrrpa 0 1 0
cache buffers chains kcbgtcr: fast path 0 346 188
cache buffers chains kcbgtcr: kslbegin excl 0 163 239
cache buffers chains kcbrls: kslbegin 0 86 170
cache buffers chains kcbget: pin buffer 0 53 49
cache buffers chains kcbgcur: kslbegin 0 44 20
cache buffers chains kcbnlc 0 38 22
cache buffers chains kcbget: exchange 0 8 16
cache buffers chains kcbchg: kslbegin: call CR 0 3 21
cache buffers chains kcbget: exchange rls 0 3 2
cache buffers chains kcbnew 0 3 0
cache buffers chains kcbbxsv 0 2 0
cache buffers chains kcbchg: kslbegin: bufs not 0 1 23
dml lock allocation ktaiam 0 13 1
dml lock allocation ktaidm 0 3 15
enqueue hash chains ksqgtl3 0 22 2
enqueue hash chains ksqrcl 0 4 24
library cache kglic 0 55 4
library cache kglhdgn: child: 0 42 86
library cache kglobpn: child: 0 26 32
library cache kglpndl: child: after proc 0 14 0
library cache kglpndl: child: before pro 0 13 73
library cache kglpin: child: heap proces 0 12 29
library cache kgllkdl: child: cleanup 0 11 4
library cache kglupc: child 0 4 7
library cache kgldti: 2child 0 2 4
library cache kglpnp: child 0 1 4
library cache pin kglpnal: child: alloc spac 0 3 3
library cache pin kglpndl 0 1 1
library cache pin alloca kglpnal 0 1 0
messages ksaamb: after wakeup 0 3 2
messages ksarcv 0 2 2
mostly latch-free SCN kcslcu3 0 1 1
redo allocation kcrfwr 0 74 8
redo allocation kcrfwi: more space 0
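For what it's worth, the ~400 logfiles/day are consistent with the Load Profile at the top of the report: at 281,252 bytes/s of redo, a 40 MB log fills in about 2.5 minutes, so hundreds of switches per day are expected if the workload runs for much of the day. NOLOGGING would not change this, since it only suppresses redo for direct-path operations; conventional INSERTs (and their index and undo changes) always generate redo. A quick arithmetic check, using the figures from the report above:

```python
# Sanity-check the log switch rate implied by the statspack Load Profile.

redo_bytes_per_sec = 281_252        # "Redo size: 281,252.38" per second
log_size_bytes = 40 * 1024 * 1024   # 4 groups of 40 MB each

seconds_per_switch = log_size_bytes / redo_bytes_per_sec
switches_per_day = 86_400 / seconds_per_switch

print(f"one switch every {seconds_per_switch:.0f} s, "
      f"~{switches_per_day:.0f} switches/day at a sustained rate")
```

So the redo volume itself, not the sequence, produces the logfiles; larger logs and/or reducing the redo generated per transaction would be the lever (the report shows roughly 2 KB of redo per transaction at ~135 transactions/s).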