Optimal size of redo log
Hi all,
Recently we migrated from 9.2.0.4 to 10.2.0.4, and database performance is slower in the newer version. On checking the alert log we found this:
Thread 1 cannot allocate new log, sequence 1779
Checkpoint not complete
Current log# 6 seq# 1778 mem# 0: /oradata/lipi/redo6.log
Current log# 6 seq# 1778 mem# 1: /oradata/lipi/redo06a.log
Wed Mar 10 15:19:27 2010
Thread 1 advanced to log sequence 1779 (LGWR switch)
Current log# 1 seq# 1779 mem# 0: /oradata/lipi/redo01.log
Current log# 1 seq# 1779 mem# 1: /oradata/lipi/redo01a.log
Wed Mar 10 15:20:45 2010
Thread 1 advanced to log sequence 1780 (LGWR switch)
Current log# 2 seq# 1780 mem# 0: /oradata/lipi/redo02.log
Current log# 2 seq# 1780 mem# 1: /oradata/lipi/redo02a.log
Wed Mar 10 15:21:44 2010
Thread 1 advanced to log sequence 1781 (LGWR switch)
Current log# 3 seq# 1781 mem# 0: /oradata/lipi/redo03.log
Current log# 3 seq# 1781 mem# 1: /oradata/lipi/redo03a.log
Wed Mar 10 15:23:00 2010
Thread 1 advanced to log sequence 1782 (LGWR switch)
Current log# 4 seq# 1782 mem# 0: /oradata/lipi/redo04.log
Current log# 4 seq# 1782 mem# 1: /oradata/lipi/redo04a.log
Wed Mar 10 15:24:48 2010
Thread 1 advanced to log sequence 1783 (LGWR switch)
Current log# 5 seq# 1783 mem# 0: /oradata/lipi/redo5.log
Current log# 5 seq# 1783 mem# 1: /oradata/lipi/redo05a.log
Wed Mar 10 15:25:00 2010
Thread 1 cannot allocate new log, sequence 1784
Checkpoint not complete
Current log# 5 seq# 1783 mem# 0: /oradata/lipi/redo5.log
Current log# 5 seq# 1783 mem# 1: /oradata/lipi/redo05a.log
Wed Mar 10 15:25:27 2010
Thread 1 advanced to log sequence 1784 (LGWR switch)
Current log# 6 seq# 1784 mem# 0: /oradata/lipi/redo6.log
Current log# 6 seq# 1784 mem# 1: /oradata/lipi/redo06a.log
Wed Mar 10 15:28:11 2010
Thread 1 advanced to log sequence 1785 (LGWR switch)
Current log# 1 seq# 1785 mem# 0: /oradata/lipi/redo01.log
Current log# 1 seq# 1785 mem# 1: /oradata/lipi/redo01a.log
Wed Mar 10 15:29:56 2010
Thread 1 advanced to log sequence 1786 (LGWR switch)
Current log# 2 seq# 1786 mem# 0: /oradata/lipi/redo02.log
Current log# 2 seq# 1786 mem# 1: /oradata/lipi/redo02a.log
Wed Mar 10 15:31:22 2010
Thread 1 cannot allocate new log, sequence 1787
Private strand flush not complete
Current log# 2 seq# 1786 mem# 0: /oradata/lipi/redo02.log
Current log# 2 seq# 1786 mem# 1: /oradata/lipi/redo02a.log
Wed Mar 10 15:31:29 2010
Thread 1 advanced to log sequence 1787 (LGWR switch)
Current log# 3 seq# 1787 mem# 0: /oradata/lipi/redo03.log
Current log# 3 seq# 1787 mem# 1: /oradata/lipi/redo03a.log
Wed Mar 10 15:31:40 2010
Thread 1 cannot allocate new log, sequence 1788
Checkpoint not complete
Current log# 3 seq# 1787 mem# 0: /oradata/lipi/redo03.log
Current log# 3 seq# 1787 mem# 1: /oradata/lipi/redo03a.log
Wed Mar 10 15:31:47 2010
Thread 1 advanced to log sequence 1788 (LGWR switch)
Current log# 4 seq# 1788 mem# 0: /oradata/lipi/redo04.log
Current log# 4 seq# 1788 mem# 1: /oradata/lipi/redo04a.log
So, my question is: should we increase the redo log size to fix the "Checkpoint not complete" message? If yes, what should the optimal size of the redo log files be?
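To quantify the problem first, a sketch like this (run as a DBA user; v$log_history exists in both 9i and 10g) shows how many log switches happen per hour; more than a handful per hour during normal load usually means the logs are undersized:

```sql
-- Log switches per hour over the last 24 hours.
select to_char(trunc(first_time, 'HH24'), 'YYYY-MM-DD HH24') hr,
       count(*) switches
from   v$log_history
where  first_time > sysdate - 1
group  by trunc(first_time, 'HH24')
order  by 1;
```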
Piyush
Respected Sir,
Many things become popular without evidence, expert comment, or even cross-checking against the docs. I would like to suggest adding one more chapter to the docs of the next Oracle release, something like:
"Myths in Oracle", and I hope the following would be in it:
1. Putting indexes in a separate tablespace achieves good performance.
2. Different block sizes, and their pros and cons.
3. An index scan is always better than a full table scan; i.e. if the optimizer is not using an index, there is something "fishy" with either the query or the database.
4. Certification is the measure of good knowledge in Oracle.
5. count(1) or count(*) is faster than the other.
(the difference between count(*), count(1) and count(column name))
6. Views are slow.
(Do you believe that "views are slow" is a myth?)
7. The BCHR is a meaningful indicator of database performance.
(I don't want to put the link, because that thread is .... )
8. And this thread, i.e. the redo log should be large enough to hold at least 20 minutes of data.
Sir, since I am a regular reader of your site, blog and book: if you could spare some time on the above (or more) myths in Oracle, I hope it would help the whole Oracle DBA community, which (like me) is in a very big and dark hall with lots of doors and windows.
Kind Regards
Girish Sharma
Edited by: Girish Sharma on Mar 11, 2010 6:27 PM
And I found this useful link on your site too:
http://www.jlcomp.demon.co.uk/myths.html
Similar Messages
-
Private strand flush not complete - how to find the optimal size of redo log files
hi,
I am using Oracle 10.2.0 on a Unix system and getting "Private strand flush not complete" in the alert log file. I know this is because the checkpoint is not completed.
I need to increase the size of the redo log files or add a new group to the database. I have "log file switch (checkpoint incomplete)" in the top 5 wait events.
I can't change any database parameter. I have three redo log groups, and the log files are 250MB each. I want to know a suitable size to avoid the problem.
select * from v$instance_recovery;
RECOVERY_ESTIMATED_IOS ACTUAL_REDO_BLKS TARGET_REDO_BLKS LOG_FILE_SIZE_REDO_BLKS LOG_CHKPT_TIMEOUT_REDO_BLKS LOG_CHKPT_INTERVAL_REDO_BLKS FAST_START_IO_TARGET_REDO_BLKS TARGET_MTTR ESTIMATED_MTTR CKPT_BLOCK_WRITES OPTIMAL_LOGFILE_SIZE ESTD_CLUSTER_AVAILABLE_TIME WRITES_MTTR WRITES_LOGFILE_SIZE WRITES_LOG_CHECKPOINT_SETTINGS WRITES_OTHER_SETTINGS WRITES_AUTOTUNE WRITES_FULL_THREAD_CKPT
625 9286 9999 921600 9999 0 9 112166207 0 0 219270206 0 3331591 5707793
Please suggest me, or tell me a way to find out, a suitable size to avoid the problem.
thanks
umesh
How often should a database archive its logs
Re: Redo log size increase and performance
Please read the above thread and the great replies by HJR sir. If you wish to build concept knowledge, you should add it to your notes.
"If the FAST_START_MTTR_TARGET parameter is set to limit the instance recovery time, Oracle automatically tries to checkpoint as frequently as necessary. Under this condition, the size of the log files should be large enough to avoid additional checkpointing due to under sized log files. The optimal size can be obtained by querying the OPTIMAL_LOGFILE_SIZE column from the V$INSTANCE_RECOVERY view. You can also obtain sizing advice on the Redo Log Groups page of Oracle Enterprise Manager Database Control."
Source:http://download-west.oracle.com/docs/cd/B13789_01/server.101/b10752/build_db.htm#19559
Please also see ML Doc 274264.1 (REDO LOGS SIZING ADVISORY) for tips on calculating the optimal size for redo logs in 10g databases.
Source:Re: Redo Log Size in R12
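As a concrete sketch of the quoted advice (assuming FAST_START_MTTR_TARGET is set; otherwise OPTIMAL_LOGFILE_SIZE comes back NULL):

```sql
-- OPTIMAL_LOGFILE_SIZE is reported in MB and is only populated
-- when FAST_START_MTTR_TARGET is set to a non-zero value.
select target_mttr, estimated_mttr, optimal_logfile_size
from   v$instance_recovery;
```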
HTH
Girish Sharma -
How to increase the size of Redo log files?
Hi All,
I have 10g R2 RAC on RHEL. As of now, I have 3 redo log files of 50MB each. I have used the redo log size advisor (by setting fast_start_mttr_target=1800) to check the optimal size of the redo logs, and it shows 400MB. Now I want to increase the size of the redo log files. How do I do it?
If we are supposed to do it on production, how should it be done?
I found the following in an article:
"The size of the redo log files can influence performance, because the behavior of the database writer and archiver processes depend on the redo log sizes. Generally, larger redo log files provide better performance, however it must balanced out with the expected recovery time.Undersized log files increase checkpoint activity and increase CPU usage."
I did not understand the point "however it must be balanced out with the expected recovery time" in the above paragraph.
Can anybody help me?
Thanks,
Praveen.
You don't have to shut down the database before dropping a redo log group, but make sure you have at least two other redo log groups. Also note that you cannot drop an active redo log group.
Here is nice link,
http://www.idevelopment.info/data/Oracle/DBA_tips/Database_Administration/DBA_34.shtml
And make sure you test this in test database first. Production should be touched only after you are really comfortable with this procedure. -
Adding or increasing size of redo logs
Hi,
I launched health Chack of TOAD on my Database and I had :
! "Checkpoint not complete" errors: 1744
! (consider adding or increasing size of redo logs to resolve this)
My question is how to
1- add a logfile ?
2-increase a logfile ?
Many thanks.
The size is set when you create the redo logs; you cannot adjust it without going through a lot of hoops. One method of increasing the size is described in Metalink note 1035935.6.
Basically you have to create new log groups of the size you want like
SQL>alter database add logfile group 4 '/opt/oracle/data/redo4/log4a.dbf' size 10M;
Do this for as many new groups as you want with the size you want.
Then, when the original groups are INACTIVE, which can be seen in the v$log view, drop those groups.
alter database drop logfile group 1;
But read the entire note before you do anything.
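Putting the whole procedure together as a sketch (the group numbers, paths and sizes here are illustrative only; always verify the status in v$log before dropping anything):

```sql
-- 1. Add new, larger groups.
alter database add logfile group 4 '/opt/oracle/data/redo4/log4a.dbf' size 500M;
alter database add logfile group 5 '/opt/oracle/data/redo5/log5a.dbf' size 500M;

-- 2. Switch out of the old groups, then checkpoint so they go INACTIVE.
alter system switch logfile;
alter system checkpoint;

-- 3. Confirm the old groups are INACTIVE.
select group#, status from v$log;

-- 4. Drop each old group only once it shows INACTIVE.
alter database drop logfile group 1;
```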
Regards
Tim -
How to reduce the size of redo log files
Hi,
I am using Oracle Database 9.2.0.1.0. My present redo log files (redo01.log, redo02.log, redo03.log) are 100 MB each,
which takes more time to switch the logs.
I want to change the size to 20MB each so that log switching will be faster.
Please let me know the exact steps to resize the redo log files so that I can change them.
Regards,
Indraneel
Technical questions cannot be answered here. Please post in the right forum:
General Database Discussions -
Recommended os block size for redo log
Hi
Platform AIX
Oracle 10.2.0.4
Is there any recommended filesystem block size where the redo logs should be placed?
We have tested with 512 bytes and 4096 bytes. We got better performance with 512 bytes in terms of average wait on 'log file sync'.
Is there an Oracle/AIX recommendation on this?
978881 wrote:
Hi
Platform AIX
Oracle 10.2.0.4
Is there any recommended filesystem blocksize where redo log should be placed?
We have tested with 512 bytes and 4096 bytes. We got better performance on 512 bytes in terms of avg wait on log file sync.
Is there an Oracle/AIX recommendation on this?
The recommendation is to create redo logs on a mount with agblk=512 bytes. Please refer to the following link (page 60), which is an Oracle+IBM technical brief:
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/5cb5ed706d254a8186256c71006d2e0a/bae31a51a00fa0018625721f00268dc4/$FILE/Oracle%20Architecture%20and%20Tuning%20on%20AIX%20(v%202.30).pdf
Regards,
S.K. -
Sizing the redo log files using the OPTIMAL_LOGFILE_SIZE column of v$instance_recovery
Regards
I have a specific question regarding logfile size. I have deployed a test database, and I was exploring how to select an optimal redo log size for performance tuning using the OPTIMAL_LOGFILE_SIZE column of the v$instance_recovery view. My main goal is to reduce the redo bytes required for instance recovery. Currently I have not been able to optimize the redo log file size. Here are the steps I followed:
In order to use the advisory from v$instance_recovery I had to set the fast_start_mttr_target parameter, which is not set by default, so I did these steps:
1)SQL> sho parameter fast_start_mttr_target;
NAME TYPE VALUE
fast_start_mttr_target integer 0
2) Setting fast_start_mttr_target requires nullifying the following checkpoint parameters:
SQL> show parameter log_checkpoint;
NAME TYPE VALUE
log_checkpoint_interval integer 0
log_checkpoint_timeout integer 1800
log_checkpoints_to_alert boolean FALSE
SQL> select ISSES_MODIFIABLE,ISSYS_MODIFIABLE,ISINSTANCE_MODIFIABLE,ISMODIFIED from v$parameter where name like'log_checkpoint_timeout';
ISSES_MODIFIABL ISSYS_MODIFIABLE ISINSTANCE_MODI ISMODIFIED
FALSE IMMEDIATE TRUE FALSE
SQL> alter system set log_checkpoint_timeout=0 scope=both;
System altered.
SQL> show parameter log_checkpoint_timeout;
NAME TYPE VALUE
log_checkpoint_timeout integer 0
3) Now setting fast_start_mttr_target
SQL> select ISSES_MODIFIABLE,ISSYS_MODIFIABLE,ISINSTANCE_MODIFIABLE,ISMODIFIED from v$parameter where name like'fast_start_mttr_target';
ISSES_MODIFIABL ISSYS_MODIFIABLE ISINSTANCE_MODI ISMODIFIED
FALSE IMMEDIATE TRUE FALSE
Setting fast_start_mttr_target to 1200 (= 20 minutes between checkpoint switches), according to the Oracle recommendation.
Querying the v$instance_recovery view
4) SQL> select ACTUAL_REDO_BLKS,TARGET_REDO_BLKS,TARGET_MTTR,ESTIMATED_MTTR, OPTIMAL_LOGFILE_SIZE,CKPT_BLOCK_WRITES from v$instance_recovery;
ACTUAL_REDO_BLKS TARGET_REDO_BLKS TARGET_MTTR ESTIMATED_MTTR OPTIMAL_LOGFILE_SIZE CKPT_BLOCK_WRITES
276 165888 93 59 361 16040
Here TARGET_MTTR was 93, so I set fast_start_mttr_target to 120.
SQL> alter system set fast_start_mttr_target=120 scope=both;
System altered.
Now the logfile size suggested by v$instance_recovery is 290 Mb
SQL> select ACTUAL_REDO_BLKS,TARGET_REDO_BLKS,TARGET_MTTR,ESTIMATED_MTTR, OPTIMAL_LOGFILE_SIZE,CKPT_BLOCK_WRITES from v$instance_recovery;
ACTUAL_REDO_BLKS TARGET_REDO_BLKS TARGET_MTTR ESTIMATED_MTTR OPTIMAL_LOGFILE_SIZE CKPT_BLOCK_WRITES
59 165888 93 59 290 16080
After altering the logfile size to 290 MB, as shown below by the v$log view:
SQL> select GROUP#,THREAD#,SEQUENCE#,BYTES from v$log;
GROUP# THREAD# SEQUENCE# BYTES
1 1 24 304087040
2 1 0 304087040
3 1 0 304087040
4 1 0 304087040
5) After altering the size, I observed an anomaly: the redo log blocks to be applied for recovery increased from 59 to 696, and v$instance_recovery is now suggesting a logfile size of 276 MB. Have I misunderstood something?
SQL> select ACTUAL_REDO_BLKS,TARGET_REDO_BLKS,TARGET_MTTR,ESTIMATED_MTTR, OPTIMAL_LOGFILE_SIZE,CKPT_BLOCK_WRITES from v$instance_recovery;
ACTUAL_REDO_BLKS TARGET_REDO_BLKS TARGET_MTTR ESTIMATED_MTTR OPTIMAL_LOGFILE_SIZE CKPT_BLOCK_WRITES
696 646947 120 59 276 18474
Please clarify the above output. I am unable to optimize the logfile size and have not been able to achieve the goal of reducing the redo log blocks to be applied for recovery; any help is appreciated in this regard.
sunny_123 wrote:
Sir, Oracle says that fast_start_mttr_target can be set up to 3600 = 1 hour, as suggested by the following Oracle document:
http://docs.oracle.com/cd/B10500_01/server.920/a96533/instreco.htm
I set my value to 1200 = 20 minutes. Later I adjusted it to 120 = 2 minutes, as TARGET_MTTR suggested it should be around 100 (if the fast_start_mttr_target value is too high or too low, the effective value is contained in TARGET_MTTR of v$instance_recovery).
Just to add: you are reading the 9.2 documentation, and a lot has changed since then. For example, in 9.2 the FSMTTR parameter was introduced and explicitly required to be set and monitored by the DBA because of the additional checkpoint writes it might cause. From 10g onwards this parameter is automatically maintained by Oracle. Also, 9i has long been desupported, followed by 10g, so it is better to start reading the latest documentation of 11g or, failing that, at least of 10.2.
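For choosing a value, 10g also exposes an advisory that estimates the extra write cost of candidate MTTR targets. A sketch (the view is only populated when FAST_START_MTTR_TARGET is set and STATISTICS_LEVEL is TYPICAL or ALL):

```sql
-- Estimated total physical writes for a range of candidate MTTR targets.
select mttr_target_for_estimate, advice_status, estd_total_writes
from   v$mttr_target_advice
order  by mttr_target_for_estimate;
```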
Aman.... -
Urgent: Huge diff in total redo log size and archive log size
Dear DBAs
I have a concern regarding size of redo log and archive log generated.
Is the equation below is correct?
total size of redo generated by all sessions = total size of archive log files generated
I am experiencing a situation where when I look at the total size of redo generated by all the sessions and the size of archive logs generated, there is huge difference.
My total all session redo log size is 780MB where my archive log directory size has consumed 23GB.
Before i start measuring i cleared up archive directory and started to monitor from a specific time.
Environment: Oracle 9i Release 2
How I tracked the sizing information is below
logon as SYS user and run the following statements
DROP TABLE REDOSTAT CASCADE CONSTRAINTS;
CREATE TABLE REDOSTAT
(
AUDSID NUMBER,
SID NUMBER,
SERIAL# NUMBER,
SESSION_ID CHAR(27 BYTE),
STATUS VARCHAR2(8 BYTE),
DB_USERNAME VARCHAR2(30 BYTE),
SCHEMANAME VARCHAR2(30 BYTE),
OSUSER VARCHAR2(30 BYTE),
PROCESS VARCHAR2(12 BYTE),
MACHINE VARCHAR2(64 BYTE),
TERMINAL VARCHAR2(16 BYTE),
PROGRAM VARCHAR2(64 BYTE),
DBCONN_TYPE VARCHAR2(10 BYTE),
LOGON_TIME DATE,
LOGOUT_TIME DATE,
REDO_SIZE NUMBER
)
TABLESPACE SYSTEM
NOLOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;
GRANT SELECT ON REDOSTAT TO PUBLIC;
CREATE OR REPLACE TRIGGER TR_SESS_LOGOFF
BEFORE LOGOFF
ON DATABASE
DECLARE
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
INSERT INTO SYS.REDOSTAT
(AUDSID, SID, SERIAL#, SESSION_ID, STATUS, DB_USERNAME, SCHEMANAME, OSUSER, PROCESS, MACHINE, TERMINAL, PROGRAM, DBCONN_TYPE, LOGON_TIME, LOGOUT_TIME, REDO_SIZE)
SELECT A.AUDSID, A.SID, A.SERIAL#, SYS_CONTEXT ('USERENV', 'SESSIONID'), A.STATUS, USERNAME DB_USERNAME, SCHEMANAME, OSUSER, PROCESS, MACHINE, TERMINAL, PROGRAM, TYPE DBCONN_TYPE,
LOGON_TIME, SYSDATE LOGOUT_TIME, B.VALUE REDO_SIZE
FROM V$SESSION A, V$MYSTAT B, V$STATNAME C
WHERE
A.SID = B.SID
AND
B.STATISTIC# = C.STATISTIC#
AND
C.NAME = 'redo size'
AND
A.AUDSID = sys_context ('USERENV', 'SESSIONID');
COMMIT;
END TR_SESS_LOGOFF;
/
Now, the total sum of REDO_SIZE (B.VALUE) is far less than the archive log size, and this is at a time when no other user is logged in except myself.
Is there anything wrong with the query for collecting redo information, or are there some hidden processes which don't provide redo information on a session basis?
I have seen similar implementations to the above at many sites.
Kindly provide a mechanism by which I can trace which user generated how much redo (or archive log) on a session basis. I want to track which users/processes are causing high redo generation.
If I don't find a solution I will raise an SR with Oracle.
Thanks
[V]
You can query v$sess_io, column block_changes, to find out which session is generating how much redo.
The following query gives you the session redo statistics:
select a.sid,b.name,sum(a.value) from v$sesstat a,v$statname b
where a.statistic# = b.statistic#
and b.name like '%redo%'
and a.value > 0
group by a.sid,b.name
If you want, you can look at just the redo size for all the current sessions.
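Part of the gap is that background processes, and any sessions still connected when you measure, never show up in a logoff trigger. An instance-wide cross-check could look like this sketch (the two figures accumulate over different windows, since instance startup versus whatever the controlfile still records, so compare trends rather than exact values):

```sql
-- Redo generated by the whole instance since startup, in MB.
select value / 1024 / 1024 redo_mb
from   v$sysstat
where  name = 'redo size';

-- Total archived log volume recorded in the controlfile, in MB.
select sum(blocks * block_size) / 1024 / 1024 archived_mb
from   v$archived_log;
```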
Jaffar -
Hello,
Our DB has a very high redo log space wait time:
redo log space requests 867527
redo log space wait time 67752674
LOG_BUFFER is 14 MB, and we have 6 redo log groups with a redo log file size of 500MB each.
Also, the amount of redo generated per hour :
START_DATE START NUM_LOGS MBYTES DBNAME
2008-07-03 10:00 2 1000 TKL
2008-07-03 11:00 4 2000 TKL
2008-07-03 12:00 3 1500 TKL
Will increasing the size of LOG_BUFFER help to reduce the redo log space waits?
Thanks in advance ,
Regards,
Aman
Looking quickly over the AWR report provided, the following information could be helpful:
1. You are currently targeting approx. 6GB of memory with this single instance, and the report says that physical memory is 8GB. According to the advisories, it looks like you could decrease your memory allocation without hampering performance.
In particular the large_pool_size setting seems to be quite high although you're using shared servers.
Since you're using 10.2.0.4, it might be worth thinking about using the single SGA_TARGET parameter instead of specifying all the individual parameters. This allows Oracle to size the shared pool components within the given target dynamically.
2. You are currently using a couple of underscore parameters. In particular the "_optimizer_max_permutations" parameter is set to 200 which might reduce significantly the number of execution plans permutations Oracle is investigating while optimizing the statement and could lead to suboptimal plans. It could be worth to check why this has been set.
In addition you are using a non-default setting of "_shared_pool_reserved_pct" which might no longer be necessary if you are using the SGA_TARGET parameter as mentioned above.
3. You are using non-default settings for the "optimizer_index_caching" and "optimizer_index_cost_adj" parameters, which favor index-access paths / nested loops. Since "db file sequential read" is the top wait event, it might be worth checking whether the database is doing too much index access. Also, most of the rows have been fetched by rowid (table fetch by rowid), which could also be an indicator of excessive index access / nested loop usage.
4. Your database has been working quite a lot during the 30-minute snapshot interval: it processed 123,000,000 logical blocks, which means almost 0.5GB per second. Check the top SQLs; a few of them are responsible for most of the blocks processed. E.g. there is an anonymous PL/SQL block that has been executed almost 17,000 times during the interval, representing 75% of the blocks processed. The statements executed as part of these procedures might be worth checking to see if they could be tuned to require fewer logical I/Os. This could be related to the non-default optimizer parameters mentioned above.
5. You are still using the compatible = 9.2.0 setting, which means this database could still be opened by a 9i instance. If this is no longer required, you might lift this to the default value of 10g. This will also convert the REDO format to 10g, I think, which could lead to less redo being generated. But be aware that this is a one-way operation: once compatible has been set to 10.x, you can only go back to 9i via a restore.
6. Your undo retention is set quite high (> 6000 secs), although your longest query in the AWR period was 151 seconds. It might be worth to check if this setting is reasonable, as you might have quite a large undo tablespace at present. Oracle 10g ignores the setting if it isn't able to honor the setting given the current Undo tablespace size.
7. "parallel_max_servers" has been set to 0, so no parallel operations can take place. This might be intentional but it's something to keep in mind.
Regards,
Randolf
Oracle related stuff:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle:
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
Dear all,
In ST04 I see redo log waits. Is this a problem? Please suggest how to solve it.
Please find the details:
Size (kB) 14,352
Entries 42,123,046
Allocation retries 9,103
Alloc fault rate(%) 0.0
Redo log wait (s) 486
Log files (in use) 8 ( 8 )
DB_INST_ID Instance ID 1
DB_INSTANCE DB instance name prd
DB_NODE Database node A
DB_RELEASE Database release 10.2.0.4.0
DB_SYS_TIMESTAMP Day, Time 06.04.2010 13:07:10
DB_SYSDATE DB System date 20100406
DB_SYSTIME DB System time 130710
DB_STARTUP_TIMESTAMP Start up at 22.03.2010 03:51:02
DB_STARTDATE DB Startup date 20100322
DB_STARTTIME DB Startup time 35102
DB_ELAPSED Seconds since start 1329368
DB_SNAPDIFF Sec. btw. snapshots 1329368
DATABUFFERSIZE Size (kB) 3784704
DBUFF_QUALITY Quality (%) 96.3
DBUFF_LOGREADS Logical reads 5615573538
DBUFF_PHYSREADS Physical reads 207302988
DBUFF_PHYSWRITES Physical writes 7613263
DBUFF_BUSYWAITS Buffer busy waits 878188
DBUFF_WAITTIME Buffer wait time (s) 3583
SHPL_SIZE Size (kB) 1261568
SHPL_CAQUAL DD-cache Quality (%) 95.1
SHPL_GETRATIO SQL area getratio(%) 98.4
SHPL_PINRATIO SQL area pinratio(%) 99.9
SHPL_RELOADSPINS SQLA.Reloads/pins(%) 0.0042
LGBF_SIZE Size (kB) 14352
LGBF_ENTRIES Entries 42123046
LGBF_ALLORETR Allocation retries 9103
LGBF_ALLOFRAT Alloc fault rate(%) 0
LGBF_REDLGWT Redo log wait (s) 486
LGBF_LOGFILES Log files 8
LGBF_LOGFUSE Log files (in use) 8
CLL_USERCALLS User calls 171977181
CLL_USERCOMM User commits 1113161
CLL_USERROLLB User rollbacks 34886
CLL_RECURSIVE Recursive calls 36654755
CLL_PARSECNT Parse count 10131732
CLL_USR_PER_RCCLL User/recursive calls 4.7
CLL_RDS_PER_UCLL Log.Reads/User Calls 32.7
TIMS_BUSYWT Busy wait time (s) 389991
TIMS_CPUTIME CPU time session (s) 134540
TIMS_TIM_PER_UCLL Time/User call (ms) 3
TIMS_SESS_BUSY Sessions busy (%) 0.94
TIMS_CPUUSAGE CPU usage (%) 2.53
TIMS_CPUCOUNT Number of CPUs 4
RDLG_WRITES Redo writes 1472363
RDLG_OSBLCKWRT OS blocks written 54971892
RDLG_LTCHTIM Latching time (s) 19
RDLG_WRTTIM Redo write time (s) 2376
RDLG_MBWRITTEN MB written 25627
TABSF_SHTABSCAN Short table scans 12046230
TABSF_LGTABSCAN Long table scans 6059
TABSF_FBYROWID Table fetch by rowid 1479714431
TABSF_FBYCONTROW Fetch by contin. row 2266031
SORT_MEMORY Sorts (memory) 3236898
SORT_DISK Sorts (disk) 89
SORT_ROWS Sorts (rows) 5772889843
SORT_WAEXOPT WA exec. optim. mode 1791746
SORT_WAEXONEP WA exec. one pass m. 93
SORT_WAEXMULTP WA exec. multipass m 0
IEFF_SOFTPARSE Soft parse ratio 0.9921
IEFF_INMEM_SORT In-memory sort ratio 1
IEFF_PARSTOEXEC Parse to exec. ratio 0.9385
IEFF_PARSCPUTOTOT Parse CPU to Total 0.9948
IEFF_PTCPU_PTELPS PTime CPU / PT elps. 0.1175
Regards,
Kumar
Hi,
If the redo log buffer is not large enough, the Oracle log writer process waits for space to become available. This wait time becomes wait time for the end user, so it may cause a performance problem at the database end and needs to be tuned.
The size of the redo log buffer is defined in the init.ora file using the LOG_BUFFER parameter. The statistic 'redo log space requests' reflects the number of times a user process waited for space in the redo log buffer.
If the waits are caused by an undersized redo log buffer, the recommendation is to increase LOG_BUFFER so that the value of 'redo log space requests' stays near zero.
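To monitor that statistic, a sketch (values in v$sysstat are cumulative since instance startup, so take two samples and compare the delta):

```sql
-- 'redo log space requests' should stay near zero
-- relative to the total number of redo entries.
select name, value
from   v$sysstat
where  name in ('redo log space requests',
                'redo log space wait time',
                'redo entries');
```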
regards,
rakesh -
How to change the size of redo file
Dear all,
I have finished installing the DB, and the size of the redo files is 50M by default. I found many "checkpoint not complete" events in the alert.log file.
Now I want to change it to 100M and avoid the event appearing again. How can I do that?
As SB has already correctly replied, the size of redo logs cannot be changed. We have to create new, bigger (or smaller) redo logs, then 'alter system switch logfile' to make the older logs inactive, and then drop the older logs. Redo logs are like a glass of water: we cannot change the size of the glass; if we wish to drink more or less water, we take a glass of that size and get that much water.
Remember to consider the multiplexing/grouping when doing any maintenance like this. And there may be archive log & backup space considerations.
Hans @ http://dbaspot.com/oracle-server/33231-changing-redo-log-file-size.html
Regards
Girish Sharma -
Errors reported in alert log file regarding redo log files
Hi,
In my database I see the following entries regarding the redo log files frequently.
Thread 1 advanced to log sequence 88
Current log# 3 seq# 88 mem# 0: D:\ORACLE\PRODUCT\10.2.0\ORADATA\PELICAN\REDO03.LOG
Thread 1 advanced to log sequence 89
Current log# 1 seq# 89 mem# 0: D:\ORACLE\PRODUCT\10.2.0\ORADATA\PELICAN\REDO01.LOG
Thread 1 cannot allocate new log, sequence 90
Checkpoint not complete
I have 3 redo log files of 50MB each. My database is 10g.
What is the reason?
What do I need to do?
Thanks.
Hi,
It is clearly stated that the checkpoint is not complete. That is why it could not allocate a new sequence number to group 1.
We can understand that your redo log group size is insufficient, or that you should increase the number of redo log groups.
Just test by adding one more redo log group, or by increasing the size of the redo log groups.
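For example, adding a fourth group could look like this sketch (the path and size are illustrative; with multiplexed logs you would list one member per destination):

```sql
-- Add one more redo log group, larger than the existing 50MB groups.
alter database add logfile group 4
  ('D:\ORACLE\PRODUCT\10.2.0\ORADATA\PELICAN\REDO04.LOG') size 200M;
```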
How to reduce excessive redo log generation in Oracle 10G
Hi All,
Please let me know if there is any way to reduce excessive redo log generation in Oracle DB 10.2.0.3.
Previously only about 15 archive log files were generated per day, but nowadays it has increased to 40 to 45.
Below are the sizes of the redo log file members:
L.BYTES/1024/1024 MEMBER
200 /u05/applprod/prdnlog/redolog1a.dbf
200 /u06/applprod/prdnlog/redolog1b.dbf
200 /u05/applprod/prdnlog/redolog2a.dbf
200 /u06/applprod/prdnlog/redolog2b.dbf
200 /u05/applprod/prdnlog/redolog3a.dbf
200 /u06/applprod/prdnlog/redolog3b.dbf
Here is some content of the alert log for your reference, showing how frequently log switches are occurring:
Beginning log switch checkpoint up to RBA [0x441f.2.10], SCN: 4871839752
Thread 1 advanced to log sequence 17439
Current log# 3 seq# 17439 mem# 0: /u05/applprod/prdnlog/redolog3a.dbf
Current log# 3 seq# 17439 mem# 1: /u06/applprod/prdnlog/redolog3b.dbf
Tue Jul 13 14:46:17 2010
Completed checkpoint up to RBA [0x441f.2.10], SCN: 4871839752
Tue Jul 13 14:46:38 2010
Beginning log switch checkpoint up to RBA [0x4420.2.10], SCN: 4871846489
Thread 1 advanced to log sequence 17440
Current log# 1 seq# 17440 mem# 0: /u05/applprod/prdnlog/redolog1a.dbf
Current log# 1 seq# 17440 mem# 1: /u06/applprod/prdnlog/redolog1b.dbf
Tue Jul 13 14:46:52 2010
Completed checkpoint up to RBA [0x4420.2.10], SCN: 4871846489
Tue Jul 13 14:53:33 2010
Beginning log switch checkpoint up to RBA [0x4421.2.10], SCN: 4871897354
Thread 1 advanced to log sequence 17441
Current log# 2 seq# 17441 mem# 0: /u05/applprod/prdnlog/redolog2a.dbf
Current log# 2 seq# 17441 mem# 1: /u06/applprod/prdnlog/redolog2b.dbf
Tue Jul 13 14:53:37 2010
Completed checkpoint up to RBA [0x4421.2.10], SCN: 4871897354
Tue Jul 13 14:55:37 2010
Incremental checkpoint up to RBA [0x4421.4b45c.0], current log tail at RBA [0x4421.4b5c5.0]
Tue Jul 13 15:15:37 2010
Incremental checkpoint up to RBA [0x4421.4d0c1.0], current log tail at RBA [0x4421.4d377.0]
Tue Jul 13 15:35:38 2010
Incremental checkpoint up to RBA [0x4421.545e2.0], current log tail at RBA [0x4421.54ad9.0]
Tue Jul 13 15:55:39 2010
Incremental checkpoint up to RBA [0x4421.55eda.0], current log tail at RBA [0x4421.56aa5.0]
Tue Jul 13 16:15:41 2010
Incremental checkpoint up to RBA [0x4421.58bc6.0], current log tail at RBA [0x4421.596de.0]
Tue Jul 13 16:35:41 2010
Incremental checkpoint up to RBA [0x4421.5a7ae.0], current log tail at RBA [0x4421.5aae2.0]
Tue Jul 13 16:42:28 2010
Beginning log switch checkpoint up to RBA [0x4422.2.10], SCN: 4872672366
Thread 1 advanced to log sequence 17442
Current log# 3 seq# 17442 mem# 0: /u05/applprod/prdnlog/redolog3a.dbf
Current log# 3 seq# 17442 mem# 1: /u06/applprod/prdnlog/redolog3b.dbf
Thanks in advance
hi,
Use the script below to find out in which hours the most archives are generated, and in those hours check, e.g., whether MVs are running or any program is doing a "delete * from table".
select
to_char(first_time,'DD-MM-YY') day,
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'00',1,0)),'999') "00",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'01',1,0)),'999') "01",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'02',1,0)),'999') "02",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'03',1,0)),'999') "03",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'04',1,0)),'999') "04",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'05',1,0)),'999') "05",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'06',1,0)),'999') "06",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'07',1,0)),'999') "07",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'08',1,0)),'999') "08",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'09',1,0)),'999') "09",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'10',1,0)),'999') "10",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'11',1,0)),'999') "11",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'12',1,0)),'999') "12",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'13',1,0)),'999') "13",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'14',1,0)),'999') "14",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'15',1,0)),'999') "15",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'16',1,0)),'999') "16",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'17',1,0)),'999') "17",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'18',1,0)),'999') "18",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'19',1,0)),'999') "19",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'20',1,0)),'999') "20",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'21',1,0)),'999') "21",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'22',1,0)),'999') "22",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'23',1,0)),'999') "23",
COUNT(*) TOT
from v$log_history
group by to_char(first_time,'DD-MM-YY')
order by day
thanks,
baskar.l -
Total combined size of Archived logs
DB version : 11.2
Platform : AIX
How can I determine the total size of the archived logs for a particular DB?
Googling and an OTN search didn't provide much detail.
Didn't get the solution from the following thread either, as it digressed from the subject:
Re: archive log size
The redo log size for our DB is 100 MB.
SQL> select count(*) from v$archived_log where status = 'A' and name is not null;
COUNT(*)
22
So I could multiply 22 * 100 = 2200 MB, but there have been some manual switches, and those files will be smaller than 100 MB. That is why I am looking for an accurate way to determine the total size of the archived logs.

Hello;
V$ARCHIVED_LOG contains BLOCKS (the size of the archived log in blocks) and BLOCK_SIZE (the logical block size of the online log from which the archived log was copied).
So with a little arithmetic in the query you should be able to get it.
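For example, a minimal sketch of the total (reusing the STATUS = 'A' and NAME IS NOT NULL conditions from the question above; whether you want to restrict to available logs is up to you):

```sql
-- Total size in MB of all archived logs still recorded in the
-- control file, accurate even when manual switches produced
-- partially filled logs.
select round(sum(blocks * block_size)/1024/1024) total_mb
from   v$archived_log
where  status = 'A'
and    name is not null;
```

Because BLOCKS reflects the actual content of each archived log, this avoids the over-count you would get from multiplying the log count by the 100 MB online log size.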
Archivelog size each day
select
trunc(COMPLETION_TIME) TIME,
SUM(BLOCKS * BLOCK_SIZE)/1024/1024 SIZE_MB
from
V$ARCHIVED_LOG
group by
trunc(COMPLETION_TIME)
order by 1;

Since COMPLETION_TIME is a DATE, you can filter on it to get the exact total for the exact date range you want.
Archivelog size each hour
alter session set nls_date_format = 'YYYY-MM-DD HH24';
select
trunc(COMPLETION_TIME,'HH24') TIME,
SUM(BLOCKS * BLOCK_SIZE)/1024/1024 SIZE_MB
from
V$ARCHIVED_LOG
group by
trunc(COMPLETION_TIME,'HH24')
order by 1;

Another example:
SELECT To_char(completion_time,'YYYYMMDD') run_date,
Round(Sum(blocks * block_size + block_size) / 1024 / 1024 / 1024) redo_blocks
FROM v$archived_log
GROUP BY To_char(completion_time,'YYYYMMDD')
ORDER BY 2
/

Best Regards
mseberg
Edited by: mseberg on Feb 23, 2012 2:30 AM -
The database is frequently switching the redo log files, roughly every 5 minutes.
I am facing frequent switching of redo logs within 5 minutes.
Can you please tell me how to resolve this?
Thanks for your help.

Hi,
I found this:
More frequent log switches may result in decreased performance. If your redo logs switch too quickly, Oracle may stall processing until the checkpoint completes successfully. Generally it is recommended to size your redo log files so that Oracle performs a log switch every 15 to 30 minutes.
A recommended approach is to:
Query the V$LOG view to determine the current size of the redo log members.
Record the number of log switches per hour.
Increase the log file size so that Oracle switches at the recommended rate of one switch every 15 to 30 minutes.
You can also check the messages in the alert log to determine how fast Oracle is filling and switching logs. Suppose your redo log files are 1 MB each and the alert log shows a switch every minute: you would then need to increase the redo log file size to about 30 MB so that Oracle switches every 30 minutes.
Also make sure that your online redo log files do not switch too often during periods of high activity: under heavy load they should switch relatively rarely, while during low-activity periods they should still switch often enough. Many database administrators create PL/SQL jobs to force a log switch every 15 to 30 minutes during quiet periods.
The ARCHIVE_LAG_TARGET parameter can also be used to force a log switch after a specified amount of time elapses. Its primary purpose is to bound the amount of data that could be lost, effectively increasing the availability of a standby database, but many database administrators also set ARCHIVE_LAG_TARGET to make sure the logs switch at regular intervals during low-activity periods.
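For instance (the 1800-second value is only an illustration, not a recommendation from this thread):

```sql
-- Force a log switch at least every 30 minutes (value is in seconds)
alter system set archive_lag_target = 1800 scope=both;
```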
Also keep in mind how the size of the online redo log files affects instance recovery: the less often checkpoints are taken, the longer instance recovery will take. You can decrease the instance recovery time by appropriately setting the LOG_CHECKPOINT_TIMEOUT, LOG_CHECKPOINT_INTERVAL and FAST_START_MTTR_TARGET parameters.
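As a sketch of the resize itself (group numbers, file paths and the 512M size below are placeholders, not values from this thread): since an online redo log cannot be resized in place, you add new, larger groups, switch out of the old ones, and then drop them.

```sql
-- 1. Add new, larger redo log groups (two members each, as in the
--    existing configuration; paths and size are examples only)
alter database add logfile group 7
  ('/oradata/lipi/redo07.log', '/oradata/lipi/redo07a.log') size 512m;
alter database add logfile group 8
  ('/oradata/lipi/redo08.log', '/oradata/lipi/redo08a.log') size 512m;

-- 2. Switch and checkpoint until the old group is neither CURRENT
--    nor ACTIVE (check the STATUS column of v$log between steps)
alter system switch logfile;
alter system checkpoint;

-- 3. Drop an old group once it is INACTIVE; repeat for each old group.
--    The OS files may need to be removed manually afterwards.
alter database drop logfile group 1;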