Adding or increasing size of redo logs

Hi,
I ran the Health Check in TOAD on my database and it reported:
! "Checkpoint not complete" errors: 1744
! (consider adding or increasing size of redo logs to resolve this)
My question is how to:
1- add a logfile?
2- increase the size of a logfile?
Many thanks.

The size is set when you create the redo logs; you cannot adjust it in place without jumping through a lot of hoops. One method of increasing the size is described in Metalink note 1035935.6.
Basically you have to create new log groups of the size you want, for example:
SQL> alter database add logfile group 4 '/opt/oracle/data/redo4/log4a.dbf' size 10M;
Do this for as many new groups as you want, with the size you want.
Then, once the original groups show as INACTIVE in the v$log view, drop those groups:
alter database drop logfile group 1;
But read the entire note before you do anything.
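To put the whole sequence together, here is a minimal sketch assuming three small groups are being replaced by 200M ones; the paths, group numbers and sizes are only illustrative and are not taken from the note:
-- Illustrative only: add the larger groups first (adjust paths/sizes to your layout).
alter database add logfile group 4 '/opt/oracle/data/redo4/log4a.dbf' size 200M;
alter database add logfile group 5 '/opt/oracle/data/redo5/log5a.dbf' size 200M;
alter database add logfile group 6 '/opt/oracle/data/redo6/log6a.dbf' size 200M;
-- Switch and checkpoint until the old groups are no longer CURRENT or ACTIVE.
alter system switch logfile;
alter system checkpoint;
-- Confirm the status of each group before dropping anything.
select group#, status, bytes/1024/1024 mb from v$log;
-- Drop the old groups only once they show INACTIVE, then remove the old
-- files at the OS level (Oracle does not delete non-OMF files for you).
alter database drop logfile group 1;
alter database drop logfile group 2;
alter database drop logfile group 3;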
Regards
Tim

Similar Messages

  • How to increase the size of Redo log files?

    Hi All,
    I have 10g R2 RAC on RHEL. As of now, I have 3 redo log files of 50MB each. I used the redo log size advisor (after setting fast_start_mttr_target=1800) to check the optimal size of the redo logs, and it is showing 400MB. Now I want to increase the size of the redo log files. How do I increase it?
    If we have to do this on production, how should we proceed?
    I found the following in one of the articles:
    "The size of the redo log files can influence performance, because the behavior of the database writer and archiver processes depends on the redo log sizes. Generally, larger redo log files provide better performance; however, it must be balanced out with the expected recovery time. Undersized log files increase checkpoint activity and increase CPU usage."
    I did not understand the point "however it must be balanced out with the expected recovery time" in the paragraph above.
    Can anybody help me?
    Thanks,
    Praveen.

    You don't have to shut down the database before dropping a redo log group, but make sure you have at least two other redo log groups left. Also note that you cannot drop an active redo log group.
    Here is nice link,
    http://www.idevelopment.info/data/Oracle/DBA_tips/Database_Administration/DBA_34.shtml
    And make sure you test this in a test database first. Production should be touched only after you are really comfortable with the procedure.
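    A small sketch of that pre-drop check, with placeholder group numbers:
    -- List the groups and their states; CURRENT/ACTIVE groups cannot be dropped.
    select group#, thread#, status, bytes/1024/1024 mb
    from v$log
    order by group#;
    -- If the target group is still ACTIVE, a checkpoint lets it go INACTIVE.
    alter system checkpoint;
    alter database drop logfile group 1;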

  • Private strand flush not complete how to find optimal size of redo log file

    hi,
    I am using Oracle 10.2.0 on a Unix system and getting "Private strand flush not complete" in the alert log file. I know this is because the checkpoint is not completing.
    I need to increase the size of the redo log files or add a new group to the database. I have "log file switch (checkpoint incomplete)" in the top 5 wait events.
    I can't change any database parameter. I have three redo log groups and the log files are 250MB each. I want to know a suitable size to avoid the problem.
    select * from v$instance_recovery;
    RECOVERY_ESTIMATED_IOS     ACTUAL_REDO_BLKS     TARGET_REDO_BLKS     LOG_FILE_SIZE_REDO_BLKS     LOG_CHKPT_TIMEOUT_REDO_BLKS     LOG_CHKPT_INTERVAL_REDO_BLKS     FAST_START_IO_TARGET_REDO_BLKS     TARGET_MTTR     ESTIMATED_MTTR     CKPT_BLOCK_WRITES     OPTIMAL_LOGFILE_SIZE     ESTD_CLUSTER_AVAILABLE_TIME     WRITES_MTTR     WRITES_LOGFILE_SIZE     WRITES_LOG_CHECKPOINT_SETTINGS     WRITES_OTHER_SETTINGS     WRITES_AUTOTUNE     WRITES_FULL_THREAD_CKPT
    625     9286     9999     921600          9999          0     9     112166207               0     0     219270206     0     3331591     5707793
    Please suggest me or tell me how to find a suitable size to avoid the problem.
    thanks
    umesh

    How often should a database archive its logs
    Re: Redo log size increase and performance
    Please read the above thread and the great replies by HJR sir. If you wish to build up concept knowledge, you should add them to your notes.
    "If the FAST_START_MTTR_TARGET parameter is set to limit the instance recovery time, Oracle automatically tries to checkpoint as frequently as necessary. Under this condition, the size of the log files should be large enough to avoid additional checkpointing due to under sized log files. The optimal size can be obtained by querying the OPTIMAL_LOGFILE_SIZE column from the V$INSTANCE_RECOVERY view. You can also obtain sizing advice on the Redo Log Groups page of Oracle Enterprise Manager Database Control."
    Source:http://download-west.oracle.com/docs/cd/B13789_01/server.101/b10752/build_db.htm#19559
    Pl also see ML Doc 274264.1 (REDO LOGS SIZING ADVISORY) on tips to calculate the optimal size for redo logs in 10g databases
    Source:Re: Redo Log Size in R12
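    If it helps, a minimal sketch of querying that advisor (only meaningful once FAST_START_MTTR_TARGET is set to a non-zero value):
    -- OPTIMAL_LOGFILE_SIZE is reported in megabytes; NULL means no advice yet.
    select target_mttr, estimated_mttr, optimal_logfile_size
    from v$instance_recovery;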
    HTH
    Girish Sharma

  • Optimal size of redo log

    Hi all,
    Recently we migrated from 9.2.0.4 to 10.2.0.4 and database performance is slower in the newer version. On checking the alert log we found this:
    Thread 1 cannot allocate new log, sequence 1779
    Checkpoint not complete
    Current log# 6 seq# 1778 mem# 0: /oradata/lipi/redo6.log
    Current log# 6 seq# 1778 mem# 1: /oradata/lipi/redo06a.log
    Wed Mar 10 15:19:27 2010
    Thread 1 advanced to log sequence 1779 (LGWR switch)
    Current log# 1 seq# 1779 mem# 0: /oradata/lipi/redo01.log
    Current log# 1 seq# 1779 mem# 1: /oradata/lipi/redo01a.log
    Wed Mar 10 15:20:45 2010
    Thread 1 advanced to log sequence 1780 (LGWR switch)
    Current log# 2 seq# 1780 mem# 0: /oradata/lipi/redo02.log
    Current log# 2 seq# 1780 mem# 1: /oradata/lipi/redo02a.log
    Wed Mar 10 15:21:44 2010
    Thread 1 advanced to log sequence 1781 (LGWR switch)
    Current log# 3 seq# 1781 mem# 0: /oradata/lipi/redo03.log
    Current log# 3 seq# 1781 mem# 1: /oradata/lipi/redo03a.log
    Wed Mar 10 15:23:00 2010
    Thread 1 advanced to log sequence 1782 (LGWR switch)
    Current log# 4 seq# 1782 mem# 0: /oradata/lipi/redo04.log
    Current log# 4 seq# 1782 mem# 1: /oradata/lipi/redo04a.log
    Wed Mar 10 15:24:48 2010
    Thread 1 advanced to log sequence 1783 (LGWR switch)
    Current log# 5 seq# 1783 mem# 0: /oradata/lipi/redo5.log
    Current log# 5 seq# 1783 mem# 1: /oradata/lipi/redo05a.log
    Wed Mar 10 15:25:00 2010
    Thread 1 cannot allocate new log, sequence 1784
    Checkpoint not complete
    Current log# 5 seq# 1783 mem# 0: /oradata/lipi/redo5.log
    Current log# 5 seq# 1783 mem# 1: /oradata/lipi/redo05a.log
    Wed Mar 10 15:25:27 2010
    Thread 1 advanced to log sequence 1784 (LGWR switch)
    Current log# 6 seq# 1784 mem# 0: /oradata/lipi/redo6.log
    Current log# 6 seq# 1784 mem# 1: /oradata/lipi/redo06a.log
    Wed Mar 10 15:28:11 2010
    Thread 1 advanced to log sequence 1785 (LGWR switch)
    Current log# 1 seq# 1785 mem# 0: /oradata/lipi/redo01.log
    Current log# 1 seq# 1785 mem# 1: /oradata/lipi/redo01a.log
    Wed Mar 10 15:29:56 2010
    Thread 1 advanced to log sequence 1786 (LGWR switch)
    Current log# 2 seq# 1786 mem# 0: /oradata/lipi/redo02.log
    Current log# 2 seq# 1786 mem# 1: /oradata/lipi/redo02a.log
    Wed Mar 10 15:31:22 2010
    Thread 1 cannot allocate new log, sequence 1787
    Private strand flush not complete
    Current log# 2 seq# 1786 mem# 0: /oradata/lipi/redo02.log
    Current log# 2 seq# 1786 mem# 1: /oradata/lipi/redo02a.log
    Wed Mar 10 15:31:29 2010
    Thread 1 advanced to log sequence 1787 (LGWR switch)
    Current log# 3 seq# 1787 mem# 0: /oradata/lipi/redo03.log
    Current log# 3 seq# 1787 mem# 1: /oradata/lipi/redo03a.log
    Wed Mar 10 15:31:40 2010
    Thread 1 cannot allocate new log, sequence 1788
    Checkpoint not complete
    Current log# 3 seq# 1787 mem# 0: /oradata/lipi/redo03.log
    Current log# 3 seq# 1787 mem# 1: /oradata/lipi/redo03a.log
    Wed Mar 10 15:31:47 2010
    Thread 1 advanced to log sequence 1788 (LGWR switch)
    Current log# 4 seq# 1788 mem# 0: /oradata/lipi/redo04.log
    Current log# 4 seq# 1788 mem# 1: /oradata/lipi/redo04a.log
    So, my point is: should we increase the redo log size to fix the "Checkpoint not complete" message, and if yes, what should the optimal size of the redo log files be?
    Piyush

    Respected Sir,
    So many things become popular without evidence, expert comments or even cross-checking against the docs. I would like to suggest adding one more chapter in the next Oracle release docs, something like:
    "Myths in Oracle", and I hope the following would be there:
    1. Put indexes in a separate tablespace to achieve good performance.
    2. Different block size issues and their pros and cons.
    3. An index scan is always better than a full table scan; i.e. if the optimizer is not using an index there is something "fishy" with either the query or the database.
    4. Certification is the measurement of good knowledge in Oracle.
    5. count(1) or count(*), one being faster than the other.
    (difference between count(*) and count(1) and count(column name)
    6. Views are slow
    (Do you believe that "views are slow" is a myth
    7. BCHR is a meaningful indicator of the performance of the database.
    (I don't want to put the link, because that thread is .... )
    8. And this thread, i.e. redo logs should be large enough to hold at least 20 minutes of data.
    Sir, since I am a regular reader of your site, blog and book, if you could please spare some time on the above or more myths in Oracle, I hope it will help the whole Oracle DBA community, which is something like being in a very big and dark hall with lots of doors and windows (as I am).
    Kind Regards
    Girish Sharma
    Edited by: Girish Sharma on Mar 11, 2010 6:27 PM
    And I also got this useful link from your site:
    http://www.jlcomp.demon.co.uk/myths.html

  • How to reduce the size of redo log files

    Hi,
    I am using Oracle Database 9.2.0.1.0. My present redo log files are of 100 MB each
    (redo01.log, redo02.log, redo03.log), which take more time to switch.
    I want to change the size to 20MB each so that log switching will be faster.
    Please let me know the exact steps to resize the redo log files so that I can change them.
    Regards,
    Indraneel

    Technical questions cannot be answered here. Please, post in the right forum :
    General Database Discussions

  • Recommended os block size for redo log

    Hi
    Platform AIX
    Oracle 10.2.0.4
    Is there any recommended filesystem blocksize where redo log should be placed?
    We have tested with 512 bytes and 4096 bytes. We got better performance on 512 bytes in terms of avg wait on log file sync.
    Is there oracle/Aix recommendation on the same?

    978881 wrote:
    Hi
    Platform AIX
    Oracle 10.2.0.4
    Is there any recommended filesystem blocksize where redo log should be placed?
    We have tested with 512 bytes and 4096 bytes. We got better performance on 512 bytes in terms of avg wait on log file sync.
    Is there oracle/Aix recommendation on the same?
    The recommendation is to create redo logs on a mount with agblk=512 bytes. Please refer to the following link (page 60), which is an Oracle+IBM technical brief:
    http://www-03.ibm.com/support/techdocs/atsmastr.nsf/5cb5ed706d254a8186256c71006d2e0a/bae31a51a00fa0018625721f00268dc4/$FILE/Oracle%20Architecture%20and%20Tuning%20on%20AIX%20(v%202.30).pdf
    Regards,
    S.K.

  • Require 9i Primary and Standby redo logs files same size?

    Hi,
    We have 9.2.0.6 Oracle RAC (2 node) and configured data guard (physical standby).
    I want to increase the redo log file size, but I can't do this at the same time on the primary and the standby side.
    Is there a rule that primary and standby database instances must have the same size redo log files?
    If I increase only the primary redo log files, is there any side effect? I tried this on a test system: I increased all the primary redo log files (if status='INACTIVE', drop the redo log group and add a new one, switch logfile, ...),
    but I couldn't change the standby side. Still, the system works well. Is this a correct solution or not? How can I increase the redo log files on both sides?
    Thank you for helps..

    Thank you for your help. I found the answer to this issue:
    http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14239/manage_ps.htm#i1010448
    Consequently, when you add or drop an online redo log file at the primary site, it is important that you synchronize the changes in the standby database by following these steps:
    If Redo Apply is running, you must cancel Redo Apply before you can change the log files.
    If the STANDBY_FILE_MANAGEMENT initialization parameter is set to AUTO, change the value to MANUAL.
    Add or drop an online redo log file:
    To add an online redo log file, use a SQL statement such as this:
    SQL> ALTER DATABASE ADD LOGFILE '/disk1/oracle/oradata/payroll/prmy3.log' SIZE 100M;
    To drop an online redo log file, use a SQL statement such as this:
    SQL> ALTER DATABASE DROP LOGFILE '/disk1/oracle/oradata/payroll/prmy3.log';
    Repeat the statement you used in Step 3 on each standby database.
    Restore the STANDBY_FILE_MANAGEMENT initialization parameter and the Redo Apply options to their original states.
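    A rough SQL sketch of those steps on the standby side; the commands for stopping and restarting Redo Apply are assumptions rather than quotes from the document, and the file name and size are the doc's example:
    -- Stop Redo Apply and switch to manual file management.
    alter database recover managed standby database cancel;
    alter system set standby_file_management = 'MANUAL';
    -- Repeat the change that was made on the primary.
    alter database add logfile '/disk1/oracle/oradata/payroll/prmy3.log' size 100M;
    -- Put things back and restart Redo Apply.
    alter system set standby_file_management = 'AUTO';
    alter database recover managed standby database disconnect from session;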
    bye..

  • Increase Redo Logs

    Hi All
    Can anyone let me know how to increase the redo logs in SAP R/3 on Oracle?
    We are getting the warning "Checkpoint not complete", and when I searched, note 79341 says to increase the size of the redo logs.
    Kindly let me know how to increase the redo logs.
    Regards!!

    Hi,
    Online redo log files should be set up according to the following rules:
    1. Avoid log switches with a frequency lower than one minute.
    2. Always set up the online redo log files with the same size.
    3. Switch on Oracle mirroring.
    Refer SAP Note 309526.
    Note: Setting up Oracle mirroring may slightly degrade the performance of database commit times, but you should implement it to avoid major problems caused by corruption or loss of an online redo log file. To keep performance degradation to a minimum, store the online redo log files on disks without any other major I/O load.
    Minimal size of online redo log files in bytes: if it is currently 52428800 (50MB), then go up to 100MB if log switches are happening frequently.
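    As a quick way to check rules 1 and 2 above, a small illustrative sketch (run as a DBA user):
    -- Current online redo log groups and sizes (rule 2: they should all match).
    select group#, bytes/1024/1024 mb, status from v$log;
    -- Switches per hour over the last day (rule 1: more than ~60/hour is too fast).
    select trunc(first_time,'HH24') hour, count(*) switches
    from v$log_history
    where first_time > sysdate - 1
    group by trunc(first_time,'HH24')
    order by 1;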

  • Urgent: Huge diff in total redo log size and archive log size

    Dear DBAs
    I have a concern regarding size of redo log and archive log generated.
    Is the equation below is correct?
    total size of redo generated by all sessions = total size of archive log files generated
    I am experiencing a situation where, when I look at the total size of redo generated by all the sessions and the size of the archive logs generated, there is a huge difference.
    My total redo size across all sessions is 780MB, whereas my archive log directory has consumed 23GB.
    Before I started measuring, I cleared up the archive directory and started to monitor from a specific time.
    Environment: Oracle 9i Release 2
    How I tracked the sizing information is below
    logon as SYS user and run the following statements
    DROP TABLE REDOSTAT CASCADE CONSTRAINTS;
    CREATE TABLE REDOSTAT
    (
    AUDSID NUMBER,
    SID NUMBER,
    SERIAL# NUMBER,
    SESSION_ID CHAR(27 BYTE),
    STATUS VARCHAR2(8 BYTE),
    DB_USERNAME VARCHAR2(30 BYTE),
    SCHEMANAME VARCHAR2(30 BYTE),
    OSUSER VARCHAR2(30 BYTE),
    PROCESS VARCHAR2(12 BYTE),
    MACHINE VARCHAR2(64 BYTE),
    TERMINAL VARCHAR2(16 BYTE),
    PROGRAM VARCHAR2(64 BYTE),
    DBCONN_TYPE VARCHAR2(10 BYTE),
    LOGON_TIME DATE,
    LOGOUT_TIME DATE,
    REDO_SIZE NUMBER
    )
    TABLESPACE SYSTEM
    NOLOGGING
    NOCOMPRESS
    NOCACHE
    NOPARALLEL
    MONITORING;
    GRANT SELECT ON REDOSTAT TO PUBLIC;
    CREATE OR REPLACE TRIGGER TR_SESS_LOGOFF
    BEFORE LOGOFF
    ON DATABASE
    DECLARE
    PRAGMA AUTONOMOUS_TRANSACTION;
    BEGIN
    INSERT INTO SYS.REDOSTAT
    (AUDSID, SID, SERIAL#, SESSION_ID, STATUS, DB_USERNAME, SCHEMANAME, OSUSER, PROCESS, MACHINE, TERMINAL, PROGRAM, DBCONN_TYPE, LOGON_TIME, LOGOUT_TIME, REDO_SIZE)
    SELECT A.AUDSID, A.SID, A.SERIAL#, SYS_CONTEXT ('USERENV', 'SESSIONID'), A.STATUS, USERNAME DB_USERNAME, SCHEMANAME, OSUSER, PROCESS, MACHINE, TERMINAL, PROGRAM, TYPE DBCONN_TYPE,
    LOGON_TIME, SYSDATE LOGOUT_TIME, B.VALUE REDO_SIZE
    FROM V$SESSION A, V$MYSTAT B, V$STATNAME C
    WHERE
    A.SID = B.SID
    AND
    B.STATISTIC# = C.STATISTIC#
    AND
    C.NAME = 'redo size'
    AND
    A.AUDSID = sys_context ('USERENV', 'SESSIONID');
    COMMIT;
    END TR_SESS_LOGOFF;
    /
    Now, the total sum of REDO_SIZE (B.VALUE) is far less than the archive log size. This is at a time when no other user is logged in except myself.
    Is there anything wrong with the query for collecting redo information, or are there some hidden processes which don't provide redo information on a session basis?
    I have seen a similar implementation to the above at many sites.
    Kindly provide a mechanism where I can trace which user generated how much redo (or archive log) on a session basis. I want to track which users/processes are causing high redo generation.
    If I don't find a solution I will raise an SR with Oracle.
    Thanks
    [V]

    You can query v$sess_io, column BLOCK_CHANGES, to find out which session is generating how much redo.
    The following query gives you the session redo statistics:
    select a.sid, b.name, sum(a.value)
    from v$sesstat a, v$statname b
    where a.statistic# = b.statistic#
    and b.name like '%redo%'
    and a.value > 0
    group by a.sid, b.name;
    If you want, you can restrict it to just the 'redo size' statistic for the current sessions.
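    A minimal sketch of the v$sess_io approach mentioned above; block changes only roughly track redo-generating activity, so treat it as an indicator:
    select s.sid, s.username, i.block_changes
    from v$session s, v$sess_io i
    where s.sid = i.sid
    and i.block_changes > 0
    order by i.block_changes desc;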
    Jaffar

  • Errors reported in alert log file regarding redo log files

    Hi,
    In my database I frequently see the following entries regarding the redo log files.
    Thread 1 advanced to log sequence 88
    Current log# 3 seq# 88 mem# 0: D:\ORACLE\PRODUCT\10.2.0\ORADATA\PELICAN\REDO03.LOG
    Thread 1 advanced to log sequence 89
    Current log# 1 seq# 89 mem# 0: D:\ORACLE\PRODUCT\10.2.0\ORADATA\PELICAN\REDO01.LOG
    Thread 1 cannot allocate new log, sequence 90
    Checkpoint not complete
    I have 3 redo log files of 50MB each. My database is 10g.
    What is the reason?
    What do i need to do ?
    Thanks.

    Hi,
    It is clearly stated that the checkpoint is not complete. That's why it could not allocate a new sequence number to group 1.
    From this we can understand that your redo log group size is insufficient, or that you need more redo log groups.
    Just test adding one more redo log group, or test increasing the size of the redo log groups, as sketched below.
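    A minimal sketch of adding a fourth group; the file name below just follows the pattern of the existing files and is hypothetical:
    -- Hypothetical member path; keep the size consistent with the existing 50MB groups.
    alter database add logfile group 4
      ('D:\ORACLE\PRODUCT\10.2.0\ORADATA\PELICAN\REDO04.LOG') size 50M;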

  • How to reduce excessive redo log generation in Oracle 10G

    Hi All,
    Please let me know if there is any way to reduce excessive redo log generation in Oracle DB 10.2.0.3.
    Previously only 15 archive log files were generated per day, but nowadays it has increased to 40 to 45.
    Below are the sizes of the redo log file members:
    L.BYTES/1024/1024     MEMBER
    200     /u05/applprod/prdnlog/redolog1a.dbf
    200     /u06/applprod/prdnlog/redolog1b.dbf
    200     /u05/applprod/prdnlog/redolog2a.dbf
    200     /u06/applprod/prdnlog/redolog2b.dbf
    200     /u05/applprod/prdnlog/redolog3a.dbf
    200     /u06/applprod/prdnlog/redolog3b.dbf
    Here is some content from the alert log for your reference, showing how frequently log switches are occurring:
    Beginning log switch checkpoint up to RBA [0x441f.2.10], SCN: 4871839752
    Thread 1 advanced to log sequence 17439
    Current log# 3 seq# 17439 mem# 0: /u05/applprod/prdnlog/redolog3a.dbf
    Current log# 3 seq# 17439 mem# 1: /u06/applprod/prdnlog/redolog3b.dbf
    Tue Jul 13 14:46:17 2010
    Completed checkpoint up to RBA [0x441f.2.10], SCN: 4871839752
    Tue Jul 13 14:46:38 2010
    Beginning log switch checkpoint up to RBA [0x4420.2.10], SCN: 4871846489
    Thread 1 advanced to log sequence 17440
    Current log# 1 seq# 17440 mem# 0: /u05/applprod/prdnlog/redolog1a.dbf
    Current log# 1 seq# 17440 mem# 1: /u06/applprod/prdnlog/redolog1b.dbf
    Tue Jul 13 14:46:52 2010
    Completed checkpoint up to RBA [0x4420.2.10], SCN: 4871846489
    Tue Jul 13 14:53:33 2010
    Beginning log switch checkpoint up to RBA [0x4421.2.10], SCN: 4871897354
    Thread 1 advanced to log sequence 17441
    Current log# 2 seq# 17441 mem# 0: /u05/applprod/prdnlog/redolog2a.dbf
    Current log# 2 seq# 17441 mem# 1: /u06/applprod/prdnlog/redolog2b.dbf
    Tue Jul 13 14:53:37 2010
    Completed checkpoint up to RBA [0x4421.2.10], SCN: 4871897354
    Tue Jul 13 14:55:37 2010
    Incremental checkpoint up to RBA [0x4421.4b45c.0], current log tail at RBA [0x4421.4b5c5.0]
    Tue Jul 13 15:15:37 2010
    Incremental checkpoint up to RBA [0x4421.4d0c1.0], current log tail at RBA [0x4421.4d377.0]
    Tue Jul 13 15:35:38 2010
    Incremental checkpoint up to RBA [0x4421.545e2.0], current log tail at RBA [0x4421.54ad9.0]
    Tue Jul 13 15:55:39 2010
    Incremental checkpoint up to RBA [0x4421.55eda.0], current log tail at RBA [0x4421.56aa5.0]
    Tue Jul 13 16:15:41 2010
    Incremental checkpoint up to RBA [0x4421.58bc6.0], current log tail at RBA [0x4421.596de.0]
    Tue Jul 13 16:35:41 2010
    Incremental checkpoint up to RBA [0x4421.5a7ae.0], current log tail at RBA [0x4421.5aae2.0]
    Tue Jul 13 16:42:28 2010
    Beginning log switch checkpoint up to RBA [0x4422.2.10], SCN: 4872672366
    Thread 1 advanced to log sequence 17442
    Current log# 3 seq# 17442 mem# 0: /u05/applprod/prdnlog/redolog3a.dbf
    Current log# 3 seq# 17442 mem# 1: /u06/applprod/prdnlog/redolog3b.dbf
    Thanks in advance

    hi,
    Use the script below to find out at what hour the generation of archives is highest, and in that hour check whether, for example, MVs are refreshing or any program doing a full-table delete is running.
    select
      to_char(first_time,'DD-MM-YY') day,
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'00',1,0)),'999') "00",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'01',1,0)),'999') "01",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'02',1,0)),'999') "02",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'03',1,0)),'999') "03",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'04',1,0)),'999') "04",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'05',1,0)),'999') "05",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'06',1,0)),'999') "06",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'07',1,0)),'999') "07",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'08',1,0)),'999') "08",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'09',1,0)),'999') "09",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'10',1,0)),'999') "10",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'11',1,0)),'999') "11",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'12',1,0)),'999') "12",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'13',1,0)),'999') "13",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'14',1,0)),'999') "14",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'15',1,0)),'999') "15",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'16',1,0)),'999') "16",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'17',1,0)),'999') "17",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'18',1,0)),'999') "18",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'19',1,0)),'999') "19",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'20',1,0)),'999') "20",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'21',1,0)),'999') "21",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'22',1,0)),'999') "22",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'23',1,0)),'999') "23",
      count(*) tot
    from v$log_history
    group by to_char(first_time,'DD-MM-YY')
    order by day;
    thanks,
    baskar.l

  • Sizing the redo log files using optimal_logfile_size view.

    Regards
    I have a specific question regarding logfile size. I have deployed a test database and I was exploring certain aspects of selecting the optimal size of redo logs for performance tuning, using the OPTIMAL_LOGFILE_SIZE column from v$instance_recovery. My main goal is to reduce the redo bytes required for instance recovery. So far I have not been able to optimize the redo log file size. Here are the steps I followed:
    In order to use the advisory from v$instance_recovery I had to set the fast_start_mttr_target parameter, which is not set by default, so I did these steps:
    1)SQL> sho parameter fast_start_mttr_target;
    NAME TYPE VALUE
    fast_start_mttr_target               integer                           0
    2) Setting fast_start_mttr_target requires nullifying the following log_checkpoint parameters:
    SQL> show parameter log_checkpoint;
    NAME TYPE VALUE
    log_checkpoint_interval integer 0
    log_checkpoint_timeout integer 1800
    log_checkpoints_to_alert boolean FALSE
    SQL> select ISSES_MODIFIABLE,ISSYS_MODIFIABLE,ISINSTANCE_MODIFIABLE,ISMODIFIED from v$parameter where name like'log_checkpoint_timeout';
    ISSES_MODIFIABL ISSYS_MODIFIABLE ISINSTANCE_MODI ISMODIFIED
    FALSE IMMEDIATE TRUE FALSE
    SQL> alter system set log_checkpoint_timeout=0 scope=both;
    System altered.
    SQL> show parameter log_checkpoint_timeout;
    NAME TYPE VALUE
    log_checkpoint_timeout               integer                           0
    3) Now setting fast_start_mttr_target
    SQL> select ISSES_MODIFIABLE,ISSYS_MODIFIABLE,ISINSTANCE_MODIFIABLE,ISMODIFIED from v$parameter where name like'fast_start_mttr_target';
    ISSES_MODIFIABL ISSYS_MODIFIABLE ISINSTANCE_MODI ISMODIFIED
    FALSE IMMEDIATE TRUE FALSE
    Setting fast_start_mttr_target to 1200 (= 20 minutes of checkpoint target) according to the Oracle recommendation.
    Querying the v$instance_recovery view
    4) SQL> select ACTUAL_REDO_BLKS,TARGET_REDO_BLKS,TARGET_MTTR,ESTIMATED_MTTR, OPTIMAL_LOGFILE_SIZE,CKPT_BLOCK_WRITES from v$instance_recovery;
    ACTUAL_REDO_BLKS TARGET_REDO_BLKS TARGET_MTTR ESTIMATED_MTTR OPTIMAL_LOGFILE_SIZE CKPT_BLOCK_WRITES
    276 165888 93 59 361 16040
    Here TARGET_MTTR was 93, so I set fast_start_mttr_target to 120:
    SQL> alter system set fast_start_mttr_target=120 scope=both;
    System altered.
    Now the logfile size suggested by v$instance_recovery is 290 Mb
    SQL> select ACTUAL_REDO_BLKS,TARGET_REDO_BLKS,TARGET_MTTR,ESTIMATED_MTTR, OPTIMAL_LOGFILE_SIZE,CKPT_BLOCK_WRITES from v$instance_recovery;
    ACTUAL_REDO_BLKS TARGET_REDO_BLKS TARGET_MTTR ESTIMATED_MTTR OPTIMAL_LOGFILE_SIZE CKPT_BLOCK_WRITES
    59 165888 93 59 290 16080
    After altering the logfile size to 290 as show below by v$log view :-
    SQL> select GROUP#,THREAD#,SEQUENCE#,BYTES from v$log;
    GROUP# THREAD# SEQUENCE# BYTES
    1 1 24 304087040
    2 1 0 304087040
    3 1 0 304087040
    4 1 0 304087040
    5) After altering the size I observed an anomaly: the redo blocks to be applied for recovery increased from 59 to 696, and the v$instance_recovery view is now suggesting a logfile size of 276 MB. Have I misunderstood something?
    SQL> select ACTUAL_REDO_BLKS,TARGET_REDO_BLKS,TARGET_MTTR,ESTIMATED_MTTR, OPTIMAL_LOGFILE_SIZE,CKPT_BLOCK_WRITES from v$instance_recovery;
    ACTUAL_REDO_BLKS TARGET_REDO_BLKS TARGET_MTTR ESTIMATED_MTTR OPTIMAL_LOGFILE_SIZE CKPT_BLOCK_WRITES
    696 646947 120 59 276 18474
    Please clarify the above output. I am unable to optimize the logfile size and have not been able to achieve the goal of reducing the redo blocks to be applied for recovery; any help is appreciated in this regard.

    sunny_123 wrote:
    Sir, Oracle says that fast_start_mttr_target can be set to 3600 = 1 hour, as suggested by the following Oracle document:
    http://docs.oracle.com/cd/B10500_01/server.920/a96533/instreco.htm
    I set my value to 1200 = 20 minutes. Later I adjusted it to 120 = 2 minutes, as TARGET_MTTR suggested it to be around 100 (if the fast_start_mttr_target value is too high or too low, the effective value is contained in TARGET_MTTR of v$instance_recovery).
    Just to add, you are reading the 9.2 documentation and a lot has changed since then. For example, in 9.2 the FSMTTR parameter was introduced and explicitly required to be set and monitored by the DBA because of the additional checkpoint writes it might cause. From 10g onwards this parameter is automatically maintained by Oracle. Also, 9i has long been desupported, followed by 10g, so it is better to start reading the latest 11g documentation, or at least 10.2.
    Aman....

  • Redo log space requests VALUE high

    SELECT name||' = '||value
    FROM v$sysstat
    WHERE name = 'redo log space requests';
    I am noticing 40+ space requests for some of my Oracle 9.2.0.5 databases.
    On another 7.3.4 DB I see this at over 140, but that DB is shut down only on weekends, so I presume this cumulative value just keeps increasing.
    I already have 5 groups of 20MB each. Should I still add another 2 groups, or increase their sizes?
    I did read somewhere that I'd have to increase the log_buffer parameter. So how do we deal with this issue? Any repercussions if I leave it as it is for now?
    Would the cause of this be that the redo logs are not big enough, or something else?
    Thanks.

    user4874781 wrote:
    Thanks for your response Charles.
    So if I understand this correctly... redo log space requests correspond either to an incorrectly sized redo log file, or to DBWR / CKPT needing to be tuned.
    Maybe I was interpreting this the wrong way. (Possibly)
    "The statistic 'redo log space requests' reflects the number of times a user process waits for space in the redo log buffer." If that is the case, and the waits for this event get longer, I was under the assumption that log_buffer needs to be increased.
    http://www.idevelopment.info/data/Oracle/DBA_tips/Tuning/TUNING_6.shtml
    * Yes, the waits have increased to 70 as of now (since 40 yesterday; the DB was started Saturday night and will run till the weekend). Less activity as of now, since the day has just started, so it would definitely rise by the end of the day.
    I took a look at the above article, and I think I understand why the article is slightly confusing. With due respect to the author, the article was last modified 16-Apr-2001, which I believe is before the Oracle documentation was better clarified regarding these statistics. From:
    http://download.oracle.com/docs/cd/B14117_01/server.101/b10755/stats002.htm
    "redo log space requests: Number of times the active log file is full and Oracle must wait for disk space to be allocated for the redo log entries. Such space is created by performing a log switch. Log files that are small in relation to the size of the SGA or the commit rate of the work load can cause problems. When the log switch occurs, Oracle must ensure that all committed dirty buffers are written to disk before switching to a new log file. If you have a large SGA full of dirty buffers and small redo log files, a log switch must wait for DBWR to write dirty buffers to disk before continuing."
    "redo log space wait time: Total elapsed waiting time for "redo log space requests" in 10s of milliseconds"
    It is quite possible that the "redo log space requests" will increase with every redo log file switch, which should not be too much of a concern. You may want to focus a little more on the "redo log space wait time" statistic, which indicates how much wait time was involved waiting. You might also want to focus on the system-wide wait event interface, examining how the accumulated wait time increases from one sampling of each of the statistics to the next.
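    A tiny sketch of sampling both statistics together (the wait time is reported in centiseconds); comparing two samples shows how fast the wait time is actually growing:
    select name, value
    from v$sysstat
    where name in ('redo log space requests', 'redo log space wait time');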
    * I have 1 log switch every 11 minutes; BTW, I have 5 log groups of 20 MB each as of now. So I am assuming 4 or 5 log groups of 40 MB should be fine, as per your suggestion?
    If you have the disk space, considering that it is an ancient AIX box, you may want to set the redo log files to an even larger size, possibly 100MB (or larger). You may then want to force periodic switches of the redo log, for instance once an hour, or once every 30 minutes.
    * This is an ancient AIX box with 512 MB Ram. Is the redo log located on a fast device ? I'd have to find that out ( any hints on that ? )
    * Decreasing the log_buffer is possible on weekends since I'd have to bounce it for it to take effect.
    I will increase the log files accordingly and hopefully the space waits will reduce. Thanks again.
    Someone else on the forums, possibly Sybrand, might be familiar with AIX and be able to provide you with an answer. If you are seeing system-wide increasing wait time for redo-related waits, then that might be an indication that the redo logs are not located on a fast device.
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • High redo log space wait time

    Hello,
    Our DB is showing a very high redo log space wait time:
    redo log space requests 867527
    redo log space wait time 67752674
    LOG_BUFFER is 14 MB, and we have 6 redo log groups with a redo log file size of 500MB each.
    Also, the amount of redo generated per hour :
    START_DATE START NUM_LOGS MBYTES DBNAME
    2008-07-03 10:00 2 1000 TKL
    2008-07-03 11:00 4 2000 TKL
    2008-07-03 12:00 3 1500 TKL
    Will increasing the size of LOG_BUFFER help to reduce the redo log space waits?
    Thanks in advance ,
    Regards,
    Aman

    Looking quickly over the AWR report provided the following information could be helpful:
    1. You are currently targeting approx. 6GB of memory with this single instance and the report shows that physical memory is 8GB. According to the advisories it looks like you could decrease your memory allocation without hampering your performance.
    In particular the large_pool_size setting seems to be quite high although you're using shared servers.
    Since you're using 10.2.0.4 it might be worth to think about using the single SGA_TARGET parameter instead of the specifying all the single parameters. This allows Oracle to size the shared pool components within the given target dynamically.
    2. You are currently using a couple of underscore parameters. In particular the "_optimizer_max_permutations" parameter is set to 200 which might reduce significantly the number of execution plans permutations Oracle is investigating while optimizing the statement and could lead to suboptimal plans. It could be worth to check why this has been set.
    In addition you are using a non-default setting of "_shared_pool_reserved_pct" which might no longer be necessary if you are using the SGA_TARGET parameter as mentioned above.
    3. You are using non-default settings for the "optimizer_index_caching" and "optimizer_index_cost_adj" parameters which favor index-access paths / nested loops. Since "db file sequential read" is the top wait event it might be worth checking whether the database is doing excessive index access. Also most of the rows have been fetched by rowid (table fetch by rowid), which could also be an indicator of excessive index access/nested loop usage.
    4. Your database has been working quite a lot during the 30min. snapshot interval: it processed 123.000.000 logical blocks, which means almost 0.5GB per second. Check the top SQLs; there are a few that are responsible for most of the blocks processed. E.g. there is an anonymous PL/SQL block that has been executed almost 17.000 times during the interval, representing 75% of the blocks processed. The statements executed as part of these procedures might be worth checking to see if they could be tuned to require fewer logical I/Os. This could be related to the non-default optimizer parameters mentioned above.
    5. You are still using the compatible = 9.2.0 setting which means this database could still be opened by a 9i instance. If this is no longer required, you might lift this to the default value of 10g. This will also convert the REDO format to 10g I think which could lead to less amount of redo generated. But be aware of the fact that this is a one-way operation, you can only go back to 9i then via a restore once the compatible has been set to 10.x.
    6. Your undo retention is set quite high (> 6000 secs), although your longest query in the AWR period was 151 seconds. It might be worth to check if this setting is reasonable, as you might have quite a large undo tablespace at present. Oracle 10g ignores the setting if it isn't able to honor the setting given the current Undo tablespace size.
    7. "parallel_max_servers" has been set to 0, so no parallel operations can take place. This might be intentional but it's something to keep in mind.
    Regards,
    Randolf
    Oracle related stuff:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle:
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • T is frequently switching the redo log files within 5min approx..

    I am facing frequent switching of redo logs, roughly every 5 minutes.
    Can you please tell me how to resolve this?
    Thanks for the help.

    Hi,
    I found this:
    More frequent log switches may result in decreased performance. If your redo logs switch too fast, Oracle will stop processing until the checkpoint completes successfully. Generally it is recommended to size your redo log files so that Oracle performs a log switch every 15 to 30 minutes.
    A recommended approach is to
    Query V$LOG view to determine the current size of the redo log members.
    Record the number of log switches per hour.
    Increase the log file size so that Oracle switches at the recommended rate of one switch per 15 to 30 minutes.
    You can also check messages in the alert log to determine how fast Oracle is filling and switching logs. Suppose your redo log file size is set to 1MB and Oracle switches the logs every minute; you would then need to increase the redo log file size to about 30MB so that Oracle switches roughly every 30 minutes.
    It is also recommended to ensure that your online redo log files do not switch too often during periods of high activity; they should switch less during high activity while still switching often enough during periods of low processing workload. Many database administrators create PL/SQL programs to ensure that the logs switch every 15 to 30 minutes during times when activity is low.
    Oracle ARCHIVE_LAG_TARGET can also be used to force a log switch after the specified amount of time elapses. The basic purpose of ARCHIVE_LAG_TARGET parameter is to control the amount of data that is lost and effectively increasing the availability of the standby database but many database administrators set ARCHIVE_LAG_TARGET parameter to make sure that the logs switch at regular intervals during lower activity time periods.
    You should also keep in mind how the size of the online redo log files will affect instance recovery. Remember, the fewer checkpoints are taken, the longer the instance recovery will be. You can decrease the instance recovery time by appropriately setting the LOG_CHECKPOINT_TIMEOUT, LOG_CHECKPOINT_INTERVAL and FAST_START_MTTR_TARGET parameters.
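    As an illustration of the ARCHIVE_LAG_TARGET idea above, a minimal sketch; the 1800-second value is only an example:
    -- Force a log switch at least every 30 minutes, even during quiet periods.
    alter system set archive_lag_target = 1800 scope=both;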
