Archive log writes frequency rate (4.73 minute(s)) is too high

hello,
DB version = 10.2.0.1.0
OS: Sun Solaris 5.10
I got an alert saying:
Archive log writes frequency rate (4.73 minute(s)) is too high
Could someone help me understand what this alert means and what is causing it to occur?
Thanks

Raman wrote:
Please go through this URL:
http://www.dba-oracle.com/t_redo_log_tuning.htm
Well, the first step on that page is wrong (the link within the step corrects it). However, both pages advocate putting redo logs on SSD, and if you google for recent blog postings about that you will see that it is a bad idea. Even on Exadata it only works because the redo is also written to normal spinning rust, and the write is considered done when the first write is acknowledged. For normal non-Exadata databases it is at best an expensive waste of time, and at worst the deterioration of the SSDs shows up as redo log corruption. So you might not want to link there.
You should size the redo for the data rate expected at maximum, and use the parameters CKPT mentioned to switch for normal operating data rates.
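As a first diagnostic, you can measure the switch frequency behind this alert directly. A minimal sketch against V$LOG_HISTORY (the one-day window is an arbitrary example):

    -- Average minutes between log switches over the last day
    SELECT ROUND(24 * 60 * (MAX(first_time) - MIN(first_time))
                 / NULLIF(COUNT(*) - 1, 0), 2) AS avg_minutes_between_switches
    FROM   v$log_history
    WHERE  first_time > SYSDATE - 1;

If the average is well below the twenty-minute rule of thumb quoted later in this thread, larger online redo logs are the usual first step.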

Similar Messages

  • Archive log filling up a gig per minute

    Hi there,
    No one is on the database, and every day now the archive log destination fills up at over a gig per minute, so when I get in to work in the morning the backup hasn't run, the archive directory is full, and the database is not available.
    This has been happening for the last 6 days and I can't figure it out. I have to keep clearing out archive files to get the directory usage down so a few users can run some reports.
    This database is not growing at all. There is no data being added to it; it's a historical database for report running only.
    Here is the file system; as you can see it's a big drive, so I am at a loss.
    Filesystem Size Used Avail Use% Mounted on
    /dev/mapper/VolGroup00-LogVol00
    3.1G 829M 2.2G 28% /
    /dev/mapper/VolGroup00-LogVol_USR
    4.0G 3.0G 813M 79% /usr
    /dev/mapper/VolGroup00-LogVol_d01
    160G 116G 36G 77% /d01
    /dev/mapper/VolGroupBackup-LogVol_Backup
    213G 195G 7.6G 97% /backup
    /dev/sda1 99M 11M 84M 11% /boot
    none 1.5G 0 1.5G 0% /dev/shm

    user13286861 wrote:
    No one is on the database and every day now the archive log is filling up over a gig per minute [...] big drive so I am at a loss.
    Run off a statspack report and post the "Instance Activity", "Top 5" and "Load Profile" sections here. Don't forget to use the "code" tags to get the output in fixed font (see end of post).
    Question 1 - what did you do 6 days ago?
    Question 2 - what method are you using for backups?
    Guess 1 - you have a load of tablespaces in backup mode, and you're doing a lot of delayed block cleanout.
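    One quick way to test that first guess, as a sketch (V$BACKUP shows ACTIVE for any datafile still in hot-backup mode, and such files generate extra redo for every change):

        SELECT t.name AS tablespace_name, d.name AS file_name, b.status
        FROM   v$backup b, v$datafile d, v$tablespace t
        WHERE  b.file# = d.file#
        AND    d.ts#   = t.ts#
        AND    b.status = 'ACTIVE';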
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
    There is a +"Preview"+ tab at the top of the text entry panel. Use this to check what your message will look like before you post the message. If it looks a complete mess you're unlikely to get a response. (Click on the +"Plain text"+ tab if you want to edit the text to tidy it up.)

  • Archive Logs every 15 minutes (Oracle 11g 64-bit EE on Linux RHEL 4)

    In our production database we have very few transactions, maybe a few MB in a whole day, but it is constantly generating archive logs every 15 minutes (sometimes every 14 minutes), each 50 MB in size. That consumes 4 GB of space a day for archive logs, which is way above what we expect.
    I have checked archive_lag_target and its value is 0.
    Any clue why it is creating a 50 MB archive log file every 14-15 minutes?

    It's easy enough to reduce redo log file size without downtime; just add new smaller redo log files, switch logfile a couple of times and drop the old redo log files.
    However, if the redo logs are filling up before they switch, then this will probably only make matters worse.
    If the redo logs are switching before they are full then maybe you also need to consider log_checkpoint_interval and log_checkpoint_timeout settings.
    If the redo logs are filling up before they switch then use the techniques suggested by a couple of the other posters to track down the guilty SQL.
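    To tell which case applies, compare the archived log sizes with the online log size; a rough sketch against V$ARCHIVED_LOG (archives much smaller than the 50 MB online logs would mean the switches happen before the logs fill):

        -- Recent archived log sizes in MB, newest first
        SELECT sequence#, first_time,
               ROUND(blocks * block_size / 1024 / 1024) AS size_mb
        FROM   v$archived_log
        ORDER  BY sequence# DESC;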

  • Oracle writes archive log files continuously

    Hi all,
    I don't know why my Oracle database has this problem. Online logs are being written to archive log files continuously (about every 3 minutes). My archive log file size is 300M. I have done a startup force of the database; it works, but archive log files are still being written very frequently. This is the alert log:
    >
    Sat Jan 1 14:23:19 2011
    Successfully onlined Undo Tablespace 5.
    Sat Jan 1 14:23:19 2011
    SMON: enabling tx recovery
    Sat Jan 1 14:23:19 2011
    Database Characterset is AL32UTF8
    Opening with internal Resource Manager plan
    where NUMA PG = 1, CPUs = 16
    replication_dependency_tracking turned off (no async multimaster replication found)
    Sat Jan 1 14:23:40 2011
    WARNING: AQ_TM_PROCESSES is set to 0. System operation might be adversely affected.
    Sat Jan 1 14:24:32 2011
    db_recovery_file_dest_size of 204800 MB is 28.64% used. This is a
    user-specified limit on the amount of space that will be used by this
    database for recovery-related files, and does not reflect the amount of
    space available in the underlying filesystem or ASM diskgroup.
    Sat Jan 1 14:24:40 2011
    Completed: ALTER DATABASE OPEN
    Sat Jan 1 14:27:05 2011
    Warning: PROCESSES may be too low for current load
    shared servers=360, want 7 more but starting only 3 more
    Warning: PROCESSES may be too low for current load
    shared servers=363, want 9 more but starting only 0 more
    Sat Jan 1 14:27:39 2011
    Warning: PROCESSES may be too low for current load
    shared servers=363, want 9 more but starting only 1 more
    Warning: PROCESSES may be too low for current load
    shared servers=364, want 9 more but starting only 0 more
    Sat Jan 1 14:28:58 2011
    Thread 1 advanced to log sequence 9463 (LGWR switch)
    Current log# 3 seq# 9463 mem# 0: /u01/oradata/TNORA3/redo03a.log
    Current log# 3 seq# 9463 mem# 1: /u02/oradata/TNORA3/redo03b.log
    Sat Jan 1 14:30:20 2011
    Errors in file /opt/app/oracle/admin/TNORA3/bdump/tnora_j000_17762.trc:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-00018: maximum number of sessions exceeded
    Sat Jan 1 14:39:47 2011
    Thread 1 advanced to log sequence 9464 (LGWR switch)
    Current log# 1 seq# 9464 mem# 0: /u01/oradata/TNORA3/redo01a.log
    Current log# 1 seq# 9464 mem# 1: /u02/oradata/TNORA3/redo01b.log
    Sat Jan 1 14:42:51 2011
    Errors in file /opt/app/oracle/admin/TNORA3/bdump/tnora_s008_17165.trc:
    ORA-07445: exception encountered: core dump [_intel_fast_memcpy.J()+80] [SIGSEGV] [Address not mapped to object] [0x2B8988CE2018] [] []
    Sat Jan 1 14:42:57 2011
    Thread 1 advanced to log sequence 9465 (LGWR switch)
    Current log# 2 seq# 9465 mem# 0: /u01/oradata/TNORA3/redo02a.log
    Current log# 2 seq# 9465 mem# 1: /u02/oradata/TNORA3/redo02b.log
    Sat Jan 1 14:43:11 2011
    found dead shared server 'S008', pid = (42, 1)
    Sat Jan 1 14:45:39 2011
    Thread 1 advanced to log sequence 9466 (LGWR switch)
    Current log# 3 seq# 9466 mem# 0: /u01/oradata/TNORA3/redo03a.log
    Current log# 3 seq# 9466 mem# 1: /u02/oradata/TNORA3/redo03b.log
    Sat Jan 1 14:48:47 2011
    Thread 1 cannot allocate new log, sequence 9467
    Checkpoint not complete
    Current log# 3 seq# 9466 mem# 0: /u01/oradata/TNORA3/redo03a.log
    Current log# 3 seq# 9466 mem# 1: /u02/oradata/TNORA3/redo03b.log
    Sat Jan 1 14:48:50 2011
    Thread 1 advanced to log sequence 9467 (LGWR switch)
    Current log# 1 seq# 9467 mem# 0: /u01/oradata/TNORA3/redo01a.log
    Current log# 1 seq# 9467 mem# 1: /u02/oradata/TNORA3/redo01b.log
    Sat Jan 1 14:52:11 2011
    Thread 1 advanced to log sequence 9468 (LGWR switch)
    Current log# 2 seq# 9468 mem# 0: /u01/oradata/TNORA3/redo02a.log
    Current log# 2 seq# 9468 mem# 1: /u02/oradata/TNORA3/redo02b.log
    Sat Jan 1 14:55:12 2011
    Thread 1 advanced to log sequence 9469 (LGWR switch)
    Current log# 3 seq# 9469 mem# 0: /u01/oradata/TNORA3/redo03a.log
    Current log# 3 seq# 9469 mem# 1: /u02/oradata/TNORA3/redo03b.log
    Sat Jan 1 14:58:12 2011
    Thread 1 advanced to log sequence 9470 (LGWR switch)
    Current log# 1 seq# 9470 mem# 0: /u01/oradata/TNORA3/redo01a.log
    Current log# 1 seq# 9470 mem# 1: /u02/oradata/TNORA3/redo01b.log
    Sat Jan 1 15:02:00 2011
    Thread 1 advanced to log sequence 9471 (LGWR switch)
    Current log# 2 seq# 9471 mem# 0: /u01/oradata/TNORA3/redo02a.log
    Current log# 2 seq# 9471 mem# 1: /u02/oradata/TNORA3/redo02b.log
    Sat Jan 1 15:05:16 2011
    Thread 1 advanced to log sequence 9472 (LGWR switch)
    Current log# 3 seq# 9472 mem# 0: /u01/oradata/TNORA3/redo03a.log
    Current log# 3 seq# 9472 mem# 1: /u02/oradata/TNORA3/redo03b.log
    Sat Jan 1 15:08:30 2011
    Thread 1 advanced to log sequence 9473 (LGWR switch)
    Current log# 1 seq# 9473 mem# 0: /u01/oradata/TNORA3/redo01a.log
    Current log# 1 seq# 9473 mem# 1: /u02/oradata/TNORA3/redo01b.log
    Sat Jan 1 15:11:12 2011
    Thread 1 cannot allocate new log, sequence 9474
    Checkpoint not complete
    Current log# 1 seq# 9473 mem# 0: /u01/oradata/TNORA3/redo01a.log
    Current log# 1 seq# 9473 mem# 1: /u02/oradata/TNORA3/redo01b.log
    Sat Jan 1 15:11:14 2011
    Thread 1 advanced to log sequence 9474 (LGWR switch)
    Current log# 2 seq# 9474 mem# 0: /u01/oradata/TNORA3/redo02a.log
    Current log# 2 seq# 9474 mem# 1: /u02/oradata/TNORA3/redo02b.log
    Sat Jan 1 15:14:15 2011
    Thread 1 advanced to log sequence 9475 (LGWR switch)
    Current log# 3 seq# 9475 mem# 0: /u01/oradata/TNORA3/redo03a.log
    Current log# 3 seq# 9475 mem# 1: /u02/oradata/TNORA3/redo03b.log
    >
    Please help me.

    This is the content of tail -100 /opt/app/oracle/admin/TNORA3/bdump/tnora_s008_17165.trc | more
    KCBS: Tot bufs in set segwise
    KCBS: nbseg[0] is 1568
    KCBS: nbseg[1] is 1568
    KCBS: nbseg[2] is 1569
    KCBS: nbseg[3] is 1568
    KCBS: nbseg[4] is 1568
    KCBS: nbseg[5] is 1568
    KCBS: nbseg[6] is 1569
    KCBS: nbseg[7] is 1568
    KCBS: nbseg[8] is 1568
    KCBS: nbseg[9] is 1568
    KCBS: nbseg[10] is 1569
    KCBS: nbseg[11] is 1568
    KCBS: nbseg[12] is 1568
    KCBS: nbseg[13] is 1568
    KCBS: nbseg[14] is 1569
    KCBS: nbseg[15] is 1568
    KCBS: nbseg[16] is 1568
    KCBS: nbseg[17] is 1568
    KCBS: nbseg[18] is 1569
    KCBS: nbseg[19] is 1568
    KCBS: Act cnt = 15713
    KCBS: bufcnt = 31365, nb_kcbsds = 31365
    KCBS: fbufcnt = 445
    KCBS: Tot bufs in set segwise
    KCBS: nbseg[0] is 1568
    KCBS: nbseg[1] is 1568
    KCBS: nbseg[2] is 1569
    KCBS: nbseg[3] is 1568
    KCBS: nbseg[4] is 1568
    KCBS: nbseg[5] is 1568
    KCBS: nbseg[6] is 1569
    KCBS: nbseg[7] is 1568
    KCBS: nbseg[8] is 1568
    KCBS: nbseg[9] is 1568
    KCBS: nbseg[10] is 1569
    KCBS: nbseg[11] is 1568
    KCBS: nbseg[12] is 1568
    KCBS: nbseg[13] is 1568
    KCBS: nbseg[14] is 1569
    KCBS: nbseg[15] is 1568
    KCBS: nbseg[16] is 1568
    KCBS: nbseg[17] is 1568
    KCBS: nbseg[18] is 1569
    KCBS: nbseg[19] is 1568
    KCBS: Act cnt = 15713
    KCBS: bufcnt = 31365, nb_kcbsds = 31365
    KCBS: fbufcnt = 445
    KCBS: Tot bufs in set segwise
    KCBS: nbseg[0] is 1568
    KCBS: nbseg[1] is 1568
    KCBS: nbseg[2] is 1568
    KCBS: nbseg[3] is 1569
    KCBS: nbseg[4] is 1568
    KCBS: nbseg[5] is 1568
    KCBS: nbseg[6] is 1568
    KCBS: nbseg[7] is 1569
    KCBS: nbseg[8] is 1568
    KCBS: nbseg[9] is 1568
    KCBS: nbseg[10] is 1568
    KCBS: nbseg[11] is 1569
    KCBS: nbseg[12] is 1568
    KCBS: nbseg[13] is 1568
    KCBS: nbseg[14] is 1568
    KCBS: nbseg[15] is 1569
    KCBS: nbseg[16] is 1568
    KCBS: nbseg[17] is 1568
    KCBS: nbseg[18] is 1568
    KCBS: nbseg[19] is 1569
    KCBS: Act cnt = 15713
    KCBS: bufcnt = 31365, nb_kcbsds = 31365
    KCBS: fbufcnt = 444
    KCBS: Tot bufs in set segwise
    KCBS: nbseg[0] is 1568
    KCBS: nbseg[1] is 1568
    KCBS: nbseg[2] is 1568
    KCBS: nbseg[3] is 1569
    KCBS: nbseg[4] is 1568
    KCBS: nbseg[5] is 1568
    KCBS: nbseg[6] is 1568
    KCBS: nbseg[7] is 1569
    KCBS: nbseg[8] is 1568
    KCBS: nbseg[9] is 1568
    KCBS: nbseg[10] is 1568
    KCBS: nbseg[11] is 1569
    KCBS: nbseg[12] is 1568
    KCBS: nbseg[13] is 1568
    KCBS: nbseg[14] is 1568
    KCBS: nbseg[15] is 1569
    KCBS: nbseg[16] is 1568
    KCBS: nbseg[17] is 1568
    KCBS: nbseg[18] is 1568
    KCBS: nbseg[19] is 1569
    KCBS: Act cnt = 15713
    KSOLS: Begin dumping all object level stats elements
    KSOLS: Done dumping all elements. Exiting.
    Dump event group for SESSION
    Unable to dump event group - no SESSION state objectDump event group for SYSTEM
    ssexhd: crashing the process...
    Shadow_Core_Dump = partial
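    The "Thread 1 cannot allocate new log ... Checkpoint not complete" messages in the alert log above mean LGWR had to wait to reuse an online log whose checkpoint had not yet completed; that usually points to online redo logs that are too small or too few for the redo rate. A first check, as a sketch:

        -- Current online redo log groups, sizes and states
        SELECT group#, thread#, bytes / 1024 / 1024 AS size_mb, members, status
        FROM   v$log;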

  • Archive log generation at 7 minute intervals

    One HP-UX 11.11 host runs two databases, uiivc and uiivc1. There is heavy archive log generation, about every 7 minutes, in both databases. The redo log size is 100mb, configured with 2 members in each of three groups for these databases. The database version is 9.2.0.8. Can anyone help me find out how to monitor what is filling the redo log files so frequently and generating so much archived redo (filling up the mount point)?
    Current settings are
    fast_start_mttr_target integer 300
    log_buffer integer 5242880
    Regards
    Manoj

    You can try to find the sessions which are generating lots of redo logs, check metalink doc id: 167492.1
    1) Query V$SESS_IO. This view contains the column BLOCK_CHANGES, which indicates how many blocks have been changed by the session. High values indicate a session generating lots of redo.
    The query you can use is:
    SELECT s.sid, s.serial#, s.username, s.program, i.block_changes
    FROM   v$session s, v$sess_io i
    WHERE  s.sid = i.sid
    ORDER  BY 5 DESC, 1, 2, 3, 4;
    Run the query multiple times and examine the delta between each occurrence of BLOCK_CHANGES. Large deltas indicate high redo generation by the session.
    2) Query V$TRANSACTION. This view contains information about the amount of undo blocks and undo records accessed by the transaction (in the USED_UBLK and USED_UREC columns).
    The query you can use is:
    SELECT s.sid, s.serial#, s.username, s.program, t.used_ublk, t.used_urec
    FROM   v$session s, v$transaction t
    WHERE  s.taddr = t.addr
    ORDER  BY 5 DESC, 6 DESC, 1, 2, 3, 4;
    Run the query multiple times and examine the delta between each occurrence of USED_UBLK and USED_UREC. Large deltas indicate high redo generation by the session.

  • Managing ARCHIVE Logs in Oracle 10.2.0.3

    I am working with a customer who seems to think there is a way of controlling the database, other than a custom job, script or RMAN, in how it creates, manages and deletes its archive logs while running in archivelog mode. He wants the database to automatically delete obsolete archive logs. He also wants to control the time between archive log writes in order to stop the growth of archive logs from filling up disk space.
    I am saying this is not possible. You either configure RMAN to delete the obsolete or expired archive logs based on your retention policy, or do it manually in the Enterprise Manager or Grid Control Console by deleting obsolete or expired logs.
    Am I correct or am I off base here?

    4.1.3 Sizing Redo Log Files
    The size of the redo log files can influence performance, because the behavior of the database writer and archiver processes depend on the redo log sizes. Generally, larger redo log files provide better performance. Undersized log files increase checkpoint activity and reduce performance.
    Although the size of the redo log files does not affect LGWR performance, it can affect DBWR and checkpoint behavior. Checkpoint frequency is affected by several factors, including log file size and the setting of the FAST_START_MTTR_TARGET initialization parameter. If the FAST_START_MTTR_TARGET parameter is set to limit the instance recovery time, Oracle automatically tries to checkpoint as frequently as necessary. Under this condition, the size of the log files should be large enough to avoid additional checkpointing due to under sized log files. The optimal size can be obtained by querying the OPTIMAL_LOGFILE_SIZE column from the V$INSTANCE_RECOVERY view. You can also obtain sizing advice on the Redo Log Groups page of Oracle Enterprise Manager Database Control.
    It may not always be possible to provide a specific size recommendation for redo log files, but redo log files in the range of a hundred megabytes to a few gigabytes are considered reasonable. Size your online redo log files according to the amount of redo your system generates. A rough guide is to switch logs at most once every twenty minutes.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/build_db.htm#sthref237
    If you are talking about data guard then:
    Automatic Deletion of Applied Archive Logs
    Archived logs, once they are applied on the logical standby database, will be automatically deleted by SQL Apply.
    This feature reduces storage consumption on the logical standby database and improves Data Guard manageability.
    See also:
    Oracle Data Guard Concepts and Administration for details
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14214/chapter1.htm#sthref269
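    For the deletion half of the original question, the standard approach is an RMAN retention policy plus a scheduled DELETE OBSOLETE; a minimal sketch (the 7-day window is an example):

        RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
        RMAN> DELETE OBSOLETE;

    As for throttling how often archive logs are written: there is no such control. Archiving is driven by redo generation and log switches (ARCHIVE_LAG_TARGET can force extra switches, but nothing can postpone one).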

  • Manually apply archived log to standby database.

    Hi all,
    I am working on Oracle 10g EE (10.2.0.4). I have a very difficult database switchover case on my hands.
    1.> There are two databases: one primary (prim) and another standby (standby).
    2.> Initially they were configured as Oracle Data Guard, where prim was primary and standby was standby, built using RMAN (RMAN> duplicate target database for standby dorecover).
    3.> But the rate of generation of archive logs is very high (20 archive logs per hour, 200mb each).
    4.> So automatic shipping of archive logs has been disabled from the primary (log_archive_dest_state_2=defer and nullifying the value of log_archive_dest_2).
    5.> Now, periodically, archived logs are shipped to the DR site manually and applied manually (alter database recover automatic standby database;).
    6.> Now a hardware problem has occurred at the primary; it will take 72 hours to come back up.
    How can I use the standby database as primary for those 72 hours? As archive log apply is done manually and parameters have been changed, can I switch over these two?
    Is it possible to switch over and switch back these two databases with such a configuration?

    How can I use the standby database as primary for those 72 hours? [...] Is it possible to switch over and switch back these two databases with such a configuration?
    As things stand, you can fail over your standby to the primary role. You should have Flashback Database enabled, so create a guaranteed restore point first; the standby can then be opened in read/write mode, and once you are finished you can flash it back to that point.
    http://docs.oracle.com/cd/B19306_01/server.102/b14239/manage_ps.htm#i1017111
    That approach only applies if you are prepared to lose the changes, though, and in your case it doesn't work: once you start using the standby as primary there will be live production data on it, which you need very much.
    So there is a lot of work to do.
    You would have to rebuild the standby database and then switch back, so it is a bit of a round trip.
    Have you enabled Data Guard Broker?
    Edited by: CKPT on Mar 13, 2012 1:43 PM
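    For reference, the test-then-flashback sequence from that link looks roughly like this; a sketch only (the restore point name is an example, and it assumes a physical standby with Flashback Database enabled):

        -- On the standby, before activating it
        CREATE RESTORE POINT before_failover GUARANTEE FLASHBACK DATABASE;
        ALTER DATABASE ACTIVATE STANDBY DATABASE;
        ALTER DATABASE OPEN;
        -- ... run as primary ...
        -- Afterwards, discard the interim changes and convert back
        SHUTDOWN IMMEDIATE;
        STARTUP MOUNT;
        FLASHBACK DATABASE TO RESTORE POINT before_failover;
        ALTER DATABASE CONVERT TO PHYSICAL STANDBY;

    As CKPT notes, this only fits a test: it throws away everything written while the standby was open, which is exactly what you cannot afford with live production data.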

  • Unable to find archived log

    Hi
    I am restoring a hot backup taken through RMAN using the following commands:
    configure controlfile autobackup on;
    BACKUP DATABASE ;
    BACKUP ARCHIVELOG ALL DELETE INPUT;
    Now I am going to restore it using the following commands:
    restore spfile from autobackup;
    restore controlfile from autobackup;
    shutdown immediate;
    startup mount;
    restore database;
    RECOVER DATABASE;
    ALTER DATABASE OPEN RESETLOGS;
    It goes fine until restore database. At recover database I get the following errors:
    archived log for thread 1 with sequence 2461 is already on disk as file /u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_1_8fbs9bvt_.log
    archived log for thread 1 with sequence 2462 is already on disk as file /u01/app/oracle/fast_recovery_area/XE/onlinelog/o1_mf_2_8fbs9chb_.log
    unable to find archived log
    archived log thread=1 sequence=545
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of recover command at 09/11/2013 20:41:43
    RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 545 and starting SCN of 25891726
    I have checked the backup folder and there are only empty date-wise folders under the archivedlog folder.
    If I run RMAN> ALTER DATABASE OPEN RESETLOGS; I get:
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of alter db command at 09/11/2013 20:43:01
    ORA-01190: control file or data file 1 is from before the last RESETLOGS
    ORA-01110: data file 1: '/u01/app/oracle/oradata/XE/system.dbf'
    If I run RMAN> recover database until sequence 545; I get:
    Starting recover at 11-SEP-13
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=695 device type=DISK
    starting media recovery
    unable to find archived log
    archived log thread=1 sequence=545
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of recover command at 09/11/2013 21:09:34
    RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 545 and starting SCN of 25891726
    I don't mind if some data is lost. I will be really thankful if someone can help me get my database open.
    Habib

    The way you are trying to recover will attempt to recover up to the last known SCN. Try a point-in-time recovery up to a few minutes before the database was shut down or crashed.
    Try something like this:
    run {
      set until time "to_date('2013-09-11:00:00:00', 'yyyy-mm-dd:hh24:mi:ss')";
      restore spfile from autobackup;
      restore controlfile from autobackup;
      shutdown immediate;
      startup mount;
      restore database;
      recover database;
      alter database open resetlogs;
    }
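    Since the missing log here is sequence 545, the same idea can also be expressed as sequence-based recovery; a sketch (UNTIL SEQUENCE stops just short of the named log, so nothing from sequence 545 onwards is needed):

        run {
          set until sequence 545 thread 1;
          restore database;
          recover database;
          alter database open resetlogs;
        }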

  • Improving redo log writer performance

    I have a database on RAC (2 nodes)
    Oracle 10g
    Linux 3
    2 servers PowerEdge 2850
    I'm tuning my database with Spotlight. I already have this alert:
    "The Average Redo Write Time alarm is activated when the time taken to write redo log entries exceeds a threshold."
    The servers are not on RAID 5.
    How can I improve redo log writer performance?
    Unlike most other Oracle write I/Os, Oracle sessions must wait for redo log writes to complete before they can continue processing.
    Therefore, redo log devices should be placed on fast devices.
    Most modern disks should be able to process a redo log write in less than 20 milliseconds, and often much lower.
    To reduce redo write time see Improving redo log writer performance.
    See Also:
    Tuning Contention - Redo Log Files
    Tuning Disk I/O - Archive Writer
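    Before moving the logs to faster devices, it is worth measuring the current redo write latency; a sketch against V$SYSTEM_EVENT (TIME_WAITED is in centiseconds, hence the factor of 10 to get milliseconds):

        -- Average wait for commits (log file sync) and LGWR writes
        SELECT event, total_waits,
               ROUND(time_waited / NULLIF(total_waits, 0) * 10, 2) AS avg_ms
        FROM   v$system_event
        WHERE  event IN ('log file sync', 'log file parallel write');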

    Some comments on the section that was pulled from Wikipedia. There is some confusion in the market, as there are different types of solid state disks with different pros and cons. The first major point is that the quote pulled from Wikipedia addresses issues with Flash hard disk drives. Flash disks are one type of solid state disk that would be a bad solution for redo acceleration (as I will attempt to describe below), though they could be useful for accelerating read-intensive applications. The type of solid state disk used for redo logs uses DDR RAM as the storage media. You may decide to discount my advice because I work with one of these SSD manufacturers, but I think if you do enough research you will see the point. There are many articles, and many more customers, who have used SSD to accelerate Oracle.
    > Assuming that you are not CPU constrained, moving the online redo to high-speed solid-state disk can make a huge difference.
    > Do you honestly think this is practical and usable advice Don? There is a HUGE price difference between SSD and normal hard disks. Never mind the following disadvantages. Quoting (http://en.wikipedia.org/wiki/Solid_state_disk):
    > Price - As of early 2007, flash memory prices are still considerably higher per gigabyte than those of comparable conventional hard drives - around $10 per GB compared to about $0.25 for mechanical drives.
    Comment: Prices for DDR RAM based systems are actually higher than this, with a typical list price around $1000 per GB. Your concern, however, is not price per capacity but price for performance. How many spindles will you have to spread your redo log across to get the performance that you need? How much impact are the redo logs having on your RAID cache effectiveness? Our system is obviously geared to the enterprise, where Oracle is supporting mission critical databases and a huge return can be made on accelerating Oracle.
    > Capacity - The capacity of SSDs tends to be significantly smaller than the capacity of HDDs.
    Comment: This statement is true. Per hard disk drive versus per individual solid state disk system, you can typically get higher density of storage with a hard disk drive. However, if your goal is redo log acceleration, storage capacity is not your bottleneck. Write performance, however, can be. Keep in mind, just as with any storage media, you can deploy an array of solid state disks that provide terabytes of capacity (with either DDR or flash).
    > Lower recoverability - After mechanical failure the data is completely lost as the cell is destroyed, while if a normal HDD suffers mechanical failure the data is often recoverable using expert help.
    Comment: If you lose a hard drive for your redo log, the last thing you are likely to do is have a disk restoration company partially restore your data. You ought to be getting data from your mirror or RAID to rebuild the failed disk. Similarly, with solid state disks (flash or DDR) we recommend host-based mirroring to provide enterprise levels of reliability. In our experience, a DDR based solid state disk has a failure rate equal to the odds of losing two hard disk drives in a RAID set.
    > Vulnerability against certain types of effects, including abrupt power loss (especially DRAM based SSDs), magnetic fields and electric/static charges compared to normal HDDs (which store the data inside a Faraday cage).
    Comment: This statement is all FUD. For example, our DDR RAM based systems have redundant power supplies, N+1 redundant batteries, and four RAID protected "hard disk drives" for data backup. The memory is ECC protected and Chipkill protected.
    > Slower than conventional disks on sequential I/O
    Comment: Most flash drives will be slower on sequential I/O than a hard disk drive (to really understand this you should know there are different kinds of flash memory that also impact flash performance). DDR RAM based systems, however, offer enormous performance benefits versus hard disk or flash based systems for sequential or random writes. DDR RAM systems can handle over 400,000 random write I/Os per second (the number is slightly higher for sequential access). We would be happy to share with you some Oracle ORION benchmark data to make the point. For redo logs on a heavily transactional system, the latency of the redo log storage can be the ultimate limit on the database.
    > Limited write cycles. Typical Flash storage will typically wear out after 100,000-300,000 write cycles, while high endurance Flash storage is often marketed with endurance of 1-5 million write cycles (many log files, file allocation tables, and other commonly used parts of the file system exceed this over the lifetime of a computer). Special file systems or firmware designs can mitigate this problem by spreading writes over the entire device, rather than rewriting files in place.
    Comment: This statement is mostly accurate but refers only to flash drives. DDR RAM based systems, such as those Don's books refer to, do not have this limitation.
    > Looking at many of your postings to Oracle Forums thus far Don, it seems to me that you are less interested in providing actual practical help, and more interested in self-promotion - of your company and the Oracle books produced by it.
    > .. and that is not a very nice approach when people post real problems wanting real world practical advice and suggestions.
    Comment: Contact us and we will see if we can prove to you that Don, and any number of other reputable Oracle consultants, recommend using DDR based solid state disk to solve redo log performance issues. In fact, if it looks like your system can see a serious performance increase, we would be happy to put you on our evaluation program to try it out so that you can do it at no cost from us.

  • Archive log generation

    Hi all,
    In my production environment, archive logs are sometimes generated at 5-6 logs a minute, even though very few users are connected to the database right now.
    -rw-r----- 1 oraprod dba 10484224 Jan 12 14:10 prod_arch_4810.arc
    -rw-r----- 1 oraprod dba 10484224 Jan 12 14:10 prod_arch_4811.arc
    -rw-r----- 1 oraprod dba 10483712 Jan 12 14:10 prod_arch_4812.arc
    -rw-r----- 1 oraprod dba 10484224 Jan 12 14:10 prod_arch_4813.arc
    -rw-r----- 1 oraprod dba 10484224 Jan 12 14:10 prod_arch_4814.arc
    -rw-r----- 1 oraprod dba 10484224 Jan 12 14:10 prod_arch_4815.arc
    Why is this happening?
    Any comments or ideas to resolve this?
    Yusuf

    Whenever you create a thread, it is always advisable to specify your current OS and DB versions.
    You could be generating this redo information by means of your scheduled tasks or by current user activity; a small number of concurrent users doesn't mean they won't generate a lot of transactions. Check V$UNDOSTAT, V$ROLLSTAT, V$TRANSACTION, and V$SESSION to monitor user and transaction activity.
    10M for a redo log size is, IMO, very little for the transaction requirements of most current databases. Your database is generating transaction information at a rate of about 50M/min. With 100M redo log files you would generate around one archived log every two minutes, instead of the current 5-6 archived logs per minute.
    Since your database is highly transactional, make sure you have enough free space to store your generated archive log files; you will be generating about 3G/hr.
    ~ Madrid
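    If you do move to larger logs, the online technique mentioned earlier in the thread looks roughly like this; a sketch only (group numbers, paths and sizes are examples):

        -- Add new, larger groups alongside the old 10M ones
        ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/prod/redo04.log') SIZE 100M;
        ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/oradata/prod/redo05.log') SIZE 100M;
        ALTER DATABASE ADD LOGFILE GROUP 6 ('/u01/oradata/prod/redo06.log') SIZE 100M;
        -- Switch until each old group shows INACTIVE in V$LOG, then drop it
        ALTER SYSTEM SWITCH LOGFILE;
        ALTER DATABASE DROP LOGFILE GROUP 1;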

  • Enabling archive log

    Hi all,
    1) How do I enable archive log mode in Oracle 10g? I found a link on Google: http://cuddletech.com/articles/oracle/node58.html
    Is that the correct way? Do I need to bring the database down?
    2) Could archive logging affect the performance of the database if I have an average of one write transaction to the database? I have a database where it was already switched on by my predecessor. I notice that it generates a 40,000kb file every 3 minutes.

    The questions are, how often are your redo log switches, are the users noticing a performance issue, what level of service have you agreed to for backup and recovery?
    There can be trade-offs made, depending on the exact circumstances. As Ed noted, not losing data is the prime directive. There are many ways to avoid losing data, someone has to decide which are most appropriate and which are just too expensive. Someone playing what-if scenarios with your projected budgets is going to have a different idea about that than someone who is in charge of millions of dollars of online orders. DW and DSS systems often have other places to get the data and may not require being able to point-in-time recover. It depends.
    What you need to avoid is managers saying things like "we only care about restoring to last nights backup," when they don't understand what they are saying.
    Some bulk-loading situations can benefit from nologging operations, and some (like a massive update in an app upgrade) can benefit from going out of and into archivelog mode, since such things can generate huge amounts of redo without a requirement to recover halfway through them. Just be sure and take an appropriate backup after such things. That also means, take a backup as soon as you go into archivelog mode.
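    On the mechanics of question 1: yes, in 10g the switch to archivelog mode requires a clean bounce through MOUNT. A minimal sketch:

        SHUTDOWN IMMEDIATE;
        STARTUP MOUNT;
        ALTER DATABASE ARCHIVELOG;
        ALTER DATABASE OPEN;

    And, per the advice above, take a full backup as soon as the database is back open.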

  • System I/O and Too Many Archive Logs

    Hi all,
    This is frustrating me. Our production database has begun to produce too many archived redo logs, instantly, again. This happened before: two months ago our database was producing too many archive logs, and just then we began to get async I/O errors. We consulted a DBA and he restarted the database server, telling us that it was caused by the system(???).
    But after this restart the amount of archive logs decreased drastically. I was deleting the logs by hand (350 GB DB, 300 GB arch area), and after this the archive logs never exceeded 10% of the 300 GB archive area. Right now the logs are increasing by 1% (3 GB) per 7-8 mins, which is too much.
    I checked from Enterprise Manager: the System I/O graph is continuous and the details show processes like ARC0, ARC1, LGWR (log file sequential read and db file parallel write are the most active ones). Also, physical reads are very inconsistent and can exceed 30000 KB at times. The undo tablespace is full nearly all of the time, causing ORA-01555.
    The above symptoms all began today. The database is closed at 3:00 am to take an offline backup and opened at 6:00 am every day.
    Nothing has changed on the database (9.2.0.8), applications (11.5.10.2) or OS (AIX 5.3).
    What is the reason for this behaviour? Please help me.
    Thanks in advance.
    Regards.
    Burak

    Hello Burak,
    A high number of archive logs is being created because you may have massive redo generation on your database. Do you have an application that updates, deletes or inserts into any kind of table?
    What is written in the alert.log file?
    Do you have the undo tablespace with the guarantee retention option, by the way?
    Have you ever checked the log file switch frequency map?
    Please use the SQL below to determine the switch frequency:
    SELECT * FROM (
      SELECT * FROM (
        SELECT TO_CHAR(FIRST_TIME, 'DD/MM') AS "DAY"
             , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '00', 1, 0)), '999') "00:00"
             , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '01', 1, 0)), '999') "01:00"
             , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '02', 1, 0)), '999') "02:00"
             , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '03', 1, 0)), '999') "03:00"
             , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '04', 1, 0)), '999') "04:00"
             , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '05', 1, 0)), '999') "05:00"
             , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '06', 1, 0)), '999') "06:00"
             , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '07', 1, 0)), '999') "07:00"
             , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '08', 1, 0)), '999') "08:00"
             , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '09', 1, 0)), '999') "09:00"
             , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '10', 1, 0)), '999') "10:00"
             , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '11', 1, 0)), '999') "11:00"
             , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '12', 1, 0)), '999') "12:00"
             , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '13', 1, 0)), '999') "13:00"
             , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '14', 1, 0)), '999') "14:00"
             , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '15', 1, 0)), '999') "15:00"
             , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '16', 1, 0)), '999') "16:00"
             , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '17', 1, 0)), '999') "17:00"
             , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '18', 1, 0)), '999') "18:00"
             , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '19', 1, 0)), '999') "19:00"
             , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '20', 1, 0)), '999') "20:00"
             , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '21', 1, 0)), '999') "21:00"
             , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '22', 1, 0)), '999') "22:00"
             , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '23', 1, 0)), '999') "23:00"
        FROM V$LOG_HISTORY
        WHERE extract(year FROM FIRST_TIME) = extract(year FROM sysdate)
        GROUP BY TO_CHAR(FIRST_TIME, 'DD/MM')
      ) ORDER BY TO_DATE(extract(year FROM sysdate) || DAY, 'YYYY DD/MM') DESC
    ) WHERE ROWNUM < 8;
    Ogan

  • Recovery question using archive log

    HPUX 11X
    Oracle 10.2.0.2
    I have a standby database that the customers want to test the ability to fail over. They also want to test the data integrity of the standby once it's brought on line as the primary.
    I've already told the customer that any writes they do on the standby during the test window WILL be lost. I plan on doing an incomplete recovery back to a specific sequence number / archive log when the tests are complete.
    My plan is to take note of the current sequence number just before I bring the standby live. When all is said and done, I was going to do a recover back to the sequence number one PRIOR to that one. Then when I place the db back in standby mode, it will sync back up to the primary.
    What I need to know is if this is a good or a bad idea. Should I plan on doing it differently and for what reason.
    Thanks! :)

    Use flashback database to recover the former primary database to a new standby.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/scenarios.htm#i1049997
    Therefore the changes made on standby won't be lost (unless you want to lose them).
    If your databases do not have Flashback Database feature enabled, you can use incomplete recovery instead.
    I think Flashback Database should be your first option.

  • Primary/standby and 3rd archive log destination?

    Running Oracle EE 10.2.0.4, Linux 64-bit. I have a primary and standby configuration using Data Guard, with appropriate LOG_ARCHIVE_DEST_1 and LOG_ARCHIVE_DEST_2 for the primary and standby; all works as expected. I want to multiplex the archived log files by adding a 3rd archive log destination on a different disk. I'm not clear on whether specifying a second 'primary' archive destination will write to both DEST_1 and DEST_3 or just alternate between the two. I want to make sure that both DEST_1 and DEST_3 are written to (there can be a lag for DEST_3).
    Is the following all I need or are there additional parameters to LOG_ARCHIVE_DEST_n I'm missing?
    LOG_ARCHIVE_DEST_3 = 'valid_for=(ONLINE_LOGFILE,ALL_ROLES)', 'location="/myThirdLoc"'
    Thanks -
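    For what it's worth, enabled local destinations are not alternated: every enabled LOCATION destination receives a copy of each archived log, so DEST_1 and DEST_3 would both be written. The attributes normally go together in a single string; a sketch (the path is the example from the question):

        ALTER SYSTEM SET log_archive_dest_3 =
          'LOCATION=/myThirdLoc VALID_FOR=(ONLINE_LOGFILE,ALL_ROLES)' SCOPE=BOTH;
        ALTER SYSTEM SET log_archive_dest_state_3 = 'ENABLE' SCOPE=BOTH;

    Alternating behaviour only appears if you configure the ALTERNATE attribute explicitly.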

    The database hangs because the recovery area is full. My test database is 16.7G and I gave the recovery filesystem 19G; apparently that is not enough. After I set up the backup with daily incrementals and ran it for 5 days, the recovery area filled up and the DB hung. How do you free up the space, or set up the backup so that it can recycle the space? My DB is 10g R2 on RHEL3.

  • Recover database from archive log: Time based recovery

    Hi,
    Could you please help with the following?
    I had a power outage on the machine running Oracle 9i on Solaris OS 9.
    Oracle mounts but fails to open.
    Once it is started it shows this error message:
    SQL> startup;
    ORACLE instance started.
    Total System Global Area 9457089744 bytes
    Fixed Size 744656 bytes
    Variable Size 3154116608 bytes
    Database Buffers 6291456000 bytes
    Redo Buffers 10772480 bytes
    Database mounted.
    ORA-01113: file 4 needs media recovery
    ORA-01110: data file 4: '/opt/oracle/oradata/sysdb/indx/indx01.dbf'
    So I tried to recover the database
    SQL> recover database;
    ORA-00279: change 1652948240 generated at 12/03/2007 13:09:08 needed for thread
    1
    ORA-00289: suggestion : /opt/oracle/oradata/nobilldb/archive/1_183942.dbf
    ORA-00280: change 1652948240 for thread 1 is in sequence #183942
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    ORA-00279: change 1652948816 generated at 12/03/2007 13:09:19 needed for thread
    1
    ORA-00289: suggestion : /opt/oracle/oradata/nobilldb/archive/1_183943.dbf
    ORA-00280: change 1652948816 for thread 1 is in sequence #183943
    ORA-00278: log file '/opt/oracle/oradata/nobilldb/archive/1_183942.dbf' no
    longer needed for this recovery
    The power outage was at 16:00, and the first archive log requested for recovery ('/opt/oracle/oradata/nobilldb/archive/1_183942.dbf') was generated at 11 am.
    Every time I apply the next sequence it gives the same message and asks for the next sequence. I have more than 900 archive logs from 11am to 4pm, each about 100mb, and it takes 1 minute for each one to come back with this prompt.
    How can I start my recovery from, say, 15:45 onwards until 16:15?
    I have all archive logs in the proper destination.
    My database is still not open and it has been applying archive logs for 5 hours now; please help me with this.
    Thanks in advance

    Wrong forum. Post your question in the following forum:
    General Database Discussions
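    One practical note on the original question: media recovery must start from the datafile checkpoint SCN, so the logs from 11am onwards cannot be skipped; an UNTIL TIME clause only controls where recovery stops, not where it starts. The per-log prompting, however, can be switched off so the recovery runs unattended; a sketch:

        SQL> SET AUTORECOVERY ON
        SQL> RECOVER DATABASE;

    Typing AUTO at the first 'Specify log' prompt has the same effect.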
