Excessive redo log

Hello all:
Oracle 10.2.0.3
OS: Windows 2003
A vendor app running against the database is generating excessive redo, switching logs every 10 minutes.
The online redo logs are already 1 GB and the vendor wants them increased further. Suggestions on how to troubleshoot, please.
db_file_multiblock_read_count is set to 128 and compatible is set to 10.2.0.2.
Thanks
Sannidhi

You need to figure out what is causing this much redo to be generated. If your redo logs are 1 GB and they are still being switched every 10 minutes, then something is definitely generating too much redo.
If you increase the redo log size without analyzing the issue, even the larger size may not be sufficient and may fill up just as quickly.
With LogMiner you can find out what is in the log file, or identify and trace the sessions that are active when the frequent switches happen. A minimal LogMiner sketch follows.
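As a minimal sketch of the LogMiner approach (the log file name below is just a placeholder; point it at one of your own online or archived logs, e.g. one that filled during a busy 10-minute window):
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(
    LogFileName => 'D:\ORADATA\PROD\REDO01.LOG',   -- placeholder path
    Options     => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.START_LOGMNR(
    Options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/
-- Which objects and operations account for most of the redo in that log?
SELECT seg_owner, seg_name, operation, COUNT(*) AS redo_records
FROM v$logmnr_contents
GROUP BY seg_owner, seg_name, operation
ORDER BY redo_records DESC;
EXEC DBMS_LOGMNR.END_LOGMNR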
Edited by: sac on Sep 11, 2009 10:47 AM

Similar Messages

  • How to reduce excessive redo log generation in Oracle 10G

    Hi All,
    Please let me know whether there is any way to reduce excessive redo log generation in Oracle DB 10.2.0.3.
    Previously only about 15 archive log files were generated per day, but nowadays it has increased to 40-45.
    Below are the sizes of the redo log file members:
    L.BYTES/1024/1024     MEMBER
    200     /u05/applprod/prdnlog/redolog1a.dbf
    200     /u06/applprod/prdnlog/redolog1b.dbf
    200     /u05/applprod/prdnlog/redolog2a.dbf
    200     /u06/applprod/prdnlog/redolog2b.dbf
    200     /u05/applprod/prdnlog/redolog3a.dbf
    200     /u06/applprod/prdnlog/redolog3b.dbf
    Here is some content from the alert log for your reference, showing how frequently log switches are occurring:
    Beginning log switch checkpoint up to RBA [0x441f.2.10], SCN: 4871839752
    Thread 1 advanced to log sequence 17439
    Current log# 3 seq# 17439 mem# 0: /u05/applprod/prdnlog/redolog3a.dbf
    Current log# 3 seq# 17439 mem# 1: /u06/applprod/prdnlog/redolog3b.dbf
    Tue Jul 13 14:46:17 2010
    Completed checkpoint up to RBA [0x441f.2.10], SCN: 4871839752
    Tue Jul 13 14:46:38 2010
    Beginning log switch checkpoint up to RBA [0x4420.2.10], SCN: 4871846489
    Thread 1 advanced to log sequence 17440
    Current log# 1 seq# 17440 mem# 0: /u05/applprod/prdnlog/redolog1a.dbf
    Current log# 1 seq# 17440 mem# 1: /u06/applprod/prdnlog/redolog1b.dbf
    Tue Jul 13 14:46:52 2010
    Completed checkpoint up to RBA [0x4420.2.10], SCN: 4871846489
    Tue Jul 13 14:53:33 2010
    Beginning log switch checkpoint up to RBA [0x4421.2.10], SCN: 4871897354
    Thread 1 advanced to log sequence 17441
    Current log# 2 seq# 17441 mem# 0: /u05/applprod/prdnlog/redolog2a.dbf
    Current log# 2 seq# 17441 mem# 1: /u06/applprod/prdnlog/redolog2b.dbf
    Tue Jul 13 14:53:37 2010
    Completed checkpoint up to RBA [0x4421.2.10], SCN: 4871897354
    Tue Jul 13 14:55:37 2010
    Incremental checkpoint up to RBA [0x4421.4b45c.0], current log tail at RBA [0x4421.4b5c5.0]
    Tue Jul 13 15:15:37 2010
    Incremental checkpoint up to RBA [0x4421.4d0c1.0], current log tail at RBA [0x4421.4d377.0]
    Tue Jul 13 15:35:38 2010
    Incremental checkpoint up to RBA [0x4421.545e2.0], current log tail at RBA [0x4421.54ad9.0]
    Tue Jul 13 15:55:39 2010
    Incremental checkpoint up to RBA [0x4421.55eda.0], current log tail at RBA [0x4421.56aa5.0]
    Tue Jul 13 16:15:41 2010
    Incremental checkpoint up to RBA [0x4421.58bc6.0], current log tail at RBA [0x4421.596de.0]
    Tue Jul 13 16:35:41 2010
    Incremental checkpoint up to RBA [0x4421.5a7ae.0], current log tail at RBA [0x4421.5aae2.0]
    Tue Jul 13 16:42:28 2010
    Beginning log switch checkpoint up to RBA [0x4422.2.10], SCN: 4872672366
    Thread 1 advanced to log sequence 17442
    Current log# 3 seq# 17442 mem# 0: /u05/applprod/prdnlog/redolog3a.dbf
    Current log# 3 seq# 17442 mem# 1: /u06/applprod/prdnlog/redolog3b.dbf
    Thanks in advance

    hi,
    Use the script below to find out in which hours the most archives are generated, and check what is running during those hours, e.g. whether MVs are refreshing or any program is deleting every row from a table.
    select
      to_char(first_time,'DD-MM-YY') day,
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'00',1,0)),'999') "00",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'01',1,0)),'999') "01",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'02',1,0)),'999') "02",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'03',1,0)),'999') "03",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'04',1,0)),'999') "04",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'05',1,0)),'999') "05",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'06',1,0)),'999') "06",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'07',1,0)),'999') "07",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'08',1,0)),'999') "08",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'09',1,0)),'999') "09",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'10',1,0)),'999') "10",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'11',1,0)),'999') "11",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'12',1,0)),'999') "12",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'13',1,0)),'999') "13",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'14',1,0)),'999') "14",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'15',1,0)),'999') "15",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'16',1,0)),'999') "16",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'17',1,0)),'999') "17",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'18',1,0)),'999') "18",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'19',1,0)),'999') "19",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'20',1,0)),'999') "20",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'21',1,0)),'999') "21",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'22',1,0)),'999') "22",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'23',1,0)),'999') "23",
      COUNT(*) TOT
    from v$log_history
    group by to_char(first_time,'DD-MM-YY')
    order by day;
    thanks,
    baskar.l
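    To put a size on those hours rather than just a switch count, something like this against v$archived_log should also work (a sketch, assuming the DB is in archivelog mode; blocks * block_size approximates each archive's size):
    select to_char(first_time,'DD-MM-YY HH24') hr,
           count(*) logs,
           round(sum(blocks*block_size)/1024/1024) mb
    from v$archived_log
    group by to_char(first_time,'DD-MM-YY HH24')
    order by 1;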

  • Standby Redo Log just sits as IN-MEMORY

    Hi, I have shipped an archived redo log from my Primary to my physical standby.
    I can see the log arriving at my Standby and being applied
    select thread#, max(sequence#) "Last Standby Seq Received"
    from v$archived_log val, v$database vdb
    where val.resetlogs_change# = vdb.resetlogs_change#
    group by thread# order by 1;
    returns
       THREAD# Last Standby Seq Received
             1                       151
    select thread#, max(sequence#) "Last Standby Seq Applied"
    from v$archived_log val, v$database vdb
    where val.resetlogs_change# = vdb.resetlogs_change#
    and applied='YES'
    group by thread# order by 1;
       THREAD# Last Standby Seq Applied
             1                      150
    select stamp,name,applied
    from v$archived_log
    where applied != 'YES'
    STAMP      NAME                                                                                          APPLIED
    827498375  /home/app/oracle/fast_recovery_area/STANDBYL/archivelog/2013_09_30/o1_mf_1_151_94lrqpj3_.arc  IN-MEMORY
    This log continually sits like this
    I know that Redo Apply is active
    select * from v$managed_standby where process = 'MRP0';
    PROCESS  PID   STATUS        CLIENT_P  CLIENT_PID  CLIENT_DBID  GROUP#  RESETLOG_ID  THREAD#  SEQUENCE#  BLOCK#  BLOCKS  DELAY_MINS  KNOWN_AGENTS  ACTIVE_AGENTS
    MRP0     3068  APPLYING_LOG  N/A       N/A         N/A          N/A     820252586    1        152        5038    102400  0           3             3
    It looks as if 151 has been applied - yet
    select thread#, max(sequence#) "Last Standby Seq Applied"
    from v$archived_log val, v$database vdb
    where val.resetlogs_change# = vdb.resetlogs_change#
    and applied='YES'
    group by thread# order by 1;
    still shows 150 !
    Sequence 151 is not that big; I would have expected it to have been applied by now (it has been in excess of 30 minutes).
    Also, I know there is no defer or time delay on the archive destination setting on the Primary.
    Any ideas why this standby redo log just sits IN-MEMORY?
    thanks,
    Jim

    Hello;
    Not able to reproduce issue. Using Oracle 11.2.0.3 without Real-time apply.
    select thread#, max(sequence#) "Last Standby Seq Received"
    from v$archived_log val, v$database vdb
    where val.resetlogs_change# = vdb.resetlogs_change#
    group by thread# order by 1;
    THREAD#                Last Standby Seq Received
    1                      194    
    select thread#, max(sequence#) "Last Standby Seq Applied"
    from v$archived_log val, v$database vdb
    where val.resetlogs_change# = vdb.resetlogs_change#
    and applied='YES'
    group by thread# order by 1;
    THREAD#                Last Standby Seq Applied
    1                      194         
    SELECT ARCH.THREAD# "Thread", ARCH.SEQUENCE# "Last Sequence Received", APPL.SEQUENCE# "Last Sequence Applied"
    FROM
    (SELECT THREAD# ,SEQUENCE# FROM V$ARCHIVED_LOG WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
    (SELECT THREAD# ,SEQUENCE# FROM V$LOG_HISTORY WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
    WHERE
    Arch.Thread# = Appl.Thread#
    ORDER BY 1;
    Thread                 Last Sequence Received Last Sequence Applied 
    1                      194                    194                  
    select * from v$managed_standby where process = 'MRP0';
    ( shows the next sequence 195 on mine )
    Is it possible the Standby has one extra archive? I've noticed that sometimes you will see one extra on that side which appears to have nothing to do with the Data Guard process.
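    For what it's worth, a couple of quick checks on the standby can also help narrow this down (a sketch; run on the standby):
    -- Any gap detected in the archives received from the primary?
    select * from v$archive_gap;
    -- What every recovery/transport process is doing right now
    select process, status, thread#, sequence#, block#, blocks
    from v$managed_standby
    order by process;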
    Best Regards
    mseberg

  • Urgent: Excessive archive logs 10g 10.2.0.4

    Hi,
    I am experiencing excessive archive logging.
    This has been happening for the past 3 days. Usually archive logs were taking about 250 MB.
    Yesterday they jumped to 129 GB; today, 30 GB, until the hard drive ran out of space.
    Now I obviously have
    "Archiver is unable to archive a redo log because the output device is full or unavailable" message.
    The database services a low-transaction-volume application and is awaiting deployment into production.
    Not sure where to start.
    Any advice would be appreciated.
    I can provide any other information necessary to troubleshoot.
    Thanks in advance.

    Run this; it should point out the sessions with transactions currently in flight:
    SELECT s.sid, s.serial#, s.username, s.program,
    t.used_ublk, t.used_urec, vsql.sql_text
    FROM v$session s, v$transaction t, v$sqlarea vsql
    WHERE s.taddr = t.addr
    and s.sql_id = vsql.sql_id (+)
    ORDER BY 5 desc, 6 desc, 1, 2, 3, 4;
    This assumes you can log in, perhaps after moving some of your archives to free up space and resume DB operations. A second query that ranks sessions by redo generated is sketched below.
    Also, reach out to developers/users (if you can) and see if anyone is testing something or loading tables, etc.
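    A variation that ranks sessions by the total redo they have generated so far (a sketch using the standard 'redo size' session statistic) can also help spot the culprit:
    SELECT s.sid, s.serial#, s.username, s.program, st.value redo_bytes
    FROM v$sesstat st, v$statname sn, v$session s
    WHERE sn.statistic# = st.statistic#
    AND sn.name = 'redo size'
    AND s.sid = st.sid
    ORDER BY st.value DESC;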
    Edited by: DBA_Mike on Apr 14, 2009 10:10 AM

  • Excessive Redo Generation After Upgrading on Oracle 10gR2

    We had our production database on Oracle 9.2.0. A few months back we migrated it to Oracle 10.2.0.4.0.
    After the migration I noticed that redo generation has become very high. Earlier the number of log files generated during production hours was around 20, whereas after the migration it is around 200 files per day. I have run a Statspack report on this database; it also shows that log file switch waits have become very high. The timed_statistics parameter has been set to FALSE. The workload on the database is the same before and after the upgrade, the queries running in the sessions are the same, and all parameters and memory structures are unchanged. The Statspack report says that db block changes and disk writes have become very high. I used export/import to upgrade the database. Please provide a solution for this problem.
    Thanks in advance for all your favours.

    Hi;
    Please check the notes below, which could be helpful for your issue:
    Diagnosing excessive redo generation [ID 199298.1]
    Excessive Archives / Redo Logs Generation Troubleshooting [ID 832504.1]
    Troubleshooting High Redo Generation Issues [ID 782935.1]
    How to Disable (Temporary) Generation of Archive Redo Log Files [ID 177218.1]
    Regard
    Helios

  • High redo log space wait time

    Hello,
    Our DB has a very high redo log space wait time:
    redo log space requests 867527
    redo log space wait time 67752674
    LOG_BUFFER is 14 MB; there are 6 redo log groups and each redo log file is 500 MB.
    Also, the amount of redo generated per hour :
    START_DATE START NUM_LOGS MBYTES DBNAME
    2008-07-03 10:00 2 1000 TKL
    2008-07-03 11:00 4 2000 TKL
    2008-07-03 12:00 3 1500 TKL
    Will increasing the size of LOG_BUFFER help to reduce the redo log space waits?
    Thanks in advance ,
    Regards,
    Aman

    Looking quickly over the AWR report provided, the following information could be helpful:
    1. You are currently targeting approx. 6 GB of memory with this single instance and the report says physical memory is 8 GB. According to the advisories it looks like you could decrease your memory allocation without hurting performance.
    In particular the large_pool_size setting seems quite high, although you're using shared servers.
    Since you're using 10.2.0.4 it might be worth thinking about using the single SGA_TARGET parameter instead of specifying all the individual pool parameters (a minimal sketch follows below). This allows Oracle to size the shared pool components dynamically within the given target.
    2. You are currently using a couple of underscore parameters. In particular the "_optimizer_max_permutations" parameter is set to 200, which might significantly reduce the number of execution plan permutations Oracle investigates while optimizing a statement and could lead to suboptimal plans. It would be worth checking why this has been set.
    In addition you are using a non-default setting of "_shared_pool_reserved_pct", which might no longer be necessary if you use the SGA_TARGET parameter as mentioned above.
    3. You are using non-default settings for the "optimizer_index_caching" and "optimizer_index_cost_adj" parameters, which favor index access paths / nested loops. Since "db file sequential read" is the top wait event, it might be worth checking whether the database is doing excessive index access. Also, most of the rows have been fetched by rowid (table fetch by rowid), which could also be an indicator of excessive index access / nested loop usage.
    4. Your database has been working quite a lot during the 30-minute snapshot interval: it processed 123,000,000 logical blocks, which means almost 0.5 GB per second. Check the top SQLs; there are a few that are responsible for most of the blocks processed. E.g. there is an anonymous PL/SQL block that has been executed almost 17,000 times during the interval, representing 75% of the blocks processed. The statements executed as part of these procedures are worth checking to see whether they could be tuned to require fewer logical I/Os. This could be related to the non-default optimizer parameters mentioned above.
    5. You are still using the compatible = 9.2.0 setting, which means this database could still be opened by a 9i instance. If this is no longer required, you might lift it to the 10g default. I think this will also convert the redo format to 10g, which could lead to less redo being generated. But be aware that this is a one-way operation: once compatible has been set to 10.x you can only go back to 9i via a restore.
    6. Your undo retention is set quite high (> 6000 secs), although your longest query in the AWR period was 151 seconds. It might be worth checking whether this setting is reasonable, as you might have quite a large undo tablespace at present. Oracle 10g ignores the setting if it cannot honor it given the current undo tablespace size.
    7. "parallel_max_servers" has been set to 0, so no parallel operations can take place. This might be intentional, but it's something to keep in mind.
    Regards,
    Randolf
    Oracle related stuff:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle:
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Redo log data

    How do I find the actual data the redo log contains? My redo logs are 1 GB for each group. We have moved the redo logs to a newer disk, so I want to know the actual size of the redo log, how much data it carries, and how we can analyze/tune the redo logs to improve performance.
    Thanks

    Hi,
    >>How to find the actual data redolog contains?
    I think you can take a look at the "actual_redo_blks" column in the v$instance_recovery dynamic performance view to see the current number of redo blocks that would need to be read in case of recovery.
    >>how much data it carries and how we can analyze/tune redolog to increase performance.
    Oracle 10g introduced a redo logfile sizing advisor that recommends a size for the redo logs intended to limit excessive log switches, incomplete and excessive checkpoints, log archiving issues, DBWR performance problems and excessive disk I/O. You can obtain redo sizing advice on the redo log groups page of Oracle Enterprise Manager Database Control; the same figures can be queried, as sketched below.
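    If you prefer SQL over the Database Control page, the advisor's figure is exposed in the same view (a sketch, 10g and later):
    select actual_redo_blks,
           target_redo_blks,
           optimal_logfile_size   -- recommended redo log size, in MB
    from v$instance_recovery;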
    Cheers

  • 10G redo log switchs with no activity

    We have been testing 8i to 10g migrations, and two things I have noticed that are different between the two databases are:
    1) Redo log switching occurring even though the database has no users and is just sitting there. This does not happen in our 8i databases.
    2) Trace files of format instance_m001_ and instance_m000_ in ../bdump. Our 8i bdumps normally do not have trc files unless a problem occurs.
    Is it normal for 10g to automatically switch redo logs with no activity, and are these .trc files also normal? In general I get nervous with trc files. -quinn

    Still haven't figured out the redo log switches (they seem to be slowing down), but the excess trace files are apparently caused by a bug and can be fixed with Patch 3432729.

  • Excessive flashback log generates with select statement

    Hi everyone;
    We have some extractions taken from a "flashback on" database.
    The extractions are just select statements, but when they are run the database produces excessive flashback logs.
    What could be the reason the database produces flashback logs with just select statements?
    (It's certain that there are no insert-update-delete operations)
    Version: 10.2.0.4.3
    Thanks...

    Do you do heavy updates/deletes before you run the selects?
    I am not completely sure whether delayed block cleanout also has the same effect on flashback logs, but the output below leads me to think that way.
    HR@ORACOS> select * from v$flashback_database_stat;
    BEGIN_TIME        END_TIME          FLASHBACK_DATA    DB_DATA  REDO_DATA ESTIMATED_FLASHBACK_SIZE
    20100527 15:32:53 20100527 15:50:16      875266048 1207132160 2038729728                        0
    20100527 14:32:50 20100527 15:32:53      248160256  127295488  450139648               1.3215E+10
    20100527 13:32:48 20100527 14:32:50       10452992   15646720    4400640               1.5549E+10
    20100527 12:32:43 20100527 13:32:48      745693184  948461568 1311620608               2.2789E+10
    20100527 11:25:56 20100527 12:32:43     1262026752 1984741376 2358546432               2.7212E+10
    HR@ORACOS> set autotrace traceonly statistics
    HR@ORACOS>  update base_table_np set y='INVALID';
    commit;
    4021808 rows updated.
    Statistics
           2512  recursive calls
        8341430  db block gets
        4069140  consistent gets
         120569  physical reads
    1908471980  redo size
            848  bytes sent via SQL*Net to client
            793  bytes received via SQL*Net from client
              3  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
        4021808  rows processed
    HR@ORACOS> set autotrace off;
    HR@ORACOS> select * from v$flashback_database_stat; 
    HR@ORACOS>
    BEGIN_TIME        END_TIME          FLASHBACK_DATA    DB_DATA  REDO_DATA ESTIMATED_FLASHBACK_SIZE
    20100527 15:32:53 20100527 16:00:36     1236664320 2021974016 4019910656                        0
    20100527 14:32:50 20100527 15:32:53      248160256  127295488  450139648               1.3215E+10
    20100527 13:32:48 20100527 14:32:50       10452992   15646720    4400640               1.5549E+10
    20100527 12:32:43 20100527 13:32:48      745693184  948461568 1311620608               2.2789E+10
    20100527 11:25:56 20100527 12:32:43     1262026752 1984741376 2358546432               2.7212E+10
    HR@ORACOS> set autotrace traceonly statistics
    HR@ORACOS> select * from base_table_np;
    4021808 rows selected.
    Statistics
            139  recursive calls
              0  db block gets
          53908  consistent gets
           4404  physical reads
        1652384  redo size                                                  ------->delayed block cleanout effect
      175008833  bytes sent via SQL*Net to client
          88996  bytes received via SQL*Net from client
           8045  SQL*Net roundtrips to/from client
              4  sorts (memory)
              0  sorts (disk)
        4021808  rows processed
    HR@ORACOS> set autotrace off
    HR@ORACOS> select * from v$flashback_database_stat;    ----flashback data size increases
    HR@ORACOS>
    BEGIN_TIME        END_TIME          FLASHBACK_DATA    DB_DATA  REDO_DATA ESTIMATED_FLASHBACK_SIZE
    20100527 15:32:53 20100527 16:01:11     1305264128 2054594560 4021728256                        0
    20100527 14:32:50 20100527 15:32:53      248160256  127295488  450139648               1.3215E+10
    20100527 13:32:48 20100527 14:32:50       10452992   15646720    4400640               1.5549E+10
    20100527 12:32:43 20100527 13:32:48      745693184  948461568 1311620608               2.2789E+10
    20100527 11:25:56 20100527 12:32:43     1262026752 1984741376 2358546432               2.7212E+10
    Basically what I do is update a 4-million-row table; a lot of redo is generated along with flashback logs.
    When I do a select after the update I still see redo being generated because of delayed block cleanout, but what I also see is a slight increase in the flashback data size (check the first row of v$flashback_database_stat), which matches what you are asking about. A select statement can generate flashback log.
    Tested on 11.2.0.1 with a single active session on the DB.
    Coskan Gundogar
    Blog: http://coskan.wordpress.com
    Twitter: http://www.twitter.com/coskan
    Linkedin: http://uk.linkedin.com/in/coskan
    ---------

  • How do I manually archive 1 redo log at a time?

    The database is configured in ARCHIVELOG mode, but automatic archiving is turned off.
    For both Oracle 9.0.1 and 9.2.0 on Windows, when I try to manually archive a single redo log, the database
    archives as many logs as it can, up to the log just before the current log:
    For example:
    SQL> select * from v$log order by sequence#;
    GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
    1 1 14 104857600 1 NO INACTIVE 424246 19-JAN-05
    2 1 15 104857600 1 NO INACTIVE 425087 28-MAR-05
    3 1 16 104857600 1 NO INACTIVE 425088 28-MAR-05
    4 1 17 512000 1 NO INACTIVE 425092 28-MAR-05
    5 1 18 512000 1 NO INACTIVE 425100 28-MAR-05
    6 1 19 512000 1 NO CURRENT 425102 28-MAR-05
    6 rows selected.
    SQL> alter system archive log next;
    System altered.
    SQL> select * from v$log order by sequence#;
    GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
    1 1 14 104857600 1 YES INACTIVE 424246 19-JAN-05
    2 1 15 104857600 1 YES INACTIVE 425087 28-MAR-05
    3 1 16 104857600 1 YES INACTIVE 425088 28-MAR-05
    4 1 17 512000 1 YES INACTIVE 425092 28-MAR-05
    5 1 18 512000 1 NO INACTIVE 425100 28-MAR-05
    6 1 19 512000 1 NO CURRENT 425102 28-MAR-05
    See - instead of only 1 log being archived, 4 of them were. Oracle behaves the same way if I use the "sequence" option:
    SQL> select * from v$log order by sequence#;
    GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
    1 1 14 104857600 1 NO INACTIVE 424246 19-JAN-05
    2 1 15 104857600 1 NO INACTIVE 425087 28-MAR-05
    3 1 16 104857600 1 NO INACTIVE 425088 28-MAR-05
    4 1 17 512000 1 NO INACTIVE 425092 28-MAR-05
    5 1 18 512000 1 NO INACTIVE 425100 28-MAR-05
    6 1 19 512000 1 NO CURRENT 425102 28-MAR-05
    6 rows selected.
    SQL> alter system archive log next;
    System altered.
    SQL> select * from v$log order by sequence#;
    GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
    1 1 14 104857600 1 YES INACTIVE 424246 19-JAN-05
    2 1 15 104857600 1 YES INACTIVE 425087 28-MAR-05
    3 1 16 104857600 1 YES INACTIVE 425088 28-MAR-05
    4 1 17 512000 1 YES INACTIVE 425092 28-MAR-05
    5 1 18 512000 1 NO INACTIVE 425100 28-MAR-05
    6 1 19 512000 1 NO CURRENT 425102 28-MAR-05
    Is there some default system configuration property telling Oracle to archive as many logs as it can?
    Thanks,
    DGR

    Thanks Yoann (and Syed Jaffar Jaffar Hussain too),
    but I don't have a problem finding the group to archive or executing the alter system archive log command.
    My problem is that Oracle doesn't work as I expect it.
    This comes from the Oracle 9.2 online doc:
    http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_23a.htm#2053642
    "Specify SEQUENCE to manually archive the online redo log file group identified by the log sequence number integer in the specified thread."
    This implies that Oracle will only archive the log group identified by the log sequence number I specify in the alter system archive log sequence statement. However, Oracle is archiving almost all of the log groups (see my first post for an example).
    This appears to be a bug, unless there is some other system parameter that is configured (by default) to allow Oracle to archive as many log groups as possible.
    As to the reason why - it is an application requirement. The Oracle db must be in archive mode, automatic archiving must be disabled and the application must control online redo log archiving.
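    For reference, the documented per-sequence syntax from that page looks like this (the thread and sequence numbers are just the ones from the v$log listing above):
    ALTER SYSTEM ARCHIVE LOG THREAD 1 SEQUENCE 14;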
    DGR

  • Select from .. as of - using archived redo logs - 10g

    Hi,
    I was under the impression I could issue a "Select from .. as of" statement back in time if I have the archived redo logs.
    I've been searching for a while and cant find an answer.
    My undo_management=AUTO, database is 10.2.0.1, the retention is the default of 900 seconds as I've never changed it.
    I want to query a table as of 24 hours ago so I have all the archived redo logs from the last 48 hours in the correct directory
    When I issue the following query
    select * from supplier_codes AS OF TIMESTAMP
    TO_TIMESTAMP('2009-08-11 10:01:00', 'YYYY-MM-DD HH24:MI:SS')
    I get a 'snapshot too old' ORA-01555 error. I guess that is because my retention is only 900 seconds, but I thought the database would query the archived redo logs, or have I got that totally wrong?!
    My undo tablespace is set to AUTOEXTEND ON and MAXSIZE UNLIMITED so there should be no space issues
    Any help would be greatly appreciated!
    Thanks
    Robert

    If you want to go back 24 hours, you need the undo for those changes to still be available; flashback query reads undo, not the archived redo logs. A sketch of the undo-side settings follows.
    See e.g. the app dev guide - fundamentals, chapter on Flashback features: [doc search|http://www.oracle.com/pls/db102/ranked?word=flashback&remark=federated_search].
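    If the requirement really is a 24-hour flashback-query window, a minimal sketch of the undo side (86400 is a placeholder value and undotbs1 a placeholder for your actual undo tablespace name):
    ALTER SYSTEM SET undo_retention = 86400 SCOPE = BOTH;  -- 24 hours
    -- Optionally force the retention to be honoured even under space pressure
    -- (needs enough undo space, otherwise DML can start failing):
    ALTER TABLESPACE undotbs1 RETENTION GUARANTEE;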

  • How to disable write to redo log file in oracle7.3.4

    In Oracle 8 you can mark a table NOLOGGING so that certain operations on it are not logged in the redo log, e.g.: alter table tablename nologging;
    How do I do this in Oracle 7.3.4?
    thanks.

    user652965 wrote:
    Thanks very much for your help guys, I appreciate it. Unfortunately none of these commands worked for me. I kept getting an error on clearing logs saying that the redo log is needed to perform recovery, so it can't be cleared. So I ended up restoring from an earlier snapshot of my DB volume. The database is now open. Thanks again for your input.
    And now, as a follow-up: at a minimum you should make sure that all redo log groups have at least 3 members (a sketch follows this reply). Then, if you lose a single redo log file, all you have to do is shut down the DB and copy one of the good members (of the same group as the lost member) over the lost member.
    And as an additional follow-up, if you value your data you will run in archivelog mode and take regular backups of the database and archivelogs. If you fail to do this you are saying that your data is not worth saving.
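    A hedged example of adding members (the group number and paths are placeholders; put each member of a group on a different disk):
    ALTER DATABASE ADD LOGFILE MEMBER '/u02/oradata/PROD/redo01b.log' TO GROUP 1;
    ALTER DATABASE ADD LOGFILE MEMBER '/u03/oradata/PROD/redo01c.log' TO GROUP 1;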

  • Is There a Way to Run a Redo log for a Single Tablespace?

    I'm still fairly new to Oracle. I've been reading up on the architecture and I am getting the hang of it. Actually, I have 2 questions.
    1) My first question is..."Is there a way to run the redo log file...but to specify something so that it only applies to a single tablespace and its related files?"
    So, in a situation where, for some reason, only a single dbf file has become corrupted, I only have to worry about replaying the log for those transactions that affect the tablespace associated with that file.
    2) Also, I would like to know if there is a query I can run from iSQLPlus that would allow me to view the datafiles that are associated with a tablespace.
    Thanks

    1) My first question is..."Is there a way to run the
    redo log file...but to specify something so that it
    only applies to a single tablespace and it's related
    files?"
    No, you can't make a redo log file record the transaction entries for a particular tablespace only.
    In case a file gets corrupted, you need to apply all the archive logs since the last backup, plus the redo logs, to bring the DB back to a consistent state.
    >
    2) Also, I would like to know if there is a query I can run from iSQLPlus that would allow me to view the datafiles that are associated with a tablespace.
    Select file_name, tablespace_name from dba_data_files will give you that (a formatted sketch follows).
    The above will give you the datafiles that a tablespace is made of.
    In your case you have created the tablespace with one datafile.
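    Formatted, the query mentioned above would look something like this:
    select tablespace_name, file_name, bytes/1024/1024 size_mb
    from dba_data_files
    order by tablespace_name, file_name;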
    Message was edited by:
    Maran.E

  • Problem in creating database -Missing Redo log file

    I am trying to create a new database using DBCA. While creating the database it shows the error "Oracle instance terminated. Force disconnected."
    My alert log file shows:
    Errors in file /u01/app/oracle/diag/rdbms/oracl/oracl/trace/oracl_ora_5424.trc:
    ORA-00313: open failed for members of log group 1 of thread 1
    ORA-00312: online log 1 thread 1: '/u01/app/oracle/oradata/oracl/redo01.log'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    Wed Nov 06 10:07:27 2013
    Errors in file /u01/app/oracle/diag/rdbms/oracl/oracl/trace/oracl_ora_5424.trc:
    ORA-00313: open failed for members of log group 2 of thread 1
    ORA-00312: online log 2 thread 1: '/u01/app/oracle/oradata/oracl/redo02.log'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    Errors in file /u01/app/oracle/diag/rdbms/oracl/oracl/trace/oracl_ora_5424.trc:
    ORA-00313: open failed for members of log group 3 of thread 1
    ORA-00312: online log 3 thread 1: '/u01/app/oracle/oradata/oracl/redo03.log'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    Wed Nov 06 10:07:38 2013
    Setting recovery target incarnation to 2
    Wed Nov 06 10:07:38 2013
    Assigning activation ID 1876274518 (0x6fd5ad56)
    Thread 1 opened at log sequence 1
      Current log# 1 seq# 1 mem# 0: /u01/app/oracle/oradata/oracl/redo01.log
    Successful open of redo thread 1
    Wed Nov 06 10:07:38 2013
    MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
    Wed Nov 06 10:07:38 2013
    SMON: enabling cache recovery
    Errors in file /u01/app/oracle/diag/rdbms/oracl/oracl/trace/oracl_ora_5424.trc  (incident=1345):
    ORA-00600: internal error code, arguments: [kpotcgah-7], [12534], [ORA-12534: TNS:operation not supported
    Incident details in: /u01/app/oracle/diag/rdbms/oracl/oracl/incident/incdir_1345/oracl_ora_5424_i1345.trc
    Wed Nov 06 10:07:46 2013
    Trace dumping is performing id=[cdmp_20131106100746]
    Errors in file /u01/app/oracle/diag/rdbms/oracl/oracl/trace/oracl_ora_5424.trc:
    ORA-00600: internal error code, arguments: [kpotcgah-7], [12534], [ORA-12534: TNS:operation not supported
    Error 600 happened during db open, shutting down database
    USER (ospid: 5424): terminating the instance due to error 600
    Instance terminated by USER, pid = 5424
    ORA-1092 signalled during: alter database "oracl" open resetlogs...
    ORA-1092 : opiodr aborting process unknown ospid (5424_47935551851664)
    Wed Nov 06 10:07:47 2013
    ORA-1092 : opitsk aborting process

    >I am trying to create a new database using DBCA
    >Please help me to resolve this issue. My redo log file was missing
    DROP and recreate the database.  It is a *new* database without any data.
    Check what datafile locations and redo log file locations you specify when creating the new database. Check if you have permissions and enough disk space.
    Hemant K Chitale

  • Best way to move redo log from one disk group to another in ASM?

    Hi All,
    Our DB is a 10.2.0.3 RAC database, and the database servers are Windows 2003 Server.
    We need to move more than 50 redo logs (some regular and some standby), which are not redundant, from one disk group to another. Say we need to move from disk group 1 to 2. Here are the options we are thinking about, but we are not sure which one is best from an ease and safety perspective (a sketch of Option 2 appears after the reply below).
    Thank you very much for your help in advance.
    Shirley
    Option 1:
    1)     shutdown immediate
    2)     copy log files from disk group 1 to disk group2 using RMAN (need to research on this)
    3)     startup mount
    4)     alter database rename file ….
    5)     Open the database
    6)     delete the redo files from disk group 1 in ASM (how?)
    Option 2:
    1)     create a set of redo log groups in disk group 2
    2)     drop the redo log groups in disk group 1 when they are inactive and have been archived
    3)     delete the redo files associated with those dropped groups from disk group 1 (how?) (According to Oracle menu: when you drop the redo log group the operating system files are not deleted and you need to manually delete those files)
    Option 3:
    1)     create a set of redo members in disk group 2 for each redo log group in disk group 1
    2)     drop the redo log memebers in disk group 1
    3)     delete the redo files from disk group 1 associated with the dropped members

    Absolutely not, they are not even remotely similar concepts.
    OMF: Oracle Managed Files. It is an RDBMS feature; no matter what your storage technology is, Oracle takes care of file naming and location. You only have to define the size of a file, and in the case of a tablespace in an OMF DB configuration you only need to issue a command similar to this:
    CREATE TABLESPACE <TSName>;
    The OMF environment then creates an autoextensible datafile at the predefined location, with 100M by default as its initial size.
    On ASM it should only be required to specify '+DGroupName' as the datafile or redo log file argument so it can be fully managed by ASM.
    EMC: http://www.emc.com. No further comments on it.
    ~ Madrid
    http://hrivera99.blogspot.com
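    For what it's worth, Option 2 from the question is the usual approach. A minimal sketch (the group numbers, the '+DATA2' disk group name and the 1G size are placeholders; standby groups would use ADD STANDBY LOGFILE instead):
    -- New groups in the target disk group
    ALTER DATABASE ADD LOGFILE GROUP 11 ('+DATA2') SIZE 1G;
    ALTER DATABASE ADD LOGFILE GROUP 12 ('+DATA2') SIZE 1G;
    -- Cycle the logs until the old groups are INACTIVE and archived
    ALTER SYSTEM SWITCH LOGFILE;
    ALTER SYSTEM CHECKPOINT;
    -- Then drop the old groups; Oracle-managed files inside ASM are
    -- normally removed automatically when the group is dropped.
    ALTER DATABASE DROP LOGFILE GROUP 1;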
