Redo Log Generation Rate Is Higher Compared to Single Instance

Hi Experts,
I need your views on an issue I am facing in a RAC environment. We have a two-node RAC 10gR2 running on AIX 6.1.
On the application side we have a process that runs for about 4 hours, and during this run we observe an average of 110 GB of redo. If we run the same application process against a single-instance database at the back end, it generates only about 40 GB of redo. My question is: why does the same process generate such different amounts of redo against the RAC database and the single instance?

Amiabu,
Your question calls for crystal balls, as you have provided way too few details.
Also, you didn't use LogMiner to find out what is going on. That would have been a sensible approach, even before posting a question which can be paraphrased as 'It doesn't work as expected, why?'.
Two general observations:
If your code doesn't scale, using RAC will make things worse.
Oracle 10g is a heavily instrumented product, assuming you use AWR, ADDM or Statspack. The Cache Fusion feature of RAC can easily result in extra wait events, which in turn are tracked (INSERTed, that is) in the AWR or Statspack repository.
Only you can answer the question of what was going on, by using LogMiner.
Sybrand Bakker
Senior Oracle DBA
Edited by: sybrand_b on 16-jun-2011 14:33
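
As a starting point, a minimal LogMiner session could look like the sketch below (the archived log file name is a placeholder; the query is just one way of seeing which segments and operations produce the most redo records):

EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/arch/PROD/1_17439_123456789.arc', OPTIONS => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

-- Which segments and operations account for most of the redo records?
SELECT seg_owner, seg_name, operation, COUNT(*) AS redo_records
  FROM v$logmnr_contents
 GROUP BY seg_owner, seg_name, operation
 ORDER BY redo_records DESC;

EXECUTE DBMS_LOGMNR.END_LOGMNR;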

Similar Messages

  • How to reduce excessive redo log generation in Oracle 10G

    Hi All,
    Please let me know if there is any way to reduce excessive redo log generation in Oracle DB 10.2.0.3.
    Previously only about 15 archive log files were generated per day, but nowadays this has increased to 40 to 45.
    Below are the sizes of the redo log file members:
    L.BYTES/1024/1024     MEMBER
    200     /u05/applprod/prdnlog/redolog1a.dbf
    200     /u06/applprod/prdnlog/redolog1b.dbf
    200     /u05/applprod/prdnlog/redolog2a.dbf
    200     /u06/applprod/prdnlog/redolog2b.dbf
    200     /u05/applprod/prdnlog/redolog3a.dbf
    200     /u06/applprod/prdnlog/redolog3b.dbf
    Here is some content from the alert log for your reference, showing how frequently log switches are occurring:
    Beginning log switch checkpoint up to RBA [0x441f.2.10], SCN: 4871839752
    Thread 1 advanced to log sequence 17439
    Current log# 3 seq# 17439 mem# 0: /u05/applprod/prdnlog/redolog3a.dbf
    Current log# 3 seq# 17439 mem# 1: /u06/applprod/prdnlog/redolog3b.dbf
    Tue Jul 13 14:46:17 2010
    Completed checkpoint up to RBA [0x441f.2.10], SCN: 4871839752
    Tue Jul 13 14:46:38 2010
    Beginning log switch checkpoint up to RBA [0x4420.2.10], SCN: 4871846489
    Thread 1 advanced to log sequence 17440
    Current log# 1 seq# 17440 mem# 0: /u05/applprod/prdnlog/redolog1a.dbf
    Current log# 1 seq# 17440 mem# 1: /u06/applprod/prdnlog/redolog1b.dbf
    Tue Jul 13 14:46:52 2010
    Completed checkpoint up to RBA [0x4420.2.10], SCN: 4871846489
    Tue Jul 13 14:53:33 2010
    Beginning log switch checkpoint up to RBA [0x4421.2.10], SCN: 4871897354
    Thread 1 advanced to log sequence 17441
    Current log# 2 seq# 17441 mem# 0: /u05/applprod/prdnlog/redolog2a.dbf
    Current log# 2 seq# 17441 mem# 1: /u06/applprod/prdnlog/redolog2b.dbf
    Tue Jul 13 14:53:37 2010
    Completed checkpoint up to RBA [0x4421.2.10], SCN: 4871897354
    Tue Jul 13 14:55:37 2010
    Incremental checkpoint up to RBA [0x4421.4b45c.0], current log tail at RBA [0x4421.4b5c5.0]
    Tue Jul 13 15:15:37 2010
    Incremental checkpoint up to RBA [0x4421.4d0c1.0], current log tail at RBA [0x4421.4d377.0]
    Tue Jul 13 15:35:38 2010
    Incremental checkpoint up to RBA [0x4421.545e2.0], current log tail at RBA [0x4421.54ad9.0]
    Tue Jul 13 15:55:39 2010
    Incremental checkpoint up to RBA [0x4421.55eda.0], current log tail at RBA [0x4421.56aa5.0]
    Tue Jul 13 16:15:41 2010
    Incremental checkpoint up to RBA [0x4421.58bc6.0], current log tail at RBA [0x4421.596de.0]
    Tue Jul 13 16:35:41 2010
    Incremental checkpoint up to RBA [0x4421.5a7ae.0], current log tail at RBA [0x4421.5aae2.0]
    Tue Jul 13 16:42:28 2010
    Beginning log switch checkpoint up to RBA [0x4422.2.10], SCN: 4872672366
    Thread 1 advanced to log sequence 17442
    Current log# 3 seq# 17442 mem# 0: /u05/applprod/prdnlog/redolog3a.dbf
    Current log# 3 seq# 17442 mem# 1: /u06/applprod/prdnlog/redolog3b.dbf
    Thanks in advance

    hi,
    Use the script below to find out in which hours more archives are generated, and check what is running in those hours (for example, whether MVs are refreshing, or any program is deleting a whole table row by row).
    SQL> L
      1  select
      2    to_char(first_time,'DD-MM-YY') day,
      3    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'00',1,0)),'999') "00",
      4    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'01',1,0)),'999') "01",
      5    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'02',1,0)),'999') "02",
      6    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'03',1,0)),'999') "03",
      7    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'04',1,0)),'999') "04",
      8    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'05',1,0)),'999') "05",
      9    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'06',1,0)),'999') "06",
    10    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'07',1,0)),'999') "07",
    11    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'08',1,0)),'999') "08",
    12    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'09',1,0)),'999') "09",
    13    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'10',1,0)),'999') "10",
    14    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'11',1,0)),'999') "11",
    15    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'12',1,0)),'999') "12",
    16    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'13',1,0)),'999') "13",
    17    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'14',1,0)),'999') "14",
    18    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'15',1,0)),'999') "15",
    19    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'16',1,0)),'999') "16",
    20    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'17',1,0)),'999') "17",
    21    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'18',1,0)),'999') "18",
    22    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'19',1,0)),'999') "19",
    23    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'20',1,0)),'999') "20",
    24    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'21',1,0)),'999') "21",
    25    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'22',1,0)),'999') "22",
    26    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'23',1,0)),'999') "23",
    27    COUNT(*) TOT
    28    from v$log_history
    29  group by to_char(first_time,'DD-MM-YY')
    30  order by day
    thanks,
    baskar.l

  • Suppressing Redo Log Generation

    Hi,
    I have a PL/SQL script that uses FORALL with the SAVE EXCEPTIONS clause to bulk insert/update a table.
    How can I suppress redo log generation in my script?
    I have placed the database in NOARCHIVELOG mode; does this suppress redo log generation?

    Read these chapters:
    [url http://download.oracle.com/docs/cd/E11882_01/server.112/e16508/startup.htm#g12154]Oracle Database Instance
    [url http://download.oracle.com/docs/cd/E11882_01/server.112/e16508/physical.htm#CNCPT11302]Overview of the Online Redo Log
    Redo is needed to redo transactions. Archiving of the redo is needed to redo transactions that go beyond the online redo. Archivelog mode means keeping the redo beyond the time it is immediately used. If your instance or disk crashes, you need to recover the transactions that were in process. There are reasons to use noarchivelog mode, but if you do, you have to be able to recreate what gets lost in a crash. That's why most normal production OLTP is done in archivelog mode: the data is important enough that it needs to come back if the instance or computer goes away. Other types of production may not need it, but running in noarchivelog mode needs to be explicitly justified, along with how to recover if things go south.
    Since archiving redo requires reading the redo and writing it elsewhere, heavy batch loads may benefit from noarchivelog mode. So it might be OK to suppress archiving, but you must have redo. Sometimes it can be justified to do large loads in noarchivelog mode and then switch to archivelog mode, but you must take backups before and after. Since the backups will use the same order of magnitude of I/O and CPU as archivelog mode, only a time-window limitation can make this make sense, and you can't have production access during the load either. So it usually makes more sense just to have the hardware to stay in archivelog mode.
    Demo, test, archive and development databases are usually somebody's production, too. They all have their own restoration and recovery requirements.
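    For what it's worth, the usual way to actually reduce redo for bulk loads is a direct-path insert, ideally into a NOLOGGING table; the sketch below is hypothetical (table names are placeholders) and it only reduces redo for the table data itself, not for index maintenance or undo:
    ALTER TABLE my_stage_table NOLOGGING;

    -- Direct-path insert: rows are written above the high-water mark and, for a
    -- NOLOGGING segment, only minimal redo is generated for the table data.
    INSERT /*+ APPEND */ INTO my_stage_table
    SELECT * FROM my_source_table;

    COMMIT;  -- after a direct-path insert the session cannot query the table until it commits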

  • Larger redo log file members or more log groups

    Oracle 11gR1 RHEL5 64 bit
    Hi,
    I was wondering what is better from a performance tuning perspective. I have log switches occurring every 2 minutes in our production database. I know our log file members are definitely too small (100MB). The redo log sizing tool in OEM told me to make them 40G according to the fast_start_mttr_target setting, which is set to 600. Now, my question is: which is the better thing to do?
    1. Increase the size of my current redo log members? Right now there are 4 groups with 2 members each.
    OR
    2. Create additional redo log groups (4 more) and then re-run the sizing tool or query the v$instance_recovery view?
    Which is better? What are the trade-offs?
    Thanks all.

    If you want to reduce the number (frequency) of log switches, you should increase the size of the online redo logs -- i.e. create new log file groups of a larger size and drop the older ones.
    If the issue is "checkpoint not complete" waits, then either
    a. increasing the size of the log files
    or
    b. increasing the number of log files
    is doable.
    Note that if you increase the number but not the size, you still have a checkpoint every N MBytes -- i.e., possibly too frequently!
    On the other hand, if you increase the size to be very large, then at every switch the archiver is going to kick in with a large read plus a large write operation -- reading that redo log of N GBytes and writing it out to the archive log destination, imposing that additional I/O spike on your system. (Writing to a filesystem goes through the filesystem buffers, so if your database SGA isn't very large and your database performance relies on hitting the filesystem buffer cache to avoid disk reads, that performance will be impacted, as a large portion of the filesystem buffer cache will be taken over by the archiver for some time.)
    Hemant K Chitale
    http://hemantoracledba.blogspot.com
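    A rough sketch of the larger-logs approach (group numbers, paths and the 1G size are placeholders, not a recommendation):
    ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/oradata/PROD/redo05a.log', '/u02/oradata/PROD/redo05b.log') SIZE 1G;
    ALTER DATABASE ADD LOGFILE GROUP 6 ('/u01/oradata/PROD/redo06a.log', '/u02/oradata/PROD/redo06b.log') SIZE 1G;

    -- Switch out of the old groups and checkpoint, then drop each old group
    -- only once v$log shows it as INACTIVE.
    ALTER SYSTEM SWITCH LOGFILE;
    ALTER SYSTEM CHECKPOINT;
    SELECT group#, bytes/1024/1024 AS mb, status FROM v$log;
    ALTER DATABASE DROP LOGFILE GROUP 1;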

  • Redo log generation

    Hi,
    I have Oracle 9i on HP-UX. I want to know how much redo is getting generated every hour for the past month or so; is there any script that I can use to find this?
    Thanks

    Hi,
    You can use the script below, which gives the hour-by-hour count of log switches and the daily total for the past month:
    SQL> L
      1  select
      2    to_char(first_time,'DD-MM-YY') day,
      3    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'00',1,0)),'999') "00",
      4    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'01',1,0)),'999') "01",
      5    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'02',1,0)),'999') "02",
      6    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'03',1,0)),'999') "03",
      7    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'04',1,0)),'999') "04",
      8    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'05',1,0)),'999') "05",
      9    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'06',1,0)),'999') "06",
    10    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'07',1,0)),'999') "07",
    11    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'08',1,0)),'999') "08",
    12    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'09',1,0)),'999') "09",
    13    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'10',1,0)),'999') "10",
    14    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'11',1,0)),'999') "11",
    15    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'12',1,0)),'999') "12",
    16    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'13',1,0)),'999') "13",
    17    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'14',1,0)),'999') "14",
    18    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'15',1,0)),'999') "15",
    19    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'16',1,0)),'999') "16",
    20    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'17',1,0)),'999') "17",
    21    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'18',1,0)),'999') "18",
    22    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'19',1,0)),'999') "19",
    23    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'20',1,0)),'999') "20",
    24    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'21',1,0)),'999') "21",
    25    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'22',1,0)),'999') "22",
    26    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'23',1,0)),'999') "23",
    27    COUNT(*) TOT
    28    from v$log_history
    29  group by to_char(first_time,'DD-MM-YY')
    30  order by day
    31*
    SQL> @log_hour
    DAY      00   01   02   03   04   05   06   07   08   09   10   11   12   13   14   15   16   17   18   19   20   21   22   23          TOT
    01-07-10   29   19   10    8    8   13   12   19   24   20   10   16   10   10   33   24   23   26   27   32   25   13    8   11        430
    02-07-10    9    8    8    8    8    8   10    8   15    8   10    8    8    9   44   30   14    8   25   11    8   12    9   11        297
    03-07-10   10    9    8    8    8    8    9   13   14   18   14   22   16   14   10    8    9   10    9   12   10   11    9   11        270
    04-07-10   10    8    8    9    8    8    8    9   13    9    8   17   10    8   10   13   14    8    9   11    9   10    8   11        236
    05-07-10   10    8    9    8    8    8    9    8   16   14   11    9   10    8    9   10    8   19   13   16    8   10    9   11        249
    06-07-10   10    8    9    8    8    8    9   12   13   15    9    8   10   12   13   11   10    9   10   11   13    9    8   11        244
    07-07-10   10   11    8    8    8    8    9    8   15    8   19   10   19   14   12   10   21    9    8   12    9    9   14   17        276
    08-07-10   13    9    8    8    8    8    9   14   13   11    8    8   13    9    8   21    8    8   13    9    8    9    8    8        239
    09-07-10   10    8    8    8    9    8   10   12   15    9    8   14    8    9   19    9    8    8   16    8    9    8    8    9        238
    10-07-10    9    9    8    8    8    8    9   14   14   10    8    8    9    9   13    8    8    8    8    9    8    9    8    8        218
    11-07-10   10    8    9    8    8    8    9   11   14    8    8    8    8    9    8    8    9    8   10    8    8    8    8    8        209
    12-07-10   10    9    8    8    8    8    9   12   14   11   14   13    8   13   13    9    8   13   10    8   10    8    8    8        240
    13-07-10    9   12    8    8    8    8    8   14   16   17   11    9   12   17   12   11    9    9    8   15    9   12   11    8        261
    14-07-10   10   12    8    8    8    8    9   12   14   10    8    8    8    8    9    9    8    8    9    9    9   31   52   40        315
    15-07-10   16    8    9    8    8    8   12   10   17    9   13   16    9   11    8   15    8   10   15   12    8   12    8    8        258
    16-07-10   10    9    8    8    8    8    9   11   15   11   14   12   20    8    9    9    9    8    8   12   10    8    8    8        240
    17-07-10   10    8    8    8    8    8    8   12   16   10   15    8    9    8    9    8    8    8    9    8    8    9    8    8        219
    18-07-10   11   10    8    8    8    8    8    8   10    9   10    8    9   12   16   10    9    8    8    8    8    8   10    8        220
    19-07-10   10    9    8    8    8    8    9    4   15    9    8    9    8    8    9   11    9    8   17    8   21    8    8    8        228
    20-07-10    9    9    8    8    8    8   12    9   15   16   11    8    9    8   10   12    8    8    9   11    8    9    8    9        230
    21-07-10   19    9    8    8    9    8    8    8    9    9   13    8    8    8    9   11    8   14   24   12   37   40   35    8        330
    22-07-10   10    8   12   10    8    8    8   11   17    9    8    9    8    9    9    8   14   16   31   11   39   53   40    9        365
    23-07-10   10   15   18    8    9    8    9    8   13    9    8   16    9   10    8   14   11   10    9    8    9    9    8   14        250
    24-07-10   10    9    8    8    8    8    8   12   14    9    8    8   10    9    9    9    8   11   12    8    8    8    8    8        218
    25-07-10   10    8    9    8    8    8    9    8   14    9    8    8    8    9    8    9    8   10    8    9    9    8    8    8        209
    26-07-10   10    9    8    8    8    8    9    9   13   10    8   14    8    8   10   19   24   10   27   37   36   23    8    9        333
    27-07-10    9    9    8    8    8    8    8    6    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0         64
    30-06-10    0    1   49   11    9    8   15    8    8   15   17   25   16   27   28    9   13    9   12   14   12   28   40   43        417
    28 rows selected.
    thanks,
    baskar.l
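    If you want the redo volume as well as the switch count, a variant over v$archived_log might look like this (it assumes the database runs in ARCHIVELOG mode; the 30-day window is arbitrary):
    SELECT to_char(first_time,'DD-MM-YY')          AS day,
           ROUND(SUM(blocks * block_size)/1048576) AS redo_mb,
           COUNT(*)                                AS logs
      FROM v$archived_log
     WHERE first_time > SYSDATE - 30
     GROUP BY to_char(first_time,'DD-MM-YY')
     ORDER BY MIN(first_time);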

  • Resizing redo log files on a 3 node RAC with single node standby database

    Hi
    On a 3 node 11g RAC system,I have to resize the redo logs on primary database from 50M to 100M. I was planning to do the following steps:
    SQL> select group#,thread#,members,status from v$log;
    GROUP#  THREAD#  MEMBERS  STATUS
         1        1        3  INACTIVE  <-- whenever INACTIVE, the logfile group can be dropped and resized;
         2        1        3  CURRENT       "alter system switch logfile" changes which group is CURRENT
         3        1        3  INACTIVE
         4        2        3  INACTIVE
         5        2        3  INACTIVE
         6        2        3  CURRENT
         7        3        3  INACTIVE
         8        3        3  INACTIVE
         9        3        3  CURRENT
    9 rows selected.
    SQL> alter database drop logfile group 1;
    Database altered.
    SQL> ALTER DATABASE ADD LOGFILE THREAD 1
    GROUP 1 (
    '/PROD/redo1/redo01a.log',
    '/PROD/redo2/redo01b.log',
    '/PROD/redo3/redo01c.log'
    ) SIZE 100M reuse;
    Database altered.
    However, I am not sure what needs to be done for the standby. standby_file_management is set to AUTO and it is a single-instance standby.
    SQL> select group#,member from v$logfile where type='STANDBY';

    GROUP#  MEMBER
        10  /PROD/flashback/PROD/onlinelog/o1_mf_10_7b44gy67_.log
        11  /PROD/flashback/PROD/onlinelog/o1_mf_11_7b44h7gy_.log
        12  /PROD/flashback/PROD/onlinelog/o1_mf_12_7b44hjcr_.log
    Please let me know.
    Thanks
    Sumathy

    Hello;
    For the redo and standby redo logs this won't help:
    standby_file_management set to AUTO
    On the standby, cancel recovery, then drop and recreate the redo and/or standby redo logs.
    Then start recovery again.
    Example (I have a habit of removing the old file at the OS level to avoid REUSE and conflicts):
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT='MANUAL';
    SQL> alter database add standby logfile group 4
         ('/u01/app/oracle/oradata/orcl/standby_redo04.log') size 100m;
    SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT='AUTO';
    Notes worth reviewing :
    Online Redo Logs on Physical Standby [ID 740675.1]
    Error At Standby Database Ora-16086: Standby Database Does Not Contain Available Standby Log Files [ID 1155773.1]
    Example of How To Resize the Online Redo Logfiles [ID 1035935.6]
    Best Regards
    mseberg
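    A hypothetical continuation of that example for the resize itself (the group number and path are placeholders): drop the old 50M standby group, recreate it at 100M to match the new online redo size on the primary, then restart managed recovery.
    ALTER DATABASE DROP STANDBY LOGFILE GROUP 10;
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 10
      ('/PROD/flashback/PROD/onlinelog/standby_redo10.log') SIZE 100M;
    ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT='AUTO';
    -- Real-time apply; plain "RECOVER MANAGED STANDBY DATABASE DISCONNECT" also works.
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;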

  • Redo Log efficiency

    Hi,
    I am running Oracle 11g on Red Hat with a data block size of 8K.
    I have a question on the efficiency of redo logs.
    I have an application that is extracting data from one DB and updating a second DB based on this data (not a copy). I have the option of doing it as a single mass insert such as the one below. My question is whether this is more efficient (purely in terms of redo log generation) than having discrete single-row inserts. The reason for my question is that the current ETL process design is to drop the table and do a mass re-insert, and there have been some concerns raised about the impact on redo log generation. I am of the belief that it will be efficient, as there will only be one set of log entries per block impacted.
    Picking your brains and gaining from your knowledge is much appreciated.
    Cheers,
    Daryl
    INSERT INTO TMP_DIM_EXCH_RT
    (EXCH_WH_KEY,
    EXCH_NAT_KEY,
    EXCH_DATE, EXCH_RATE,
    FROM_CURCY_CD,
    TO_CURCY_CD,
    EXCH_EFF_DATE,
    EXCH_EFF_END_DATE,
    EXCH_LAST_UPDATED_DATE)
    VALUES
    (1, 1, '28-AUG-2008', 109.49, 'USD', 'JPY', '28-AUG-2008', '28-AUG-2008', '28-AUG-2008'),
    (2, 1, '28-AUG-2008', .54, 'USD', 'GBP', '28-AUG-2008', '28-AUG-2008', '28-AUG-2008'),
    (3, 1, '28-AUG-2008', 1.05, 'USD', 'CAD', '28-AUG-2008', '28-AUG-2008', '28-AUG-2008'),
    (4, 1, '28-AUG-2008', .68, 'USD', 'EUR', '28-AUG-2008', '28-AUG-2008', '28-AUG-2008'),
    (5, 1, '28-AUG-2008', 1.16, 'USD', 'AUD', '28-AUG-2008', '28-AUG-2008', '28-AUG-2008'),
    (6, 1, '28-AUG-2008', 7.81, 'USD', 'HKD', '28-AUG-2008', '28-AUG-2008', '28-AUG-2008');

    darylo wrote:
    I have an application that is extracting data from one DB and updating a second DB based on this data (not a copy). I have the option of doing it as a single mass insert such as the one below. My question is whether this is more efficient (purely in terms of redo log generation) than having discrete single-row inserts.
    A single mass insert is much more efficient in terms of redo usage, and almost everything else.
    {message:id=4337747}
    Run1 - single insert, Run2 - multiple single-row inserts of the same data.
    Name                                  Run1        Run2        Diff
    STAT...Heap Segment Array Inse         554          13        -541
    STAT...free buffer requested           222         909         687
    STAT...redo subscn max counts          221         990         769
    STAT...redo ordering marks              44         871         827
    STAT...calls to kcmgas                  46         873         827
    STAT...free buffer inspected           133         982         849
    LATCH.object queue header oper         625       2,624       1,999
    LATCH.simulator hash latch             223       6,005       5,782
    STAT...redo entries                  1,258     100,643      99,385
    STAT...HSC Heap Segment Block          555     100,014      99,459
    STAT...session cursor cache hi           5     100,010     100,005
    STAT...opened cursors cumulati           7     100,014     100,007
    STAT...execute count                     7     100,014     100,007
    STAT...session logical reads         2,162     102,988     100,826
    STAT...db block gets                 1,853     102,780     100,927
    STAT...db block gets from cach       1,853     102,780     100,927
    STAT...recursive calls                  33     101,092     101,059
    STAT...db block changes              1,873     201,552     199,679
    LATCH.cache buffers chains           7,176     507,892     500,716
    STAT...undo change vector size     240,500   6,802,736   6,562,236
    STAT...redo size                 1,566,136  24,504,020  22,937,884
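    To reproduce that kind of comparison yourself, one simple approach is to read your own session's 'redo size' statistic before and after each variant; a minimal sketch using the standard views (nothing application-specific):
    -- Current session's cumulative redo; run before and after the statement under
    -- test, and the difference is the redo that statement generated.
    SELECT n.name, s.value AS redo_bytes
      FROM v$mystat s
      JOIN v$statname n ON n.statistic# = s.statistic#
     WHERE n.name = 'redo size';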

  • The file structure online redo log, archived redo log and standby redo log

    I have read some Oracle documentation on file structure and settings in a Data Guard environment, but I still have some doubts. What is the best file structure or setting in Oracle 10.2.0.4 on UNIX for a Data Guard environment with 4 primary databases and 4 physical standby databases? Based on the Oracle documents, there are 3 kinds of redo logs: online redo logs, archived redo logs and standby redo logs. The basic settings are:
    1. Online redo logs --- This redo log must be on the primary database and on a logical standby database. But it is not strictly necessary on a physical standby, because a physical standby is not open and doesn't generate redo. However, if online redo logs are not set up on the physical standby, how can the standby operate after a failover, when it is switched to the primary role? In my standby databases, online redo logs have been set up.
    2. Archived redo logs --- Obviously the primary database and the logical and physical standby databases all need these. The primary uses them to archive log files and ship them to the standby. The standby uses them to receive redo and apply it to the database.
    3. Standby redo logs --- The documentation says a standby redo log is similar to an online redo log, except that a standby redo log is used to store redo data received from another database. A standby redo log is required if you want to implement the maximum protection and maximum availability levels of data protection, real-time apply, or cascaded destinations. So it seems that standby redo logs should only be set up on the standby database, not on the primary. Is my understanding correct? When I reviewed the current redo log settings in my environment, I found that standby redo log directories and files have been set up on both the primary and standby databases. I would like to get more information and education from the experts. What is the best setting or structure on the primary and standby databases?

    FZheng:
    Thanks for your input. It is clear that we need all 3 types of redo logs on both databases. You answered my question.
    But I have another one. The Oracle documentation says that if you have configured a standby redo log on one or more standby databases in the configuration, you should ensure the size of the current standby redo log file on each standby database exactly matches the size of the current online redo log file on the primary database. It says that at log switch time, if there are no available standby redo log files that match the size of the new current online redo log file on the primary database, the primary database will shut down.
    My current Data Guard environment setting is: on the primary DB, the online redo log group size is 512M and the standby redo log group size is 500M. On the standby DB, the online redo log group size is 500M and the standby redo log group size is 750M.
    This was set up by someone I don't know. Is this setting OK, or should I change the standby redo logs on the standby DB to 512M to exactly match the online redo log size on the primary?
    Edited by: 853153 on Jun 22, 2011 9:42 AM
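    A quick way to compare the sizes on each database is to query v$log and v$standby_log side by side; a minimal sketch (run on both the primary and the standby):
    SELECT 'ONLINE'  AS log_type, thread#, group#, bytes/1024/1024 AS mb FROM v$log
    UNION ALL
    SELECT 'STANDBY' AS log_type, thread#, group#, bytes/1024/1024 AS mb FROM v$standby_log
    ORDER BY log_type, thread#, group#;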

  • RAC Redo Log Internal

    Hi all
    I want to ask some questions about redo log generation in RAC.
    1. Does Oracle guarantee that for each committed transaction, from begin-transaction to commit, the redo and undo information resides in the redo log of one node? In other words, is it possible that Oracle puts the begin-transaction record in the redo log of node A, and the commit record in the redo log of another node B?

    Reup this thread :)
    Another question:
    How does the RAC broadcast-on-commit work? I mean, is it true that when node A commits a transaction T, it broadcasts this information to all other nodes? Would Oracle then write information about T into node B's online redo log, for example a redo record with opcode 5.4, T's transaction ID and node A's thread number?
    But in my analysis of Oracle 10.2.0.1.0 RAC (NFS shared storage), neither the dump of the redo log nor the binary redo log contains such a redo record.
    So I wonder what Oracle actually does?
    Black Thought
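    For this kind of analysis, the usual tool is a redo log dump into a trace file; a hypothetical sketch (the file path is a placeholder, and the LAYER/OPCODE filter is an undocumented but commonly used option):
    -- Dump every record of one redo log file to the session's trace file.
    ALTER SYSTEM DUMP LOGFILE '/u01/oradata/PROD/redo01a.log';
    -- Dump only commit/rollback records (layer 5, opcode 4).
    ALTER SYSTEM DUMP LOGFILE '/u01/oradata/PROD/redo01a.log' LAYER 5 OPCODE 4;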

  • Redo log tuning - improving insert rate

    Dear experts!
    We have an OLTP system which produces a large amount of data. After each record written to our 11.2 database (Standard Edition) a commit is performed (the system architecture can't be changed - for example to commit only every 10th record).
    So how can we speed up the insert process? As the database in front of the system gets "mirrored" to our data warehouse system, it is running in NOARCHIVELOG mode. I have already tried placing the redo log files on SSD disks, which sped up the insert process.
    Another idea is putting the table in a separate tablespace with the NOLOGGING option. What do you think about this?
    Furthermore, I heard about tuning the redo latches parameters. Does anyone have information about this?
    I would be grateful for any information!
    Thanks
    Markus

    We've an OLTP system which produces large amount of data. After each record written to our 11.2 database (standard edition) a commit is performed (the system architecture can't be changed - for example to commit every 10th record).
    Doing a commit after each insert (or other DML command) doesn't mean that the DB writer process actually writes this data immediately to the data files. The DB writer uses an internal algorithm to decide when to apply changes to the data files. You can adjust the write frequency to the data files with the "fast_start_mttr_target" parameter.
    So how can we speed up the insert process? As the database in front of the system gets "mirrored" to our datawarehouse system it is running in NOARCHIVE mode. I've already tried placing the redo log files on SSD disks which speeded up the insert process.
    Placing the redo log files on SSD disks is indeed a good action. Also check the buffer cache hit rate and size. Striping of the filesystems where the redo files reside should also be taken into account.
    Another idea is putting the table on a seperate tablespace with NOLOGGING option. What do you think about this?
    That is an extremely bad idea. The NOLOGGING option for a tablespace will lead to an unrecoverable tablespace and, as stated in the first sentence, will not increase the insert speed.
    Further more I heard about tuning the redo latches parameter. Does anyone have information about this way?
    I don't think you need this.
    Better to check the indexes associated with the tables you insert into. Are they analyzed regularly, and are all of them actually used? (Many indexes are created for some queries but are left unused after a while, yet every DML still has to maintain all of them.)
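    One further option worth mentioning (assuming - and this is an assumption - that losing the last few committed rows in an instance crash is acceptable because they can be re-sent from the source system): 11g asynchronous commits reduce 'log file sync' waits without changing the application.
    -- Batch the redo for commits and do not wait for LGWR on each commit.
    ALTER SYSTEM SET COMMIT_LOGGING = 'BATCH';
    ALTER SYSTEM SET COMMIT_WAIT    = 'NOWAIT';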

  • Redo Log Files - more than 12 per hour

    Hello @all,
    I have a problem with my redo log files. I get more than 12 switches per hour. I have 3 files of 50M each. I increased the size to 150M, but
    I still have 12 switches per hour.
    Does anyone know what I did wrong?
    Database:
    Oracle 9i
    Thanks
    Martin

    user9528362 wrote:
    Hello @all,
    yes I know that 3 switches per hour would be perfect, but I have already increased the size from 50M to 150M and the number of switches has not gone down.
    So there must be something else that causes the log switches.
    Martin,
    As I said somewhere above too, 150MB is a tiny size if you are managing a production DB. I have already mentioned that you should make your log file size at least 500MB and then check. As for the high redo activity, only you can confirm whether this has only just started or was happening before too. In any case, for an active OLTP system, a redo log file size of 500MB to 1GB should be okay.
    For the extra redo generation, you have been given a link for mining the log files using LogMiner. Try using it to see what is causing the extra redo.
    HTH
    Aman....

  • Redo Generation Rate

    Hi,
    I am having a problem with the redo generation rate. The size of the redo logfiles in my DB is 400MB and a log switch happens approximately every 10 minutes, which I think is a very short interval given the size of my redo logfiles. I have even checked the LogMiner-related (supplemental logging) settings but did not find any problem with them.
    SQL> select SUPPLEMENTAL_LOG_DATA_MIN,SUPPLEMENTAL_LOG_DATA_PK,SUPPLEMENTAL_LOG_DATA_UI from v$database;
    SUP SUP SUP
    NO  NO  NO
    Please, can anyone tell me what is wrong with the redo generation rate?

    First of all, it simply means your system is doing lots of work (generating 400MB of redo per 10 minutes); if you have work to do, you need to do it. Besides, 400MB per 10 minutes is not that big for a busy system. So killing sessions may not be a good idea; your system just has that much work to do. If you have millions of rows that need to be loaded, you just have to do it.
    Secondly, you may query v$sesstat for the statistic named "redo size" periodically (i.e. every 10 minutes) and get some idea of when this happens during the day. And use SQL_TRACE to get the SQL statements into a tkprof report, and find out whether you can optimize the SQL to generate less redo, or use alternative approaches. Some common options to minimize redo:
    insert /*+ append */ hint
    create table as (select ...)
    table nologging
    if updating millions of rows, can it be done by creating new tables instead
    is it possible to use the temporary table feature (some applications use permanent tables where a global temporary table would do)
    Anyway, you have to know what your database is doing while generating tons of redo. Until you find out which SQL statements are generating the large amount of redo, you cannot solve the problem at the system level by killing sessions or the like.
    Regards,
    Jianhui
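    As a concrete starting point for the v$sesstat suggestion above, a minimal "top sessions by redo" sketch (standard views; the top-10 cut-off is arbitrary):
    SELECT *
      FROM (SELECT s.sid, s.username, s.program, st.value AS redo_bytes
              FROM v$sesstat st
              JOIN v$statname n ON n.statistic# = st.statistic#
              JOIN v$session  s ON s.sid = st.sid
             WHERE n.name = 'redo size'
             ORDER BY st.value DESC)
     WHERE ROWNUM <= 10;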

  • Recover Database is taking more time for first archived redo log file

    Hi,
    Environment Used :
    Hardware : IBM p570 machine with P6 processor Lpar of .5 CPU and 2.5 GB Ram
    OS : AIX 5.3 ML 07
    Cluster: HACMP 5.4.1.2
    Oracle Version: 9.2.0.4 RAC
    SAN : DS8100 from IBM
    I have used the flash copy option to copy the database from production to a test machine, and then tried to recover the database to a consistent state using the command "recover automatic database until cancel". The system takes a long time, and from the alert log it was found that, for the first archived log only, it reads all the datafiles, taking about 3 seconds per datafile. Since I have more than 500 datafiles, it takes nearly 25 minutes to apply the first archived redo log file. All other log files are applied immediately without any delay. Any suggestion to improve the speed will be highly appreciated.
    Regards
    Sridhar

    After changing the LPAR settings to 2 CPUs and 5 GB RAM, the problem was solved.

  • High redo, log.xml and alert log generation with streams

    Hi,
    We have a setup where Streams and Messaging Gateway are implemented on Oracle 11.1.0.7 to replicate changes.
    Until recently there was no issue with the setup, but for the last few days there has been an excessive amount of redo, log.xml and alert log generation, which takes up about 50 GB for archive logs and 20 GB for the rest of the files.
    For now we have disabled the streams.
    Please suggest the possible reasons for this issue.
    Regards,
    Ankit

    Obviously, as no one here has access to the two files with the error messages, log.xml and the alert log, the resolution starts with looking into those files,
    and you should have posted this question only after doing that.
    As it stands, no help is possible.
    Sybrand Bakker
    Senior Oracle DBA

  • Trying to archive more often: archive_lag_target vs downsizing redo logs

    Hi,
    I have a small production 10.2.0.1 database (small: all the files total 17GB) on Linux which is used in Agile PLM. Ever since we went live, it produces only 2-4 archive logs per day. This was OK when our company was only concerned with being able to restore from the nightly cold backup. Now we want to make sure we can recover to the last hour of work, so I need the database to spit out logs more often. It has 4 log groups (with 2 members each), each 50M in size. I am going to add 2 more log groups to have 6, since that is what we did for our 11i instance based on a consultant's recommendation. That could prevent some problems, but won't cause it to spit out more logs.
    I did some research (here included) and found that setting archive_lag_target=3600 will FORCE the db to spit out a log every hour, and in my testing this works very nicely. The archive logs are only about 1.2M when they get spit out every hour, but that is fine. The question is: is it OK to turn on archive_lag_target while keeping the size of the logs at 50M, and have mostly "small" logs being spit out? Or should I reduce the size of the logs to, say, 20M (by dropping the old ones and creating new ones)? I actually tried 20M and then during a busy time it spit out 2 of them within 10 minutes, but then I saw it did the same thing with the 50M size, so I figured why not keep the 50M redo log size in the first place? It would also make my go-live plan easier, as I would just add 2 log groups at the current size in prod, and not have to drop and recreate a bunch of logs.
    I think this is a good plan -- my only worry is that since the traditional way to increase the frequency of the logs was to reduce their size, I feel like I am "cheating" by using the archive_lag_target parameter to do this. I also want to avoid changing too many things in production at one time. Thanks in advance. Marv

    user11965205 wrote:
    is it OK to turn on archive_lag_target while keeping the size of the logs at 50M, and have mostly "small" logs being spit out?
    Yes, it is OK: you should keep the "large" redo logs in case your database instance sometimes has much more write activity, to avoid "checkpoint not complete" issues.
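    For reference, a minimal sketch of the setting discussed above (the one-hour value is just the example from the thread):
    -- Force a log switch, and hence an archive, at least every 3600 seconds,
    -- regardless of how full the current online redo log is.
    ALTER SYSTEM SET ARCHIVE_LAG_TARGET = 3600 SCOPE = BOTH;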
