Improving redo log writer performance

I have a database on RAC (2 nodes)
Oracle 10g
Linux 3
2 PowerEdge 2850 servers
I'm tuning my database with Spotlight, and I already have this alert:
"The Average Redo Write Time alarm is activated when the time taken to write redo log entries exceeds a threshold."
The servers are not on RAID5.
How can I improve redo log writer performance?
Unlike most other Oracle write I/Os, Oracle sessions must wait for redo log writes to complete before they can continue processing.
Therefore, the redo logs should be placed on fast devices.
Most modern disks can complete a redo log write in less than 20 milliseconds, and often much faster.
See Also:
Tuning Contention - Redo Log Files
Tuning Disk I/O - Archive Writer
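One way to check where you stand against that ~20 ms guideline is to divide the cumulative v$sysstat counters `redo write time` (reported in centiseconds) and `redo writes`. A minimal sketch of the arithmetic in Python; the counter values are made-up samples, not figures from this post:

```python
# Average redo write latency from two cumulative v$sysstat counters.
# 'redo write time' is reported in centiseconds; 'redo writes' is a count.
# The sample values below are illustrative assumptions.

def avg_redo_write_ms(redo_write_time_cs: int, redo_writes: int) -> float:
    """Return the average redo write time in milliseconds."""
    if redo_writes == 0:
        return 0.0
    return redo_write_time_cs * 10.0 / redo_writes

# Example: 90,000 cs of cumulative write time over 60,000 writes
# averages 15 ms per write, under the ~20 ms guideline above.
print(avg_redo_write_ms(90_000, 60_000))
```

Sampling the two counters at an interval boundary (rather than since instance startup) gives a more representative average for the busy period.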

Some comments on the section that was pulled from Wikipedia. There is some confusion in the market because there are different types of solid state disks, each with different pros and cons. The first major point is that the Wikipedia quote addresses issues with flash-based drives. Flash disks are one type of solid state disk; they would be a bad choice for redo acceleration (as I will try to show below), though they can be useful for accelerating read-intensive applications. The type of solid state disk used for redo logs uses DDR RAM as the storage medium. You may discount my advice because I work for one of these SSD manufacturers, but I think if you do enough research you will see the point. There are many articles, and many more customers, on using SSD to accelerate Oracle.
> Assuming that you are not CPU constrained, moving the online redo to high-speed solid-state disk can make a huge difference.
Do you honestly think this is practical and usable advice, Don? There is a HUGE price difference between SSD and normal hard disks, never mind the following disadvantages. Quoting (http://en.wikipedia.org/wiki/Solid_state_disk):
# Price - As of early 2007, flash memory prices are still considerably higher per gigabyte than those of comparable conventional hard drives - around $10 per GB compared to about $0.25 for mechanical drives.
Comment: Prices for DDR RAM based systems are actually higher than this, with a typical list price around $1000 per GB. Your concern, however, is not price per capacity but price per unit of performance. How many spindles would you have to spread your redo log across to get the performance you need? How much impact are the redo logs having on your RAID cache effectiveness? Our system is obviously geared to the enterprise, where Oracle supports mission-critical databases and a huge return can be made on accelerating Oracle.
# Capacity - The capacity of SSDs tends to be significantly smaller than the capacity of HDDs.
Comment: This statement is true. Per individual device, you can typically get higher storage density from a hard disk drive than from a solid state disk. However, if your goal is redo log acceleration, storage capacity is not your bottleneck; write performance can be. Keep in mind that, as with any storage medium, you can deploy an array of solid state disks providing terabytes of capacity (with either DDR or flash).
# Lower recoverability - After mechanical failure the data is completely lost as the cell is destroyed, while if a normal HDD suffers mechanical failure the data is often recoverable using expert help.
Comment: If you lose a hard drive holding your redo log, the last thing you are likely to do is have a disk restoration company partially restore your data; you should be rebuilding the failed disk from your mirror or RAID. Similarly, with solid state disks (flash or DDR) we recommend host-based mirroring to provide enterprise levels of reliability. In our experience, a DDR-based solid state disk has a failure rate equal to the odds of losing two hard disk drives in a RAID set.
# Vulnerability to certain types of effects, including abrupt power loss (especially DRAM based SSDs), magnetic fields and electric/static charges, compared to normal HDDs (which store the data inside a Faraday cage).
Comment: This statement is all FUD. For example, our DDR RAM based systems have redundant power supplies, N+1 redundant batteries, and four RAID-protected hard disk drives for data backup. The memory is ECC protected and Chipkill protected.
# Slower than conventional disks on sequential I/O
Comment: Most flash drives will be slower on sequential I/O than a hard disk drive (to really understand this you should know there are different kinds of flash memory, which also affect flash performance). DDR RAM based systems, however, offer enormous performance benefits over hard disk or flash based systems for sequential or random writes. DDR RAM systems can handle over 400,000 random write I/Os per second (the number is slightly higher for sequential access). We would be happy to share some Oracle ORION benchmark data to make the point. For redo logs on a heavily transactional system, the latency of the redo log storage can be the ultimate limit on the database.
# Limited write cycles - Typical flash storage will wear out after 100,000-300,000 write cycles, while high endurance flash storage is often marketed with endurance of 1-5 million write cycles (many log files, file allocation tables, and other commonly used parts of the file system exceed this over the lifetime of a computer). Special file systems or firmware designs can mitigate this problem by spreading writes over the entire device, rather than rewriting files in place.
Comment: This statement is mostly accurate but refers only to flash drives. DDR RAM based systems, such as those Don's books refer to, do not have this limitation.
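To put the write-cycle figures above in perspective, here is a back-of-envelope endurance estimate in Python. It assumes ideal wear levelling (which real firmware only approximates), and the capacity and workload numbers are illustrative, not taken from any particular device:

```python
# Rough flash endurance estimate: with ideal wear levelling, the total
# writable volume is (endurance cycles) * (device capacity). All of the
# figures used here are illustrative assumptions.

def lifetime_days(cycles: int, capacity_gb: float, daily_writes_gb: float) -> float:
    """Days until the device's write budget is exhausted."""
    total_writable_gb = cycles * capacity_gb
    return total_writable_gb / daily_writes_gb

# 100,000-cycle flash, 32 GB device, 100 GB of redo written per day:
print(lifetime_days(100_000, 32, 100))
```

Even at the low end of the endurance range, wear levelling stretches the budget over years; the real redo-log concern with flash is write latency, not endurance, as the comments above argue.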
Looking at many of your postings to Oracle Forums thus far, Don, it seems to me that you are less interested in providing actual practical help and more interested in self-promotion - of your company and the Oracle books produced by it.
.. and that is not a very nice approach when people post real problems wanting real-world practical advice and suggestions.
Comment: Contact us and we will see if we can prove to you that Don, and any number of other reputable Oracle consultants, recommend using DDR based solid state disk to solve redo log performance issues. In fact, if it looks like your system could see a serious performance increase, we would be happy to put you on our evaluation program so that you can try it out at no cost.

Similar Messages

  • Redo Log Writer Problem

    Hello
What can I do when the Average redo log write time is 17'300 ms (averaged over 30 ms)?
I have only one redo log writer. Should I start more than one? We have 3 redo log groups (64 MB) and we work with Oracle Data Guard. It's Oracle 11.2 (Unix Solaris 10).
The system switches redo log groups every 45 minutes.
Thanks for your support...
Best regards...
    Roger

    Street wrote:
Hello
What can I do when the Average redo log write time is 17'300 ms (averaged over 30 ms)?
I have only one redo log writer. Should I start more than one? We have 3 redo log groups (64 MB) and we work with Oracle Data Guard. It's Oracle 11.2 (Unix Solaris 10).
The system switches redo log groups every 45 minutes.
Thanks for your support...
Best regards...
Roger
Why do you think that this time, 30 ms, is not good enough for your database? Did you get any redo-log-related issues in the Statspack/AWR report?
There is only one LGWR possible; you can't have more than one LGWR.
Aman....

  • To where does the LGWR write information in redo log buffer ?

Suppose my online redo log files are on filesystems. I want to know where LGWR writes the information from the redo log buffer: does it just write to the filesystem buffer, or directly to disk? And the same question for DBWR when the datafiles are on filesystems too.

It depends on the filesystem. Normally there is a filesystem buffer too, which is where LGWR would write. Yes, but a redo log write must always be a physical write.
    From http://asktom.oracle.com/pls/ask/f?p=4950:8:15501909858937747903::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:618260965466
Tom, I was thinking of a scenario that sometimes scares me...
**From a database perspective** -- theoretically -- when data is committed it inevitably goes to the redo log files on disk.
However, there are other layers between the database and the hardware. I mean, the committed data doesn't go "directly" to disk, because you have "intermediate" structures like I/O buffers, filesystem buffers, etc.
1) What if you have committed and the redo data has not yet "made it" to the redo log, and in the middle of the way -- while this data is still in the OS cache -- the OS crashes? So, I think, Oracle believes the committed data got to the redo logs -- but it hasn't, in fact, **from an OS perspective**. It just "disappeared" while in the OS cache. So the redo would be unusable. Is this a possible scenario?
The data does go to disk. We (on all OS's) use forced I/O to ensure this. We open files, for example, with O_SYNC -- the OS does not return "completed I/O" until the data is on disk.
It may not bypass the intermediate caches and such -- but -- it will get written to disk when we ask it to.
1) that'll not happen. From an OS perspective, it did get to disk.
    Message was edited by:
    Pierre Forstmann
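The forced-I/O behaviour Tom describes is not Oracle-specific. Here is a minimal Python sketch of opening a file with O_SYNC (the same flag LGWR uses for the online logs on many platforms) so that each write only returns once the data has reached stable storage; the file name is arbitrary:

```python
# Open a file with O_SYNC so each write() only returns once the data
# has been forced to stable storage, bypassing the risk of losing a
# "completed" write in the OS cache. POSIX systems only.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "redo_demo.log")
flags = os.O_WRONLY | os.O_CREAT | os.O_SYNC
fd = os.open(path, flags, 0o600)
try:
    written = os.write(fd, b"commit record\n")  # blocks until durable
finally:
    os.close(fd)
print(written)  # number of bytes durably written
```

This is why O_SYNC writes are slower than buffered writes, and why redo latency tracks the physical device latency so closely.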

  • Sizing the redo log files using the OPTIMAL_LOGFILE_SIZE column

    Regards
    I have a specific question regarding logfile size. I have deployed a test database and was exploring how to select an optimal redo log size for performance tuning, using the OPTIMAL_LOGFILE_SIZE column of v$instance_recovery. My main goal is to reduce the redo bytes required for instance recovery. So far I have not been able to optimize the redo log file size. Here are the steps I followed:
    In order to use the advisory from v$instance_recovery I had to set the fast_start_mttr_target parameter, which is not set by default, so I did these steps:
    1)SQL> sho parameter fast_start_mttr_target;
    NAME                                 TYPE                              VALUE
    fast_start_mttr_target               integer                           0
    2) Setting the fast_start_mttr_target requires nullifying following deferred parameters :-
    SQL> show parameter log_checkpoint;
    NAME                                 TYPE                              VALUE
    log_checkpoint_interval              integer                           0
    log_checkpoint_timeout               integer                           1800
    log_checkpoints_to_alert             boolean                           FALSE
    SQL> select ISSES_MODIFIABLE,ISSYS_MODIFIABLE,ISINSTANCE_MODIFIABLE,ISMODIFIED from v$parameter where name like'log_checkpoint_timeout';
    ISSES_MODIFIABL ISSYS_MODIFIABLE ISINSTANCE_MODI ISMODIFIED
    FALSE IMMEDIATE TRUE FALSE
    SQL> alter system set log_checkpoint_timeout=0 scope=both;
    System altered.
    SQL> show parameter log_checkpoint_timeout;
    NAME                                 TYPE                              VALUE
    log_checkpoint_timeout               integer                           0
    3) Now setting fast_start_mttr_target
    SQL> select ISSES_MODIFIABLE,ISSYS_MODIFIABLE,ISINSTANCE_MODIFIABLE,ISMODIFIED from v$parameter where name like'fast_start_mttr_target';
    ISSES_MODIFIABL ISSYS_MODIFIABLE ISINSTANCE_MODI ISMODIFIED
    FALSE IMMEDIATE TRUE FALSE
    Setting fast_start_mttr_target to 1200 = 20 minutes of checkpoint switching, per the Oracle recommendation
    Querying the v$instance_recovery view
    4) SQL> select ACTUAL_REDO_BLKS,TARGET_REDO_BLKS,TARGET_MTTR,ESTIMATED_MTTR, OPTIMAL_LOGFILE_SIZE,CKPT_BLOCK_WRITES from v$instance_recovery;
    ACTUAL_REDO_BLKS TARGET_REDO_BLKS TARGET_MTTR ESTIMATED_MTTR OPTIMAL_LOGFILE_SIZE CKPT_BLOCK_WRITES
    276 165888 93 59 361 16040
    Here TARGET_MTTR was 93, so I set fast_start_mttr_target to 120
    SQL> alter system set fast_start_mttr_target=120 scope=both;
    System altered.
    Now the logfile size suggested by v$instance_recovery is 290 Mb
    SQL> select ACTUAL_REDO_BLKS,TARGET_REDO_BLKS,TARGET_MTTR,ESTIMATED_MTTR, OPTIMAL_LOGFILE_SIZE,CKPT_BLOCK_WRITES from v$instance_recovery;
    ACTUAL_REDO_BLKS TARGET_REDO_BLKS TARGET_MTTR ESTIMATED_MTTR OPTIMAL_LOGFILE_SIZE CKPT_BLOCK_WRITES
    59 165888 93 59 290 16080
    After altering the logfile size to 290 MB, as shown below in the v$log view:
    SQL> select GROUP#,THREAD#,SEQUENCE#,BYTES from v$log;
    GROUP# THREAD# SEQUENCE# BYTES
    1 1 24 304087040
    2 1 0 304087040
    3 1 0 304087040
    4 1 0 304087040
    5) After altering the size I have observed an anomaly: the redo log blocks to be applied for recovery have increased from 59 to 696, and v$instance_recovery is now suggesting a logfile size of 276 MB. Have I misunderstood something?
    SQL> select ACTUAL_REDO_BLKS,TARGET_REDO_BLKS,TARGET_MTTR,ESTIMATED_MTTR, OPTIMAL_LOGFILE_SIZE,CKPT_BLOCK_WRITES from v$instance_recovery;
    ACTUAL_REDO_BLKS TARGET_REDO_BLKS TARGET_MTTR ESTIMATED_MTTR OPTIMAL_LOGFILE_SIZE CKPT_BLOCK_WRITES
    696 646947 120 59 276 18474
    Please clarify the above output; I am unable to optimize the logfile size and have not achieved the goal of reducing the redo log blocks to be applied for recovery. Any help is appreciated in this regard.
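One small sanity check on the numbers above: the BYTES value shown in v$log does correspond exactly to the 290 MB the advisor suggested, so the resize itself was applied as intended. The arithmetic:

```python
# Confirm the resized logs (304,087,040 bytes in v$log above) match the
# 290 MB suggested by OPTIMAL_LOGFILE_SIZE. Values copied from the post.
bytes_per_mb = 1024 * 1024
logfile_bytes = 304_087_040          # BYTES column from v$log
print(logfile_bytes / bytes_per_mb)  # 290.0
```

So the anomaly is in the advisory's moving inputs (workload since the change), not in the resize.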

    sunny_123 wrote:
    Sir, Oracle says that fast_start_mttr_target can be set to 3600 = 1 hour, as suggested by the following Oracle document:
    http://docs.oracle.com/cd/B10500_01/server.920/a96533/instreco.htm
    I set my value to 1200 = 20 minutes. Later I adjusted it to 120 = 2 minutes, as TARGET_MTTR suggested it should be around 100 (if the fast_start_mttr_target value is too high or too low, the effective value is contained in TARGET_MTTR of v$instance_recovery).
    Just to add: you are reading the 9.2 documentation, and a lot has changed since then. For example, in 9.2 the FSMTTR parameter was introduced and explicitly had to be set and monitored by the DBA because of the additional checkpoint writes it might cause. From 10g onwards this parameter is maintained automatically by Oracle. Also, 9i has long been desupported, followed by 10g, so you had better start reading the latest documentation, for 11g or at least 10.2.
    Aman....

  • Redo log wait

    Dear All,
    We are using ECC 5 and the database is Oracle 9i on Windows 2003. I have noticed that the
    Redo log wait (s) has suddenly increased to 690.
    Please suggest what the problem is and how to solve it.
    Data buffer
    Size              kb      1,261,568
    Quality            %           96.2
    Reads                 4,234,462,711
    Physical reads          160,350,516
              writes           3,160,751
    Buffer busy waits         1,117,697
    Buffer wait time   s          3,507
    Shared Pool
    Size              kb        507,904
    DD-Cache quality   %           84.3
    SQL Area getratio  %           95.6
             pinratio  %           98.8
          reloads/pins %         0.0297
    Log buffer
    Size              kb          1,176
    Entries                  11,757,027
    Allocation retries              722
    Alloc fault rate   %            0.0
    *Redo log wait      s            690*
    Log files (in use)            8( 8)
    Calls
    User calls               41,615,763
         commits                367,243
         rollbacks                7,890
    Recursive calls         100,067,593
    Parses                    7,822,590
    User/Recursive calls            0.4
    Reads / User calls            101.8
    Time statistics
    Busy wait time     s        697,392
    CPU time           s         42,505
    Time/User call    ms             18
      Sessions busy      %           9.26
      CPU usage          %           4.51
      CPU count                         2
    Redo logging
    Writes                    1,035,582
    OS-Blocks written        14,276,056
    Latching time      s              1
      Write time         s            806
      Mb written                    6,574
    Table scans & fetches
    Short table scans           607,891
    Long table scans             32,468
    Fetch by rowid        1,620,054,083
       by continued row         761,131
    Sorts
    Memory                    3,046,669
    Disk                             32
    Rows sorted             446,593,854
    Regards,
    Shiva

    Hi Stefan,
    As per the doc you suggested, the details are as follows.
    There are only 24 log switches in a day, and no hour has more than the 10 to 15 mentioned in the doc, so this is very low.
    The DD-Cache quality of 84.1 % is low.
    The elapsed time since start
    Elapsed since start (s)       540,731
      Log buffer
      Size              kb          1,176
      Entries                  13,449,901
      Allocation retries              767
      Alloc fault rate   %            0.0
    *Redo log wait      s            696*
       Log files (in use)            8( 8)
    Check DB Wait times
    TCode ST04->Detail Analysis Menu->Wait Events
    Statistics on total waits for an event
    Elapsed time:             985  s
    since reset at 09:34:06
    Type   Client   Sessions      Busy wait            Total wait           Busy wait
                                time (ms)    time (ms)            time (%)
    USER   User          40            1,028,710           17,594,230        5.85
    BACK   ARC0           1                2,640            1,264,410        0.21
    BACK   ARC1           1                  540            1,020,400        0.05
    BACK   CKPT           1                  950              987,490        0.10
    BACK   DBW0           1                  130              983,920        0.01
    BACK   LGWR           1                  160              986,430        0.02
    BACK   PMON           1                    0              987,000        0.00
    BACK   RECO           1                   10            1,800,010        0.00
    BACK   SMON           1                3,820            1,179,410        0.32
    Disk based sorts
    Sorts
    Memory                    3,443,693
    Disk                             41
    Rows sorted             921,591,847
    Check DB Shared Pool Quality
    Shared Pool
    Size              kb        507,904
    DD-Cache quality   %           84.1
    SQL Area getratio  %           95.6
      pinratio  %                           98.8
          reloads/pins %         0.0278
      V$LOGHIST
    THREAD#   SEQUENCE#   FIRST_CHANGE#   FIRST_TIME            SWITCH_CHANGE#
    1         31612       381284375       2008/11/13 00:01:29   381293843
    1         31613       381293843       2008/11/13 00:12:12   381305142
    1         31614       381305142       2008/11/13 03:32:39   381338724
    1         31615       381338724       2008/11/13 06:29:21   381362057
    1         31616       381362057       2008/11/13 07:00:39   381371178
    1         31617       381371178       2008/11/13 07:13:01   381457916
    1         31618       381457916       2008/11/13 09:26:17   381469012
    1         31619       381469012       2008/11/13 10:27:19   381478636
    1         31620       381478636       2008/11/13 10:59:54   381488508
    1         31621       381488508       2008/11/13 11:38:33   381498759
    1         31622       381498759       2008/11/13 12:05:14   381506545
    1         31623       381506545       2008/11/13 12:33:48   381513732
    1         31624       381513732       2008/11/13 13:08:10   381521338
    1         31625       381521338       2008/11/13 13:50:15   381531371
    1         31626       381531371       2008/11/13 14:38:36   381540689
    1         31627       381540689       2008/11/13 15:02:19   381549493
    1         31628       381549493       2008/11/13 15:43:39   381556307
    1         31629       381556307       2008/11/13 16:07:47   381564737
    1         31630       381564737       2008/11/13 16:39:45   381571786
    1         31631       381571786       2008/11/13 17:07:07   381579026
    1         31632       381579026       2008/11/13 17:37:26   381588121
    1         31633       381588121       2008/11/13 18:28:58   381595963
    1         31634       381595963       2008/11/13 20:00:41   381602469
    1         31635       381602469       2008/11/13 22:23:20   381612866
    1         31636       381612866       2008/11/14 00:01:28   381622652
    1         31637       381622652       2008/11/14 00:09:52   381634720
    1         31638       381634720       2008/11/14 03:32:00   381688156
    1         31639       381688156       2008/11/14 07:00:30   381703441
    14.11.2008         Log File information from control file                                10:01:32
      Group     Thread    Sequence   Size         Nr of     Archive          First           Time 1st SCN
      Nr        Nr        Nr         (bytes)      Members        Status      Change Nr       in log
      1         1         31638      52428800     2         YES  INACTIVE    381634720       2008/11/14 03:32:00
      2         1         31639      52428800     2         YES  INACTIVE    381688156       2008/11/14 07:00:30
      3         1         31641      52428800     2         NO   CURRENT     381783353       2008/11/14 09:50:09
      4         1         31640      52428800     2         YES  ACTIVE      381703441       2008/11/14 07:15:07
    Regards,
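A quick cross-check on that switch rate: with the 52,428,800-byte (50 MB) groups shown in the control file listing, expected switches per day is simply daily redo volume divided by log size. A sketch, where the daily redo volume is an assumed round number (the post only reports cumulative totals):

```python
# Expected log switches per day = daily redo volume / online log size.
log_size_mb = 52_428_800 / (1024 * 1024)  # 50 MB groups, from the control file
daily_redo_mb = 1_200                      # assumed daily redo volume
print(daily_redo_mb / log_size_mb)         # switches per day
```

Working backwards, the observed 24 switches/day on 50 MB logs implies roughly 1.2 GB of redo per day, which is a modest rate; the 690 s of redo log wait is more likely a log buffer or checkpointing issue than a switch-frequency one.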

  • Redo Log - Storage Consideration

    I have one question about the redo log storage guidelines that I read in an article on Metalink.
    That article recommended placing redo logs on a RAID level 1 device (*mirroring*),
    and NOT on RAID 10 or 5.
    If you know, please explain in detail why this is so.
    Scientia potentia est

    I haven't seen a RAID 0, RAID 1, or RAID 5 filesystem in the last 5 years. Most companies now use SAN or NAS.
    That is not entirely true, is it, Robert? Most default SAN installations are set up as RAID5 and are presented to the users as filesystem mounts.
    I agree entirely re the benefits of using ASM.
    The reason redo logs are recommended to be on RAID1 (1+0, 10) and not RAID5 is that redo logs are written differently from all other Oracle files: they are written sequentially by the LGWR process. RAID5 involves writing parity data to another disk and therefore adds extra writes to what can already be a very intensive single-streamed process.
    John
    www.jhdba.wordpress.com
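John's point about the parity overhead can be quantified with the usual small-write model: a RAID5 small write costs 4 physical I/Os (read data, read parity, write data, write parity) versus 2 for RAID1/10 (one per mirror side). A sketch, where the per-spindle IOPS figure is an assumption:

```python
# Effective small-write IOPS of an array = raw IOPS / write penalty.
# RAID5 penalty: read data + read parity + write data + write parity = 4.
# RAID1/10 penalty: one write per mirror side = 2.

def effective_write_iops(disks: int, iops_per_disk: int, penalty: int) -> float:
    return disks * iops_per_disk / penalty

# Eight spindles at an assumed 150 IOPS each:
print(effective_write_iops(8, 150, 4))  # RAID5
print(effective_write_iops(8, 150, 2))  # RAID1/10
```

On the same spindles, RAID1/10 sustains twice the write rate of RAID5, which is exactly the penalty a single-streamed, latency-sensitive writer like LGWR cannot hide.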

  • Question - the relationship between the redo log and rollback segments

    I would like to know exactly what the relationship between the redo log and rollback segments is, and what each of them does.

    I also puzzled over this for quite a while..
    Last year or so I asked someone I know at Oracle headquarters in the US,
    and also someone who has worked at Oracle Korea for over ten years...
    Oracle is SCN-based, and apart from the fact that both the redo log and
    rollback segments operate on the basis of SCNs, the two have no
    relationship to each other at all.
    The redo log is used, when the database terminates abnormally, to roll
    forward data that was committed but not yet applied to the datafiles;
    it only records changes made to the database, and redo is never used
    to roll anything back. In other words, redo is literally REDO:
    re-doing (re-executing). Its function is to re-apply data that was
    committed but not yet written to the datafiles.
    The rollback segment, on the other hand, has the job of rolling back
    uncommitted data. It has nothing to do with rolling forward.
    Through these two mechanisms the database performs transaction roll
    forward and rollback, but they operate on the basis of the SCN and
    have no direct relationship with each other.
    Message edited by:
    민천사 (민연홍)
    Then, when a large transaction runs, is there really no relationship
    between the data written to the redo log and the data written to undo?
    The redo log's role here is only to record a log for recovering the
    transaction; it has no dependency on undo. If you insist on a
    relationship, it is that redo accumulates recovery log records for the
    actual data (the before image) that is copied into undo. A later
    transaction rollback does not consult redo at all; it consults only
    undo, and the rollback process itself is merely logged in redo.

  • How to improve the event log read performance under intensive event writing

    We are collecting ETW events from customer machines. In our perf test, the event read rate can reach 5000/sec when there is no heavy event writing. However, the customer machine has very intensive event writing and our read rate drops a lot (to 300/sec).
    I understand it is I/O bound, since event writes and reads race for the log file; this is confirmed by the fact that whenever there is a burst of event writes, a dip in event reads happens at the same time. Therefore, event reads cannot catch up with
    the event writes and the customer gets lagging logs.
    Note that most of the events are security events generated by Windows (instead of by customers).
    Is there a way to improve the event read performance under intensive event writing? I know it is a hard question given the blocker just mentioned, but we will lose customers if there is no solution. I appreciate any clue very much!

    Hi Leonjl,
    Thank you for posting on MSDN forum.
    I am trying to invite someone familiar with this topic to come into this thread.
    Regards,

  • Where RFS exactly write redo data ?  ( archived redo log or standby redo log ) ?

    Good morning to all;
    I am getting a bit confused by the Oracle official documentation. REF_LINK: Log Apply Services
    "Redo data transmitted from the primary database is received by the RFS on the standby system,
    where the RFS process writes the redo data to either archived redo log files or standby redo log files."
    On the standby site, does RFS write redo data to one of these files, or to both?
    Thanks in advance ..

    Hi GTS,
    GTS (DBA) wrote:
    Primary & standby log file sizes should be the same - this is okay.
    1) What are you trying to disclose about largest & smallest here? - You are confusing me.
    Read: http://docs.oracle.com/cd/E11882_01/server.112/e25608/log_transport.htm#SBYDB4752
    "Each standby redo log file must be at least as large as the largest redo log file in the redo log of the redo source database. For administrative ease, Oracle recommends that all redo log files in the redo log at the redo source database and the standby redo log at a redo transport destination be of the same size."
    GTS (DBA) wrote:
    2) what abt group members ? should be same as primary or need  to add some members additionally. ?
    Data Guard best practice for performance is to create one member per group in the standby DB. On the standby DB, one member per group is reasonable enough. Why? To avoid the write penalty of writing to more than one log file at the standby DB.
    SCENARIO 1: if in your source primary DB you have 2 log members per group, in the standby DB you can have 1 member per group, and additionally create an extra group.

                            primary    standby
    Members per group       2          1
    Number of log groups    4          5

    SCENARIO 2: you can also have this scenario, but I would not encourage it:

                            primary    standby
    Members per group       2          2
    Number of log groups    4          5
    GTS (DBA) wrote:
    "All standby redo logs of the correct size have not yet been archived."
    - in this situation, can we force it on the standby site? Any possibilities?
    You cannot force it; just size your standby redo files correctly and make sure you do not have network failures that will cause redo gaps.
    Hope there is clarity now
    Tobi
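Tobi's sizing rules can be written down as a small check: every standby redo log must be at least as large as the largest log at the redo source, and the common rule of thumb is one more standby group (per thread) than the primary has. A sketch with illustrative sizes:

```python
# Validate a standby redo log configuration against the primary.
# Rule 1: every standby log >= largest primary log.
# Rule 2 (rule of thumb): standby groups >= primary groups + 1.

def standby_config_ok(primary_log_bytes, standby_log_bytes,
                      primary_groups, standby_groups):
    sized_ok = min(standby_log_bytes) >= max(primary_log_bytes)
    count_ok = standby_groups >= primary_groups + 1
    return sized_ok and count_ok

# Four 50 MB primary groups, five 50 MB standby groups (as in SCENARIO 1):
mb = 1024 * 1024
print(standby_config_ok([50 * mb] * 4, [50 * mb] * 5, 4, 5))
```

Undersizing even one standby log forces RFS to fall back to archived log files for that sequence, which is exactly the confusion in the original question.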

  • Deactivate writing to redo logs

    Hi
    I want my database to not write redo logs. I have searched the net and found articles about archiving and turning archiving off (ALTER DATABASE noarchivelog),
    but I have very heavy activity and I don't want to decrease performance by writing those redo logs.
    Thanks in advance

    user12019948 wrote:
    Hi
    I want my database to not write redo logs. I have searched the net and found articles about archiving and turning archiving off (ALTER DATABASE noarchivelog),
    but I have very heavy activity and I don't want to decrease performance by writing those redo logs.
    Thanks in advance
    Database performance doesn't come down because of writes to the redo log files; moreover, writing to the redo log files is very important for recovery. You can't disable redo generation completely. What you can do is use the NOLOGGING clause, which generates minimal redo.
    HTH
    Aman....

  • How to disable write to redo log file in oracle7.3.4

    In Oracle 8, a table's changes can be excluded from redo logging like this: alter table tablename nologging;
    How do I do this in Oracle 7.3.4?
    Thanks.

    user652965 wrote:
    Thanks very much for your help, guys. I appreciate it. Unfortunately none of these commands worked for me. I kept getting an error on clearing logs, saying the redo log is needed to perform recovery so it can't be cleared. So I ended up restoring from an earlier snapshot of my db volume. The database is now open.
    Thanks again for your input.
    And now, as a follow-up, at a minimum you should make sure that all redo log groups have at least 3 members. Then, if you lose a single redo log file, all you have to do is shut down the db and copy one of the good members (of the same group as the lost member) over the lost member.
    And as an additional follow-up: if you value your data, you will run in archivelog mode and take regular backups of the database and archive logs. If you fail to do this, you are saying that your data is not worth saving.

  • Redo log tuning - improving insert rate

    Dear experts!
    We have an OLTP system which produces a large amount of data. After each record written to our 11.2 database (Standard Edition) a commit is performed (the system architecture can't be changed - for example, to commit every 10th record).
    So how can we speed up the insert process? As the database in front of the system gets "mirrored" to our data warehouse system, it is running in NOARCHIVELOG mode. I've already tried placing the redo log files on SSD disks, which sped up the insert process.
    Another idea is putting the table in a separate tablespace with the NOLOGGING option. What do you think about this?
    Furthermore, I have heard about tuning the redo latches parameters. Does anyone have information about this?
    I would be grateful for any information!
    Thanks
    Markus

    We've an OLTP system which produces large amount of data. After each record written to our 11.2 database (standard edition) a commit is >>performed (the system architecture can't be changed - for example to commit every 10th record).Doing commit after each insert (or other DML command) doesn't means that dbwriter process is actually writing this data immediately in db files.
    DBWriter process is using an internal algorithm to decide where to apply changes to db files. You can adjust the writing frequency into db files by using "fast_start_mttr_target" parameter.
    So how can we speed up the insert process? As the database in front of the system gets "mirrored" to our datawarehouse system it is running >>in NOARCHIVE mode. I've already tried placing the redo log files on SSD disks which speeded up the insert process.Placing the redo log files on SSD disks is indeed a good action. Also you can check buffer cache hit rate and size. Also stripping for filesystems where redo files resides should be taken into account.
    Another idea is putting the table on a seperate tablespace with NOLOGGING option. What do you think about this?It's an extremely bad idea. NOLOGGING option for a tablespace will lead to an unrecovearble tablespace and as I stated on first sentence will not increase the insert speed.
    Further more I heard about tuning the redo latches parameter. Does anyone have information about this way?I don't think you need this.
    Better to check the indexes associated with the tables you insert into. Are they analyzed regularly, and are all of them actually used? Many indexes are created for particular queries but are left unused after a while, yet every DML statement still has to maintain all of them.
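    One way to find such unused indexes is Oracle's built-in index usage monitoring; a sketch, with schema and index names as placeholders:

```sql
-- Start recording whether the index is used by the optimizer.
ALTER INDEX scott.orders_ix1 MONITORING USAGE;

-- ...let a representative workload run for a while, then:
SELECT index_name, used, monitoring
  FROM v$object_usage
 WHERE index_name = 'ORDERS_IX1';

-- Stop monitoring when done.
ALTER INDEX scott.orders_ix1 NOMONITORING USAGE;
```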

  • Oracle Performance 11g - Warning: log write elapsed time

    Hello,
    We are facing quite bad performance with our SAP cluster running Oracle 11g.
    In the alert log we constantly see the message:
    "Thread 1 cannot allocate new log, sequence xxxxxx
    Private strand flush not complete"
    However, this message seems to be quite old, whereas we only recently started facing the performance issue.
    Moreover, in the sid_lgwr_788.trc file we are getting warnings for the log write elapsed time, as follows:
    *** 2013-07-25 08:43:07.098
    Warning: log write elapsed time 722ms, size 4KB
    *** 2013-07-25 08:44:07.069
    Warning: log write elapsed time 741ms, size 32KB
    *** 2013-07-25 08:44:11.134
    Warning: log write elapsed time 1130ms, size 23KB
    *** 2013-07-25 08:44:15.508
    Warning: log write elapsed time 1161ms, size 25KB
    *** 2013-07-25 08:44:19.790
    Warning: log write elapsed time 1210ms, size 10KB
    *** 2013-07-25 08:44:20.748
    Warning: log write elapsed time 544ms, size 3KB
    *** 2013-07-25 08:44:24.396
    Warning: log write elapsed time 1104ms, size 14KB
    *** 2013-07-25 08:44:28.955
    Warning: log write elapsed time 1032ms, size 37KB
    *** 2013-07-25 08:45:13.115
    Warning: log write elapsed time 1096ms, size 3KB
    *** 2013-07-25 08:45:46.995
    Warning: log write elapsed time 539ms, size 938KB
    *** 2013-07-25 08:47:55.424
    Warning: log write elapsed time 867ms, size 566KB
    *** 2013-07-25 08:48:00.288
    Warning: log write elapsed time 871ms, size 392KB
    *** 2013-07-25 08:48:04.514
    Warning: log write elapsed time 672ms, size 2KB
    *** 2013-07-25 08:48:08.788
    Warning: log write elapsed time 745ms, size 466KB
    Please advise so we can understand the issue further.
    Regards

    Hi,
    This seems to be an I/O issue. Check the My Oracle Support (MetaLink) note:
    Intermittent Long 'log file sync' Waits, LGWR Posting Long Write Times, I/O Portion of Wait Minimal (Doc ID 1278149.1)
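    The note referenced above distinguishes the time sessions wait from the time LGWR actually spends on I/O. A hedged sketch of how to compare the two system-wide:

```sql
-- 'log file sync' is what committing sessions wait on;
-- 'log file parallel write' is LGWR's actual I/O time.
-- If the sync average is much larger than the write average,
-- the bottleneck is likely CPU/scheduling rather than the disks.
SELECT event,
       total_waits,
       ROUND(time_waited_micro / NULLIF(total_waits, 0) / 1000, 2)
         AS avg_wait_ms
  FROM v$system_event
 WHERE event IN ('log file sync', 'log file parallel write');
```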

  • How does LGWR write redo log files? I am puzzled!

    The documentation says:
    The LGWR concurrently writes the same information to all online redo log files in a group.
    My understanding of that sentence is the following. For example:
    group a includes files (a1, a2)
    group b includes files (b1, b2)
    LGWR write sequence: write a1 and a2 concurrently; afterwards write b1 and b2 concurrently.
    My questions are:
    1. Is my understanding right?
    2. If it is, I think the separate log files in a group should be kept on different disks; otherwise correct recovery cannot be guaranteed. Is that right?
    Thanks everyone!

    Hi,
    >>That is multiplexing... you should always have the members of a log group on more than one disk
    Exactly. You can keep multiple copies of the online redo log file to safeguard against damage to these files. When multiplexing online redo log files, LGWR concurrently writes the same redo log information to multiple identical online redo log files, thereby eliminating a single point of redo log failure. In addition, when multiplexing redo log files, it is preferable to keep the members of a group on different disks, so that one disk failure will not affect the continuing operation of the database. If LGWR can write to at least one member of the group, database operation proceeds as normal.
    Cheers
    Legatti
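    The multiplexing described above can be sketched as follows; the paths and group numbers are placeholders for this environment:

```sql
-- Add a second member of each group on a different disk,
-- so one disk failure cannot take out a whole group.
ALTER DATABASE ADD LOGFILE MEMBER '/disk2/oradata/redo01b.log' TO GROUP 1;
ALTER DATABASE ADD LOGFILE MEMBER '/disk2/oradata/redo02b.log' TO GROUP 2;

-- Verify the members and their status per group.
SELECT group#, member, status
  FROM v$logfile
 ORDER BY group#;
```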

  • Upon commit, lgwr writes to redo logs but dbwr does not write to datafiles

    Guys,
    Upon issuing a commit statement, in which scenarios does LGWR write to the redo logs while DBWR does not write to the datafiles at all?
    Thanx.

    The default behaviour is: on commit, LGWR writes to the redo logs immediately, but the changes may not be immediately written to the datafiles by DBWR; sooner or later they will be (based on certain conditions). The only situation I can think of in which DBWR never writes to the datafiles is when the database crashes after the commit and before DBWR could write them out.
    Not sure, what you are exactly looking for, but hope this helps.
    Thanks
    Chandra Pabba
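    This split between LGWR and DBWR can be observed directly; a sketch of the relevant instance-wide counters:

```sql
-- 'redo writes' climbs with commits (LGWR acts immediately),
-- while 'physical writes' lags until a checkpoint or buffer
-- aging forces DBWR to flush dirty blocks.
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('user commits', 'redo writes', 'physical writes');
```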
