Lots of redo log waits

Dear all,
In ST04 I see a high redo log wait. Is this a problem? Please suggest how to solve it.
Please find the details below.
Size (kB)                            14,352
Entries                          42,123,046
Allocation retries                    9,103
Alloc fault rate(%)                     0.0
Redo log wait (s)                       486
Log files (in use)                8 (   8 )
DB_INST_ID     Instance ID     1
DB_INSTANCE     DB instance name     prd
DB_NODE     Database node     A
DB_RELEASE     Database release     10.2.0.4.0
DB_SYS_TIMESTAMP     Day, Time     06.04.2010 13:07:10
DB_SYSDATE     DB System date     20100406
DB_SYSTIME     DB System time     130710
DB_STARTUP_TIMESTAMP     Start up at     22.03.2010 03:51:02
DB_STARTDATE     DB Startup date     20100322
DB_STARTTIME     DB Startup time     35102
DB_ELAPSED     Seconds since start     1329368
DB_SNAPDIFF     Sec. btw. snapshots     1329368
DATABUFFERSIZE     Size (kB)     3784704
DBUFF_QUALITY     Quality (%)     96.3
DBUFF_LOGREADS     Logical reads     5615573538
DBUFF_PHYSREADS     Physical reads     207302988
DBUFF_PHYSWRITES     Physical writes     7613263
DBUFF_BUSYWAITS     Buffer busy waits     878188
DBUFF_WAITTIME     Buffer wait time (s)     3583
SHPL_SIZE     Size (kB)     1261568
SHPL_CAQUAL     DD-cache Quality (%)     95.1
SHPL_GETRATIO     SQL area getratio(%)     98.4
SHPL_PINRATIO     SQL area pinratio(%)     99.9
SHPL_RELOADSPINS     SQLA.Reloads/pins(%)     0.0042
LGBF_SIZE     Size (kB)     14352
LGBF_ENTRIES     Entries     42123046
LGBF_ALLORETR     Allocation retries     9103
LGBF_ALLOFRAT     Alloc fault rate(%)     0
LGBF_REDLGWT     Redo log wait (s)     486
LGBF_LOGFILES     Log files     8
LGBF_LOGFUSE     Log files (in use)     8
CLL_USERCALLS     User calls     171977181
CLL_USERCOMM     User commits     1113161
CLL_USERROLLB     User rollbacks     34886
CLL_RECURSIVE     Recursive calls     36654755
CLL_PARSECNT     Parse count     10131732
CLL_USR_PER_RCCLL     User/recursive calls     4.7
CLL_RDS_PER_UCLL     Log.Reads/User Calls     32.7
TIMS_BUSYWT     Busy wait time (s)     389991
TIMS_CPUTIME     CPU time session (s)     134540
TIMS_TIM_PER_UCLL     Time/User call (ms)     3
TIMS_SESS_BUSY     Sessions busy (%)     0.94
TIMS_CPUUSAGE     CPU usage (%)     2.53
TIMS_CPUCOUNT     Number of CPUs     4
RDLG_WRITES     Redo writes     1472363
RDLG_OSBLCKWRT     OS blocks written     54971892
RDLG_LTCHTIM     Latching time (s)     19
RDLG_WRTTIM     Redo write time (s)     2376
RDLG_MBWRITTEN     MB written     25627
TABSF_SHTABSCAN     Short table scans     12046230
TABSF_LGTABSCAN     Long table scans     6059
TABSF_FBYROWID     Table fetch by rowid     1479714431
TABSF_FBYCONTROW     Fetch by contin. row     2266031
SORT_MEMORY     Sorts (memory)     3236898
SORT_DISK     Sorts (disk)     89
SORT_ROWS     Sorts (rows)     5772889843
SORT_WAEXOPT     WA exec. optim. mode     1791746
SORT_WAEXONEP     WA exec. one pass m.     93
SORT_WAEXMULTP     WA exec. multipass m     0
IEFF_SOFTPARSE     Soft parse ratio     0.9921
IEFF_INMEM_SORT     In-memory sort ratio     1
IEFF_PARSTOEXEC     Parse to exec. ratio     0.9385
IEFF_PARSCPUTOTOT     Parse CPU to Total     0.9948
IEFF_PTCPU_PTELPS     PTime CPU / PT elps.     0.1175
Regards,
Kumar

Hi,
If the redo log buffer is not large enough, sessions must wait for the Oracle log-writer process to free space. This wait time becomes wait time for the end user, so it may cause performance problems at the database end and needs to be tuned.
The size of the redo log buffer is defined in the init.ora file using the 'LOG_BUFFER' parameter. The statistic 'redo buffer allocation retries' reflects the number of times a user process waits for space in the redo log buffer.
If the redo log buffer is too small and is causing these waits, the recommendation is to increase its size until 'redo buffer allocation retries' stays near zero.
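A quick way to check both statistics and the current buffer size (plain v$sysstat / v$parameter queries, names as in 10g):
SQL> SELECT name, value FROM v$sysstat
     WHERE name IN ('redo buffer allocation retries', 'redo log space requests');
SQL> SELECT value FROM v$parameter WHERE name = 'log_buffer';
Note that 'redo log space requests' counts waits for space in the online redo log files rather than the buffer, so a high value there points at log file sizing, checkpoints, or the archiver instead of LOG_BUFFER.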
regards,
rakesh

Similar Messages

  • Redo log wait

    Dear All,
We are using ECC5 and the database is Oracle 9i on Windows 2003. I have noticed that the
Redo log wait (s) has suddenly increased to 690.
Please suggest what the problem is and how to solve it.
    Data buffer
    Size              kb      1,261,568
    Quality            %           96.2
    Reads                 4,234,462,711
    Physical reads          160,350,516
              writes           3,160,751
    Buffer busy waits         1,117,697
    Buffer wait time   s          3,507
    Shared Pool
    Size              kb        507,904
    DD-Cache quality   %           84.3
    SQL Area getratio  %           95.6
             pinratio  %           98.8
          reloads/pins %         0.0297
    Log buffer
    Size              kb          1,176
    Entries                  11,757,027
    Allocation retries              722
    Alloc fault rate   %            0.0
Redo log wait      s            690
    Log files (in use)            8( 8)
    Calls
    User calls               41,615,763
         commits                367,243
         rollbacks                7,890
    Recursive calls         100,067,593
    Parses                    7,822,590
    User/Recursive calls            0.4
    Reads / User calls            101.8
    Time statistics
    Busy wait time     s        697,392
    CPU time           s         42,505
    Time/User call    ms             18
      Sessions busy      %           9.26
      CPU usage          %           4.51
      CPU count                         2
    Redo logging
    Writes                    1,035,582
    OS-Blocks written        14,276,056
    Latching time      s              1
      Write time         s            806
      Mb written                    6,574
    Table scans & fetches
    Short table scans           607,891
    Long table scans             32,468
    Fetch by rowid        1,620,054,083
       by continued row         761,131
    Sorts
    Memory                    3,046,669
    Disk                             32
    Rows sorted             446,593,854
    Regards,
    Shiva

    Hi Stefan,
As per the doc you suggested, the details are as follows.
In a day there are only 24 log switches, and per hour there are no more than 10 to 15 as per the doc, so it is quite low.
The DD-cache quality of 84.1% is low.
    The elapsed time since start
    Elapsed since start (s)       540,731
      Log buffer
      Size              kb          1,176
      Entries                  13,449,901
      Allocation retries              767
      Alloc fault rate   %            0.0
Redo log wait      s            696
       Log files (in use)            8( 8)
    Check DB Wait times
    TCode ST04->Detail Analysis Menu->Wait Events
    Statistics on total waits for an event
    Elapsed time:             985  s
    since reset at 09:34:06
Type   Client   Sessions   Busy wait time (ms)   Total wait time (ms)   Busy wait time (%)
    USER   User          40            1,028,710           17,594,230        5.85
    BACK   ARC0           1                2,640            1,264,410        0.21
    BACK   ARC1           1                  540            1,020,400        0.05
    BACK   CKPT           1                  950              987,490        0.10
    BACK   DBW0           1                  130              983,920        0.01
    BACK   LGWR           1                  160              986,430        0.02
    BACK   PMON           1                    0              987,000        0.00
    BACK   RECO           1                   10            1,800,010        0.00
    BACK   SMON           1                3,820            1,179,410        0.32
    Disk based sorts
    Sorts
    Memory                    3,443,693
    Disk                             41
    Rows sorted             921,591,847
    Check DB Shared Pool Quality
    Shared Pool
    Size              kb        507,904
    DD-Cache quality   %           84.1
    SQL Area getratio  %           95.6
         pinratio  %           98.8
          reloads/pins %         0.0278
      V$LOGHIST
    THREAD#   SEQUENCE#   FIRST_CHANGE#   FIRST_TIME            SWITCH_CHANGE#
    1         31612       381284375       2008/11/13 00:01:29   381293843
    1         31613       381293843       2008/11/13 00:12:12   381305142
    1         31614       381305142       2008/11/13 03:32:39   381338724
    1         31615       381338724       2008/11/13 06:29:21   381362057
    1         31616       381362057       2008/11/13 07:00:39   381371178
    1         31617       381371178       2008/11/13 07:13:01   381457916
    1         31618       381457916       2008/11/13 09:26:17   381469012
    1         31619       381469012       2008/11/13 10:27:19   381478636
    1         31620       381478636       2008/11/13 10:59:54   381488508
    1         31621       381488508       2008/11/13 11:38:33   381498759
    1         31622       381498759       2008/11/13 12:05:14   381506545
    1         31623       381506545       2008/11/13 12:33:48   381513732
    1         31624       381513732       2008/11/13 13:08:10   381521338
    1         31625       381521338       2008/11/13 13:50:15   381531371
    1         31626       381531371       2008/11/13 14:38:36   381540689
    1         31627       381540689       2008/11/13 15:02:19   381549493
    1         31628       381549493       2008/11/13 15:43:39   381556307
    1         31629       381556307       2008/11/13 16:07:47   381564737
    1         31630       381564737       2008/11/13 16:39:45   381571786
    1         31631       381571786       2008/11/13 17:07:07   381579026
    1         31632       381579026       2008/11/13 17:37:26   381588121
    1         31633       381588121       2008/11/13 18:28:58   381595963
    1         31634       381595963       2008/11/13 20:00:41   381602469
    1         31635       381602469       2008/11/13 22:23:20   381612866
    1         31636       381612866       2008/11/14 00:01:28   381622652
    1         31637       381622652       2008/11/14 00:09:52   381634720
    1         31638       381634720       2008/11/14 03:32:00   381688156
    1         31639       381688156       2008/11/14 07:00:30   381703441
    14.11.2008         Log File information from control file                                10:01:32
      Group     Thread    Sequence   Size         Nr of     Archive          First           Time 1st SCN
      Nr        Nr        Nr         (bytes)      Members        Status      Change Nr       in log
      1         1         31638      52428800     2         YES  INACTIVE    381634720       2008/11/14 03:32:00
      2         1         31639      52428800     2         YES  INACTIVE    381688156       2008/11/14 07:00:30
      3         1         31641      52428800     2         NO   CURRENT     381783353       2008/11/14 09:50:09
      4         1         31640      52428800     2         YES  ACTIVE      381703441       2008/11/14 07:15:07
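For reference, the per-hour switch rate shown above can be computed directly instead of counting rows by hand. A sketch against V$LOG_HISTORY (V$LOGHIST carries the same data on older releases):
SQL> SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour, COUNT(*) AS switches
     FROM v$log_history
     GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
     ORDER BY 1;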
    Regards,

  • Redo log wait event

    Hi,
in my top events I have:
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~
Event                      Waits    Time (s)  Avg wait (ms)  % Total Call Time  Wait Class
CPU time                              1,894                   36.1
log file sync             36,862     1,008        27          19.2               Commit
db file scattered read   165,508       970         6          18.5               User I/O
db file sequential read  196,596       857         4          16.3               User I/O
log file parallel write   35,847       565        16          10.8               System I/O
Log files are on separate disks with no other activity, only 1 redo member per group, and 4 groups.
I think that 27 ms for log file sync is high.
I raised the commit batch size in SQL*Loader (rows=100000 instead of 30000), but it's still high.
Which checks can I perform?
I'm on AIX 5.3 and the database is 10.2.0.4.4.

    Log File Sync
    The “log file sync” wait event is triggered when a user session issues a commit (or a rollback). The user session will signal or post the LGWR to write the log buffer to the redo log file. When the LGWR has finished writing, it will post the user session. The wait is entirely dependent on LGWR to write out the necessary redo blocks and send confirmation of its completion back to the user session. The wait time includes the writing of the log buffer and the post, and is sometimes called “commit latency”.
    The P1 parameter in <View:V$SESSION_WAIT> is defined as follows for this wait event:
    P1 = buffer#
    All changes up to this buffer number (in the log buffer) must be flushed to disk and the writes confirmed to ensure that the transaction is committed and will be kept on an instance crash. The wait is for LGWR to flush up to this buffer#.
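A sketch of how to watch this live from V$SESSION_WAIT (columns as in 9i/10g):
SQL> SELECT sid, seq#, p1 AS buffer#, seconds_in_wait
     FROM v$session_wait
     WHERE event = 'log file sync';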
    Reducing Waits / Wait times:
    If a SQL statement is encountering a significant amount of total time for this event, the average wait time should be examined. If the average wait time is low, but the number of waits is high, then the application might be committing after every row, rather than batching COMMITs. Applications can reduce this wait by committing after “n” rows so there are fewer distinct COMMIT operations. Each commit has to be confirmed to make sure the relevant REDO is on disk. Although commits can be "piggybacked" by Oracle, reducing the overall number of commits by batching transactions can be very beneficial.
If the SQL statement is a SELECT statement, review the Oracle Auditing settings. If Auditing is enabled for SELECT statements, Oracle could be spending time writing and committing data to the AUD$ table.
    If the average time waited is high, then examine the other log related waits for the session, to see where the session is spending most of its time. If a session continues to wait on the same buffer# then the SEQ# column of V$SESSION_WAIT should increment every second. If not then the local session has a problem with wait event timeouts. If the SEQ# column is incrementing then the blocking process is the LGWR process. Check to see what LGWR is waiting on as it may be stuck. If the waits are because of slow I/O, then try the following:
    Reduce other I/O activity on the disks containing the redo logs, or use dedicated disks.
Try to reduce resource contention. Check the number of transactions (commits + rollbacks) each second, from V$SYSSTAT (see the query sketch after this list).
    Alternate redo logs on different disks to minimize the effect of the archiver on the log writer.
    Move the redo logs to faster disks or a faster I/O subsystem (for example, switch from RAID 5 to RAID 1).
    Consider using raw devices (or simulated raw devices provided by disk vendors) to speed up the writes.
    See if any activity can safely be done with NOLOGGING / UNRECOVERABLE options in order to reduce the amount of redo being written.
    See if any of the processing can use the COMMIT NOWAIT option (be sure to understand the semantics of this before using it).
    Check the size of the log buffer as it may be so large that LGWR is writing too many blocks at one time. 
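For the transaction-rate check mentioned above, a minimal sketch: sample this total twice and divide the difference by the elapsed seconds.
SQL> SELECT SUM(value) AS total_transactions
     FROM v$sysstat
     WHERE name IN ('user commits', 'user rollbacks');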

  • High redo log space wait time

    Hello,
    Our DB is having very high redo log space wait time :
    redo log space requests 867527
    redo log space wait time 67752674
    LOG_BUFFER is 14 MB and having 6 redo logs groups and the size of redo log file is 500MB for each log file.
    Also, the amount of redo generated per hour :
    START_DATE START NUM_LOGS MBYTES DBNAME
    2008-07-03 10:00 2 1000 TKL
    2008-07-03 11:00 4 2000 TKL
    2008-07-03 12:00 3 1500 TKL
Will increasing the size of LOG_BUFFER help to reduce the redo log space waits?
    Thanks in advance ,
    Regards,
    Aman

    Looking quickly over the AWR report provided the following information could be helpful:
1. You are currently targeting approx. 6 GB of memory with this single instance and the report says that physical memory is 8 GB. According to the advisories it looks like you could decrease your memory allocation without hurting your performance.
    In particular the large_pool_size setting seems to be quite high although you're using shared servers.
Since you're using 10.2.0.4 it might be worth thinking about using the single SGA_TARGET parameter instead of specifying all the individual parameters. This allows Oracle to size the SGA components within the given target dynamically.
2. You are currently using a couple of underscore parameters. In particular the "_optimizer_max_permutations" parameter is set to 200, which might significantly reduce the number of execution plan permutations Oracle investigates while optimizing a statement and could lead to suboptimal plans. It could be worth checking why this has been set.
    In addition you are using a non-default setting of "_shared_pool_reserved_pct" which might no longer be necessary if you are using the SGA_TARGET parameter as mentioned above.
3. You are using non-default settings for the "optimizer_index_caching" and "optimizer_index_cost_adj" parameters which favor index access paths / nested loops. Since "db file sequential read" is the top wait event it might be worth checking whether the database is doing excessive index access. Also most of the rows have been fetched by rowid (table fetch by rowid), which could also be an indicator of excessive index access / nested loop usage.
4. Your database has been working quite a lot during the 30-minute snapshot interval: it processed 123,000,000 logical blocks, which means almost 0.5 GB per second. Check the top SQLs; there are a few that are responsible for most of the blocks processed. E.g. there is an anonymous PL/SQL block that has been executed almost 17,000 times during the interval, representing 75% of the blocks processed. The statements executed as part of these procedures might be worth checking to see if they could be tuned to require fewer logical I/Os. This could be related to the non-default optimizer parameters mentioned above.
5. You are still using the compatible = 9.2.0 setting, which means this database could still be opened by a 9i instance. If this is no longer required, you might lift this to the 10g default value. This will also convert the redo format to 10g, I think, which could lead to a smaller amount of redo generated. But be aware that this is a one-way operation: once compatible has been set to 10.x, you can only go back to 9i via a restore.
6. Your undo retention is set quite high (> 6000 secs), although your longest query in the AWR period was 151 seconds. It might be worth checking whether this setting is reasonable, as you might have quite a large undo tablespace at present. Oracle 10g ignores the setting if it isn't able to honor it given the current undo tablespace size.
    7. "parallel_max_servers" has been set to 0, so no parallel operations can take place. This might be intentional but it's something to keep in mind.
    Regards,
    Randolf
    Oracle related stuff:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle:
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Streams capture waiting for dictionary redo log

    Hi ,
    The stream capture is waiting for the dictionary redo log
As per the alert logs, LogMiner is able to register the logfile:
    RFS LogMiner: Registered logfile [TEMP_114383_1_628824420.arc] to LogMiner session id [142]
    Fri Feb 13 00:00:39 2009
Capture   Capture  Session  Session                               Redo Entries  Total LCRs
Name      Process  ID       Serial#  State                        Scanned       Enqueued
C_REF     C001     675      2707     WAITING FOR DICTIONARY REDO  0             0

Capture Process  Capture  Positive     Negative  Process
Name             Queue    Rule Set     Rule Set  Status
C_REF            CA_REF   RULESET$_80            ENABLED

Capture  Capture
Name     Queue    START_SCN      Status   STATUS_CHAN  CAPTURED_SCN   APPLIED_SCN    USE  FIRST_SCN
C_REF    CA_REF   8586133398117  ENABLED  12-Feb-2009  8586133398117  8586133398117  YES  8586133398117
    CONSUMER_NAM SEQUENCE# FIRST_SCN NEXT_SCN TO_DATE(FIR TO_DATE(NEX NAME
    C_REF 114378 8586133399062 8586162685837 12-Feb-2009 12-Feb-2009 /TEMP_114378_1_628824420.arc
    C_REF 114379 8586162685837 8586163112496 12-Feb-2009 12-Feb-2009 /TEMP_114379_1_628824420.arc
    C_REF 114380 8586163112496 8586163984886 12-Feb-2009 12-Feb-2009 /TEMP_114380_1_628824420.arc
    C_REF 114381 8586163984886 8586163986301 12-Feb-2009 12-Feb-2009 /TEMP_114381_1_628824420.arc
    C_REF 114382 8586163986301 8586163987651 12-Feb-2009 12-Feb-2009 /TEMP_114382_1_628824420.arc
    C_REF 114383 8586163987651 8586163989497 12-Feb-2009 13-Feb-2009 /TEMP_114383_1_628824420.arc
    C_REF 114384 8586163989497 8586163989674 13-Feb-2009 13-Feb-2009 /TEMP_114384_1_628824420.arc
Capture                               Time of
Name     LogMiner ID  Last Redo SCN  Last Redo SCN
C_REF    142          8586166339742  00:10:13 02/13/09
I am still not able to make out why, even after the archive logs are registered, they are not mined by LogMiner.
Can you please help? I am stuck in this situation. I have rebuilt Streams by completely removing the Streams configuration, and have also dropped and recreated strmadmin.

    Perhaps I missed it in your post but I didn't see a version number or any information as to what form of Streams was implemented or how.
    There are step-by-step instructions for debugging Streams applications at metalink. I would suggest you find the directions for your version and follow them.

  • Redo log space requests and Enqueue Waits

    Hi all,
I am seeing an increase in Enqueue Waits and Redo Log Space Requests, from 58 and 274 to 192 and 1,245 respectively, in two weeks' time.
The DB is a production database and runs on an HP cluster with 4x1 GB RAM and 550 MHz CPUs.
There are four redo log groups of 200M (2 members each), which I increased to 400M over this past weekend.
    I have included below the memory structure details:
    Redo Log Summary
    Total System Global Area 1646094824 bytes
    Fixed Size 104936 bytes
    Variable Size 408989696 bytes
    Database Buffers 1228800000 bytes
    Redo Buffers 8200192 bytes
My question is: how do I stop it from growing further and passing the 1:5000 ratio?
    At the moment the ratio is in the range of 1:186194.
    Your input is much appreciated.
    Cheers,
    Seyoum.

Here is some information from Oracle's Performance Tuning Guide.
    The V$SYSSTAT statistic redo log space requests indicates how many times a server process had to wait for space in the online redo log, not for space in the redo log buffer. A significant value for this statistic and the wait events should be used as an indication that checkpoints, DBWR, or archiver activity should be tuned, not LGWR. Increasing the size of log buffer does not help.
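To see which resource is behind such waits, the related wait events can be read from V$SYSTEM_EVENT; 'log file switch (checkpoint incomplete)' points at DBWR/checkpointing and 'log file switch (archiving needed)' at the archiver. A sketch:
SQL> SELECT event, total_waits, time_waited
     FROM v$system_event
     WHERE event LIKE 'log file switch%';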

  • High redo log buffer wait

    Hi,
    I can see "high redo log buffer wait" event. The instance spent 23% of its resources waiting for this event. Any suggestion to tune redo log buffer?
    DB version : 10.2.0.4.0
    Os : AIX
    SQL> SELECT name, value FROM v$sysstat WHERE name = 'redo log space requests';
    NAME VALUE
    redo log space requests 3542
    SQL> sho parameter buffer
    NAME TYPE VALUE
    buffer_pool_keep string
    buffer_pool_recycle string
    db_block_buffers integer 0
    log_buffer integer 14238720
    use_indirect_data_buffers boolean FALSE
    SQL> select GROUP#,BYTES from v$log;
    GROUP# BYTES
    1 1073741824
    4 1073741824
    3 1073741824
    2 1073741824
    SQL> show parameter sga
    NAME TYPE VALUE
    lock_sga boolean FALSE
    pre_page_sga boolean FALSE
    sga_max_size big integer 5G
    sga_target big integer 5G
    Thanks

    Gowin_dba wrote:
    I can see "high redo log buffer wait" event. The instance spent 23% of its resources waiting for this event. Any suggestion to tune redo log buffer?
    SQL> SELECT name, value FROM v$sysstat WHERE name = 'redo log space requests';
    NAME VALUE
redo log space requests 3542

How are you getting from 3,542 "redo log space requests" to 23% of the instance resources waiting for "high redo log buffer wait" (which is not a wait event that can be found in v$event_name in any version of Oracle)?
    "redo log space requests" is about log FILE space, by the way, not about log BUFFER space.
    Regards
    Jonathan Lewis

  • Best practice - online redo logs and virtualization

    I have a 10.1.0.4 instance (soon to be migrated to 11gr2) running under Windows Server 2003.
    We use a non-standard disk distribution scheme -
    on the c: drive we have oracle_home as well as directories for control files and online redo logs.
    on the d: drive we have datafiles
    on the e: drive we have archive log files and another directory with online redo logs and another copy of control file
    my question is this:
    is it smart practice to have ANY online redo logs or control file on the same spindle with archive logs?
Our setup works fairly well, but we are in the process of migrating the instance first to an ESX server and SAN, and then to 11gR2 64-bit under Server 2008 64-bit. When we bring up our instance on the VM for testing, benchmarking the ESX server (dual Xeon 3.4 GHz with 48 GB RAM, running against a FalconStor NSS SAN with 15k SAS disks over iSCSI) against the production physical server (dual Xeon 2.0 GHz with 4 GB RAM, using direct-attached 7200 rpm SATA drives), we find that some processes run faster on the ESX box and some run 40-100% slower. Running Statspack seems to identify lots of physical read waits as well as some waits for redo and control files.
    Is it possible that in addition to any overhead introduced by ESX and iSCSI (we are running Jumbo Frames over 1gb) we may have contention because the archive logs are on the same "spindle" (virtual) as the online redo and control files?
We're looking at multiple avenues to bring the 2 servers in line from a performance standpoint - db configuration, memory allocation, possible move to a 10gb network, possible move to an SSD storage tray, possible application rewrites. But from the simplest low-hanging-fruit angle, if these files should not be on the same spindle, that's an easy change to make and might eke out an improvement.
    Ideas?
    Mike

    Hi,
    "Old" Oracle standard is to use as many spindles as possible.
It looks to me like you have only 1 disk with several partitions on it?
In my honest opinion you should anyway start by physically separating the OS from Oracle, so leave the C: drive to the Windows OS.
Take another physically separate D: drive to install your application.
    Use yet another set of physical drives, preferably in RAID10 setup, for your database and redo logs
    And finally yet another disk for the archive logs.
    We have recently configured a Windows 2008 server with an 11G Db, which pretty much follows the above setup.
All non-RAID10 disks are RAID1 (mirror) and we even have some SSDs for hot tables and redo logs.
    The machine, or must I say the database, operates like a high speed train, very, very fast.
Of course, keep in mind the number of cores (not only for licensing) and the amount of memory.
    Try to prevent the system from swapping, because that is a performance killer!
Edit: And even if you put a virtual layer in between, try to separate the virtual disks as much as possible over physical disks.
    Success!
    FJFranken

  • SMON generating lot of redo

    DB Version 10.2.0.4
    OS HP-UX
Hello, we are facing a problem with the wait event Redo Log Space Wait.
When I checked for the sessions which are generating a lot of redo, I found SMON at the top.
I just can't understand what SMON would be doing to generate the highest amount of redo.
Please advise.
    Regards
    Vinayak

    I would use DBMS_LOGMNR package to see what kind of activity is performed. For details about this package see Oracle® Database PL/SQL Packages and Types Reference 10g Release 2 (10.2).
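A minimal LogMiner sketch along those lines (the archive log file name is only a placeholder; pick one generated while SMON was producing the redo):
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(logfilename => '/arch/prd_1_1234.arc', options => DBMS_LOGMNR.NEW);
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
SQL> SELECT seg_owner, seg_name, operation, COUNT(*)
     FROM v$logmnr_contents
     GROUP BY seg_owner, seg_name, operation
     ORDER BY 4 DESC;
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR;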
    Kind regards,
    Joze

  • Tuning of Redo logs in data warehouses (dwh)

    Hi everybody,
    I'm looking for some guidance to configure redo logs in data warehouse environments.
Of course we are running in noarchivelog mode and use direct path inserts (nologging) wherever possible.
Nevertheless every ETL process (one process per day) produces 150 GB of redo logs. That seems quite a lot compared to the overall data volume (1 TB tables + indexes).
Actually I'm not sure if there is a tuning problem, but because of the large amount of redo I'm interested in examining it.
    Here are the facts:
    - Oracle 10g, 32 GB RAM
- 6 GB SGA, 20 GB PGA
- 5 log groups, each with a 1 GB log file
- 4 MB log buffer
- every day ca. 150 log switches (with peaks: some log switches after 10 seconds)
    - some sysstat metrics after one etl load:
Select name, to_char(value, '9G999G999G999G999G999G999') from v$sysstat where name like 'redo %';

NAME                               VALUE
redo synch writes                        300.636
redo synch time                           61.421
redo blocks read for recovery                  0
redo entries                         327.090.445
redo size                        159.588.263.420
redo buffer allocation retries            95.901
redo wastage                         212.996.316
redo writer latching time                  1.101
redo writes                              807.594
redo blocks written                  321.102.116
redo write time                          183.010
redo log space requests                   10.903
redo log space wait time                  28.501
redo log switch interrupts                     0
redo ordering marks                    2.253.328
redo subscn max counts                 4.685.754
So the questions:
Can anybody see tuning needs? Should the redo logs be enlarged or should more groups be added? What about placing redo logs on solid state disks?
    kind regards,
    Mirko

user5341252 wrote:
I'm looking for some guidance to configure redo logs in data warehouse environments. Of course we are running in noarchivelog mode and use direct path inserts (nologging) wherever possible.

Why "of course"? What's your recovery strategy if you wreck the database?

Nevertheless every ETL process (one process per day) produces 150 GB of redo logs. That seems quite a lot compared to the overall data volume (1 TB tables + indexes).

This may be an indication that you need to do something to reduce index maintenance during data loading.

Actually I'm not sure if there is a tuning problem, but because of the large amount of redo I'm interested in examining it.

For a quick check you might be better off running Statspack (or AWR) snapshots across the start and end of the batch to get an idea of what work goes on and where the most time goes. (A better strategy would be to examine specific jobs in detail, though.)

redo synch time 61.421
redo log space wait time 28.501

Rough guideline - if the redo is slowing you down, then you've lost less than 15 minutes across the board to the log writer. Given the number of processes loading and the elapsed time to load, is this significant?

redo buffer allocation retries 95.901

This figure tells us how OFTEN we couldn't get space in the log buffer - but not how much time we lost as a result. We also need to see your 'log buffer space' wait time.

Can anybody see tuning needs? Should the redo logs be enlarged or should more groups be added? What about placing redo logs on solid state disks?

Based on the information you've given so far, I don't think anyone should be giving you concrete recommendations on what to do; only suggestions on where to look or what to tell us.
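For the 'log buffer space' wait time mentioned above, a sketch to read it (time_waited is in centiseconds in v$system_event):
SQL> SELECT event, total_waits, time_waited
     FROM v$system_event
     WHERE event = 'log buffer space';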
    Regards
    Jonathan Lewis

  • Redo Space wait

    oracle : 9i
    os : linux
    log : archive
    dg : primary
    Production : Yes
The instance is up. The performance is poor due to redo space waits.
    I checked the following.
1. select (sw.value)*100/lw.value from v$sysstat sw, v$sysstat lw where sw.name='redo log space requests' and lw.name='redo writes';
    24.9544131
    2. Parameters
    log_checkpoint_interval integer 0
    log_checkpoint_timeout integer 1800
    log_checkpoints_to_alert boolean TRUE
    log_buffer integer 524288
    3. SELECT name, value
    FROM SYS.v_$sysstat
    WHERE NAME in ('redo buffer allocation retries',
    'redo log space wait time');
    redo buffer allocation retries 5216940
    redo log space wait time 40519121
4. Select name, value from v$sysstat
Where name in ('redo log space requests', 'redo entries');
    redo entries 785620472
    redo log space requests 1295431
Other than restarting the server, can any action be taken?

1. Make sure your archiving destination isn't getting full. If ARCH can't archive, LGWR can't switch, the log buffer fills up and users then have to wait until it unclogs.
Space is enough. Files are getting generated and transported as well.
2. Make sure you have sufficient ARCH processes. As a rough rule of thumb, LGWR can fill a log about three times faster than ARCH can copy it. Therefore, there is always a risk of ARCH getting bogged down with a backlog of unarchived logs. If that happens, LGWR will eventually get to a point where it can't switch... and the log buffer fills up etc etc. ARCH is supposed to be self-tuning (that is, extra ARCH processes are spawned automatically). But you may want to set log_archive_max_processes to provide a minimum number of processes to start with (yeah, the name of the parameter and its job are very confusing! MAX_PROCESSES actually specifies a minimum (and initial) number of processes!)
log_archive_max_processes integer 2
Need to increase it further?
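If it does need raising, the parameter is dynamic, so a sketch would be:
SQL> ALTER SYSTEM SET log_archive_max_processes = 4;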
3. Make sure you have sufficient log groups. If you've only the minimum two, for example, LGWR will likely be unable to switch back to log 1 when it's finished in log 2, because log 1 is still being archived and checkpointed. If LGWR waits, the log buffer fills up, users have to wait until space becomes free once more... With lots of log groups, however, LGWR can switch into group 3, group 4, group 5 and so on before having to switch back to attempt to re-use group 1. I'd generally recommend a minimum of 4 groups.
    SQL> select group#, (bytes)/(1024* 1024) from v$log;
    GROUP# (BYTES)/(1024*1024)
    1 5
    2 5
    3 5
I will add one more group as per your suggestion.
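Adding a group is a one-liner; the member paths below are placeholders, and given the 5 MB logs above a larger size is probably worth considering at the same time:
SQL> ALTER DATABASE ADD LOGFILE GROUP 4
     ('/u01/oradata/redo04a.log', '/u02/oradata/redo04b.log') SIZE 50M;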
4. Make sure your logs are sufficiently large. Small logs switch quicker, and hence LGWR can catch up with itself more readily. If the logs are big, the rate of switching slows down, but incremental checkpointing means you don't build up a huge backlog of checkpoint work to perform when the switch finally happens. Few switches that don't have to do much will help keep LGWR able to work, and if LGWR can keep clearing the log buffer, users won't experience redo space waits.
5. Put your redo logs on your best hardware. If you've got a mix of very fast disks and very ordinary disks, put your online logs on the good stuff. If you've got RAID 5 for the data files, don't put your redo logs on it. Use RAID 1+0 ideally, or RAID 0 with multiplexing. Keep the redo subsystem fast, in other words.
Running on RAID 0, and the archive logs are on a SAN device.
6. Related: keep the IO done to redo logs away from the IO done to data files and anything else. Anything which disrupts LGWR's ability to clear the log buffer efficiently will increase your risk of redo space waits.

  • What's the point of redo logs?

    Why does Oracle bother writing everything to redo logs? If it's going to write data changes to the disk, why not just write them once to the data files and be done with it? What's the point of doing it twice? And if it's a redundancy thing, why not mirror the data disks?

    Hemant K Chitale wrote:
    How would you backup a database while it is in use ? You can't lock all the datafiles to prevent writes to them. Yet, transactions may be updating different blocks in different datafiles even as the backup is in progress. Say your backup starts with datafile 1 (or even datafiles 1,2,3,4 in parallel) at time t0. By time t5, it has copied 20% of the datafile to tape or alternate disk backup location. Along comes a transaction that updates the 100th block (somewhere within the 10-11% range) of datafile 1 and also the 60th block of datafile 5. Meanwhile, the backup continues running, already having taken a prior image of the 100th block and not being aware that the block has been changed. At time t25 it completes datafile 1 (or datafiles 1,2,3,4) and starts backing up datafile 5. Now, when it copies the 60th block of datafile 5, it (the backup utility) doesn't know that this block is inconsistent with the backup image of the 100th block of datafile 1.
    Instead of 1 transaction imagine 100 or 1000 transactions occurring while the backup is running.
    Surely, Oracle must be able to regenerate a consistent image of the whole database when it is restored ?
    That is what the Redo stream provides. The Redo stream is written to Archivelogs so that it can be backed up -- no Archivelog file is "in flux" (particularly if you use RMAN to backup the Archivelogs as well !).
Had Oracle been merely writing to the datafiles alone, without a Redo stream, there is no way it could recreate a consistent database -- whether after Crash Recovery OR after Media Recovery.

Interesting point about how redo logs facilitate backups. So what you're saying is that the redo logs help keep the data in the actual data files in a consistent state by only writing full transactions to them at a time. Presumably Oracle will either write out the redo log data to the data files before a backup or will at least prevent the redo logs from writing to the data files during a backup. I always wondered how databases got around that problem of keeping the system available for writing during a backup. I wonder how SQL Server does it.
    Hemant K Chitale wrote:
    Now, approach this from another angle. A database consists of 10 or 100 or 500 datafiles. You have 10 or 100 or 1000 sessions issuing COMMITs to complete their transactions, which could be of 1 row or 100 rows or 1million rows, each transaction of a different size. Should the 1000 sessions be forced to wait while Oracle writes all those updated blocks to disk in different datafiles -- how many blocks can it write in "an instant" ?
But what if Oracle manages to write much less information -- the bare minimum (called "change vectors") to re-play every transaction to a single file serially? That would be much faster. Imagine writing to 500 datafiles concurrently, having to open the file, progress to the required block address and update the block, for each block changed in each file VERSUS writing much less information serially to a single file -- if the file is full, switch to another file, but keep writing serially.

As to your second point, I don't really have a good enough understanding about the format of redo logs vs. the data files to follow you totally. Are you saying that it takes more time to write to the data files because you have to find the proper place in the B-Tree before you can write to it? And that doing that is slower than just opening the redo log and always appending new information to the very end? Maybe so, but it seems like all transactions having to write to a single redo log in serial would slow things down since there would be a ton of contention for one file. Whereas with the data files, you could potentially have several transactions writing to different files simultaneously (provided your hardware would support doing that). And it seems to me like a change vector would contain a lot more information than a field value, but, like I said, I'm not really familiar with the format.

  • Urgent Help - Redo Log problem

    Hi,
I have an implementation of Oracle 9i. Presently I have 3 redo log groups with 2 members each. There is a lot of updating and inserting of records going on in a few tables. When the load increases, the DB lands in a hang state, and in the log I get this message:
    Thread 1 cannot allocate new log, sequence 234
    All online logs needed archiving
    Current log# 1 seq# 233 mem# 0: D:\ORACLE\ORADATA\ORCL\REDO01.LOG
    Current log# 1 seq# 233 mem# 1: D:\ORACLE\ORADATA\ORCL\REDO04.LOG
Can anyone help me in solving this problem? I tried to switch the log files, but as they are already waiting for archiving, it was no use.
    Any help on this will be highly helpfull to me as I am in Live environment.
    Arvind

The way the log groups work is:
Log Writer (lgwr) starts writing to group 1. When group 1 fills up, it switches to group 2. When lgwr starts writing to group 2, the Archiver (arc) wakes up and starts writing group 1 to the archive file. When group 2 fills up, lgwr starts writing to group 3, and arc archives group 2 once it has finished writing group 1. When group 3 fills up, lgwr starts writing to group 1 again, assuming that arc has finished writing group 1 to the archive.
    In your case, it appears that arc is still writing group 1 when lgwr wants to use it again, so the database stalls until arc is finished writing group 1.
    Fundamentally you have two choices. You can increase the size of each file in each log group, so they will fill up less often. However, this will also make arc take longer to archive the group. If you can gain enough time based on slower filling to offset the slower writing you should be ok.
    The other option is to add a few more groups of the same size as the existing groups. This will give lgwr more groups to use before needing to start re-using earlier groups.
    Typically, in our systems we run between 4 and 6 64M log groups and never see these hangs.
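Since an existing group cannot be resized in place, the "increase the size" option in practice means adding new, larger groups and dropping the old small ones. A sketch (the new file paths are placeholders modeled on the ones in the question):
SQL> ALTER DATABASE ADD LOGFILE GROUP 4
     ('D:\ORACLE\ORADATA\ORCL\REDO05.LOG', 'D:\ORACLE\ORADATA\ORCL\REDO06.LOG') SIZE 64M;
SQL> ALTER SYSTEM SWITCH LOGFILE;
-- drop an old group only once v$log shows it INACTIVE and archived:
SQL> ALTER DATABASE DROP LOGFILE GROUP 1;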
    HTH
    John

  • ORA-00333: redo log read error block

    ORA-01033: ORACLE initialization or shutdown in progress ...
    / as sysdba
    SQL> shutdown immediate;
    SQL> startup nomount;
    SQL> alter database mount;
    SQL> alter database open;
    ORA-00333: redo log read error block 8299 count 8192
    SQL> SELECT * FROM V$VERSION;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Product
    PL/SQL Release 10.2.0.1.0 - Production
    CORE 10.2.0.1.0 Production
    TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 - Production
SQL> select group#, members, thread#, status, archived, bytes, first_time, first_change#, sequence# from v$log;
GROUP#  MEMBERS  THREAD#  STATUS    ARCHIVED  BYTES     FIRST_TIME  FIRST_CHANGE#  SEQUENCE#
1       1        1        CURRENT   NO        52428800  29-FEB-12   1597643        57
2       1        1        INACTIVE  NO        52428800  29-FEB-12   1573462        56
    Dump file c:\oraclexe\app\oracle\admin\xe\bdump\alert_xe.log
    Wed Feb 29 19:46:38 2012
    Recovery of Online Redo Log: Thread 1 Group 1 Seq 56 Reading mem 0
    Mem# 0 errs 0: C:\ORACLEXE\APP\ORACLE\FLASH_RECOVERY_AREA\XE\ONLINELOG\O1_MF_1_7LZYZK8S_.LOG
    Wed Feb 29 19:46:40 2012
    Completed redo application
    Wed Feb 29 19:46:40 2012
    Completed crash recovery at
    Thread 1: logseq 56, block 6568, scn 1597642
    270 data blocks read, 270 data blocks written, 1460 redo blocks read
    Wed Feb 29 19:46:43 2012
    Thread 1 advanced to log sequence 57
    Thread 1 opened at log sequence 57
    Current log# 2 seq# 57 mem# 0: C:\ORACLEXE\APP\ORACLE\FLASH_RECOVERY_AREA\XE\ONLINELOG\O1_MF_2_7LZYZL5V_.LOG
    Successful open of redo thread 1
    Wed Feb 29 19:46:43 2012
    SMON: enabling cache recovery
    Wed Feb 29 19:46:55 2012
    Successfully onlined Undo Tablespace 1.
    Wed Feb 29 19:46:55 2012
    SMON: enabling tx recovery
    Wed Feb 29 19:46:56 2012
    Database Characterset is AL32UTF8
    replication_dependency_tracking turned off (no async multimaster replication found)
    Starting background process QMNC
    QMNC started with pid=19, OS id=3024
    Wed Feb 29 19:47:09 2012
    Completed: alter database open
    Wed Feb 29 19:47:14 2012
    db_recovery_file_dest_size of 10240 MB is 0.98% used. This is a
    user-specified limit on the amount of space that will be used by this
    database for recovery-related files, and does not reflect the amount of
    space available in the underlying filesystem or ASM diskgroup.
    Wed Feb 29 20:33:30 2012
    MMNL absent for 1537 secs; Foregrounds taking over
    Wed Feb 29 20:33:31 2012
    MMNL absent for 1540 secs; Foregrounds taking over
    Wed Feb 29 20:33:31 2012
    MMNL absent for 1540 secs; Foregrounds taking over
    MMNL absent for 1540 secs; Foregrounds taking over
    Wed Feb 29 20:33:32 2012
    MMNL absent for 1540 secs; Foregrounds taking over
    Wed Feb 29 20:33:33 2012
    MMNL absent for 1540 secs; Foregrounds taking over
    Wed Feb 29 21:45:24 2012
    MMNL absent for 4318 secs; Foregrounds taking over
    MMNL absent for 4318 secs; Foregrounds taking over
    MMNL absent for 4322 secs; Foregrounds taking over
    Dump file c:\oraclexe\app\oracle\admin\xe\bdump\alert_xe.log
    Wed Feb 29 22:30:01 2012
    ORACLE V10.2.0.1.0 - Production vsnsta=0
    vsnsql=14 vsnxtr=3
    Windows XP Version V5.1 Service Pack 3, v.3244
    CPU : 2 - type 586, 2 Physical Cores
    Process Affinity : 0x00000000
    Memory (Avail/Total): Ph:3097M/3546M, Ph+PgF:5143M/5429M, VA:1943M/2047M
    Wed Feb 29 22:30:01 2012
    Starting ORACLE instance (normal)
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Picked latch-free SCN scheme 2
    Using LOG_ARCHIVE_DEST_10 parameter default value as USE_DB_RECOVERY_FILE_DEST
    Autotune of undo retention is turned on.
    IMODE=BR
    ILAT =10
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    ksdpec: called for event 13740 prior to event group initialization
    Starting up ORACLE RDBMS Version: 10.2.0.1.0.
    System parameters with non-default values:
    sessions = 49
    __shared_pool_size = 201326592
    __large_pool_size = 8388608
    __java_pool_size = 4194304
    __streams_pool_size = 0
    spfile = C:\ORACLEXE\APP\ORACLE\PRODUCT\10.2.0\SERVER\DBS\SPFILEXE.ORA
    sga_target = 805306368
    control_files = C:\ORACLEXE\ORADATA\XE\CONTROL.DBF
    __db_cache_size = 587202560
    compatible = 10.2.0.1.0
    db_recovery_file_dest = C:\oraclexe\app\oracle\flash_recovery_area
    db_recovery_file_dest_size= 10737418240
    undo_management = AUTO
    undo_tablespace = UNDO
    remote_login_passwordfile= EXCLUSIVE
    dispatchers = (PROTOCOL=TCP) (SERVICE=XEXDB)
    shared_servers = 4
    local_listener = (ADDRESS=(PROTOCOL=TCP)(HOST=winsp3ue)(PORT=1522))
    job_queue_processes = 4
    audit_file_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\ADUMP
    background_dump_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\BDUMP
    user_dump_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\UDUMP
    core_dump_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\CDUMP
    db_name = XE
    open_cursors = 300
    os_authent_prefix =
    pga_aggregate_target = 268435456
    PMON started with pid=2, OS id=2176
    PSP0 started with pid=3, OS id=2204
    MMAN started with pid=4, OS id=2208
    DBW0 started with pid=5, OS id=2212
    LGWR started with pid=6, OS id=2220
    CKPT started with pid=7, OS id=2240
    SMON started with pid=8, OS id=2460
    RECO started with pid=9, OS id=2464
    CJQ0 started with pid=10, OS id=2480
    MMON started with pid=11, OS id=2484
    Wed Feb 29 22:30:02 2012
    starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
    MMNL started with pid=12, OS id=2492
    Wed Feb 29 22:30:02 2012
    starting up 4 shared server(s) ...
    Oracle Data Guard is not available in this edition of Oracle.
    Wed Feb 29 22:30:02 2012
    alter database mount exclusive
    Wed Feb 29 22:30:06 2012
    Setting recovery target incarnation to 2
    Wed Feb 29 22:30:06 2012
    Successful mount of redo thread 1, with mount id 2657657770
    Wed Feb 29 22:30:06 2012
    Database mounted in Exclusive Mode
    Completed: alter database mount exclusive
    Wed Feb 29 22:30:07 2012
    alter database open
    Wed Feb 29 22:30:07 2012
    Beginning crash recovery of 1 threads
    Wed Feb 29 22:30:07 2012
    Started redo scan
    Wed Feb 29 22:30:15 2012
    Errors in file c:\oraclexe\app\oracle\admin\xe\udump\xe_ora_2544.trc:
    ORA-00333: redo log read error block 10347 count 6144
    ORA-00312: online log 2 thread 1: 'C:\ORACLEXE\APP\ORACLE\FLASH_RECOVERY_AREA\XE\ONLINELOG\O1_MF_2_7LZYZL5V_.LOG'
    ORA-27070: async read/write failed
    OSD-04016: Error queuing an asynchronous I/O request.
    O/S-Error: (OS 23) Data error (cyclic redundancy check).
    Waiting for Help
    Regards

    Errors in file c:\oraclexe\app\oracle\admin\xe\udump\xe_ora_2544.trc:
    ORA-00333: redo log read error block 10347 count 6144
    ORA-00312: online log 2 thread 1: 'C:\ORACLEXE\APP\ORACLE\FLASH_RECOVERY_AREA\XE\ONLINELOG\O1_MF_2_7LZYZL5V_.LOG'
    ORA-27070: async read/write failed
    OSD-04016: Error queuing an asynchronous I/O request.
O/S-Error: (OS 23) Data error (cyclic redundancy check).

Your redo log file might be corrupted or might not exist; check it physically: C:\ORACLEXE\APP\ORACLE\FLASH_RECOVERY_AREA\XE\ONLINELOG\O1_MF_2_7LZYZL5V_.LOG
Is it in archivelog mode?
Perform a fake recovery and open with RESETLOGS.
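A sketch of that last-resort route (back up all files cold first; any redo in the damaged log is lost):
SQL> STARTUP MOUNT
SQL> RECOVER DATABASE UNTIL CANCEL
-- type CANCEL when it asks for the log it cannot read
SQL> ALTER DATABASE OPEN RESETLOGS;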

  • Hoping for a quick response : EXP and Archived REDO log files

    I apologize in advance if this question has been asked and answered 100 times. I admit I didn't search, I don't have time. I'm leaving on vacation tomorrow, and I need to know if I'm correct about something to do with backup / restore.
    we have 10g R2 running a single instance on a single server. The application vendor has "embedded" oracle with their application. The vendor's backup is a batch file using EXP - thus:
    exp system/xpwdxx@db full=y file=D:\Orant\admin\db\EXP\db_full.dmp log=D:\Orant\admin\db\EXP\db_full.txt direct=y compress=y
    This command is executed nightly at midnight. The files are then backed up by our nightly backup to offsite storage media.
The database is running in autoarchive mode. The problem is, the archived redo files filled the drive they were being stored on, which is the drive the database is on. I used OS commands to move 136 GB of archived redo logs onto other storage media to free the drive.
My question: since the EXP runs at midnight, when there is likely NO activity, do I need to run in autoarchive mode? From what I have read, you cannot even apply archived redo log files to this type of backup strategy (IMP). Is that true? We are OK losing changes since our last EXP. I have read a lot of stuff about restoring consistent vs. inconsistent, and just need to know: if my disk fails, and I have to start with a clean install of Oracle and nothing else, can I IMP this EXP and get back up and running as of the last EXP? Or do I need the autoarchived redo log files back to July 2009 (136 GB of them)?
    Hoping for a quick response
    Best Regards, and thanks in advance
    Bruce Davis

    Bruce Davis wrote:
    Amardeep Sidhu
    Thank you for your quick reply. I am reading in the other responses that since I am using EXP without consistent=y, I might not even have a backup. The application vendor said that with this dmp file they can restore us to the most recent backup. I don't really care for this strategy as it is untested. I asked them to verify that they could restore us and they said they tested the dmp file and it was OK.
    Thank you for taking the time to reply.
Best Regards
Bruce

The dump file is probably ok in the sense it is not corrupted and can be used in an imp operation. That doesn't mean the data in it is transactionally consistent. And to use it at all, you have to have a database up and running. If the database is physically corrupted, you'll have to rebuild a new database from scratch before you can even think about using your dmp file.
    Vendors never understand databases. I once had a vendor tell me that Oracle's performance would be intolerable if there were more than 5 concurrent connections. Well, maybe in HIS product ..... Discussions terminated quickly after he made that statement.
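For reference, a consistent export is just an extra flag on the vendor's exp command line (sketch modeled on their batch file; whether consistent=y can be combined with direct=y depends on the version, so test it):
exp system/xpwdxx@db full=y consistent=y file=D:\Orant\admin\db\EXP\db_full.dmp log=D:\Orant\admin\db\EXP\db_full.txt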
