LGWR and DBWn

I learnt from the books that LGWR writes every three seconds
1. Can we change this time?
2. Does that mean the LOG_BUFFER only needs to be large enough to hold the redo entries generated in 3 seconds, since LGWR will flush its contents after that?
Re: Concepts of checkpoint, dbwr, lgwr processes
Also want to know whether the commit includes flushing of all the redo log buffer to redo logs or just the redo entries related to commit?

Also want to know whether the commit includes flushing of all the redo log buffer to redo logs or just the redo entries related to commit?
This paragraph from the Concepts Guide answers your question:
When a user issues a COMMIT statement, LGWR puts a commit record in the redo log buffer and writes it to disk immediately, along with the transaction's redo entries. The corresponding changes to data blocks are deferred until it is more efficient to write them. This is called a fast commit mechanism. The atomic write of the redo entry containing the transaction's commit record is the single event that determines the transaction has committed. Oracle returns a success code to the committing transaction, although the data buffers have not yet been written to disk.
Note:
Sometimes, if more buffer space is needed, LGWR writes redo log entries before a transaction is committed. These entries become permanent only if the transaction is later committed.
When a user commits a transaction, the transaction is assigned a system change number (SCN), which Oracle records along with the transaction's redo entries in the redo log. SCNs are recorded in the redo log so that recovery operations can be synchronized in Real Application Clusters and distributed databases.
In times of high activity, LGWR can write to the redo log file using group commits. For example, assume that a user commits a transaction. LGWR must write the transaction's redo entries to disk, and as this happens, other users issue COMMIT statements. However, LGWR cannot write to the redo log file to commit these transactions until it has completed its previous write operation. After the first transaction's entries are written to the redo log file, the entire list of redo entries of waiting transactions (not yet committed) can be written to disk in one operation, requiring less I/O than do transaction entries handled individually.
Therefore, Oracle minimizes disk I/O and maximizes performance of LGWR. If requests to commit continue at a high rate, then every write (by LGWR) from the redo log buffer can contain multiple commit records.
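To see roughly how this behaves on your own instance, a small sketch (assumes SELECT access on the V$ views; the statistic and parameter names are as in 10g/11g, and the comparison is only illustrative):

-- Compare how often LGWR writes with how often sessions commit.
-- Fewer 'redo writes' than 'user commits' suggests group commits are happening.
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('redo writes', 'redo size', 'user commits');

-- The log buffer size (bytes), for reference; it is usually tiny compared to the other SGA caches.
SELECT value AS log_buffer_bytes
  FROM v$parameter
 WHERE name = 'log_buffer';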

Similar Messages

  • Standby Database Configuration with LGWR and ASYNC

    Hi,
    I am running a standby database configuration on a 100MBit LAN
    with the following set-up:
    log_archive_dest_2='service=standby mandatory reopen=300 lgwr async=2000'
    The system is handling a lot of very small transactions, not
    involving large amounts of data.
    My questions are:
    - What are the potential problems in using a small ASYNC buffer
    like the one above?
    - Does a larger ASYNC buffer influence the latency in copying
    changes from the production database to the standby database -
    will it buffer more changes before sending them to the standby
    database?

    Murlib,
    I have a few more doubts.
    Our requirement is to configure a standby (physical, MAXIMUM PERFORMANCE mode) at a location 600 km away from our primary site.
    Currently our LAN traffic rate is 100 Mbps, but outside our LAN this drops to virtually 1 Mbps.
    Our production database is 24x7 and generates 17 GB of archive files every day.
    Since the network is slow, I think it will create some log gaps, and we also couldn't do a point-in-time recovery.
    We are configuring a standby here inside our LAN in managed recovery mode for recovery purposes, and will keep another standby at the remote place for reports, recovering it every morning. If I am following this procedure, my
    log_archive_dest_2 ='service=stby ARCH NOAFFIRM' ( which is the standby here
    inside our LAN and should be in MANAGED RECOVERY mode)
    and I need to configure the parameters for the standby in my remote location. So my doubt is:
    Do I need to configure "log_archive_dest_n" in the parameter file of my primary for that remote standby?
    I think for manual recovery we can avoid that, but we need to connect it through Oracle Net.
    Can you please tell me the essential PRIMARY parameter file entries for this kind of remote standby, recovering in manual mode?
    I think the following parameters should be there (a rough sketch follows below):
    FAL_SERVER
    FAL_CLIENT
    DB_FILE_NAME_CONVERT
    LOG_FILE_NAME_CONVERT
    STANDBY_FILE_MANAGEMENT=AUTO
    STANDBY_ARCHIVE_DEST
    Thanks and Regards,
    Raj
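    Raj, a rough sketch of how those entries might be split between the two parameter files (the service names and paths below are invented placeholders, not recommendations; verify every parameter against the Data Guard documentation for your release):

    # --- Primary init.ora (sketch) ---
    # LAN standby, managed recovery, archiver-based shipping
    log_archive_dest_2='SERVICE=stby ARCH NOAFFIRM'
    # Remote standby: only needed if you want Oracle Net to ship the archives.
    # For purely manual transfer and recovery you can omit this entry.
    log_archive_dest_3='SERVICE=stby_remote ARCH NOAFFIRM REOPEN=300'

    # --- Remote standby init.ora (sketch) ---
    fal_server=prim                  # gap resolution, useful even with mostly manual recovery
    fal_client=stby_remote
    db_file_name_convert='/u01/oradata/prim/','/u01/oradata/stby/'
    log_file_name_convert='/u01/oradata/prim/','/u01/oradata/stby/'
    standby_file_management=AUTO
    standby_archive_dest='/u01/archive/stby'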

  • Lgwr and cursor

    Please, I would appreciate your help with these 2 questions:
    1 - Can I add more LGWR processes, like DBWR? If yes, why; if no, why not?
    2 - Regarding cursors: when a session issues a select statement, e.g.
    select * from hr.employees
    if the employees table blocks are in the buffer cache they are used, otherwise the server process copies the blocks from the data file into the buffer cache. Is that right so far?
    Let's say the same session is updating the employees records: are the same blocks moved to the shared pool so they can be shared with other sessions, and is it only then that these data blocks are called a cursor, moved to the shared pool and shared by other sessions depending on the cursor_sharing parameter setting?
    Clarification is really appreciated.
    regards

    Maoro wrote:
    1- can i add more lgwr process like dbwr ? if yes why if no why
    If I remember correctly, there is a Metalink note which talks about adding more slaves to the LGWR. It is indeed correct that you can't add more LGWR processes to the system. The reason, IMHO, is that LGWR doesn't need to scan a lot of data. Normally the redo log buffer is very small compared to the other caches in the SGA. In addition, the log buffer algorithm pushes data out of the buffer much faster than the other caches; 3 seconds is the timeout even when there is no other event triggering LGWR to write. So there is actually no need for more than one LGWR. Oracle has made a couple of changes to the latches, though, in order to make the working of the redo buffer better.
    You may not know it, but there are other enhancements in redo and undo management to make them work better. There is a concept of Private Redo and In-Memory Undo from 10g onwards, targeted at reducing contention on the standard caches.
    2 - Regarding cursors: when a session issues a select statement, e.g. select * from hr.employees
    if the employees table blocks are in the buffer cache they are used, otherwise the server process copies the blocks from the data file into the buffer cache. Is that right so far?
    Let's say the same session is updating the employees records: are the same blocks moved to the shared pool so they can be shared with other sessions, and is it only then that these data blocks are called a cursor, moved to the shared pool and shared by other sessions depending on the cursor_sharing parameter setting?
    The first statement is correct.
    Again, it is correct to say that the buffer cache's data does not go into the shared pool, and that's how things have been so far. I have given a link above; read it, as it shows that Oracle has probably made some changes and may now use shared pool buffers for data buffers too. I haven't done the research on it, but the note is from Tanel Poder, and if he says something, it's not just like that.
    If you have some other questions about how things work, feel free to post.
    HTH
    Aman....
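    A small check you can run to see Aman's point for yourself (a sketch; on 10g/11g you will normally see exactly one LGWR, while DBWn can have several, and newer releases may also show LGnn workers):

    -- List the log writer and database writer background processes that are actually running.
    SELECT name, description
      FROM v$bgprocess
     WHERE paddr <> '00'
       AND (name = 'LGWR' OR name LIKE 'DBW%')
     ORDER BY name;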

  • How DBWn Works

    Hi,
    I want to know how DBWn works.
    Say there is a tablespace DATASR3.
    This tablespace consists of 10 datafiles.
    6 datafiles are already filled (they reached their MAXLIMIT),
    and the other 4 datafiles are in the process of getting data.
    I am interested in why it happens like this. How does DBWn add/write data to the datafiles?
    Is it in a circular fashion, or does DBWn randomly decide which datafile it writes to?
    If it is circular fashion, then all datafiles must fill in order (so it should not be possible to fill only 6 datafiles?).
    Do DBWn and LGWR work in the same way, writing to files one after the other?
    Regards,

    Hi user2734911,
    Did you search via google?
    Database writer process (DBWn) is a background process that writes buffers in the database buffer cache to data files.
    Modified or new data is not necessarily written to a datafile immediately. To reduce the amount of disk access and to increase performance, data is pooled in memory and written to the appropriate data files all at once (in bulk mode), as determined by the background process database writer process (DBWn).
    The disk write can happen before or after the commit. See Database Buffer Cache.
    After a COMMIT, the database writes the redo buffers to disk but does not immediately write data blocks to disk. Instead, database writer (DBWn) performs lazy writes in the background.
    The database writer (DBWn) process periodically writes cold, dirty buffers to disk. DBWn writes buffers in the following circumstances:
    * A server process cannot find clean buffers for reading new blocks into the database buffer cache.
    As buffers are dirtied, the number of free buffers decreases. If the number drops below an internal threshold, and if clean buffers are required, then server processes signal DBWn to write.
    The database uses the LRU to determine which dirty buffers to write. When dirty buffers reach the cold end of the LRU, the database moves them off the LRU to a write queue. DBWn writes buffers in the queue to disk, using multiblock writes if possible. This mechanism prevents the end of the LRU from becoming clogged with dirty buffers and allows clean buffers to be found for reuse.
    * The database must advance the checkpoint, which is the position in the redo thread from which instance recovery must begin.
    * Tablespaces are changed to read-only status or taken offline.
    "Do DBWn and LGWR work in the same way, writing to files one after the other?"
    LGWR does sequential writes, so it takes less time to do its job.
    It only needs to keep a record of what operations were performed on the data and hence doesn't need to post the data to the actual data blocks of interest.
    However, DBWn needs to write data to the appropriate data blocks, which means it needs to first find where to write and then do the writing.
    Since a sequential write takes less time, it is advantageous to have separate LGWR and DBWn processes.
    The user gets better performance because LGWR does its faster sequential job while DBWn does its slower job in the background.
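    A small sketch that answers the "which files got filled" part from the data dictionary rather than guessing at DBWn's write order (DATASR3 is the tablespace name from the question):

    -- Current size versus autoextend limit for each datafile of the tablespace.
    SELECT file_name,
           bytes    / 1024 / 1024 AS size_mb,
           maxbytes / 1024 / 1024 AS max_mb,
           autoextensible
      FROM dba_data_files
     WHERE tablespace_name = 'DATASR3'
     ORDER BY file_id;

    Note that which datafile a new extent lands in is decided by extent allocation when a segment grows, not by DBWn; DBWn simply writes each dirty buffer back to whatever file and block it came from.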

  • Increased logfile size and now get cannot allocate new log

    Because we were archiving up to 6 and 7 times per minute, I increased our logfile size from 13M to 150M. I also increased the number of groups from 3 to 5.
    Because we want to ensure recoverability within a certain timeframe, I also have a script that runs every 15 minutes and issues 2 commands: ALTER SYSTEM SWITCH LOGFILE; and ALTER SYSTEM CHECKPOINT;
    I am now seeing in my alert.log file the following, almost every time we do a log switch.
    Thread 1 cannot allocate new log, sequence 12380
    Private strand flush not complete
    No other changes have been made to the database.
    Why would this now be doing this?
    Should I not be doing both the ALTER SYSTEM SWITCH LOGFILE and the ALTER SYSTEM CHECKPOINT?
    Is there something else I should be looking at?
    Any suggestions/answers would be greatly appreciated.
    Db version: 11.1.0.7
    OS: OEL 5.5

    Set the FAST_START_MTTR_TARGET parameter to the instance recovery time in seconds that you want.
    Increase LOG_ARCHIVE_MAX_PROCESSES (it can be set up to 10 with ALTER SYSTEM); this will make sure that the redo logs are copied faster.
    The sizing of redo log files can influence performance because DBWR, LGWR and ARCH are all working during high-DML periods.
    A too-small online redo log file size can cause slowdowns from excessive DBWR and checkpointing behavior. A high checkpoint frequency and "log file switch (checkpoint incomplete)" waits can also cause slowdowns.
    Add additional log writer processes (LGWR).
    Ensure that the archived redo log filesystem resides on a separate physical disk spindle.
    Put the archived redo log filesystem on super-fast solid-state disks.
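    A hedged sketch of the first two suggestions above (the values 300 seconds and 4 processes are only illustrative, not recommendations for your system):

    -- Target instance recovery time of 5 minutes; Oracle then paces incremental checkpoints accordingly.
    ALTER SYSTEM SET fast_start_mttr_target = 300 SCOPE=BOTH;

    -- Allow up to 4 concurrent archiver processes (this parameter can be raised to 10 on this release).
    ALTER SYSTEM SET log_archive_max_processes = 4 SCOPE=BOTH;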

  • DB_WRITER_PROCESSES in Oracle 10.2.0.4 HP UX

    Hi all,
    I noticed that asynchronous i/o cannot be used with regular filesystem on HP UX (only raw devices).
    As far as I know what I can do is to configure multiple db writers with DB_WRITER_PROCESSES parameter.
    DB has CPU_COUNT in 16, but 2 db writers (default value = CPU_COUNT / 8).
    Will I have any benefit by increasing DB_WRITER_PROCESSES?
    Or only if there is a bottleneck in writing dirty buffers?
    Thanks

    "If a bottleneck exists it would be at the disk end, not the CPU end. Server processes are at least 1000 times faster than mechanical disk drives. A single DB_WRITER could keep 10 disk controllers busy and not be a bottleneck."
    Well, then why does Oracle default to one process for every 8 CPUs?
    DBWn also handles checkpoints, file open synchronization, and logging of Block Written records, according to the docs (http://docs.oracle.com/cd/E11882_01/server.112/e25513/bgprocesses.htm#BBBDIIHC). On HP-UX Oracle handles file handles oddly - see MOS note "How to Disable Asynch_io on HP to Avoid Ioctl Async_config Error Errno = 1" [ID 302801.1] if you (the OP) haven't already.
    So on HP-UX you have a potential problem of the architecture assuming asynchronicity, with the dickering between dbw, lgwr and ckpt turning into bickering. Just because something is 1000 times faster doesn't mean it isn't doing 1000 more operations. In real life, you wind up with Oracle telling you to ignore "Private strand flush not complete" messages, because there are no actual lgwr problems unless switches are slow.
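    Before changing DB_WRITER_PROCESSES, a quick sketch of the "is DBWn actually a bottleneck" check implied above (event names as in 10.2; small nonzero values are normal):

    -- Significant 'free buffer waits' or 'write complete waits' suggest DBWn cannot keep up with dirty buffers.
    SELECT event, total_waits, time_waited
      FROM v$system_event
     WHERE event IN ('free buffer waits', 'write complete waits', 'buffer busy waits');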

  • Internal Apex Process(?) Killing Resources.

    Hey All,
    I got an email from my DBA today asking if I have any idea what the following code is and what it does. I see the use of the "f" procedure, so my best guess is that this is what APEX uses to render pages? That aside, the problem is that, according to my DBA, when this runs it causes background processes to "go crazy". When I asked him to define "go crazy" he responded with this:
    "Log writer (LGWR), and DBwriter (DBWn). That tells me its doing a bunch of writing, so much so that we need to switch logs often."
    Anyone familiar with this or know what could cause that?
    Oracle Database 10g Release 10.2.0.3.0 - 64bit Production
    Application Express 3.2.0.00.27
    declare
      rc__ number;
      simple_list__ owa_util.vc_arr;
      complex_list__ owa_util.vc_arr;
    begin
      owa.init_cgi_env(:n__,:nm__,:v__);
      htp.HTBUF_LEN := 63;
      null;
      null;
      simple_list__(1) := 'sys.%';
      simple_list__(2) := 'dbms\_%';
      simple_list__(3) := 'utl\_%';
      simple_list__(4) := 'owa\_%';
      simple_list__(5) := 'owa.%';
      simple_list__(6) := 'htp.%';
      simple_list__(7) := 'htf.%';
      if ((owa_match.match_pattern('f', simple_list__, complex_list__, true))) then
        rc__ := 2;
      else
        null;
        null;
        f(p=>:p);
        if (wpg_docload.is_file_download) then
          rc__ := 1;
          wpg_docload.get_download_file(:doc_info);
          null;
          null;
          null;
          commit;
        else
          rc__ := 0;
          null;
          null;
          null;
          commit;
          owa.get_page(:data__,:ndata__);
        end if;
      end if;
    :rc__ := rc__;
    end;
    Thank You
    Tyson Jouglet

    Hi Jen,
    apex.oraclecorp.com is the Oracle internal website for Oracle Confidential inwards facing applications that you should use, assuming you are an Oracle employee. If you go to that site it explains the conditions of use and how to apply for a workspace. SSO is the recommended authentication method and if you simply go to the Oracle search facility on my.oracle.com and search for "SSO apex" you will be given links to blog and forum articles on how to implement this for your application.
    There are internal forums and blogs you can access that give information on APEX, but I would recommend this forum for general technical questions and information as it has a large and active community, both internal and external to Oracle. Also, you should update your profile to give a friendlier forum name than user12602263.
    Regards
    Andre

  • Corrupting the block to continue recovery in physical standby

    Hi,
    I would just like to inquire how I can mark the block as corrupt so that recovery can continue on the physical standby.
    DB Version: 11.1.0.7
    Database Type: Data Warehouse
    The setup we have is a primary database and a standby database. We are not using Data Guard; our standby is another physical copy of production which acts as standby and is kept in sync by a script that runs from time to time to apply the archive logs coming from production (it is not configured to sync using ARCH or LGWR and the corresponding configuration).
    Now the standby database is out of sync due to errors encountered while trying to apply the archive log; the error is below:
    Fri Feb 11 05:50:59 2011
    ORA-279 signalled during: ALTER DATABASE RECOVER CONTINUE DEFAULT ...
    ALTER DATABASE RECOVER CONTINUE DEFAULT
    Media Recovery Log /u01/archive/<sid>/1_50741_651679913.arch
    Fri Feb 11 05:52:06 2011
    Exception [type: SIGSEGV, Address not mapped to object] [ADDR:0x7FFFD2F18FF8] [PC:0x60197E0, kdr9ir2rst0()+326]
    Errors in file /u01/app/oracle/diag/rdbms/<sid>/<sid>/trace/<sid>pr0028085.trc (incident=631460):
    ORA-07445: exception encountered: core dump [kdr9ir2rst0()+326] [SIGSEGV] [ADDR:0x7FFFD2F18FF8] [PC:0x60197E0] [Address not mapped to object] []
    Incident details in: /u01/app/oracle/diag/rdbms/<sid>/<sid>/incident/incdir_631460/<sid>pr0028085_i631460.trc
    Fri Feb 11 05:52:10 2011
    Trace dumping is performing id=[cdmp_20110211055210]
    Fri Feb 11 05:52:14 2011
    Sweep Incident[631460]: completed
    Fri Feb 11 05:52:17 2011
    Slave exiting with ORA-10562 exception
    Errors in file /u01/app/oracle/diag/rdbms/<sid>/<sid>/trace/<sid>pr0028085.trc:
    ORA-10562: Error occurred while applying redo to data block (file# 36, block# 1576118)
    ORA-10564: tablespace <tablespace name>
    ORA-01110: data file 36: '/u02/oradata/<sid>/<datafile>.dbf'
    ORA-10561: block type 'TRANSACTION MANAGED DATA BLOCK', data object# 14877145
    ORA-00607: Internal error occurred while making a change to a data block
    ORA-00602: internal programming exception
    ORA-07445: exception encountered: core dump [kdr9ir2rst0()+326] [SIGSEGV] [ADDR:0x7FFFD2F18FF8] [PC:0x60197E0] [Address not mapped to object] []
    Based on the error log it seems we are hitting some bug from metalink (document id 460169.1 and 882851.1)
    My question is: the datafile # is given, the block# is known, and the data object is also identified. I have verified that the object is not that important. Is there a way to mark that block# as corrupted so that recovery can continue? Then I will just drop the table from production so that will also happen on the standby, and the corrupted block will be gone too. Is this feasible?
    If it's not, can you suggest what I can do next so that the physical standby will be able to sync again with prod, aside from rebuilding the standby?
    Please note that I also ran dbv against the file to check whether anything is marked as corrupted, and the result for that datafile is also good:
    dbv file=/u02/oradata/<sid>/<datafile>_19.dbf logfile=dbv_file_36.log blocksize=16384
    oracle@<server>:[~] $ cat dbv_file_36.log
    DBVERIFY: Release 11.1.0.7.0 - Production on Sun Feb 13 04:35:28 2011
    Copyright (c) 1982, 2007, Oracle. All rights reserved.
    DBVERIFY - Verification starting : FILE = /u02/oradata/<sid>/<datafile>_19.dbf
    DBVERIFY - Verification complete
    Total Pages Examined : 3840000
    Total Pages Processed (Data) : 700644
    Total Pages Failing (Data) : 0
    Total Pages Processed (Index): 417545
    Total Pages Failing (Index): 0
    Total Pages Processed (Other): 88910
    Total Pages Processed (Seg) : 0
    Total Pages Failing (Seg) : 0
    Total Pages Empty : 2632901
    Total Pages Marked Corrupt : 0
    Total Pages Influx : 0
    Total Pages Encrypted : 0
    Highest block SCN : 3811184883 (1.3811184883)
    Any help is really appreciated. I hope to hear feedback from you.
    Thanks

    damorgan, I understand the opinion.
    I am new with the organization and just inherited a data warehouse database without an RMAN backup. I am still setting up the RMAN backup; that's why I can't use RMAN to resolve the issue. The only thing I have is the physical standby, and it's not a standby that automatically syncs using Data Guard or a standard standby setup. I am just looking for a solution that is applicable in the current situation.

  • Upgrading protection mode

    I've never used any Data Guard protection mode except Maximum Performance before so please forgive me if my questions seem ignorant.
    I have a primary DB with 2 standbys both currently operating in maximum performance mode. The primary is a 2 node RAC cluster. Both standbys are single instance. None of the databases in the configuration use standby logs (yet). I am currently using ARCH to ship the logs to both standbys. All databases are running version 11.2. I am not using Data Guard broker.
    I am considering upgrading just standby # 2 to maximum availability protection mode, but I'm not sure if it's really what I want or not. Perhaps someone here with a better understanding can confirm for me whether it's the best choice.
    My goal is to achieve the maximum data protection possible with no performance impact whatsoever on the primary database.
    If I understand maximum availability correctly (and I am not certain I do), a commit issued on the primary will not complete until the maximum availability standby has acknowledged receipt of the redo data. If the link between the two goes down, that could introduce a significant delay in the commit. Is that correct? In my situation any performance impact on the primary DB is unacceptable.
    Perhaps what I want is maximum performance using standby logs, with the LGWR and ASYNC attributes enabled on the corresponding log_archive_dest_n parameter. If I understand that correctly, a commit on the primary will complete as soon as the redo is written to the local redo log and the availability of the standby destination will have no performance impact at all. In general the standby will remain only a few seconds behind the primary. If the link between the two goes down, I can alert myself to the situation with a simple query of v$dataguard_stats and correct whatever the underlying problem is.
    Can someone please confirm for me whether I am understanding the behavior of commits under these two different protection modes and transport modes?
    Also, if I do create standby logs for the one destination, do I also need to create them on the primary and other standby as well.
    Thanks.

    Here's what I can think of:
    Maximum protection mode will shut down the primary automatically if no standby destination is available for log transfer. To avoid this, at least two standby log_archive_dest_n destinations are needed.
    Maximum protection mode will shut down if LGWR ASYNC NOAFFIRM is used (it requires SYNC and AFFIRM).
    Maximum availability will behave like maximum performance mode when no standby archive destination is available.
    Maximum availability will remain in maximum availability mode only when standby redo logs are available.
    So it will depend on the business requirements.
    Best Regards
    mseberg
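    For the "maximum performance with standby redo logs and LGWR ASYNC" option described in the question, a rough 11.2-style sketch (stby2 is a placeholder service/DB_UNIQUE_NAME; group numbers and the 50M size are illustrative and should match your online log size; the ADD STANDBY LOGFILE form without a file name assumes OMF is in use):

    -- On the primary: ship redo with the log writer, asynchronously, without waiting for standby acknowledgement.
    ALTER SYSTEM SET log_archive_dest_2='SERVICE=stby2 LGWR ASYNC NOAFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby2' SCOPE=BOTH;

    -- Standby redo logs: create them on the standby, and on the primary too so they exist after a role switch.
    -- With a two-node RAC primary, one thread per instance (plus one extra group per thread is usually advised).
    ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 11 SIZE 50M;
    ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 GROUP 12 SIZE 50M;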

  • Oracle10gR2 indexing changes the block ?

    DB: Oracle 10g R2 x86_64
    OS: SUSE Linux x86_64
    DR Product/Tool: Novell Platespin(http://www.novell.com/products/forge/)
    We are using a block based replication tool(Platespin) to maintain a DR of our customer's Database Server(oracle 10g R2 atop SUSE Linux) using Platespin.
    Block based Incremental replication is configured to run on hourly basis over a 30 Mbps dedicated fiber link.
    Maximum changes of data per day(during business hours) wont exceed 300 MB on Production Database Server. During business hours Platespin replicates at least 1 GB at every replication cycle(8 to 10 GB during business hours), while during off hours it replicates 300 to 500 MB per hour(at every replication cycle). We are facing this strange issue with this box only(SUSE 10 + Oracle 10g R2), we have protected MS Exchange 2007 Server based workloads without this strange issue, i.e in case of Exchange only incremental data (delta) replicates from Production server to DR site on Platespin.
    We opened a support ticket with Novell Platespin about why the volume of incremental replication is so high. Novell/Platespin support tells us that Oracle re-indexes its database for better performance, so it is possible that re-indexing causes block-level changes on the storage, and since Platespin works at the block level, that is why it replicates so much (even though the data has not changed that much).
    here is actual words of Novell Platespin support:
    I think whenever Oracle database Indexing happens, it changes almost most of the Blocks of database and Platespin replicate all those Blocks.
    As you know, Platespin checks the Date/Time attribute of every blocks before replication and if Date/Time attribute changes from last
    replication, it considers as changed block and replicate those blocks on Platespin Appliance. So, my suggestion is just look into the Oracle
    server behaviour before/after Data indexing process and do needful or do some workaround to overcome this issue.
    Is there anything we can do at the Oracle level? Is it really a database (indexing) issue? Can anyone please comment or make suggestions on this?
    Regards,

    user571635 wrote:
    1 - we are doing Block based replication of entire /oracle file system, /oracle is a single file system contains every thing(binaries, database, archiving and logging is under /oracle )
    2 - Archiving is enabled
    3 - actually this machine is not just an Oracle 10g R2 DB server; SAP ERP is also installed, and we check the change of data via SAP
    4 - Maximum data changes per day never exceeds 300 MB
    5 - Normally change of data(delta) per day is about 100 to 175 MB.
    6 - Indexing is configured on SAP level(we have no idea how SAP manages indexing with Oracle)
    8 - no special updates or jobs are configured
    "So am I right to understand that if the data change is 300 MB then archiving creates another 300 MB? That means in any case the amount of change per day would be around 300*2 = 600 MB, so the block-based replication tool shouldn't replicate more than 600 MB per day, while in our case it replicates 1 GB hourly during business hours and 200-300 MB hourly during off hours."
    No. You need to read the Concepts manual (available at http://tahiti.oracle.com among other places), and probably should read Tom Kyte's books too. As the others mentioned, you should look and see how much redo is being written. The best way for your system depends on configuration; sometimes it can be as simple as looking at the sizes and times of the files in the archive log directory (the files there are actually archived redo logs).
    How much redo is written depends on what is going on. How much undo is written also depends on what is going on. The log writer and database writer processes are doing the actual writing, so the data being replicated can vary from what SAP thinks is being generated. For example, if you are changing a value in a column, the redo only needs a change vector for the block that contains that column. But if you are copying blocks, either the copying tool has to be aware of Oracle blocks (I don't know how likely that is for your tool), or you risk having fractured blocks. User-managed backup has that problem, so Oracle lets you put tablespaces in hot backup mode, which has the consequence of writing the entire block to redo the first time it is changed in that mode (see http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:271815712711). That's why I was wondering how your block-based tool deals with that. Oracle data files also have a header with system change number information in it, and blocks have similar things, and they all have to be coordinated; that's why I question whether you are indeed doing block-based transfers - gigabytes implies either whole files or a very large number of block changes.
    "And also, if indexing is enabled, it won't change the blocks on storage so drastically?"
    Well, not knowing what is going on, we can't say. Does your storage tell you how much writing is going on? Does Oracle see lots of waits on the lgwr and/or dbwr processes? (There are command-line ways to see waits, and if you are licensed for performance monitoring there are pretty graphs in dbconsole/Enterprise Manager.)
    Edited by: user571635 on Jun 21, 2011 8:29 AM
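    A sketch of the archived-redo check suggested above (assumes the database is in ARCHIVELOG mode and the rows are still present in v$archived_log's history):

    -- Redo actually archived per day, in MB; compare this with what the replication tool moves.
    SELECT TRUNC(completion_time)                        AS archive_day,
           ROUND(SUM(blocks * block_size) / 1024 / 1024) AS archived_mb
      FROM v$archived_log
     GROUP BY TRUNC(completion_time)
     ORDER BY archive_day;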

  • Clarification about  Database_Buffer_cache workings

    Hi All,
    Clarification about Database_Buffer_cache workings:(This statement from my course material)
    1. The information read from disk is read a block at a time, not a row at a time, because a database block is the smallest addressable storage space on disk.
    Before answering, please check whether my above statement is correct or not, because I got it from my course material.
    If I am querying:
    select * from emp;
    will the server process bring in the whole blocks belonging to the EMP table, or just the table rows themselves?
    Thank you,
    Regards,
    DB
    Edited by: DB on May 30, 2013 3:19 PM
    Edited by: DB on May 30, 2013 4:35 PM

    Both happen: the LGWR may call the DBWR to write dirty blocks from the buffer cache to disk. Dirty in this context means that the blocks in the buffer cache have been modified and not yet written to disk, i.e. their content differs from the on-disk image. Conversely, the DBWR can also call LGWR to write redo records from the redo log buffer (in memory) to the redo log files on disk.
    To understand why both is possible, you need to understand the mechanics how Oracle does recovery, in particular REDO and UNDO and how they play together. The excellent book "Oracle Core" from Jonathan Lewis describes this in detail.
    I'll try to sketch each of the two cases. I am aware that this is only an overview which leaves out many details. For a complete description please look at the various Oracle books and documentation that cover this topic.
    1. LGWR posts DBWR to write blocks to disk
    As you probably know, any modifications done by DML (which modify data blocks) are recorded in the redo. In case of recovery this redo can be used to bring the data blocks to the last committed state before the failure by re-applying the modifications recorded in the redo. Redo is written into redo log files, and the redo log files are used in a round-robin fashion. Because the log files are used in a round-robin fashion, old redo data is overwritten at some point in time - thus the corresponding redo records are no longer available in a recovery scenario (they may be in the archived redo logs, which may however not exist if your database runs in NOARCHIVELOG mode; and even if your database runs in ARCHIVELOG mode, the archived redo log files may not be accessible to the instance without manual intervention by the DBA).
    So before overwriting a redo log file, the Oracle instance must ensure, that the redo records being overwritten will not be used in a potential instance recovery (which the instance is supposed to do automatically, without any action by the DBA, after instance failure, e.g. due to a power outage). The way to ensure this is to have the DBWR write all modifications to disk that are protected by the redo records being overwritten (i.e. all data blocks where the first modification that has not yet been written to disk is older than a certain time) - this is called a "Thread checkpoint".
    2. DBWR posts LGWR to write redo records to disk
    Oracle uses a write-ahead protocol (see http://en.wikipedia.org/wiki/Write-ahead_logging). This means that for any modification, the corresponding redo records must have been written to disk before the actual modification to the data blocks is written to disk (into the data files). The purpose of this, I believe, is to ensure that for any data block modification that makes it to disk, the corresponding UNDO information can be restored (from redo) in case of recovery, in order to reverse uncommitted changes in a recovery scenario.
    Before writing a data block to disk, the DBWR must thus make sure, that all redo for modifications affecting this block has already been written to disk by the LGWR. If this is not the case, the DBWR will post the LGWR and only write the data block to the datafile once the redo has been written to the redo log file by LGWR.
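    To see the effect of this write-ahead, lazy-write arrangement on a running instance, a small sketch (needs SELECT on the V$ views; the datafile checkpoint SCNs normally trail the current SCN until the next checkpoint completes):

    -- Current SCN of the database versus the oldest datafile checkpoint SCN.
    SELECT (SELECT current_scn FROM v$database) AS current_scn,
           MIN(checkpoint_change#)              AS oldest_file_checkpoint_scn
      FROM v$datafile_header;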

  • Real time apply dataguard

    Hello
    I have some doubts about real time apply.
    If a transaction occurs in production, lgwr will automatically write this transaction to standby redologs in standby database.
    Is this transaction applied to standby database immediately or wait for standby redolog switch and applied as soon as standby log is archived ?

    Thanks Uwe.
    Even though you mentioned this before, I just want to ask one last time to confirm.
    Regarding real-time apply:
    When a transaction occurs in production, this transaction is not applied directly to the standby database.
    It is first transferred to a standby redo log (by the log writer of the primary) and from there to the standby database immediately. (I don't know which background process is responsible for this.)
    Is this right?
    By default, the archives are transferred (on completion) to the standby database server using the ARCH process and received by the Remote File Server (RFS) process on the standby server. Optionally, you may choose to do the same using standby redo logs instead of archive log files (the LGWR and LNS processes come into the picture). Standby redo logs are compulsory for real-time apply regardless of protection mode.
    If you're using SRLs without real-time apply, the redo data is transferred to the SRL, and once the SRL is archived the changes are written to the database using the Managed Recovery Process (MRP); this minimizes data loss in case of failover and minimizes the time needed for switchover and failover.
    If you're using Real-Time Apply, then the standby redo log data is written to the database directly by LSP or MRP, without waiting for the SRL to be archived.
    Regards,
    S.K.
    Edited by: Santosh Kumar on Sep 22, 2009 3:53 PM
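    For completeness, a sketch of the statements that switch a physical standby to real-time apply (10g/11g syntax as I recall it; run on the standby, which must already have standby redo logs):

    -- Stop any existing managed recovery first, then restart it with real-time apply.
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;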

  • Checkpoint not complete + db_writer_processes/dbwr_io_slaves

    Hi,
    Oracle Database 11g Release 11.1.0.6.0 - 64bit Production With the Real Application Clusters option.
    After I noticed this error into the alert log:
    Thread 2 cannot allocate new log, sequence 152831
    Checkpoint not complete
    Current log# 17 seq# 152830 mem# 0: +ONLINELOG/evodb/onlinelog/group_17.272.729333887
    Thread 2 advanced to log sequence 152831
    Current log# 14 seq# 152831 mem# 0: +ONLINELOG/evodb/onlinelog/group_14.269.729333871
    I read a lot to understand the real cause (for the moment I increased the number of redo log groups from 5 to 7, 250 MB each).
    As it seems I have no problem with the ARCH processes, I read that the cause can be the DBW0 process not being "fast" enough to write out the blocks covered by the redo, so that the redo logs can be freed for archiving and reuse.
    I read then something about the asynchronous I/O, and how db_writer_processes/dbwr_io_slaves can simulate the async write to disk.
    I think I understood the difference between db_writer_processes and dbwr_io_slaves.
    My question is how I can understand if my database needs more DBWR process.
    At the moment my configuration is:
    db_writer_processes 1
    dbwr_io_slaves 0
    Thanks in advance,
    Samuel

    Hi Samuel,
    There is still a major confusion on your side concerning the DBWR. It will NOT write data from your redo buffers to the redo logs, since that is the job of the LGWR.
    When a log switch occurs (so you start using a different redo group), it is the job of the ARCn process(es) to back up the 'used' redo log to an archive log.
    When your ARCn process(es) are not fast enough and a log switch occurs, it may happen that you have no inactive (read: archived) redo group... then Oracle 'hangs' until such a redo group becomes available.
    So, you may want to add one (or more) redo groups, increase the size of the redo log files, or have more archiver processes.
    DBWr job is to write dirty database blocks back to datafiles.
    CKPT also works independently of the LGWR and DBWR.
    Check this:
    http://www.dbasupport.com/forums/archive/index.php/t-5351.html
    And another couple of links:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/process.htm
    Concepts of checkpoint, dbwr, lgwr processes
    HTH,
    Thierry
    Edited by: Urgent-IT on Feb 10, 2011 5:33 PM
    Added note about CKPT + link
    Edited by: Urgent-IT on Feb 10, 2011 5:37 PM
    Added another 2 links
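    A sketch of how to follow Thierry's advice (group number, thread and size below are placeholders; the ADD LOGFILE form without a file name assumes OMF, which fits the ASM +ONLINELOG setup shown in the alert log):

    -- If the group LGWR needs next is still ACTIVE (checkpoint not complete) or not yet archived, the instance waits.
    SELECT group#, thread#, sequence#, bytes / 1024 / 1024 AS size_mb, archived, status
      FROM v$log
     ORDER BY thread#, group#;

    -- Add another group to the busy thread (adjust thread, group number and size to your setup).
    ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 18 SIZE 250M;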

  • Metrics "Database Time Spent Waiting (%)" is at ... for event class "Commit

    Hi guys
    I'm getting this warning every 20 minutes from Enterprise Manager 10g: Metrics "Database Time Spent Waiting (%)" is at 67.62891 for event class "Commit". My database is Oracle 10g 10.2.0.4 (Linux x86_64) with Data Guard, one primary and one physical standby, networked at 100 Mbit. I have 3 groups of 50M redo logs with a low volume of transactions for now. Here are some metrics numbers:
    SQL> select sum(seconds_in_wait),event from v$session_wait group by event;
    SUM(SECONDS_IN_WAIT) EVENT
    121210 SQL*Net message from client
    0 Streams AQ: waiting for messages in the queue
    18 wait for unread message on broadcast channel
    6 LNS ASYNC end of log
    30 jobq slave wait
    37571 rdbms ipc message
    384 smon timer
    35090 pmon timer
    I tuned my listeners for Data Guard using the documentation. I don't know what else to tune; does someone have a clue?
    Thanks,
    Chris

    cprieur wrote:
    Metrics "Database Time Spent Waiting (%)" is at 67.62891 for event class "Commit"
    I'm also getting this message at regular intervals:
    Metrics "Database Time Spent Waiting (%)" is at 64.09801 for event class "Network"
    The report I was sent only indicates the time spent on: SQL*Net message from client
    I believe that these metrics are explained here (near the bottom of the page):
    http://download.oracle.com/docs/cd/B19306_01/em.102/b25986/oracle_database.htm
    There are only two wait events in the Commit class (on 11.1.0.7):
    log file sync
    enq: BB - 2PC across RAC instances
    log file sync waits happen when sessions issue a COMMIT or ROLLBACK. Sessions will wait on this event until LGWR completes its writes to the redo logs (LGWR will likely wait on the event log file parallel write while waiting for its write to complete). I believe that your Data Guard configuration may contribute to the long waits experienced by LGWR, and it may be made worse by the network connection.
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28294.pdf
    "Maximum availability - This protection mode provides the highest level of data protection that is possible without compromising the availability of a primary database. Transactions do not commit until all redo data needed to recover those transactions has been written to the online redo log and to at least one standby database."
    So, if Maximum Availability is configured, sessions will have to wait for the redo to be received and written at the remote database as well.
    At the system-wide level you will almost always be able to ignore the SQL*Net message from client wait events. At the session level this wait event has a more significant meaning - the client computer is not actively waiting on a response from the database instance - the client is either sitting idle, or performing client-side processing.
    Sybrand, if he joins this thread, will likely be able to provide a more complete answer regarding DataGuard's contribution to the Commit wait class.
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • RAID Disk Layout advice for Hybrid OLTP/Batch app

    I was hoping for some advice from the community on my disk layout for a proposed upgrade to our Oracle setup (Windows 2003, JDE + Oracle9i, primarily OLTP/batch extracts). Focusing purely on the hardware perspective, we're being given 14 disks to work with. I had initially thought that was a lot, but when I start putting the spindles into RAID configurations, it doesn't give me a lot of room.
    2 disks RAID 1 (mirrored) for OS, app and Oracle binaries
    2 disks RAID 1 (mirrored) for 1 set of online redo log files
    2 disks RAID 1 (mirrored) for archive log files
    8 disks RAID 10 (mirrored and striped) for all data/index/undo/temp files
    ----------------------------------------------------------------------------------------------------------
    14 total disks
    My question is - where would I put the other set of multiplexed redo log files? Hardware mirroring gives me fault tolerance against media corruption, but doesn't protect me against accidental deletion of the logs. I don't want to put the 2nd set on the OS drive because that mixes sequential (100% read/write) and random access (OS paging, binaries).
    I also wanted to better understand how the LGWR and ARC0 processes work. From the Oracle Performance Tuning Guide, LGWR writes to one member at a time, so it pays to have alternating log groups on 4 disks. We believe the ARC0 process reads only the primary member of the group, yet figure 8-1 in the above document shows an arrow from ARCn to both redo members. So how does it really behave? Does it actually read from one or both members? If it only ever writes to the second set and never reads it, where would I put it? Could I even put the second set of online redos onto the data/index/undo/temp set of disks?
    I've already been perusing the various AskTom articles and read the Oracle whitepapers (#1 and #2) on S.A.M.E. (stripe and mirror everything), and they all make some good suggestions - it's just that with this few disks, I find myself having to make compromises. Any help or suggestions on a better layout would be appreciated.

    Hello Carl,
    You have to implement this SAP Note - 1313075 to meet your purpose.
    Create a variant under this program - RFFOAVIS_FPAYM - and assign it in the print job tab in F110. Also you have to maintain the sender mail id details in the FSAP transaction and the receiver mail id details in the concerned vendor master data.
    I hope it helps else revert us with your query.
    Thanks & Regards,
    Lakshmi S
