LGWR write

Hi,
Oracle Version: Oracle Database 10g Release 10.2.0.4.0
OS:Windows XP
In Oracle document it is mentioned that:
When a user issues a COMMIT statement, LGWR puts a commit record in the redo log buffer and writes it to disk immediately, along with the transaction's redo entries. The corresponding changes to data blocks are deferred until it is more efficient to write them. This is called a fast commit mechanism.
Now my query is:
After commit, LGWR writes the commit record to disk. Does LGWR also write commit records to the redo log?

user12141893 wrote:
Hi,
Oracle Version: Oracle Database 10g Release 10.2.0.4.0
OS:Windows XP
In Oracle document it is mentioned that:
When a user issues a COMMIT statement, LGWR puts a commit record in the redo log buffer and writes it to disk immediately, along with the transaction's redo entries. The corresponding changes to data blocks are deferred until it is more efficient to write them. This is called a fast commit mechanism.
Now my query is:
After commit, LGWR writes the commit record to disk. Does LGWR also write commit records to the redo log?
Actually, this statement - "LGWR puts a commit record in the redo log buffer" - is wrong. All LGWR does is write the contents of the redo log buffer to the redo logs. It doesn't write any sort of "commit record" by itself into the redo stream. The commit is marked by the commit SCN, which is generated at the time the commit is issued - that's it!
So with that in mind, can you please state again what your question is?
Aman....

Similar Messages

  • To where does the LGWR write information in redo log buffer ?

    Suppose my online redo log files are on filesystems. I want to know where LGWR writes the information from the redo log buffer: just to the filesystem buffer, or directly to disk? And the same question applies to DBWR when the datafiles are on filesystems too.

    It depends on the filesystem. Normally there is also a filesystem buffer, which is where LGWR would write.
    Yes, but a redo log write must always be a physical write.
    From http://asktom.oracle.com/pls/ask/f?p=4950:8:15501909858937747903::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:618260965466
    Tom, I was thinking of a scenario that sometimes scares me...
    **From a database perspective** -- theoretically -- when data is committed it
    inevitably goes to the redo log files on disk.
    However, there are other layers between the database and the hardware. I mean,
    the committed data doesn't go "directly" to disk, because you have "intermediate"
    structures like i/o buffers, filesystem buffers, etc.
    1) What if you have committed and the redo data has not yet "made it" to the redo
    log? In the middle of the way -- while this data is still in the OS cache -- the
    OS crashes. So, I think, Oracle believes the committed data got to the redo
    logs -- but it hasn't in fact **from an OS perspective**. It just "disappeared"
    while in the OS cache. So the redo would be unusable. Is that a possible scenario?
    the data does go to disk. We (on all os's) use forced IO to ensure this. We
    open files for example with O_SYNC -- the os does not return "completed io"
    until the data is on disk.
    It may not bypass the intermediate caches and such -- but -- it will get written
    to disk when we ask it to.
    1) that'll not happen. from an os perspective, it did get to disk
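    Tom's point about forced I/O can be illustrated with POSIX O_SYNC. Below is a small Python sketch (the file path is made up; O_SYNC availability and exact durability semantics depend on the OS and filesystem):

```python
import os
import tempfile

# Opening a file with O_SYNC means os.write() does not return "completed"
# until the data has reached stable storage, so a crash after the write
# returns cannot lose the data in the OS page cache.
path = os.path.join(tempfile.mkdtemp(), "redo01.log")

fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o600)
try:
    os.write(fd, b"commit record + redo entries")
finally:
    os.close(fd)

# The kernel acknowledged the write only after it was on disk.
with open(path, "rb") as f:
    assert f.read() == b"commit record + redo entries"
```

    The intermediate caches are not bypassed, but the OS is forced to flush through them before acknowledging the write, which is exactly the guarantee a redo log write needs.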
    Message was edited by:
    Pierre Forstmann

  • How does LGWR write redo log files? I am puzzled!

    The document says:
    The LGWR concurrently writes the same information to all online redo log files in a group.
    My understanding of the sentence is the following. For example:
    group a includes files (a1, a2)
    group b includes files (b1, b2)
    LGWR write sequence: write a1 and a2 concurrently; afterwards write b1 and b2 concurrently.
    My questions are:
    1. Is my understanding right?
    2. If it is, I think the separate log files in a group should be saved on different disks; if not, correct recovery can't be guaranteed. Is my opinion right?
    thanks everyone!

    Hi,
    >>That is multiplexing... you should always have members of a log file group on more than 1 disk
    Exactly. You can keep multiple copies of the online redo log file to safeguard against damage to these files. When multiplexing online redo log files, LGWR concurrently writes the same redo log information to multiple identical online redo log files, thereby eliminating a single point of redo log failure. In addition, when multiplexing redo log files, it is preferable to keep the members of a group on different disks, so that one disk failure will not affect the continuing operation of the database. If LGWR can write to at least one member of the group, database operation proceeds as normal.
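    A toy sketch of that multiplexed write (illustrative Python; the member file names are hypothetical):

```python
import os
import tempfile

# LGWR writes the same redo payload to every member of the group, so the
# loss of one member (one disk) loses nothing.
base = tempfile.mkdtemp()
members = [os.path.join(base, m)
           for m in ("disk1_redo01a.log", "disk2_redo01b.log")]

def lgwr_write(payload: bytes) -> None:
    """Append identical redo to all members of the current group."""
    for member in members:
        with open(member, "ab") as f:
            f.write(payload)

lgwr_write(b"redo-entry-1;")
lgwr_write(b"commit-scn-1042;")

# Simulate losing the first member: the surviving copy is complete.
os.remove(members[0])
with open(members[1], "rb") as f:
    assert f.read() == b"redo-entry-1;commit-scn-1042;"
```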
    Cheers
    Legatti

  • Upon commit, lgwr writes to redo logs but dbwr does not write to datafiles

    Guys,
    Upon issuing a commit statement, in which scenarios does the lgwr write to the redo logs while the dbwr does not write to the datafiles at all?
    Thanx.

    The default behaviour is: on commit, the lgwr writes to the redo logs immediately, but the changes may not get immediately written to the datafiles by the dbwr - sooner or later they will be (based on certain conditions). The only situation I can think of, where dbwr may not be able to write to the datafiles, is when the database crashes after the commit and before the DBWR could write to the datafiles.
    Not sure, what you are exactly looking for, but hope this helps.
    Thanks
    Chandra Pabba

  • DBWR writes before LGWR writes?

    Hi,
    Under what circumstances does DBWR write a dirty block out before LGWR writes the corresponding redo to the logs?
    Thanks
    Dilip.

    Here is a great blog post regarding many of the parts of Oracle's architecture:
    http://blogs.ittoolbox.com/bi/confessions/archives/post-index-how-oracle-works-10605
    Check it out for your answer.

  • Wait Events "log file parallel write" / "log file sync" during CREATE INDEX

    Hello guys,
    At my current project I am performing some performance tests for Oracle Data Guard. The question is: "How does an LGWR SYNC transfer influence system performance?"
    To get some performance values that I can compare, I first built up a normal Oracle database.
    Now I am performing different tests, like creating "large" indexes, massive parallel inserts/commits, etc., to get the benchmark.
    My database is an Oracle 10.2.0.4 with multiplexed redo log files on AIX.
    I am creating an index on a "normal" table. I execute "dbms_workload_repository.create_snapshot()" before and after the CREATE INDEX to get an equivalent timeframe for the AWR report.
    After the index is built (roughly 9 GB) I run awrrpt.sql to get the AWR report.
    And now take a look at these values from the AWR
    Event                       Waits  %Time-outs  Total Wait Time (s)  Avg wait (ms)  Waits/txn
    log file parallel write    10,019          .0                  132             13       33.5
    log file sync                 293          .7                    4             15        1.0
    How can this be possible?
    According to the documentation:
    -> log file sync: http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/waitevents003.htm#sthref3120
    Wait Time: The wait time includes the writing of the log buffer and the post.
    -> log file parallel write: http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/waitevents003.htm#sthref3104
    Wait Time: Time it takes for the I/Os to complete. Even though redo records are written in parallel, the parallel write is not complete until the last I/O is on disk.
    This was also my understanding: the "log file sync" wait time should be higher than the "log file parallel write" wait time, because it includes the I/O and the response time to the user session.
    I could accept it if the values were close to each other (maybe about 1 second in total), but the difference between 132 seconds and 4 seconds is too noticeable.
    Is the behavior of the log file sync/write different when performing a DDL like CREATE INDEX (maybe async .. like you can influence it with the initialization parameter COMMIT_WRITE??)?
    Do you have any idea how these values come about?
    Any thoughts/ideas are welcome.
    Thanks and Regards

    Surachart Opun (HunterX) wrote:
    Thank you for the nice idea.
    In this case, how can we reduce the "log file parallel write" and "log file sync" wait times? CREATE INDEX with NOLOGGING - NOLOGGING can help, can't it?
    Yes - if you create the index NOLOGGING then you wouldn't be generating that 10GB of redo log, so the waits would disappear.
    Two points on nologging, though:
    - It's "only" an index, so you could always rebuild it in the event of media corruption; but if you had lots of indexes created nologging this might cause an unreasonable delay before the system was usable again - so you should decide on a fallback option, such as taking a new backup of the tablespace as soon as all the nologging operations have completed.
    - If the database, or that tablespace, is in "force logging" mode, the nologging will not work.
    Don't get too alarmed by the waits, though. My guess is that the "log file sync" waits are mostly from other sessions, and since there aren't many of them the other sessions are probably not seeing a performance issue. The "log file parallel write" waits are caused by your create index, but they are happening to lgwr in the background, which is running concurrently with your session - so your session is not (directly) affected by them and may not be seeing a performance issue.
    The other sessions are seeing relatively high sync times because their log file syncs have to wait for one of the large writes that you have triggered to complete, and then the logwriter includes their (little) writes with your next (large) write.
    There may be a performance impact, though, from the pure volume of I/O. Apart from the I/O to write the index, you have LGWR writing (N copies of) the redo for the index, and ARCH is reading and writing the completed log files caused by the index build. So the 9GB of index could easily be responsible for vastly more I/O than the initial 9GB.
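    A back-of-envelope check on the AWR numbers quoted above is consistent with this reading: total wait time is just waits times average wait, and almost all of it belongs to LGWR's background writes rather than to foreground commits.

```python
# The 132s of "log file parallel write" comes from 10,019 background writes
# (the index build's redo) at ~13ms each; only 293 foreground commits ever
# waited on "log file sync", for ~4s in total.
parallel_writes, parallel_avg_ms = 10_019, 13
syncs, sync_avg_ms = 293, 15

parallel_total_s = parallel_writes * parallel_avg_ms / 1000  # ~130 s
sync_total_s = syncs * sync_avg_ms / 1000                    # ~4.4 s

# Both reconstructed totals match the AWR report to within rounding.
assert abs(parallel_total_s - 132) < 5
assert abs(sync_total_s - 4) < 1
```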
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    "Science is more than a body of knowledge; it is a way of thinking" - Carl Sagan

  • Explanation on ARCH and LGWR during recovery stage

    Hi there,
    I've set up Data Guard and it works in "real time". However, during real-time apply it is slow getting data from the online redo log on the primary, but very quick when applying archived log files to the standby.
    If I do a simple insert into a table TEST1 with one record on the primary database (TEST1 has one column of NUMBER datatype; insert into TEST1 values (1);), it takes more than 20 seconds before I see the data on the standby with real-time apply. Note that I didn't do a log switch (alter system switch logfile).
    However, when I insert thousands of records on the primary and do a log switch, I see the data right away on the standby.
    This really doesn't make sense to me. I know LGWR writes to the primary and the standby, and I know the ARCH process will archive the redo log on a manual switch or when the log is full. But my test only put 1 record in the online redo log. Why does it take so long to apply on the standby, when a 20 MB archived log applies much quicker?
    Thanks

    Can you post the log_archive_dest_<n> settings reading 'service= ...' for the primary database?
    Sybrand Bakker
    Senior Oracle DBA

  • Oracle Dataguard - LGWR

    greetings to you all,
    The two of us argued about one thing and we want to clear it up:
    during log writing in a Data Guard configuration using the highest protection mode, does Oracle write the log file first on the physical standby or on the primary database? And can you give me some Oracle documentation reference for that?
    The question seems silly, but I need to prove my point. My point is: it is not logical to write first to the standby log before writing to the primary; of course the commit message will only be given after it is written to both log files, but the writing is done either simultaneously or the primary log comes first.
    thank you for your cooperation.

    The important thing to remember is that LGWR isn't the one doing the actual writing to the log_archive_dest_n for your standby (if you're talking 10g and up) - LGWR will spawn an LNSn process with the write request to perform parallel network IO to one or more [SYNC] log_archive_dest_n destinations at the same time that LGWR writes to the online redo locally. In terms of "what gets requested first", it's not really relevant, since the processes all run in parallel - in SYNC mode, all that Oracle cares about is whether all the destinations completed successfully within the timeout window. Logically, however, LGWR would probably finish writing to local redo first, since LNS is a newly spawned process, sends redo via network to the RFS service on the standby, which in turn writes to the standby redo logs. More hops/processes + network latency = longer time to write to standby redo, in most configurations.
    From an efficiency perspective, it doesn't matter - LGWR may submit the LNS request first, but finish writing to local online redo first too... but to database users, the database will wait until all sync destinations have completed successfully, in Maximum Protection mode, and to some extent, in Maximum Availability mode (although there's some flexibility there for unavailable standby databases).

  • LGWR and DBWn

    I learnt from the books that LGWR writes every three seconds
    1. Can we change this time?
    2. Is it so that the LOG_BUFFER size should be just large enough to accommodate the redo entries generated in 3 seconds, because after that it will flush the contents?
    Re: Concepts of checkpoint, dbwr, lgwr processes
    Also I want to know whether a commit flushes all of the redo log buffer to the redo logs, or just the redo entries related to the committing transaction?

    Also want to know whether the commit includes flushing of all the redo log buffer to redo logs or just the redo entries related to commit?
    These paragraphs from the Concepts guide answer your question.
    When a user issues a COMMIT statement, LGWR puts a commit record in the redo log buffer and writes it to disk immediately, along with the transaction's redo entries. The corresponding changes to data blocks are deferred until it is more efficient to write them. This is called a fast commit mechanism. The atomic write of the redo entry containing the transaction's commit record is the single event that determines the transaction has committed. Oracle returns a success code to the committing transaction, although the data buffers have not yet been written to disk.
    Note:
    Sometimes, if more buffer space is needed, LGWR writes redo log entries before a transaction is committed. These entries become permanent only if the transaction is later committed.
    When a user commits a transaction, the transaction is assigned a system change number (SCN), which Oracle records along with the transaction's redo entries in the redo log. SCNs are recorded in the redo log so that recovery operations can be synchronized in Real Application Clusters and distributed databases.
    In times of high activity, LGWR can write to the redo log file using group commits. For example, assume that a user commits a transaction. LGWR must write the transaction's redo entries to disk, and as this happens, other users issue COMMIT statements. However, LGWR cannot write to the redo log file to commit these transactions until it has completed its previous write operation. After the first transaction's entries are written to the redo log file, the entire list of redo entries of waiting transactions (not yet committed) can be written to disk in one operation, requiring less I/O than do transaction entries handled individually.
    Therefore, Oracle minimizes disk I/O and maximizes performance of LGWR. If requests to commit continue at a high rate, then every write (by LGWR) from the redo log buffer can contain multiple commit records.
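    The group commit described above can be sketched as a toy model (illustrative Python only; the real mechanism lives inside LGWR):

```python
# While "LGWR" is busy with one write, further COMMITs queue their records
# in the log buffer; the next write flushes all of them in a single I/O.
pending: list[str] = []   # commit records waiting in the redo log buffer
io_count = 0              # physical writes issued by "LGWR"

def user_commit(txn_id: int) -> None:
    """A session commits: its commit record joins the buffer."""
    pending.append(f"commit txn {txn_id}")

def lgwr_flush() -> list[str]:
    """One write picks up every commit record queued so far."""
    global io_count, pending
    batch, pending = pending, []
    io_count += 1
    return batch

# Five sessions commit while LGWR is mid-write ...
for txn in range(1, 6):
    user_commit(txn)

# ... and a single subsequent I/O covers all five commits.
batch = lgwr_flush()
assert len(batch) == 5 and io_count == 1
```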

  • Log file sync question

    Metalink note 34592.1 has been mentioned several times in this forum as well as elsewhere, notably here
    http://christianbilien.wordpress.com/2008/02/12/the-%E2%80%9Clog-file-sync%E2%80%9D-wait-event-is-not-always-spent-waiting-for-an-io/
    The question I have relates to the stated breakdown of 'log file sync' wait event:
    1. Wakeup LGWR if idle
    2. LGWR gathers the redo to be written and issue the I/O
    3. Time for the log write I/O to complete
    4. LGWR I/O post processing
    5. LGWR posting the foreground/user session that the write has completed
    6. Foreground/user session wakeup
    Since the note says that the system 'redo write' statistic includes steps 2 and 3, the suggestion is that the difference between it and 'log file sync' is due to CPU-related work on steps 1, 4, 5 and 6 (or on waiting on the CPU run queue).
    Christian's article, quoted above, theorises about 'CPU storms' and the Metalink note also suggests that steps 5 and 6 could be costly.
    However, my understanding of how LGWR works is that if it is already in the process of writing out one set of blocks (let us say associated with a commit of transaction 'X', amongst others) at the time another transaction (call it transaction 'Y') commits, then LGWR will not commence the write of the commit for transaction 'Y' until the I/Os associated with the commit of transaction 'X' complete.
    So, if I have an average 'redo write' time of, say, 12ms and a 'log file sync' time of, say 34ms (yes, of course these are real numbers :-)) then I would have thought that this 22ms delay was due at least partly to LGWR 'falling behind' in it's work.
    Nonetheless, it seems to me that this extra delay could only be a maximum of 12ms so this still leaves 10ms (34 - 12 -12) that can only be accounted for by CPU usage.
    Clearly, my analysis contains a lot of conjecture, hence this note.
    Can anybody point me in the direction of some facts?

    Tony Hasler wrote:
    Metalink note 34592.1 has been mentioned several times in this forum as well as elsewhere, notably here
    http://christianbilien.wordpress.com/2008/02/12/the-%E2%80%9Clog-file-sync%E2%80%9D-wait-event-is-not-always-spent-waiting-for-an-io/
    The question I have relates to the stated breakdown of 'log file sync' wait event:
    1. Wakeup LGWR if idle
    2. LGWR gathers the redo to be written and issue the I/O
    3. Time for the log write I/O to complete
    4. LGWR I/O post processing
    5. LGWR posting the foreground/user session that the write has completed
    6. Foreground/user session wakeup
    Since the note says that the system 'redo write' statistic includes steps 2 and 3, the suggestion is that the difference between it and 'log file sync' is due to CPU-related work on steps 1, 4, 5 and 6 (or on waiting on the CPU run queue).
    Christian's article, quoted above, theorises about 'CPU storms' and the Metalink note also suggests that steps 5 and 6 could be costly.
    However, my understanding of how LGWR works is that if it is already in the process of writing out one set of blocks (let us say associated with a commit of transaction 'X', amongst others) at the time another transaction (call it transaction 'Y') commits, then LGWR will not commence the write of the commit for transaction 'Y' until the I/Os associated with the commit of transaction 'X' complete.
    So, if I have an average 'redo write' time of, say, 12ms and a 'log file sync' time of, say 34ms (yes, of course these are real numbers :-)) then I would have thought that this 22ms delay was due at least partly to LGWR 'falling behind' in it's work.
    Nonetheless, it seems to me that this extra delay could only be a maximum of 12ms so this still leaves 10ms (34 - 12 -12) that can only be accounted for by CPU usage.
    Clearly, my analysis contains a lot of conjecture, hence this note.
    Can anybody point me in the direction of some facts?
    It depends on what you mean by facts - presumably only the people who wrote the code know what really happens; the rest of us have to guess.
    You're right about point 1 in the MOS note: it should include "or wait for current lgwr write and posts to complete".
    This means, of course, that your session could see its "log file sync" taking twice the "redo write time" because it posted lgwr just after lgwr has started to write - so you have to wait two write and post cycles. Generally the statistical effects will reduce this extreme case.
    You've been pointed to the two best bits of advice on the internet: As Kevin points out, if you have lgwr posting a lot of processes in one go it may stall as they wake up, so the batch of waiting processes has to wait extra time; and as Riyaj points out - there's always dtrace (et al.) if you want to see what's really happening. (Tanel has some similar notes, I think, on LFS).
    If you're stuck with Oracle diagnostics only then:
    "redo size" / "redo synch writes" for sessions will tell you the typical "commit size"
    ("redo size" + "redo wastage") / "redo writes" for lgwr will tell you the typical redo write size
    If you have a significant number of small "commit sizes" per write (more than the CPU count, say) then you may be looking at Kevin's storm.
    Watch out for a small number of sessions with large commit sizes running in parallel with a large number of sessions with small commit sizes - this could make all the "small" processes run at the speed of the "large" processes.
    It's always worth looking at the event histogram for the critical wait events to see if their patterns offer any insights.
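    The two ratios above can be computed from v$sysstat-style figures. A quick sketch (the statistic names are real; the values below are made up for illustration):

```python
# Hypothetical system-wide statistics, v$sysstat style.
stats = {
    "redo size": 50_000_000,      # bytes of redo generated
    "redo wastage": 2_000_000,    # unused space written at block ends
    "redo synch writes": 10_000,  # commit-driven sync requests
    "redo writes": 2_500,         # physical writes by LGWR
}

# Typical commit size, per session.
commit_size = stats["redo size"] / stats["redo synch writes"]

# Typical redo write size, as seen by LGWR.
write_size = (stats["redo size"] + stats["redo wastage"]) / stats["redo writes"]

# Many small commits folded into each larger LGWR write suggests group
# commit is active - and possibly a "storm" if the batch is big.
commits_per_write = stats["redo synch writes"] / stats["redo writes"]

assert commit_size == 5000.0
assert write_size == 20800.0
assert commits_per_write == 4.0
```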
    Regards
    Jonathan Lewis

  • Archive Generation in RAC environment.

    Gems,
    I have one doubt I have not been able to resolve.
    As far as I know:
    1) Every instance has its own thread, so a thread is instance specific. My questions are: how is the SCN maintained in the archives? Will thread 1 and thread 2 contain the same SCN?
    2) If I have a primary database with two nodes, and the standby is a single standalone database, how does it apply the archives on the standby? I monitored the ALERT log file of the standby; mostly it applies a thread 1 sequence, then a thread 2 sequence, then thread 1, then thread 2, and so on. Let me know if my assumption is wrong.
    3) If I cloned from RAC to non-RAC and then performed an SCN-based recovery such as
    run {
    set until scn 10000;
    restore database;
    recover database;
    }
    then will scn *10000* be contained in both threads' archives, or only in one? Please explain.
    Appreciated for the right answers. If you are unable to understand, let me know and I will try to explain more clearly.
    Thanks

    Hi,
    First of all there are 2 kinds of SCNs: a "global" SCN and a local SCN per instance. Since 10g the SCNs are advanced with a mechanism called BoC (Broadcast on Commit).
    Each time LGWR writes to the redo log, it sends a message to update the global SCN in all instances and to update the local SCN in all active instances.
    1.) Each redo log entry carries the global SCN. So even though you have multiple instances, the SCN is unique (and in order).
    2.) That depends on how the redo logs were written. If, for example, one instance did not write redo logs for a while, it could be that 2 logs from instance 1 are needed before the log from instance 2 is needed. But in general it will be "parts of redo log 1, then parts of redo log 2", etc.
    3.) Yes you can, since the (global) SCNs are unique and will be used.
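    The SCN-ordered interleaving in (2) can be sketched as a simple merge of two sorted streams (illustrative Python; the SCN values are made up):

```python
import heapq

# Every redo record carries a global SCN, so recovery can interleave the
# archives from both threads by merging the streams in SCN order.
thread1 = [(100, "t1 change"), (103, "t1 change"), (107, "t1 commit")]
thread2 = [(101, "t2 change"), (102, "t2 commit"), (110, "t2 change")]

# Both inputs are already SCN-sorted, so a streaming merge suffices.
merged = list(heapq.merge(thread1, thread2))
scns = [scn for scn, _ in merged]

# Strictly increasing global SCNs, interleaved across threads.
assert scns == [100, 101, 102, 103, 107, 110]
```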
    Regards
    Sebastian

  • PL/SQL block  -- commit

    Will multiple COMMITs in between my code, like the one below, affect the speed of the code?
    Code not working, so putting a commit after every 500 records --- Vaibhav
         -- commit every 500 rows; l_commit_count is assumed to be
         -- incremented once per row earlier in the loop
         IF l_commit_count > 499
         THEN
              l_commit_count := 0;
              COMMIT;
         END IF;

    Hi,
    When a transaction is committed, the following occurs:
    1) The internal transaction table for the associated rollback segment records that the transaction has committed, and the corresponding unique system change number of the transaction is assigned and recorded in the table.
    2) The log writer process (LGWR) writes redo log entries in the SGA's redo log buffers to the online redo log file. It also writes the transaction's SCN to the online redo log file.
    3) Oracle releases locks held on rows and tables.
    4) Oracle marks the transaction complete.
    Everything there consumes some system resources and can affect performance, in particular (2) because of the disk I/O. While your commit is being processed, your program waits - so if you commit very often, the delay can be noticeable.
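    To see roughly how noticeable, here is a back-of-envelope sketch (the timings below are assumed, not measured):

```python
# Each COMMIT makes the session wait one "log file sync".  Compare
# committing per row with committing every 500 rows for a bulk load.
rows = 100_000
sync_ms = 5  # assumed average log file sync time

commit_every_row = rows * sync_ms / 1000           # seconds spent waiting
commit_every_500 = (rows / 500) * sync_ms / 1000   # seconds spent waiting

assert commit_every_row == 500.0   # ~8 minutes of pure commit waits
assert commit_every_500 == 1.0     # ~1 second of commit waits
```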
    Rgds.

  • Commit on thousands of records

    Hello,
    I've encountered the following problem while trying to update records in an Oracle 8i database :
    I have a Java program that updates thousands of records from a flat file into the Oracle database; the "commit" is done at the end of the program. The problem is that some records are not updated in the database, but no exception is raised!
    If I do a commit after each update, the problem seems to be solved, but of course it takes more time to do the massive update, and I think it is not recommended to commit after each record?
    Is there a limit to how much a commit can cover (a maximum number of records to be updated)?
    Thanks greatly for your help!
    Regards,
    Carine

    If it was a problem with the size of the rollback segments, you would have received an error.
    But are you sure that you don't have any swallowed errors (like a WHEN OTHERS that does no handling)? In that case you wouldn't receive any error and no rollback would be performed (but a commit instead), resulting in "saving" your already-done modifications.
    In the book "Expert One-on-One" by Thomas Kyte, there is a chapter on what exactly a commit does.
    A small extract:
    Basically a commit has a fairly flat response time, because 99.9 percent of the work is already done before you commit:
    - you have already generated the rollback segment records in the SGA
    - modified data blocks have been generated in the SGA
    - buffered redo for the above two items has been generated in the SGA
    - depending on the size of the above three, and the amount of time spent, some combination of this data may have been flushed to disk already
    - all locks have been acquired
    When you commit, all that is left is the following:
    - generate an SCN (system change number) for our transaction
    - LGWR writes all of our remaining buffered redo log entries to disk, and records the SCN in the online redo log files as well. This step is actually the commit: if this step occurs, we have committed. Our transaction entry is removed, which shows that we have committed. Our record in the v$transaction view will 'disappear'.
    - all locks held by our session are released, and everyone who was enqueued waiting on locks we held will be released
    - many of the blocks our transaction modified will be visited and 'cleaned out' in a fast mode if they are still in the buffer cache
    Flushing the redo log buffer by LGWR is the lengthiest operation.
    To avoid long waits, this flushing is done continuously as we are processing:
    - every three seconds
    - when the redo log buffer is one third full or contains one MB of redo
    - upon any transaction commit
    For more information do a search on asktom.oracle.com or read his book.
    But it must be clear that the commit itself has no limit on the number of processed rows.
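    The three flush triggers in the extract above can be sketched as a simple decision rule. This is a toy illustration of the rule as stated, not Oracle's actual algorithm; the function and parameter names are invented.

    ```python
    # Toy decision rule for when LGWR flushes the redo log buffer,
    # following the three triggers listed above.
    ONE_MB = 1024 * 1024

    def lgwr_should_flush(seconds_since_flush, buffer_used, buffer_size,
                          commit_issued):
        if commit_issued:                    # upon any transaction commit
            return True
        if seconds_since_flush >= 3:         # every three seconds
            return True
        if buffer_used >= buffer_size / 3:   # buffer one third full...
            return True
        if buffer_used >= ONE_MB:            # ...or one MB of redo
            return True
        return False

    # A commit always triggers a flush; a nearly-empty buffer with a
    # recent flush and no commit does not.
    assert lgwr_should_flush(0, 0, 6 * ONE_MB, commit_issued=True)
    assert lgwr_should_flush(3.5, 0, 6 * ONE_MB, commit_issued=False)
    assert not lgwr_should_flush(1, 100, 6 * ONE_MB, commit_issued=False)
    ```
    
    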
    There's no limit re: commit. There is a limit on the number of rows that can be modified (update, delete, insert) in a transaction (i.e. between commits). It depends on rollback segment size (and other activity), which varies with each database (see your DBA).
    If you were hitting this limit it would normally roll back all changes to the last commit.
    Ken
    =======
    Hello Ken,
    Thanks a lot for the quick answer. The strange thing is that I do not get any error message concerning the rollback segment:
    if I do the commit at the end after updating thousands of records, it seems to complete correctly, yet I see that some records have not been updated in the database (so I would not be hitting the limit, since in that case all changes would have been rolled back)?
    Is there a way to get a return status from the commit? Should I do a commit after every 1000 records, for example?
    Thanks again,
    Carine
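    A common middle ground between committing once at the very end and committing per row is to commit in batches of N rows. A minimal sketch of that pattern, using Python's built-in sqlite3 as a stand-in for an Oracle connection (the batch size, table name, and row count are made up; the same shape applies to JDBC, calling conn.commit() after every N executeUpdate calls):

    ```python
    import sqlite3

    # Commit every BATCH rows instead of per-row or only once at the end.
    BATCH = 1000

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")

    rows = [(i, f"row-{i}") for i in range(2500)]  # pretend flat-file data
    for n, row in enumerate(rows, start=1):
        conn.execute("INSERT INTO t (id, val) VALUES (?, ?)", row)
        if n % BATCH == 0:
            conn.commit()      # flush a full batch
    conn.commit()              # commit the final partial batch

    count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
    assert count == 2500
    ```

    Batching keeps undo/rollback usage bounded without paying the per-row commit overhead described in the Kyte extract above.
    
    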

  • Standby db scenario

    Hi,
    Oracle version is 10.2.0.3. I am in the process of creating a physical standby. Both primary and standby are in the same IDC, connected by a 1 Gbps link.
    The client doesn't want the primary database to be affected if, for any reason, the standby goes down or the link goes down. In this situation the only choice I have is to configure the primary database in maximum performance mode.
    In this scenario, what is advisable for redo transport: ARCn or LGWR (with SYNC or ASYNC)?
    My confusion is that we can have multiple ARCn processes on the database server but only one LGWR, so am I not putting extra load (which may lead to poor primary performance) on the primary database if I choose LGWR for log transfer? At the same time, if I go for ARCn, I feel I am not making use of the high bandwidth between the two databases.
    Kindly have your recommendations/suggestions on this
    Regards,
    Amit

    Hi Amit,
    My suggestion would be LGWR with ASYNC.
    In that case the impact to your primary database is lowest.
    In this configuration, LGWR writes redo data from log buffer to online redo logs.
    Network Server Process, LNS reads the online redo logs and ships the redo streams to Remote File Server Process (RFS) on standby site.
    RFS takes the redo stream and updates the Active Standby Redo Log.
    Be aware that Standby Redo Log is different than regular Redo Logs.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/standby.htm#i72459
    Once a standby redo log switch occurs, the active standby redo log is archived. Once archived, it is ready to be used by the MRP (Managed Recovery Process), which applies it to the standby database.
    An LGWR (SYNC or ASYNC) configuration also reduces the possibility of losing data in case of a failover.
    Read the following documents to learn more about the redo transport services:
    Oracle® Data Guard Concepts and Administration
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/log_transport.htm
    Data Guard Redo Transport & Network Best Practices Oracle Database 10g Release 2
    (http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10gR2_DataGuardNetworkBestPractices.pdf)
    Cheers,
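    The LGWR ASYNC recommendation above would translate into something like the following on the primary. This is a hypothetical sketch: the destination number 2 and the TNS service name `standby` are assumptions, so check the 10.2 documentation for the full attribute list before using it.

    ```sql
    -- Hypothetical example: maximum performance mode with LGWR ASYNC.
    -- 'standby' is a made-up TNS service name for the standby database.
    ALTER SYSTEM SET log_archive_dest_2 =
      'SERVICE=standby LGWR ASYNC NOAFFIRM
       VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
       DB_UNIQUE_NAME=standby'
      SCOPE=BOTH;
    ```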

  • DataGuard... SYNC/ASYNC

    I'm studying Data Guard, but I'm confused by the parameters SYNC, ASYNC, AFFIRM, and NOAFFIRM. Please explain how these parameters affect the primary/standby databases.

    Hi,
    SYNC, ASYNC, AFFIRM, and NOAFFIRM basically depend on the mode in which your standby is running.
    In maximum protection and maximum availability modes, the redo from the primary database is shipped to the standby synchronously;
    LGWR writes it synchronously.
    In maximum performance mode, redo is shipped asynchronously by the archiver or by LGWR (ASYNC applies only when LGWR is used).
    HTH.
    Message was edited by:
    Ora-Lad
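    The pairing described in the reply above can be summarized as a small lookup. This is my reading of the typical 10g pairings, worth verifying against the Data Guard documentation for your release.

    ```python
    # Redo transport attributes typically paired with each Data Guard
    # protection mode in 10g (summarized from the reply above).
    TRANSPORT_BY_MODE = {
        "MAXIMUM PROTECTION":   "LGWR SYNC AFFIRM",
        "MAXIMUM AVAILABILITY": "LGWR SYNC AFFIRM",
        "MAXIMUM PERFORMANCE":  "LGWR ASYNC NOAFFIRM",  # or ARCH
    }

    def transport_for(mode):
        return TRANSPORT_BY_MODE[mode.upper()]

    # Performance mode ships asynchronously; protection mode requires
    # synchronous, acknowledged writes.
    assert transport_for("maximum performance") == "LGWR ASYNC NOAFFIRM"
    assert transport_for("MAXIMUM PROTECTION").split()[1] == "SYNC"
    ```
    
    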
