Archive Gap

Hello,
If there is a gap of hundreds or thousands of archived logs between the primary and the standby, and the redo gap is not resolved automatically, which is the recommended method to fill that gap? I think using an RMAN incremental backup is the best method.
Thanks

Another good method would be not to use a standby database that cannot resolve gaps on its own. Or not to use the DBA who creates such bad standby databases. :)
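For readers who want the concrete procedure behind the "RMAN incremental backup" suggestion, here is a minimal sketch of rolling a physical standby forward (10g and later). The SCN, the file paths, and the assumption that the backup pieces are copied to the same path on the standby host are placeholders, not details from the original post:
-- On the standby: stop apply and note how far it has recovered
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> SELECT CURRENT_SCN FROM V$DATABASE;
-- On the primary: back up everything generated after that SCN, plus a standby controlfile
RMAN> BACKUP INCREMENTAL FROM SCN 1234567 DATABASE FORMAT '/tmp/stby_roll_%U';
RMAN> BACKUP CURRENT CONTROLFILE FOR STANDBY FORMAT '/tmp/stby_ctl_%U';
-- Copy the pieces to the standby host, then on the standby (instance in NOMOUNT, then mount):
RMAN> RESTORE STANDBY CONTROLFILE FROM '/tmp/stby_ctl_<piece>';
RMAN> CATALOG START WITH '/tmp/stby_roll';
RMAN> RECOVER DATABASE NOREDO;
-- Finally, restart managed recovery
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;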

Similar Messages

  • Archive gap between primary and standby

    Hi,
I have a production environment with a two-node RAC on ASM as the primary and a standalone standby with datafiles stored on a filesystem.
On the standby side there is always exactly one archive gap, and the log is not applied even after the archivelog arrives.
How do I overcome this?
    Thanks

    Hello;
Depending on the query you are using, real-time apply might show the standby as one log behind. Is this possible?
    Example from mine :
    STANDBY               SEQUENCE# APPLIED    COMPLETIO                                               
    STANDBY2                 10711 YES        31-MAY-12                                               
    STANDBY2                 10712 YES        31-MAY-12                                               
    STANDBY2                 10713 YES        31-MAY-12                                               
    STANDBY2                 10714 YES        31-MAY-12                                               
    STANDBY2                 10715 YES        31-MAY-12                                               
    STANDBY2                 10716 YES        31-MAY-12                                               
    STANDBY2                 10717 YES        31-MAY-12                                               
    STANDBY2                 10718 YES        31-MAY-12                                               
    STANDBY2                 10719 YES        31-MAY-12                                               
    STANDBY2                 10720 YES        31-MAY-12                                               
    STANDBY2                 10721 YES        31-MAY-12                                               
    STANDBY2                 10722 YES        31-MAY-12                                               
    STANDBY2                 10723 YES        31-MAY-12                                               
    STANDBY2                 10724 YES        31-MAY-12                                               
STANDBY2                 10725 NO         01-JUN-12
Sequence 10725 is still in progress, so it shows 'NO'.
    Can you post the query you are using?
    Best Regards
    mseberg
    Edited by: mseberg on Jun 14, 2012 7:28 AM
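For reference, a query of roughly this shape produces the listing above when run on the primary; it is essentially the script mseberg posts later on this page, and the literal DEST_ID = 2 is an assumption about which archive destination points at the standby:
SQL> SELECT NAME AS STANDBY, SEQUENCE#, APPLIED, COMPLETION_TIME
     FROM   V$ARCHIVED_LOG
     WHERE  DEST_ID = 2
     AND    NEXT_TIME > SYSDATE - 1
     ORDER  BY SEQUENCE#;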

  • Standby archive gap error ..

I have created a standby database on the same site where my primary database is running. Everything works fine, but the problem I am facing is that I am not able to recover my standby database, although the log transport service is archiving to both destinations (primary and standby). Here is the ARCHIVE LOG LIST output from both databases:
    Primary database
    Database log mode Archive Mode
    Automatic archival Enabled
    Archive destination d:\oracle\admin\db01\arch
    Oldest online log sequence 8
    Next log sequence to archive 10
    Current log sequence 10
    Standby Database
    Database log mode Archive Mode
    Automatic archival Enabled
    Archive destination d:\oracle\admin\stdby\arch1
    Oldest online log sequence 5
    Next log sequence to archive 5
    Current log sequence 10
Why is this archive gap being generated on the standby database, and how do I recover my standby up to the primary?
Please don't just post a link; I have already gone through the documentation.
Please help; it would be highly appreciated.

    Hi,
The standby database does not generate archive logs; it only applies the archive logs sent by the primary database (into a second archive destination).
In the init.ora of your standby database:
Are the standby parameters standby_archive_dest and log_archive_dest_1 pointing to the same location?
Does the standby_archive_dest parameter (on the standby) correspond to the log_archive_dest_2 setting on the primary? Is that the case?
Is your standby database in managed recovery mode?
Is the standby database's listener started?
    Nicolas.
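A minimal sketch of how that checklist could be verified, using the parameter names mentioned above (the destination values themselves are site-specific):
-- On the standby
SQL> SHOW PARAMETER standby_archive_dest
SQL> SHOW PARAMETER log_archive_dest_1
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
-- On the primary
SQL> SHOW PARAMETER log_archive_dest_2
-- On the standby host, from the OS prompt
lsnrctl status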

  • Archive gap in 9.2, archive logs are gone

    Hi all,
    I've got an old 9.2.0.6 DataGuard setup, it's been running fine for years. We recently had to move our DR server to a new datacenter, so it got shut down, shipped and brought back up. The database came up fine, but now I see that I've got a gap where I'm missing 3 archive logs. And of course, something got screwed up on the primary database server, to the point that I don't have a backup of those archive logs, and they are not on the server (or the DR server). They got deleted, and they are GONE.
    So now I have to figure out the quickest/easiest way to get this resolved. I was thinking I could do an incremental backup on the primary server, from the proper SCN number. But now that I am looking at it, it looks like the 9i RMAN didn't have this feature (looks like it was introduced in the 10g RMAN).
    So at this point, is my only option to drop and re-create the DR database? Any other way to get around this?
    I appreciate any help!!

    Hello,
There is no other way in 9i once the archives no longer exist.
You have to recreate the entire standby setup. :(
As you said, the incremental-from-SCN feature is 10g. So go ahead and rebuild.
All of your questions are unanswered except one; please consider closing your threads and keeping the forum clean.
    Edited by: CKPT on Jul 6, 2012 6:18 AM

Recovering from an archive gap

    Hi All,
Using Oracle 11gR2 on RHEL 5.6. My primary and standby have different locations for the datafiles and redo log files.
Since the standby has a big gap relative to the primary due to a network outage, I plan to do an incremental roll-forward.
Previously I have done a roll-forward where the file locations were the same, with no issues.
I intend to follow the steps in http://docs.oracle.com/cd/B19306_01/server.102/b14239/scenarios.htm#CIHIAADC.
As per that Oracle reference, why do I need to do the following? I have already used db_file_name_convert and log_file_name_convert in my pfile to create the Data Guard setup.
1. Remove all online logs/standby logs in the standby directories.
2. On the standby, clear all standby redo logs.
If I do need to do this, will those files be created automatically when I start the MRP?
Please advise.

mseberg wrote:
OK.
First off, you have a setup problem if your archive logs are no longer on your primary. If RMAN removed them, you should configure the deletion policy to APPLIED ON STANDBY:
RMAN> configure archivelog deletion policy to applied on standby;
Do I have to set this on both the primary and the standby?
mseberg wrote:
I answered your question because I have dealt with some large gaps. However, I have not done what you are trying. My friend CKPT has this excellent document:
RMAN Incremental Backups to Roll Forward a Physical Standby Database
http://www.oracle-ckpt.com/category/dataguard/page/6/
I have checked that link earlier; it doesn't cover my situation, because my file locations/mount points are different.
If my MRP is stopped and I delete the standby and online redo logs on the standby, will they be created automatically (based on the restored control file) when I start managed recovery? This step is as per my first link; see the sketch below.
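If the standby redo logs do need to be cleared, a minimal sketch (the group number 4 is illustrative; repeat the CLEAR for each group listed by the first query):
SQL> SELECT GROUP#, THREAD#, STATUS FROM V$STANDBY_LOG;
SQL> ALTER DATABASE CLEAR LOGFILE GROUP 4;
The standby's online redo logs need no manual handling; as the alert log excerpt later on this page shows ("Clearing online redo logfile 1 ... complete"), managed recovery clears them itself when it needs them.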

Archive log registered, but showing as not applied in standby db + Oracle 9i

    Hi all,
In my standby database some of the archive log files are not applied. I found this with the following query:
SQL> select sequence#, applied from v$archived_log where applied = 'NO';
SEQUENCE# APP FIRST_TIME
18425     NO  05-FEB-10
But when I try to register the log manually:
SQL> alter database register logfile '/disk12/arch/A00123000018425.arc';
ERROR at line 1:
ORA-16089: archive log has already been registered.
"recover standby database" also keeps asking for the new archive file. How can I apply it?
Any solutions?

    user11919409 wrote:
I have done that so many times since 05-FEB-10.
Even then, is there only one archive gap, or is only one log not applied? If only one log is not applied, the archive file on the standby may be corrupt; try restoring it from backup and recovering. Also check the alert logs on the standby and the primary for clues about what actually happened. If the archive gap is large and you have all the archives, you may have to do a manual recovery and then put the standby back into automatic (managed) recovery mode.
Hope that helps.
    Anil Malkai
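A minimal sketch of the manual-recovery-then-back-to-managed path Anil describes (9i SQL*Plus syntax; respond AUTO when prompted for a log, assuming the archives are in the expected destination):
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> RECOVER STANDBY DATABASE;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;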

  • Gap resolution is not  always happening with real time apply

    Hi
I noticed some strange behavior in one of my 10.2.0.4 Data Guard setups, and I am wondering if anyone else has encountered it:
When there is an archive gap that needs to be resolved, the gap is NOT always identified and acted upon when recovery is started like this:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;
It just sits there doing nothing; nothing is written to the alert log regarding the gap and nothing is transferred from the primary as needed.
However, if I restart the apply, omitting the USING CURRENT LOGFILE clause, the gap is identified and acted upon:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
At first I thought it was merely due to me stopping and starting the apply process (nothing like a "reboot", right?), but that is not the case:
if I restart but still use the USING CURRENT LOGFILE clause, the gap is still not identified.
Has anyone had this issue? Any theories as to why it happens?

    Using Real-Time Apply to Apply Redo Data Immediately
    http://docs.oracle.com/cd/E11882_01/server.112/e25608/log_apply.htm#i1022881
1. What is your COMPATIBLE parameter? (In the 11.2 documentation linked above it should be 11.1 or higher.)
2. Try checking the parameters mentioned in the link below:
    http://easyoradba.com/2011/01/10/real-time-apply-in-oracle-data-guard-10g/
    Regards
    Girish Sharma
    Edited by: Girish Sharma on Nov 15, 2012 12:37 PM
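Since USING CURRENT LOGFILE (real-time apply) depends on standby redo logs, one simple thing to check on the standby, offered here as a suggestion rather than a confirmed fix for the behaviour described:
SQL> SELECT GROUP#, THREAD#, SEQUENCE#, STATUS FROM V$STANDBY_LOG;
SQL> SHOW PARAMETER compatible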

  • Archive not applying

    Setup: - Oracle 10.2.0.4 EE on HP-UX 11.23
Two weeks ago we got our Data Guard physical standby setup working. Initial tests looked good: I could tail both the primary and standby alert logs and see the results of log switches, and everything looked fine for about a week. Then ..
One morning I came in and logs weren't shipping. I checked the standby site and it had lost its mounts to the NAS. So far our setup was more proof of concept, and I was getting ready to take a week off, so on the primary I set the log_archive_dest_2 status to DEFER. In my absence our SA fixed the problem with mounting the filesystem and restarted the standby DB (he's not a DBA and just did a normal startup). I returned to the office this morning and found the dest2 status back at ENABLE, but it appears that archivelogs are not applying.
Prior to the crash, as redo was received at the standby, its alert log would show:
    Wed May 25 18:38:11 2011
    Primary database is in MAXIMUM PERFORMANCE mode
    RFS[39]: Successfully opened standby log 5: '/oradata/ora_redo/BOSTON/sbyredo05a.rdo'
    Wed May 25 18:38:17 2011
    Media Recovery Waiting for thread 1 sequence 3123 (in transit)
    Wed May 25 18:38:17 2011
    Recovery of Online Redo Log: Thread 1 Group 5 Seq 3123 Reading mem 0
      Mem# 0: /oradata/ora_redo/BOSTON/sbyredo05a.rdo
  Mem# 1: /archive/ora_redo/BOSTON/sbyredo05b.rdo
After the standby was restarted, I don't see the above sequence, but instead get:
    Fri Jun  3 07:27:09 2011
    Primary database is in MAXIMUM PERFORMANCE mode
    RFS[28]: Successfully opened standby log 4: '/oradata/ora_redo/BOSTON/sbyredo04a.rdo'
    Fri Jun  3 07:28:49 2011
    Primary database is in MAXIMUM PERFORMANCE mode
    RFS[28]: Successfully opened standby log 5: '/oradata/ora_redo/BOSTON/sbyredo05a.rdo'
    Fri Jun  3 07:31:15 2011
    Primary database is in MAXIMUM PERFORMANCE mode
    RFS[28]: Successfully opened standby log 4: '/oradata/ora_redo/BOSTON/sbyredo04a.rdo'
    Fri Jun  3 08:39:47 2011
    Primary database is in MAXIMUM PERFORMANCE mode
RFS[28]: Successfully opened standby log 5: '/oradata/ora_redo/BOSTON/sbyredo05a.rdo'
It appears that I have exceeded control_file_record_keep_time. This showed up in the standby alert log:
    Mon Jun  6 08:25:47 2011
    FAL[client]: Failed to request gap sequence
    GAP - thread 1 sequence 3126-3224
    DBID 2542058214 branch 737468006
    FAL[client]: All defined FAL servers have been attempted.
    Check that the CONTROL_FILE_RECORD_KEEP_TIME initialization
    parameter is defined to a value that is sufficiently large
    enough to maintain adequate log switch information to resolve
    archivelog gaps.
-------------------------------------------------------------
On primary:
    SQL> show parameter CONTROL_FILE_RECORD_KEEP_TIME
    NAME                                 TYPE        VALUE
    control_file_record_keep_time        integer     7
SQL>
I could take the "easy" way out and rebuild the standby from a fresh backup of the primary, but before doing that I would like to learn what I can from this, as I'm still pretty new to managing a DB setup.

    Mon Jun 6 08:25:47 2011
    FAL[client]: Failed to request gap sequence
    GAP - thread 1 sequence 3126-3224
    DBID 2542058214 branch 737468006
    FAL[client]: All defined FAL servers have been attempted.
    Check that the CONTROL_FILE_RECORD_KEEP_TIME initialization
    parameter is defined to a value that is sufficiently large
    enough to maintain adequate log switch information to resolve
    archivelog gaps.
-------------------------------------------------------------
Actually, this CONTROL_FILE_RECORD_KEEP_TIME message appears in the alert log whenever an archive gap is found on the standby, even if the parameter is set to 30 days. It is very generic information; nothing to worry about.
    Fri Jun 3 07:27:09 2011
Primary database is in MAXIMUM PERFORMANCE mode
RFS[28]: Successfully opened standby log 4: '/oradata/ora_redo/BOSTON/sbyredo04a.rdo'
In the information above, whenever an archive is shipped from the primary to the standby it is assigned to a standby logfile. This is also very normal behaviour.
I could take the "easy" way out and rebuild the standby from a fresh backup of the primary, but before doing that I would like to learn what I can from this, as I'm still pretty new to managing a DB setup.
According to your archive log gap (GAP - thread 1 sequence 3126-3224), it is not big. Why do you want to rebuild?
Check what errors appear in the primary alert log file.
    Post from PRIMARY:-
show parameter dest_state_2
select ds.dest_id id
    , ad.status
    , ds.database_mode db_mode
    , ad.archiver type
    , ds.recovery_mode
    , ds.protection_mode
    , ds.standby_logfile_count "SRLs"
    , ds.standby_logfile_active active
    , ds.archived_seq#
    from v$archive_dest_status ds
    , v$archive_dest ad
    where ds.dest_id = ad.dest_id
    and ad.status != 'INACTIVE'
    order by
    ds.dest_id
select error_code, timestamp, message from v$dataguard_status where dest_id=2;
Thanks.
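If the missing sequences are still in an RMAN backup on the primary (an assumption; the poster has not confirmed it), they could be restored there and then shipped to, or fetched by, the standby. A minimal sketch using the gap range from the alert log above:
RMAN> RESTORE ARCHIVELOG FROM SEQUENCE 3126 UNTIL SEQUENCE 3224;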

  • RMAN-08120: WARNING: archived log not deleted, not yet applied by standby

I get RMAN-08120: WARNING: archived log not deleted, not yet applied by standby on the primary,
but when I run the query below I get the same result from the primary and the standby:
SQL> select max(sequence#) from v$archived_log;
MAX(SEQUENCE#)
44051
SQL>
The standby is only one log switch behind!

I get RMAN-08120: WARNING: archived log not deleted, not yet applied by standby on the primary
You already have the answer in Mseberg's post.
but when I run the query below I get the same result from the primary and the standby
SQL> select max(sequence#) from v$archived_log;
MAX(SEQUENCE#)
44051
SQL>
The standby is only one log switch behind!
That is the wrong query to use on the primary and the standby. Suppose a single archive in a gap, say sequence 44020, was not transported to the standby because of a network problem, but all the archives from 44021 up to 44051 were transported later: the query still shows 44051 as the maximum sequence transferred to the standby. It shows the maximum sequence received, not the applied sequence.
    Check the below queries.
    Primary:-
    SQL> select thread#,max(sequence#) from v$archived_log group by thread#;
    Standby:-
    SQL> select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread#;
    HTH.
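For the RMAN-08120 warning itself, the usual remedy once shipping is healthy is the deletion policy quoted earlier on this page, so RMAN keeps archives until the standby has applied them. A minimal sketch, run in RMAN connected to the primary as target:
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;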

  • Need archives earlier than previous backup.

    Hi Experts,
    Here i want to share and to know the reason of one situation.
    DB Version :10.2.0.4
    OS Version: Linux
I'm preparing a standby database from a RAC primary database (PRIMARY ----> STANDBY), both on ASM.
I took a backup of the database on 25-DEC-2010,
took a standby controlfile backup on 26-DEC-2010,
and restored the database on 28-DEC-2010, and I found some archive gaps after starting the MRP process.
I have an archive retention of 6 days, so as per that retention I have all of those archives on the primary.
But when I checked the alert log file, it was requesting an archive which was generated on 01-DEC-2010. I checked the completion_time of the archive in v$archived_log.
I have posted some basic information below. Can anyone share your views?
    Thanks

    STANDBY:-
    ~~~~
    SQL> select min(checkpoint_change#) from v$datafile;
    MIN(CHECKPOINT_CHANGE#)
    6053066844468
    PRIMARY:-
    ~~~~~~
    SQL> select first_change#,next_change# from v$archived_log where sequence# between 8998 and 9008;
    FIRST_CHANGE# NEXT_CHANGE#
    6052728491241 6052728594833
    6052728594833 6052728720598
    6052728720598 6052728880838
    6052728880838 6052729025406
    6052729025406 6052729207089
    6052729207089 6052729339202
    6052729339202 6052729509994
    6052729509994 6052732075048
    6052732075048 6052751377975
    6052751377975 6052763669833
    6052763669833 6052767026703
    FIRST_CHANGE# NEXT_CHANGE#
    6052840910461 6052847501485
    6052847501485 6052857374219
    6052857374219 6052857920410
    6052857920410 6052858390970
    6052858390970 6052898901735
    6052898901735 6052899018444
    6052899018444 6052906511296
    6052906511296 6052926911168
    6052926911168 6052947295154
    6052947295154 6052947546651
    6052947546651 6052949938434
    FIRST_CHANGE# NEXT_CHANGE#
    6052947546651 6052949938434
    6052947546651 6052949938434
    24 rows selected.
    SQL>
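A quick way to see which archived log the standby actually needs is to find the log whose SCN range contains the standby's minimum datafile checkpoint SCN shown above. A minimal sketch, run on the primary:
SQL> SELECT THREAD#, SEQUENCE#, COMPLETION_TIME
     FROM   V$ARCHIVED_LOG
     WHERE  6053066844468 BETWEEN FIRST_CHANGE# AND NEXT_CHANGE#;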

  • Standby database is not applying redo logs due to missing archive log

    We use 9.2.0.7 Oracle Database. My goal is to create a physical standby database.
    I have followed all the steps necessary to fulfill this in Oracle Data Guard Concepts and Administration manual. Archived redo logs are transmitted from primary to standby database regularly. But the logs are not applied due to archive log gap.
    SQL> select process, status from v$managed_standby;
    PROCESS STATUS
    ARCH CONNECTED
    ARCH CONNECTED
    MRP0 WAIT_FOR_GAP
    RFS RECEIVING
    RFS ATTACHED
    SQL> select * from v$archive_gap;
    THREAD# LOW_SEQUENCE# HIGH_SEQUENCE#
    1 503 677
    I have tried to find the missing archives on the primary database, but was unable to. They have been deleted (somehow) regularly by the existing backup policy on the primary database. I have looked up the backups, but these archive logs are too old to be in the backup. Backup retention policy is 1 redundant backup of each file. I didn't save older backups as I didn't really need them from up to this point.
    I have cross checked (using rman crosscheck) the archive log copies on the primary database and deleted the "obsolete" copies of archive logs. But, v$archived_log view on the primary database only marked those entries as "deleted". Unfortunately, the standby database is still waiting for those logs to "close the gap" and doesn't apply the redo logs at all. I am reluctant to recreate the control file on the primary database as I'm afraid this occurred through the regular database backup operations, due to current backup retention policy and it probably might happen again.
    The standby creation procedure was done by using the data files from 3 days ago. The archive logs which are "producing the gap" are older than a month, and are probably unneeded for standby recovery.
    What shall I do?
    Kind regards and thanks in advance,
    Milivoj

    On a physical standby database
    To determine if there is an archive gap on your physical standby database, query the V$ARCHIVE_GAP view as shown in the following example:
    SQL> SELECT * FROM V$ARCHIVE_GAP;
    THREAD# LOW_SEQUENCE# HIGH_SEQUENCE#
    1 7 10
    The output from the previous example indicates your physical standby database is currently missing log files from sequence 7 to sequence 10 for thread 1.
    After you identify the gap, issue the following SQL statement on the primary database to locate the archived redo log files on your primary
    database (assuming the local archive destination on the primary database is LOG_ARCHIVE_DEST_1):
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE THREAD#=1 AND DEST_ID=1
  2> AND SEQUENCE# BETWEEN 7 AND 10;
NAME
/primary/thread1_dest/arcr_1_7.arc
/primary/thread1_dest/arcr_1_8.arc
/primary/thread1_dest/arcr_1_9.arc
    Copy these log files to your physical standby database and register them using the ALTER DATABASE REGISTER LOGFILE statement on your physical standby database. For example:
    SQL> ALTER DATABASE REGISTER LOGFILE
    '/physical_standby1/thread1_dest/arcr_1_7.arc';
    SQL> ALTER DATABASE REGISTER LOGFILE
    '/physical_standby1/thread1_dest/arcr_1_8.arc';
    After you register these log files on the physical standby database, you can restart Redo Apply.
    Note:
    The V$ARCHIVE_GAP fixed view on a physical standby database only returns the next gap that is currently blocking Redo Apply from continuing. After resolving the gap and starting Redo Apply, query the V$ARCHIVE_GAP fixed view again on the physical standby database to determine the next gap sequence, if there is one. Repeat this process until there are no more gaps.
Restoring the archived logs from a backup set
If the archived logs are no longer available in the archive destination, we need to restore the required archived logs from a backup set. This can be done in RMAN as follows.
To restore a range of archived logs by log sequence:
RUN {
  SET ARCHIVELOG DESTINATION TO '/oracle/arch/arch_restore';
  RESTORE ARCHIVELOG FROM LOGSEQ=<xxxxx> UNTIL LOGSEQ=<xxxxxxx>;
}
To restore all the archived logs:
RUN {
  SET ARCHIVELOG DESTINATION TO '/oracle/arch/arch_restore';
  RESTORE ARCHIVELOG ALL;
}
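Once the restored (or copied) archives are in place on the standby, they can be registered and apply restarted, as described above. The file name here is illustrative only:
SQL> ALTER DATABASE REGISTER LOGFILE '/oracle/arch/arch_restore/arcr_1_9.arc';
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;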

Can you help me with Change Data Capture in 10.2.0.3

    Hi,
I have done some research on Change Data Capture and I am trying to implement it between two databases for two small tables in 10g Release 2. My CDC implementation uses archive logs to replicate data.
The Change Data Capture mode is asynchronous AutoLog archive mode. It works correctly (except for DDL). Now I have some questions about a CDC implementation for large tables.
I have one scenario to implement, but I cannot find exactly how to do it correctly.
I have one table (named test) that consists of 100,000,000 rows; every day 1,000,000 transactions occur on this table, and I manually archive off data older than one year. This table is in the source DB. I want to replicate this table to a staging database using Change Data Capture.
My questions about this scenario are the following.
1. How can I do the initial load? (The test table has 100,000,000 rows in the source DB.)
2. CDC uses a change table (named test_ch) that contains extra rows describing the operations for the staged table. But I need the original table (named test) for the application to work in the staging database. How can I move the data from the change table (test_ch) to the original table (test) in the staging database? (I'd prefer not to use a view for the test table.)
3. How can I remove some data from the change table (test_ch) in the staging DB? Will that cause problems or not?
4. Is there a way to replicate DDL operations between the two databases?
5. How can I find the last applied log on the staging DB in CDC? How can I find the archive gap between the source DB and the staging DB?
6. How do I maintain the change tables in the staging DB?

    Asynchronous CDC uses Streams to generate the change records. Basically, it is a pre-packaged DML Handler that converts the changes into inserts into the change table. You indicated that you want the changes to be written to the original table, which is the default behavior of Streams replication. That is why I recommended that you use Streams directly.
Yes, it is possible to capture changes from a production redo/archive log at another database. This capability is called "downstream" capture in the Streams manuals. You can configure this capability using the MAINTAIN_* procedures in the DBMS_STREAMS_ADM package (where * is one of TABLES, SCHEMAS, or GLOBAL depending on the granularity of change capture).
A couple of tips for using these procedures for downstream capture:
1) Don't forget to set up log shipping to the downstream capture database. Log shipping is set up exactly the same way for Streams as for Data Guard. Instructions can be found in the Streams Replication Administrator's Guide. This configuration has probably already been done as part of your initial CDC setup.
2) Run the command at the database that will perform the downstream capture. This database can also be the destination (or target) database where the changes are to be applied.
3) Explicitly define the parameters capture_queue_name and apply_queue_name to be the same queue name. Example:
capture_queue_name => 'STRMADMIN.STREAMS_QUEUE'
apply_queue_name   => 'STRMADMIN.STREAMS_QUEUE'
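As a rough illustration of tip 1, redo shipping to the downstream capture database is configured with an archive destination on the source, just as for Data Guard. The service name stagedb is a hypothetical TNS alias, not something from the original post:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=stagedb ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)' SCOPE=BOTH;
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE SCOPE=BOTH;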

  • How To Check Whether Physical Standby is in Sync with the Primary

    hi All,
I'm new to Data Guard. For our current production system, my boss has asked me to write a shell script that monitors whether the physical standby's archives are in sync with the primary (i.e. checks for an archive gap).
I'm referring to MOS note 861595.1, but when I ran the first query on the primary node, the session hung.
    On primary
    ========
    SQL> SELECT THREAD# "Thread",SEQUENCE# "Last Sequence Generated"
    FROM V$ARCHIVED_LOG
    WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)
    ORDER BY 1;
I turned on 10046 SQL trace; the SQL consumes a lot of CPU and does a full table scan of X$KCCAL.
The TKPROF result looks like this:
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1     88.29     115.59          0          0          0           0
    total        3     88.30     115.60          0          0          0           0
    Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 80
    Rows     Row Source Operation
          0  SORT ORDER BY (cr=0 pr=0 pw=0 time=21 us)
          0   FILTER  (cr=0 pr=0 pw=0 time=19 us)
       1124    FIXED TABLE FULL X$KCCAL (cr=0 pr=0 pw=0 time=40537 us)
          0    FILTER  (cr=0 pr=0 pw=0 time=115538625 us)
       1122     HASH GROUP BY (cr=0 pr=0 pw=0 time=115530193 us)
    7465972      FIXED TABLE FULL X$KCCAL (cr=0 pr=0 pw=0 time=94648975 us)
    Elapsed times include waiting on following events:
      Event waited on                        Times   Max. Wait  Total Waited
      -----------------------------------   Waited  ----------  ------------
      SQL*Net message to client               1        0.00          0.00
      control file sequential read        16841        0.05         30.88
  SQL*Net break/reset to client           1        0.00          0.00
Since this is a production environment, I terminated the session. Can anyone teach me, or share with me, any scripts to monitor whether the physical standby is in sync with the primary? Or have you encountered the above issue when running that SQL?
    My db version is Oracle 11.2.0.1.0
    Thanks in advance.
    Regards,
    Klnghau

    Hello;
Note 861595.1 has not been subject to an independent technical review. Not sure if that makes it bad or not.
    This is what I'm using: ( I spool this to a file and have it e-mailed to me daily)
    PROMPT
    PROMPT Checking last sequence in v$archived_log
    PROMPT
    clear screen
    set linesize 100
    column STANDBY format a20
    column applied format a10
    SELECT name as STANDBY, SEQUENCE#, applied, completion_time from v$archived_log WHERE DEST_ID = 2 AND NEXT_TIME > SYSDATE -1;
    prompt
    prompt----------------Last log on Primary--------------------------------------|
    prompt
    select max(sequence#) from v$archived_log where NEXT_TIME > sysdate -1;
    Best Regards
    mseberg
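On 11.2 a simpler sync check is also available from the standby itself via V$DATAGUARD_STATS; a minimal sketch (run on the standby):
SQL> SELECT NAME, VALUE, UNIT FROM V$DATAGUARD_STATS WHERE NAME IN ('transport lag', 'apply lag');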

  • How to apply the changes in logical standby database

    Hi,
I am new to Data Guard. I am using 10.2.0.3 and followed the steps from the Oracle Data Guard Concepts and Administration guide to set up a logical standby database.
When I insert a record into a table on the primary database and then query the same table on the logical standby database, it doesn't show the new record.
Did I miss something? What I want is that when I insert a record in the primary DB, the corresponding record is inserted in the standby DB.
Or have I totally misunderstood what Oracle Data Guard is? Any help is appreciated.
    Denis

    Hi
Can anyone help me work out whether my logical standby DB has an archive gap?
SQL> SELECT APPLIED_SCN, APPLIED_TIME, READ_SCN, READ_TIME, NEWEST_SCN, NEWEST_TIME FROM DBA_LOGSTDBY_PROGRESS;
APPLIED_SCN APPLIED_TIME        READ_SCN READ_TIME           NEWEST_SCN NEWEST_TIME
     851821 29-JUL-08 17:58:29    851822 29-JUL-08 17:58:29     1551238 08-AUG-08 08:43:29
    SQL> select pid, type, status, high_scn from v$logstdby;
    no rows selected
    SQL> alter database start logical standby apply;
    Database altered.
SQL> select pid, type, status, high_scn from v$logstdby;
PID   TYPE         STATUS                                                            HIGH_SCN
2472  COORDINATOR  ORA-16116: no work available
3380  READER       ORA-16127: stalled waiting for additional transactions to be applied 852063
2480  BUILDER      ORA-16116: no work available
2492  ANALYZER     ORA-16111: log mining and apply setting up
2496  APPLIER      ORA-16116: no work available
2500  APPLIER      ORA-16116: no work available
3700  APPLIER      ORA-16116: no work available
940   APPLIER      ORA-16116: no work available
2504  APPLIER      ORA-16116: no work available
9 rows selected.
    Thanks a lot.
    Message was edited by:
    Denis Chan
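For a logical standby, one way to look for a registration gap is to list the registered logs and look for a break in the sequence numbers; a minimal sketch (run on the logical standby):
SQL> SELECT SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE#
     FROM   DBA_LOGSTDBY_LOG
     ORDER  BY SEQUENCE#;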

Reg: no RFS background process in standby

Hi all,
OS: Windows Server
Oracle: 10.2.0.3
I have a problem with my standby database. I found no RFS background process on the standby when running this query:
SQL> select process, status, sequence# from v$managed_standby;
    PROCESS STATUS SEQUENCE#
    ARCH CONNECTED 0
    ARCH CONNECTED 0
    ARCH CONNECTED 0
    ARCH CONNECTED 0
    ARCH CONNECTED 0
    ARCH CONNECTED 0
    ARCH CONNECTED 0
    ARCH CONNECTED 0
    ARCH CONNECTED 0
    ARCH CONNECTED 0
    MRP0 WAIT_FOR_LOG 5949
I have shut down the standby instance, mounted it, and started the recovery (twice), but I am still having the same issue.
Moreover, there is no archive gap.
Please help me with this.
Regards,
Gold

My alert log file:
    Mon Feb 25 07:48:05 2013
    Starting ORACLE instance (normal)
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Picked latch-free SCN scheme 3
    Using LOG_ARCHIVE_DEST_10 parameter default value as USE_DB_RECOVERY_FILE_DEST
    Autotune of undo retention is turned on.
    IMODE=BR
    ILAT =18
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    ksdpec: called for event 13740 prior to event group initialization
    Starting up ORACLE RDBMS Version: 10.2.0.3.0.
    System parameters with non-default values:
    processes = 150
    __shared_pool_size = 352321536
    __large_pool_size = 16777216
    __java_pool_size = 16777216
    __streams_pool_size = 0
    nls_language = ENGLISH
    nls_territory = UNITED KINGDOM
    sga_target = 1610612736
    control_files = E:\ORADATA\WPLQDM\CONTROL01.CTL, I:\ORADATA\WPLQDM\CONTROL02.CTL, J:\ORADATA\WPLQDM\CONTROL03.CTL
    db_block_size = 8192
    __db_cache_size = 1207959552
    compatible = 10.2.0.1.0
    log_archive_config = DG_CONFIG=(wplqdmp,wplqdms2)
    log_archive_dest_2 = SERVICE=wpl_qdm_pro OPTIONAL LGWR ASYNC NOAFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=wplqdms2
    log_archive_max_processes= 10
    fal_client = wpl_qdm_stb2
    fal_server = wpl_qdm_pro
    db_file_multiblock_read_count= 16
    db_recovery_file_dest = J:\flash_recovery_area
    db_recovery_file_dest_size= 31457280000
    standby_file_management = AUTO
    undo_management = AUTO
    undo_tablespace = UNDOTBS1
    remote_login_passwordfile= EXCLUSIVE
    db_domain =
    service_names = WPLQDM
    dispatchers = (PROTOCOL=TCP) (SERVICE=wplqdmXDB)
    job_queue_processes = 10
    audit_file_dest = E:\ORACLE\PRODUCT\10.2.0\ADMIN\WPLQDM\ADUMP
    background_dump_dest = E:\ORACLE\PRODUCT\10.2.0\ADMIN\WPLQDM\BDUMP
    user_dump_dest = E:\ORACLE\PRODUCT\10.2.0\ADMIN\WPLQDM\UDUMP
    core_dump_dest = E:\ORACLE\PRODUCT\10.2.0\ADMIN\WPLQDM\CDUMP
    db_name = wplqdm
    db_unique_name = WPLQDMS2
    open_cursors = 300
    pga_aggregate_target = 1277165568
    PMON started with pid=2, OS id=3516
    PSP0 started with pid=3, OS id=4064
    MMAN started with pid=4, OS id=580
    DBW0 started with pid=5, OS id=4184
    LGWR started with pid=6, OS id=5032
    CKPT started with pid=7, OS id=4076
    SMON started with pid=8, OS id=3604
    RECO started with pid=9, OS id=344
    CJQ0 started with pid=10, OS id=800
    MMON started with pid=11, OS id=4200
    Mon Feb 25 07:48:05 2013
    starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
    MMNL started with pid=12, OS id=4804
    Mon Feb 25 07:48:05 2013
    starting up 1 shared server(s) ...
    Mon Feb 25 07:48:17 2013
    alter database mount standby database
    Mon Feb 25 07:48:21 2013
    Setting recovery target incarnation to 1
    ARCH: STARTING ARCH PROCESSES
    ARC0 started with pid=16, OS id=4024
    ARC1 started with pid=17, OS id=3712
    ARC2 started with pid=18, OS id=4376
    ARC3 started with pid=19, OS id=2052
    ARC4 started with pid=20, OS id=4468
    ARC5 started with pid=21, OS id=3556
    ARC6 started with pid=22, OS id=3212
    ARC7 started with pid=23, OS id=4476
    ARC8 started with pid=24, OS id=516
    Mon Feb 25 07:48:21 2013
    ARC0: Archival started
    ARC1: Archival started
    Mon Feb 25 07:48:21 2013
    ARC2: Archival started
    ARC3: Archival started
    ARC4: Archival started
    ARC5: Archival started
    ARC6: Archival started
    ARC7: Archival started
    ARC8: Archival started
    ARC9: Archival started
    ARCH: STARTING ARCH PROCESSES COMPLETE
    Mon Feb 25 07:48:21 2013
    ARC0: Becoming the 'no FAL' ARCH
    ARC0: Becoming the 'no SRL' ARCH
    Mon Feb 25 07:48:21 2013
    ARC1: Becoming the heartbeat ARCH
    Mon Feb 25 07:48:21 2013
    ARC8: Thread not mounted
    Mon Feb 25 07:48:21 2013
    Successful mount of redo thread 1, with mount id 171090433
    Mon Feb 25 07:48:21 2013
    Physical Standby Database mounted.
    Completed: alter database mount standby database
    Mon Feb 25 07:48:22 2013
    ARC7: Thread not mounted
    ARC0: Thread not mounted
    Mon Feb 25 07:48:24 2013
    ARC5: Thread not mounted
    Mon Feb 25 07:48:25 2013
    ARC2: Thread not mounted
    Mon Feb 25 07:48:26 2013
    ARC3: Thread not mounted
    ARC1: Thread not mounted
    Mon Feb 25 07:48:28 2013
    ARC6: Thread not mounted
    ARC9 started with pid=25, OS id=4384
    Mon Feb 25 07:48:30 2013
    ARC4: Thread not mounted
    Mon Feb 25 07:49:00 2013
    alter database recover managed standby database disconnect
    MRP0 started with pid=26, OS id=4768
    Managed Standby Recovery not using Real Time Apply
    parallel recovery started with 3 processes
    Mon Feb 25 07:49:07 2013
    Errors in file e:\oracle\product\10.2.0\admin\wplqdm\bdump\wplqdm_mrp0_4768.trc:
    ORA-00313: Message 313 not found; No message file for product=RDBMS, facility=ORA; arguments: [1] [1]
    ORA-00312: Message 312 not found; No message file for product=RDBMS, facility=ORA; arguments: [1] [1] [J:\ORADATA\WPLQDM\REDO01_3.RDO]
    ORA-27041: Message 27041 not found; No message file for product=RDBMS, facility=ORA
    OSD-04002: unable to open file
    O/S-Error: (OS 2) The system cannot find the file specified.
    ORA-00312: Message 312 not found; No message file for product=RDBMS, facility=ORA; arguments: [1] [1]
    ORA-27041: Message 27041 not found; No message file for product=RDBMS, facility=ORA
    OSD-04002: unable to open file
    O/S-Error: (OS 2) The system cannot find the file specified.
    ORA-00312: Message 312 not found; No message file for product=RDBMS, facility=ORA; arguments: [1] [1] [E:\ORADATA\WPLQDM\REDO01_1.RDO]
    ORA-27041: Message 27041 not found; No message file for product=RDBMS, facility=ORA
    OSD-04002: unable to open file
    O/S-Error: (OS 2) The system cannot find the fil
    Mon Feb 25 07:49:07 2013
    Errors in file e:\oracle\product\10.2.0\admin\wplqdm\bdump\wplqdm_mrp0_4768.trc:
    ORA-00313: Message 313 not found; No message file for product=RDBMS, facility=ORA; arguments: [1] [1]
    ORA-00312: Message 312 not found; No message file for product=RDBMS, facility=ORA; arguments: [1] [1] [J:\ORADATA\WPLQDM\REDO01_3.RDO]
    ORA-27041: Message 27041 not found; No message file for product=RDBMS, facility=ORA
    OSD-04002: unable to open file
    O/S-Error: (OS 2) The system cannot find the file specified.
    ORA-00312: Message 312 not found; No message file for product=RDBMS, facility=ORA; arguments: [1] [1] [I:]
    ORA-27041: Message 27041 not found; No message file for product=RDBMS, facility=ORA
    OSD-04002: unable to open file
    O/S-Error: (OS 2) The system cannot find the file specified.
    ORA-00312: Message 312 not found; No message file for product=RDBMS, facility=ORA; arguments: [1] [1] [E:\ORADATA\WPLQDM\REDO01_1.RDO]
    ORA-27041: Message 27041 not found; No message file for product=RDBMS, facility=ORA
    OSD-04002: unable to open file
    O/S-Error: (OS 2) The system cannot find the fil
    Clearing online redo logfile 1 E:\ORADATA\WPLQDM\REDO01_1.RDO
    Clearing online log 1 of thread 1 sequence number 5937
    Mon Feb 25 07:49:07 2013
    Errors in file e:\oracle\product\10.2.0\admin\wplqdm\bdump\wplqdm_mrp0_4768.trc:
    ORA-00313: Message 313 not found; No message file for product=RDBMS, facility=ORA; arguments: [1] [1]
    ORA-00312: Message 312 not found; No message file for product=RDBMS, facility=ORA; arguments: [1] [1] [J:\ORADATA\WPLQDM\REDO01_3.RDO]
    ORA-27041: Message 27041 not found; No message file for product=RDBMS, facility=ORA
    OSD-04002: unable to open file
    O/S-Error: (OS 2) The system cannot find the file specified.
    ORA-00312: Message 312 not found; No message file for product=RDBMS, facility=ORA; arguments: [1] [1] [I:]
    ORA-27041: Message 27041 not found; No message file for product=RDBMS, facility=ORA
    OSD-04002: unable to open file
    O/S-Error: (OS 2) The system cannot find the file specified.
    ORA-00312: Message 312 not found; No message file for product=RDBMS, facility=ORA; arguments: [1] [1] [E:\ORADATA\WPLQDM\REDO01_1.RDO]
    ORA-27041: Message 27041 not found; No message file for product=RDBMS, facility=ORA
    OSD-04002: unable to open file
    O/S-Error: (OS 2) The system cannot find the fil
    Mon Feb 25 07:49:07 2013
    Errors in file e:\oracle\product\10.2.0\admin\wplqdm\bdump\wplqdm_mrp0_4768.trc:
    ORA-19527: Message 19527 not found; No message file for product=RDBMS, facility=ORA
    ORA-00312: Message 312 not found; No message file for product=RDBMS, facility=ORA; arguments: [1] [1] [E:\ORADATA\WPLQDM\REDO01_1.RDO]
    Clearing online redo logfile 1 complete
    Media Recovery Waiting for thread 1 sequence 5949
    Mon Feb 25 07:49:07 2013
    Completed: alter database recover managed standby database disconnect
    Mon Feb 25 08:03:11 2013
    db_recovery_file_dest_size of 30000 MB is 7.80% used. This is a
    user-specified limit on the amount of space that will be used by this
    database for recovery-related files, and does not reflect the amount of
    space available in the underlying filesystem or ASM diskgroup.
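Since no RFS process ever appears on the standby, the shipping side is worth checking from the primary; a minimal sketch using the same views quoted earlier on this page:
SQL> SELECT DEST_ID, STATUS, ERROR FROM V$ARCHIVE_DEST WHERE DEST_ID = 2;
SQL> SELECT ERROR_CODE, TIMESTAMP, MESSAGE FROM V$DATAGUARD_STATUS WHERE DEST_ID = 2;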
