Archives on Primary DB

Hi
We have an Oracle 11.1 database with two physical standby databases configured. Archives are applied normally on the standby databases, and when I query v$archived_log there it shows them as applied.
However, when I query the same view on the primary database, the APPLIED column shows 'NO'.
Can someone please explain the reason for this behavior?
Thank You

This is expected behavior. Per the documentation for the APPLIED column of V$ARCHIVED_LOG: it indicates whether the archivelog has been applied to its corresponding standby database (YES) or not (NO). The value is always NO for local destinations.
This column is meaningful at the physical standby site for the ARCHIVED_LOG entries with REGISTRAR='RFS' (which means this log is shipped from the primary to the standby database). If REGISTRAR='RFS' and APPLIED is NO, then the log has arrived at the standby but has not yet been applied. If REGISTRAR='RFS' and APPLIED is YES, the log has arrived and been applied at the standby database.
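In practice this means the APPLIED column should be read on the standby, or on the primary only for the rows describing the remote destination. A minimal sketch (the DEST_ID value 2 is an assumption, not from the post; check your own log_archive_dest_n setup):

SQL> -- on the standby: logs shipped from the primary and their apply status
SQL> select sequence#, applied from v$archived_log where registrar = 'RFS' order by sequence#;

SQL> -- on the primary: ignore the local destination, look only at the standby destination
SQL> select dest_id, sequence#, applied from v$archived_log where dest_id = 2 order by sequence#;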

Similar Messages

I need to restore old archives on the primary so that I can apply them on the standby

    Hi All,
I need to restore sequences 135777-138246 before enabling the DG broker configuration (they are not available at the primary or standby site).
So I need to restore these missing logs on the primary first. Please let me know how I would go about it.
On the standby, logs up to sequence 135097 have been applied (confirmed from the alert log, as the DB is down as of now).
    =================
    Fri Mar 23 14:43:38 2012
    Primary database is in MAXIMUM PERFORMANCE mode
    RFS[2]: Successfully opened standby log 7: '/u31/pjmagap/oradata/std01a.rdo'
    Fri Mar 23 14:43:40 2012
    Media Recovery Log /oraarc/pjmagap/pjmagap_1_135097_666467537.arc
    Media Recovery Waiting for thread 1 sequence 135098 (in transit)
    Sun Mar 25 20:31:38 2012
    Log Gap is :- 135098-138524 (~3427 logs)

Log Gap is: 135098-138524 (~3427 logs)
You chose the ROLLFORWARD option. Hmm, is it really necessary?
Maybe you only have a problem with archive *135098*, which was not transferred; even though that archive was not transferred, the archives after it will still be transferred.
So if you just restore the few missing archives, that should be fine.
If you are sure that all of the above archives were not transferred, then you are better off going with a roll-forward.
You already did that, but still check this link, it might be helpful: http://www.oracle-ckpt.com/rman-incremental-backups-to-roll-forward-a-physical-standby-database-2/
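For reference, a hedged sketch of restoring the missing range on the primary with RMAN, assuming those sequences still exist in a backup (the destination path is taken from the alert log excerpt above and may differ in your environment):

RMAN> # restore the missing sequences to the normal archive destination
RMAN> restore archivelog from sequence 135777 until sequence 138246;

RMAN> # or restore them to an explicit location first
RMAN> run {
2>   set archivelog destination to '/oraarc/pjmagap';
3>   restore archivelog from sequence 135777 until sequence 138246;
4> }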

  • Issue with physical standby database not in sync with primary

    Hi,
I created a physical standby database a couple of hours back. I am trying to keep the standby database in managed recovery mode so it stays in sync with the primary, but it is throwing the errors below. Please share your suggestions...
    SQL> select thread#,max(sequence#) from v$log_history group by thread#;
THREAD# MAX(SEQUENCE#)
      1              7
From v$managed_standby:
PROCESS   STATUS      THREAD#  SEQUENCE#
ARCH      CONNECTED         0          0
ARCH      CONNECTED         0          0
RFS       OPENING           1         12
    ALERT LOG :
    Tue Mar 20 07:31:32 2012
    alter database recover managed standby database disconnect from session
    Tue Mar 20 07:31:32 2012
    Attempt to start background Managed Standby Recovery process (PRIMARY)
    MRP0 started with pid=18, OS id=16370
    Tue Mar 20 07:31:32 2012
    MRP0: Background Managed Standby Recovery process started (PRIMARY)
    Managed Standby Recovery not using Real Time Apply
    parallel recovery started with 8 processes
    Media Recovery Log /oracle/STDBY/arch/1_3_777567883.dbf
    Tue Mar 20 07:31:39 2012
    Completed: alter database recover managed standby database disconnect from session
    Tue Mar 20 07:31:54 2012
    Incomplete recovery applied all redo ever generated.
    Recovery completed through change 9677325080303
    Tue Mar 20 07:31:54 2012
    MRP0: Media Recovery Complete (PRIMARY)
    Tue Mar 20 07:31:55 2012
    MRP0: Background Media Recovery process shutdown (PRIMARY)
    Thanks,
    Rakesh

    HI CKPT,
Thanks for the reply. All the archives from the primary are transferred to the standby by RFS. I also tried to register the log files manually, but it says they are already registered. There are no errors in the primary instance alert log file. Please find the log below:
    SEVERITY ERROR_CODE MESSAGE TO_CHAR(TIMESTAMP,'D
    Informational 0 ARC0: Archival started 20-MAR-2012 06:51:36
    Informational 0 ARC1: Archival started 20-MAR-2012 06:51:36
    Informational 0 ARC0: Becoming the 'no FAL' ARCH 20-MAR-2012 06:51:36
    Informational 0 ARC0: Becoming the 'no SRL' ARCH 20-MAR-2012 06:51:36
    Informational 0 ARC1: Becoming the heartbeat ARCH 20-MAR-2012 06:51:36
    Informational 0 Redo Shipping Client Connected as PUBLIC 20-MAR-2012 06:52:07
    Informational 0 -- Connected User is Valid 20-MAR-2012 06:52:07
    Informational 0 RFS[1]: Assigned to RFS process 15934 20-MAR-2012 06:52:07
    Informational 0 RFS[1]: Identified database type as 'physical standby' 20-MAR-2012 06:52:07
    Warning 0 RFS[1]: No standby redo logfiles created 20-MAR-2012 06:52:07
    Control 0 Attempt to start background Managed Standby Recovery process 20-MAR-2012 06:52:42
    Control 0 MRP0: Background Managed Standby Recovery process started 20-MAR-2012 06:52:42
    Informational 0 Managed Standby Recovery not using Real Time Apply 20-MAR-2012 06:52:47
Informational 0 Media Recovery Log /oracle/STDBY/arch/1_3_777567883.dbf 20-MAR-2012 06:52:49
    Control 0 MRP0: Media Recovery Complete 20-MAR-2012 06:53:04
    Control 0 MRP0: Background Media Recovery process shutdown 20-MAR-2012 06:53:06
    Informational 0 Managed Standby Recovery not using Real Time Apply 20-MAR-2012 06:53:24
    Control 0 Media Recovery Complete 20-MAR-2012 06:53:43
    Control 0 Attempt to start background Managed Standby Recovery process 20-MAR-2012 06:54:55
    Control 0 MRP0: Background Managed Standby Recovery process started 20-MAR-2012 06:54:55
    Informational 0 Managed Standby Recovery not using Real Time Apply 20-MAR-2012 06:55:00
Informational 0 Media Recovery Log /oracle/STDBY/arch/1_3_777567883.dbf 20-MAR-2012 06:55:01
    Control 0 MRP0: Media Recovery Complete 20-MAR-2012 06:55:17
    Control 0 MRP0: Background Media Recovery process shutdown 20-MAR-2012 06:55:18
    Informational 0 Redo Shipping Client Connected as PUBLIC 20-MAR-2012 07:31:03
    Informational 0 -- Connected User is Valid 20-MAR-2012 07:31:03
    Informational 0 RFS[2]: Assigned to RFS process 16366 20-MAR-2012 07:31:03
    Informational 0 RFS[2]: Identified database type as 'physical standby' 20-MAR-2012 07:31:03
    Warning 0 RFS[2]: No standby redo logfiles created 20-MAR-2012 07:31:04
    Warning 0 RFS[2]: No standby redo logfiles created 20-MAR-2012 07:31:06
    Control 0 Attempt to start background Managed Standby Recovery process 20-MAR-2012 07:31:32
    Control 0 MRP0: Background Managed Standby Recovery process started 20-MAR-2012 07:31:32
    Informational 0 Managed Standby Recovery not using Real Time Apply 20-MAR-2012 07:31:37
Informational 0 Media Recovery Log /oracle/STDBY/arch/1_3_777567883.dbf 20-MAR-2012 07:31:38
    Control 0 MRP0: Media Recovery Complete 20-MAR-2012 07:31:54
    Control 0 MRP0: Background Media Recovery process shutdown 20-MAR-2012 07:31:55
    36 rows selected.
    SQL> archive log list;
    Database log mode Archive Mode
    Automatic archival Enabled
    Archive destination /oracle/STDBY/arch/
    Oldest online log sequence 13
    Next log sequence to archive 0
    Current log sequence 14
    SQL> ho ls -ltra /oracle/STDBY/arch/
    total 3754456
    drwxr-xr-x 4 oracle dba 4096 Feb 13 17:38 ..
    -rw-r----- 1 oracle dba 908516864 Mar 20 06:37 1_8_777567883.dbf
    -rw-r----- 1 oracle dba 770419200 Mar 20 06:40 1_3_777567883.dbf
    -rw-r----- 1 oracle dba 757698048 Mar 20 06:41 1_4_777567883.dbf
    -rw-r----- 1 oracle dba 5171712 Mar 20 06:41 1_5_777567883.dbf
    -rw-r----- 1 oracle dba 1060801024 Mar 20 06:43 1_6_777567883.dbf
    -rw-r----- 1 oracle dba 323025920 Mar 20 06:43 1_7_777567883.dbf
    -rw-r----- 1 oracle dba 1558016 Mar 20 06:43 1_9_777567883.dbf
    -rw-r----- 1 oracle dba 4608 Mar 20 06:43 1_10_777567883.dbf
    -rw-r----- 1 oracle dba 1579008 Mar 20 06:52 1_11_777567883.dbf
    -rw-r----- 1 oracle dba 11876864 Mar 20 07:31 1_12_777567883.dbf
    -rw-r----- 1 oracle dba 2560 Mar 20 07:31 1_13_777567883.dbf
    drwxr-xr-x 2 oracle dba 36864 Mar 20 07:31 .
    SQL>
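One concrete item the log does flag is the "RFS: No standby redo logfiles created" warning. A hedged sketch of checking for a gap and adding standby redo logs on the standby (the file paths, group numbers and 512M size are placeholders; match the size of your online redo logs and add one more group per thread than the primary has):

SQL> -- is the standby aware of a gap?
SQL> select * from v$archive_gap;

SQL> -- add standby redo logs (names/sizes are illustrative only)
SQL> alter database add standby logfile group 10 ('/oracle/STDBY/srl10.log') size 512m;
SQL> alter database add standby logfile group 11 ('/oracle/STDBY/srl11.log') size 512m;
SQL> alter database add standby logfile group 12 ('/oracle/STDBY/srl12.log') size 512m;

SQL> -- restart redo apply
SQL> alter database recover managed standby database cancel;
SQL> alter database recover managed standby database using current logfile disconnect from session;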

Archives are not transporting to the standby database

    Hi Experts,
    DB version: 10.2.0.4
    OS Version: Windows 2003
Here I have one issue: archives are not being transferred from the primary to the standby database.
I have found no errors on either the primary or the standby database. I can ping both the primary and the standby from both locations, but I am unable to trace the issue.
Please treat this as high importance. Can anyone help? Your help is appreciated.
    Completed: alter database recover managed standby database cancel
    Fri Jan 28 22:36:49 2011
    alter database recover managed standby database disconnect from session
    MRP0 started with pid=24, OS id=2164
Managed Standby Recovery not using Real Time Apply
Media Recovery Waiting for thread 1 sequence 84857
    Fri Jan 28 22:36:55 2011
    Completed: alter database recover managed standby database disconnect from session

Post the values of the following parameters from your standby database: fal_client and fal_server.
SQL> show parameter fal
NAME        TYPE    VALUE
fal_client  string  STBY
fal_server  string  SYS
Also post the last 100 lines from the alert log of your production database.
    Thread 1 advanced to log sequence 84895
    Current log# 1 seq# 84895 mem# 0: E:\ORACLE\PRODUCT\10.2.0\ORADATA\SYS\REDO01.LOG
    Current log# 1 seq# 84895 mem# 1: F:\ORA_DATA\REDO01A.LOG
    Fri Jan 28 22:00:10 2011
    Thread 1 advanced to log sequence 84896
    Current log# 3 seq# 84896 mem# 0: E:\ORACLE\PRODUCT\10.2.0\ORADATA\SYS\REDO03.LOG
    Current log# 3 seq# 84896 mem# 1: F:\ORA_DATA\REDO03A.LOG
    Thread 1 advanced to log sequence 84897
    Current log# 2 seq# 84897 mem# 0: E:\ORACLE\PRODUCT\10.2.0\ORADATA\SYS\REDO02.LOG
    Current log# 2 seq# 84897 mem# 1: F:\ORA_DATA\REDO02A.LOG
    Fri Jan 28 22:00:34 2011
    Thread 1 advanced to log sequence 84898
    Current log# 1 seq# 84898 mem# 0: E:\ORACLE\PRODUCT\10.2.0\ORADATA\SYS\REDO01.LOG
    Current log# 1 seq# 84898 mem# 1: F:\ORA_DATA\REDO01A.LOG
    Fri Jan 28 22:02:13 2011
    Thread 1 advanced to log sequence 84899
    Current log# 3 seq# 84899 mem# 0: E:\ORACLE\PRODUCT\10.2.0\ORADATA\SYS\REDO03.LOG
    Current log# 3 seq# 84899 mem# 1: F:\ORA_DATA\REDO03A.LOG
    Fri Jan 28 23:37:21 2011
    ALTER SYSTEM SET log_archive_dest_state_2='enable' SCOPE=BOTH;
    Fri Jan 28 23:46:36 2011
    ALTER SYSTEM SET log_archive_max_processes=12 SCOPE=BOTH;
    Sat Jan 29 00:01:32 2011
    alter database add standby logfile thread 1
    group 4 size 50m,
    group 5 size 50m
    Sat Jan 29 00:01:35 2011
    Starting control autobackup
    Sat Jan 29 00:01:39 2011
    Errors in file e:\oracle\product\10.2.0\admin\sys\udump\sys_ora_5056.trc:
    Sat Jan 29 00:01:39 2011
    Errors in file e:\oracle\product\10.2.0\admin\sys\udump\sys_ora_5056.trc:
    Sat Jan 29 00:01:39 2011
    Errors in file e:\oracle\product\10.2.0\admin\sys\udump\sys_ora_5056.trc:
    Control autobackup written to DISK device
         handle 'J:\ORACLE\RECO\SYS\AUTOBACKUP\2011_01_29\O1_MF_S_741657695_6N77SKJM_.BKP'
    Sat Jan 29 00:09:32 2011
    Thread 1 advanced to log sequence 84900
    Current log# 2 seq# 84900 mem# 0: E:\ORACLE\PRODUCT\10.2.0\ORADATA\SYS\REDO02.LOG
    Current log# 2 seq# 84900 mem# 1: F:\ORA_DATA\REDO02A.LOG
    Sat Jan 29 00:10:04 2011
    Thread 1 advanced to log sequence 84901
    Current log# 1 seq# 84901 mem# 0: E:\ORACLE\PRODUCT\10.2.0\ORADATA\SYS\REDO01.LOG
    Current log# 1 seq# 84901 mem# 1: F:\ORA_DATA\REDO01A.LOG
    Thread 1 cannot allocate new log, sequence 84902
    Checkpoint not complete
    Current log# 1 seq# 84901 mem# 0: E:\ORACLE\PRODUCT\10.2.0\ORADATA\SYS\REDO01.LOG
    Current log# 1 seq# 84901 mem# 1: F:\ORA_DATA\REDO01A.LOG
    Thread 1 advanced to log sequence 84902
    Current log# 3 seq# 84902 mem# 0: E:\ORACLE\PRODUCT\10.2.0\ORADATA\SYS\REDO03.LOG
    Current log# 3 seq# 84902 mem# 1: F:\ORA_DATA\REDO03A.LOG
    Thread 1 cannot allocate new log, sequence 84903
    Checkpoint not complete
    Current log# 3 seq# 84902 mem# 0: E:\ORACLE\PRODUCT\10.2.0\ORADATA\SYS\REDO03.LOG
    Current log# 3 seq# 84902 mem# 1: F:\ORA_DATA\REDO03A.LOG
    Sat Jan 29 00:10:18 2011
    Thread 1 advanced to log sequence 84903
    Current log# 2 seq# 84903 mem# 0: E:\ORACLE\PRODUCT\10.2.0\ORADATA\SYS\REDO02.LOG
    Current log# 2 seq# 84903 mem# 1: F:\ORA_DATA\REDO02A.LOG
    Thread 1 advanced to log sequence 84904
    Current log# 1 seq# 84904 mem# 0: E:\ORACLE\PRODUCT\10.2.0\ORADATA\SYS\REDO01.LOG
    Current log# 1 seq# 84904 mem# 1: F:\ORA_DATA\REDO01A.LOG
    Thread 1 cannot allocate new log, sequence 84905
    Checkpoint not complete
    Current log# 1 seq# 84904 mem# 0: E:\ORACLE\PRODUCT\10.2.0\ORADATA\SYS\REDO01.LOG
    Current log# 1 seq# 84904 mem# 1: F:\ORA_DATA\REDO01A.LOG
    Thread 1 advanced to log sequence 84905
    Current log# 3 seq# 84905 mem# 0: E:\ORACLE\PRODUCT\10.2.0\ORADATA\SYS\REDO03.LOG
    Current log# 3 seq# 84905 mem# 1: F:\ORA_DATA\REDO03A.LOG
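For a case like this (no visible errors, but nothing shipping), a hedged sketch of the checks worth running on the primary; dest_id 2 is an assumption based on the log_archive_dest_state_2 command visible in the alert log above:

SQL> -- status and last error of the remote destination
SQL> select dest_id, status, error from v$archive_dest where dest_id = 2;

SQL> -- confirm the destination definition and its state
SQL> select name, value from v$parameter
  2  where name in ('log_archive_dest_2', 'log_archive_dest_state_2');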

Physical standby archive log gap

An archive log gap occurred. The reason: before the logs could be shipped to the standby location, they were deleted by an RMAN backup. So I restored the archives on the primary database site again. These old logs from the gap are not getting shipped to the standby site, but the new ones currently being generated are getting shipped.
Can someone tell me what action I have to take to resolve the gap? And how do I find out what is causing this and preventing the shipping?
Or shall I manually ship these gap archive logs to the standby site?

1) Yep, running 9i, but still it's not shipping... Are the FAL_CLIENT and FAL_SERVER parameters defined at the standby level?
If not, define them at the standby level. Those parameters help fetch missing (gap) archives from the primary database.
2) If shipped manually, do I have to register the archive logs? Just copy them from the primary to the standby; you don't need to register anything for the gap. That was needed in 8i, when there was no background MRP (managed recovery) process. If the standby database is in automatic media recovery, it will automatically apply all the archived logs.
    Jaffar
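A hedged sketch of both suggestions above (the service names and file path are placeholders, not values from this thread; SCOPE=BOTH assumes an spfile is in use):

SQL> -- on the standby: let FAL fetch the gap archives automatically
SQL> alter system set fal_server = 'PRIM_TNS' scope=both;
SQL> alter system set fal_client = 'STBY_TNS' scope=both;

SQL> -- or, after copying the gap archives over manually, register them
SQL> -- so managed recovery can find them (the path is a placeholder)
SQL> alter database register logfile '/arch/standby/1_12345_123456789.arc';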

  • Online Redolog Archiving on Logical Standby Databases!!

    Hi,
Can I safely stop archiving on a logical standby database? It was inherited from the primary database at the time of creation, but I don't see any reason to keep it enabled while it consumes unnecessary disk space on our server.
Can anybody give a valid reason to keep it enabled?
    Thanks

Hi,
You don't need archiving on the standby database, as the standby reads the archive files that come from the primary database; it's simply a waste of resources. The standby database was created from the primary by taking a full backup, and to apply the archive files in case of failure you need at least one backup of the standby database, and I assume you won't be taking backups of the standby database.
Just worry about the archives of the primary database.
    kanchan
    OCP ( 9i,10g)

  • How to apply archive in Standby Database?

    Hello,
My databases are running on a Linux platform. I am seeing that the archives being generated are not copied to the standby server and not applied.
Can anybody suggest how to copy archives from the primary database (ASM file system) to the standby database and apply them?
    Thanks

    Hi,
I am having a similar problem. The primary logs are shipping to the standby but are not getting applied.
    Here are the outputs:
    From primary
    SQL > select thread#,max(sequence#) from v$archived_log where archived='YES' group by thread#;
    THREAD# MAX(SEQUENCE#)
    1 27908
    2 28476
    3 31643
    select max(sequence#) from gv$archived_log;
    MAX(SEQUENCE#)
    31643
    From Standby
    SQL > select thread#,max(sequence#) from v$archived_log where applied ='YES' group by thread#;
    THREAD# MAX(SEQUENCE#)
    1 26862
    2 27580
    3 30874
    select max(sequence#) from v$archived_log;
    MAX(SEQUENCE#)
    31643
    Any help is appreciated.
    Thanks in advance.
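For the "shipping but not applying" symptom, a hedged sketch of what to check on a physical standby before resorting to an instance restart (standard views and commands, nothing specific to this poster's setup is assumed):

SQL> -- is MRP running, and on which sequence?
SQL> select process, status, thread#, sequence# from v$managed_standby;

SQL> -- if MRP0 is absent, start redo apply
SQL> alter database recover managed standby database disconnect from session;

SQL> -- is there a known gap?
SQL> select thread#, low_sequence#, high_sequence# from v$archive_gap;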

  • Primary and standby databases

    Hi all,
I have a project that consists of creating a standby database that receives the archives from the primary database. The problem is that I need two machines, which is not the case for me, so I am asking if I can place the primary database on my machine and the standby database in a virtual machine (a VM installed on the same machine as the primary database). This is a network project, so I need something that includes network aspects.
    thanks for help
    best regards Rashid

REDO LOG wrote: (the original question, quoted above)
Do it between two virtual machines.

  • Need archives earlier than previous backup.

    Hi Experts,
Here I want to share a situation and understand the reason behind it.
DB Version: 10.2.0.4
OS Version: Linux
I'm preparing a standby database from a RAC database (PRIMARY) ----> (STANDBY) on ASM.
I took a backup of the database on 25-DEC-2010,
took a standby controlfile backup on 26-DEC-2010,
and restored the database on 28-DEC-2010; I found some archive gaps after starting the MRP process.
I have an archive retention of 6 days, so as per retention I have all the archives on the primary.
But when I check the alert log file, it is requesting an archive which was generated on *01-DEC-2010*. I have checked the completion_time of the archive in v$archived_log.
I got some basic information. Can anyone post your views...
    Thanks

    STANDBY:-
    ~~~~
    SQL> select min(checkpoint_change#) from v$datafile;
    MIN(CHECKPOINT_CHANGE#)
    6053066844468
    PRIMARY:-
    ~~~~~~
    SQL> select first_change#,next_change# from v$archived_log where sequence# between 8998 and 9008;
    FIRST_CHANGE# NEXT_CHANGE#
    6052728491241 6052728594833
    6052728594833 6052728720598
    6052728720598 6052728880838
    6052728880838 6052729025406
    6052729025406 6052729207089
    6052729207089 6052729339202
    6052729339202 6052729509994
    6052729509994 6052732075048
    6052732075048 6052751377975
    6052751377975 6052763669833
    6052763669833 6052767026703
    FIRST_CHANGE# NEXT_CHANGE#
    6052840910461 6052847501485
    6052847501485 6052857374219
    6052857374219 6052857920410
    6052857920410 6052858390970
    6052858390970 6052898901735
    6052898901735 6052899018444
    6052899018444 6052906511296
    6052906511296 6052926911168
    6052926911168 6052947295154
    6052947295154 6052947546651
    6052947546651 6052949938434
    FIRST_CHANGE# NEXT_CHANGE#
    6052947546651 6052949938434
    6052947546651 6052949938434
    24 rows selected.
    SQL>
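A hedged note on why such an old archive can be requested: recovery of the restored copy has to start from the lowest datafile checkpoint SCN, and the archive whose SCN range covers that value may be much older than the backup date (for example if one datafile header is older than expected; this is not confirmed in the thread). A sketch for mapping the SCN to the needed sequence, using only standard views:

SQL> -- on the standby: the SCN recovery must start from
SQL> select min(checkpoint_change#) from v$datafile_header;

SQL> -- on the primary: which archived log covers that SCN
SQL> -- (replace the literal with the value returned above; 6053066844468 is from this thread)
SQL> select thread#, sequence#, first_change#, next_change#
  2  from v$archived_log
  3  where 6053066844468 between first_change# and next_change#;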

  • UCM - How to setup synchronize between DR and Primary site

    Hi all.
As mentioned in the title, we have a primary UCM site and a clean DR site. I want to ensure that end users have the ability to work with the DR site for a short time when the primary site is unavailable. To make the DR site available to serve when the primary is down, we can do:
    - setup auto-export archive on primary site
    - target to destination archive on DR site
    - auto-transfer from primary to DR site
    - with data in Database, we can use Golden Gate to sync Primary and DR site
So, with these settings, I can ensure that the DR site is ready to run when the primary is down. But if the primary takes a long time to recover, the DR site will have accumulated a lot of new content. How do I transfer it back to the primary site when the primary comes back? In other words, how do I synchronize content (vault and native files) between the new primary (old DR) and the new DR (old primary) site?
    Thank for your attention.
    Sorry for my bad English.
    Cuong Pham

    Hi Cuong (and guys),
    I'm afraid the issue is not that simple. In fact, I think that the Archiver could be used for DR only by customers who have little data and few changes. Why do I think so?
    a) (Understanding System Migration and Archiving - 11g Release 1 (11.1.1)) "Archiver: A Java applet for transferring and reorganizing Content Server files and information." This means that you will use a Java applet to Export and Import your data. With a lot of items (you will need to transfer all the new and updated items!), or large items it will take time (your DR site will always be "few minutes late"). Besides, the Archiver transfers are based on batches - I don't think you can do continuous archiving - and will have impacts on the performance.
    b) Furthermore, (Exporting Data in Archives - 11g Release 1 (11.1.1)) "You can export revisions that are in the following status: RELEASED, DONE, EXPIRED, and GENWWW. You cannot export revisions that are in an active workflow (REVIEW, EDIT, or PENDING status) or that are DELETED." This means that the Archiver cannot be used for all your items.
    Therefore, together with FMW DR Guide (Recommendations for Fusion Middleware Components) I think other techniques should be considered:
    - Real Application Clusters (RAC), Weblogic Clustering, cluster-ware file system: the first, almost error-free, and relatively cheap option is having your DR site as other nodes in DB and MW clusters. If any of your node goes down, the other(s) will still serve your customers (no extra work needed), plus, you can benefit from "united power" of multiple nodes. RAC is available also in Oracle DB Standard Edition (for max. 2-nodes db cluster). The only disadvantage of this configuration is that it is not available for geo-clustering (distance between RAC nodes must be max. some hundreds meters), so it does not cover DR scenarios like "location goes down" (e.g. due to networking issues)
    - Data Guard and distributed file system: the option mentioned in the guide is actually this one. It is based on Data Guard, a free option of the Oracle Database Enterprise Edition, which can run in both asynchronous (a committed transaction on the primary site is immediately transferred to the DR site) and synchronous (a transaction is not committed on the primary until processed by the DR site - if sites are far, or a lot of data is being sent, this can take quite long) modes. So, if you store your content in the database the Data Guard can resolve a lot. Unfortunately, not everything - the guide also mentions that some artifacts (that change!) are also stored on the file system (again, workflow updates, etc), so you have to use file system sync techniques to send those updates. In theory, you could use file system to send also updates in the database, which is nothing but a file(s) (in this case you will need the Partitioning option to split your database into smaller files), but db guys hate this way since it transfers also inconsistencies, so you could end up with an inconsistent database in the DR site, too.
This option will require some administrative tasks: you will have to resolve inconsistencies resulting from the DG/file-system sync, redirect your users to the DR site, and reconfigure DG so that your DR site becomes the new primary. Note that once your original primary site is up again, you can use DG to transfer (again, immediately) the changes made in the meantime.
    As you can see, there is no absolute solution, so you need to evaluate your options, esp. with regards to your needs.
    Jiri

Logical standby stuck at initializing SQL Apply, only coordinator process up

    Hi
    OS: solaris 5.10
    Hardware: sun sparc
    Oracle database: 11.2.0.1.0
    Primary database name: asadmin
    Standby database name: test
    I had been trying to convert a physical standby to logical standby database. Both the primary and standby reside on the same machine.
    The physical standby was created with a hot backup of primary.
    I had been following document id 278371.1 to convert the physical to logical standby and used the following steps:
    Relevant init parameters on primary:
    *.db_name='asadmin'
    *.db_unique_name='asadmin'
    *.log_archive_config='dg_config=(asadmin,test)'
    *.log_archive_dest_1='location=/u01/asadmin/archive valid_for=(all_logfiles,all_roles) db_unique_name=asadmin'
    *.log_archive_dest_2='SERVICE=test async valid_for=(online_logfiles,primary_role) db_unique_name=test'
    *.log_archive_dest_state_1='enable'
    *.log_archive_dest_state_2='enable'
    *.fal_client='asadmin'
    *.fal_server='test'
    *.remote_login_passwordfile='EXCLUSIVE'
    Relevant init parameters on standby database:
    *.db_name='test' -- Was asadmin before I renamed the DB during conversion to logical standby
    *.db_unique_name='test'
    *.log_archive_dest_1='location=/u01/test/archive valid_for=(all_logfiles,all_roles) db_unique_name=test'
    *.log_archive_dest_2='service=asadmin async valid_for=(online_logfiles,primary_role) db_unique_name=asadmin'
    *.log_archive_dest_state_1=enable
    *.log_archive_dest_state_2=defer
    *.remote_login_passwordfile='EXCLUSIVE'*.fal_server=test
    *.fal_client=asadmin
    Steps on primary:
    1) alter system set log_archive_dest_state_2=defer;
    2) shutdown immediate;
    3) Made sure that the physical standby has applied all of the redo sent to it following the shutdown.
    4) startup mount;
    5) ALTER DATABASE BACKUP CONTROLFILE to '/home/oracle/control01.ctl';
    6) ALTER SYSTEM ENABLE RESTRICTED SESSION;
    7) ALTER DATABASE OPEN;
    8) Verified that the supplemental logging is on.
    9) ALTER SYSTEM ARCHIVE LOG CURRENT;
    10) Checked for the checkpoint change no. at this point which is 72403818 and is present in archive log file 1_62_775102253.dbf
    11) EXECUTE DBMS_LOGSTDBY.BUILD;
    12) ALTER SYSTEM ARCHIVE LOG CURRENT;
    13) Checked for the archive log containing dictionary build which is 1_64_775102253.dbf
    14) ALTER SYSTEM DISABLE RESTRICTED SESSION;
    Details of archive logs and related checkpoint change nos:
    NAME FIRST_CHANGE# NEXT_CHANGE#
    /u01/asadmin/archive/1_61_775102253.dbf 72402901 72403817
    /u01/asadmin/archive/1_62_775102253.dbf 72403817 72404069
    /u01/asadmin/archive/1_63_775102253.dbf 72404069 72404211
    /u01/asadmin/archive/1_64_775102253.dbf 72404211 72405700
    Steps on standby:
    1) shutdown immediate;
2) Copy the archivelog files 61 (created at the primary after apply stopped at the standby), 62 (contains checkpoint no. 72403818), 63, and 64 (contains the dictionary build). Copy the backup controlfile from step 5 above to the controlfile location named in the standby init file.
    3) startup mount;
    4) Rename all datafiles and redo log files (including standby redo log files) to the correct path on standby.
    5) alter database recover automatic from '/u01/test/archive' until change 72405700 using backup controlfile; -- This completed error-free
    6) alter database guard all; -- this completed error free
    7) alter database open resetlogs; -- this completed error free.
    8) nid target=sys/oracle12 dbname=test
    9) Changed the db_name in init file to new name test.
    10) Added a tempfile to temp tablespaces.
    11) ALTER DATABASE REGISTER LOGICAL LOGFILE '/u01/test/archive/1_61_775102253.dbf'; -- ORA-16225: Missing LogMiner session name for Streams
    12) ALTER DATABASE START LOGICAL STANDBY APPLY INITIAL 72405700; -- This completed error free.
    Also enabled the log_archive_dest_state_2 on primary.
    After this output from some views:
    SQL> SELECT SESSION_ID, STATE FROM V$LOGSTDBY_STATE;
    SESSION_ID STATE
    1 INITIALIZING
    SQL> SELECT SID, SERIAL#, SPID, TYPE FROM V$LOGSTDBY_PROCESS;
    SID SERIAL# SPID TYPE
    587 22 15476 COORDINATOR
    SELECT PERCENT_DONE, COMMAND
    FROM V$LOGMNR_DICTIONARY_LOAD
    WHERE SESSION_ID = (SELECT SESSION_ID FROM V$LOGSTDBY_STATE);
    PERCENT_DONE
    COMMAND
    0
    SQL> SELECT TYPE, HIGH_SCN, STATUS FROM V$LOGSTDBY;
    TYPE HIGH_SCN STATUS
    COORDINATOR ORA-16111: log mining and apply setting up
    SQL> SELECT APPLIED_SCN, NEWEST_SCN FROM DBA_LOGSTDBY_PROGRESS;
    APPLIED_SCN NEWEST_SCN
    72405700 72411501
    SELECT THREAD#, SEQUENCE#, FILE_NAME FROM DBA_LOGSTDBY_LOG L
    WHERE NEXT_CHANGE# NOT IN
    (SELECT FIRST_CHANGE# FROM DBA_LOGSTDBY_LOG WHERE L.THREAD# = THREAD#)
    ORDER BY THREAD#,SEQUENCE#;
    no rows selected
SQL> SELECT EVENT_TIME, STATUS, EVENT
  2  FROM DBA_LOGSTDBY_EVENTS
  3  ORDER BY EVENT_TIMESTAMP, COMMIT_SCN;
    EVENT_TIME STATUS EVENT
    14-FEB-12 02:00:50 ORA-16111: log mining and apply setting up
    14-FEB-12 02:00:50 Apply LWM 72405699, HWM 72405699, SCN 72405699
    14-FEB-12 02:20:11 ORA-16128: User initiated stop apply successfully
    completed
    14-FEB-12 02:20:39 ORA-16111: log mining and apply setting up
    14-FEB-12 02:20:39 Apply LWM 72405699, HWM 72405699, SCN 72405699
    14-FEB-12 02:54:15 ORA-16128: User initiated stop apply successfully
    completed
    14-FEB-12 02:57:38 ORA-16111: log mining and apply setting up
    EVENT_TIME STATUS EVENT
    14-FEB-12 02:57:38 Apply LWM 72405699, HWM 72405699, SCN 72405699
    14-FEB-12 03:01:36 ORA-16128: User initiated stop apply successfully
    completed
    14-FEB-12 03:13:44 ORA-16111: log mining and apply setting up
    14-FEB-12 03:13:44 Apply LWM 72405699, HWM 72405699, SCN 72405699
    14-FEB-12 04:32:23 ORA-16128: User initiated stop apply successfully
    completed
    14-FEB-12 04:34:17 ORA-16111: log mining and apply setting up
    14-FEB-12 04:34:17 Apply LWM 72405699, HWM 72405699, SCN 72405699
    EVENT_TIME STATUS EVENT
    14-FEB-12 04:36:16 ORA-16128: User initiated stop apply successfully
    completed
    14-FEB-12 04:36:21 ORA-16111: log mining and apply setting up
    14-FEB-12 04:36:21 Apply LWM 72405699, HWM 72405699, SCN 72405699
    14-FEB-12 05:15:22 ORA-16128: User initiated stop apply successfully
    completed
    14-FEB-12 05:15:29 ORA-16111: log mining and apply setting up
    14-FEB-12 05:15:29 Apply LWM 72405699, HWM 72405699, SCN 72405699
I also grepped for lsp and lcr processes and found that lsp is up, but I do not see any lcr.
The logs are getting transported to the archive destination on the standby whenever they are archived on the primary, but they are not getting applied on the standby.
Also, if the standby is down while a log is generated on the primary, the log is not automatically transported once the standby is back up, which means gap resolution is not working either.
I see the following in the alert log every time I try to restart log apply; everything seems to be stuck at initialization.
    ALTER DATABASE START LOGICAL STANDBY APPLY (test)
    with optional part
    IMMEDIATE
    Attempt to start background Logical Standby process
    Tue Feb 14 05:15:28 2012
    LSP0 started with pid=28, OS id=23391
    Completed: alter database start logical standby apply immediate
    LOGMINER: Parameters summary for session# = 1
    LOGMINER: Number of processes = 3, Transaction Chunk Size = 201
    LOGMINER: Memory Size = 30M, Checkpoint interval = 150M
    LOGMINER: SpillScn 0, ResetLogScn 0
    -- NOTHING AFTER THIS

    Hello;
    I noticed some of your parameters seem to be wrong.
    fal_client - This is Obsolete in 11.2
    You have db_name='test' on the Standby, it should be 'asadmin'
    fal_server=test is set like this on the standby, it should be 'asadmin'
    I might consider changing VALID_FOR to this :
VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES)
I would review section 4.2, Step-by-Step Instructions for Creating a Logical Standby Database, in Oracle document E10700-02.
    Document 278371.1 is showing its age in my humble opinion.
    -----Wait on this until you fix your parameters----------------------
    Try restarting the SQL Apply
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
I don't see the parameter MAX_SERVERS; try setting it to 8 times the number of cores.
    Use these statements to trouble shoot :
    SELECT NAME, VALUE, UNIT FROM V$DATAGUARD_STATS;
SELECT NAME, VALUE FROM V$LOGSTDBY_STATS WHERE NAME LIKE 'TRANSACTIONS%';
SELECT COUNT(1) AS IDLE_PREPARERS FROM V$LOGSTDBY_PROCESS
WHERE TYPE = 'PREPARER' AND STATUS_CODE = 16116;
Best Regards
    mseberg
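For completeness, a hedged sketch of how the MAX_SERVERS suggestion would be applied on the logical standby (the value 32 is purely illustrative):

SQL> -- stop SQL Apply before changing apply parameters
SQL> alter database stop logical standby apply;

SQL> -- set the apply parameter (the value is a placeholder)
SQL> execute dbms_logstdby.apply_set('MAX_SERVERS', 32);

SQL> -- restart SQL Apply with real-time apply
SQL> alter database start logical standby apply immediate;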

  • Logical Standby: WAITING FOR DICTIONARY LOGS status.

    Hi,
I have just configured a logical standby from a physical standby. I followed all the steps explained in section 4.2.1 of the Oracle Data Guard Concepts and Administration 10g Release 2 (10.2) documentation, but now my logical standby has been stuck in WAITING FOR DICTIONARY LOGS for more than 3 days. All archives from the primary DB have been replicated and registered; I don't understand what's wrong.
    SQL> SELECT * FROM V$LOGSTDBY_STATE;
PRIMARY_DBID SESSION_ID REALTIME_APPLY STATE
   144528764          1 Y              WAITING FOR DICTIONARY LOGS
    Fri Apr  8 18:05:47 2011
    RFS LogMiner: Registered logfile [/archive/dbp/581440892_1_0000157573.arc] to LogMiner session id [1]
    Fri Apr  8 18:10:55 2011
    RFS LogMiner: Client enabled and ready for notification
    Fri Apr  8 18:10:55 2011
    Primary database is in MAXIMUM PERFORMANCE mode
    RFS[3]: Successfully opened standby log 4: '/redo2a/dbp/redo4a_stb.log'
    Fri Apr  8 18:10:58 2011
    RFS LogMiner: Registered logfile [/archive/dbp/581440892_1_0000157574.arc] to LogMiner session id [1]
    Fri Apr  8 18:15:54 2011
    RFS LogMiner: Client enabled and ready for notification
    Fri Apr  8 18:15:54 2011
    Primary database is in MAXIMUM PERFORMANCE mode
    RFS[3]: Successfully opened standby log 3: '/redo1a/dbp/redo3a_stb.log'
    Fri Apr  8 18:15:57 2011
RFS LogMiner: Registered logfile [/archive/dbp/581440892_1_0000157575.arc] to LogMiner session id [1]
Thanks in advance.
    My Oracle version is: 10.2.0.4
    My Platform is: AIX 6.1
    Nataly.

In the standby alert log, no error messages are found.
    RFS LogMiner: Client enabled and ready for notification
    Mon Apr 11 18:09:58 2011
    Primary database is in MAXIMUM PERFORMANCE mode
    RFS[3]: Successfully opened standby log 4: '/redo2a/mefsf/redo4a_stb.log'
    Mon Apr 11 18:10:01 2011
    RFS LogMiner: Registered logfile [/archive/mefsf/581440892_1_0000157782.arc] to LogMiner session id [1]
    Mon Apr 11 18:23:16 2011
    RFS LogMiner: Client enabled and ready for notification
    Mon Apr 11 18:23:16 2011
    Primary database is in MAXIMUM PERFORMANCE mode
    RFS[3]: Successfully opened standby log 3: '/redo1a/mefsf/redo3a_stb.log'
    Mon Apr 11 18:23:18 2011
RFS LogMiner: Registered logfile [/archive/mefsf/581440892_1_0000157783.arc] to LogMiner session id [1]
(continues)
In the primary alert log, there is a common message:
    ORACLE Instance bdprod - Archival Error. Archiver continuing.
    Mon Apr 11 18:22:57 2011
    Errors in file /oracle/app/oracle/admin/bdprod/bdump/bdprod_arc4_2818414.trc:
    ORA-00308: cannot open archived log '/archive/bdprod/581440892_1_0000157545.arc'
    ORA-27037: unable to obtain file status
    IBM AIX RISC System/6000 Error: 2: No such file or directory
    Additional information: 3
    Mon Apr 11 18:22:57 2011
    FAL[server, ARC4]: FAL archive failed, see trace file.
    Mon Apr 11 18:22:57 2011
    Errors in file /oracle/app/oracle/admin/bdprod/bdump/bdprod_arc4_2818414.trc:
    ORA-16055: FAL request rejected
    ARCH: FAL archive failed. Archiver continuing
    Mon Apr 11 18:22:57 2011
    ORACLE Instance bdprod - Archival Error. Archiver continuing.
    Mon Apr 11 18:22:57 2011
    Errors in file /oracle/app/oracle/admin/bdprod/bdump/bdprod_arc1_7078038.trc:
    ORA-00308: cannot open archived log '/archive/bdprod/581440892_1_0000157546.arc'
    ORA-27037: unable to obtain file status
    IBM AIX RISC System/6000 Error: 2: No such file or directory
    Additional information: 3
    Mon Apr 11 18:22:57 2011
    FAL[server, ARC1]: FAL archive failed, see trace file.
    Mon Apr 11 18:22:57 2011
    Errors in file /oracle/app/oracle/admin/bdprod/bdump/bdprod_arc1_7078038.trc:
    ORA-16055: FAL request rejected
    ARCH: FAL archive failed. Archiver continuing
    Mon Apr 11 18:22:57 2011
    ORACLE Instance bdprod - Archival Error. Archiver continuing.
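The ORA-00308/ORA-16055 pattern above means the primary can no longer find the requested archives on disk, so the FAL requests fail. A hedged sketch of the usual remedies, assuming the missing sequences are still in an RMAN backup (the sequence numbers and file name come from the log excerpt; adjust paths to your environment):

RMAN> # on the primary: bring the requested archives back from backup
RMAN> restore archivelog sequence between 157545 and 157546 thread 1;

SQL> -- or, after copying a file to the logical standby manually,
SQL> -- register it with the LogMiner session there
SQL> alter database register logical logfile '/archive/dbp/581440892_1_0000157545.arc';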

  • I do not have a delete option for files or folders in Adobe Creative Cloud

    I do not have a delete option for files or folders in Adobe Creative Cloud
I'm looking and looking...
I'm stumped...
Four weeks now...
    kai

HOW TO DELETE FILES or FOLDERS or Assets from the Adobe Creative Cloud Browser/Web Portal. By: Kai Buskirk, rev: 130626
Adobe has now buried deleting or trashing unwanted items, files, or folders in the Archive section of your Creative Cloud browser/web portal.
Note!! - There is no longer a standard trash-can icon or simple Delete button... it's buried in the Archive section. But why, I ask?
    An archive is an accumulation of historical records, or the physical place they are located.[1] Archives contain primary source documents that have accumulated over the course of an individual or organization's lifetime, and are kept to show the function of that person or organization. Professional archivists and historians generally understand archives to be records that have been naturally and necessarily generated as a product of regular legal, commercial, administrative or social activities.
    In general, archives consist of records that have been selected for permanent or long-term preservation on grounds of their enduring cultural, historical, or evidentiary value. Archival records are normally unpublished and almost always unique, unlike books or magazines for which many identical copies exist. This means that archives (the places) are quite distinct from libraries with regard to their functions and organization, although archival collections can often be found within library buildings
    A person who works in archives is called an archivist. The study and practice of organizing, preserving, and providing access to information and materials in archives is called archival science. The physical place of storage is sometimes referred to as an archive repository.
To delete files, folders, or individual assets in the current incarnation of the Adobe Creative Cloud browser/web portal (rev: 130626):
1 - Check the box on the left to select the files or folders you would like deleted/trashed and move them to the ARCHIVE folder location inside your Adobe Creative Cloud browser/web portal. Then navigate to the ARCHIVE section.
2 - Once the files or folders you have checked are moved to the ARCHIVE folder location, you can select them for permanent deletion (trash).
PS: you can also restore them, if you so choose.
3 - In case you missed this step: after selecting/checking the files or folders in the ARCHIVE folder, there is a small triangle selector drop-down that will reveal the Permanently Delete option. Clicking that is the point of no return, I think, so do not be misled by the use of the term ARCHIVE. Deleting permanently is deleting!
4 - OK, done. Now you've got it.
Good luck and happy housecleaning.
Warmest blessings,
    Kai Buskirk

  • ORA-279 signalled during recovery of standby database

    Hi All,
I am preparing a standby database. After taking a hot backup and copying those datafiles to the standby, I
took a standby controlfile backup from the primary and
mounted the standby database using the standby controlfile:
startup nomount pfile='/u01/stand.ora'
alter database mount standby database;
Now, after copying all the archives generated during the backup from the primary to the standby box, I have started applying archives on the standby database using:
recover standby database;
    But I am getting the below warning on the recovery screen...
    ORA-00279: change 51667629050 generated at 07/02/2009 00:59:43 needed for
    thread 1
    ORA-00289: suggestion : /nodal-archive/archive/1_55118_652209172.arc
    ORA-00280: change 51667629050 for thread 1 is in sequence #55118
    ORA-00278: log file '/nodal-archive/archive/1_55117_652209172.arc' no longer
    needed for this recovery
    ORA-00279: change 51667703096 generated at 07/02/2009 01:06:04 needed for
    thread 1
    ORA-00289: suggestion : /nodal-archive/archive/1_55119_652209172.arc
    ORA-00280: change 51667703096 for thread 1 is in sequence #55119
    ORA-00278: log file '/nodal-archive/archive/1_55118_652209172.arc' no longer
    needed for this recovery
    ORA-00279: change 51667767649 generated at 07/02/2009 01:12:28 needed for
    thread 1
    ORA-00289: suggestion : /nodal-archive/archive/1_55120_652209172.arc
    ORA-00280: change 51667767649 for thread 1 is in sequence #55120
    ORA-00278: log file '/nodal-archive/archive/1_55119_652209172.arc' no longer
    needed for this recovery
    ORA-00279: change 51667831821 generated at 07/02/2009 01:19:40 needed for
    thread 1
    ORA-00289: suggestion : /nodal-archive/archive/1_55121_652209172.arc
    ORA-00280: change 51667831821 for thread 1 is in sequence #55121
    ORA-00278: log file '/nodal-archive/archive/1_55120_652209172.arc' no longer
    needed for this recovery
and I am getting the below warning in the alert log:
    Sun Jul 5 18:37:36 2009
    ALTER DATABASE RECOVER CONTINUE DEFAULT
    Sun Jul 5 18:37:36 2009
    Media Recovery Log /nodal-archive/archive/1_55256_652209172.arc
    Sun Jul 5 18:40:31 2009
    ORA-279 signalled during: ALTER DATABASE RECOVER CONTINUE DEFAULT ...
    Sun Jul 5 18:40:31 2009
    ALTER DATABASE RECOVER CONTINUE DEFAULT
    Sun Jul 5 18:40:31 2009
    Media Recovery Log /nodal-archive/archive/1_55257_652209172.arc
    Please suggest what to do now.............
    Thanks in Advance,
    Sukanta Paul.

    Hi Sukanta,
    I didn't really understand what the problem is...
    Let me explain, hope it will be clear.
You created a standby control file, and backed up the database (it doesn't matter in which order). Now you have a standby database that is ready to apply logs. It will be able to apply logs forever, until we stop it.
    The messages you see say the following:
ORA-00279: change 51667831821 generated at 07/02/2009 01:19:40 needed for thread 1 - this says that the recovery process is now at SCN 51667831821
ORA-00289: suggestion : /nodal-archive/archive/1_55121_652209172.arc - the information needed for the recovery is in this file (this is only the default name; you can choose another name if the file is called differently)
ORA-00280: change 51667831821 for thread 1 is in sequence #55121 - same here, the needed archive is sequence 55121
ORA-00278: log file '/nodal-archive/archive/1_55120_652209172.arc' no longer needed for this recovery - this message appears after Oracle has applied the log, and it means that this log is no longer needed
Now it will wait for the next log. Again, this process is endless; that is how it is designed, so that the standby database always stays in sync with the primary.
    Liron Amitzi
    Senior DBA consultant
    [www.dbsnaps.com]
    [www.orbiumsoftware.com]
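In short, ORA-279 during manual recovery is just the prompt for the next log, not an error. A hedged sketch of the two usual ways to let the apply run unattended (both assume the standby is mounted with the standby controlfile, as described above):

SQL> -- manual recovery: answer AUTO so suggested logs are applied without prompting
SQL> recover standby database;
ORA-00279: change ... needed for thread 1
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
AUTO

SQL> -- or cancel manual recovery and use managed recovery instead
SQL> alter database recover managed standby database disconnect from session;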

  • Same Sequence# with different applied value in v$archived_log

    Hi everyone,
    I have an issue with one of the dataguard servers here.
    Basically, looking at the v$managed_standby, it is still applying the latest archived log sequence.
    However when I checked for unapplied archived log from v$archived_log, I found at least 1 sequence# which was quite old (around a few days old) not applied.
    my query to check this is:
SELECT sequence# from v$archived_log where applied = 'NO';
result:
SEQUENCE#
    40154
    40546
With a different query I found an interesting result:
select sequence#, recid, stamp, status, applied from v$archived_log where sequence# in (40154, 40546);
result:
    SEQUENCE#      RECID      STAMP S APP
        40154       8093  777156019 D NO
        40154       8095  777156053 D YES
        40546       8486  777673729 D NO
    40546       8487  777673734 D YES
At the time I ran this query, the v$managed_standby output was as follows:
select process, status, sequence# from v$managed_standby;
result:
    PROCESS   STATUS        SEQUENCE#
    ARCH      CLOSING           40562
    ARCH      CLOSING           40557
    MRP0      APPLYING_LOG      40563
    RFS       IDLE                  0
RFS       IDLE              40563
A simple solution to get those unapplied archived logs applied is to restart the standby database instance.
    Another finding from the production database:
select recid, stamp, sequence#, creator, registrar, standby_dest from v$archived_log where sequence# in (40154, 40546);
result:
    RECID      STAMP  SEQUENCE# CREATOR REGISTR STA
    45446  777156011      40154 ARCH    ARCH    NO
    45447  777156017      40154 LGWR    LGWR    YES
    45450  777156051      40154 ARCH    ARCH    YES
    46231  777673709      40546 ARCH    ARCH    NO
    46232  777673728      40546 LGWR    LGWR    YES
46233  777673733      40546 ARCH    ARCH    YES
The question is, of course, why is this happening?
    Can this be prevented?
    Thank you,
    Adhika

    CKPT wrote:
    I have an issue with one of the dataguard servers here.
    Basically, looking at the v$managed_standby, it is still applying the latest archived log sequence.
    However when I checked for unapplied archived log from v$archived_log, I found at least 1 sequence# which was quite old (around a few days old) not applied.
    my query to check this is:
SELECT sequence# from v$archived_log where applied = 'NO';
result:
    SEQUENCE#
    40154
    40546
Whether the old archives are applied or not, it will keep transferring the archivelogs from the primary. Please confirm that you executed the above query on the standby, is that right?
Yes, I ran it from the standby database.
    CKPT wrote:
    with a different query I found an interesting result
select sequence#, recid, stamp, status, applied from v$archived_log where sequence# in (40154, 40546);
result:
    SEQUENCE#      RECID      STAMP S APP
    40154       8093  777156019 D NO
    40154       8095  777156053 D YES
    40546       8486  777673729 D NO
    40546       8487  777673734 D YES
How many remote/standby destinations do you have in your DR setup?
You might have executed this query on the primary. You should always use DEST_ID when executing it from the primary, or check on the standby database instead, because the APPLIED column has to be checked on the standby. As you can see above, the same sequence shows as applied for one destination and not applied for the other, so please select for dest_id 2 or query the standby.
I have only 1 standby destination.
The query above was executed on the standby database server as well, to show that the same sequence# has different values in the APPLIED column.
    CKPT wrote:
    The question is of course, why is this happening?
Can this be prevented?
What is your online redo log file size, and what is the average archive log file size?
This issue happens when there is a network glitch while transferring archives from the primary to the standby RFS; when the log is big enough, under such conditions the status can be WAIT_FOR_LOG.
BTW, which redo transport are you using? LGWR is recommended.
Is there any network problem?
Check for the exact problem either in the primary alert log file or with the queries below:
    SQL> select severity,error_code,message,to_char(timestamp,'DD-MON-YYYY HH24:MI:SS') from v$dataguard_status;
    SQL> select sequence#, to_char(completion_time,'DD-MON-YYYY HH24:MI:SS') from v$archived_log where sequence# in (40154, 40546);
HTH.
The redo log file size is 100MB, and so is the archive log file size.
    I'm using LGWR ASYNC
These are the results of the query, which I modified slightly:
query:
select severity,error_code,message,to_char(timestamp,'DD-MON-YYYY HH24:MI:SS') from v$dataguard_status where message like '%40154%' or message like '%40546%';
result:
    SEVERITY ERROR_CODE MESSAGE                                                                                              TIMESTAMP
    Warning           0 LNS: Standby redo logfile selected for thread 1 sequence 40546 for destination LOG_ARCHIVE_DEST_2    11-MAR-2012 20:28:44
Warning           0 ARC1: Standby redo logfile selected for thread 1 sequence 40546 for destination LOG_ARCHIVE_DEST_2   11-MAR-2012 20:28:49
The message for 40154 must have been purged.
It appears that sequence 40546 was sent twice, by LNS and ARC1.
    query:
select sequence#, registrar, creator, standby_dest, dest_id, to_char(completion_time,'DD-MON-YYYY HH24:MI:SS') completion_time from v$archived_log where sequence# in (40154, 40546);
result:
    SEQUENCE# REGISTR CREATOR STA    DEST_ID COMPLETION_TIME
        40154 ARCH    ARCH    NO           1 05-MAR-2012 20:40:11
        40154 LGWR    LGWR    YES          2 05-MAR-2012 20:40:17
        40154 ARCH    ARCH    YES          2 05-MAR-2012 20:40:51
        40546 ARCH    ARCH    NO           1 11-MAR-2012 20:28:29
        40546 LGWR    LGWR    YES          2 11-MAR-2012 20:28:48
    40546 ARCH    ARCH    YES          2 11-MAR-2012 20:28:53
This query shows that the primary database is actually sending the same sequence twice, with different completion times.
Why does the primary database have to send it twice?
    Thanks for replying,
    Adhika
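Summing up, the APPLIED flag only means something per destination. A hedged sketch of the destination-qualified checks (as in the thread, the assumption is that DEST_ID 2 is the standby destination):

SQL> -- on the primary: only rows describing the standby destination
SQL> select thread#, sequence#, applied, registrar, creator
  2  from v$archived_log
  3  where dest_id = 2 and standby_dest = 'YES'
  4  order by sequence#;

SQL> -- on the standby: the authoritative apply position
SQL> select thread#, max(sequence#) from v$archived_log where applied = 'YES' group by thread#;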
