Slow SQL Apply on Logical Standby Database in Oracle10g

Hi,
We are using an Oracle 10g logical standby database in our production farm, but whenever there is a bulk data load (5-6 GB) on the primary database, the logical standby seems to hang. It takes days to apply the 5-6 GB of data on the logical standby.
Can anybody give me some pointers on how I can make SQL Apply faster on the logical standby for bulk data loads?
Thanks
Amit

Hi there,
I have a similar problem. I did an insert of 700k rows into a table, and it took over 1.5 hours before the data showed up. Note that I increased MAX_SGA to 300M and MAX_SERVERS to 25, and it didn't help performance at all.
My version is 10.2.0.3 with patch 6081550.
APPLIED_SCN APPLIED_TIME RESTART_SCN RESTART_TIME LATEST_SCN LATEST_TIME MINING_SCN MINING_TIME
1015618 29-NOV-2007 18:28:51 1009600 29-NOV-2007 18:28:51 1017519 29-NOV-2007 19:54:07 1015656 29-NOV-2007 18:32:14
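For anyone hitting the same wall: the apply parameters mentioned above (MAX_SGA, MAX_SERVERS) are set through DBMS_LOGSTDBY.APPLY_SET. A minimal sketch of raising them on the logical standby (the values below are illustrative only, not recommendations, and in 10g most apply parameters can only be changed while SQL Apply is stopped):

-- Stop SQL Apply before changing apply parameters
ALTER DATABASE STOP LOGICAL STANDBY APPLY;

-- Give LogMiner more memory and more parallel servers (illustrative values)
EXECUTE DBMS_LOGSTDBY.APPLY_SET('MAX_SGA', 1024);
EXECUTE DBMS_LOGSTDBY.APPLY_SET('MAX_SERVERS', 30);
EXECUTE DBMS_LOGSTDBY.APPLY_SET('APPLY_SERVERS', 20);

-- Restart apply in real-time mode
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

The current settings can be checked afterwards in DBA_LOGSTDBY_PARAMETERS.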

Similar Messages

  • Real-time apply cascaded logical standby database

    Hi
    I have a primary database orcl
    Pysical standby database orcl_std
    Cascaded logical standby database orcl_tri which receives archivelogs from orcl_std
    Real time apply is enabled both in orcl_std (physical standby) and orcl_tri (logical standby)
    When I create a table in the primary orcl, I am unable to see it on orcl_tri (although real-time apply is enabled).
    However, when I switch logs on the primary, I can see the new table on orcl_tri.
    My question is: why is real-time apply not working in my scenario?
    orcl_std : ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION USING CURRENT LOGFILE;
    orcl_tri: ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
    Oracle 11.2.0.3.0

    Hi mseberg,
    Thanks for your reply.
    There is no load or network issue, as I've just created these databases for the experiment.
    I have the same output from standby and primary databases.
    SQL> select bytes/1024/1024 from  v$standby_log;
    BYTES/1024/1024
                 10
                 10
                 10
    I can see the output below in the standby alert log:
    Fri Nov 16 08:39:51 2012
    ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE
    ALTER DATABASE START LOGICAL STANDBY APPLY (orcl)
    with optional part
    IMMEDIATE
    Attempt to start background Logical Standby process
    Fri Nov 16 08:39:51 2012
    LSP0 started with pid=37, OS id=16141
    Completed: ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE
    LOGMINER: Parameters summary for session# = 1
    LOGMINER: Number of processes = 3, Transaction Chunk Size = 201
    LOGMINER: Memory Size = 30M, Checkpoint interval = 150M
    LOGMINER: SpillScn 1953318, ResetLogScn 995548
    LOGMINER: summary for session# = 1
    LOGMINER: StartScn: 0 (0x0000.00000000)
    LOGMINER: EndScn: 0 (0x0000.00000000)
    LOGMINER: HighConsumedScn: 1955287 (0x0000.001dd5d7)
    LOGMINER: session_flag: 0x1
    LOGMINER: Read buffers: 16
    Fri Nov 16 08:39:55 2012
    LOGMINER: session#=1 (Logical_Standby$), reader MS00 pid=30 OS id=16145 sid=49 started
    Fri Nov 16 08:39:55 2012
    LOGMINER: session#=1 (Logical_Standby$), builder MS01 pid=39 OS id=16149 sid=44 started
    Fri Nov 16 08:39:55 2012
    LOGMINER: session#=1 (Logical_Standby$), preparer MS02 pid=40 OS id=16153 sid=50 started
    LOGMINER: Turning ON Log Auto Delete
    LOGMINER: Begin mining logfile during commit scan for session 1 thread 1 sequence 202, +DATA/orcl_std/archivelog/2012_11_15/thread_1_seq_202.349.799450179
    LOGMINER: End mining logfiles during commit scan for session 1
    LOGMINER: Turning ON Log Auto Delete
    LOGMINER: Begin mining logfile for session 1 thread 1 sequence 202, +DATA/orcl_std/archivelog/2012_11_15/thread_1_seq_202.349.799450179
    LOGMINER: End   mining logfile for session 1 thread 1 sequence 202, +DATA/orcl_std/archivelog/2012_11_15/thread_1_seq_202.349.799450179
    Fri Nov 16 08:40:04 2012
    LOGSTDBY Analyzer process AS00 started with server id=0 pid=41 OS id=16162
    Fri Nov 16 08:40:05 2012
    LOGSTDBY Apply process AS03 started with server id=3 pid=45 OS id=16175
    Fri Nov 16 08:40:05 2012
    LOGSTDBY Apply process AS04 started with server id=4 pid=46 OS id=16179
    Fri Nov 16 08:40:05 2012
    LOGSTDBY Apply process AS01 started with server id=1 pid=42 OS id=16167
    Fri Nov 16 08:40:05 2012
    LOGSTDBY Apply process AS05 started with server id=5 pid=47 OS id=16183
    Fri Nov 16 08:40:05 2012
    LOGSTDBY Apply process AS02 started with server id=2 pid=44 OS id=16171
    Do you think real-time apply wasn't set up properly?
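    A quick sanity check worth running on orcl_tri (a minimal sketch against the standard views; nothing specific to this configuration is assumed):

    -- On the logical standby: REALTIME_APPLY should be 'Y' and STATE should
    -- normally cycle through APPLYING/IDLE when real-time apply is active
    SELECT REALTIME_APPLY, STATE FROM V$LOGSTDBY_STATE;

    -- Real-time apply needs standby redo logs that are actually receiving redo
    SELECT GROUP#, THREAD#, SEQUENCE#, STATUS FROM V$STANDBY_LOG;

    If the standby redo logs on orcl_tri never show an active entry, redo is only arriving as complete archived logs, which would match the behaviour described (changes become visible only after a log switch on the primary).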

  • Redo data not applied on logical standby database 10g

    After a network problem between the primary and the logical standby database, redo data is not being applied on the logical standby even though all the archived logs have been sent to it.
    Below is the output from v$archive_gap and DBA_LOGSTDBY_LOG:
    SQL> select * from v$archive_gap;
    no rows selected
    SQL> SELECT SEQUENCE#, FIRST_TIME, APPLIED
      2  FROM DBA_LOGSTDBY_LOG
      3  ORDER BY SEQUENCE#;
    SEQUENCE# FIRST_TIME APPLIED
    3937 24-FEB-10 01:48:23 CURRENT
    3938 24-FEB-10 10:31:22 NO
    3939 24-FEB-10 10:31:29 NO
    3940 24-FEB-10 10:31:31 NO
    3941 24-FEB-10 10:33:44 NO
    3942 24-FEB-10 11:54:17 NO
    3943 24-FEB-10 12:05:30 NO
    Any help?
    Thanks

    ORA-00600: internal error code, arguments: [krvxgirp], [], [], [], [], [], [], []
    LOGSTDBY Analyzer process P003 pid=48 OS id=8659 stopped
    Wed Feb 24 16:49:04 2010
    Errors in file /oracle/product/10.2.0/admin/umarket/bdump/oradb_lsp0_8651.trc:
    ORA-12801: error signaled in parallel query server P003
    ORA-00600: internal error code, arguments: [krvxgirp], [], [], [], [], [], [], []
    and below is a warning from oradb_lsp0_8651.trc: Warning: Apply error received: ORA-26714: User error encountered while applying. Clearing.
    Thanks
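    Before opening an SR it is worth capturing what SQL Apply itself recorded about the failure. A minimal sketch, assuming the standard 10g dictionary view:

    -- Most recent apply events first; the XID columns identify the failing transaction
    SELECT EVENT_TIME, XIDUSN, XIDSLT, XIDSQN, STATUS_CODE, STATUS
    FROM   DBA_LOGSTDBY_EVENTS
    ORDER  BY EVENT_TIME DESC;

    Oracle Support will typically want this output together with the lsp0 trace file when an ORA-600 from the apply engine is involved.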

  • When to use Real Time Apply for Logical standby..!!

    Hello All,
    I have been trying many ways to speed up archiving on the primary and improve SQL Apply on the logical standby, but we are still seeing about 45-50 minutes of delay between the primary and the logical standby.
    We want our transactions applied on the logical standby within a couple of minutes, which I guess won't be possible in async mode.
    That's why I am planning to implement real-time apply between the primary and the logical standby.
    Now, since our databases are far apart (the primary is in the US and the logical standby is in India), would real-time apply be recommended in such a scenario? And if implemented, would it affect primary database performance?
    Also, if there is packet loss or a network hiccup, will the primary retry and keep the logical standby in sync?
    Any help or suggestions would be great.
    Thanks.

    Yes, real-time apply is recommended in your scenario.
    However, due to the geographical distance between your primary and standby, I would suggest keeping the standby in its current mode: maximum performance with ASYNC redo transport. That will not affect the performance of the primary.
    As long as you set the FAL parameters, configure tnsnames properly, and ensure a proper archivelog deletion policy on the primary (so that logs are not deleted before they have shipped, if it comes to that), you shouldn't have any problem keeping the primary and standby in sync.
    Good Luck.
    Cheers.
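    For reference, the pieces mentioned above look roughly like the following (a sketch only; the TNS aliases prim_us and stby_in are hypothetical, and the RMAN policy shown is the 10.2-style one):

    -- On the standby: where to fetch missing archived logs from
    ALTER SYSTEM SET FAL_SERVER='prim_us' SCOPE=BOTH;
    ALTER SYSTEM SET FAL_CLIENT='stby_in' SCOPE=BOTH;

    -- On the primary, in RMAN: do not let backups delete logs a standby still needs
    CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;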

  • Sql Apply issue in logical standby database--(10.2.0.5.0) x86 platform

    Hi Friends,
    I am getting the following errors in the logical standby database during SQL Apply.
    After running ALTER DATABASE START LOGICAL STANDBY APPLY, the SQL Apply services start, but after a few seconds they stop automatically with the errors below.
    alter database start logical standby apply
    Tue May 17 06:42:00 2011
    No optional part
    Attempt to start background Logical Standby process
    LOGSTDBY Parameter: MAX_SERVERS = 20
    LOGSTDBY Parameter: MAX_SGA = 100
    LOGSTDBY Parameter: APPLY_SERVERS = 10
    LSP0 started with pid=30, OS id=4988
    Tue May 17 06:42:00 2011
    Completed: alter database start logical standby apply
    Tue May 17 06:42:00 2011
    LOGSTDBY status: ORA-16111: log mining and apply setting up
    Tue May 17 06:42:00 2011
    LOGMINER: Parameters summary for session# = 1
    LOGMINER: Number of processes = 4, Transaction Chunk Size = 201
    LOGMINER: Memory Size = 100M, Checkpoint interval = 500M
    Tue May 17 06:42:00 2011
    LOGMINER: krvxpsr summary for session# = 1
    LOGMINER: StartScn: 0 (0x0000.00000000)
    LOGMINER: EndScn: 0 (0x0000.00000000)
    LOGMINER: HighConsumedScn: 2660033 (0x0000.002896c1)
    LOGMINER: session_flag 0x1
    LOGMINER: session# = 1, preparer process P002 started with pid=35 OS id=4244
    LOGSTDBY Apply process P014 started with pid=47 OS id=5456
    LOGSTDBY Apply process P010 started with pid=43 OS id=6484
    LOGMINER: session# = 1, reader process P000 started with pid=33 OS id=4732
    Tue May 17 06:42:01 2011
    LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1417, X:\TANVI\ARCHIVE2\ARC01417_0748170313.001
    Tue May 17 06:42:01 2011
    LOGMINER: Turning ON Log Auto Delete
    Tue May 17 06:42:01 2011
    LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01417_0748170313.001
    Tue May 17 06:42:01 2011
    LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1418, X:\TANVI\ARCHIVE2\ARC01418_0748170313.001
    LOGSTDBY Apply process P008 started with pid=41 OS id=4740
    LOGSTDBY Apply process P013 started with pid=46 OS id=7864
    LOGSTDBY Apply process P006 started with pid=39 OS id=5500
    LOGMINER: session# = 1, builder process P001 started with pid=34 OS id=4796
    Tue May 17 06:42:02 2011
    LOGMINER: skipped redo. Thread 1, RBA 0x00058a.00000950.0010, nCV 6
    LOGMINER: op 4.1 (Control File)
    Tue May 17 06:42:02 2011
    LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01418_0748170313.001
    Tue May 17 06:42:03 2011
    LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1419, X:\TANVI\ARCHIVE2\ARC01419_0748170313.001
    Tue May 17 06:42:03 2011
    LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01419_0748170313.001
    Tue May 17 06:42:03 2011
    LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1420, X:\TANVI\ARCHIVE2\ARC01420_0748170313.001
    Tue May 17 06:42:03 2011
    LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01420_0748170313.001
    Tue May 17 06:42:03 2011
    LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1421, X:\TANVI\ARCHIVE2\ARC01421_0748170313.001
    LOGSTDBY Analyzer process P004 started with pid=37 OS id=5096
    Tue May 17 06:42:03 2011
    LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01421_0748170313.001
    LOGSTDBY Apply process P007 started with pid=40 OS id=2760
    Tue May 17 06:42:03 2011
    Errors in file x:\oracle\product\10.2.0\admin\tanvi\bdump\tanvi_p001_4796.trc:
    ORA-00600: internal error code, arguments: [krvxbpx20], [1], [1418], [2380], [16], [], [], []
    LOGSTDBY Apply process P012 started with pid=45 OS id=7152
    Tue May 17 06:42:03 2011
    LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1422, X:\TANVI\ARCHIVE2\ARC01422_0748170313.001
    Tue May 17 06:42:03 2011
    LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01422_0748170313.001
    Tue May 17 06:42:03 2011
    LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1423, X:\TANVI\ARCHIVE2\ARC01423_0748170313.001
    Tue May 17 06:42:03 2011
    LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01423_0748170313.001
    Tue May 17 06:42:03 2011
    LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1424, X:\TANVI\ARCHIVE2\ARC01424_0748170313.001
    LOGMINER: session# = 1, preparer process P003 started with pid=36 OS id=5468
    Tue May 17 06:42:03 2011
    LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01424_0748170313.001
    Tue May 17 06:42:04 2011
    LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1425, X:\TANVI\ARCHIVE2\ARC01425_0748170313.001
    LOGSTDBY Apply process P011 started with pid=44 OS id=6816
    LOGSTDBY Apply process P005 started with pid=38 OS id=5792
    LOGSTDBY Apply process P009 started with pid=42 OS id=752
    Tue May 17 06:42:05 2011
    krvxerpt: Errors detected in process 34, role builder.
    Tue May 17 06:42:05 2011
    krvxmrs: Leaving by exception: 600
    Tue May 17 06:42:05 2011
    Errors in file x:\oracle\product\10.2.0\admin\tanvi\bdump\tanvi_p001_4796.trc:
    ORA-00600: internal error code, arguments: [krvxbpx20], [1], [1418], [2380], [16], [], [], []
    LOGSTDBY status: ORA-00600: internal error code, arguments: [krvxbpx20], [1], [1418], [2380], [16], [], [], []
    Tue May 17 06:42:06 2011
    Errors in file x:\oracle\product\10.2.0\admin\tanvi\bdump\tanvi_lsp0_4988.trc:
    ORA-12801: error signaled in parallel query server P001
    ORA-00600: internal error code, arguments: [krvxbpx20], [1], [1418], [2380], [16], [], [], []
    Tue May 17 06:42:06 2011
    LogMiner process death detected
    Tue May 17 06:42:06 2011
    logminer process death detected, exiting logical standby
    LOGSTDBY Analyzer process P004 pid=37 OS id=5096 stopped
    LOGSTDBY Apply process P010 pid=43 OS id=6484 stopped
    LOGSTDBY Apply process P008 pid=41 OS id=4740 stopped
    LOGSTDBY Apply process P012 pid=45 OS id=7152 stopped
    LOGSTDBY Apply process P014 pid=47 OS id=5456 stopped
    LOGSTDBY Apply process P005 pid=38 OS id=5792 stopped
    LOGSTDBY Apply process P006 pid=39 OS id=5500 stopped
    LOGSTDBY Apply process P007 pid=40 OS id=2760 stopped
    LOGSTDBY Apply process P011 pid=44 OS id=6816 stopped
    Tue May 17 06:42:10 2011

    Errors in file x:\oracle\product\10.2.0\admin\tanvi\bdump\tanvi_p001_4796.trc:
    ORA-00600: internal error code, arguments: [krvxbpx20], [1], [1418], [2380], [16], [], [], []
    Submit an SR to Oracle Support.
    Refer to these as well:
    *ORA-600/ORA-7445 Error Look-up Tool [ID 153788.1]*
    *Bug 6022014: ORA-600 [KRVXBPX20] ON LOGICAL STANDBY*

  • How to monitor SQL Apply for 10.2.0.3 logical standby database

    We have a logical standby database set up for reporting purposes. Users want to monitor closely whether SQL Apply is working or has failed, as it has reporting repercussions.
    With 9i databases there was a "Data Not Applied (logs)" metric which we used for alerting and paging when a backlog of more than 5 log files developed.
    From 10.2.0.3 onwards, that metric no longer exists.
    I would like to learn from others how to monitor this setup, so that we get paged if a backlog develops in log shipping or apply.
    Regards.
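    One home-grown option, sketched below against the standard views (column names vary slightly across 10.2 patch levels, so check DBA_LOGSTDBY_PROGRESS on your release), is to poll the apply progress and the count of unapplied logs and page when either crosses a threshold:

    -- How far behind is SQL Apply?
    SELECT APPLIED_TIME,
           NEWEST_TIME,
           (NEWEST_SCN - APPLIED_SCN) AS scn_backlog
    FROM   DBA_LOGSTDBY_PROGRESS;

    -- Registered log files that have not been applied yet
    SELECT COUNT(*) AS logs_not_applied
    FROM   DBA_LOGSTDBY_LOG
    WHERE  APPLIED = 'NO';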

    Regather the statistics on the table with method_opt => 'for all columns size 1' (or 'for all indexed columns size 1', or whatever, as long as it is size 1).
    The 'size 1' directive will remove the histogram statistics.
    Sorry, I didn't read your post carefully. The article below (http://www.freelists.org/post/oracle-l/Any-quick-way-to-remove-histograms,13) removes histograms without re-analyzing the table. Hope that helps!
    On 3/16/07, Wolfgang Breitling <breitliw@xxxxxxxxxxxxx> wrote:
    I also did a quick check and just using
    exec dbms_stats.set_column_stats(user, 'table_name', colname => 'column_name', distcnt => <num_distinct>);
    will remove the histogram without removing the low_value and high_value.
    At 01:40 PM 3/16/2007, Alberto Dell'Era wrote:
    On 3/16/07, Allen, Brandon <Brandon.Allen@xxxxxxxxxxx> wrote:
    Is there any faster way to remove histograms other than re-analyzing
    the table? I want to keep the existing table, index & column stats,
    but with only 1 bucket (i.e. no histograms).
    You might try the attached script, that reads the stats using
    dbms_stats.get_column_stats and re-sets them, minus the histogram,
    using dbms_stats.set_column_stats.
    I haven't fully tested it - it's only 10 minutes old, even if I have
    slightly modified for you another script I've used for quite some
    time - and the spool on 10.2.0.3 seems to confirm that the histogram
    is, indeed, removed, while all the other statistics are preserved. I
    have also reset density to 1/num_distinct, that is the value you get
    if no histogram is collected.
    regards,
    naren

  • How to apply the changes in logical standby database

    Hi,
    I am new to Data Guard. I am using 10.2.0.3 and followed the steps from the Oracle Data Guard Concepts and Administration Guide to set up a logical standby database.
    When I insert a record into a table on the primary database and then query the same table on the logical standby database, it doesn't show the new record.
    Did I miss something? What I want is that when I insert a record in the primary DB, the corresponding record is inserted in the standby DB.
    Or have I totally misunderstood what Oracle Data Guard is? Any help is appreciated.
    Denis

    Hi
    Can anyone help me work out whether my logical standby DB has an archive gap?
    SQL> SELECT APPLIED_SCN, APPLIED_TIME, READ_SCN, READ_TIME, NEWEST_SCN, NEWEST_TIME
      2  FROM DBA_LOGSTDBY_PROGRESS;
    APPLIED_SCN APPLIED_TIME         READ_SCN READ_TIME            NEWEST_SCN NEWEST_TIME
         851821 29-JUL-08 17:58:29     851822 29-JUL-08 17:58:29      1551238 08-AUG-08 08:43:29
    SQL> select pid, type, status, high_scn from v$logstdby;
    no rows selected
    SQL> alter database start logical standby apply;
    Database altered.
    SQL> select pid, type, status, high_scn from v$logstdby;
    PID   TYPE         STATUS                                               HIGH_SCN
    2472  COORDINATOR  ORA-16116: no work available
    3380  READER       ORA-16127: stalled waiting for additional             852063
                       transactions to be applied
    2480  BUILDER      ORA-16116: no work available
    2492  ANALYZER     ORA-16111: log mining and apply setting up
    2496  APPLIER      ORA-16116: no work available
    2500  APPLIER      ORA-16116: no work available
    3700  APPLIER      ORA-16116: no work available
    940   APPLIER      ORA-16116: no work available
    2504  APPLIER      ORA-16116: no work available
    9 rows selected.
    Thanks a lot.
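    A quick way to check for a gap on the logical standby (a sketch using the standard view; a gap would show up as a jump in SEQUENCE# between consecutive registered logs, or as old logs stuck at APPLIED = 'NO'):

    SELECT THREAD#, SEQUENCE#, FIRST_TIME, APPLIED
    FROM   DBA_LOGSTDBY_LOG
    ORDER  BY THREAD#, SEQUENCE#;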

  • Apply Patches on Oracle Database with Logical Standby Database

    Here I am:
    I have a primary database with a logical standby database, both running Oracle 11g. I have two client applications: one is the production site pointing to the primary database, and the other is a backup site pointing to the logical standby. Data is only written into the primary database every midnight; client applications can only query the database, not insert, update or delete. Now I want to apply the latest patch on both of my databases. I am also the DNS administrator, so I can make the name server point to the backup site instead of the production one. I want to apply the patch on the logical standby first, and then on the primary.
    I found some references explaining how to apply patches using the "Rolling Upgrade Method"; however, I want to avoid the "switchover" mentioned there because I can make use of the name server instead. Can I just apply patches in the following way?
    1) Stop SQL Apply
    2) Apply patches on the logical standby database
    3) Point the name server to the backup site
    4) Apply patches on the primary database
    5) Start SQL Apply
    6) Point the name server back to the production site
    Thanks in advance.

    Please follow the steps in MOS Doc 437276.1 (Upgrading Oracle Database with a Logical Standby Database In Place).
    HTH
    Srini

  • Slow apply on logical standby

    Hi ,
    oracle 10.2.0.3 enterprise edition logical standby
    We performed heavy updates on our production database, due to which the logical standby has fallen many logs behind the primary, and the logs are being applied on the logical standby very slowly.
    Kindly suggest how to speed up the apply process on the logical standby.

    Santosh Pradhan wrote:
    Hi,
    oracle 10.2.0.3 enterprise edition logical standby
    We performed heavy updates on our production database, due to which the logical standby has fallen many logs behind the primary, and the logs are being applied on the logical standby very slowly.
    Kindly suggest how to speed up the apply process on the logical standby.
    I hope you are using the "ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;" command.
    Please check the note below on adjusting the number of APPLIER processes; also, if redo transport is slow, check the setting of "LOG_ARCHIVE_MAX_PROCESSES":
    http://docs.oracle.com/cd/B28359_01/server.111/b28294/manage_ls.htm#CHDBGBFC
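    The knob that note describes is again DBMS_LOGSTDBY.APPLY_SET. A minimal sketch (values are illustrative; size them to your CPU count and workload):

    -- On the logical standby: more apply servers for a large backlog
    ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    EXECUTE DBMS_LOGSTDBY.APPLY_SET('MAX_SERVERS', 30);
    EXECUTE DBMS_LOGSTDBY.APPLY_SET('APPLY_SERVERS', 20);
    ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

    -- On the primary, if archiving/shipping is the bottleneck
    ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES=4 SCOPE=BOTH;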

  • Logical Standby databases

    Hi,
    When we say "Logical standby databases are logically identical to primary databases although the physical organization and structure of the data can be different," what exactly does it mean?
    Does it mean that in a logical standby the tablespace names, schema names, table names, column names, etc. can be different while the data is still the same as on the primary?
    Does it mean that we can exclude indexes and constraints that are present on the primary?
    Only the data should match the primary, value by value?
    I am asking because I have never worked with a logical standby database but I seriously want to know.
    Please answer.
    Regards,
    SID

    Physical standby differs from logical standby:
    A physical standby's schema matches the source database exactly.
    Archived redo logs are shipped directly to the standby database, which is always running in "recover" mode. Upon arrival, the archived redo logs are applied directly to the standby database.
    Logical standby is different from physical standby:
    A logical standby database does not have to match the schema structure of the source database.
    Logical standby uses LogMiner techniques to transform the archived redo logs into native DML statements (insert, update, delete). This DML is transported and applied to the standby database.
    Tables maintained by the logical standby can be open for SQL queries (read only), while other standby tables can be open for updates.
    A logical standby database can have additional materialized views and indexes added for faster performance.
    Installing Physical standbys offers these benefits:
    An identical physical copy of the primary database
    Disaster recovery and high availability
    High data protection
    Reduction in primary database workload
    Faster performance
    Installing Logical standbys offer:
    Simultaneous use for reporting, summations and queries
    Efficient use of standby hardware resources
    Reduction in primary database workload
    Some limitations on the use of certain datatypes

  • Logical standby database problem

    I have set up a logical standby database on my PC. Everything was working fine and logs were being applied.
    But then I tried testing a few things on the standby and issued a few commands on it.
    After that, logs were no longer applied. I tried restarting the apply, but that also didn't work.
    What should I do so that things are back to normal, or do I need to create the standby again?
    Thanks

    The output is as follows .
    SQL> SELECT APPLIED_SCN, NEWEST_SCN, READ_SCN, NEWEST_SCN-APPLIED_SCN FROM DBA_LOGSTDBY_PROGRESS;
    APPLIED_SCN NEWEST_SCN READ_SCN NEWEST_SCN-APPLIED_SCN
    179493 194423 179497 14930
    SQL> SELECT TYPE, HIGH_SCN, STATUS FROM V$LOGSTDBY;
    no rows selected
    SQL> select operation, options, object_name, cost
    2 from v$sql_plan, v$session, v$logstdby
    3 where v$sql_plan.hash_value = v$session.sql_hash_value
    4 and v$session.serial# = v$logstdby.serial#
    5 and v$logstdby.status_code=16113;
    no rows selected
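    One more thing worth checking before rebuilding the standby (a sketch against the standard 10g views): what SQL Apply last reported, and whether the registered logs are actually marked as applied.

    -- Most recent apply events usually show why apply stalled (errors, skipped statements, etc.)
    SELECT EVENT_TIME, STATUS
    FROM   DBA_LOGSTDBY_EVENTS
    ORDER  BY EVENT_TIME DESC;

    -- Which registered logs have not been applied yet
    SELECT SEQUENCE#, FIRST_TIME, APPLIED
    FROM   DBA_LOGSTDBY_LOG
    WHERE  APPLIED != 'YES'
    ORDER  BY SEQUENCE#;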

  • MV Logs not getting purged in a Logical Standby Database

    We are trying to replicate a few tables from a logical standby database to another database. Both the source (the logical standby) and the target database are Oracle 11g R1.
    The materialized views are refreshed using FAST REFRESH.
    The materialized view logs created on the source (the logical standby database) are not getting purged when the MV in the target database is refreshed.
    We checked the entries in the following tables: SYS.SNAP$, SYS.SLOG$, SYS.MLOG$
    When a materialized view is created on the target database, a record is not inserted into the SYS.SLOG$ table, and that seems to be why the MV logs are not getting purged.
    Why are we using a logical standby database instead of the primary? Because the load on the primary database is too high and the machine doesn't have enough resources to support MV-based replication; CPU usage is at 95% all the time. The application owner won't allow us to go against the primary database.
    Do we have to do anything different in terms of configuration/privileges because we are using a logical standby database as the source?
    Thanks in Advance.

    We have an 11g RAC database on Solaris where there is a huge gap in archive log apply.
    Thread   Last Sequence Received   Last Sequence Applied   Difference
         1                   132581                  129916         2665
         2                   108253                  106229         2024
         3                   107452                  104975         2477
    The MRP0 process also seems not to be working. There is a lag of almost 7000+ archives on the standby compared with the primary database.
    I suggest you go with incremental roll-forward backups to bring it back in sync; use the link below for the step-by-step procedure.
    http://www.oracle-ckpt.com/rman-incremental-backups-to-roll-forward-a-physical-standby-database-2/
    A few questions:
    1) Have those archives been transported and just not applied?
    2) On production, do you have the archives or backups of the archives?
    3) What errors have you found in the alert log file?
    Post the output of:
    SQL> select severity, message, error_code, timestamp from v$dataguard_status where dest_id=2;
    4) What errors are in the primary database alert log file?
    Also post the output of:
    select ds.dest_id id,
           ad.status,
           ds.database_mode db_mode,
           ad.archiver type,
           ds.recovery_mode,
           ds.protection_mode,
           ds.standby_logfile_count "SRLs",
           ds.standby_logfile_active active,
           ds.archived_seq#
    from   v$archive_dest_status ds,
           v$archive_dest        ad
    where  ds.dest_id = ad.dest_id
    and    ad.status != 'INACTIVE'
    order by ds.dest_id
    /
    Also check for errors on the standby database.

  • Creating new tables in Logical Standby database

    Hi
    I have a requirement to create new tables in a logical standby database. These tables will not be present on the primary database. Is it possible to do this?
    I have already created a new schema which has the CREATE TABLE privilege.
    I have stopped logical standby apply.
    When I now try to create a new table, it fails with the error: insufficient privileges.
    When I run the statement below in the new schema, it also fails with insufficient privileges:
    alter session disable dataguard;
    Please help.
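    For what it's worth, the documented syntax for bypassing the guard at session level differs from the statement quoted above. A minimal sketch, assuming a suitably privileged account:

    -- Documented session-level bypass of the database guard
    ALTER SESSION DISABLE GUARD;
    -- ... create the table / run the DML ...
    ALTER SESSION ENABLE GUARD;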

    user8819121 wrote:
    Thanks Mahir,
    I was able to create the table after logging in as sysdba.
    But I need my user to be able to run DML statements on that table. My user has privileges to insert, delete and update any table.
    I tried the following statement to disable the guard but it is still not working:
    ALTER DATABASE GUARD STANDBY;
    Do I need to skip the tables created, using the dbms_logstdby package, so that they are not part of SQL Apply? I guess not, since the table is not in the primary database.
    Amit
    You can only skip on the primary, and the schema you created on the standby does not exist on the primary.
    So you must change the data guard status to NONE. NONE means no guard is enforced on your data at all.
    With guard status NONE, data in any schema can be changed.
    Please check link: http://docs.oracle.com/cd/E11882_01/server.112/e10700/manage_ls.htm#CHDGFGHG
    The tests below are against a user created before the guard status was changed from ALL to STANDBY.
    C:\Users\Administrator>sqlplus / as sysdba
    SQL> conn test/test
    Connected.
    SQL> select table_name from user_tables;
    TABLE_NAME
    T
    SQL> insert into t values(22);
    insert into t values(22)
    ERROR at line 1:
    ORA-16224: Database Guard is enabled
    SQL> conn / as sysdba
    Connected.
    SQL> select guard_Status from  v$database;
    GUARD_S
    ALL
    SQL> alter  database guard standby;
    Database altered.
    SQL> conn test/test
    Connected.
    SQL> insert into t values(1);
    insert into t values(1)
    ERROR at line 1:
    ORA-16224: Database Guard is enabled
    SQL> conn / as sysdba
    Connected.
    SQL> select guard_Status from  v$database;
    GUARD_S
    STANDBY
    SQL> alter  database guard none;
    Database altered.
    SQL> select guard_Status from  v$database;
    GUARD_S
    NONE
    SQL> conn test/test
    Connected.
    SQL> insert into t values(1);
    1 row created.
    SQL> commit;
    Commit complete.
    And now I want to share some new tests with you :)
    Now, creating the user after the guard status change:
    SQL> drop  user test cascade;
    User dropped.
    SQL> select guard_status from v$database;
    GUARD_S
    STANDBY
    SQL> create user test identified by test;
    User created.
    SQL> grant create session,  resource, create table to test;
    Grant succeeded.
    SQL> conn test/test
    Connected.
    SQL> create table t (n number);
    Table created.
    SQL> insert into t values(1);
    1 row created.
    SQL> commit;
    Commit complete.
    SQL>
    It means that when the guard status is ALL, every user-created schema is guarded.
    When you change the status to STANDBY, the logical standby guards only the schemas maintained from the primary and the schemas created before the change.
    NONE does not guard any schema; it means you can delete the standby schema's data too.
    Regards
    Mahir M. Quluzade

  • Add Datafile in Logical Standby Database

    Hi,
    I have added one datafile to our primary RAC DB. We have a logical standby database with standby file management set to manual. The primary RAC and the logical standby DB have different storage structures. When the archived log is applied on the logical standby database, it throws the error "error in creating datafile 'path'".
    I would appreciate knowing the steps to add a datafile in this kind of environment, and how I can recover from this problem now, other than skipping the transaction for that DDL.
    Thanks in advance.
    Dewan
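    Since the original poster already mentions skipping the transaction, here is a sketch of the usual recovery sequence for this situation (the tablespace name, datafile path and size below are hypothetical; substitute the standby's own locations):

    -- 1. See exactly which statement SQL Apply failed on
    SELECT EVENT_TIME, STATUS, EVENT
    FROM   DBA_LOGSTDBY_EVENTS
    ORDER  BY EVENT_TIME DESC;

    -- 2. Add the datafile manually on the standby, using the standby's path
    ALTER SESSION DISABLE GUARD;
    ALTER TABLESPACE users ADD DATAFILE '/u02/oradata/stby/users02.dbf' SIZE 1G;
    ALTER SESSION ENABLE GUARD;

    -- 3. Restart apply, skipping the DDL transaction that failed
    ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE SKIP FAILED TRANSACTION;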

    When the archived log is applied on the logical standby database, it throws the error "error in creating datafile 'path'"
    Can you post the full error message with the error number?
    From the manual:
    8.3.1.2 Adding a Tablespace and a Datafile When STANDBY_FILE_MANAGEMENT Is Set to MANUAL
    The following example shows the steps required to add a new datafile to the primary and standby database when the STANDBY_FILE_MANAGEMENT initialization parameter is set to MANUAL. You must set the STANDBY_FILE_MANAGEMENT initialization parameter to MANUAL when the standby datafiles reside on raw devices.
    Add a new tablespace to the primary database:
    SQL> CREATE TABLESPACE new_ts DATAFILE '/disk1/oracle/oradata/payroll/t_db2.dbf'
    2> SIZE 1m AUTOEXTEND ON MAXSIZE UNLIMITED;
    Verify the new datafile was added to the primary database:
    SQL> SELECT NAME FROM V$DATAFILE;
    NAME
    /disk1/oracle/oradata/payroll/t_db1.dbf
    /disk1/oracle/oradata/payroll/t_db2.dbf
    Perform the following steps to copy the tablespace to a remote standby location:
    Place the new tablespace offline:
    SQL> ALTER TABLESPACE new_ts OFFLINE;
    Copy the new tablespace to a local temporary location using an operating system utility copy command. Copying the files to a temporary location will reduce the amount of time the tablespace must remain offline. The following example copies the tablespace using the UNIX cp command:
    % cp /disk1/oracle/oradata/payroll/t_db2.dbf
    /disk1/oracle/oradata/payroll/s2t_db2.dbf
    Place the new tablespace back online:
    SQL> ALTER TABLESPACE new_ts ONLINE;
    Copy the local copy of the tablespace to a remote standby location using an operating system utility command. The following example uses the UNIX rcp command:
    %rcp /disk1/oracle/oradata/payroll/s2t_db2.dbf standby_location
    Archive the current online redo log file on the primary database so it will get transmitted to the standby database:
    SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
    Use the following query to make sure that Redo Apply is running. If the MRP or MRP0 process is returned, Redo Apply is being performed.
    SQL> SELECT PROCESS, STATUS FROM V$MANAGED_STANDBY;
    Verify the datafile was added to the standby database after the archived redo log file was applied to the standby database:
    SQL> SELECT NAME FROM V$DATAFILE;
    NAME
    /disk1/oracle/oradata/payroll/s2t_db1.dbf
    /disk1/oracle/oradata/payroll/s2t_db2.dbf

  • Logical Standby Database - Doubts

    Hi everyone,
    I have a doubt about the view dba_logstdby_unsupported.
    This view shows me 400 tables, so do I understand correctly that DML operations on these tables won't be replicated to the other node? Or will only the supported columns be replicated in new records? Or will new records on these tables be replicated as long as the unsupported columns are NULL?
    Thank you very much if someone can help me with these doubts.
    Regards
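    A quick way to see exactly which columns make those tables unsupported (a sketch using the standard view):

    -- Unsupported tables and the data types that disqualify them
    SELECT OWNER, TABLE_NAME, COLUMN_NAME, DATA_TYPE
    FROM   DBA_LOGSTDBY_UNSUPPORTED
    ORDER  BY OWNER, TABLE_NAME;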

    This is what the documentation says:
    If the primary database contains unsupported tables, SQL Apply automatically excludes these tables when applying redo data to the logical standby database.
    Source: Unsupported Tables
