SQL Apply on standby stops on error

Hi all,
I have run into a problem I can't figure out.
There are test and production databases, and both have logical standby databases.
For example, when a materialized view is created on the primary database, I see the error "ORA-16226: DDL skipped due to lack of support" on the standby, which is OK.
When I create an index on that materialized view on the test database, I get the message "ORA-16227: DDL skipped due to missing object" on its standby - so far, OK.
However, when I create an index on the same materialized view on the production database, SQL Apply on its standby stops at this point, because the materialized view does not exist on the standby.
Any clues why SQL Apply skips the statement on one standby but stops on the other standby after the same statement?
Are there any parameters or settings for this?
Thanks

Yes, they are both the same version and the COMPATIBLE parameter is the same (11.1.0); they should have been created the same way on both.
The only difference is that prod is on Solaris and test is on Linux.
The problem in this case is not with mviews specifically, but with the fact that if we have some table which is skipped and an index or other object is added to it on the primary, SQL Apply stops on the standby.
Anyway, what should the default action be when SQL Apply encounters an error - is it to stop apply?
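For what it is worth, stopping is, as far as I know, the default behavior when SQL Apply hits an error it cannot resolve on its own, so the usual workaround is to register skip rules for objects that are deliberately not maintained on the standby. Below is a minimal sketch of that, assuming placeholder names (APP and MY_MVIEW); adjust to the real schema and object:
-- Run on the logical standby; APP / MY_MVIEW are placeholders for illustration
ALTER DATABASE STOP LOGICAL STANDBY APPLY;
-- Skip DDL (such as CREATE INDEX) and DML that reference the unmaintained object
EXECUTE DBMS_LOGSTDBY.SKIP('SCHEMA_DDL', 'APP', 'MY_MVIEW');
EXECUTE DBMS_LOGSTDBY.SKIP('DML', 'APP', 'MY_MVIEW');
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
With rules like these in place, statements against the skipped object should be ignored instead of stopping apply; comparing DBA_LOGSTDBY_SKIP on both standbys may also show why the two behave differently.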

Similar Messages

  • Adv Compression & Logical Standby DB (SQL Apply)

    Hi,
    Is use of AC on a primary database supported with the SQL Apply (logical standby) version of Data Guard in release 11.1.0.7 ?
    thanks !
    Curt Lukenbill
    Paychex, Inc

    Dan, thanks for the replies. The source of my confusion is the 11g Data Guard documentation.
    In the appendix it states:
    C.7 Unsupported Table Storage Types
    Logical standby databases do not support the following table storage types:
    Tables stored with segment compression enabled
    We are going to test it out here, and we'll share the results.
    I'd hate not to be able to use AC on my primary because we have a SQL Apply standby associated with it... If this is the case (and we can't use AC on the primary), I hope Oracle is addressing it in 11gR2.
    Curt Lukenbill
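    For anyone wanting to check which tables on the primary fall under the unsupported storage types (compressed or otherwise), a query against DBA_LOGSTDBY_UNSUPPORTED is the usual quick check; this is only a sketch, with SCOTT/EMP as placeholder names:
    -- Run on the primary: tables SQL Apply cannot maintain
    SELECT DISTINCT owner, table_name
    FROM dba_logstdby_unsupported
    ORDER BY owner, table_name;
    -- Drill into one table to see the offending columns and attributes
    SELECT column_name, attributes, data_type
    FROM dba_logstdby_unsupported
    WHERE owner = 'SCOTT' AND table_name = 'EMP';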

  • Logical standby stuck at initializing SQL apply only coordinator process up

    Hi
    OS: solaris 5.10
    Hardware: sun sparc
    Oracle database: 11.2.0.1.0
    Primary database name: asadmin
    Standby database name: test
    I had been trying to convert a physical standby to logical standby database. Both the primary and standby reside on the same machine.
    The physical standby was created with a hot backup of primary.
    I had been following document id 278371.1 to convert the physical to logical standby and used the following steps:
    Relevant init parameters on primary:
    *.db_name='asadmin'
    *.db_unique_name='asadmin'
    *.log_archive_config='dg_config=(asadmin,test)'
    *.log_archive_dest_1='location=/u01/asadmin/archive valid_for=(all_logfiles,all_roles) db_unique_name=asadmin'
    *.log_archive_dest_2='SERVICE=test async valid_for=(online_logfiles,primary_role) db_unique_name=test'
    *.log_archive_dest_state_1='enable'
    *.log_archive_dest_state_2='enable'
    *.fal_client='asadmin'
    *.fal_server='test'
    *.remote_login_passwordfile='EXCLUSIVE'
    Relevant init parameters on standby database:
    *.db_name='test' -- Was asadmin before I renamed the DB during conversion to logical standby
    *.db_unique_name='test'
    *.log_archive_dest_1='location=/u01/test/archive valid_for=(all_logfiles,all_roles) db_unique_name=test'
    *.log_archive_dest_2='service=asadmin async valid_for=(online_logfiles,primary_role) db_unique_name=asadmin'
    *.log_archive_dest_state_1=enable
    *.log_archive_dest_state_2=defer
    *.remote_login_passwordfile='EXCLUSIVE'
    *.fal_server=test
    *.fal_client=asadmin
    Steps on primary:
    1) alter system set log_archive_dest_state_2=defer;
    2) shutdown immediate;
    3) Made sure that the physical standby has applied all of the redo sent to it following the shutdown.
    4) startup mount;
    5) ALTER DATABASE BACKUP CONTROLFILE to '/home/oracle/control01.ctl';
    6) ALTER SYSTEM ENABLE RESTRICTED SESSION;
    7) ALTER DATABASE OPEN;
    8) Verified that the supplemental logging is on.
    9) ALTER SYSTEM ARCHIVE LOG CURRENT;
    10) Checked for the checkpoint change no. at this point which is 72403818 and is present in archive log file 1_62_775102253.dbf
    11) EXECUTE DBMS_LOGSTDBY.BUILD;
    12) ALTER SYSTEM ARCHIVE LOG CURRENT;
    13) Checked for the archive log containing dictionary build which is 1_64_775102253.dbf
    14) ALTER SYSTEM DISABLE RESTRICTED SESSION;
    Details of archive logs and related checkpoint change nos:
    NAME FIRST_CHANGE# NEXT_CHANGE#
    /u01/asadmin/archive/1_61_775102253.dbf 72402901 72403817
    /u01/asadmin/archive/1_62_775102253.dbf 72403817 72404069
    /u01/asadmin/archive/1_63_775102253.dbf 72404069 72404211
    /u01/asadmin/archive/1_64_775102253.dbf 72404211 72405700
    Steps on standby:
    1) shutdown immediate;
    2) Copy the archive log files 61 (created at the primary after apply stopped at the standby), 62 (contains checkpoint no. 72403818), 63, and 64 (contains the dictionary build). Copy the backup controlfile from step 5 above to the controlfile location in the standby init file.
    3) startup mount;
    4) Rename all datafiles and redo log files (including standby redo log files) to the correct path on standby.
    5) alter database recover automatic from '/u01/test/archive' until change 72405700 using backup controlfile; -- This completed error-free
    6) alter database guard all; -- this completed error free
    7) alter database open resetlogs; -- this completed error free.
    8) nid target=sys/oracle12 dbname=test
    9) Changed the db_name in init file to new name test.
    10) Added a tempfile to temp tablespaces.
    11) ALTER DATABASE REGISTER LOGICAL LOGFILE '/u01/test/archive/1_61_775102253.dbf'; -- ORA-16225: Missing LogMiner session name for Streams
    12) ALTER DATABASE START LOGICAL STANDBY APPLY INITIAL 72405700; -- This completed error free.
    Also enabled the log_archive_dest_state_2 on primary.
    After this output from some views:
    SQL> SELECT SESSION_ID, STATE FROM V$LOGSTDBY_STATE;
    SESSION_ID STATE
    1 INITIALIZING
    SQL> SELECT SID, SERIAL#, SPID, TYPE FROM V$LOGSTDBY_PROCESS;
    SID SERIAL# SPID TYPE
    587 22 15476 COORDINATOR
    SELECT PERCENT_DONE, COMMAND
    FROM V$LOGMNR_DICTIONARY_LOAD
    WHERE SESSION_ID = (SELECT SESSION_ID FROM V$LOGSTDBY_STATE);
    PERCENT_DONE
    COMMAND
    0
    SQL> SELECT TYPE, HIGH_SCN, STATUS FROM V$LOGSTDBY;
    TYPE HIGH_SCN STATUS
    COORDINATOR ORA-16111: log mining and apply setting up
    SQL> SELECT APPLIED_SCN, NEWEST_SCN FROM DBA_LOGSTDBY_PROGRESS;
    APPLIED_SCN NEWEST_SCN
    72405700 72411501
    SELECT THREAD#, SEQUENCE#, FILE_NAME FROM DBA_LOGSTDBY_LOG L
    WHERE NEXT_CHANGE# NOT IN
    (SELECT FIRST_CHANGE# FROM DBA_LOGSTDBY_LOG WHERE L.THREAD# = THREAD#)
    ORDER BY THREAD#,SEQUENCE#;
    no rows selected
    SQL> SELECT EVENT_TIME, STATUS, EVENT
    FROM DBA_LOGSTDBY_EVENTS
    ORDER BY EVENT_TIMESTAMP, COMMIT_SCN;
    EVENT_TIME STATUS EVENT
    14-FEB-12 02:00:50 ORA-16111: log mining and apply setting up
    14-FEB-12 02:00:50 Apply LWM 72405699, HWM 72405699, SCN 72405699
    14-FEB-12 02:20:11 ORA-16128: User initiated stop apply successfully
    completed
    14-FEB-12 02:20:39 ORA-16111: log mining and apply setting up
    14-FEB-12 02:20:39 Apply LWM 72405699, HWM 72405699, SCN 72405699
    14-FEB-12 02:54:15 ORA-16128: User initiated stop apply successfully
    completed
    14-FEB-12 02:57:38 ORA-16111: log mining and apply setting up
    EVENT_TIME STATUS EVENT
    14-FEB-12 02:57:38 Apply LWM 72405699, HWM 72405699, SCN 72405699
    14-FEB-12 03:01:36 ORA-16128: User initiated stop apply successfully
    completed
    14-FEB-12 03:13:44 ORA-16111: log mining and apply setting up
    14-FEB-12 03:13:44 Apply LWM 72405699, HWM 72405699, SCN 72405699
    14-FEB-12 04:32:23 ORA-16128: User initiated stop apply successfully
    completed
    14-FEB-12 04:34:17 ORA-16111: log mining and apply setting up
    14-FEB-12 04:34:17 Apply LWM 72405699, HWM 72405699, SCN 72405699
    EVENT_TIME STATUS EVENT
    14-FEB-12 04:36:16 ORA-16128: User initiated stop apply successfully
    completed
    14-FEB-12 04:36:21 ORA-16111: log mining and apply setting up
    14-FEB-12 04:36:21 Apply LWM 72405699, HWM 72405699, SCN 72405699
    14-FEB-12 05:15:22 ORA-16128: User initiated stop apply successfully
    completed
    14-FEB-12 05:15:29 ORA-16111: log mining and apply setting up
    14-FEB-12 05:15:29 Apply LWM 72405699, HWM 72405699, SCN 72405699
    I also grepped for lsp and lcr processes and found that lsp is up, but I do not see any lcr processes.
    The logs are getting transported to the archive destination on the standby whenever they are archived on the primary, but they are not getting applied on the standby.
    Also, if the standby is down while a log is generated on the primary, the log is not automatically transported once the standby is back up, which means gap resolution is not working either.
    I see the following in the alert log every time I try to restart log apply; everything seems to be stuck at initialization.
    ALTER DATABASE START LOGICAL STANDBY APPLY (test)
    with optional part
    IMMEDIATE
    Attempt to start background Logical Standby process
    Tue Feb 14 05:15:28 2012
    LSP0 started with pid=28, OS id=23391
    Completed: alter database start logical standby apply immediate
    LOGMINER: Parameters summary for session# = 1
    LOGMINER: Number of processes = 3, Transaction Chunk Size = 201
    LOGMINER: Memory Size = 30M, Checkpoint interval = 150M
    LOGMINER: SpillScn 0, ResetLogScn 0
    -- NOTHING AFTER THIS
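    One check that is often suggested when SQL Apply sits at INITIALIZING is to confirm that the archived log containing the LogMiner dictionary build is actually registered on the standby and that there is no gap in the registered sequence; a sketch of that check against DBA_LOGSTDBY_LOG:
    -- On the logical standby: the log with DICT_BEGIN = 'YES' must be registered
    SELECT sequence#, first_change#, next_change#, dict_begin, dict_end
    FROM dba_logstdby_log
    ORDER BY sequence#;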

    Hello;
    I noticed some of your parameters seem to be wrong.
    fal_client - This is Obsolete in 11.2
    You have db_name='test' on the Standby, it should be 'asadmin'
    fal_server=test is set like this on the standby, it should be 'asadmin'
    I might consider changing VALID_FOR to this :
    VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES)
    I would review 4.2 Step-by-Step Instructions for Creating a Logical Standby Database in Oracle Document E10700-02.
    Document 278371.1 is showing its age in my humble opinion.
    -----Wait on this until you fix your parameters----------------------
    Try restarting the SQL Apply
    ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
    I don't see the parameter MAX_SERVERS; try setting it to 8 times the number of cores.
    Use these statements to troubleshoot:
    SELECT NAME, VALUE, UNIT FROM V$DATAGUARD_STATS;
    SELECT NAME, VALUE FROM V$LOGSTDBY_STATS WHERE NAME LIKE 'TRANSACTIONS%';
    SELECT COUNT(1) AS IDLE_PREPARERS FROM V$LOGSTDBY_PROCESS WHERE
    TYPE = 'PREPARER' AND STATUS_CODE = 16166;
    Best Regards
    mseberg
    Edited by: mseberg on Feb 14, 2012 7:37 AM
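    As a concrete illustration of the MAX_SERVERS suggestion above, the parameter is set through DBMS_LOGSTDBY.APPLY_SET, typically with apply stopped; the value 64 below simply assumes an 8-core machine and is only a placeholder:
    ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    EXECUTE DBMS_LOGSTDBY.APPLY_SET('MAX_SERVERS', 64);  -- 8 x number of cores (placeholder)
    ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;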

  • Sql Apply issue in logical standby database--(10.2.0.5.0) x86 platform

    Hi Friends,
    I am getting the following error in the logical standby database during SQL Apply.
    After I run the command ALTER DATABASE START LOGICAL STANDBY APPLY, SQL Apply services start, but after a few seconds they automatically stop with the following errors.
    alter database start logical standby apply
    Tue May 17 06:42:00 2011
    No optional part
    Attempt to start background Logical Standby process
    LOGSTDBY Parameter: MAX_SERVERS = 20
    LOGSTDBY Parameter: MAX_SGA = 100
    LOGSTDBY Parameter: APPLY_SERVERS = 10
    LSP0 started with pid=30, OS id=4988
    Tue May 17 06:42:00 2011
    Completed: alter database start logical standby apply
    Tue May 17 06:42:00 2011
    LOGSTDBY status: ORA-16111: log mining and apply setting up
    Tue May 17 06:42:00 2011
    LOGMINER: Parameters summary for session# = 1
    LOGMINER: Number of processes = 4, Transaction Chunk Size = 201
    LOGMINER: Memory Size = 100M, Checkpoint interval = 500M
    Tue May 17 06:42:00 2011
    LOGMINER: krvxpsr summary for session# = 1
    LOGMINER: StartScn: 0 (0x0000.00000000)
    LOGMINER: EndScn: 0 (0x0000.00000000)
    LOGMINER: HighConsumedScn: 2660033 (0x0000.002896c1)
    LOGMINER: session_flag 0x1
    LOGMINER: session# = 1, preparer process P002 started with pid=35 OS id=4244
    LOGSTDBY Apply process P014 started with pid=47 OS id=5456
    LOGSTDBY Apply process P010 started with pid=43 OS id=6484
    LOGMINER: session# = 1, reader process P000 started with pid=33 OS id=4732
    Tue May 17 06:42:01 2011
    LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1417, X:\TANVI\ARCHIVE2\ARC01417_0748170313.001
    Tue May 17 06:42:01 2011
    LOGMINER: Turning ON Log Auto Delete
    Tue May 17 06:42:01 2011
    LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01417_0748170313.001
    Tue May 17 06:42:01 2011
    LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1418, X:\TANVI\ARCHIVE2\ARC01418_0748170313.001
    LOGSTDBY Apply process P008 started with pid=41 OS id=4740
    LOGSTDBY Apply process P013 started with pid=46 OS id=7864
    LOGSTDBY Apply process P006 started with pid=39 OS id=5500
    LOGMINER: session# = 1, builder process P001 started with pid=34 OS id=4796
    Tue May 17 06:42:02 2011
    LOGMINER: skipped redo. Thread 1, RBA 0x00058a.00000950.0010, nCV 6
    LOGMINER: op 4.1 (Control File)
    Tue May 17 06:42:02 2011
    LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01418_0748170313.001
    Tue May 17 06:42:03 2011
    LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1419, X:\TANVI\ARCHIVE2\ARC01419_0748170313.001
    Tue May 17 06:42:03 2011
    LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01419_0748170313.001
    Tue May 17 06:42:03 2011
    LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1420, X:\TANVI\ARCHIVE2\ARC01420_0748170313.001
    Tue May 17 06:42:03 2011
    LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01420_0748170313.001
    Tue May 17 06:42:03 2011
    LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1421, X:\TANVI\ARCHIVE2\ARC01421_0748170313.001
    LOGSTDBY Analyzer process P004 started with pid=37 OS id=5096
    Tue May 17 06:42:03 2011
    LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01421_0748170313.001
    LOGSTDBY Apply process P007 started with pid=40 OS id=2760
    Tue May 17 06:42:03 2011
    Errors in file x:\oracle\product\10.2.0\admin\tanvi\bdump\tanvi_p001_4796.trc:
    ORA-00600: internal error code, arguments: [krvxbpx20], [1], [1418], [2380], [16], [], [], []
    LOGSTDBY Apply process P012 started with pid=45 OS id=7152
    Tue May 17 06:42:03 2011
    LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1422, X:\TANVI\ARCHIVE2\ARC01422_0748170313.001
    Tue May 17 06:42:03 2011
    LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01422_0748170313.001
    Tue May 17 06:42:03 2011
    LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1423, X:\TANVI\ARCHIVE2\ARC01423_0748170313.001
    Tue May 17 06:42:03 2011
    LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01423_0748170313.001
    Tue May 17 06:42:03 2011
    LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1424, X:\TANVI\ARCHIVE2\ARC01424_0748170313.001
    LOGMINER: session# = 1, preparer process P003 started with pid=36 OS id=5468
    Tue May 17 06:42:03 2011
    LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01424_0748170313.001
    Tue May 17 06:42:04 2011
    LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1425, X:\TANVI\ARCHIVE2\ARC01425_0748170313.001
    LOGSTDBY Apply process P011 started with pid=44 OS id=6816
    LOGSTDBY Apply process P005 started with pid=38 OS id=5792
    LOGSTDBY Apply process P009 started with pid=42 OS id=752
    Tue May 17 06:42:05 2011
    krvxerpt: Errors detected in process 34, role builder.
    Tue May 17 06:42:05 2011
    krvxmrs: Leaving by exception: 600
    Tue May 17 06:42:05 2011
    Errors in file x:\oracle\product\10.2.0\admin\tanvi\bdump\tanvi_p001_4796.trc:
    ORA-00600: internal error code, arguments: [krvxbpx20], [1], [1418], [2380], [16], [], [], []
    LOGSTDBY status: ORA-00600: internal error code, arguments: [krvxbpx20], [1], [1418], [2380], [16], [], [], []
    Tue May 17 06:42:06 2011
    Errors in file x:\oracle\product\10.2.0\admin\tanvi\bdump\tanvi_lsp0_4988.trc:
    ORA-12801: error signaled in parallel query server P001
    ORA-00600: internal error code, arguments: [krvxbpx20], [1], [1418], [2380], [16], [], [], []
    Tue May 17 06:42:06 2011
    LogMiner process death detected
    Tue May 17 06:42:06 2011
    logminer process death detected, exiting logical standby
    LOGSTDBY Analyzer process P004 pid=37 OS id=5096 stopped
    LOGSTDBY Apply process P010 pid=43 OS id=6484 stopped
    LOGSTDBY Apply process P008 pid=41 OS id=4740 stopped
    LOGSTDBY Apply process P012 pid=45 OS id=7152 stopped
    LOGSTDBY Apply process P014 pid=47 OS id=5456 stopped
    LOGSTDBY Apply process P005 pid=38 OS id=5792 stopped
    LOGSTDBY Apply process P006 pid=39 OS id=5500 stopped
    LOGSTDBY Apply process P007 pid=40 OS id=2760 stopped
    LOGSTDBY Apply process P011 pid=44 OS id=6816 stopped
    Tue May 17 06:42:10 2011

    Errors in file x:\oracle\product\10.2.0\admin\tanvi\bdump\tanvi_p001_4796.trc:
    ORA-00600: internal error code, arguments: [krvxbpx20], [1], [1418], [2380], [16], [], [], []
    Submit an SR to Oracle Support.
    Refer to these as well:
    ORA-600/ORA-7445 Error Look-up Tool [ID 153788.1]
    Bug 6022014: ORA-600 [KRVXBPX20] ON LOGICAL STANDBY

  • Logical Standby SQL Apply Using Incorrect Decode Statement

    We are seeing statements erroring out on our logical standby that have been rewritten (presumably by sql apply) with decode statements that don't appear to be correct. For example, here is one of the rewritten statements.
    update /*+ streams restrict_all_ref_cons */ "CADPROD"."OMS_SQL_STATEMENT" p
    set "APPLICATION"=decode(:1,'N',"APPLICATION",:2),
    "STATEMENT"=dbms_reputil2.get_final_lob(:3,"STATEMENT",:4)
    where (:5='N' or(1=1 and (:6='N' or(dbms_lob.compare(:7,"STATEMENT")=0)or(:7 is null and "STATEMENT" is null)))) and(:8="APPLICATION")
    The problem, we believe, comes from the attempt to write the value of "APPLICATION" to the APPLICATION column, which is only a 10-character field. The value for the :1 bind variable is 'N' and the value for :2 is null.
    We see the following error on the logical standby:
    ORA-00600: internal error code, arguments: [kgh_heap_sizes:ds], [0x01FCDBE60], [], [], [], [], [], []
    ORA-07445: exception encountered: core dump [ACCESS_VIOLATION] [kxtoedu+54] [PC:0x2542308] [ADDR:0xFFFFFFFFFFFFFFFF] [UNABLE_TO_READ] []
    ORA-12899: value too large for column "CADPROD"."OMS_SQL_STATEMENT"."APPLICATION" (actual: 19576, maximum: 10)
    Is this a configuration issue, or is it normal for SQL Apply to convert statements from LogMiner into DECODE statements?
    We have an Oracle 10.2.0.4 database running on Windows 2003 R2 64-bit. We have 3 physical and 2 logical standbys; there are no problems on the physical standbys.


  • Logical standby: SQL Apply too slow

    Hi all,
    I have a question regarding SQL Apply performance on a logical standby. There are two kinds of operations that are remarkably slow when applied on the logical standby: "truncate table" and "delete from table" operations.
    When the logical standby picks up one of these statements from the logs, one of the appliers starts working while all the others wait. It looks like the standby hangs, SQL Apply creeps along very slowly, and by the time the operation finally completes the standby is behind the primary by 4, 5, or even 8 hours.
    What can be done to speed up SQL Apply and alleviate this situation?
    Best Regards,
    Alex

    Are you absolutely sure that the truncate (and the deletes) are the problem? How did you check it?
    You can use LogMiner to check what are most of the commands in the log currently applied. I use this:
    BEGIN
      sys.DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => '/home/oracle/arc_43547_1_595785865.arc', OPTIONS => sys.DBMS_LOGMNR.ADDFILE);
    END;
    /
    BEGIN
      sys.DBMS_LOGMNR.START_LOGMNR( OPTIONS => sys.DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
    END;
    /
    SELECT seg_owner, seg_name, table_name, operation, COUNT(1)
    FROM V$LOGMNR_CONTENTS
    GROUP BY seg_owner, seg_name, table_name, operation
    ORDER BY COUNT(1) DESC;
    BEGIN
      sys.DBMS_LOGMNR.END_LOGMNR();
    END;
    /
    Most of the time in our case, when SQL Apply is slow it is because of high activity on a particular object, which shows up as a high number of DMLs for that object in the LogMiner output. If the object is not needed on the logical standby you can skip it, and SQL Apply will be faster because it no longer applies changes for that object. If the object is needed and this is not a regular occurrence, you can skip it temporarily, turn SQL Apply back on, and once the problematic logs are applied, turn SQL Apply off, instantiate the object, unskip it, and turn SQL Apply on again (see the sketch right after this paragraph).
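    A minimal sketch of that skip / instantiate / unskip cycle, assuming a placeholder owner and table (APP.BIG_TABLE) and a database link back to the primary (PRIMARY_LINK):
    -- Temporarily stop maintaining the hot table on the standby
    ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    EXECUTE DBMS_LOGSTDBY.SKIP('DML', 'APP', 'BIG_TABLE');
    ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
    -- Later, once the problematic logs are past: resume maintaining and re-synchronize the table
    ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    EXECUTE DBMS_LOGSTDBY.UNSKIP('DML', 'APP', 'BIG_TABLE');
    EXECUTE DBMS_LOGSTDBY.INSTANTIATE_TABLE('APP', 'BIG_TABLE', 'PRIMARY_LINK');
    ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;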
    Another thing that can drastically slow down SQL Apply is the amount of memory available to it (the alert log shows the maximum is ~4.5 GB or something like that, I'm not sure).
    You can increase it with something like this:
    ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    BEGIN
    DBMS_LOGSTDBY.APPLY_SET('MAX_SGA', 3000); -- set to 3000 MB
    END;
    ALTER DATABASE START LOGICAL STANDBY APPLY;
    You should increase it if the following query, run every few seconds while SQL Apply is slow, shows 'bytes paged out' steadily increasing:
    SELECT NAME, VALUE FROM V$LOGSTDBY_STATS
    WHERE NAME LIKE '%page%' OR
    NAME LIKE '%uptime%' OR NAME LIKE '%idle%';
    I hope this is something that can be fixed using the above info. If not, please comment and share your findings.
    Thanks

  • Slow SQL Apply on Logical Standby Database in Oracle10g

    Hi,
    We are using an Oracle 10g logical standby database in our production farm, but whenever there is a bulk data load (5-6 GB of data) on the primary database, the logical standby seems to hang; it takes days to apply the 5-6 GB of data on the logical standby.
    Can anybody give me some pointers on how to make SQL Apply faster on the logical standby for bulk data loads?
    Thanks
    Amit

    Hi there,
    I have a similar problem. I did an insert of 700k rows into a table, and it takes over 1 1/2 hours to see the data on the standby. Note that I increased MAX_SGA to 300M and MAX_SERVERS to 25, and it didn't help performance at all.
    My version is 10.2.0.3 with the patch 6081550.
    APPLIED_SCN  1015618   APPLIED_TIME  29-NOV-2007 18:28:51
    RESTART_SCN  1009600   RESTART_TIME  29-NOV-2007 18:28:51
    LATEST_SCN   1017519   LATEST_TIME   29-NOV-2007 19:54:07
    MINING_SCN   1015656   MINING_TIME   29-NOV-2007 18:32:14

  • SQL apply is very slow on Logical Standby..!!

    Hello all,
    We are having Data Guard setup in our environment where we are having Primary, Physical Standby as well as logical standby databases..
    DB Version : 10.2.0.1 in all databases (Pri, Phy and Logical)
    OS : RHEL4
    Only Oracle is running on this Box..
    Since last month we have been facing problems with the logical standby database: SQL Apply seems to have become very slow.
    Archive log files are transferring successfully from the primary, but because SQL Apply has become slow the logical standby is lagging behind the primary by two days.
    How do I speed up this SQL Apply? Any ideas and suggestions are most welcome.
    I used the top command to find out which Oracle processes are consuming the most CPU, and I found that ora_p000_oracle, ora_p001_oracle, ora_p002_oracle, ora_p003_oracle, ora_p004_oracle, ora_p005_oracle, etc. consume the most CPU, and the load average has always been above 1.
    Any help would be greatly appreciated.
    Thanks - HP

    Hello;
    These Oracle notes might help :
    Slow Performance In Logical Standby Database Due To Lots Of Activity On Sys.Aud$ [ID 862173.1]
    Oracle10g Data Guard SQL Apply Troubleshooting [ID 312434.1]
    Developer and DBA Tips to Optimize SQL Apply [ID 603361.1]
    Best Regards
    mseberg

  • How to monitor SQL Apply for 10.2.0.3 logical standby database

    We have a logical standby database set up for reporting purposes. Users want to monitor closely whether SQL Apply is working or has failed, as this has reporting repercussions.
    With 9i databases there was a "Data Not Applied (logs)" metric which we used for alerting and paging whenever a backlog of more than 5 log files developed.
    From 10.2.0.3 onwards, that metric no longer exists.
    I would like to learn from others how to monitor the setup, so that if a backlog develops in log shipping or applying, we get paged.
    Regards.
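    For reference, a few queries commonly used to watch SQL Apply backlog from a monitoring script; this is only a sketch using standard Data Guard views, not tied to a particular release:
    -- Apply and transport lag as reported by the standby
    SELECT name, value, unit FROM V$DATAGUARD_STATS;
    -- Minutes between the newest redo received and what SQL Apply has applied
    SELECT applied_time, latest_time,
           ROUND((latest_time - applied_time) * 24 * 60) AS apply_lag_minutes
    FROM dba_logstdby_progress;
    -- Archived logs registered on the standby but not yet applied
    SELECT COUNT(*) AS logs_not_applied
    FROM dba_logstdby_log
    WHERE applied = 'NO';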

    Regather the statistics on the table with method_opt => 'for all columns size 1' (or 'for all indexed columns size 1', or whatever you use, with size 1).
    The 'size 1' directive will remove the histogram statistics.
    Sorry, I didn't read your post carefully, I was in a hurry. The article below (http://www.freelists.org/post/oracle-l/Any-quick-way-to-remove-histograms,13) removes histograms without re-analyzing the table. Hope that helps!
    On 3/16/07, Wolfgang Breitling <breitliw@xxxxxxxxxxxxx> wrote:
    I also did a quick check, and just using
    exec dbms_stats.set_column_stats(user, 'table_name', colname => 'column_name', distcnt => <num_distinct>);
    will remove the histogram without removing the low_value and high_value.
    At 01:40 PM 3/16/2007, Alberto Dell'Era wrote:
    On 3/16/07, Allen, Brandon <Brandon.Allen@xxxxxxxxxxx> wrote:
    > Is there any faster way to remove histograms other than re-analyzing
    > the table? I want to keep the existing table, index & column stats,
    > but with only 1 bucket (i.e. no histograms).
    You might try the attached script, which reads the stats using
    dbms_stats.get_column_stats and re-sets them, minus the histogram,
    using dbms_stats.set_column_stats.
    I haven't fully tested it - it's only 10 minutes old, even if I have
    slightly modified for you another script I've used for quite some
    time - and the spool on 10.2.0.3 seems to confirm that the histogram
    is, indeed, removed, while all the other statistics are preserved. I
    have also reset density to 1/num_distinct, which is the value you get
    if no histogram is collected.
    regards,
    naren
    Edited by: fuzzydba on Oct 25, 2010 10:52 AM

  • Logical standby stopped when trying to create partitions on primary (urgent)

    RDBMS Version: 10.2.0.3
    Operating System and Version: Solaris 5.9
    Error Number (if applicable): ORA-1119
    Product (i.e. SQL*Loader, Import, etc.): Data Guard on RAC
    Product Version: 10.2.0.3
    Logical standby stopped when trying to create partitions on the primary (urgent).
    The primary is a 2-node RAC on ASM; we implemented partitions on the primary.
    The logical standby stopped applying logs.
    Below is the alert log from the logical standby:
    Current log# 4 seq# 860 mem# 0: +RT06_DATA/rt06/onlinelog/group_4.477.635601281
    Current log# 4 seq# 860 mem# 1: +RECO/rt06/onlinelog/group_4.280.635601287
    Fri Oct 19 10:41:34 2007
    create tablespace INVACC200740 logging datafile '+OT06_DATA' size 10M AUTOEXTEND ON NEXT 5M MAXSIZE 1000M EXTENT MANAGEMENT LOCAL
    Fri Oct 19 10:41:34 2007
    ORA-1119 signalled during: create tablespace INVACC200740 logging datafile '+OT06_DATA' size 10M AUTOEXTEND ON NEXT 5M MAXSIZE 1000M EXTENT MANAGEMENT LOCAL...
    LOGSTDBY status: ORA-01119: error in creating database file '+OT06_DATA'
    ORA-17502: ksfdcre:4 Failed to create file +OT06_DATA
    ORA-15001: diskgroup "OT06_DATA" does not exist or is not mounted
    ORA-15001: diskgroup "OT06_DATA" does not exist or is not mounted
    LOGSTDBY Apply process P004 pid=49 OS id=16403 stopped
    Fri Oct 19 10:41:34 2007
    Errors in file /u01/app/oracle/admin/RT06/bdump/rt06_lsp0_16387.trc:
    ORA-12801: error signaled in parallel query server P004
    ORA-01119: error in creating database file '+OT06_DATA'
    ORA-17502: ksfdcre:4 Failed to create file +OT06_DATA
    ORA-15001: diskgroup "OT06_DATA" does not exist or is not mounted
    ORA-15001: diskgroup "OT06_DATA" does not exist or
    Here is the trace file info:
    /u01/app/oracle/admin/RT06/bdump/rt06_lsp0_16387.trc
    Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    ORACLE_HOME = /u01/app/oracle/product/10.2.0
    System name: SunOS
    Node name: iscsv341.newbreed.com
    Release: 5.9
    Version: Generic_118558-28
    Machine: sun4u
    Instance name: RT06
    Redo thread mounted by this instance: 1
    Oracle process number: 16
    Unix process pid: 16387, image: [email protected] (LSP0)
    *** 2007-10-19 10:41:34.804
    *** SERVICE NAME:(SYS$BACKGROUND) 2007-10-19 10:41:34.802
    *** SESSION ID:(1614.205) 2007-10-19 10:41:34.802
    knahcapplymain: encountered error=12801
    *** 2007-10-19 10:41:34.804
    ksedmp: internal or fatal error
    ORA-12801: error signaled in parallel query server P004
    ORA-01119: error in creating database file '+OT06_DATA'
    ORA-17502: ksfdcre:4 Failed to create file +OT06_DATA
    ORA-15001: diskgroup "OT06_DATA" does not exist or is not mounted
    ORA-15001: diskgroup "OT06_DATA" does not exist or
    KNACDMP: *******************************************************
    KNACDMP: Dumping apply coordinator's context at 7fffd9e8
    KNACDMP: Apply Engine # 0
    KNACDMP: Apply Engine name
    KNACDMP: Coordinator's Watermarks ------------------------------
    KNACDMP: Apply High Watermark = 0x0000.0132b0bc
    Sorry, our primary database file structure is different from the standby's. We used db_file_name_convert in the init.ora; it looks like this:
    *.db_file_multiblock_read_count=16
    *.db_file_name_convert='+OT06_DATA/OT06TSG001/','+RT06_DATA/RT06/','+RECO/OT06TSG001','+RECO/RT06'
    *.db_files=2000
    *.db_name='OT06'
    *.db_recovery_file_dest='+RECO'
    Is there anything wrong with this parameter?
    I tried this parameter before for cloning from an RMAN backup, and it didn't work then either.
    What exactly must be done for db_file_name_convert to work?
    Even in this case I think this is the problem: it is not converting the location, and the logical standby halts.
    Please help me out.
    let me know if you have any questions.
    Thanks Regards
    Raghavendra rao Yella.

    Hi reega,
    Thanks for your reply. Our logical standby has '+RT06_DATA/RT06'
    and the primary has '+OT06_DATA/OT06TSG001',
    so we are using the db_file_name_convert init parameter, but it doesn't work.
    Are there any particular steps needed to make this parameter work? When I tried it for RMAN cloning it didn't work either; as a workaround I used the RMAN SET NEWNAME command for the cloning.
    Let me know if you have any questions.
    Thanks in advance.
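    One thing worth verifying against the Data Guard documentation for your release: SQL Apply on a logical standby does not honor DB_FILE_NAME_CONVERT the way media recovery on a physical standby does, so tablespace/datafile DDL either needs matching storage on the standby or has to be handled with a skip rule (creating the tablespace manually on the standby if it is needed there). A minimal sketch of skipping tablespace DDL so apply keeps running:
    ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    -- Skip CREATE/ALTER/DROP TABLESPACE statements on the logical standby
    EXECUTE DBMS_LOGSTDBY.SKIP('TABLESPACE');
    ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;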

  • Logical dataguard SQL apply fails during import on primary database

    I created a logical standby using Grid Control, and initially everything worked fine.
    At one point we had to import new data into the primary database, and that is where the problem started.
    Log apply is lagging big time, and I got this error:
    Status: Redo apply services stopped due to failed transaction (1,33,8478)
    Reason: ORA-16227: DDL skipped due to missing object
    Failed SQL: DROP TABLE "USA"."SYS_IMPORT_SCHEMA_01" PURGE
    This table exists on the logical standby...
    How do we deal with imports on a logical standby, given that an import generates a lot of redo logs?
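    When SQL Apply has stopped on a single failed transaction like this, and the statement is safe to ignore on the standby (here a DROP of a Data Pump master table), one option that is often suggested - verify it is appropriate for your case first - is to restart apply and skip that failed transaction:
    -- Restart apply and skip only the transaction that caused the stop
    ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE SKIP FAILED TRANSACTION;
    -- Alternatively, register the transaction id reported above (1, 33, 8478) before restarting
    EXECUTE DBMS_LOGSTDBY.SKIP_TRANSACTION(1, 33, 8478);
    ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;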

    Hello;
    These Oracle notes might help :
    Slow Performance In Logical Standby Database Due To Lots Of Activity On Sys.Aud$ [ID 862173.1]
    Oracle10g Data Guard SQL Apply Troubleshooting [ID 312434.1]
    Developer and DBA Tips to Optimize SQL Apply [ID 603361.1]
    Best Regards
    mseberg

  • Logical standby stopped lastnight

    Subject: Logical standby stopped lastnight
    Author: raghavendra rao yella, United States
    Date: Nov 14, 2007, 0 minutes ago
    Os info: solaris 5.9
    Oracle info: 10.2.0.3
    Error info: ORA-16120: dependencies being computed for transaction at SCN 0x0002.c8f6f182
    Message: Our logical standby stopped last night. We tried to stop and start apply on the standby, but it did not help.
    Below are some of the queries to get the status:
    APPLIED_SCN  11962328446   APPLIED_TIME  07-11-13 09:09:41
    LATEST_SCN   11981014649   LATEST_TIME   07-11-14 10:26:26
    MINING_SCN   11961580453   MINING_TIME   07-11-13 08:57:53
    RESTART_SCN  11961536228   RESTART_TIME  07-11-13 08:56:36
    sys@RP06> SELECT TYPE, HIGH_SCN, STATUS FROM V$LOGSTDBY;
    TYPE         HIGH_SCN    STATUS
    COORDINATOR  1.1962E+10  ORA-16116: no work available
    READER       1.1962E+10  ORA-16127: stalled waiting for additional transactions to be applied
    BUILDER      1.1962E+10  ORA-16127: stalled waiting for additional transactions to be applied
    PREPARER     1.1962E+10  ORA-16127: stalled waiting for additional transactions to be applied
    ANALYZER     1.1962E+10  ORA-16120: dependencies being computed for transaction at SCN 0x0002.c8f6c002
    APPLIER                  ORA-16116: no work available
    APPLIER                  ORA-16116: no work available
    APPLIER                  ORA-16116: no work available
    APPLIER                  ORA-16116: no work available
    APPLIER                  ORA-16116: no work available
    10 rows selected.
    SELECT PID, TYPE, STATUS FROM V$LOGSTDBY ORDER BY HIGH_SCN;
    PID    TYPE         STATUS
    17896  ANALYZER     ORA-16120: dependencies being computed for transaction at SCN 0x0002.c8f6f182
    17892  PREPARER     ORA-16127: stalled waiting for additional transactions to be applied
    17890  BUILDER      ORA-16243: paging out 8144 bytes of memory to disk
    17888  READER       ORA-16127: stalled waiting for additional transactions to be applied
    28523  COORDINATOR  ORA-16116: no work available
    17904  APPLIER      ORA-16116: no work available
    17906  APPLIER      ORA-16116: no work available
    17898  APPLIER      ORA-16116: no work available
    17900  APPLIER      ORA-16116: no work available
    17902  APPLIER      ORA-16116: no work available
    10 rows selected.
    How can I find out more about this transaction for which LogMiner is computing dependencies?
    Let me know if you have any questions.
    Thanks in advance.
    Message was edited by:
    raghu559
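    One detail that stands out above is the BUILDER reporting ORA-16243 (paging out memory to disk); as discussed in the "SQL Apply too slow" thread earlier on this page, persistent paging usually points at MAX_SGA being too small for SQL Apply. A sketch of the usual check and adjustment (3000 MB is only an illustrative value):
    -- Watch whether 'bytes paged out' keeps growing while apply is slow
    SELECT name, value FROM V$LOGSTDBY_STATS WHERE name LIKE '%page%';
    -- If it does, raise the memory available to SQL Apply
    ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    EXECUTE DBMS_LOGSTDBY.APPLY_SET('MAX_SGA', 3000);
    ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;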


  • Redo log files are not applying to standby database

    Hi everyone!!
    I have created a standby database on the same server (Windows XP), using Oracle 11g. I want to synchronize my standby database with the primary database, so I tried to apply redo logs from the primary to the standby database as follows.
    My standby database is open and the primary database is not started (instance not started), because only one database can run in exclusive mode since DB_NAME is the same for both databases. I ran the following command on the standby database:
                SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
    It returns "Database altered". But when I checked the last archive log on the primary database, its sequence was 189, while on the standby database it was 177. That means the archived redo logs are not being applied on the standby database.
    The tnsnames.ora file contains entries for both the primary and standby services, and those services are used to transmit and receive redo logs.
    1. How do I resolve this issue?
    2. Is it compulsory to have the primary database open?
    3. I created the standby control file by using the command
              SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS 'D:\APP\ORACLE\ORADATA\TESTCAT\CONTROLFILE\CONTROL_STAND1.CTL';
    So the database name in the standby control file is the same as the primary database name (PRIM), and hence the init.ora file of the standby database also contains the DB_NAME = 'PRIM' parameter. I can't change it because it returns a database name mismatch error on startup. Should I have a different database name for each, or is the existing setup correct?
    Can anybody help me get out of this situation?
    Thanks & Regards
    Tushar Lapani

    Thank you Girish. That solved my redo apply problem. I set the log_archive_dest parameter again and then checked the archived redo log sequence number; it was the same for both the primary and the standby database. But the table on the standby database is still not being refreshed.
    I went through the following scenario:
    1. Inserted 200,000 rows into the EMP table of the SCOTT user on the primary database and committed the changes.
    2. Then I synchronized the standby database using the ALTER DATABASE command, and verified that the archive log sequence number is the same for both databases, meaning the archived logs from the primary database have been applied to the standby database.
    3. But when I count the number of rows in the EMP table of the SCOTT user on the standby database, it returns only 14 rows even though the redo logs have been applied.
    So my question is: why are changes made to the primary database not reflected on the standby database, although the redo logs have been applied?
    Thanks

  • [823] [HY000] [Microsoft][ODBC SQL Server Driver][SQL Server]The operating system returned error 1450(Insufficient system resources exist to complete the requested service.) to SQL Server.

    Hi,
    I am facing an issue while loading fresh data into a SQL Server database.
    We are able to load data into the staging area, but while processing the stored procedures the system hits the error message below:
    [823] [HY000] [Microsoft][ODBC SQL Server Driver][SQL Server]The operating system returned error 1450(Insufficient system resources exist to complete the requested service.) to SQL Server during a write at offset 0x00000243bd0000 in file 'E:\SQLDB\DBName.mdf'.
    Additional messages in the SQL Server error log and system event log may provide more detail. This is a severe system-level error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC
    CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.
    We use Windows 2008 R2 with SQL Server Management Studio 2008.
    I would appreciate your suggestions.
    Thanks in advance.
    Best Regards,
    Manorath

    Hi Manorath,
    Based on my research, this issue can occur in the following scenarios:
    You are running a DBCC command on a large database. For example, you are running the DBCC CHECKDB statement on the database. At the same time, you run many DML statements on the database.
    You create a database snapshot for a large database. At the same time, you are running many DML statements on the database.
    In these scenarios, the SQL Server service stops responding. The DBCC CHECK statement is never completed, and you receive the error message repeatedly.
    This issue occurs because the sparse file for a database snapshot file has exceeded the file size limitation in NTFS file system. When the operating system error 1450 occurs, SQL Server enters an infinite retry loop.
    To fix this issue, please download and install Cumulative Update package 1 for SQL Server 2008 Service Pack 1. Because the fixes are cumulative, each new release contains all the hotfixes and security fixes that were included with the previous SQL Server 2008 fix release, so we recommend applying the most recent fix release that contains this hotfix.
    Reference:
    http://support.microsoft.com/kb/967164
    Thanks,
    Katherine Xiong
    TechNet Community Support

  • Logs not applying on Standby Database

    I am afraid I am back.
    I now have a primary database and a standby database; logs are being shipped but not applied. I have done some research and got to the point where I find the following in my alert log when I try to start MRP:
    Fri Jun 29 15:03:40 2012
    alter database recover managed standby database disconnect from session
    Attempt to start background Managed Standby Recovery process (SAPDS)
    Fri Jun 29 15:03:40 2012
    MRP0 started with pid=29, OS id=23272
    MRP0: Background Managed Standby Recovery process started (SAPDS)
    started logmerger process
    Fri Jun 29 15:03:45 2012
    Managed Standby Recovery not using Real Time Apply
    Read of datafile '/var/hpsrp/drforp03/oradata/u04/SAPDS/DRFSAPDS/sysaux01.dbf' (fno 2) header failed with ORA-01206
    Rereading datafile 2 header failed with ORA-01206
    MRP0: Background Media Recovery terminated with error 1110
    Errors in file /var/hpsrp/drforp03/oradata/u04/SAPDS/admin/diag/rdbms/drfsapds/SAPDS/trace/SAPDS_pr00_23308.trc:
    ORA-01110: data file 2: '/var/hpsrp/drforp03/oradata/u04/SAPDS/DRFSAPDS/sysaux01.dbf'
    ORA-01122: database file 2 failed verification check
    ORA-01110: data file 2: '/var/hpsrp/drforp03/oradata/u04/SAPDS/DRFSAPDS/sysaux01.dbf'
    ORA-01206: file is not part of this database - wrong database id
    Recovery Slave PR00 previously exited with exception 1110
    Errors in file /var/hpsrp/drforp03/oradata/u04/SAPDS/admin/diag/rdbms/drfsapds/SAPDS/trace/SAPDS_mrp0_23272.trc:
    ORA-01110: data file 2: '/var/hpsrp/drforp03/oradata/u04/SAPDS/DRFSAPDS/sysaux01.dbf'
    ORA-01122: database file 2 failed verification check
    ORA-01110: data file 2: '/var/hpsrp/drforp03/oradata/u04/SAPDS/DRFSAPDS/sysaux01.dbf'
    ORA-01206: file is not part of this database - wrong database id
    MRP0: Background Media Recovery process shutdown (SAPDS)
    Completed: alter database recover managed standby database disconnect from session
    My question therefore is how can I get out of this and get my logs to apply on standby?

    Hello again;
    "Do you think if I tried to repeat the duplicate?"
    Yes, I think you should do this.
    Give me a few minutes and I will review my Word document and provide a step-by-step overview here as insurance.
    I always clean up the standby before trying another duplicate.
    OVERVIEW
    Step 1 - Password file for standby - Copy from primary and rename
    Step 2 - Directory structure on the remote server - Make sure nothing is missing
    Step 3 - Oracle Net Setup - entry for the CLONE in your TNSNAMES.ORA on both servers
    Step 4 - SID_LIST_LISTENER addition ( assumes listener named LISTENER )
    Step 5 - Timeouts set in listener.ora and sqlnet.ora Both Servers
    Step 6 - Initialization Parameter File for the Auxiliary Instance
    Step 7 - Set SID for Auxiliary Instance
    Step 8 - Create an SPFILE for the new database by using a pfile with the INIT settings
    Step 9 - Shutdown and startup nomount on new Spfile ( Auxiliary Instance )
    Step 10 - Start RMAN and run the DUPLICATE Command
    SID_LIST_LISTENER example from mine
    SID_LIST_LISTENER =
      (SID_LIST =
        (SID_DESC =
          (SID_NAME = PLSExtProc)
          (ORACLE_HOME = /u01/app/oracle/product/11.2.0.2)
          (PROGRAM = extproc)
        )
        (SID_DESC =
          (global_dbname = CLONE.hostname)
          (ORACLE_HOME = /u01/app/oracle/product/11.2.0.2)
          (sid_name = CLONE)
        )
      )
    Prevent Timeouts
    Add these to both servers
    To listener.ora
    INBOUND_CONNECT_TIMEOUT_LISTENER = 120
    To sqlnet.ora
    SQLNET.INBOUND_CONNECT_TIMEOUT = 120
    Then stop and start the listener.
    RMAN
    $ORACLE_HOME/bin/rman target=sys/@primary auxiliary=sys/@standby
    Connect should return something like this
    connected to target database: RECOVER9 (DBID=3806912436)
    connected to auxiliary database: CLONE (not mounted)
    RMAN> duplicate target database for standby from active database NOFILENAMECHECK;
    INIT Extras
    To avoid ORA-09925 make sure the PFILE has audit_file_dest and core_dump_dest set
    Best Regards
    mseberg
    Edited by: mseberg on Jun 29, 2012 11:12 AM
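    As a small illustration of steps 7 through 9 above (the auxiliary instance name CLONE matches the example; the pfile path is only a placeholder):
    -- Step 7 (shell): export ORACLE_SID=CLONE
    -- Steps 8 and 9, from SQL*Plus as SYSDBA on the auxiliary instance:
    CREATE SPFILE FROM PFILE='/u01/app/oracle/dbs/initCLONE.ora';
    STARTUP NOMOUNT;  -- shut down first if the auxiliary instance is already running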
