RAT on logical standby

Can we run RAT (Real Application Testing) to do a Database Capture against a logical standby database, and will it also capture the workload generated by the SQL Apply (applier) processes?

Yes, you can capture the SQL into a SQL Tuning Set (STS). Using a separate SPA (SQL Performance Analyzer) system, you can then connect remotely to the standby.
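For illustration, a minimal sketch of capturing the standby's cursor-cache workload into an STS with DBMS_SQLTUNE (the tuning-set name standby_sts and the one-hour polling window are placeholders, not from the original post):
BEGIN
  -- Create an empty SQL Tuning Set to hold the captured workload
  DBMS_SQLTUNE.CREATE_SQLSET(
    sqlset_name => 'standby_sts',
    description => 'Workload captured on the logical standby');
  -- Poll the cursor cache every 5 minutes for an hour, adding new statements to the STS
  DBMS_SQLTUNE.CAPTURE_CURSOR_CACHE_SQLSET(
    sqlset_name     => 'standby_sts',
    time_limit      => 3600,
    repeat_interval => 300);
END;
/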

Similar Messages

  • Logical standby: SQL Apply too slow

    Hi all,
    I have a question regarding SQL Apply performance on a logical standby. There are two kinds of operations that are remarkably slow when applied on the logical standby: "truncate table" and "delete from table".
    When the logical standby picks up one of these statements from the logs, one applier starts working while the rest sit idle. The standby looks as if it is hanging, SQL Apply creeps along, and by the time the operation finally completes the standby is 4, 5 or even 8 hours behind the primary.
    What can be done to speed up SQL Apply and alleviate this situation?
    Best Regards,
    Alex

    Are you absolutely sure that the truncates (and the deletes) are the problem? How did you check it?
    You can use LogMiner to check which commands dominate the log currently being applied. I use this:
    -- Register the archived log currently being applied (adjust the file name)
    BEGIN
    sys.DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => '/home/oracle/arc_43547_1_595785865.arc', OPTIONS => sys.DBMS_LOGMNR.ADDFILE);
    END;
    /
    -- Start LogMiner using the online catalog as the dictionary
    BEGIN
    sys.DBMS_LOGMNR.START_LOGMNR( OPTIONS => sys.DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
    END;
    /
    -- Summarise which objects and operations dominate the log
    SELECT seg_owner, seg_name, table_name, operation, COUNT(1)
    FROM V$LOGMNR_CONTENTS
    GROUP BY seg_owner, seg_name, table_name, operation
    ORDER BY COUNT(1) DESC;
    -- End the LogMiner session
    BEGIN
    sys.DBMS_LOGMNR.END_LOGMNR();
    END;
    /
    Most of the time in our case, slow SQL Apply is caused by high activity on a particular object, which shows up in LogMiner as a high number of DML operations against that object. If the object is not needed on the logical standby, you can skip it, and SQL Apply will be faster because it no longer applies changes for that object. If the object is needed and the heavy activity is not a regular occurrence, you can skip it temporarily: turn on SQL Apply, wait until the problematic logs are applied, turn off SQL Apply, instantiate the object, unskip it, and turn SQL Apply back on.
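    For illustration, a minimal sketch of that skip / re-instantiate cycle (the schema SCOTT, table BIG_TABLE and database link prod_link are placeholders):
    ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    -- Stop maintaining the hot object on the standby
    EXECUTE DBMS_LOGSTDBY.SKIP(stmt => 'DML', schema_name => 'SCOTT', object_name => 'BIG_TABLE');
    ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
    -- ... later, once the problematic logs have been applied ...
    ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    EXECUTE DBMS_LOGSTDBY.UNSKIP(stmt => 'DML', schema_name => 'SCOTT', object_name => 'BIG_TABLE');
    -- Re-copy the current table contents from the primary over a database link
    EXECUTE DBMS_LOGSTDBY.INSTANTIATE_TABLE(schema_name => 'SCOTT', table_name => 'BIG_TABLE', dblink => 'prod_link');
    ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;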
    Another thing that can drastically slow down SQL Apply is the amount of memory available to it (the alert log reports a maximum of roughly 4.5 GB, if I remember correctly).
    You can increase it with something like this:
    ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    BEGIN
    DBMS_LOGSTDBY.APPLY_SET('MAX_SGA', 3000); -- set to 3000 MB
    END;
    /
    ALTER DATABASE START LOGICAL STANDBY APPLY;
    You should increase it if 'bytes paged out' keeps growing when the following query is run every few seconds during slow SQL Apply:
    SELECT NAME, VALUE FROM V$LOGSTDBY_STATS
    WHERE NAME LIKE '%page%' OR
    NAME LIKE '%uptime%' OR NAME LIKE '%idle%';
    I hope it is something that can be fixed using the above info. If not, please comment and share your findings.
    Thanks

  • Logical Standby Apply Process Performance

    Hello,
    We are testing the SQL Apply process on our logical standby database. We run batch jobs on our active database and monitor the standby database for the time it takes to bring it back in sync. These are the steps we follow:
    1) Ensure active and standby are in sync.
    2) Stop SQL Apply on the standby database.
    3) Run the batch job on the active database.
    4) After completion of the job on the active database, start SQL Apply on the standby.
    Following are the details of the time taken by SQL Apply, based on previous runs:
    1st. 654K volume = 4 hrs (2727 records per min)
    2nd. 810K volume = 8 hrs 45 mins (1543 records per min)
    3rd. 744K volume = 7 hrs 17 mins (1704 records per min)
    Following are the logical standby parameters:
    MAX_SGA 100
    MAX_SERVERS 15
    PREPARE_SERVERS 4
    APPLY_SERVERS 8
    MAX_EVENTS_RECORDED 10000
    RECORD_SKIP_ERRORS TRUE
    RECORD_SKIP_DDL TRUE
    RECORD_APPLIED_DDL FALSE
    RECORD_UNSUPPORTED_OPERATIONS FALSE
    EVENT_LOG_DEST DEST_EVENTS_TABLE
    LOG_AUTO_DELETE TRUE
    LOG_AUTO_DEL_RETENTION_TARGET 1440
    PRESERVE_COMMIT_ORDER TRUE
    ALLOW_TRANSFORMATION FALSE
    Can we ensure that the SQL Apply process applies data at a consistent rate? Is it reasonable for SQL Apply to take about the same amount of time as the original batch on the active instance? Can we tweak the apply process further to get better performance?
    Please help.
    Thank you !!

    Following are the details of the time taken by SQL Apply, based on previous runs:
    1st. 654K volume = 4 hrs (2727 records per min)
    2nd. 810K volume = 8 hrs 45 mins (1543 records per min)
    3rd. 744K volume = 7 hrs 17 mins (1704 records per min)
    Following are the logical standby parameters:
    Hi,
    Looking at the apply rates above, the apply process is working normally and not having issues.
    Since it is a bulk batch data update on the PRIMARY, it is quite normal that it takes time for the STANDBY database to apply it and get back in sync with the PRIMARY.
    Still, if you need to improve performance, consider adjusting the APPLIER and PREPARER processes (parameters APPLY_SERVERS and PREPARE_SERVERS), as in the sketch below.
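    A minimal sketch of adjusting those parameters with DBMS_LOGSTDBY.APPLY_SET (the values below are only placeholders; leave headroom in MAX_SERVERS for the reader, builder and analyzer processes):
    ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    EXECUTE DBMS_LOGSTDBY.APPLY_SET('MAX_SERVERS', 20);
    EXECUTE DBMS_LOGSTDBY.APPLY_SET('PREPARE_SERVERS', 5);
    EXECUTE DBMS_LOGSTDBY.APPLY_SET('APPLY_SERVERS', 12);
    ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;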

  • Creating a new schema in a Logical Standby Database

    Hi All,
    I am experimenting with logical standby databases for the purpose of reporting, and have not been able to create a new schema in the logical standby database - one of the key features of logical standbys.
    I have setup primary and logical standby databases, and they seem to be running just fine - changes are moved from the primary to the standby and queries on the standby seem to run ok.
    However, if I try to create a new schema on the logical standby, one that does not exist on the primary, I get "ORA-01031: insufficient privileges" errors when I try to create new objects.
    Shown below are the steps I have taken to create the new schema on the logical standby. Any help would be greatly appreciated.
    SYS@UATDR> connect / as sysdba
    Connected.
    SYS@UATDR>
    SYS@UATDR> select name, log_mode, database_role, guard_status, force_logging, flashback_on, db_unique_name
    2 from v$database
    3 /
    NAME LOG_MODE DATABASE_ROLE GUARD_S FOR FLASHBACK_ON DB_UNIQUE_NAME
    UATDR ARCHIVELOG LOGICAL STANDBY ALL YES YES UATDR
    SYS@UATDR>
    SYS@UATDR> create tablespace ts_new
    2 /
    Tablespace created.
    SYS@UATDR>
    SYS@UATDR> create user new
    2 identified by new
    3 default tablespace ts_new
    4 temporary tablespace temp
    5 quota unlimited on ts_new
    6 /
    User created.
    SYS@UATDR>
    SYS@UATDR> grant connect, resource to new
    2 /
    Grant succeeded.
    SYS@UATDR> grant unlimited tablespace, create table, create any table to new
    2 /
    Grant succeeded.
    SYS@UATDR>
    SYS@UATDR> -- show privs given to new
    SYS@UATDR> select * from dba_sys_privs where grantee='NEW'
    2 /
    GRANTEE PRIVILEGE ADM
    NEW CREATE ANY TABLE NO
    NEW CREATE TABLE NO
    NEW UNLIMITED TABLESPACE NO
    SYS@UATDR>
    SYS@UATDR> -- create objects in schema
    SYS@UATDR> connect new/new
    Connected.
    NEW@UATDR>
    NEW@UATDR> -- prove ability to create tables
    NEW@UATDR> create table new
    2 (col1 number not null)
    3 tablespace ts_new
    4 /
    create table new
    ERROR at line 1:
    ORA-01031: insufficient privileges
    NEW@UATDR>
    NEW@UATDR>

    Hi Daniel,
    I appreciate your quick response.
    My choice of name may not have been ideal; however, changing new to another name - like gav - does not solve the problem.
    SYS@UATDR> connect / as sysdba
    Connected.
    SYS@UATDR>
    SYS@UATDR> select name, log_mode, database_role, guard_status, force_logging, flashback_on, db_unique_name
    2 from v$database
    3 /
    NAME LOG_MODE DATABASE_ROLE GUARD_S FOR FLASHBACK_ON DB_UNIQUE_NAME
    UATDR ARCHIVELOG LOGICAL STANDBY ALL YES YES UATDR
    SYS@UATDR>
    SYS@UATDR> create tablespace ts_gav
    2 /
    Tablespace created.
    SYS@UATDR>
    SYS@UATDR> create user gav
    2 identified by gav
    3 default tablespace ts_gav
    4 temporary tablespace temp
    5 quota unlimited on ts_gav
    6 /
    User created.
    SYS@UATDR>
    SYS@UATDR> grant connect, resource to gav
    2 /
    Grant succeeded.
    SYS@UATDR> grant unlimited tablespace, create table, create any table to gav
    2 /
    Grant succeeded.
    SYS@UATDR>
    SYS@UATDR> -- show privs given to gav
    SYS@UATDR> select * from dba_sys_privs where grantee='GAV'
    2 /
    GRANTEE PRIVILEGE ADM
    GAV CREATE TABLE NO
    GAV CREATE ANY TABLE NO
    GAV UNLIMITED TABLESPACE NO
    SYS@UATDR>
    SYS@UATDR> -- create objects in schema
    SYS@UATDR> connect gav/gav
    Connected.
    GAV@UATDR>
    GAV@UATDR> -- prove ability to create tables
    GAV@UATDR> create table gav
    2 (col1 number not null)
    3 tablespace ts_gav
    4 /
    create table gav
    ERROR at line 1:
    ORA-01031: insufficient privileges
    GAV@UATDR>
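    A detail worth noting in both transcripts above is GUARD_STATUS = ALL in V$DATABASE: with the guard set to ALL, no user other than SYS can modify any data on the logical standby, which would produce exactly this ORA-01031 on DDL. A hedged sketch of the usual remedies, assuming the guard is indeed the blocker:
    -- As SYS on the logical standby: only protect objects maintained by SQL Apply
    ALTER DATABASE GUARD STANDBY;
    -- or, to bypass the guard for one privileged session only
    ALTER SESSION DISABLE GUARD;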

  • Logical standby and Primary keys

    Hi All,
    Why are primary keys essential for creating a logical standby database? I have created a logical standby database on a test basis without primary keys on most of the tables, and it is working fine. I have not even put my main DB in force logging mode.

    I have not even put my main DB in force logging mode. -- This matters because the redo log files (or standby redo log files) are transformed into a set of SQL statements that update the logical standby, so changes made with NOLOGGING generate no redo to transform.
    Have you done any DML operations with NOLOGGING options, and do you notice any errors in the alert.log? I am just curious to know.
    As for primary keys: in the absence of both a primary key and a non-null unique constraint/index, all columns of bounded size are logged as part of the UPDATE statement to identify the modified row. In other words, all columns except those with the following types are logged: LONG, LOB, LONG RAW, object type, and collections.
    Jaffar
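    To make the row-identification point concrete, a hedged sketch of the usual check and the RELY workaround (the table scott.orders and its key column are placeholders):
    -- On the primary: tables without a primary key or non-null unique index,
    -- excluding tables SQL Apply cannot maintain anyway; bad_column = 'Y' flags
    -- tables containing unbounded columns (LONG, LOB, ...) that cannot identify rows
    SELECT owner, table_name
    FROM dba_logstdby_not_unique
    WHERE (owner, table_name) NOT IN
          (SELECT DISTINCT owner, table_name FROM dba_logstdby_unsupported)
      AND bad_column = 'Y';
    -- If the application already guarantees uniqueness, declare it without enforcement cost
    ALTER TABLE scott.orders ADD PRIMARY KEY (order_id) RELY DISABLE;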

  • Logical standby and truncate partition

    Hi,
    I'm evaluating whether a logical standby database would meet our needs.
    We have a live database and want a reporting database that is identical to the live one, just minutes behind in time to the live one, plus we want to create other summary tables etc on the reporting db.
    Logical standby seems to meet our needs but I have one query.
    - On the live db most of the tables are organized by date partitions; only 5 days are kept, with new partitions created every night for the forthcoming days and the oldest date partitions truncated.
    On the reporting database we want to keep 30 days partitions.
    Can we have all the DDL and DML from live being applied to the standby APART from the specific truncate partition statements?
    Many Thanks,
    Kailas

    Addendum.
    If you can, it is better to truncate the partition but not drop it, after exporting the partition for archive purposes. If you ever need to bring the data back, the data will go into the correct partition. If the table structure has been changed by adding or lengthening columns, this will still work OK where column names match.
    Test the ease of restoration of archived data into existing and non-existing partitions with altered structure (dropped columns, added columns, renamed columns) for yourself, though.
    Regards, Vin.
    PS. When splitting the maxdata partition to create the new 'highest' partition, use a small initial extent, with the 'next' extent being the expected real size needed. When the partition is truncated, the only space that should remain un-claimable will be that allocated for the initial extent.
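    On the specific question of applying everything except the truncate-partition statements: a minimal sketch of the usual approach, assuming the DBMS_LOGSTDBY.SKIP procedure-callback interface (the procedure name handle_trunc_part and the schema LIVE_SCHEMA are placeholders):
    CREATE OR REPLACE PROCEDURE sys.handle_trunc_part (
      statement      IN  VARCHAR2,
      statement_type IN  VARCHAR2,
      schema         IN  VARCHAR2,
      name           IN  VARCHAR2,
      xidusn         IN  NUMBER,
      xidslt         IN  NUMBER,
      xidsqn         IN  NUMBER,
      skip_action    OUT NUMBER,
      new_statement  OUT VARCHAR2)
    AS
    BEGIN
      -- Skip only ALTER TABLE ... TRUNCATE PARTITION; apply every other ALTER TABLE as-is
      IF UPPER(statement) LIKE '%TRUNCATE%PARTITION%' THEN
        skip_action := DBMS_LOGSTDBY.SKIP_ACTION_SKIP;
      ELSE
        skip_action := DBMS_LOGSTDBY.SKIP_ACTION_APPLY;
      END IF;
      new_statement := NULL;
    END;
    /
    ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    EXECUTE DBMS_LOGSTDBY.SKIP(stmt => 'ALTER TABLE', schema_name => 'LIVE_SCHEMA', object_name => '%', proc_name => 'SYS.HANDLE_TRUNC_PART');
    ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;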

  • Logical Standby Database in NOARCHIVE Mode

    Hi,
    I have configured a logical standby database for reporting purposes. A physical standby database is running for MAA, i.e. in case of a role transition (switchover/failover) the physical standby will take over the primary role.
    The logical standby database is creating a lot of archived redo log files, nearly one every minute. The redo log files are 50 MB and there is no work being done in the DB during that time. I'm NOT using standby redo log files.
    Can the logical standby database run in NOARCHIVELOG mode? The primary is definitely in ARCHIVELOG mode.
    Thanks for any responses.
    regards
    Sahba

    Hi,
    Well, there are two things to the above:
    1. There was an archive file nearly every minute:
    This was due to a DB recovery; for some reason the DB was in an inconsistent state after a sudden shutdown of the OS. I was on a test environment, on Windows Vista, unfortunately. Unimportant ... a reboot solved it.
    2. Logical standby DB in NOARCHIVELOG mode when set up for reporting purposes:
    As long as MAA is configured for the primary DB (for example a physical standby), a second, logical standby DB set up purely for reporting can run in NOARCHIVELOG mode after the physical standby is converted to a logical one.
    The logical standby DB uses the Streams architecture, so this method brings cost, time and performance advantages to the customer.
    regards
    Sahba

  • Best practice on using Flashback and Logical Standby

    Hello,
    I'm testing a fail-back scenario where I first need to activate a logical standby, then run some dummy transactions before I flash this DB back and resume the apply. Here is what the steps look like:
    1) Ensure the logical standby is in sync with the primary
    2) Enable flashback on the standby
    3) Create a flashback guaranteed restore point
    4) Defer log shipping from the primary
    5) Activate the logical standby so it’s fully open to read-write
    6) Run dummy activities against the standby (which is now fully open)
    7) Flash the database back to the guaranteed restore point
    8) Resume log shipping on the primary
    9) Resume SQL Apply on the standby
    In the end, I can see that log shipping is happening but the logical standby does not apply any of it, and there is no error in the alert log on the standby side. The following query may explain why the standby is idle:
    SELECT TYPE, HIGH_SCN, STATUS FROM V$LOGSTDBY;
    TYPE HIGH_SCN STATUS
    COORDINATOR ORA-16240: Waiting for log file (thread# 2, sequence# 0)
    ORA-16240: Waiting for log file (thread# string, sequence# string)
    Cause: Process is idle waiting for additional log file to be available.
    Action: No action necessary. This informational statement is provided to record the event for diagnostic purposes.
    I don't understand why it's looking for sequence# 0 after the flashback.
    Thanks for the help.
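    (For context, a hedged sketch of how steps 2-9 are commonly scripted; this variant lifts the database guard rather than activating the standby, and the restore point name before_test is a placeholder, not taken from the original post.)
    -- Steps 2-3, on the standby: enable flashback and take a guaranteed restore point
    ALTER DATABASE FLASHBACK ON;
    CREATE RESTORE POINT before_test GUARANTEE FLASHBACK DATABASE;
    -- Step 4: defer the remote archive destination on the primary
    -- Step 5, on the standby: stop SQL Apply and make all data writable
    ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    ALTER DATABASE GUARD NONE;
    -- Step 6: run the dummy read-write activity
    -- Step 7, on the standby: flash back to the restore point
    SHUTDOWN IMMEDIATE;
    STARTUP MOUNT;
    FLASHBACK DATABASE TO RESTORE POINT before_test;
    ALTER DATABASE OPEN RESETLOGS;
    -- Steps 8-9: re-enable shipping on the primary, restore the guard, restart SQL Apply
    ALTER DATABASE GUARD ALL;
    ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;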

    Hello;
    I hesitate to answer your question because you are not doing a good job of keeping the forum clean :
    Total Questions: 13 (13 unresolved)
    Please consider closing some of you old answered questions and rewarding those who helped you.
    No action necessary.
    Do you really have a thread 2? ( Redo thread number )
    Quick check
    select applied_scn, latest_scn from v$logstdby_progress;
    Use the DBA_LOGSTDBY_LOG view. If you don't have a thread 2, then the sequence# is meaningless.
    COLUMN DICT_BEGIN FORMAT A10;
    SELECT FILE_NAME, SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE#,
    TIMESTAMP, DICT_BEGIN, DICT_END, THREAD# AS THR# FROM DBA_LOGSTDBY_LOG
    ORDER BY SEQUENCE#;
    Logical Standby questions are difficult; not a lot of them out there, I'm thinking.
    Check
    http://docs.oracle.com/cd/E14072_01/server.112/e10700/manage_ls.htm
    "Waiting On Gap State" ( However I still believe you don't have a 2nd thread# )
    OR
    http://psilt.wordpress.com/2009/04/29/simple-logical-standby/
    Best Regards
    mseberg
    Edited by: mseberg on Apr 26, 2012 5:13 PM

  • Logical standby in noarchive mode

    Hello,
    does anybody know if it is possible to run a logical standby (10gR2) in noarchivelog mode?
    To my understanding it should be possible, why not... But I can't find any piece of documentation which proves this.
    Thanks in advance,
    Boris...

    I also was curious if setting up a Logical Standby in noarchivelog mode was possible. It looks like it is...
    We use this simply as a reporting copy of our data. Our analysts do not change any of this data. They just need it to be fairly current (~3 days). At this point, there isn't any need to generate archivelogs so we want to disable the feature to minimize our maintenance and our storage requirements.
    STANDBY DB:
    SQL> select log_mode, open_mode from v$database;
    LOG_MODE     OPEN_MODE
    ARCHIVELOG   READ WRITE
    SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    Database altered.
    SQL> shutdown immediate;
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SQL> startup mount;
    ORACLE instance started.
    Database mounted.
    SQL>  alter database noarchivelog;
    Database altered.
    SQL> alter database open;
    Database altered.
    SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
    Database altered.
    Then I created an object on the production server:
    SQL> create table testuser.noarchivelog_testing as select * from dba_objects;
    Table created.
    SQL> alter system switch logfile;
    System altered.
    Mon Jun 22 11:13:26 2009
    Beginning log switch checkpoint up to RBA [0xe6.2.10], SCN: 1031235046
    Mon Jun 22 11:13:26 2009
    Now I see it on the standby:
    Mon Jun 22 11:13:43 2009
    RFS LogMiner: RFS id [25920] assigned as thread [1] PING handler
    RFS[1]: Archived Log: '/oracle/LH1/oraarch/LH1arch_1_229_680639127.arc'
    Mon Jun 22 11:13:46 2009
    RFS LogMiner: Registered logfile [/oracle/LH1/oraarch/LH1arch_1_229_680639127.arc] to LogMiner session id [1]
    Mon Jun 22 11:13:48 2009
    LOGSTDBY status: ORA-16204: DDL successfully applied
    Mon Jun 22 11:13:51 2009
    LOGMINER: End mining logfile: /oracle/LH1/oraarch/LH1arch_1_229_680639127.arc
    SQL> select count(*) from testuser.noarchivelog_testing;
      COUNT(*)
         24444

  • Logical standby stuck at initializing SQL apply only coordinator process up

    Hi
    OS: solaris 5.10
    Hardware: sun sparc
    Oracle database: 11.2.0.1.0
    Primary database name: asadmin
    Standby database name: test
    I had been trying to convert a physical standby to logical standby database. Both the primary and standby reside on the same machine.
    The physical standby was created with a hot backup of primary.
    I had been following document id 278371.1 to convert the physical to logical standby and used the following steps:
    Relevant init parameters on primary:
    *.db_name='asadmin'
    *.db_unique_name='asadmin'
    *.log_archive_config='dg_config=(asadmin,test)'
    *.log_archive_dest_1='location=/u01/asadmin/archive valid_for=(all_logfiles,all_roles) db_unique_name=asadmin'
    *.log_archive_dest_2='SERVICE=test async valid_for=(online_logfiles,primary_role) db_unique_name=test'
    *.log_archive_dest_state_1='enable'
    *.log_archive_dest_state_2='enable'
    *.fal_client='asadmin'
    *.fal_server='test'
    *.remote_login_passwordfile='EXCLUSIVE'
    Relevant init parameters on standby database:
    *.db_name='test' -- Was asadmin before I renamed the DB during conversion to logical standby
    *.db_unique_name='test'
    *.log_archive_dest_1='location=/u01/test/archive valid_for=(all_logfiles,all_roles) db_unique_name=test'
    *.log_archive_dest_2='service=asadmin async valid_for=(online_logfiles,primary_role) db_unique_name=asadmin'
    *.log_archive_dest_state_1=enable
    *.log_archive_dest_state_2=defer
    *.remote_login_passwordfile='EXCLUSIVE'
    *.fal_server=test
    *.fal_client=asadmin
    Steps on primary:
    1) alter system set log_archive_dest_state_2=defer;
    2) shutdown immediate;
    3) Made sure that the physical standby has applied all of the redo sent to it following the shutdown.
    4) startup mount;
    5) ALTER DATABASE BACKUP CONTROLFILE to '/home/oracle/control01.ctl';
    6) ALTER SYSTEM ENABLE RESTRICTED SESSION;
    7) ALTER DATABASE OPEN;
    8) Verified that the supplemental logging is on.
    9) ALTER SYSTEM ARCHIVE LOG CURRENT;
    10) Checked for the checkpoint change no. at this point which is 72403818 and is present in archive log file 1_62_775102253.dbf
    11) EXECUTE DBMS_LOGSTDBY.BUILD;
    12) ALTER SYSTEM ARCHIVE LOG CURRENT;
    13) Checked for the archive log containing dictionary build which is 1_64_775102253.dbf
    14) ALTER SYSTEM DISABLE RESTRICTED SESSION;
    Details of archive logs and related checkpoint change nos:
    NAME FIRST_CHANGE# NEXT_CHANGE#
    /u01/asadmin/archive/1_61_775102253.dbf 72402901 72403817
    /u01/asadmin/archive/1_62_775102253.dbf 72403817 72404069
    /u01/asadmin/archive/1_63_775102253.dbf 72404069 72404211
    /u01/asadmin/archive/1_64_775102253.dbf 72404211 72405700
    Steps on standby:
    1) shutdown immediate;
    2) Copy the archive log files 61 (created at the primary after apply stopped at the standby), 62 (contains checkpoint no. 72403818), 63 and 64 (contains the dictionary build). Copy the backup controlfile from step 5 above to the controlfile location given in the standby init file.
    3) startup mount;
    4) Rename all datafiles and redo log files (including standby redo log files) to the correct path on standby.
    5) alter database recover automatic from '/u01/test/archive' until change 72405700 using backup controlfile; -- This completed error-free
    6) alter database guard all; -- this completed error free
    7) alter database open resetlogs; -- this completed error free.
    8) nid target=sys/oracle12 dbname=test
    9) Changed the db_name in init file to new name test.
    10) Added a tempfile to temp tablespaces.
    11) ALTER DATABASE REGISTER LOGICAL LOGFILE '/u01/test/archive/1_61_775102253.dbf'; -- ORA-16225: Missing LogMiner session name for Streams
    12) ALTER DATABASE START LOGICAL STANDBY APPLY INITIAL 72405700; -- This completed error free.
    Also enabled the log_archive_dest_state_2 on primary.
    After this output from some views:
    SQL> SELECT SESSION_ID, STATE FROM V$LOGSTDBY_STATE;
    SESSION_ID STATE
    1 INITIALIZING
    SQL> SELECT SID, SERIAL#, SPID, TYPE FROM V$LOGSTDBY_PROCESS;
    SID SERIAL# SPID TYPE
    587 22 15476 COORDINATOR
    SELECT PERCENT_DONE, COMMAND
    FROM V$LOGMNR_DICTIONARY_LOAD
    WHERE SESSION_ID = (SELECT SESSION_ID FROM V$LOGSTDBY_STATE);
    PERCENT_DONE
    COMMAND
    0
    SQL> SELECT TYPE, HIGH_SCN, STATUS FROM V$LOGSTDBY;
    TYPE HIGH_SCN STATUS
    COORDINATOR ORA-16111: log mining and apply setting up
    SQL> SELECT APPLIED_SCN, NEWEST_SCN FROM DBA_LOGSTDBY_PROGRESS;
    APPLIED_SCN NEWEST_SCN
    72405700 72411501
    SELECT THREAD#, SEQUENCE#, FILE_NAME FROM DBA_LOGSTDBY_LOG L
    WHERE NEXT_CHANGE# NOT IN
    (SELECT FIRST_CHANGE# FROM DBA_LOGSTDBY_LOG WHERE L.THREAD# = THREAD#)
    ORDER BY THREAD#,SEQUENCE#;
    no rows selected
    SQL> SELECT EVENT_TIME, STATUS, EVENT
    FROM DBA_LOGSTDBY_EVENTS
    ORDER BY EVENT_TIMESTAMP, COMMIT_SCN; 2 3
    EVENT_TIME STATUS EVENT
    14-FEB-12 02:00:50 ORA-16111: log mining and apply setting up
    14-FEB-12 02:00:50 Apply LWM 72405699, HWM 72405699, SCN 72405699
    14-FEB-12 02:20:11 ORA-16128: User initiated stop apply successfully
    completed
    14-FEB-12 02:20:39 ORA-16111: log mining and apply setting up
    14-FEB-12 02:20:39 Apply LWM 72405699, HWM 72405699, SCN 72405699
    14-FEB-12 02:54:15 ORA-16128: User initiated stop apply successfully
    completed
    14-FEB-12 02:57:38 ORA-16111: log mining and apply setting up
    EVENT_TIME STATUS EVENT
    14-FEB-12 02:57:38 Apply LWM 72405699, HWM 72405699, SCN 72405699
    14-FEB-12 03:01:36 ORA-16128: User initiated stop apply successfully
    completed
    14-FEB-12 03:13:44 ORA-16111: log mining and apply setting up
    14-FEB-12 03:13:44 Apply LWM 72405699, HWM 72405699, SCN 72405699
    14-FEB-12 04:32:23 ORA-16128: User initiated stop apply successfully
    completed
    14-FEB-12 04:34:17 ORA-16111: log mining and apply setting up
    14-FEB-12 04:34:17 Apply LWM 72405699, HWM 72405699, SCN 72405699
    EVENT_TIME STATUS EVENT
    14-FEB-12 04:36:16 ORA-16128: User initiated stop apply successfully
    completed
    14-FEB-12 04:36:21 ORA-16111: log mining and apply setting up
    14-FEB-12 04:36:21 Apply LWM 72405699, HWM 72405699, SCN 72405699
    14-FEB-12 05:15:22 ORA-16128: User initiated stop apply successfully
    completed
    14-FEB-12 05:15:29 ORA-16111: log mining and apply setting up
    14-FEB-12 05:15:29 Apply LWM 72405699, HWM 72405699, SCN 72405699
    I also grepped for lsp and lcr processes and found that lsp is up, but I do not see any lcr.
    The logs are getting transported to the archive destination on the standby whenever they are archived on the primary, but they are not getting applied to the standby.
    Also, if the standby is down while a log is generated on the primary, it is not automatically transported to the standby once the standby comes back up, which means gap resolution is not working either.
    I see the following in the alert log every time I try to restart the apply; everything seems to be stuck at initialization.
    ALTER DATABASE START LOGICAL STANDBY APPLY (test)
    with optional part
    IMMEDIATE
    Attempt to start background Logical Standby process
    Tue Feb 14 05:15:28 2012
    LSP0 started with pid=28, OS id=23391
    Completed: alter database start logical standby apply immediate
    LOGMINER: Parameters summary for session# = 1
    LOGMINER: Number of processes = 3, Transaction Chunk Size = 201
    LOGMINER: Memory Size = 30M, Checkpoint interval = 150M
    LOGMINER: SpillScn 0, ResetLogScn 0
    -- NOTHING AFTER THIS

    Hello;
    I noticed some of your parameters seem to be wrong.
    fal_client - This is Obsolete in 11.2
    You have db_name='test' on the Standby, it should be 'asadmin'
    fal_server=test is set like this on the standby, it should be 'asadmin'
    I might consider changing VALID_FOR to this :
    VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES)
    I would review section 4.2, "Step-by-Step Instructions for Creating a Logical Standby Database", of Oracle document E10700-02.
    Document 278371.1 is showing its age in my humble opinion.
    -----Wait on this until you fix your parameters----------------------
    Try restarting the SQL Apply
    ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
    I don't see the parameter MAX_SERVERS; try setting it to 8 times the number of cores.
    Use these statements to trouble shoot :
    SELECT NAME, VALUE, UNIT FROM V$DATAGUARD_STATS;
    SELECT NAME, VALUE FROM V$LOGSTDBY_STATS WHERE NAME LIKE 'TRANSACTIONS%';
    SELECT COUNT(1) AS IDLE_PREPARERS FROM V$LOGSTDBY_PROCESS WHERE
    TYPE = 'PREPARER' AND STATUS_CODE = 16166;
    Best Regards
    mseberg
    Edited by: mseberg on Feb 14, 2012 7:37 AM

  • How to delete the foreign archivelogs in a Logical Standby database

    How do I remove the foreign archived logs that are being sent to my logical standby database? I have files in the ASM FRA going back weeks. I thought RMAN would delete them.
    I am doing hot backups of the databases to FRA for both databases. Using ASM, FRA, in a Data Guard environment.
    I am not backing up anything to tape yet.
    The foreign_archivelog directory in the logical standby's ASM FRA keeps growing, and nothing gets deleted when I run the following commands every day:
    delete expired backup;
    delete noprompt force obsolete;
    Primary database RMAN settings (Not all of them)
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 9 DAYS;
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE DB_UNIQUE_NAME 'WMRTPRD' CONNECT IDENTIFIER 'WMRTPRD_CWY';
    CONFIGURE DB_UNIQUE_NAME 'WMRTPRD2' CONNECT IDENTIFIER 'WMRTPRD2_CWY';
    CONFIGURE DB_UNIQUE_NAME 'WMRTPRD3' CONNECT IDENTIFIER 'WMRTPRD3_DG';
    CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
    Logical standby database RMAN setting (not all of them)
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 9 DAYS;
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
    How do I clean up/delete the old ASM foreign_archivelog files?
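    For reference, a hedged sketch of the SQL Apply-side settings that govern automatic deletion of applied foreign archived logs (the 180-minute retention value is only an illustration):
    -- Check the current settings
    SELECT name, value FROM DBA_LOGSTDBY_PARAMETERS
    WHERE name IN ('LOG_AUTO_DELETE', 'LOG_AUTO_DEL_RETENTION_TARGET');
    -- Ensure auto-delete is on and shorten how long applied foreign logs are retained (in minutes)
    EXECUTE DBMS_LOGSTDBY.APPLY_SET('LOG_AUTO_DELETE', 'TRUE');
    EXECUTE DBMS_LOGSTDBY.APPLY_SET('LOG_AUTO_DEL_RETENTION_TARGET', 180);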

    OK, the default is TRUE which is what it is now
    from DBA_LOGSTDBY_PARAMETERS
    LOG_AUTO_DELETE     TRUE          SYSTEM     YES
    I am not talking about deleting the archive log files that the logical database itself creates, but about the standby archive log files sent to the logical database, after they have been applied.
    They are in the alert log as follows under RFS LogMiner: Registered logfile
    RFS[1]: Selected log 4 for thread 1 sequence 159 dbid -86802306 branch 763744382
    Thu Jan 12 15:44:57 2012
    *RFS LogMiner: Registered logfile [+FRA/wmrtprd2/foreign_archivelog/wmrtprd/2012_01_12/thread_1_seq_158.322.772386297] to LogM*
    iner session id [1]
    Thu Jan 12 15:44:58 2012
    LOGMINER: Alternate logfile found. Transition to mining archived logfile for session 1 thread 1 sequence 158, +FRA/wmrtprd2/
    foreign_archivelog/wmrtprd/2012_01_12/thread_1_seq_158.322.772386297
    LOGMINER: End mining logfile for session 1 thread 1 sequence 158, +FRA/wmrtprd2/foreign_archivelog/wmrtprd/2012_01_12/threa
    d_1_seq_158.322.772386297
    LOGMINER: Begin mining logfile for session 1 thread 1 sequence 159, +DG1/wmrtprd2/onlinelog/group_4.284.771760923

  • Error while Creating a Logical Standby Database

    Dears
    I set up a physical standby DB. Now I am trying to convert it to a logical standby DB by following this link:
    http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14239/create_ls.htm#i92346
    When i issue this statement:
    ALTER DATABASE RECOVER TO LOGICAL STANDBY orcl2;
    I am getting the following error:
    ORA-00905: missing keyword.
    I do not find anything in the alert.log file. The syntax seems to be correct, as I carefully copied it from the document itself.
    Please reply.

    Any ideas?

  • Sql Apply issue in logical standby database--(10.2.0.5.0) x86 platform

    Hi Friends,
    I am getting the following exception in the logical standby database during SQL Apply.
    After running the command alter database start logical standby apply, the SQL Apply services start, but after a few seconds they stop automatically with the following errors:
    alter database start logical standby apply
    Tue May 17 06:42:00 2011
    No optional part
    Attempt to start background Logical Standby process
    LOGSTDBY Parameter: MAX_SERVERS = 20
    LOGSTDBY Parameter: MAX_SGA = 100
    LOGSTDBY Parameter: APPLY_SERVERS = 10
    LSP0 started with pid=30, OS id=4988
    Tue May 17 06:42:00 2011
    Completed: alter database start logical standby apply
    Tue May 17 06:42:00 2011
    LOGSTDBY status: ORA-16111: log mining and apply setting up
    Tue May 17 06:42:00 2011
    LOGMINER: Parameters summary for session# = 1
    LOGMINER: Number of processes = 4, Transaction Chunk Size = 201
    LOGMINER: Memory Size = 100M, Checkpoint interval = 500M
    Tue May 17 06:42:00 2011
    LOGMINER: krvxpsr summary for session# = 1
    LOGMINER: StartScn: 0 (0x0000.00000000)
    LOGMINER: EndScn: 0 (0x0000.00000000)
    LOGMINER: HighConsumedScn: 2660033 (0x0000.002896c1)
    LOGMINER: session_flag 0x1
    LOGMINER: session# = 1, preparer process P002 started with pid=35 OS id=4244
    LOGSTDBY Apply process P014 started with pid=47 OS id=5456
    LOGSTDBY Apply process P010 started with pid=43 OS id=6484
    LOGMINER: session# = 1, reader process P000 started with pid=33 OS id=4732
    Tue May 17 06:42:01 2011
    LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1417, X:\TANVI\ARCHIVE2\ARC01417_0748170313.001
    Tue May 17 06:42:01 2011
    LOGMINER: Turning ON Log Auto Delete
    Tue May 17 06:42:01 2011
    LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01417_0748170313.001
    Tue May 17 06:42:01 2011
    LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1418, X:\TANVI\ARCHIVE2\ARC01418_0748170313.001
    LOGSTDBY Apply process P008 started with pid=41 OS id=4740
    LOGSTDBY Apply process P013 started with pid=46 OS id=7864
    LOGSTDBY Apply process P006 started with pid=39 OS id=5500
    LOGMINER: session# = 1, builder process P001 started with pid=34 OS id=4796
    Tue May 17 06:42:02 2011
    LOGMINER: skipped redo. Thread 1, RBA 0x00058a.00000950.0010, nCV 6
    LOGMINER: op 4.1 (Control File)
    Tue May 17 06:42:02 2011
    LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01418_0748170313.001
    Tue May 17 06:42:03 2011
    LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1419, X:\TANVI\ARCHIVE2\ARC01419_0748170313.001
    Tue May 17 06:42:03 2011
    LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01419_0748170313.001
    Tue May 17 06:42:03 2011
    LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1420, X:\TANVI\ARCHIVE2\ARC01420_0748170313.001
    Tue May 17 06:42:03 2011
    LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01420_0748170313.001
    Tue May 17 06:42:03 2011
    LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1421, X:\TANVI\ARCHIVE2\ARC01421_0748170313.001
    LOGSTDBY Analyzer process P004 started with pid=37 OS id=5096
    Tue May 17 06:42:03 2011
    LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01421_0748170313.001
    LOGSTDBY Apply process P007 started with pid=40 OS id=2760
    Tue May 17 06:42:03 2011
    Errors in file x:\oracle\product\10.2.0\admin\tanvi\bdump\tanvi_p001_4796.trc:
    ORA-00600: internal error code, arguments: [krvxbpx20], [1], [1418], [2380], [16], [], [], []
    LOGSTDBY Apply process P012 started with pid=45 OS id=7152
    Tue May 17 06:42:03 2011
    LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1422, X:\TANVI\ARCHIVE2\ARC01422_0748170313.001
    Tue May 17 06:42:03 2011
    LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01422_0748170313.001
    Tue May 17 06:42:03 2011
    LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1423, X:\TANVI\ARCHIVE2\ARC01423_0748170313.001
    Tue May 17 06:42:03 2011
    LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01423_0748170313.001
    Tue May 17 06:42:03 2011
    LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1424, X:\TANVI\ARCHIVE2\ARC01424_0748170313.001
    LOGMINER: session# = 1, preparer process P003 started with pid=36 OS id=5468
    Tue May 17 06:42:03 2011
    LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01424_0748170313.001
    Tue May 17 06:42:04 2011
    LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1425, X:\TANVI\ARCHIVE2\ARC01425_0748170313.001
    LOGSTDBY Apply process P011 started with pid=44 OS id=6816
    LOGSTDBY Apply process P005 started with pid=38 OS id=5792
    LOGSTDBY Apply process P009 started with pid=42 OS id=752
    Tue May 17 06:42:05 2011
    krvxerpt: Errors detected in process 34, role builder.
    Tue May 17 06:42:05 2011
    krvxmrs: Leaving by exception: 600
    Tue May 17 06:42:05 2011
    Errors in file x:\oracle\product\10.2.0\admin\tanvi\bdump\tanvi_p001_4796.trc:
    ORA-00600: internal error code, arguments: [krvxbpx20], [1], [1418], [2380], [16], [], [], []
    LOGSTDBY status: ORA-00600: internal error code, arguments: [krvxbpx20], [1], [1418], [2380], [16], [], [], []
    Tue May 17 06:42:06 2011
    Errors in file x:\oracle\product\10.2.0\admin\tanvi\bdump\tanvi_lsp0_4988.trc:
    ORA-12801: error signaled in parallel query server P001
    ORA-00600: internal error code, arguments: [krvxbpx20], [1], [1418], [2380], [16], [], [], []
    Tue May 17 06:42:06 2011
    LogMiner process death detected
    Tue May 17 06:42:06 2011
    logminer process death detected, exiting logical standby
    LOGSTDBY Analyzer process P004 pid=37 OS id=5096 stopped
    LOGSTDBY Apply process P010 pid=43 OS id=6484 stopped
    LOGSTDBY Apply process P008 pid=41 OS id=4740 stopped
    LOGSTDBY Apply process P012 pid=45 OS id=7152 stopped
    LOGSTDBY Apply process P014 pid=47 OS id=5456 stopped
    LOGSTDBY Apply process P005 pid=38 OS id=5792 stopped
    LOGSTDBY Apply process P006 pid=39 OS id=5500 stopped
    LOGSTDBY Apply process P007 pid=40 OS id=2760 stopped
    LOGSTDBY Apply process P011 pid=44 OS id=6816 stopped
    Tue May 17 06:42:10 2011

    Errors in file x:\oracle\product\10.2.0\admin\tanvi\bdump\tanvi_p001_4796.trc:
    ORA-00600: internal error code, arguments: [krvxbpx20], [1], [1418], [2380], [16], [], [], []
    Submit an SR to Oracle Support.
    Refer to these too:
    *ORA-600/ORA-7445 Error Look-up Tool [ID 153788.1]*
    *Bug 6022014: ORA-600 [KRVXBPX20] ON LOGICAL STANDBY*

  • Creating logical standby database

    Hi all,
    10.2.0.1
    Following this link
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/create_ls.htm
    Where do I need to issue these statements:
    SQL> EXECUTE DBMS_LOGSTDBY.BUILD;
    SQL> ALTER DATABASE RECOVER TO LOGICAL STANDBY db_name;
    on the physical standby or on the primary database?
    I issued the first statement on the primary and the second on the physical standby.
    IN the alert log of standby,i have the following entries.
    Wed Jan 20 15:34:28 2010
    Converting standby mount to primary mount.
    Wed Jan 20 15:34:28 2010
    ACTIVATE STANDBY: Complete - Database mounted as primary (treasury)
    *** DBNEWID utility started ***
    DBID will be changed from 306589979 to new DBID of 330710340 for database
    .........................
    I am trying to create a logical standby database after creating a physical standby database.
    It seems the standby changed to primary, which was not desired.
    Thanks

    I have not tried it myself, but you might want to have a look at this URL.
    It appears to suggest that you need to change the names of the datafiles, as well as the db_name value, explicitly on the standby.
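    For reference, a sketch of the documented 10.2 sequence (your_db_name is a placeholder for the logical standby's new name): DBMS_LOGSTDBY.BUILD runs on the primary, while RECOVER TO LOGICAL STANDBY runs on the mounted physical standby. ALTER DATABASE ACTIVATE STANDBY DATABASE converts the standby into a primary, which matches the unwanted role change shown in the alert log above.
    -- On the physical standby: stop Redo Apply
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    -- On the primary: build the LogMiner dictionary into the redo stream
    EXECUTE DBMS_LOGSTDBY.BUILD;
    -- Back on the standby: convert it and give it its new name
    ALTER DATABASE RECOVER TO LOGICAL STANDBY your_db_name;
    SHUTDOWN IMMEDIATE;
    STARTUP MOUNT;
    ALTER DATABASE OPEN RESETLOGS;
    ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;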

  • Creation of Logical Standby Database Using RMAN ACTIVE DATABASE COMMAND

    Hi All,
    I am confused about how to create a logical standby database from the primary database using the RMAN active database command.
    What I did:
    Create primary database on machine 1 on RHEL 5 with Oracle 11gR2
    Create standby database on machine 2 on RHEL 5 with Oracle 11gR2 from primary using RMAN active database command
    Trying to create logical standby database on machine 3 on RHEL 5 with Oracle 11gR2 using RMAN active database command from primary.
    The point which confuses me is which pfile to use when starting the logical standby in NOMOUNT mode on machine 3: do I need to create a separate pfile for the logical standby DB, as I did for the standby database?
    Previously I created a logical standby database by converting a physical standby to a logical standby.
    I am following the below mentioned doc for the same:
    Creating a physical and a logical standby database in a DR environment | Chen Guang's Blog
    Kindly guide me on how to go about this, or please provide the steps for the same.
    Thanks in advance.

    Thanks for your reply
    I already started the logical standby database with a pfile in NOMOUNT mode and successfully completed the duplication of the database by specifying the DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT parameters.
    But I am not able to receive the logs. Following the above-mentioned blog, I ran the SQL command to check the logs but got "no rows selected".
    My primary database pfile is:
    pc01prmy.__db_cache_size=83886080
    pc01prmy.__java_pool_size=12582912
    pc01prmy.__large_pool_size=4194304
    pc01prmy.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
    pc01prmy.__pga_aggregate_target=79691776
    pc01prmy.__sga_target=239075328
    pc01prmy.__shared_io_pool_size=0
    pc01prmy.__shared_pool_size=134217728
    pc01prmy.__streams_pool_size=0
    *.audit_file_dest='/u01/app/oracle/admin/pc01prmy/adump'
    *.audit_trail='db'
    *.compatible='11.1.0.0.0'
    *.control_files='/u01/app/oracle/oradata/PC01PRMY/controlfile/o1_mf_91g3mdtr_.ctl','/u01/app/oracle/flash_recovery_area/PC01PRMY/controlfile/o1_mf_91g3mf6v_.ctl'
    *.db_block_size=8192
    *.db_create_file_dest='/u01/app/oracle/oradata'
    *.db_domain=''
    *.db_file_name_convert='/u01/app/oracle/oradata/PC01SBY/datafile','/u01/app/oracle/oradata/PC01PRMY/datafile'
    *.db_name='pc01prmy'
    *.db_recovery_file_dest='/u01/app/oracle/flash_recovery_area'
    *.db_recovery_file_dest_size=2147483648
    *.diagnostic_dest='/u01/app/oracle'
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=pc01prmyXDB)'
    *.fal_client='PC01PRMY'
    *.fal_server='PC01SBY'
    *.log_archive_config='DG_CONFIG=(pc01prmy,pc01sby,pc01ls)'
    *.log_archive_dest_1='LOCATION=/u01/app/oracle/flash_recovery_area/PC01PRMY/ VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=pc01prmy'
    *.log_archive_dest_2='SERVICE=pc01sby LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=pc01sby'
    *.log_archive_dest_3='SERVICE=pc01ls LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=pc01ls'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='DEFER'
    *.log_archive_dest_state_3='DEFER'
    *.log_archive_max_processes=30
    *.log_file_name_convert='/u01/app/oracle/oradata/PC01SBY/onlinelog','/u01/app/oracle/oradata/PC01PRMY/onlinelog'
    *.open_cursors=300
    *.pga_aggregate_target=78643200
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=236978176
    *.undo_tablespace='UNDOTBS1'
    My logical standby pfile is:-
    pc01ls.__db_cache_size=92274688
    pc01ls.__java_pool_size=12582912
    pc01ls.__large_pool_size=4194304
    pc01ls.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
    pc01ls.__pga_aggregate_target=79691776
    pc01ls.__sga_target=239075328
    pc01ls.__shared_io_pool_size=0
    pc01ls.__shared_pool_size=125829120
    pc01ls.__streams_pool_size=0
    *.audit_file_dest='/u01/app/oracle/admin/pc01ls/adump'
    *.audit_trail='db'
    *.compatible='11.1.0.0.0'
    *.control_files='/u01/app/oracle/oradata/PC01LS/controlfile/o1_mf_91g3mdtr_.ctl','/u01/app/oracle/flash_recovery_area/PC01LS/controlfile/o1_mf_91g3mf6v_.ctl'
    *.db_block_size=8192
    *.db_create_file_dest='/u01/app/oracle/oradata'
    *.db_domain=''
    *.db_file_name_convert='/u01/app/oracle/oradata/PC01SBY/datafile','/u01/app/oracle/oradata/PC01PRMY/datafile'
    *.db_name='pc01prmy'
    *.db_unique_name='pc01ls'
    *.db_recovery_file_dest='/u01/app/oracle/flash_recovery_area'
    *.db_recovery_file_dest_size=2147483648
    *.diagnostic_dest='/u01/app/oracle'
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=pc01prmyXDB)'
    *.log_archive_config='DG_CONFIG=(pc01prmy,pc01sby,pc01ls)'
    *.log_archive_dest_1='LOCATION=/u01/app/oracle/flash_recovery_area/PC01PRMY/ VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=pc01prmy'
    *.log_archive_dest_2='LOCATION=/u01/app/oracle/flash_recovery_area/PC01LS/ VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=pc01ls'
    *.log_archive_dest_3='SERVICE=pc01ls LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=pc01ls'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_max_processes=30
    *.log_file_name_convert='/u01/app/oracle/oradata/PC01SBY/onlinelog','/u01/app/oracle/oradata/PC01PRMY/onlinelog'
    *.open_cursors=300
    *.pga_aggregate_target=78643200
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=236978176
    *.undo_tablespace='UNDOTBS1'
    Kindly advise on the same.
