Logical standby DB with unsupported datatype

Hello,
Has anyone set up the following Data Guard configuration, even though it is not recommended by Oracle?
Set up a logical standby DB from a primary DB that contains unsupported data types. I need (and want) to save the time and effort of configuring a separate database for reporting, and to avoid Oracle Streams, Change Data Capture, or even the new middleware product Data Integrator.
Data is sent from the production DB, and I hope the logical standby DB can still run despite the unsupported data types by using the DBMS_LOGSTDBY.SKIP procedure to skip the unwanted and unsupported objects from the primary DB.
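For illustration, this is roughly the mechanism I have in mind (a sketch only; GIS and SPATIAL_TAB are placeholder names, and SQL Apply has to be stopped before registering skip rules):
SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
SQL> EXECUTE DBMS_LOGSTDBY.SKIP(stmt => 'DML', schema_name => 'GIS', object_name => 'SPATIAL_TAB');
SQL> EXECUTE DBMS_LOGSTDBY.SKIP(stmt => 'SCHEMA_DDL', schema_name => 'GIS', object_name => 'SPATIAL_TAB');
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
(Tables with unsupported datatypes should in any case be listed in DBA_LOGSTDBY_UNSUPPORTED and be excluded by SQL Apply automatically.)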
I would appreciate any comments on, and experience with, using a logical standby DB for reporting purposes.
regards
Sahba

Hello Anantha,
thanks for your reply.
Unfortunately, the production database uses datatypes such as MDSYS.SDO_GEOMETRY.
These datatypes are not supported by the logical standby DB.
I thought I could skip the tables with such datatypes when applying redo data in the logical standby DB; the DBMS_LOGSTDBY.SKIP procedure allows such a mechanism.
The reason I need a logical standby DB is that it stays open for read/write during the apply process. As I mentioned, I need an up-to-date replica of the production DB for reporting purposes, minus the tables with unsupported datatypes. Because those tables are not part of the reporting procedure, I thought of skipping them from the apply in the logical standby DB.
So, has anyone had any complications with a logical standby DB even when using the DBMS_LOGSTDBY.SKIP procedure?
Are there any other problems encountered with a logical standby DB that you can share here?
thanks
regards
Sahba

Similar Messages

  • Logical Standby Database with 10g+ASM on both sides??

    Hi out there,
    is there a known way to establish a logical standby database on 10g, if both
    sides are running with an ASM setup?
    I've tried to create one out of a physical standby database (which is set up
    and running without any problems), as a book suggested.
    The procedure was:
    1. switch on supplemental logging
    2. prepare initiation parameters (for archive logging etc.) on both sides for
    logical stb.
    3. shut down the physical standby
    4. alter database create logical standby controlfile as '<path>'; on the
    primary, transfer the controlfile to the standby db. Here I had to use RMAN
    to copy the controlfile into the ASM System, and modify the initfile/spfile
    in order to use the controlfile. No problem so far.
    5. mount the standby database, alter database recover managed standby database
    disconnect; -> At this point, the alert log complained about non-available
    datafiles.
    6. alter database activate standby database; --> fails ("needs recovery") due
    to last point.
    The trouble is, the controlfile created at point 4 contains wrong paths to
    the datafiles. Since I cannot have the same disk group name on the standby
    system, and since ASM renames the stored datafiles on its own, the complaints
    at point 5 are understandable, but nevertheless annoying.
    I tried to back up a controlfile to trace and change the paths, but after
    mounting the standby with this controlfile and proceeding at point 5, the
    system says "<path> is not a standby controlfile".
    Is there a different way of creating a "Logical Standby Database with 10g+ASM
    on both sides"? Metalink said nothing about LogStby and ASM.
    Best regards and thanks in advance,
    Martin

    I'm not sure if this will work but try:
    1. create trace control file (you did it)
    2. change paths (you did it)
    3. recreate the control file (you did it)
    ... there was an error during the mount before,
    so mount the database (not as standby)
    4. create standby control file (from recreated control file)
    5. shutdown instance, replace control file with new standby control file or replace the control filename in parameter file.
    6. mount as standby
    What happened?
    Update: Tested on my side and it worked fine... How about you?
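    A minimal sketch of that sequence (file names are placeholders, and the exact CREATE ... STANDBY CONTROLFILE variant should match whatever your creation procedure used in step 4):
    SQL> STARTUP NOMOUNT
    SQL> -- run the CREATE CONTROLFILE script from the trace, with paths corrected for the
    SQL> -- standby ASM disk group; it leaves the database mounted (not as standby)
    SQL> @recreate_ctl.sql
    SQL> ALTER DATABASE CREATE LOGICAL STANDBY CONTROLFILE AS '/tmp/stby.ctl';
    SQL> SHUTDOWN IMMEDIATE
    Then point CONTROL_FILES at the new standby controlfile, or restore it back into ASM with RMAN (STARTUP NOMOUNT; RESTORE CONTROLFILE FROM '/tmp/stby.ctl';), and mount the standby again.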
    Message was edited by:
    Ivan Kartik

  • Logical Standby recover with RMAN

    Hi All,
    I have a test environment with Primary DB Server, Physical Standby and Logical Standby.
    The Logical Standby DB (cur_log_stdb) is backed up every evening by RMAN and I have a question:
    If I recover my Logical Standby DB from backup and switch replication to the new Logical Standby DB (new_log_stdb), will it work or not?
    My steps e.g.:
    1. Build a new server for new_log_stdb and recreate the directory structure;
    2. Adapt the listener.ora and tnsnames.ora files from cur_log_stdb for new_log_stdb;
    3. Restore DB with RMAN from backup to new_log_stdb;
    4. On cur_log_stdb execute "alter database stop logical standby apply";
    5. Change a DNS name from cur_log_stdb to new_log_stdb;
    6. On new_log_stdb execute "alter database start logical standby apply immediate";
    I'm not sure whether the archive logs for the period since the RMAN backup was taken will be applied to new_log_stdb.
    But if this plan won't work, how can I restore the Logical Standby DB from an RMAN backup and resume replication from the Primary?
    Configuration:
    Oracle Linux 6.4
    Oracle Database 11.2.0.3
    Primary and Physical with Data Guard

    Hello;
    The only way to know for sure is to test it. You are asking somebody you don't know to confirm a recovery test for you. You have to perform the test yourself to be certain.
    If your plan does not work you can always rebuild the Standby.
    Best Regards
    mseberg
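    If you do test it, a quick way to see whether the logs for the gap since the backup are registered and applied on new_log_stdb is something along these lines (a sketch, run on the logical standby):
    SQL> SELECT APPLIED_SCN, NEWEST_SCN FROM DBA_LOGSTDBY_PROGRESS;
    SQL> SELECT THREAD#, SEQUENCE#, APPLIED FROM DBA_LOGSTDBY_LOG ORDER BY SEQUENCE#;
    If APPLIED_SCN keeps advancing and the post-backup sequences show up and get marked as applied, the gap is being recovered from the archives the primary still has.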

  • Logical standby failed with LOGSTDBY status: ORA-01281: SCN range specified

    My logical standby database on 11gR2 failed with the above message.
    A search on the error says
    Error: ORA-01281 (ORA-1281)
    Text: SCN range specified is invalid
    Cause: StartSCN may be greater than EndSCN, or the SCN specified may be
    invalid.
    Action: Specify a valid SCN range.
    Does anyone have an idea how to specify a valid SCN range?
    I appreciate your help.

    When you went to DBMS_LOGSTDBY
    http://www.morganslibrary.org/reference/pkgs/dbms_logstdby.html
    what did you do when you saw the procedure MAP_PRIMARY_SCN?
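    As a hedged illustration only (the SCN 2222333 is a placeholder; MAP_PRIMARY_SCN maps an SCN on the primary to the corresponding SCN on the logical standby):
    SQL> VARIABLE standby_scn NUMBER
    SQL> EXECUTE :standby_scn := DBMS_LOGSTDBY.MAP_PRIMARY_SCN(2222333);
    SQL> PRINT standby_scn
    Comparing that with APPLIED_SCN and NEWEST_SCN in DBA_LOGSTDBY_PROGRESS should show what a valid range looks like for your standby.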

  • Logical standby error with export dump

    I have a logical standby set up on Oracle 10g, and when I run an export dump from the logical standby I get this error.
    EXP-00008: ORACLE error 16224 encountered
    ORA-16224: Database Guard is enabled
    EXP-00000: Export terminated unsuccessfully
    Can someone help me out: how is it possible to take an export dump and a Data Pump export from a logical standby?
    thanks

    16224, 00000, "Database Guard is enabled"
    // *Cause: Operation could not be performed because database guard is enabled
    // *Action: Verify operation is correct and disable database guard
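    One commonly used approach (a sketch only; I have not verified which guard level is sufficient for classic exp versus Data Pump) is to relax the database guard for the export and put it back afterwards:
    SQL> ALTER SESSION DISABLE GUARD;
    or, at database level (allows writes to objects not maintained by SQL Apply, such as a Data Pump master table):
    SQL> ALTER DATABASE GUARD STANDBY;
    Then run the export, and afterwards restore the guard:
    SQL> ALTER DATABASE GUARD ALL;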

  • Discoverer and Logical Standby

    We don't wish to run Discoverer against our production database. To that end, we have set up a logical standby on another db server. We plan to allow our users to run their Discoverer reports using the data from the standby database.
    Q: Does anyone have any 'lessons learned' or comments regarding this type of setup? Any 'gotchas'?
    thanks, all.....

    A physical standby is a byte-exact copy.
    If you were to overwrite all files of the primary with the corresponding standby files, Oracle wouldn't notice. That said, the database isn't available for normal operations; you cannot create any segments in it.
    A logical standby database is a duplicate database in which all INSERT, UPDATE etc statements are re-executed.
    There are limitations with respect to datatypes: not all datatypes are supported.
    Tables with unsupported datatypes are automagically suppressed from the standby. The database is open and can be used.
    However, as it is not a byte-exact copy, you can not use it for Disaster Recovery purposes.
    Sybrand Bakker
    Senior Oracle DBA

  • Logical Standby Database - Doubts

    Hi everyone,
    I have a doubt about this view: dba_logstdby_unsupported;
    This view shows me about 400 tables, so I want to make sure I understand: will DML operations on these tables simply not be replicated to the other node? Or will only the supported columns be replicated in new records? Or will new records on these tables be replicated only when the unsupported columns are NULL?
    Thank you very much if someone can help me with these doubts.
    Regards

    This is what the documentation says:
    If the primary database contains unsupported tables, SQL Apply automatically excludes these tables when applying redo data to the logical standby database. (Source: Unsupported Tables)
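    To see exactly what will be excluded, queries along these lines (a sketch) can be run on the primary or the standby:
    SQL> SELECT DISTINCT OWNER, TABLE_NAME FROM DBA_LOGSTDBY_UNSUPPORTED ORDER BY OWNER, TABLE_NAME;
    SQL> SELECT OWNER, TABLE_NAME FROM DBA_LOGSTDBY_NOT_UNIQUE WHERE BAD_COLUMN = 'Y';
    The first lists tables SQL Apply will not maintain at all; the second flags otherwise supported tables without a primary key or non-null unique index whose rows may not be uniquely identifiable.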

  • Replicate from primary to logical standby

    We are considering a logical standby, but have three tables with an XMLTYPE datatype. We have successfully set up a bidirectional Streams environment, with Advanced Replication also handling those tables that Streams cannot (such as the ones already noted).
    However, when we try to set up Advanced Replication on the primary and logical standby, adding a master database at the logical standby hangs with AWAIT_CALLBACK from the standby. We turned off the guard and SKIPped apply on the replication schema, but it still hangs.
    Is this by design (it does seem odd to replicate to a standby database, I know :)), or is this completely impossible, or are we missing something? Will we have to use Streams to accomplish this, rather than a logical standby (which is built on top of Streams)?
    Thanks!
    Steve

    Sequences are stored and administered in a data dictionary table.
    The dictionary is 'replicated'.
    What is your exact problem?
    Sybrand Bakker
    Senior Oracle DBA

  • Creation of Logical Standby Database Using RMAN ACTIVE DATABASE COMMAND

    Hi All,
    I am confused about how to create a logical standby database from a primary database using the RMAN active database command.
    What I did:
    Created the primary database on machine 1 on RHEL 5 with Oracle 11gR2.
    Created a standby database on machine 2 on RHEL 5 with Oracle 11gR2 from the primary using the RMAN active database command.
    Now trying to create a logical standby database on machine 3 on RHEL 5 with Oracle 11gR2 using the RMAN active database command from the primary.
    The point that confuses me is which pfile to use to start the logical standby in NOMOUNT mode on machine 3: just as I created a pfile for the standby database, do I need to create a pfile for the logical standby DB?
    I have previously created a logical standby database by converting a physical standby to a logical standby.
    I am following the below mentioned doc for the same:
    Creating a physical and a logical standby database in a DR environment | Chen Guang's Blog
    Kindly guide me on how to proceed, or please provide the steps for the same.
    Thanks in advance.

    Thanks for your reply.
    I already started the logical standby database with a pfile in NOMOUNT mode and successfully completed the duplication of the database by setting the DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT parameters.
    But I am not able to receive the logs. As on the above-mentioned blog, I ran the SQL command to check the logs but I am getting "no rows selected".
    My primary database pfile is:
    pc01prmy.__db_cache_size=83886080
    pc01prmy.__java_pool_size=12582912
    pc01prmy.__large_pool_size=4194304
    pc01prmy.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
    pc01prmy.__pga_aggregate_target=79691776
    pc01prmy.__sga_target=239075328
    pc01prmy.__shared_io_pool_size=0
    pc01prmy.__shared_pool_size=134217728
    pc01prmy.__streams_pool_size=0
    *.audit_file_dest='/u01/app/oracle/admin/pc01prmy/adump'
    *.audit_trail='db'
    *.compatible='11.1.0.0.0'
    *.control_files='/u01/app/oracle/oradata/PC01PRMY/controlfile/o1_mf_91g3mdtr_.ctl','/u01/app/oracle/flash_recovery_area/PC01PRMY/controlfile/o1_mf_91g3mf6v_.ctl'
    *.db_block_size=8192
    *.db_create_file_dest='/u01/app/oracle/oradata'
    *.db_domain=''
    *.db_file_name_convert='/u01/app/oracle/oradata/PC01SBY/datafile','/u01/app/oracle/oradata/PC01PRMY/datafile'
    *.db_name='pc01prmy'
    *.db_recovery_file_dest='/u01/app/oracle/flash_recovery_area'
    *.db_recovery_file_dest_size=2147483648
    *.diagnostic_dest='/u01/app/oracle'
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=pc01prmyXDB)'
    *.fal_client='PC01PRMY'
    *.fal_server='PC01SBY'
    *.log_archive_config='DG_CONFIG=(pc01prmy,pc01sby,pc01ls)'
    *.log_archive_dest_1='LOCATION=/u01/app/oracle/flash_recovery_area/PC01PRMY/ VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=pc01prmy'
    *.log_archive_dest_2='SERVICE=pc01sby LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=pc01sby'
    *.log_archive_dest_3='SERVICE=pc01ls LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=pc01ls'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='DEFER'
    *.log_archive_dest_state_3='DEFER'
    *.log_archive_max_processes=30
    *.log_file_name_convert='/u01/app/oracle/oradata/PC01SBY/onlinelog','/u01/app/oracle/oradata/PC01PRMY/onlinelog'
    *.open_cursors=300
    *.pga_aggregate_target=78643200
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=236978176
    *.undo_tablespace='UNDOTBS1'
    My logical standby pfile is:-
    pc01ls.__db_cache_size=92274688
    pc01ls.__java_pool_size=12582912
    pc01ls.__large_pool_size=4194304
    pc01ls.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
    pc01ls.__pga_aggregate_target=79691776
    pc01ls.__sga_target=239075328
    pc01ls.__shared_io_pool_size=0
    pc01ls.__shared_pool_size=125829120
    pc01ls.__streams_pool_size=0
    *.audit_file_dest='/u01/app/oracle/admin/pc01ls/adump'
    *.audit_trail='db'
    *.compatible='11.1.0.0.0'
    *.control_files='/u01/app/oracle/oradata/PC01LS/controlfile/o1_mf_91g3mdtr_.ctl','/u01/app/oracle/flash_recovery_area/PC01LS/controlfile/o1_mf_91g3mf6v_.ctl'
    *.db_block_size=8192
    *.db_create_file_dest='/u01/app/oracle/oradata'
    *.db_domain=''
    *.db_file_name_convert='/u01/app/oracle/oradata/PC01SBY/datafile','/u01/app/oracle/oradata/PC01PRMY/datafile'
    *.db_name='pc01prmy'
    *.db_unique_name='pc01ls'
    *.db_recovery_file_dest='/u01/app/oracle/flash_recovery_area'
    *.db_recovery_file_dest_size=2147483648
    *.diagnostic_dest='/u01/app/oracle'
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=pc01prmyXDB)'
    *.log_archive_config='DG_CONFIG=(pc01prmy,pc01sby,pc01ls)'
    *.log_archive_dest_1='LOCATION=/u01/app/oracle/flash_recovery_area/PC01PRMY/ VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=pc01prmy'
    *.log_archive_dest_2='LOCATION=/u01/app/oracle/flash_recovery_area/PC01LS/ VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=pc01ls'
    *.log_archive_dest_3='SERVICE=pc01ls LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=pc01ls'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_max_processes=30
    *.log_file_name_convert='/u01/app/oracle/oradata/PC01SBY/onlinelog','/u01/app/oracle/oradata/PC01PRMY/onlinelog'
    *.open_cursors=300
    *.pga_aggregate_target=78643200
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=236978176
    *.undo_tablespace='UNDOTBS1'
    Kindly advise on the same.
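    One thing that stands out in the pfiles above (an observation, not a verified diagnosis): on the primary, log_archive_dest_state_2 and log_archive_dest_state_3 are set to DEFER, so nothing is shipped to pc01sby or pc01ls until they are enabled, and on the logical standby log_archive_dest_3 points back at the pc01ls service itself. If the destinations are simply deferred, enabling the one for the logical standby on the primary and checking its status may be all that is missing, e.g.:
    SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_3='ENABLE';
    SQL> ALTER SYSTEM SWITCH LOGFILE;
    SQL> SELECT DEST_ID, STATUS, ERROR FROM V$ARCHIVE_DEST WHERE DEST_ID IN (2, 3);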

  • Logical standby server stopped applying changes

    Hi
    I set up a logical standby database with the database guard and it worked fine for some time. But recently I had to use it again and discovered that applying changes from the primary database to the standby database had simply stopped working. I see one entry per day in V$ARCHIVED_LOG. If I restart the logical standby, the changes from the primary server are applied. But if I just make a change on the primary server and even run 'alter system switch logfile', I see an entry in V$ARCHIVED_LOG on the primary server but not on the standby server (BTW, in general there are many more entries in this view on the primary server). I checked the pairs of log files indicated by the log_file_name_convert parameter in the standby server's spfile: their last-modified dates are always the same.
    I will paste spfile of my standby server (dh5). Primary server name is dh2.
    dh2.__db_cache_size=79691776
    dh5.__db_cache_size=96468992
    dh2.__java_pool_size=4194304
    dh5.__java_pool_size=4194304
    dh2.__large_pool_size=4194304
    dh5.__large_pool_size=4194304
    dh2.__shared_pool_size=71303168
    dh5.__shared_pool_size=54525952
    dh2.__streams_pool_size=0
    dh5.__streams_pool_size=0
    *.audit_file_dest='/var/lib/oracle/oracle/product/10.2.0/db_1/admin/dh5/adump'
    *.background_dump_dest='/var/lib/oracle/oracle/product/10.2.0/db_1/admin/dh5/bdump'
    *.compatible='10.2.0.1.0'
    *.control_files='/var/lib/oracle/oracle/product/10.2.0/db_1/oradata/dh5/control01.ctl'
    *.core_dump_dest='/var/lib/oracle/oracle/product/10.2.0/db_1/admin/dh5/cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_file_name_convert='dh2','dh5'
    *.db_name='dh7'
    *.db_recovery_file_dest='/var/lib/oracle/oracle/product/10.2.0/db_1/flash_recovery_area'
    *.db_recovery_file_dest_size=2147483648
    *.db_unique_name='dh5'
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=dh2XDB)'
    *.fal_client='dh5'
    *.fal_server='dh2'
    *.job_queue_processes=10
    *.log_archive_config='DG_CONFIG=(dh2,dh5)'
    *.log_archive_dest_1='LOCATION=/var/lib/oracle/oracle/product/10.2.0/db_1/oradata/dh5_local
    VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES)
    DB_UNIQUE_NAME=dh5'
    *.log_archive_dest_2='SERVICE=dh2 LGWR ASYNC
    VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
    DB_UNIQUE_NAME=dh2'
    *.log_archive_dest_3='LOCATION=/var/lib/oracle/oracle/product/10.2.0/db_1/oradata/dh5
    VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLES)
    DB_UNIQUE_NAME=dh5'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_dest_state_3='ENABLE'
    *.log_archive_format='%t_%s_%r.arc'
    *.log_archive_max_processes=30
    *.log_file_name_convert='oradata/dh2/redo01.log','flash_recovery_area/DH5/onlinelog/o1_mf_4_5x0o5grc_.log','oradata/dh2/redo02.log','flash_recovery_area/DH5/onlinelog/o1_mf_5_5x0o61mw_.log','oradata/dh2/redo03.log','flash_recovery_area/DH5/onlinelog/o1_mf_6_5x0o63gj_.log'
    *.nls_language='AMERICAN'
    *.open_cursors=300
    *.pga_aggregate_target=311427072
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=167772160
    *.undo_management='AUTO'
    *.undo_retention=3600
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='/var/lib/oracle/oracle/product/10.2.0/db_1/admin/dh5/udump'
    Thanks in advance for any help.
    JM

    Hi,
    Nice to hear your issue got resolved.
    It is good practice to keep monitoring the progress of SQL apply on the logical standby on a regular basis.
    You can mark my response as helpful if it has helped you.
    Regards
    Anudeep
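    For reference, a few of the standard SQL Apply health checks on the logical standby look roughly like this (view names are from 10gR2; exact columns vary slightly by release):
    SQL> SELECT STATE FROM V$LOGSTDBY_STATE;
    SQL> SELECT APPLIED_SCN, NEWEST_SCN FROM DBA_LOGSTDBY_PROGRESS;
    SQL> SELECT TYPE, HIGH_SCN, STATUS FROM V$LOGSTDBY_PROCESS;
    The first shows the apply state (e.g. APPLYING, IDLE, WAITING FOR DICTIONARY LOGS), the second compares applied redo with the newest redo received, and the third lists the COORDINATOR, READER, BUILDER and APPLIER processes.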

  • Add Datafile in Logical Standby Database

    Hi,
    I have added one datafile to our primary RAC DB. We have a logical standby database with file management set to MANUAL. The primary RAC database and the logical standby DB have different storage structures. When the archive is applied on the logical standby database, it throws the error "error in creating datafile 'path'".
    I would appreciate knowing the steps to add a datafile in this kind of environment, and how I can get past this problem now other than skipping the transaction for that DDL.
    Thanks in advance.
    Dewan

    When the archive is applied on the logical standby database, it throws the error "error in creating datafile 'path'"
    Can you post the full error message with the error number?
    From Manual..
    8.3.1.2 Adding a Tablespace and a Datafile When STANDBY_FILE_MANAGEMENT Is Set to MANUAL
    The following example shows the steps required to add a new datafile to the primary and standby database when the STANDBY_FILE_MANAGEMENT initialization parameter is set to MANUAL. You must set the STANDBY_FILE_MANAGEMENT initialization parameter to MANUAL when the standby datafiles reside on raw devices.
    Add a new tablespace to the primary database:
    SQL> CREATE TABLESPACE new_ts DATAFILE '/disk1/oracle/oradata/payroll/t_db2.dbf'
    2> SIZE 1m AUTOEXTEND ON MAXSIZE UNLIMITED;
    Verify the new datafile was added to the primary database:
    SQL> SELECT NAME FROM V$DATAFILE;
    NAME
    /disk1/oracle/oradata/payroll/t_db1.dbf
    /disk1/oracle/oradata/payroll/t_db2.dbf
    Perform the following steps to copy the tablespace to a remote standby location:
    Place the new tablespace offline:
    SQL> ALTER TABLESPACE new_ts OFFLINE;
    Copy the new tablespace to a local temporary location using an operating system utility copy command. Copying the files to a temporary location will reduce the amount of time the tablespace must remain offline. The following example copies the tablespace using the UNIX cp command:
    % cp /disk1/oracle/oradata/payroll/t_db2.dbf
    /disk1/oracle/oradata/payroll/s2t_db2.dbf
    Place the new tablespace back online:
    SQL> ALTER TABLESPACE new_ts ONLINE;
    Copy the local copy of the tablespace to a remote standby location using an operating system utility command. The following example uses the UNIX rcp command:
    %rcp /disk1/oracle/oradata/payroll/s2t_db2.dbf standby_location
    Archive the current online redo log file on the primary database so it will get transmitted to the standby database:
    SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
    Use the following query to make sure that Redo Apply is running. If the MRP or MRP0 process is returned, Redo Apply is being performed.
    SQL> SELECT PROCESS, STATUS FROM V$MANAGED_STANDBY;
    Verify the datafile was added to the standby database after the archived redo log file was applied to the standby database:
    SQL> SELECT NAME FROM V$DATAFILE;
    NAME
    /disk1/oracle/oradata/payroll/s2t_db1.dbf
    /disk1/oracle/oradata/payroll/s2t_db2.dbf
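    Note that the steps quoted above are written for Redo Apply on a physical standby. For a logical standby (SQL Apply), a commonly used pattern when the ADD DATAFILE DDL fails because the primary's path does not exist is roughly the following; the path is a placeholder and should be the correct location on the standby:
    SQL> ALTER SESSION DISABLE GUARD;
    SQL> ALTER TABLESPACE new_ts ADD DATAFILE '/standby/path/t_db2.dbf' SIZE 1M AUTOEXTEND ON;
    SQL> ALTER SESSION ENABLE GUARD;
    SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE SKIP FAILED TRANSACTION;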

  • Logical Standby Database Not Getting Sync With Primary Database

    Hi All,
    I am using a Primary DB and Logical Standby DB configuration in Oracle 10g:-
    Version Name:-
    SQL> select * from v$version;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bi
    PL/SQL Release 10.2.0.5.0 - Production
    CORE 10.2.0.5.0 Production
    TNS for Solaris: Version 10.2.0.5.0 - Production
    NLSRTL Version 10.2.0.5.0 - Production
    We built the logical standby last week and to date the logical DB is not in sync. I have checked the init parameters and I don't see any problems with them. The archive log destinations are also fine.
    We have an important table named "HPD_HELPDESK" whose record count is growing gradually on the primary, whereas in the logical standby it is not growing. There is a difference of some 19K records between the two tables.
    I have checked the alert log but it is not giving any error message either. Please find the last few lines of the alert log of the logical database:
    RFS LogMiner: Registered logfile [oradata_san1/oradata/remedy/arch/ars1_1703_790996778.arc] to LogMiner session id [1]
    Tue Aug 28 14:56:52 GMT 2012
    RFS[2853]: Successfully opened standby log 5: '/oracle_data/oradata/remedy/stbyredo01.log'
    Tue Aug 28 14:56:58 GMT 2012
    RFS LogMiner: Client enabled and ready for notification
    Tue Aug 28 14:57:00 GMT 2012
    RFS LogMiner: Registered logfile [oradata_san1/oradata/remedy/arch/ars1_1704_790996778.arc] to LogMiner session id [1]
    Tue Aug 28 15:06:40 GMT 2012
    RFS[2854]: Successfully opened standby log 5: '/oracle_data/oradata/remedy/stbyredo01.log'
    Tue Aug 28 15:06:47 GMT 2012
    RFS LogMiner: Client enabled and ready for notification
    Tue Aug 28 15:06:49 GMT 2012
    RFS LogMiner: Registered logfile [oradata_san1/oradata/remedy/arch/ars1_1705_790996778.arc] to LogMiner session id [1]
    I am not able to work out why the records are not growing in the logical DB. Please provide your input.
    Regards,
    Arijit

    How do you know that there's such a gap between the tables?
    If your standby db is a physical standby, then it is not open and you can't query your table without cancelling the recovery of the managed standby database.
    What does it say if you execute this sql?
    SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
    The ARCH processes should be connected and MRP should be waiting for a file.
    If you query for the archive_gaps, do you get any hits?
    select * from gv$archive_gap;
    If you're not working in a RAC environment you need to query v$archive_gap instead!
    Did you check whether the archives generated from the primary instance are transferred and present in the file system of your standby database?
    I believe your standby is not in recovery_mode anymore or has an archive_gap, which is the reason why it doesn't catch up anymore.
    Hope it helps a little,
    Regards,
    Sebastian
    PS: I'm working on 11g, so unfortunately I'm not quite sure whether these views exist in 10gR2. It's worth a try though!
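    Since the standby in question is a logical standby, the SQL Apply equivalents of those checks would be along these lines (a sketch, run on the standby):
    SQL> SELECT FILE_NAME, SEQUENCE#, APPLIED FROM DBA_LOGSTDBY_LOG ORDER BY SEQUENCE#;
    SQL> SELECT EVENT_TIME, STATUS, EVENT FROM DBA_LOGSTDBY_EVENTS ORDER BY EVENT_TIME;
    SQL> SELECT APPLIED_SCN, NEWEST_SCN FROM DBA_LOGSTDBY_PROGRESS;
    The first shows whether the primary's archives are registered with the LogMiner session at all, the second shows recent SQL Apply errors or skipped statements, and the third shows whether the applied SCN is advancing.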

  • Interesting issue with Logical Standby and database triggers

    We have a logical standby from which, each month, we export (expdp) a schema (CSPAN) that is being maintained by SQL Apply and import (impdp) it into a 'frozen copy' (e.g. CSPAN201104) using REMAP_SCHEMA.
    This works fine, although we've noticed that because the triggers on the exported schema have the original schema (CSPAN) hard-referenced in their definitions, they are imported into and owned by the new 'frozen' schema but are still 'attached' to the original schema's tables.
    This is currently causing an issue where the frozen schema's trigger is INVALID, which causes SQL Apply to fail. This is the error:
    'CSPAN201104.AUD_R_TRG_PEOPLE' is invalid and failed re-validation
    Failed SQL update "CSPAN"."PEOPLE" set "ORG_ID" = 2, "ACTIVE_IND" = 'Y', "CREATE_DT" = TO_DATE('22-JUL-08','DD-MON-RR'), "CREATOR_NM" = 'LC', "FIRST_NM" = 'Test', "LAST_PERSON" = 'log'...
    Note: this trigger references the CSPAN schema (...AFTER INSERT ON CSPAN.PEOPLE...)
    I suspect that triggers on a SQL Apply maintained schema in a logical standby do not need to be valid (since they do not fire), but what about triggers that reference a SQL Apply schema yet are 'owned' by a non-SQL Apply schema? This trigger references a SQL Apply table, so it should not fire.
    This is 10gR2 (10.2.0.4) on 64 bit Windows.
    Regards
    Graeme King

    OK, I've finally got around to actually testing this, and it looks like you are not quite correct, Larry, in this statement...
    'Since this trigger belongs to a new schema that is not controlled by SQL Apply (CSPAN201105) it will fire. But the trigger references a schema that is controlled by SQL Apply (CSPAN) so it will fail because it has to be validated.'
    My testing concludes that even though the trigger belongs to the CSPAN201105 schema (not controlled by SQL Apply) and references a schema controlled by SQL Apply, it does not fire. However, it DOES need to be valid or it breaks SQL Apply.
    My testing was as follows:
    Primary DB
    Create new EMP table in CSPAN schema on Primary
    Create new table TRIGGER_LOG in CSPAN schema on Primary
    Create AFTER INSERT/UPDATE trigger on CSPAN.EMP table (that inserts into TRIGGER_LOG table)
    **All of the above replicates to Standby**
    Standby DB
    Create new table TRIGGER_LOG_STNDBY in CSPAN201105 schema on the standby
    Create new trigger in CSPAN201105 schema that fires on INSERT/UPDATE on CSPAN.EMP but that inserts into the CSPAN201105.TRIGGER_LOG_STNDBY table
    Primary DB
    Insert 4 rows into CSPAN.EMP
    Update 2 rows in CSPAN.EMP
    TRIGGER_LOG table has 6 rows as expected
    Standby DB
    TRIGGER_LOG table has 6 rows as expected
    TRIGGER_LOG_STNDBY table has **0 rows**
    Re-create the trigger in the CSPAN201105 schema that fires on INSERT/UPDATE on CSPAN.EMP but inserts into the CSPAN201105.TRIGGER_LOG_STNDBY table, **but with a syntax error**
    Primary DB
    Update 1 row in CSPAN.EMP
    TRIGGER_LOG table has 7 rows as expected
    Standby DB
    SQL Apply is broken - ORA-04098: trigger 'CSPAN201105.TEST_TRIGGER_TRG' is invalid and failed re-validation
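    For reference, the standby-side trigger in this test was of the following general shape (the TRIGGER_LOG_STNDBY column and the logged value are made up for illustration):
    CREATE OR REPLACE TRIGGER cspan201105.test_trigger_trg
      AFTER INSERT OR UPDATE ON cspan.emp
      FOR EACH ROW
    BEGIN
      -- never executes while SQL Apply maintains CSPAN.EMP, but it must still compile cleanly,
      -- otherwise SQL Apply stops with ORA-04098 as shown above
      INSERT INTO cspan201105.trigger_log_stndby (logged_at) VALUES (SYSDATE);
    END;
    /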

  • CPU patch procedure with physical and logical standby database in place

    Hello All,
    I've also placed this in the Upgrades forum, but perhaps this is the best place to have put it.
    I'm trying to compile a decent set of steps for applying the CPUOCT2008 patch to our production RAC cluster, which has both a logical and a physical standby in place. I've read a tonne of documentation, including the CPU readme and Doc IDs 437276.1 and 278641.1. I've also read through the "Upgrading Databases in a Data Guard Configuration" chapter of Data Guard Concepts and Administration. That last document is really about upgrading to a new full version of Oracle rather than applying a CPU (at least I think that's the case), and Doc ID 437276.1 is rather sparse on details.
    I guess what I'm trying to understand is the proper method for applying the patch with the logical standby in place. The physical standby looks pretty straightforward: after running opatch on it as well, it will basically have all of the changes applied to the primary shipped over and applied as per the normal primary/standby relationship. Will the same be true for the logical standby (having applied the patch and then re-enabled SQL Apply)? Should I aim to have it work that way? By that I mean start it up, re-enable SQL Apply, and then upgrade the primary. Or am I to run the catcpu.sql script on it as well before re-enabling SQL Apply? Am I wrong with regard to the physical standby as well, i.e. should catcpu also be applied directly to it?
    Thanks very much in advance.
    Cheers,
    Chris

    Given the fact that your system is far from main-stream I'd recommend opening an SR with Oracle Support Services (metalink) and asking them.
    If you would like to publish a White Paper on your experience after you have successfully completed the project let me know off-line.

