MV Logs not getting purged in a Logical Standby Database

We are trying to replicate a few tables from a logical standby database to another database. Both the source (the logical standby) and the target database are on Oracle 11g R1.
The materialized views are refreshed using FAST REFRESH.
The materialized view logs created on the source (the logical standby database) are not getting purged when the MVs in the target database are refreshed.
We checked the entries in the following tables: SYS.SNAP$, SYS.SLOG$, SYS.MLOG$.
When a materialized view is created on the target database, no record is inserted into the SYS.SLOG$ table on the source, and that appears to be why the MV logs are not getting purged.
Why are we using a logical standby database instead of the primary? Because the load on the primary database is too high and the machine doesn't have enough resources to support MV-based replication; CPU usage is at 95% all the time. The application owner won't allow us to go against the primary database.
Do we have to do anything different in terms of configuration/privileges because we are using a logical standby database as the source?
Thanks in Advance.
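
For what it's worth, here is a minimal diagnostic sketch (the master table name below is a placeholder, not from the thread): fast-refresh log purging only happens once the target MV has registered itself at the master site, so checking registration and, if necessary, purging a log manually might look like this:
-- On the source (the logical standby): has the target MV registered itself?
SQL> select owner, name, mview_site from dba_registered_mviews;
-- Which MV logs exist on the source?
SQL> select log_owner, master, log_table from dba_mview_logs;
-- If a log keeps growing, it can be purged manually (placeholder master table name)
SQL> exec DBMS_MVIEW.PURGE_LOG(master => 'SCOTT.EMP', num => 9999, flag => 'delete');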

We have an 11g RAC database on Solaris where there is a huge gap in the archive log apply.
Thread   Last Sequence Received   Last Sequence Applied   Difference
1        132581                   129916                  2665
2        108253                   106229                  2024
3        107452                   104975                  2477
The MRP0 process also does not seem to be working. The standby is lagging the primary by almost 7000+ archives.
I suggest you go with incremental roll-forward backups to bring the standby back in sync; use the link below for a step-by-step procedure.
http://www.oracle-ckpt.com/rman-incremental-backups-to-roll-forward-a-physical-standby-database-2/
A few questions (see also the sketch at the end of this thread):
1) Have those archives been transported and just not applied?
2) On production, do you still have the archives, or backups of the archives?
3) What errors have you found in the standby alert log file? Post the output of:
SQL> select severity,message,error_code,timestamp from v$dataguard_status where dest_id=2;
4) What errors are in the primary database alert log file?
Also post
select     ds.dest_id id
,     ad.status
,     ds.database_mode db_mode
,     ad.archiver type
,     ds.recovery_mode
,     ds.protection_mode
,     ds.standby_logfile_count "SRLs"
,     ds.standby_logfile_active active
,     ds.archived_seq#
from     v$archive_dest_status     ds
,     v$archive_dest          ad
where     ds.dest_id = ad.dest_id
and     ad.status != 'INACTIVE'
order by
     ds.dest_id
/
Also check for errors on the standby database.
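
As a rough sketch of how question 1 could be answered on the standby itself (generic queries, nothing specific to this environment), compare what has been received with what has been applied per thread and check the recovery processes:
-- Highest sequence received per thread on the standby
SQL> select thread#, max(sequence#) last_received from v$archived_log group by thread#;
-- Highest sequence actually applied per thread
SQL> select thread#, max(sequence#) last_applied from v$archived_log where applied = 'YES' group by thread#;
-- State of the MRP/RFS processes
SQL> select process, status, thread#, sequence#, block# from v$managed_standby;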

Similar Messages

  • Scheduled job not getting executed on a logical standby

    Hello,
    We have created a job (through the DBMS_SCHEDULER API). The job is enabled and shows up in the DBA_SCHEDULER_JOBS view as well.
    However, the job does not get executed. I looked into the following views and found no relevant entry for the job:
    select * from all_scheduler_job_log
    select * from dba_scheduler_running_jobs
    select * from DBA_SCHEDULER_JOB_RUN_DETAILS order by log_date desc
    Is there any limitation that prevents scheduled jobs from executing on a logical standby database? If I execute the relevant program (the one configured to run as a job in this scenario) as an individual procedure from SQL*Plus, it runs successfully, implying there are no errors/problems in the subprogram that the job invokes.
    Appreciate your thoughts in this regard.
    Thanks.

    Hi Justin,
    Thanks for your response.
    As per the app design, the job invokes a stored program (mapping to a stored procedure in the standby DB itself) that reads data from the standby and populates the relevant tables/entities in another database (a third DB, not the primary or standby) which acts as a repository. No write operations are performed on the standby.
    So I have two doubts:
    -- Can scheduled jobs execute on a logical standby DB [Oracle release 10g R2]?
    I was going through some of the Oracle docs, and it is mentioned that this is a known limitation in the 10g R2 release and has been corrected in 11g. There is now a database_role attribute that needs to be set to 'LOGICAL STANDBY' if you need to execute a job on the standby; however, it is only available from 11g onwards (see the sketch at the end of this thread).
    -- If there is no workaround for the above problem in the 10g R2 release:
    Then we may have to schedule a job from the third DB instance that invokes the program (residing on the standby DB). Can a scheduled job execute a program that maps to a remote stored procedure instead of a local stored procedure?
    Appreciate your thoughts.
    Thanks
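
    A minimal sketch of the 11g attribute mentioned above (the job name is a placeholder; the attribute does not exist in 10g R2). In 11g a job can be restricted to run only when the database is in a given Data Guard role:
    BEGIN
      -- Hypothetical job name; restricts the job to the LOGICAL STANDBY role
      DBMS_SCHEDULER.SET_ATTRIBUTE(
        name      => 'MY_STANDBY_JOB',
        attribute => 'database_role',
        value     => 'LOGICAL STANDBY');
    END;
    /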

  • Job not getting invoked on a logical standby instance

    Hello,
    We have created a job (through the DBMS_SCHEDULER API). The job is enabled and shows up in the DBA_SCHEDULER_JOBS view as well.
    However, the job does not get executed. I looked into the following views and found no relevant entry for the job:
    select * from all_scheduler_job_log
    select * from dba_scheduler_running_jobs
    select * from DBA_SCHEDULER_JOB_RUN_DETAILS order by log_date desc
    Is there any limitation that prevents scheduled jobs from executing on a logical standby database? If I execute the relevant program (the one configured to run as a job in this scenario) as an individual procedure from SQL*Plus, it runs successfully, implying there are no errors/problems in the subprogram that the job invokes.
    Appreciate your thoughts in this regard.
    Thanks.

    I think we then need to look at an alternative way to tackle the problem.
    Anyway, thanks for your timely help, Ravi.
    Just one query: can we invoke remote stored procedures (i.e. using database links) from DBMS_SCHEDULER jobs on another DB?
    i.e. something like this:
    DB0 (primary DB instance); DB1 (a logical standby DB); DB2 (centralized repository DB)
    DB1 contains certain packaged application procedures that need to be invoked as a scheduled activity.
    Previously the scheduled jobs resided on the logical standby DB itself, but because of this known issue we cannot proceed with that design.
    So can we have jobs/programs (mapping to DB1's stored procedures) scheduled on DB2? (See the sketch after this reply.)
    Thanks.
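
    As a sketch of the DB2-side approach discussed above (the job name, procedure and database link are all hypothetical), a scheduler job on DB2 can wrap the remote call in a PL/SQL block that reaches DB1 over a database link:
    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'CALL_DB1_PROC',                              -- hypothetical name
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'BEGIN pkg_app.refresh_repo@db1_link; END;',  -- hypothetical procedure and db link
        start_date      => SYSTIMESTAMP,
        repeat_interval => 'FREQ=HOURLY',
        enabled         => TRUE);
    END;
    /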

  • Recordings are not getting Purged in MediaSense

    Dear All,
    I hope you are all in good health. The call recordings are not getting purged; we have set the purge age to 2 days, and the Cisco MediaSense Media Service was restarted. I would be grateful for guidance on whether any other settings need to be changed. The screenshot is attached.
    Best Regards,
    Durraze Khan 

    Hi Durraze,
    The prune policy is AND, not OR. So, based on the screenshot, auto pruning starts only when the recordings are 2 days old AND there is no disk space left.
    Hope this helps!
    Regards,
    Arundeep

  • Materialized View Logs in a logical standby database

    I am trying to create materialized views based on a few tables in a logical standby database.
    The target database (11g R2) where the MVs will be created is a stand-alone database.
    The DB where the base tables reside is a logical standby database (11g R2).
    The requirement is to do a "FAST REFRESH" of the Materialized Views.
    My questions are :
    1. Can I create MV logs in the logical standby DB?
    2. If the answer to question no. 1 is "Yes", do I need to do anything different or configure the logical standby DB in a specific manner in order to create MV logs? From what I understand, the objects in a logical standby database are in a locked state. Is that going to be a problem?
    Any other information that might be relevant is greatly appreciated.
    Thanks in advance.

    HI Daniel,
    I appreciate your quick response.
    My choice of name may not have been ideal; however, changing "new" to another name - like "gav" - does not solve the problem.
    SYS@UATDR> connect / as sysdba
    Connected.
    SYS@UATDR>
    SYS@UATDR> select name, log_mode, database_role, guard_status, force_logging, flashback_on, db_unique_name
    2 from v$database
    3 /
    NAME  LOG_MODE    DATABASE_ROLE    GUARD_S FOR FLASHBACK_ON DB_UNIQUE_NAME
    UATDR ARCHIVELOG  LOGICAL STANDBY  ALL     YES YES          UATDR
    SYS@UATDR>
    SYS@UATDR> create tablespace ts_gav
    2 /
    Tablespace created.
    SYS@UATDR>
    SYS@UATDR> create user gav
    2 identified by gav
    3 default tablespace ts_gav
    4 temporary tablespace temp
    5 quota unlimited on ts_gav
    6 /
    User created.
    SYS@UATDR>
    SYS@UATDR> grant connect, resource to gav
    2 /
    Grant succeeded.
    SYS@UATDR> grant unlimited tablespace, create table, create any table to gav
    2 /
    Grant succeeded.
    SYS@UATDR>
    SYS@UATDR> -- show privs given to gav
    SYS@UATDR> select * from dba_sys_privs where grantee='GAV'
    2 /
    GRANTEE PRIVILEGE ADM
    GAV CREATE TABLE NO
    GAV CREATE ANY TABLE NO
    GAV UNLIMITED TABLESPACE NO
    SYS@UATDR>
    SYS@UATDR> -- create objects in schema
    SYS@UATDR> connect gav/gav
    Connected.
    GAV@UATDR>
    GAV@UATDR> -- prove ability to create tables
    GAV@UATDR> create table gav
    2 (col1 number not null)
    3 tablespace ts_gav
    4 /
    create table gav
    ERROR at line 1:
    ORA-01031: insufficient privileges
    GAV@UATDR>
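
    A possible explanation, given the GUARD_STATUS of ALL in the v$database output above: on a logical standby the database guard blocks DDL/DML from ordinary sessions regardless of the system privileges granted. A hedged sketch of two ways around it (to be weighed carefully, since relaxing the guard reduces the protection of SQL Apply-maintained data):
    -- Option 1: disable the guard only for the current session (requires the ALTER DATABASE privilege)
    SQL> alter session disable guard;
    SQL> create table gav (col1 number not null) tablespace ts_gav;
    SQL> alter session enable guard;
    -- Option 2: relax the guard database-wide so that only objects maintained by SQL Apply are protected
    SQL> alter database guard standby;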

  • Log mining is taking too much time in logical standby database

    dear DBAs,
    today I found a gap between the production database and the logical standby database, and I found that log mining is taking more than 1 hour to complete one archived log (size: 500M).
    Note that MAX_SGA is 1500M and MAX_SERVERS=45 (see the tuning sketch at the end of this thread).
    The databases are 10gR2 (10.2.0.5.0) running on a Linux machine (RHEL 4).
    Please help.
    Thanks in advance
    Elie

    Hi,
    Can you check MetaLink note ID 241512.1?
    Thanks
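
    Not a fix for the note above, just a hedged sketch of where the two knobs mentioned in the question live (the values shown are examples only): SQL Apply parameters are changed with DBMS_LOGSTDBY.APPLY_SET, typically with apply stopped:
    SQL> alter database stop logical standby apply;
    SQL> exec DBMS_LOGSTDBY.APPLY_SET('MAX_SGA', 2048);     -- MB for the LCR cache, example value only
    SQL> exec DBMS_LOGSTDBY.APPLY_SET('MAX_SERVERS', 60);   -- total SQL Apply servers, example value only
    SQL> alter database start logical standby apply immediate;
    -- Current settings can be checked in DBA_LOGSTDBY_PARAMETERS
    SQL> select name, value from dba_logstdby_parameters;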

  • Redo data not applied on logical standby database 10g

    After a network problem between the primary and the logical standby database, the redo data is not applied on the logical standby even though all the archived logs are sent to it.
    The below is the output from v$archive_gap and DBA_LOGSTDBY_LOG
    SQL> select * from v$archive_gap;
    no rows selected
    SQL> SELECT SEQUENCE#, FIRST_TIME, APPLIED
    FROM DBA_LOGSTDBY_LOG
    ORDER BY SEQUENCE#; 2 3
    SEQUENCE# FIRST_TIME APPLIED
    3937 24-FEB-10 01:48:23 CURRENT
    3938 24-FEB-10 10:31:22 NO
    3939 24-FEB-10 10:31:29 NO
    3940 24-FEB-10 10:31:31 NO
    3941 24-FEB-10 10:33:44 NO
    3942 24-FEB-10 11:54:17 NO
    3943 24-FEB-10 12:05:30 NO
    Any help?
    Thanks

    ORA-00600: internal error code, arguments: [krvxgirp], [], [], [], [], [], [], []
    LOGSTDBY Analyzer process P003 pid=48 OS id=8659 stopped
    Wed Feb 24 16:49:04 2010
    Errors in file /oracle/product/10.2.0/admin/umarket/bdump/oradb_lsp0_8651.trc:
    ORA-12801: error signaled in parallel query server P003
    ORA-00600: internal error code, arguments: [krvxgirp], [], [], [], [], [], [], []
    and below is a warning from oradb_lsp0_8651.trc: Warning: Apply error received: ORA-26714: User error encountered while applying. Clearing.
    Thanks
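
    For completeness, a generic diagnostic sketch (nothing environment-specific): the most recent SQL Apply events and the overall apply progress can be inspected like this:
    SQL> select event_time, status from dba_logstdby_events order by event_time;
    SQL> select applied_scn, newest_scn from dba_logstdby_progress;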

  • ORA-16821: logical standby database dictionary not yet loaded

    Dear all,
    I have a Data Guard architecture with a primary and a standby database (used for reporting). Since I converted the physical standby to a logical standby, I have been receiving this error:
    ORA-16821: logical standby database dictionary not yet loaded
    If someone has an idea, that would be great!
    Thanks
    oldschool

    Hi,
    Ok I applied :
    SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    Database altered.
    SQL> alter database start logical standby apply immediate;
    Database altered.
    SQL>
    And now I received this :
    ORA-16825: Fast-Start Failover and other errors or warnings detected for the database
    Cause: The broker has detected multiple errors or warnings for the database. At least one of the detected errors or warnings may prevent a Fast-Start Failover from occurring.
    Action: Check the StatusReport monitorable property of the database specified.
    What does it mean to check the status report?
    This is what I found for the monitorable StatusReport property:
    DGMGRL> show database 'M3RPT' 'StatusReport';
    STATUS REPORT
    INSTANCE_NAME SEVERITY ERROR_TEXT
    * WARNING ORA-16821: logical standby database dictionary not yet loaded
    DGMGRL>
    What can I do ?
    Thanks a lot
    oldschool
    Edited by: oldschool on Jun 4, 2009 2:37 AM
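
    A small sketch of how one might watch the dictionary load that ORA-16821 complains about (generic queries, nothing site-specific): SQL Apply reports its state in V$LOGSTDBY_STATE, and the warning normally clears once the LogMiner dictionary has been mined and applied:
    SQL> select state from v$logstdby_state;
    -- e.g. 'LOADING DICTIONARY' while the dictionary build is still being mined,
    -- then 'APPLYING' (or 'IDLE') once SQL Apply has caught up
    SQL> select applied_scn, newest_scn from dba_logstdby_progress;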

  • Add new datafile to logical standby database but not in primary

    Hi,
    Is it OK to add a new datafile to the SYSAUX tablespace on the logical standby database but not on the primary? We are running out of disk space on the partition where SYSAUX01.dbf resides, so we want to add a new SYSAUX02.dbf in another partition which has space. This will only be on the logical standby, not on the primary; there is still lots of space on the primary. standby_file_management is MANUAL, and this is a LOGICAL standby, not a PHYSICAL one.
    Is this possible, or will there be any issues?
    Thanks.

    A logical standby can differ from the primary; it can have extra tablespaces, datafiles, tables, indexes, users ...
    HTH
    Enrique
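
    A minimal sketch of what that would look like on the logical standby (the file path and sizes are placeholders; as noted, standby_file_management only matters for physical standbys):
    -- Run on the logical standby; path and sizes are placeholders
    SQL> alter tablespace SYSAUX
         add datafile '/u02/oradata/MYDB/sysaux02.dbf' size 2g autoextend on next 100m maxsize 8g;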

  • How to delete the foreign archivelogs in a Logical Standby database

    How do I remove the foreign archived logs that are being sent to my logical standby database? I have files in the FRA (on ASM) going back weeks. I thought RMAN would delete them.
    I am doing hot backups of both databases to the FRA. Using ASM and the FRA in a Data Guard environment.
    I am not backing up anything to tape yet.
    The ASM FRA foreign_archivelog directory on the logical standby keeps growing, and nothing gets deleted even though I run the following commands every day:
    delete expired backup;
    delete noprompt force obsolete;
    Primary database RMAN settings (Not all of them)
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 9 DAYS;
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE DB_UNIQUE_NAME 'WMRTPRD' CONNECT IDENTIFIER 'WMRTPRD_CWY';
    CONFIGURE DB_UNIQUE_NAME 'WMRTPRD2' CONNECT IDENTIFIER 'WMRTPRD2_CWY';
    CONFIGURE DB_UNIQUE_NAME 'WMRTPRD3' CONNECT IDENTIFIER 'WMRTPRD3_DG';
    CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
    Logical standby database RMAN setting (not all of them)
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 9 DAYS;
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
    How do I cleanup/delete the old ASM foreign_archivelog files?

    OK, the default is TRUE, which is what it is set to now. From DBA_LOGSTDBY_PARAMETERS:
    LOG_AUTO_DELETE     TRUE          SYSTEM     YES
    I am not talking about deleting the archived log files that the logical database itself creates, but the standby (foreign) archived log files being sent to the logical database, after they have been applied.
    They show up in the alert log as follows, under "RFS LogMiner: Registered logfile":
    RFS[1]: Selected log 4 for thread 1 sequence 159 dbid -86802306 branch 763744382
    Thu Jan 12 15:44:57 2012
    RFS LogMiner: Registered logfile [+FRA/wmrtprd2/foreign_archivelog/wmrtprd/2012_01_12/thread_1_seq_158.322.772386297] to LogMiner session id [1]
    Thu Jan 12 15:44:58 2012
    LOGMINER: Alternate logfile found. Transition to mining archived logfile for session 1 thread 1 sequence 158, +FRA/wmrtprd2/foreign_archivelog/wmrtprd/2012_01_12/thread_1_seq_158.322.772386297
    LOGMINER: End mining logfile for session 1 thread 1 sequence 158, +FRA/wmrtprd2/foreign_archivelog/wmrtprd/2012_01_12/thread_1_seq_158.322.772386297
    LOGMINER: Begin mining logfile for session 1 thread 1 sequence 159, +DG1/wmrtprd2/onlinelog/group_4.284.771760923
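
    A hedged sketch of the manual path for exactly those foreign (remote) archived logs, assuming LOG_AUTO_DELETE alone is not cleaning them up: ask SQL Apply to release the LogMiner sessions it has finished with, then see which files it reports as safe to remove:
    -- Release metadata for remote archived logs that SQL Apply no longer needs
    SQL> exec DBMS_LOGSTDBY.PURGE_SESSION;
    -- Files listed here are no longer needed and can be removed at the OS/ASM level
    SQL> select * from dba_logmnr_purged_log;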

  • Creation of Logical Standby Database Using RMAN ACTIVE DATABASE COMMAND

    Hi All,
    I am confused about how to create a logical standby database from the primary database using the RMAN active database command (DUPLICATE ... FROM ACTIVE DATABASE).
    What I did:
    Created the primary database on machine 1 on RHEL 5 with Oracle 11gR2.
    Created a standby database on machine 2 on RHEL 5 with Oracle 11gR2 from the primary, using the RMAN active database command.
    Trying to create a logical standby database on machine 3 on RHEL 5 with Oracle 11gR2, using the RMAN active database command from the primary.
    The point which confuses me is how to start the logical standby in NOMOUNT mode on machine 3, and with which pfile: do I create the pfile the same way as for the standby database, or do I need a separate pfile for the logical standby DB?
    I have previously created a logical standby database by converting a physical standby to a logical standby.
    I am following the below-mentioned doc for the same:
    Creating a physical and a logical standby database in a DR environment | Chen Guang's Blog
    Kindly guide me on how to proceed, or please provide the steps for the same.
    Thanks in advance.

    Thanks for your reply.
    I already started the logical standby database with a pfile in NOMOUNT mode and successfully completed the duplication of the database, specifying the DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT parameters.
    But I am not able to receive the logs. As per the above-mentioned blog, I ran the SQL command to check the logs but am getting "no rows selected".
    My primary database pfile is:
    pc01prmy.__db_cache_size=83886080
    pc01prmy.__java_pool_size=12582912
    pc01prmy.__large_pool_size=4194304
    pc01prmy.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
    pc01prmy.__pga_aggregate_target=79691776
    pc01prmy.__sga_target=239075328
    pc01prmy.__shared_io_pool_size=0
    pc01prmy.__shared_pool_size=134217728
    pc01prmy.__streams_pool_size=0
    *.audit_file_dest='/u01/app/oracle/admin/pc01prmy/adump'
    *.audit_trail='db'
    *.compatible='11.1.0.0.0'
    *.control_files='/u01/app/oracle/oradata/PC01PRMY/controlfile/o1_mf_91g3mdtr_.ctl','/u01/app/oracle/flash_recovery_area/PC01PRMY/controlfile/o1_mf_91g3mf6v_.ctl'
    *.db_block_size=8192
    *.db_create_file_dest='/u01/app/oracle/oradata'
    *.db_domain=''
    *.db_file_name_convert='/u01/app/oracle/oradata/PC01SBY/datafile','/u01/app/oracle/oradata/PC01PRMY/datafile'
    *.db_name='pc01prmy'
    *.db_recovery_file_dest='/u01/app/oracle/flash_recovery_area'
    *.db_recovery_file_dest_size=2147483648
    *.diagnostic_dest='/u01/app/oracle'
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=pc01prmyXDB)'
    *.fal_client='PC01PRMY'
    *.fal_server='PC01SBY'
    *.log_archive_config='DG_CONFIG=(pc01prmy,pc01sby,pc01ls)'
    *.log_archive_dest_1='LOCATION=/u01/app/oracle/flash_recovery_area/PC01PRMY/ VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=pc01prmy'
    *.log_archive_dest_2='SERVICE=pc01sby LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=pc01sby'
    *.log_archive_dest_3='SERVICE=pc01ls LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=pc01ls'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='DEFER'
    *.log_archive_dest_state_3='DEFER'
    *.log_archive_max_processes=30
    *.log_file_name_convert='/u01/app/oracle/oradata/PC01SBY/onlinelog','/u01/app/oracle/oradata/PC01PRMY/onlinelog'
    *.open_cursors=300
    *.pga_aggregate_target=78643200
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=236978176
    *.undo_tablespace='UNDOTBS1'
    My logical standby pfile is:-
    pc01ls.__db_cache_size=92274688
    pc01ls.__java_pool_size=12582912
    pc01ls.__large_pool_size=4194304
    pc01ls.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
    pc01ls.__pga_aggregate_target=79691776
    pc01ls.__sga_target=239075328
    pc01ls.__shared_io_pool_size=0
    pc01ls.__shared_pool_size=125829120
    pc01ls.__streams_pool_size=0
    *.audit_file_dest='/u01/app/oracle/admin/pc01ls/adump'
    *.audit_trail='db'
    *.compatible='11.1.0.0.0'
    *.control_files='/u01/app/oracle/oradata/PC01LS/controlfile/o1_mf_91g3mdtr_.ctl','/u01/app/oracle/flash_recovery_area/PC01LS/controlfile/o1_mf_91g3mf6v_.ctl'
    *.db_block_size=8192
    *.db_create_file_dest='/u01/app/oracle/oradata'
    *.db_domain=''
    *.db_file_name_convert='/u01/app/oracle/oradata/PC01SBY/datafile','/u01/app/oracle/oradata/PC01PRMY/datafile'
    *.db_name='pc01prmy'
    *.db_unique_name='pc01ls'
    *.db_recovery_file_dest='/u01/app/oracle/flash_recovery_area'
    *.db_recovery_file_dest_size=2147483648
    *.diagnostic_dest='/u01/app/oracle'
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=pc01prmyXDB)'
    *.log_archive_config='DG_CONFIG=(pc01prmy,pc01sby,pc01ls)'
    *.log_archive_dest_1='LOCATION=/u01/app/oracle/flash_recovery_area/PC01PRMY/ VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=pc01prmy'
    *.log_archive_dest_2='LOCATION=/u01/app/oracle/flash_recovery_area/PC01LS/ VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=pc01ls'
    *.log_archive_dest_3='SERVICE=pc01ls LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=pc01ls'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_max_processes=30
    *.log_file_name_convert='/u01/app/oracle/oradata/PC01SBY/onlinelog','/u01/app/oracle/oradata/PC01PRMY/onlinelog'
    *.open_cursors=300
    *.pga_aggregate_target=78643200
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=236978176
    *.undo_tablespace='UNDOTBS1'
    Kindly advise on the same.
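
    One hedged observation based on the primary pfile pasted above: the destination for pc01ls is log_archive_dest_3, and log_archive_dest_state_3 is set to 'DEFER'; a deferred destination ships nothing. Assuming that is still the runtime value, something like the following on the primary would be worth checking first:
    SQL> show parameter log_archive_dest_state_3
    SQL> alter system set log_archive_dest_state_3='ENABLE' scope=both;
    -- Then confirm the destination is no longer in error
    SQL> select dest_id, status, error from v$archive_dest where dest_id = 3;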

  • Logical standby database keeps displaying ORA-16768

    I have a logical standby database set up on a remote host to accept cascaded logs from a physical standby. Everything was working fine until yesterday, when I got the above error and log apply services stopped. When I go into Grid Control, under Data Guard, I see the error, and it displays the failed SQL statement, which is a CREATE INDEX statement. I have the option of clicking the SKIP button. When I click SKIP it appears to have processed it, as the error goes away, but a few minutes later the error comes back, log apply stops, and the failed SQL statement is the same CREATE INDEX statement. I guess Skip doesn't mean Skip, it means Retry... My question is how do I get past this? Logs have been piling up.
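
    A hedged sketch of one conventional way past a repeatedly failing DDL statement (the schema name is a placeholder; the trade-off is that skipped DDL then has to be applied manually on the standby if it is actually wanted there):
    SQL> alter database stop logical standby apply;
    -- Skip index DDL for the offending schema (placeholder names)
    SQL> exec DBMS_LOGSTDBY.SKIP(stmt => 'INDEX', schema_name => 'APP_OWNER', object_name => '%');
    SQL> alter database start logical standby apply immediate;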


  • Physical Standby database Vs. Logical Standby database

    I have a few questions regarding the capability of a logical standby database versus a physical standby database.
    1. How efficient is a logical standby database compared to a physical standby database? How do the two differ from each other, and can I use a logical standby database for disaster recovery? Can it be used for recovering a failed primary instance? If yes, how efficient and reliable is it?
    2. What are the known bugs and roadblocks for a logical standby database on Oracle 10.2.0.1 on Solaris x86-64?
    3. As a logical standby database is not going to replicate each and every schema of the primary database, how do change-management activities carry over to the logical standby from the primary? I mean, there are some parameters and jobs that we create on the primary; how can they be transferred over to the logical standby?

    1. How efficient is a logical standby database compared to a physical standby database? How do the two differ, and can I use it for disaster recovery? Can it be used for recovering a failed primary instance? If yes, how efficient and reliable is it?

    I'm not sure what sort of "efficiency" you're talking about here...
    Physical standby is just the old, tried and true application of archived logs to recover a database. Very solid, very old school.
    Logical standby, on the other hand, is parsing the redo log, extracting logical change records, and applying them to the standby database. This obviously takes a bit more processing effort, it's newer technology, it doesn't have quite the level of support that physical standby does (i.e. certain data types are excluded), etc. You certainly can use it for failover, but it isn't quite as robust as a physical standby. Of course, this is getting better and better all the time and is definitely a focus of Oracle's development efforts.
    On the other hand, logical standby systems can do things other than act as a warm standby. They can be open serving reports, for example. You can create additional structures (i.e. new materialized views) to support reporting. A physical standby is pretty much always going to be in managed recovery mode, so it cannot be queried.
    2. What are the known bugs and roadblocks for a logical standby database on Oracle 10.2.0.1 on Solaris x86-64?

    a) You'll want to do a Metalink search.
    b) If you're talking about a high-availability solution, why are you looking at a base release of the database? Why wouldn't you apply the latest patchset?
    3. As a logical standby database is not going to replicate each and every schema of the primary database, how do change-management activities carry over to the logical standby from the primary? I mean, there are some parameters and jobs that we create on the primary; how can they be transferred over to the logical standby?

    I'm not sure I understand... Changes made to the primary generate redo. Oracle parses that redo, generates an LCR, and sends it to the standby database where that change record gets applied.
    Justin

  • How to monitor SQL Apply for 10.2.0.3 logical standby database

    We have a logical standby database set up for reporting purposes. Users want to monitor closely whether SQL Apply is working or has failed, as failures have reporting repercussions.
    With 9i databases there was a "Data Not Applied (logs)" metric, which we used for alerting and paging when a backlog of more than 5 log files developed.
    From 10.2.0.3 onwards, that metric no longer exists.
    I would like to learn from others how to monitor the setup, so that if a backlog in log shipping or applying develops, we get paged.
    Regards.
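
    For what it's worth, a minimal monitoring sketch that could be wrapped in a user-defined metric or a cron job (thresholds and plumbing are up to you): count the unapplied logs and compare the applied SCN with the newest SCN received:
    -- Number of registered logs SQL Apply has not yet finished applying
    SQL> select count(*) from dba_logstdby_log where applied = 'NO';
    -- How far apply lags behind the newest redo received
    SQL> select applied_scn, newest_scn from dba_logstdby_progress;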

    Regather the statistics on the table with method_opt => 'for all columns size 1' (or 'for all indexed columns size 1', or whichever variant you use). The 'size 1' directive will remove the histogram statistics.
    Sorry, I didn't read your post properly in my hurry. The article below (http://www.freelists.org/post/oracle-l/Any-quick-way-to-remove-histograms,13) removes histograms without re-analyzing the table. Hope that helps!
    On 3/16/07, Wolfgang Breitling <breitliw@xxxxxxxxxxxxx> wrote:
    I also did a quick check and just using
    exec dbms_stats.set_column_stats(user, 'table_name', colname => 'column_name', distcnt => <num_distinct>);
    will remove the histogram without removing the low_value and high_value.
    At 01:40 PM 3/16/2007, Alberto Dell'Era wrote:
    On 3/16/07, Allen, Brandon <Brandon.Allen@xxxxxxxxxxx> wrote:
    Is there any faster way to remove histograms other than re-analyzing the table? I want to keep the existing table, index & column stats, but with only 1 bucket (i.e. no histograms).
    You might try the attached script, which reads the stats using dbms_stats.get_column_stats and re-sets them, minus the histogram, using dbms_stats.set_column_stats.
    I haven't fully tested it - it's only 10 minutes old, even if I have slightly modified for you another script I've used for quite some time - and the spool on 10.2.0.3 seems to confirm that the histogram is, indeed, removed, while all the other statistics are preserved. I have also reset density to 1/num_distinct, which is the value you get if no histogram is collected.
    regards,
    naren
    Edited by: fuzzydba on Oct 25, 2010 10:52 AM

  • ORA-01403: no data found on LOGICAL STANDBY database

    Hi,
    Logical standby issue:
    Oracle 10.2.0.2 Enterprise Edition.
    I have been working on this logical standby for a year, but I still haven't got past this.
    I am continuously getting a "no data found" error on the logical standby database.
    I found the table causing the problem (via dba_logstdby_events), skipped that table, and instantiated it using the package below:
    exec dbms_logstdby.instantiate_table (.......................................
    but when I start the apply process on the logical standby, it again gives "no data found" for a new table.
    I even tried to instantiate the table using export/import during downtime, but I am facing the same problem.
    As far as I understand the error, it is this:
    table1
    id
    10
    20
    30
    Now if the SQL Apply process on the logical standby tries to perform an update transaction (for example) such as
    update table1 set id=100 where id=50;
    the statement cannot be completed because it will never find the value 50, which is not in the table. That is why this error is coming.
    Now my worry is: no users dare to make such changes on the logical standby. So if there are no changes to the tables, then SQL Apply should find all the values needed for an update...
    Waiting, guys...

    Troubleshooting ORA-1403 errors with Flashback Transaction
    In the event that the SQL Apply engine errors out with an ORA-1403, it may be possible to utilize flashback transaction on the standby database to reconstruct the missing data. This is reliant upon the undo_retention parameter specified on the standby database instance.
    ORA-1403: No Data Found
    Under normal circumstances the ORA-1403 error should not be seen in a Logical Standby environment. The error occurs when data in a SQL Apply managed table is modified directly on the standby database, and then the same data is modified on the primary database.
    When the modified data is updated on the primary database and received by the SQL Apply engine, the SQL Apply engine verifies the original version of the data is present on the standby database before updating the record. When this verification fails, an ORA-1403: No Data Found error is thrown by Oracle Data Guard: SQL Apply.
    The initial error
    When the SQL Apply engine verification fails, the error thrown by the SQL Apply engine is reported in the alert log of the logical standby database, and a record is inserted into the DBA_LOGSTDBY_EVENTS view. The information in the alert log is truncated, while the error is reported in its entirety in the database view.
    LOGSTDBY stmt: update "SCOTT"."MASTER"
    set
    "NAME" = 'john'
    where
    "PK" = 1 and
    "NAME" = 'andrew' and
    ROWID = 'AAAAAAAAEAAAAAPAAA'
    LOGSTDBY status: ORA-01403: no data found
    LOGSTDBY PID 1006, oracle@staco03 (P004)
    LOGSTDBY XID 0x0006.00e.00000417, Thread 1, RBA 0x02dd.00002221.10
    The Investigation
    The first step is to analyze the historical data of the table that threw the error. This can be achieved using the VERSIONS clause of the SELECT statement.
    SQL> select versions_xid
    , versions_startscn
    , versions_endscn
    , versions_operation
    , pk
    , name
    from scott.master
    versions between scn minvalue and maxvalue
    where pk = 1
    order by nvl(versions_startscn,0);
    VERSIONS_XID     VERSIONS_STARTSCN VERSIONS_ENDSCN V PK NAME
    03001900EE070000           3492279         3492290 I  1 andrew
    02000D00E4070000           3492290                 D  1 andrew
    Depending upon the amount of undo retention that the database is configured to retain (undo_retention) and the activity on the table, the information returned might be extensive and the versions between syntax might need to be changed to restrict the amount of information returned.
    From the information returned, it can be seen that the record was first inserted at scn 3492279 and then was deleted at scn 3492290 as part of transaction ID 02000D00E4070000. Using the transaction ID, the database should be queried to find the scope of the transaction. This is achieved by querying the flashback_transaction_query view.
    SQL> select operation
    , undo_sql
    from flashback_transaction_query
    where xid = hextoraw('02000D00E4070000');
    OPERATION UNDO_SQL
    DELETE insert into "SCOTT"."MASTER"("PK","NAME") values
    ('1','andrew');
    BEGIN
    Note that there is always one row returned representing the start of the transaction. In this transaction, only one row was deleted in the master table. The undo_sql column when executed will restore the original data into the table.
    SQL> insert into "SCOTT"."MASTER"("PK","NAME") values ('1','andrew');
    SQL> commit;
    The SQL Apply engine may now be restarted and the transaction will be applied to the standby database.
    SQL> alter database start logical standby apply;
