Redo data not applied on logical standby database 10g
after a network problem between the primary and the logical standby database. The redo data is not applied on the logical standby even though all the archived logs have been sent to it.
Below is the output from V$ARCHIVE_GAP and DBA_LOGSTDBY_LOG:
SQL> select * from v$archive_gap;
no rows selected
SQL> SELECT SEQUENCE#, FIRST_TIME, APPLIED
FROM DBA_LOGSTDBY_LOG
ORDER BY SEQUENCE#;
SEQUENCE# FIRST_TIME APPLIED
3937 24-FEB-10 01:48:23 CURRENT
3938 24-FEB-10 10:31:22 NO
3939 24-FEB-10 10:31:29 NO
3940 24-FEB-10 10:31:31 NO
3941 24-FEB-10 10:33:44 NO
3942 24-FEB-10 11:54:17 NO
3943 24-FEB-10 12:05:30 NO
Any help?
Thanks
ORA-00600: internal error code, arguments: [krvxgirp], [], [], [], [], [], [], []
LOGSTDBY Analyzer process P003 pid=48 OS id=8659 stopped
Wed Feb 24 16:49:04 2010
Errors in file /oracle/product/10.2.0/admin/umarket/bdump/oradb_lsp0_8651.trc:
ORA-12801: error signaled in parallel query server P003
ORA-00600: internal error code, arguments: [krvxgirp], [], [], [], [], [], [], []
and below, a warning from oradb_lsp0_8651.trc:
Warning: Apply error received: ORA-26714: User error encountered while applying. Clearing.
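For context, the recorded apply errors on a logical standby can usually be inspected with a query along these lines (a generic sketch, not from the original post):

```sql
-- Most recent SQL Apply events/errors on the logical standby
SELECT event_time, status, event
  FROM dba_logstdby_events
 ORDER BY event_time DESC;

-- Once the underlying cause is resolved, apply can be restarted:
ALTER DATABASE START LOGICAL STANDBY APPLY;
```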
Thanks
Similar Messages
-
Real-time apply cascaded logical standby database
Hi
I have a primary database orcl
Pysical standby database orcl_std
Cascaded logical standby database orcl_tri which receives archivelogs from orcl_std
Real time apply is enabled both in orcl_std (physical standby) and orcl_tri (logical standby)
When I create a table in primary orcl, I am unable to see it on orcl_tri (Although real time apply is enabled)
However, when I switch log in primary, I can see the new table on orcl_tri.
My question is: why is real-time apply not working in my scenario?
orcl_std : ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION USING CURRENT LOGFILE;
orcl_tri: ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
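Whether apply is actually running in real-time mode can be checked with something like the following (a sketch; V$LOGSTDBY_STATE is available in this release):

```sql
-- On the logical standby: REALTIME_APPLY should be 'Y'
SELECT session_id, state, realtime_apply
  FROM v$logstdby_state;

-- Standby redo logs should show ACTIVE while redo is arriving
SELECT group#, thread#, sequence#, status
  FROM v$standby_log;
```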
Oracle 11.2.0.3.0
Hi mseberg,
Thanks for your reply.
There is no load or network issue, as I've just created these databases for the experiment.
I have the same output from standby and primary databases.
SQL> select bytes/1024/1024 from v$standby_log;
BYTES/1024/1024
10
10
10
I can see the below output in the standby alert log:
Fri Nov 16 08:39:51 2012
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE
ALTER DATABASE START LOGICAL STANDBY APPLY (orcl)
with optional part
IMMEDIATE
Attempt to start background Logical Standby process
Fri Nov 16 08:39:51 2012
LSP0 started with pid=37, OS id=16141
Completed: ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE
LOGMINER: Parameters summary for session# = 1
LOGMINER: Number of processes = 3, Transaction Chunk Size = 201
LOGMINER: Memory Size = 30M, Checkpoint interval = 150M
LOGMINER: SpillScn 1953318, ResetLogScn 995548
LOGMINER: summary for session# = 1
LOGMINER: StartScn: 0 (0x0000.00000000)
LOGMINER: EndScn: 0 (0x0000.00000000)
LOGMINER: HighConsumedScn: 1955287 (0x0000.001dd5d7)
LOGMINER: session_flag: 0x1
LOGMINER: Read buffers: 16
Fri Nov 16 08:39:55 2012
LOGMINER: session#=1 (Logical_Standby$), reader MS00 pid=30 OS id=16145 sid=49 started
Fri Nov 16 08:39:55 2012
LOGMINER: session#=1 (Logical_Standby$), builder MS01 pid=39 OS id=16149 sid=44 started
Fri Nov 16 08:39:55 2012
LOGMINER: session#=1 (Logical_Standby$), preparer MS02 pid=40 OS id=16153 sid=50 started
LOGMINER: Turning ON Log Auto Delete
LOGMINER: Begin mining logfile during commit scan for session 1 thread 1 sequence 202, +DATA/orcl_std/archivelog/2012_11_15/thread_1_seq_202.349.799450179
LOGMINER: End mining logfiles during commit scan for session 1
LOGMINER: Turning ON Log Auto Delete
LOGMINER: Begin mining logfile for session 1 thread 1 sequence 202, +DATA/orcl_std/archivelog/2012_11_15/thread_1_seq_202.349.799450179
LOGMINER: End mining logfile for session 1 thread 1 sequence 202, +DATA/orcl_std/archivelog/2012_11_15/thread_1_seq_202.349.799450179
Fri Nov 16 08:40:04 2012
LOGSTDBY Analyzer process AS00 started with server id=0 pid=41 OS id=16162
Fri Nov 16 08:40:05 2012
LOGSTDBY Apply process AS03 started with server id=3 pid=45 OS id=16175
Fri Nov 16 08:40:05 2012
LOGSTDBY Apply process AS04 started with server id=4 pid=46 OS id=16179
Fri Nov 16 08:40:05 2012
LOGSTDBY Apply process AS01 started with server id=1 pid=42 OS id=16167
Fri Nov 16 08:40:05 2012
LOGSTDBY Apply process AS05 started with server id=5 pid=47 OS id=16183
Fri Nov 16 08:40:05 2012
LOGSTDBY Apply process AS02 started with server id=2 pid=44 OS id=16171
Do you think real-time apply wasn't set up properly? -
Slow SQL Apply on Logical Standby Database in Oracle10g
Hi,
We are using an Oracle 10g logical standby database in our production farm, but whenever there is a bulk data load (5-6 GB) on the primary database, the logical standby seems to hang. It takes days to apply the 5-6 GB of data on the logical standby.
Can anybody give me some pointers on how to make SQL Apply faster on the logical standby for bulk data loads?
Thanks
Amit
Hi there,
I have a similar problem. I did an insert of 700k rows into a table, and it takes over 1 1/2 hours to see the data. Note that I increased MAX_SGA to 300M and MAX_SERVERS to 25, and it didn't help performance at all.
My version is 10.2.0.3 with the patch 6081550.
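The MAX_SGA and MAX_SERVERS parameters mentioned above would typically have been changed with DBMS_LOGSTDBY.APPLY_SET, roughly like this (a sketch; apply must be stopped first):

```sql
ALTER DATABASE STOP LOGICAL STANDBY APPLY;
EXEC DBMS_LOGSTDBY.APPLY_SET('MAX_SGA', 300);     -- in MB
EXEC DBMS_LOGSTDBY.APPLY_SET('MAX_SERVERS', 25);
ALTER DATABASE START LOGICAL STANDBY APPLY;
```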
APPLIED_SCN APPLIED_TIME RESTART_SCN RESTART_TIME LATEST_SCN LATEST_TIME MINING_SCN MINING_TIME
1015618 29-NOV-2007 18:28:51 1009600 29-NOV-2007 18:28:51 1017519 29-NOV-2007 19:54:07 1015656 29-NOV-2007 18:32:14 -
ORA-01403: no data found on LOGICAL STANDBY database
Hi ,
Logical Standby issue :
Oracle 10.2.0.2 enterprise edition .
I have been working with logical standby for a year, but I still haven't figured this out.
I am continuously getting a "no data found" error on the logical standby database.
I found the table causing the problem (via DBA_LOGSTDBY_EVENTS), skipped that table, and instantiated it using the below package:
exec dbms_logstdby.instantiate_table (.......................................
but when I start the apply process on the logical standby, it again gives "no data found" for the new table.
I even tried to instantiate the table using export/import during downtime, but I faced the same problem.
As far as I understand the error, it works like this:
table1
id
10
20
30
Now if the SQL Apply process on the logical standby tries to perform an update transaction such as the following:
update table1 set id=100 where id=50;
the statement cannot be applied because the row with id = 50 does not exist in the table; that is why the error occurs.
Now my worry is: no user would dare to make such changes on the logical standby. So if there are no changes in the tables, SQL Apply should find all the values needed for an update.
Waiting, guys...
Troubleshooting ORA-1403 errors with Flashback Transaction
In the event that the SQL Apply engine errors out with an ORA-1403, it may be possible to utilize flashback transaction on the standby database to reconstruct the missing data. This is reliant upon the undo_retention parameter specified on the standby database instance.
ORA-1403: No Data Found
Under normal circumstances the ORA-1403 error should not be seen in a Logical Standby environment. The error occurs when data in a SQL Apply managed table is modified directly on the standby database, and then the same data is modified on the primary database.
When the modified data is updated on the primary database and received by the SQL Apply engine, the SQL Apply engine verifies the original version of the data is present on the standby database before updating the record. When this verification fails, an ORA-1403: No Data Found error is thrown by Oracle Data Guard: SQL Apply.
The initial error
When the SQL Apply engine verification fails, the error thrown by the SQL Apply engine is reported in the alert log of the logical standby database, and a record is inserted into the DBA_LOGSTDBY_EVENTS view. The information in the alert log is truncated, while the error is reported in its entirety in the database view.
LOGSTDBY stmt: update "SCOTT"."MASTER"
set
"NAME" = 'john'
where
"PK" = 1 and
"NAME" = 'andrew' and
ROWID = 'AAAAAAAAEAAAAAPAAA'
LOGSTDBY status: ORA-01403: no data found
LOGSTDBY PID 1006, oracle@staco03 (P004)
LOGSTDBY XID 0x0006.00e.00000417, Thread 1, RBA 0x02dd.00002221.10
The Investigation
The first step is to analyze the historical data of the table that threw the error. This can be achieved using the VERSIONS clause of the SELECT statement.
SQL> select versions_xid
, versions_startscn
, versions_endscn
, versions_operation
, pk
, name
from scott.master
versions between scn minvalue and maxvalue
where pk = 1
order by nvl(versions_startscn,0);
VERSIONS_XID VERSIONS_STARTSCN VERSIONS_ENDSCN V PK NAME
03001900EE070000 3492279 3492290 I 1 andrew
02000D00E4070000 3492290 D 1 andrew
Depending upon the amount of undo retention that the database is configured to retain (undo_retention) and the activity on the table, the information returned might be extensive and the versions between syntax might need to be changed to restrict the amount of information returned.
From the information returned, it can be seen that the record was first inserted at scn 3492279 and then was deleted at scn 3492290 as part of transaction ID 02000D00E4070000. Using the transaction ID, the database should be queried to find the scope of the transaction. This is achieved by querying the flashback_transaction_query view.
SQL> select operation
, undo_sql
from flashback_transaction_query
where xid = hextoraw('02000D00E4070000');
OPERATION UNDO_SQL
DELETE insert into "SCOTT"."MASTER"("PK","NAME") values
('1','andrew');
BEGIN
Note that there is always one row returned representing the start of the transaction. In this transaction, only one row was deleted in the master table. The undo_sql column when executed will restore the original data into the table.
SQL> insert into "SCOTT"."MASTER"("PK","NAME") values ('1','andrew');
SQL> commit;
The SQL Apply engine may now be restarted and the transaction will be applied to the standby database.
SQL> alter database start logical standby apply; -
Forward physical standby archivelog files to a logical standby database
Hi,
We have a production database (1) and we have a physical standby database (2) for it.
Is it possible to forward the archivelogs files from (2) to a logical standby database (3). We want to use the (3) as a UAT - Ad Hoc read only Database.
Thanks.Hi,
The following Data Guard configurations using cascaded destinations are supported.
A. Primary Database > Physical Standby Database with cascaded destination > Physical Standby Database
B. Primary Database > Physical Standby Database with cascaded destination > Logical Standby Database
A physical standby database can support a maximum of nine (30 as of Version 11.2) remote destinations.
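Configuration B is typically set up by pointing a LOG_ARCHIVE_DEST_n on the physical standby (2) at the logical standby (3); a minimal sketch (the service and DB_UNIQUE_NAME values here are hypothetical):

```sql
-- On the physical standby: cascade received redo to the logical standby
ALTER SYSTEM SET log_archive_dest_3 =
  'SERVICE=uat_lsby VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=uat_lsby';
ALTER SYSTEM SET log_archive_dest_state_3 = ENABLE;
```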
Physical Standby Forwarding Redo to a Logical Standby :
Advantages :
1. Users can connect to the logical standby database and query it directly.
2. Reporting workloads can run against the logical standby instead of querying the primary database.
3. This happens without any additional overhead on your primary system, and without consuming any additional transatlantic bandwidth.
Disadvantages :
The following data types are not supported in a logical standby database; check your application before implementing a logical standby:
a. BFILE
b. Collections (including VARRAYS and nested tables)
c. Multimedia data types (including Spatial, Image, and Oracle Text)
d. ROWID, UROWID
e. User-defined types
f. LOBs stored as SecureFiles
g. XMLType stored as Object Relational
h. Binary XML
Thanks
LaserSoft -
Dataguard Problem(logical standby database)
Hi,
I have successfully created a logical standby database, and everything is working fine: all of the SQL is being applied and archives are also shipping.
That is, until I create a new tablespace (e.g. PAY) in the primary database; suddenly SQL Apply stops, although archives are still shipping.
I am using Windows XP SP2 and Oracle 10gRel2.
The contents of AlertLog file are as
Wed Jul 23 22:52:19 2008
Thread 1 cannot allocate new log, sequence 133
Checkpoint not complete
Current log# 3 seq# 132 mem# 0: C:\ORACLE\PRODUCT\10.2.0\ORADATA\IMRAN\REDO03.LOG
Wed Jul 23 22:52:23 2008
Destination LOG_ARCHIVE_DEST_2 is SYNCHRONIZED
Wed Jul 23 22:52:23 2008
Destination LOG_ARCHIVE_DEST_2 no longer supports SYNCHRONIZATION
Wed Jul 23 22:52:23 2008
Thread 1 advanced to log sequence 133 (LGWR switch)
Current log# 1 seq# 133 mem# 0: C:\ORACLE\PRODUCT\10.2.0\ORADATA\IMRAN\REDO01.LOG
Thread 1 cannot allocate new log, sequence 134
Checkpoint not complete
Current log# 1 seq# 133 mem# 0: C:\ORACLE\PRODUCT\10.2.0\ORADATA\IMRAN\REDO01.LOG
Wed Jul 23 22:52:29 2008
Destination LOG_ARCHIVE_DEST_2 is SYNCHRONIZED
Wed Jul 23 22:52:29 2008
Destination LOG_ARCHIVE_DEST_2 no longer supports SYNCHRONIZATION
Wed Jul 23 22:52:29 2008
Thread 1 advanced to log sequence 134 (LGWR switch)
Current log# 2 seq# 134 mem# 0: C:\ORACLE\PRODUCT\10.2.0\ORADATA\IMRAN\REDO02.LOG
Wed Jul 23 22:55:49 2008
Thread 1 cannot allocate new log, sequence 135
Checkpoint not complete
Current log# 2 seq# 134 mem# 0: C:\ORACLE\PRODUCT\10.2.0\ORADATA\IMRAN\REDO02.LOG
Wed Jul 23 22:55:54 2008
Destination LOG_ARCHIVE_DEST_2 is SYNCHRONIZED
Wed Jul 23 22:55:54 2008
Destination LOG_ARCHIVE_DEST_2 no longer supports SYNCHRONIZATION
Wed Jul 23 22:55:54 2008
Thread 1 advanced to log sequence 135 (LGWR switch)
Current log# 3 seq# 135 mem# 0: C:\ORACLE\PRODUCT\10.2.0\ORADATA\IMRAN\REDO03.LOG
When I use this command, SQL Apply starts again, but the tablespace is not created on the logical standby database.
Kindly give me a solution.
Thanks in advance.
On the standby database you also need to add the tablespace details for it to recognise the primary database's new tablespace.
Try adding it and retry your operation. -
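With STANDBY_FILE_MANAGEMENT left at MANUAL, the tablespace usually has to be created by hand on the logical standby, roughly as follows (a sketch; the file path is hypothetical):

```sql
ALTER DATABASE STOP LOGICAL STANDBY APPLY;
-- The database guard normally blocks DDL on a logical standby
ALTER SESSION DISABLE GUARD;
CREATE TABLESPACE pay
  DATAFILE 'C:\ORACLE\ORADATA\STBY\PAY01.DBF' SIZE 100M;
ALTER SESSION ENABLE GUARD;
ALTER DATABASE START LOGICAL STANDBY APPLY;
```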
How to monitor SQL Apply for 10.2.0.3 logical standby database
We have a logical standby database set up for reporting purposes. Users want to monitor closely whether SQL Apply is working or has failed, as failures have reporting repercussions.
For 9i databases, there was a "Data Not Applied (logs)" metric which we used for alerting and paging when a backlog of more than 5 log files developed.
From 10.2.0.3 onwards, that metric no longer exists.
I would like to learn from others how to monitor the setup, so that we get paged if a backlog develops in log shipping or applying.
Regards.
Regather the statistics on the table with method_opt => 'for all columns size 1' or 'for all indexed columns size 1'.
The 'size 1' directive will remove the histogram statistics.
Sorry, I read your post in a hurry. The article below (http://www.freelists.org/post/oracle-l/Any-quick-way-to-remove-histograms,13) removes histograms without re-analyzing the table. Hope that helps!
On 3/16/07, Wolfgang Breitling <breitliw@xxxxxxxxxxxxx> wrote:
I also did a quick check and just using
exec dbms_stats.set_column_stats(user, 'table_name', colname => 'column_name', distcnt => <num_distinct>);
will remove the histogram without removing the low_value and high_value.
At 01:40 PM 3/16/2007, Alberto Dell'Era wrote:
On 3/16/07, Allen, Brandon <Brandon.Allen@xxxxxxxxxxx> wrote:
Is there any faster way to remove histograms other than re-analyzing the table? I want to keep the existing table, index & column stats, but with only 1 bucket (i.e. no histograms).
You might try the attached script, which reads the stats using
dbms_stats.get_column_stats and re-sets them, minus the histogram,
using dbms_stats.set_column_stats.
I haven't fully tested it - it's only 10 minutes old, even if I have slightly modified for you another script I've used for quite some time - and the spool on 10.2.0.3 seems to confirm that the histogram is, indeed, removed, while all the other statistics are preserved. I have also reset density to 1/num_distinct, which is the value you get if no histogram is collected.
Regards,
naren
Edited by: fuzzydba on Oct 25, 2010 10:52 AM -
Hi Friends,
I am getting the following error in the logical standby database during SQL Apply.
After I run the command ALTER DATABASE START LOGICAL STANDBY APPLY, the SQL Apply services start, but after a few seconds they stop automatically with the error below.
alter database start logical standby apply
Tue May 17 06:42:00 2011
No optional part
Attempt to start background Logical Standby process
LOGSTDBY Parameter: MAX_SERVERS = 20
LOGSTDBY Parameter: MAX_SGA = 100
LOGSTDBY Parameter: APPLY_SERVERS = 10
LSP0 started with pid=30, OS id=4988
Tue May 17 06:42:00 2011
Completed: alter database start logical standby apply
Tue May 17 06:42:00 2011
LOGSTDBY status: ORA-16111: log mining and apply setting up
Tue May 17 06:42:00 2011
LOGMINER: Parameters summary for session# = 1
LOGMINER: Number of processes = 4, Transaction Chunk Size = 201
LOGMINER: Memory Size = 100M, Checkpoint interval = 500M
Tue May 17 06:42:00 2011
LOGMINER: krvxpsr summary for session# = 1
LOGMINER: StartScn: 0 (0x0000.00000000)
LOGMINER: EndScn: 0 (0x0000.00000000)
LOGMINER: HighConsumedScn: 2660033 (0x0000.002896c1)
LOGMINER: session_flag 0x1
LOGMINER: session# = 1, preparer process P002 started with pid=35 OS id=4244
LOGSTDBY Apply process P014 started with pid=47 OS id=5456
LOGSTDBY Apply process P010 started with pid=43 OS id=6484
LOGMINER: session# = 1, reader process P000 started with pid=33 OS id=4732
Tue May 17 06:42:01 2011
LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1417, X:\TANVI\ARCHIVE2\ARC01417_0748170313.001
Tue May 17 06:42:01 2011
LOGMINER: Turning ON Log Auto Delete
Tue May 17 06:42:01 2011
LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01417_0748170313.001
Tue May 17 06:42:01 2011
LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1418, X:\TANVI\ARCHIVE2\ARC01418_0748170313.001
LOGSTDBY Apply process P008 started with pid=41 OS id=4740
LOGSTDBY Apply process P013 started with pid=46 OS id=7864
LOGSTDBY Apply process P006 started with pid=39 OS id=5500
LOGMINER: session# = 1, builder process P001 started with pid=34 OS id=4796
Tue May 17 06:42:02 2011
LOGMINER: skipped redo. Thread 1, RBA 0x00058a.00000950.0010, nCV 6
LOGMINER: op 4.1 (Control File)
Tue May 17 06:42:02 2011
LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01418_0748170313.001
Tue May 17 06:42:03 2011
LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1419, X:\TANVI\ARCHIVE2\ARC01419_0748170313.001
Tue May 17 06:42:03 2011
LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01419_0748170313.001
Tue May 17 06:42:03 2011
LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1420, X:\TANVI\ARCHIVE2\ARC01420_0748170313.001
Tue May 17 06:42:03 2011
LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01420_0748170313.001
Tue May 17 06:42:03 2011
LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1421, X:\TANVI\ARCHIVE2\ARC01421_0748170313.001
LOGSTDBY Analyzer process P004 started with pid=37 OS id=5096
Tue May 17 06:42:03 2011
LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01421_0748170313.001
LOGSTDBY Apply process P007 started with pid=40 OS id=2760
Tue May 17 06:42:03 2011
Errors in file x:\oracle\product\10.2.0\admin\tanvi\bdump\tanvi_p001_4796.trc:
ORA-00600: internal error code, arguments: [krvxbpx20], [1], [1418], [2380], [16], [], [], []
LOGSTDBY Apply process P012 started with pid=45 OS id=7152
Tue May 17 06:42:03 2011
LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1422, X:\TANVI\ARCHIVE2\ARC01422_0748170313.001
Tue May 17 06:42:03 2011
LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01422_0748170313.001
Tue May 17 06:42:03 2011
LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1423, X:\TANVI\ARCHIVE2\ARC01423_0748170313.001
Tue May 17 06:42:03 2011
LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01423_0748170313.001
Tue May 17 06:42:03 2011
LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1424, X:\TANVI\ARCHIVE2\ARC01424_0748170313.001
LOGMINER: session# = 1, preparer process P003 started with pid=36 OS id=5468
Tue May 17 06:42:03 2011
LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01424_0748170313.001
Tue May 17 06:42:04 2011
LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1425, X:\TANVI\ARCHIVE2\ARC01425_0748170313.001
LOGSTDBY Apply process P011 started with pid=44 OS id=6816
LOGSTDBY Apply process P005 started with pid=38 OS id=5792
LOGSTDBY Apply process P009 started with pid=42 OS id=752
Tue May 17 06:42:05 2011
krvxerpt: Errors detected in process 34, role builder.
Tue May 17 06:42:05 2011
krvxmrs: Leaving by exception: 600
Tue May 17 06:42:05 2011
Errors in file x:\oracle\product\10.2.0\admin\tanvi\bdump\tanvi_p001_4796.trc:
ORA-00600: internal error code, arguments: [krvxbpx20], [1], [1418], [2380], [16], [], [], []
LOGSTDBY status: ORA-00600: internal error code, arguments: [krvxbpx20], [1], [1418], [2380], [16], [], [], []
Tue May 17 06:42:06 2011
Errors in file x:\oracle\product\10.2.0\admin\tanvi\bdump\tanvi_lsp0_4988.trc:
ORA-12801: error signaled in parallel query server P001
ORA-00600: internal error code, arguments: [krvxbpx20], [1], [1418], [2380], [16], [], [], []
Tue May 17 06:42:06 2011
LogMiner process death detected
Tue May 17 06:42:06 2011
logminer process death detected, exiting logical standby
LOGSTDBY Analyzer process P004 pid=37 OS id=5096 stopped
LOGSTDBY Apply process P010 pid=43 OS id=6484 stopped
LOGSTDBY Apply process P008 pid=41 OS id=4740 stopped
LOGSTDBY Apply process P012 pid=45 OS id=7152 stopped
LOGSTDBY Apply process P014 pid=47 OS id=5456 stopped
LOGSTDBY Apply process P005 pid=38 OS id=5792 stopped
LOGSTDBY Apply process P006 pid=39 OS id=5500 stopped
LOGSTDBY Apply process P007 pid=40 OS id=2760 stopped
LOGSTDBY Apply process P011 pid=44 OS id=6816 stopped
Tue May 17 06:42:10 2011
Errors in file x:\oracle\product\10.2.0\admin\tanvi\bdump\tanvi_p001_4796.trc:
ORA-00600: internal error code, arguments: [krvxbpx20], [1], [1418], [2380], [16], [], [], []
Submit an SR to Oracle Support.
refer these too
*ORA-600/ORA-7445 Error Look-up Tool [ID 153788.1]*
*Bug 6022014: ORA-600 [KRVXBPX20] ON LOGICAL STANDBY* -
MV Logs not getting purged in a Logical Standby Database
We are trying to replicate a few tables in a logical standby database to another database. Both the source ( The Logical Standby) and the target database are in Oracle 11g R1.
The materialized views are refreshed using FAST REFRESH.
The Materialized View Logs created on the source ( the Logical Standby Database) are not getting purged when the MV in the target database is refreshed.
We checked the entries in the following Tables: SYS.SNAP$, SYS.SLOG$, SYS.MLOG$
When a materialized view is created on the target database, a record is not inserted into the SYS.SLOG$ table and it seems like that's why the MV Logs are not getting purged.
Why are we using a logical standby database instead of the primary? Because the load on the primary database is too high and the machine doesn't have enough resources to support MV-based replication. CPU usage is at 95% all the time, and the application owner won't allow us to run this against the primary database.
Do we have to do anything different in terms of Configuration/Privileges etc. because we are using a Logical Standby Database as a source ?
Thanks in advance.
We have an 11g RAC database on Solaris where there is a huge gap in archive log apply.
Thread Last Sequence Received Last Sequence Applied Difference
1 132581 129916 2665
2 108253 106229 2024
3 107452 104975 2477
The MRP0 process also seems not to be working. The standby lags the primary by almost 7000+ archives.
I suggest you use incremental roll-forward backups to bring it back in sync; use the link below for a step-by-step procedure.
http://www.oracle-ckpt.com/rman-incremental-backups-to-roll-forward-a-physical-standby-database-2/
A few questions:
1) Are those archives transported but just not applied?
2) On production, do you still have the archives, or backups of the archives?
3) What errors have you found in the alert log file? Please post the output of:
SQL> select severity,message,error_code,timestamp from v$dataguard_status where dest_id=2;
4) What errors are in the primary database alert log file?
Also post
select ds.dest_id id
, ad.status
, ds.database_mode db_mode
, ad.archiver type
, ds.recovery_mode
, ds.protection_mode
, ds.standby_logfile_count "SRLs"
, ds.standby_logfile_active active
, ds.archived_seq#
from v$archive_dest_status ds
, v$archive_dest ad
where ds.dest_id = ad.dest_id
and ad.status != 'INACTIVE'
order by
ds.dest_id
/
Also check for errors on the standby database. -
How to apply the changes in logical standby database
Hi,
I am new to Dataguard. I am now using 10.2.0.3 and followed the steps from Oracle Data Guard Concepts and Administration Guide to setup a logical standby database.
When I insert a record into a table on the primary database and then query the same table on the logical standby database, it doesn't show the new record.
Did I miss something? What I want is that when I insert a record in the primary DB, the corresponding record is inserted in the standby DB.
Or have I totally misunderstood what Oracle Data Guard is? Any help is appreciated.
Denis
Hi,
Can anyone help me determine whether my logical standby database has an archive gap?
SQL> SELECT APPLIED_SCN, APPLIED_TIME, READ_SCN, READ_TIME, NEWEST_SCN, NEWEST_TIME
     FROM DBA_LOGSTDBY_PROGRESS;
APPLIED_SCN APPLIED_TIME       READ_SCN READ_TIME          NEWEST_SCN NEWEST_TIME
     851821 29-JUL-08 17:58:29   851822 29-JUL-08 17:58:29    1551238 08-AUG-08 08:43:29
SQL> select pid, type, status, high_scn from v$logstdby;
no rows selected
SQL> alter database start logical standby apply;
Database altered.
SQL> select pid, type, status, high_scn from v$logstdby;
PID   TYPE         STATUS                                      HIGH_SCN
2472  COORDINATOR  ORA-16116: no work available
3380  READER       ORA-16127: stalled waiting for additional     852063
                   transactions to be applied
2480  BUILDER      ORA-16116: no work available
2492  ANALYZER     ORA-16111: log mining and apply setting up
2496  APPLIER      ORA-16116: no work available
2500  APPLIER      ORA-16116: no work available
3700  APPLIER      ORA-16116: no work available
940   APPLIER      ORA-16116: no work available
2504  APPLIER      ORA-16116: no work available
9 rows selected.
Thanks a lot.
Message was edited by:
Denis Chan -
Apply Patches on Oracle Database with Logical Standby Database
Here I am:
I got a primary database with a logical standby database, both running Oracle 11g. I have two client applications: one is the production site pointing to the primary, the other is a backup site pointing to the logical standby. Things are only written into the primary database every midnight; the client applications can only query the database, not insert, update, or delete. Now I want to apply the latest patch on both of my databases. I am also the DNS administrator, so I can make the name server point to the backup site instead of the production one. I want to apply the patch first on the logical standby, and then on the primary.
I found some references explaining how to apply patches using the "Rolling Upgrade Method"; however, I want to avoid the "switchover" mentioned there because I can make use of the name server. Can I just apply the patches in the following way?
1)Stop SQL apply
2)Apply patches on logical standby database
3)let the name server point to the backup site
4)Apply patches on the primary database
5)Start SQL apply
6)Let the name server point back to the production site
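In SQL terms, steps 1 and 5 above correspond roughly to the following on the logical standby (a sketch):

```sql
-- Step 1: stop SQL Apply before patching
ALTER DATABASE STOP LOGICAL STANDBY APPLY;
-- ... apply the patch on the standby, then on the primary ...
-- Step 5: resume SQL Apply afterwards
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
```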
Thanks in advance.
Please follow the steps in MOS Doc 437276.1 (Upgrading Oracle Database with a Logical Standby Database In Place).
HTH
Srini -
ORA-16821: logical standby database dictionary not yet loaded
Dear all,
I have a Data Guard architecture with a primary and a standby database (for reporting purposes). Since I upgraded the physical standby to a logical standby, I have been receiving this error:
ORA-16821: logical standby database dictionary not yet loaded
If someone has an idea, that would be great!
Thanks
oldschool
Hi,
Ok I applied :
SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
Database altered.
SQL> alter database start logical standby apply immediate;
Database altered.
SQL>
And now I received this :
ORA-16825: Fast-Start Failover and other errors or warnings detected for the database
Cause: The broker has detected multiple errors or warnings for the database. At least one of the detected errors or warnings may prevent a Fast-Start Failover from occurring.
Action: Check the StatusReport monitorable property of the database specified.
What does it mean to check the StatusReport?
I found this about the monitorable StatusReport property:
DGMGRL> show database 'M3RPT' 'StatusReport';
STATUS REPORT
INSTANCE_NAME SEVERITY ERROR_TEXT
* WARNING ORA-16821: logical standby database dictionary not yet loaded
DGMGRL>
What can I do ?
Thanks a lot
oldschool
Edited by: oldschool on Jun 4, 2009 2:37 AM -
Add new datafile to logical standby database but not in primary
Hi,
Is it ok to add a new datafile to the SYSAUX tablespace on the logical standby database but not on primary? We are running out of disk space on the partition where SYSAUX01.dbf resides so we want to add a new SYSAUX02.dbf in another partition which has space. but this will only be on the logical standby not on primary, there is still lots of space in primary. standby_file_management is MANUAL and this is LOGICAL STANDBY not PHYSICAL.
Is this possible, or will there be any issues?
Thanks.
A logical standby can differ from the primary; it can have extra tablespaces, datafiles, tables, indexes, users, etc.
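One caveat: by default the database guard blocks creating such extra objects on a logical standby, so the guard usually has to be relaxed first, e.g. (a sketch; the index is a hypothetical example):

```sql
-- Allow changes to objects not maintained by SQL Apply
ALTER DATABASE GUARD STANDBY;
-- Or, for the current session only:
ALTER SESSION DISABLE GUARD;
CREATE INDEX scott.emp_name_ix ON scott.emp (ename);
ALTER SESSION ENABLE GUARD;
```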
HTH
Enrique -
Hi,
When we say "Logical Standby Databases are logically identical to primary databases although the physical organization and structure of the data can be different." what does it exactly means?
Does it mean that in a logical standby the tablespace names, schema names, table names, column names, etc. can be different while it still has the same data as the primary?
Does it mean that we can exclude indexes and constraints as present in primary?
Only the data should match with primary word by word, value by value?
I am asking this as I have never worked on a logical standby database, but I seriously want to know.
Please answer.
Regards,
SID
Physical standby differs from logical standby:
Physical standby schema matches exactly the source database.
Archived redo logs are FTP'ed directly to the standby database, which is always running in "recover" mode. Upon arrival, the archived redo logs are applied directly to the standby database.
Logical standby is different from physical standby:
Logical standby database does not have to match the schema structure of the source database.
Logical standby uses LogMiner techniques to transform the archived redo logs into native DML statements (insert, update, delete). This DML is transported and applied to the standby database.
Logical standby tables can be open for SQL queries (read only), and all other standby tables can be open for updates.
Logical standby database can have additional materialized views and indexes added for faster performance.
Installing Physical standbys offers these benefits:
An identical physical copy of the primary database
Disaster recovery and high availability
High Data protection
Reduction in primary database workload
Performance Faster
Installing Logical standbys offer:
Simultaneous use for reporting, summations and queries
Efficient use of standby hardware resources
Reduction in primary database workload
Some limitations on the use of certain datatypes -
Real time apply for logical standby
Hi
Oracle 11.2.0.3.0
I have a primary database orcl and logical standby database orcl_std.
Real-time apply is enabled. I have standby redo logs on both the primary and standby sides, and I've started recovery with the below command:
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
When I create a new table in primary database, I am unable to see it on standby database (Although real time apply is enabled)
However, when I switch log in primary, I can see the new table in standby database.
My question is: why is real-time apply not working in my scenario? I was expecting to see the new table immediately in the standby database once it is created in the primary database. Why am I supposed to wait for a log switch with real-time apply?
Using Real-Time Apply to Apply Redo Data Immediately
http://docs.oracle.com/cd/E11882_01/server.112/e25608/log_apply.htm#i1022881
1. What is your COMPATIBLE parameter? It should be at least 11.1.
2. Check the parameters mentioned in the link below:
http://easyoradba.com/2011/01/10/real-time-apply-in-oracle-data-guard-10g/
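For point 1, the setting can be verified with (a sketch):

```sql
-- COMPATIBLE should be 11.1 or higher
SELECT value FROM v$parameter WHERE name = 'compatible';
```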
Regards
Girish Sharma
Edited by: Girish Sharma on Nov 15, 2012 12:37 PM