Bytes paged out on logical standby (urgent)
Hi,
We have a 3-node logical standby RAC; it is continuously paging out bytes to disk.
SQL> SELECT * FROM V$LOGSTDBY_STATS;
NAME VALUE
number of preparers 2
number of appliers 27
maximum SGA for LCR cache 3072
parallel servers in use 32
maximum events recorded 100
preserve commit order FALSE
transaction consistency NONE
record skip errors Y
record skip DDL Y
record applied DDL N
record unsupported operations N
coordinator state IDLE
transactions ready 0
transactions applied 0
coordinator uptime 10848
realtime logmining Y
apply delay 0
Log Miner session ID 1
txns delivered to client 3425760
DML txns delivered 3315152
DDL txns delivered 1072
CTAS txns delivered 153
Recursive txns delivered 109536
Rolled back txns seen 4226
LCRs delivered to client 20249373
bytes of redo processed 29038698292
bytes paged out 4482430016
seconds spent in pageout 8677
bytes checkpointed 0
seconds spent in checkpoint 0
bytes rolled back 0
seconds spent in rollback 7
seconds system is idle 0
SQL> SELECT TYPE, HIGH_SCN, STATUS FROM V$LOGSTDBY;
TYPE HIGH_SCN STATUS
COORDINATOR 61551734060 ORA-16116: no work available
READER 61551732712 ORA-16127: stalled waiting for additional transactions to be applied
BUILDER 61551732695 ORA-16243: paging out 607512 bytes of memory to disk
PREPARER 61551732695 ORA-16127: stalled waiting for additional transactions to be applied
PREPARER 61551732692 ORA-16127: stalled waiting for additional transactions to be applied
ANALYZER 61551732694 ORA-16116: no work available
APPLIER ORA-16116: no work available
APPLIER ORA-16116: no work available
APPLIER ORA-16116: no work available
APPLIER ORA-16116: no work available
APPLIER ORA-16116: no work available
APPLIER ORA-16116: no work available
APPLIER ORA-16116: no work available
APPLIER ORA-16116: no work available
APPLIER ORA-16116: no work available
APPLIER ORA-16116: no work available
APPLIER ORA-16116: no work available
APPLIER ORA-16116: no work available
APPLIER ORA-16116: no work available
APPLIER ORA-16116: no work available
APPLIER ORA-16116: no work available
APPLIER ORA-16116: no work available
APPLIER ORA-16116: no work available
APPLIER ORA-16116: no work available
APPLIER ORA-16116: no work available
APPLIER ORA-16116: no work available
APPLIER ORA-16116: no work available
APPLIER ORA-16116: no work available
APPLIER ORA-16116: no work available
APPLIER ORA-16116: no work available
APPLIER ORA-16116: no work available
APPLIER ORA-16116: no work available
APPLIER ORA-16116: no work available
The main issue is that LogMiner has been busy mining one 512 MB logfile for the last 6 hours, the value of "bytes paged out" is continuously increasing, and many log switches are happening on the logical standby, but still no new transactions are being applied.
We took an AWR report for the period and found that no SQL was being fired, but the standby is still generating archives of 512 MB each.
Database version: 10.2.0.2 RAC
OS: Solaris 10
Kindly help me resolve this issue.
Edited by: user8974795 on Sep 26, 2011 2:15 AM
Hi,
please execute the following steps:
1. Copy the partially corrupted file over from primary.
2. Re-register the logfile if needed.
ALTER DATABASE REGISTER OR REPLACE LOGICAL LOGFILE 'xxxxxxxxxxxxx';
3. Restart Logical Apply
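Assuming the stall really is a damaged archived log, the whole sequence might look like this on the standby (the file name below is a placeholder for the copy brought over from the primary):

```sql
-- Stop SQL Apply before touching the registered logfile
ALTER DATABASE STOP LOGICAL STANDBY APPLY;

-- Replace the registered copy with the good file from the primary
-- (path is hypothetical; use the file you copied over)
ALTER DATABASE REGISTER OR REPLACE LOGICAL LOGFILE '/arch/stby_arc_860.arc';

-- Resume real-time apply
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
```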
Similar Messages
-
[Logical Standby] Which table/SQL caused paging-out
We have a Primary-Logical DR configuration.
Recently the logical standby has had a problem: it is continuously paging out data from some transactions:
SELECT SUBSTR(name, 1, 40) AS NAME, SUBSTR(value,1,32) AS VALUE FROM GV$LOGSTDBY_STATS;
number of preparers 3
number of appliers 18
maximum SGA for LCR cache 4095
parallel servers in use 24
maximum events recorded 1000000
preserve commit order TRUE
transaction consistency FULL
record skip errors Y
record skip DDL Y
record applied DDL N
record unsupported operations Y
coordinator state IDLE
transactions ready 7
transactions applied 0
coordinator uptime 9646
realtime logmining Y
apply delay 0
Log Miner session ID 1
txns delivered to client 1068651
DML txns delivered 1017135
DDL txns delivered 15
CTAS txns delivered 0
Recursive txns delivered 51501
Rolled back txns seen 23463
LCRs delivered to client 11682189
bytes of redo processed 14475529508
bytes paged out 1482524624
seconds spent in pageout 8922
bytes checkpointed 0
seconds spent in checkpoint 0
bytes rolled back 7500032
seconds spent in rollback 90
seconds system is idle 0
SELECT SID, SERIAL#, SPID, TYPE, HIGH_SCN, STATUS_CODE, STATUS
FROM GV$LOGSTDBY_PROCESS
ORDER BY TYPE, SPID;
ANALYZER 16116 ORA-16116: no work available
APPLIER 16116 ORA-16116: no work available
APPLIER 16116 ORA-16116: no work available
APPLIER 16116 ORA-16116: no work available
APPLIER 16116 ORA-16116: no work available
APPLIER 16116 ORA-16116: no work available
APPLIER 16116 ORA-16116: no work available
APPLIER 16116 ORA-16116: no work available
APPLIER 16116 ORA-16116: no work available
APPLIER 16116 ORA-16116: no work available
APPLIER 16116 ORA-16116: no work available
APPLIER 16116 ORA-16116: no work available
APPLIER 16116 ORA-16116: no work available
APPLIER 16116 ORA-16116: no work available
APPLIER 16116 ORA-16116: no work available
APPLIER 16116 ORA-16116: no work available
APPLIER 16116 ORA-16116: no work available
APPLIER 16116 ORA-16116: no work available
APPLIER 16116 ORA-16116: no work available
BUILDER 16243 ORA-16243: paging out 4752 bytes of memory to disk
COORDINATOR 16116 ORA-16116: no work available
PREPARER 16127 ORA-16127: stalled waiting for additional transactions to be applied
PREPARER 16127 ORA-16127: stalled waiting for additional transactions to be applied
PREPARER 16127 ORA-16127: stalled waiting for additional transactions to be applied
READER 16127 ORA-16127: stalled waiting for additional transactions to be applied
select xidusn, xidslt, xidsqn, count(*) from system.logmnr_spill$
group by xidusn, xidslt, xidsqn;
XIDUSN XIDSLT XIDSQN COUNT(*)
996 46 249 254
710 37 838 825
623 3 706 254
478 7 42564 254
765 38 649 824
42 6 415494 3729
264 35 4817 3738
How can we identify the table/SQL to skip and instantiate later, so the logical DB will not lag far behind?
Thank you.

Hi,
The best way to find the SQL is to mine the current archive log being applied on the standby and check it. You might not get the exact SQL, but you will get the object that is being updated.
Or
You can use AWR report from logical standby of this time to find the update statement which is resource extensive.
There is no way to find the exact SQL on primary which is causing the issue on standby.
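To tie the spill rows above back to an object, one approach is to mine the archive log currently being applied and filter V$LOGMNR_CONTENTS by the spilling transaction's XID (a sketch; the log path is hypothetical, and the XID values are taken from the logmnr_spill$ output above):

```sql
BEGIN
  -- Hypothetical path: the archive log SQL Apply is currently working on
  DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/arch/1_4711_736275938.arc',
                          OPTIONS     => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/

-- Which objects does the big spilling transaction (42, 6, 415494) touch?
SELECT seg_owner, table_name, operation, COUNT(*) AS cnt
FROM   v$logmnr_contents
WHERE  xidusn = 42 AND xidslt = 6 AND xidsqn = 415494
GROUP  BY seg_owner, table_name, operation
ORDER  BY cnt DESC;

BEGIN
  DBMS_LOGMNR.END_LOGMNR;
END;
/
```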
Regards
Anudeep -
Logical standby: SQL Apply too slow
Hi all,
I have a question regarding SQL Apply performance on a logical standby. Two kinds of operations are remarkably slow when applied on the logical standby: "truncate table" and "delete from table".
When the logical standby picks up one of these statements from the logs, one applier starts working while all the others wait. The standby looks hung, SQL Apply crawls along gradually, and by the time the operation completes the standby is 4, 5, or even 8 hours behind the primary.
What can be done to speed up SQL Apply and alleviate this situation?
Best Regards,
Alex

Are you absolutely sure that the truncates (and deletes) are the problem? How did you check it?
You can use LogMiner to check what are most of the commands in the log currently applied. I use this:
BEGIN
  sys.DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => '/home/oracle/arc_43547_1_595785865.arc', OPTIONS => sys.DBMS_LOGMNR.ADDFILE );
END;
/
BEGIN
  sys.DBMS_LOGMNR.START_LOGMNR( OPTIONS => sys.DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG );
END;
/
SELECT seg_owner, seg_name, table_name, operation, COUNT(1)
FROM V$LOGMNR_CONTENTS
GROUP BY seg_owner, seg_name, table_name, operation
ORDER BY COUNT(1) DESC;
BEGIN
  sys.DBMS_LOGMNR.END_LOGMNR();
END;
/
Most of the time in our case, SQL Apply is slow because of high activity on a particular object. This shows up as a high number of DMLs for that object in LogMiner. If the object is not needed on the logical standby, you can skip it, and SQL Apply will be faster because it no longer applies changes for it. If the object is needed and this rate is unusual, you can skip it temporarily, turn SQL Apply back on, and after the problematic logs are applied, turn SQL Apply off, instantiate the object, unskip it, and turn SQL Apply on again.
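The temporary-skip workflow just described might be sketched like this (schema, table, and dblink names are placeholders; INSTANTIATE_TABLE runs on the standby over a database link to the primary):

```sql
-- Skip the hot object while the problematic logs go through
ALTER DATABASE STOP LOGICAL STANDBY APPLY;
EXECUTE DBMS_LOGSTDBY.SKIP(stmt => 'DML', schema_name => 'APP', object_name => 'HOT_TABLE');
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

-- ...once SQL Apply is past the heavy logs...
ALTER DATABASE STOP LOGICAL STANDBY APPLY;
EXECUTE DBMS_LOGSTDBY.UNSKIP(stmt => 'DML', schema_name => 'APP', object_name => 'HOT_TABLE');
EXECUTE DBMS_LOGSTDBY.INSTANTIATE_TABLE(schema_name => 'APP',
                                        table_name  => 'HOT_TABLE',
                                        dblink      => 'primary_link');
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
```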
Another thing that can drastically slow down SQL Apply is the amount of memory available to it (the alert log shows the max is ~4.5 GB or something like that, I'm not sure).
You can increase it with something like this:
ALTER DATABASE STOP LOGICAL STANDBY APPLY;
BEGIN
  DBMS_LOGSTDBY.APPLY_SET('MAX_SGA', 3000); -- set to 3000 MB
END;
/
ALTER DATABASE START LOGICAL STANDBY APPLY;
You should increase it if 'bytes paged out' keeps climbing when the following is run every few seconds during slow SQL Apply:
SELECT NAME, VALUE FROM V$LOGSTDBY_STATS
WHERE NAME LIKE '%page%' OR
NAME LIKE '%uptime%' OR NAME LIKE '%idle%';
I hope that it's something that can be fixed using the above info. If no, please comment and share your investigations.
Thanks -
Logical standby stopped when trying to create partitions on primary (urgent)
RDBMS Version: 10.2.0.3
Operating System and Version: Solaris 5.9
Error Number (if applicable): ORA-1119
Product (i.e. SQL*Loader, Import, etc.): Data Guard on RAC
Product Version: 10.2.0.3
The logical standby stopped when trying to create partitions on the primary (urgent).
The primary is a 2-node RAC on ASM; we implemented partitions on the primary.
The logical standby stopped applying logs.
Below is the alert.log for logical stdby:
Current log# 4 seq# 860 mem# 0: +RT06_DATA/rt06/onlinelog/group_4.477.635601281
Current log# 4 seq# 860 mem# 1: +RECO/rt06/onlinelog/group_4.280.635601287
Fri Oct 19 10:41:34 2007
create tablespace INVACC200740 logging datafile '+OT06_DATA' size 10M AUTOEXTEND ON NEXT 5M MAXSIZE 1000M EXTENT MANAGEMENT LOCAL
Fri Oct 19 10:41:34 2007
ORA-1119 signalled during: create tablespace INVACC200740 logging datafile '+OT06_DATA' size 10M AUTOEXTEND ON NEXT 5M MAXSIZE 1000M EXTENT MANAGEMENT LOCAL...
LOGSTDBY status: ORA-01119: error in creating database file '+OT06_DATA'
ORA-17502: ksfdcre:4 Failed to create file +OT06_DATA
ORA-15001: diskgroup "OT06_DATA" does not exist or is not mounted
ORA-15001: diskgroup "OT06_DATA" does not exist or is not mounted
LOGSTDBY Apply process P004 pid=49 OS id=16403 stopped
Fri Oct 19 10:41:34 2007
Errors in file /u01/app/oracle/admin/RT06/bdump/rt06_lsp0_16387.trc:
ORA-12801: error signaled in parallel query server P004
ORA-01119: error in creating database file '+OT06_DATA'
ORA-17502: ksfdcre:4 Failed to create file +OT06_DATA
ORA-15001: diskgroup "OT06_DATA" does not exist or is not mounted
ORA-15001: diskgroup "OT06_DATA" does not exist or
Here is the trace file info:
/u01/app/oracle/admin/RT06/bdump/rt06_lsp0_16387.trc
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
ORACLE_HOME = /u01/app/oracle/product/10.2.0
System name: SunOS
Node name: iscsv341.newbreed.com
Release: 5.9
Version: Generic_118558-28
Machine: sun4u
Instance name: RT06
Redo thread mounted by this instance: 1
Oracle process number: 16
Unix process pid: 16387, image: [email protected] (LSP0)
*** 2007-10-19 10:41:34.804
*** SERVICE NAME:(SYS$BACKGROUND) 2007-10-19 10:41:34.802
*** SESSION ID:(1614.205) 2007-10-19 10:41:34.802
knahcapplymain: encountered error=12801
*** 2007-10-19 10:41:34.804
ksedmp: internal or fatal error
ORA-12801: error signaled in parallel query server P004
ORA-01119: error in creating database file '+OT06_DATA'
ORA-17502: ksfdcre:4 Failed to create file +OT06_DATA
ORA-15001: diskgroup "OT06_DATA" does not exist or is not mounted
ORA-15001: diskgroup "OT06_DATA" does not exist or
KNACDMP: *******************************************************
KNACDMP: Dumping apply coordinator's context at 7fffd9e8
KNACDMP: Apply Engine # 0
KNACDMP: Apply Engine name
KNACDMP: Coordinator's Watermarks ------------------------------
KNACDMP: Apply High Watermark = 0x0000.0132b0bc
Sorry, our primary database file structure is different from the standby's. We used db_file_name_convert in the init.ora; it looks like this:
*.db_file_multiblock_read_count=16
*.db_file_name_convert='+OT06_DATA/OT06TSG001/','+RT06_DATA/RT06/','+RECO/OT06TSG001','+RECO/RT06'
*.db_files=2000
*.db_name='OT06'
*.db_recovery_file_dest='+RECO'
Is there anything wrong with this parameter?
I tried this parameter before, for cloning from an RMAN backup. It didn't work.
What exactly must be done for db_file_name_convert to work?
Even in this case I think the problem is that the location is not being converted, and the logical standby halts.
Please help me out.
let me know if you have any questions.
Thanks Regards
Raghavendra rao Yella.

Hi reega,
Thanks for your reply, our logical stdby has '+RT06_DATA/RT06'
and primary has '+OT06_DATA/OT06TSG001'
so we are using the db_file_name_convert init parameter, but it doesn't work.
Are there any particular steps hiding behind this parameter? I tried it for RMAN cloning and it didn't work; as a workaround I used the RMAN SET NEWNAME command for cloning.
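For what it's worth, DB_FILE_NAME_CONVERT is honored by physical standbys; on a logical standby the documented pattern is a DDL skip handler that rewrites the path and applies the modified statement. A sketch, with the procedure name hypothetical and the signature as in the 10.2 DBMS_LOGSTDBY documentation:

```sql
CREATE OR REPLACE PROCEDURE sys.handle_tbs_ddl (
  old_stmt IN  VARCHAR2,
  stmt_typ IN  VARCHAR2,
  schema   IN  VARCHAR2,
  name     IN  VARCHAR2,
  xidusn   IN  NUMBER,
  xidslt   IN  NUMBER,
  xidsqn   IN  NUMBER,
  error    IN  VARCHAR2,
  new_stmt OUT VARCHAR2,
  action   OUT NUMBER)
AS
BEGIN
  -- Rewrite the primary's diskgroup to the standby's and apply the edited DDL
  new_stmt := REPLACE(old_stmt, '+OT06_DATA', '+RT06_DATA');
  action   := DBMS_LOGSTDBY.SKIP_ACTION_REPLACE;
END;
/

-- Register the handler for tablespace DDL (apply must be stopped first)
ALTER DATABASE STOP LOGICAL STANDBY APPLY;
EXECUTE DBMS_LOGSTDBY.SKIP(stmt => 'TABLESPACE', proc_name => 'SYS.HANDLE_TBS_DDL');
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
```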
Let me know if you have any questions.
Thanks in advance. -
Logical : paging out memory to disk.
Hi,
We have to reorg a table in production, so I tried to test it in a test database, but every time I issue the move command, logical apply stops with the error "paging out memory to disk". I tried to increase the SGA and allocate more memory for the LCR cache, but I don't think that is a proper solution.
Can you suggest a better way? Or do you think I should use a skip handler?
Thanks so much!

Hello,
Just reopening this thread since I am getting the same error in one of my databases. The DB version is 11.2.0.3 and my OS is RHEL 5, x86-64. My ulimit is set to "unlimited" for the oracle user. I am seeing the following error and the session is being terminated.
ORA-04030: out of process memory when trying to allocate 16408 bytes (QERHJ hash-joi,QERHJ Bit vector)
More info from incident files. Any hints will be appreciated. Thanks.
========= Dump for incident 360697 (ORA 4030) ========
----- Beginning of Customized Incident Dump(s) -----
=======================================
TOP 10 MEMORY USES FOR THIS PROCESS
*** 2012-12-04 14:29:30.744
52% 2719 MB, 1991346 chunks: "permanent memory "
qmxdGetChildNo ds=0x2ab9c31d9620 dsprt=0x2ab9c302dcd0
37% 1950 MB, 663782 chunks: "free memory "
qmxdGetChildNo ds=0x2ab9c31d9620 dsprt=0x2ab9c302dcd0
2% 111 MB, 2897152 chunks: "qmxdplsArrayGetNI1 "
qmxdpls_subhea ds=0x2ab8725ba6d0 dsprt=0x2ab86dbc27c0
2% 111 MB, 2897152 chunks: "qmxdplsArrayNI0 "
qmxdpls_subhea ds=0x2ab8725ba6d0 dsprt=0x2ab86dbc27c0
2% 101 MB, 29363 chunks: "permanent memory "
qmxlu subheap ds=0x2ab9c3031d10 dsprt=0x2ab8725ba6d0
2% 101 MB, 663782 chunks: "qmxdGetChildNodes-subheap "
qmxdpls_nodeli ds=0x2ab9c302dcd0 dsprt=0x2ab8725ba6d0
1% 59 MB, 3787 chunks: "pl/sql vc2 " PL/SQL
koh-kghu call ds=0x2ab86e1d23b0 dsprt=0xbb07ca0
1% 52 MB, 17734 chunks: "permanent memory "
ds=0x2ab9c3034780 dsprt=0x2ab8725ba6d0
1% 34 MB, 466 chunks: "free memory "
pga heap ds=0xbb07ca0 dsprt=(nil)
0% 23 MB, 1462 chunks: "pmucalm coll " PL/SQL
koh-kghu call ds=0x2ab86e1a11a0 dsprt=0xbb07ca0
=======================================
PRIVATE MEMORY SUMMARY FOR THIS PROCESS
PRIVATE HEAP SUMMARY DUMP
5454 MB total:
5420 MB commented, 635 KB permanent
34 MB free (31 MB in empty extents),
5335 MB, 1 heap: "session heap " 60 KB free held
==========================================
INSTANCE-WIDE PRIVATE MEMORY USAGE SUMMARY
Dumping Work Area Table (level=1)
=====================================
Global SGA Info
global target: 4096 MB
auto target: 256 MB
max pga: 819 MB
pga limit: 1638 MB
pga limit known: 0
pga limit errors: 0
pga inuse: 6581 MB
pga alloc: 7038 MB
pga freeable: 276 MB
pga freed: 2000919 MB
pga to free: 0 %
broker request: 0
pga auto: 20 MB
pga manual: 0 MB
pga alloc (max): 10338 MB
pga auto (max): 1039 MB
pga manual (max): 0 MB
# workareas : 0
# workareas(max): 80
================================
PER-PROCESS PRIVATE MEMORY USAGE
Private memory usage per Oracle process
Top 10 processes:
(percentage is of 7038 MB total allocated memory)
78% pid 81: 5420 MB used of 5457 MB allocated <= CURRENT PROC
7% pid 176: 300 MB used of 486 MB allocated (185 MB freeable)
1% pid 42: 48 MB used of 55 MB allocated (5952 KB freeable)
1% pid 36: 41 MB used of 44 MB allocated
1% pid 38: 41 MB used of 44 MB allocated (1088 KB freeable)
1% pid 41: 41 MB used of 44 MB allocated (1088 KB freeable)
1% pid 20: 10 MB used of 42 MB allocated (30 MB freeable)
0% pid 44: 5570 KB used of 33 MB allocated (1600 KB freeable)
0% pid 10: 28 MB used of 31 MB allocated (2304 KB freeable)
0% pid 73: 24 MB used of 26 MB allocated (1280 KB freeable) -
Logical Standby out of sync after archiver stuck (how to resync)
Hi,
I had an archiver stuck on my logical standby database for about 4 hours this night.
2 hours later my primary db also had an archiver stuck.
After solving the problem by repairing the backup mechanism and startup of the archivelog backup I realised
that some tables are out of sync.
How can I get back all things back in sync again?
DB: 11gR2 EE
OS: RedHat Linux 5.5
Thanks
941743

Hi,
I'm not sure about resynchronizing the whole schema, but you can do it with PL/SQL like this.
Stop SQL Apply on the logical standby database:
SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
On the logical standby (INSTANTIATE_TABLE runs there, over a database link to the primary):
SQL> BEGIN
  FOR t IN (SELECT table_name FROM dba_tables WHERE owner = '<your schema name>')
  LOOP
    DBMS_LOGSTDBY.INSTANTIATE_TABLE(schema_name => '<your schema name>', table_name => t.table_name, dblink => '<your dblink name>');
  END LOOP;
END;
/
Start SQL Apply on the logical standby database:
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

If you skipped DML and DDL for your schema, then you must UNSKIP DML and DDL on these tables; add the following to the PL/SQL block before the INSTANTIATE_TABLE calls:
DBMS_LOGSTDBY.UNSKIP(stmt => 'DML', schema_name => '<your schema name>', object_name => t.table_name);
DBMS_LOGSTDBY.UNSKIP(stmt => 'DDL', schema_name => '<your schema name>', object_name => t.table_name);

Regards
Mahir M. Quluzade
Edited by: Mahir M. Quluzade on Sep 13, 2012 6:17 PM -
Logical standby stopped lastnight
Subject: Logical standby stopped lastnight
Author: raghavendra rao yella, United States
Date: Nov 14, 2007
Os info: solaris 5.9
Oracle info: 10.2.0.3
Error info: ORA-16120: dependencies being computed for transaction at SCN 0x0002.c8f6f182
Message: Our logical standby stopped last night. We tried to stop and start the standby, but it didn't help.
Below are some of the queries to get the status:
APPLIED_SCN LATEST_SCN MINING_SCN RESTART_SCN
11962328446 11981014649 11961580453 11961536228
APPLIED_TIME LATEST_TIME MINING_TIME RESTART_TIME
07-11-13 09:09:41 07-11-14 10:26:26 07-11-13 08:57:53 07-11-13 08:56:36
sys@RP06>SELECT TYPE, HIGH_SCN, STATUS FROM V$LOGSTDBY;
TYPE HIGH_SCN STATUS
COORDINATOR 1.1962E+10 ORA-16116: no work available
READER 1.1962E+10 ORA-16127: stalled waiting for additional transactions to be applied
BUILDER 1.1962E+10 ORA-16127: stalled waiting for additional transactions to be applied
PREPARER 1.1962E+10 ORA-16127: stalled waiting for additional transactions to be applied
ANALYZER 1.1962E+10 ORA-16120: dependencies being computed for transaction at SCN 0x0002.c8f6c002
APPLIER ORA-16116: no work available
APPLIER ORA-16116: no work available
APPLIER ORA-16116: no work available
APPLIER ORA-16116: no work available
APPLIER ORA-16116: no work available
10 rows selected.
SELECT PID, TYPE, STATUS
FROM V$LOGSTDBY
ORDER BY HIGH_SCN;
PID TYPE STATUS
17896 ANALYZER ORA-16120: dependencies being computed for transaction at SCN 0x0002.c8f6f182
17892 PREPARER ORA-16127: stalled waiting for additional transactions to be applied
17890 BUILDER ORA-16243: paging out 8144 bytes of memory to disk
17888 READER ORA-16127: stalled waiting for additional transactions to be applied
28523 COORDINATOR ORA-16116: no work available
17904 APPLIER ORA-16116: no work available
17906 APPLIER ORA-16116: no work available
17898 APPLIER ORA-16116: no work available
17900 APPLIER ORA-16116: no work available
17902 APPLIER ORA-16116: no work available
10 rows selected.
How can I get information about the transaction for which LogMiner is computing dependencies?
Let me know if you have any questions.
Thanks in advance.
Message was edited by:
raghu559
Slow replication on logical standby DB
Hi All,
Five days ago we ran an update-statistics script on our MIS DB. Since then we have seen slow replication from the RAC primary during peak hours and are not able to generate reports.
Please suggest why this is happening and how to solve it. It is very critical for me.
Details are following-----
BEGIN
-- Run job synchronously.
DBMS_SCHEDULER.run_job (job_name=> 'SYS.GATHER_STATS_JOB');
END;
Oracle Version-----
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
PL/SQL Release 10.2.0.3.0 - Production
CORE 10.2.0.3.0 Production
TNS for IBM/AIX RISC System/6000: Version 10.2.0.3.0 - Production
NLSRTL Version 10.2.0.3.0 - Production
AIX Version 5.3
SQL> select NAME ,OPEN_MODE, PROTECTION_MODE ,DATABASE_ROLE,GUARD_STATUs,LOG_MODE from v$database;
OPEN_MODE PROTECTION_MODE DATABASE_ROLE GUARD_S LOG_MODE
READ WRITE MAXIMUM PERFORMANCE LOGICAL STANDBY ALL ARCHIVELOG
Topas -------------
Topas Monitor for host: UMISDB01 EVENTS/QUEUES FILE/TTY
Wed Jan 20 11:51:13 2010 Interval: 2 Cswitch 6723 Readch 0.0G
Syscall 4300 Writech 6657.3K
Kernel 5.4 |## | Reads 2780 Rawin 0
User 58.1 |################# | Writes 288 Ttyout 1337
Wait 2.9 |# | Forks 0 Igets 0
Idle 33.7 |########## | Execs 0 Namei 33
Physc = 2.25 %Entc= 64.2 Runqueue 5.0 Dirblk 0
Waitqueue 6.5
Network KBPS I-Pack O-Pack KB-In KB-Out
en6 2.2 7.0 5.0 0.4 1.8 PAGING MEMORY
en1 0.0 0.0 0.0 0.0 0.0 Faults 2762 Real,MB 16384
lo0 0.0 0.0 0.0 0.0 0.0 Steals 7251 % Comp 94.4
PgspIn 3 % Noncomp 5.5
Disk Busy% KBPS TPS KB-Read KB-Writ PgspOut 0 % Client 5.5
hdisk10 100.0 4.2K 542.5 4.2K 0.0 PageIn 5530
hdisk6 100.0 9.6K 1.2K 9.0K 659.1 PageOut 1664 PAGING SPACE
hdisk5 100.0 8.8K 1.1K 8.1K 715.3 Sios 7194 Size,MB 32768
hdisk4 29.1 1.4K 89.9 128.6 1.3K % Used 9.9
hdisk15 8.5 1.1K 46.2 92.4 1.0K NFS (calls/sec) % Free 91.1
hdisk8 3.5 261.2 31.6 0.0 261.2 ServerV2 0
hdisk14 2.5 514.4 14.6 0.0 514.4 ClientV2 0 Press:
hdisk12 2.0 759.5 8.5 0.0 759.5 ServerV3 0 "h" for help
hdisk0 1.5 16.1 4.0 12.1 4.0 ClientV3 0 "q" to quit
hdisk13 0.5 827.8 13.6 0.0 827.8
hdisk1 0.5 4.0 1.0 0.0 4.0
hdisk7 0.5 442.1 7.5 0.0 442.1
Name PID CPU% PgSp Owner
oracle 1876038 14.1 15.3 oracle
oracle 1597544 14.0 11.3 oracle
lrud 16392 0.5 0.6 root
oracle 1515570 0.4 467.5 oracle
oracle 1695836 0.3 27.3 oracle
oracle 1642498 0.3 323.3 oracle
oracle 1204230 0.3 291.3 oracle
oracle 512222 0.2 483.5 oracle
oracle 1368188 0.2 7.4 oracle
oracle 1458238 0.2 227.3 oracle
oracle 1712180 0.1 307.4 oracle
oracle 1638546 0.1 37.2 oracle
aioserve 848030 0.1 0.4 root
Signal 2 received
Thanks in advance

Santosh Pradhan wrote:
Hi,
Oracle 10.2.0.3 Enterprise Edition logical standby.
We performed heavy updates on our production database, due to which the logical standby has gone many logs behind the primary, and logs are being applied on the logical standby very slowly.
Kindly suggest how to speed up the apply process on the logical standby.

I hope you are using the "ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;" command.
Please check the note below for adjusting the number of APPLIER processes; also, if redo transport is slow, check the setting of LOG_ARCHIVE_MAX_PROCESSES.
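If the bottleneck is the apply side, the server counts and LCR cache can be tuned with APPLY_SET while apply is stopped; the values below are only illustrative:

```sql
ALTER DATABASE STOP LOGICAL STANDBY APPLY;
EXECUTE DBMS_LOGSTDBY.APPLY_SET('MAX_SERVERS', 24);    -- total SQL Apply servers
EXECUTE DBMS_LOGSTDBY.APPLY_SET('APPLY_SERVERS', 16);  -- applier processes
EXECUTE DBMS_LOGSTDBY.APPLY_SET('MAX_SGA', 1024);      -- LCR cache, in MB
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
```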
http://docs.oracle.com/cd/B28359_01/server.111/b28294/manage_ls.htm#CHDBGBFC -
Best practice on using Flashback and Logical Standby
Hello,
I'm testing a fail-back scenario where I first need to activate a logical standby, then do some dummy transactions before I flashback this db and resme the redo apply. Here is what the steps look like:
1) Ensure logical standby is in-sync with primary
2) Enable flashback on standby
3) Create a flashback guaranteed restore point
4) Defer log shipping from primary
5) Activate the logical standby so it’s fully open to read-write
6) Dummy activities against the standby (which is now fully open)
7) Flashback the database to the guaranteed checkpoint
8) Resume log shipping on primary
9) Resume redo apply on secondary
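For reference, steps 2 through 9 might be sketched as follows; the restore point name and destination number are placeholders, and this is only the general flashback-and-reinstate shape, not a tested script:

```sql
-- On the standby (steps 2-3): enable flashback, take a guaranteed restore point
ALTER DATABASE FLASHBACK ON;
CREATE RESTORE POINT before_rw_test GUARANTEE FLASHBACK DATABASE;

-- On the primary (step 4): defer log shipping
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = DEFER;

-- On the standby (step 5): open it fully read-write
ALTER DATABASE ACTIVATE LOGICAL STANDBY DATABASE;

-- step 6: dummy transactions against the open standby ...

-- On the standby (step 7): flash back to the restore point
SHUTDOWN IMMEDIATE
STARTUP MOUNT
FLASHBACK DATABASE TO RESTORE POINT before_rw_test;
ALTER DATABASE OPEN RESETLOGS;

-- On the primary (step 8): resume shipping
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = ENABLE;

-- On the standby (step 9): resume SQL Apply
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
```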
In the end, i can see the log shipping is happening but the logical standby does not apply any of these..and there is no error in the alert log on Standby side. But the following query could explains why the standby is idle:
SELECT TYPE, HIGH_SCN, STATUS FROM V$LOGSTDBY;
TYPE HIGH_SCN STATUS
COORDINATOR ORA-16240: Waiting for log file (thread# 2, sequence# 0)
ORA-16240: Waiting for log file (thread# string, sequence# string)
Cause: Process is idle waiting for additional log file to be available.
Action: No action necessary. This informational statement is provided to record the event for diagnostic purposes.
I dont understand why it's looking for sequence #0 after the flashback.
Thanks for the help.

Hello;
I hesitate to answer your question because you are not doing a good job of keeping the forum clean:
Total Questions: 13 (13 unresolved)
Please consider closing some of you old answered questions and rewarding those who helped you.
No action necessary.
Do you really have a thread 2? ( Redo thread number )
Quick check
select applied_scn, latest_scn from v$logstdby_progress;

Use the DBA_LOGSTDBY_LOG view. If you don't have a thread 2, then the sequence# is meaningless.
COLUMN DICT_BEGIN FORMAT A10;
SELECT FILE_NAME, SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE#,
TIMESTAMP, DICT_BEGIN, DICT_END, THREAD# AS THR# FROM DBA_LOGSTDBY_LOG
ORDER BY SEQUENCE#;

Logical standby questions are difficult; there are not a lot of them out there, I'm thinking.
Check
http://docs.oracle.com/cd/E14072_01/server.112/e10700/manage_ls.htm
"Waiting On Gap State" ( However I still believe you don't have a 2nd thread# )
OR
http://psilt.wordpress.com/2009/04/29/simple-logical-standby/
Best Regards
mseberg
Edited by: mseberg on Apr 26, 2012 5:13 PM -
ORA-01403 - Logical Standby Apply ends on delete/update statement
- This thread is relocated to this forum; advice from Daniel Roy -
After implementing two logical standby databases and running pretty smoothly for a while, 'strange' errors occur that puzzle me. Sometimes I skip a transaction or exclude a schema from replication and hold my breath for what next rears its ugly head.
Despite fulfilling all logical standby prerequisites and maintaining primary keys on tables, I now run into ORA-01403 errors. The updated table contains only 1 row and has a foreign key to another small table.
I could instantiate these tables, but I want to understand why these errors occur and prevent them from happening, or learn how best to resolve them.
Anyone around who dealt with these matters and won?
I'm running this implementation with Oracle 9.2.0.7 on Tru64 5.1b.
I'm able to create the logical standby databases manually and with aid of the Data Guard Creation Wizard (EM10g 10.1).
Can anyone also help me out with retrieving the faulty transaction from e.g. V$LOGMNR_CONTENTS (without disrupting the Data Guard setup)?
I've already retrieved redo info from the archivelogs, but there must be an easier way.
Regards,
Erik

Any way for you to turn tracing on for the DB where you see this ORA-01403 error? We could then probably find out exactly what goes wrong. It's very hard for us to know what might be wrong, since we don't know your exact setup (except for this table). Let me know if that's not possible, and I could construct a logical DB setup to test (even though it would be on Windows; I don't have Tru64).
Daniel -
My environment: the primary database is 11.1.0.7 64-bit on Windows 2003 Enterprise 64-bit. The logical standby is on the same platform and Oracle version but a different server. I created a physical standby first, and it applied the logs quickly without any issues. I received no errors when I converted it to a logical standby database.
The problem is that as soon as I issue the command "alter database start logical standby apply;", CPU usage goes to 100% and SQL Apply takes a long time to apply a log. When I was doing this on 10g I never ran into this; as soon as a log was received, it was applied within a couple of minutes. I don't think it is a memory issue, since there is plenty on the logical standby server. I just can't figure out why SQL Apply is so slow and the CPU usage skyrockets. I went through all of the steps in the Oracle guide "Managing a Logical Standby Database" and I don't see anything wrong. The only difference between the two databases is that on the primary I have large page support enabled and on the logical I don't. Any help would be greatly appreciated; I need to use this logical standby for reporting.
Thanks for the responses. I have found what is causing the problem. I kept noticing that the statements it was slowing down on were the ones where data was being written to the SYS.AUD$ table in the System tablespace on the Logical Standby database. A quick count of the records showed that I had almost 6 million records in that table. After I decided to truncate SYS.AUD$ on the Logical, the archive logs started to apply normally. I wonder why the Logical has a problem with this table and the Primary doesn't. I didn't even know auditing was turned on on the Primary database, it must be enabled by default. Now I know why my System table space has grown from 1gb to 2gb since November.
Now that I fixed it for now, I am unsure what to do to keep this from happening. Can I turn off Auditing on the Logical and keep it on for the Primary? Would this stop data from being written to the SYS.AUD$ table on the Logical? It doesn't appear that there is any kind of cleanup on this table that is offered by Oracle, I guess I can just clean out this table occasionally but that is just another thing to add to the list of maintenance tasks. I notice that you can also write this audit data to a file on the OS. Has anyone here done that? -
Logical standby Slow after starting auditing
Hi:
I have a logical standby which was keeping up with the primary database. Then we started auditing on the primary (database auditing). After that, the logical standby stopped keeping up with the primary. After doing research, we found many SQL statements from the SYS schema against the AUD$ table, all doing full table scans on it; the table is 3 GB as of now. Also, this table is owned by SYS, and by default all SYS objects should be excluded from logical standby apply. But this one is not, and when I tried to skip it myself, the apply process raised ORA-600 and stopped.
I think it is a bug, or what?
Any help on this will be appreciated.
Regards
Bhushan

Hi Guys:
After taking my time, we decided to truncate SYS.AUD$ on the logical standby, and that solved our problem. Now the logical standby is catching up fast; I can see the progress is very quick.
Still, if anybody from Oracle Corp is reading this thread, they should give us the reason why the SYS.AUD$ table is being replicated on the logical standby site. Is this a bug in Oracle?
Regards
Bhushan -
Logical Standby working issues Oracle 9i, Windows
Hi,
Set up an Oracle 9i Logical Standby on Windows (instructions as per the Oracle documentation).
Did not have any issues setting it up.
While setting up the Logical Standby, I recovered the Primary Database until Oct 10/09 8:16 pm.
I registered the first archive log generated after that point on the logical standby, and FAL took care of copying/registering the rest of the archive logs.
Created and inserted some records in the Primary database and could see them in the Standby.
So far so good.
On Oct 11, data was entered into the Primary database. Archive logs were shipped to the Standby; I could see them registered in DBA_LOGSTDBY_LOG.
The APPLIED_SCN and NEWEST_SCN were in sync as per DBA_LOGSTDBY_PROGRESS.
Today, we had some issues with the data, and when we queried the user tables (no skip settings):
Couldn't see any data in the standby past the recovery point...
No errors reported in DBA_LOGSTDBY_EVENTS. No errors in the Alert log either.
What could be happening?
Thanks,
Madhuri
I figured it out...
Today, we had some issues with the data, and when we queried the user tables (no skip settings): couldn't see any data in the standby past the recovery point...
I was using two tables as a random spot check, and both did not get updated. So I was under the impression SQL Apply did not do anything.
But, it did apply the redo on the rest of the tables.
The 2 tables in question were skipped because both of them have function-based indexes.
They are very large tables.
So I am exporting them from the Primary database, importing them into the Standby database, and skipping DML on them in Data Guard.
That solved the problem.
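The workaround above can be sketched roughly as follows; APP_SCHEMA and the table names are placeholders for the real owner and the two function-based-index tables:

```sql
-- On the standby: stop apply and register skip rules for the two tables
ALTER DATABASE STOP LOGICAL STANDBY APPLY;
EXECUTE DBMS_LOGSTDBY.SKIP('DML', 'APP_SCHEMA', 'BIG_TABLE_1');
EXECUTE DBMS_LOGSTDBY.SKIP('DML', 'APP_SCHEMA', 'BIG_TABLE_2');
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

-- The tables themselves are then refreshed outside of SQL Apply,
-- e.g. with exp on the primary and imp on the standby.
```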
--Madhuri -
Logical Standby & Primary site Time Difference
Hi,
I have one primary site on a RAC configuration and one logical standby site. We have configured the logical standby for archived files. We would like to know how we can compute the time difference between the primary site and the logical site. For example, if some SCN XYZ is being applied on the logical standby site, when was that same SCN (XYZ) generated on the primary site? We need the exact time difference between the primary site and the logical site.
If there is any query or other method pls suggest to find out this information.
Thanks, Dewan
Hi,
From memory, I use this:
SELECT
  TO_CHAR(MIN(TIME),'YYYY-MM-DD HH24:MI:SS') OLDEST,
  TO_CHAR(MAX(TIME),'YYYY-MM-DD HH24:MI:SS') NEWEST,
  MAX(TIME) - MIN(TIME) DELTA
FROM (
  SELECT L.SEQUENCE# SEQ, L.FIRST_TIME TIME,
         (CASE WHEN L.NEXT_CHANGE# < P.READ_SCN THEN 'YES'
               WHEN L.FIRST_CHANGE# < P.APPLIED_SCN THEN 'CURRENT'
               ELSE 'NO' END) APPLIED
    FROM DBA_LOGSTDBY_LOG L, DBA_LOGSTDBY_PROGRESS P
)
WHERE APPLIED != 'YES';
Regards,
Yoann.
PS: This does not work for archived redo logs not yet sent to the logical standby.
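On 10g and later there is also a direct way to read the lag from the standby, without computing it from the log views (not available on 9i, so check your release first):

```sql
-- Run on the logical standby; VALUE is reported as a day-to-second interval
SELECT NAME, VALUE, UNIT
  FROM V$DATAGUARD_STATS
 WHERE NAME IN ('apply lag', 'transport lag');
```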
-
Logical Standby SQL Apply Using Incorrect Decode Statement
We are seeing statements erroring out on our logical standby that have been rewritten (presumably by SQL Apply) with DECODE statements that don't appear to be correct. For example, here is one of the rewritten statements:
update /*+ streams restrict_all_ref_cons */ "CADPROD"."OMS_SQL_STATEMENT" p
set "APPLICATION"=decode(:1,'N',"APPLICATION",:2),
"STATEMENT"=dbms_reputil2.get_final_lob(:3,"STATEMENT",:4)
where (:5='N' or(1=1 and (:6='N' or(dbms_lob.compare(:7,"STATEMENT")=0)or(:7 is null and "STATEMENT" is null)))) and(:8="APPLICATION")
The problem comes in, we believe, with the attempt to write the value "APPLICATION" to the APPLICATION column, which is only a 10-character field. The value for the :1 bind variable is 'N' and the value for :2 is null.
We see the following error on the logical standby:
ORA-00600: internal error code, arguments: [kgh_heap_sizes:ds], [0x01FCDBE60], [], [], [], [], [], []
ORA-07445: exception encountered: core dump [ACCESS_VIOLATION] [kxtoedu+54] [PC:0x2542308] [ADDR:0xFFFFFFFFFFFFFFFF] [UNABLE_TO_READ] []
ORA-12899: value too large for column "CADPROD"."OMS_SQL_STATEMENT"."APPLICATION" (actual: 19576, maximum: 10)
Is this a configuration issue, or is it normal for SQL Apply to convert statements from LogMiner into DECODE statements?
We have an Oracle 10.2.0.4 database running on Windows 2003 R2 64-bit. We have 3 physical and 2 logical standbys; no problems on the physical standbys.
Hello;
I noticed some of your parameters seem to be wrong.
fal_client - this is obsolete in 11.2.
You have db_name='test' on the Standby; it should be 'asadmin'.
fal_server=test is set like this on the standby; it should be 'asadmin'.
I might consider changing VALID_FOR to this:
VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES)
I would review section 4.2, Step-by-Step Instructions for Creating a Logical Standby Database, in Oracle document E10700-02.
Document 278371.1 is showing its age in my humble opinion.
-----Wait on this until you fix your parameters----------------------
Try restarting the SQL Apply:
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
I don't see the parameter MAX_SERVERS; try setting it to 8 times the number of cores.
Use these statements to troubleshoot:
SELECT NAME, VALUE, UNIT FROM V$DATAGUARD_STATS;
SELECT NAME, VALUE FROM V$LOGSTDBY_STATS WHERE NAME LIKE 'TRANSACTIONS%';
SELECT COUNT(1) AS IDLE_PREPARERS FROM V$LOGSTDBY_PROCESS WHERE
TYPE = 'PREPARER' AND STATUS_CODE = 16116;
Best Regards
mseberg
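If MAX_SERVERS does need changing, it is set through DBMS_LOGSTDBY.APPLY_SET with SQL Apply stopped. A sketch, assuming an 8-core host (so 64 servers, per the 8-per-core rule of thumb above):

```sql
ALTER DATABASE STOP LOGICAL STANDBY APPLY;
EXECUTE DBMS_LOGSTDBY.APPLY_SET('MAX_SERVERS', 64);
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
```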