Oracle 10gR2 - Data Guard
I want to configure Oracle 10gR2 Data Guard for the following scenario:
• In a Data Guard environment, we want to configure one primary server and two standby databases.
• The primary server will use internal hard drives for storage (operating system, Oracle, production database, logs, etc.).
• The first standby database server will be connected to the primary server over the LAN. This server will use a SAN as its storage; the internal drives will be used for the operating system only.
• The second standby database will be hosted at a remote site connected through optical fiber. This database server will also use a SAN as its storage.
Can anybody help me out: are there any issues regarding SAN in a Data Guard environment?
Regards
No issues.
Similar Messages
-
In Data Guard, when I check the view log (OBJECT - View log) it returns
"Data Guard Remote Process Startup Fail"?
Regards
Hi there,
What's your question? You also need to provide the version and the type of the standby db. -
Data Guard adding new data files to a tablespace.
In the past, if you were manually updating an Oracle physical standby database, there were issues with adding a data file to a tablespace. It was suggested that the data file be created small and the small physical file copied to the standby database. Once the small data file was in place, it would be resized on the primary database, and replication would then change the size on the standby.
My question is: does Data Guard take care of this automatically for a physical standby? I can't find any specific reference on how it handles a new datafile.
Never mind, I found the answer.
STANDBY_FILE_MANAGEMENT=auto
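A minimal sketch of setting and verifying this on the standby (assuming the standby uses an spfile; with a pfile, edit the file directly):

```sql
-- On the standby database: have datafiles added on the primary
-- created automatically during redo apply.
ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT = 'AUTO' SCOPE = BOTH;

-- Verify the current setting:
SELECT NAME, VALUE
FROM   V$PARAMETER
WHERE  NAME = 'standby_file_management';
```

With this in place, a datafile added to a tablespace on the primary is created on the standby when the corresponding redo is applied (path mapping is governed by DB_FILE_NAME_CONVERT).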
Set on the standby database will create the datafiles. -
Problem with logminer in Data Guard configuration
Hi all,
I'm experiencing a strange problem with log apply in a Data Guard configuration, on the logical standby database side.
I've set up the configuration step by step as it is described in documentation (Oracle Data Guard Concepts and Administration, chapter 4).
Everything went fine until I issued
ALTER DATABASE START LOGICAL STANDBY APPLY;
I saw that log applying process was started by checking the output of
SELECT NAME, VALUE FROM V$LOGSTDBY_STATS WHERE NAME = 'coordinator state';
and
SELECT TYPE, HIGH_SCN, STATUS FROM V$LOGSTDBY;
but in a few minutes it stopped, and querying DBA_LOGSTDBY_EVENTS I saw the following records:
ORA-16111: log mining and apply setting up
ORA-01332: internal Logminer Dictionary error
Alert log says the following:
LOGSTDBY event: ORA-01332: internal Logminer Dictionary error
Wed Jan 21 16:57:57 2004
Errors in file /opt/oracle/admin/whouse/bdump/whouse_lsp0_5817.trc:
ORA-01332: internal Logminer Dictionary error
Here is the end of the whouse_lsp0_5817.trc
error 1332 detected in background process
OPIRIP: Uncaught error 447. Error stack:
ORA-00447: fatal error in background process
ORA-01332: internal Logminer Dictionary error
But the most useful info I found in one more trace file (whouse_p001_5821.trc):
krvxmrs: Leaving by exception: 604
ORA-00604: error occurred at recursive SQL level 1
ORA-01031: insufficient privileges
ORA-06512: at "SYS.LOGMNR_KRVRDREPDICT3", line 68
ORA-06512: at line 1
It seems that somewhere the correct privileges were not given, or something like that. By the way, I was doing all the operations under the SYS account (as SYSDBA).
Could somebody give me a clue where my mistake could be, or what was done the wrong way?
Thank you in advance. -
Data guard synchronization after link down b/w primary and physical standby
Hi All,
I have configured Data Guard on an Oracle 11gR2 database. Normally, switchover between my primary and physical standby happens smoothly, and the apply lag is zero. Recently we had to test a scenario where the network link between the primary and the physical standby is completely down and the physical standby is isolated for more than half an hour.
When we brought the link up, everything worked smoothly, but the apply lag started increasing from 0 to around 3 hours, and then it reduced back to 0. Currently, apply lag and transport lag both show 0.
But is this normal behaviour for Oracle Data Guard: when the link between primary and physical standby is completely down, does it require 3-4 hours for resynchronization, even when very few transactions happened on the primary database during the isolation?
Are there any documents available for this scenario??
Thanks
Hi, after the link is up, if there were transactions that produced archive logs, it's normal for the resync to take some time. To check whether 3-4 hours is normal or not, you can repeat the scenario and this time check:
- how many archive logs the primary produces in this period.
- after the link is up, does the archive log transfer from primary to standby start immediately? Is the primary able to send these archive logs in parallel?
- is there anything wrong with the apply process?
Check the primary and standby alert log files, and run this query on the standby to check the transport and apply processes:
SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
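To quantify the first check (how much redo the primary produced during the outage), a query along these lines can be run on the primary; the one-hour window here is an assumption to adjust to your test:

```sql
-- Count and size the archived logs produced in the last hour (run on the primary).
SELECT COUNT(*)                                      AS logs_produced,
       ROUND(SUM(BLOCKS * BLOCK_SIZE) / 1024 / 1024) AS total_mb
FROM   V$ARCHIVED_LOG
WHERE  COMPLETION_TIME > SYSDATE - 1/24;
```

Comparing this volume against the observed resync time gives a rough sense of whether transport or apply is the bottleneck.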
regards -
Data Guard Failover after primary site network failure or disconnect.
Hello Experts:
I'll try to be clear and specific with my issue:
Environment:
Two nodes with NO shared storage (I don't have an Observer running).
Veritas Cluster Server (VCS) with the Data Guard agent. (I don't use the Broker; the Data Guard agent "takes care" of the switchover and failover.)
Two single instance databases, one per node. NO RAC.
What I can perform with no issues:
Manual switch(over) of the primary database by running VCS command "hagrp -switch oraDG_group -to standby_node"
Automatic fail(over) when primary node is rebooted with "reboot" or "init"
Automatic fail(over) when primary node is shut down with "shutdown".
What I can NOT perform:
If I manually unplug the network cables from the primary site (the whole network, not only the link between the primary and standby nodes, so it's like unplugging the server from its power source), the automatic failover does not complete.
The same happens if I manually disconnect the server from the power.
These are the alert logs I have:
This is the portion of the alert log at Standby site when Real Time Replication is working fine:
Recovery of Online Redo Log: Thread 1 Group 4 Seq 7 Reading mem 0
Mem# 0: /u02/oracle/fast_recovery_area/standby_db/onlinelog/o1_mf_4_9c3tk3dy_.log
At this moment, node1 (primary) is completely disconnected from the network. See at the end how the database (the standby, which should be converted to PRIMARY) is not getting all the archived logs from the primary due to the abnormal disconnection from the network:
Identified End-Of-Redo (failover) for thread 1 sequence 7 at SCN 0xffff.ffffffff
Incomplete Recovery applied until change 15922544 time 12/23/2013 17:12:48
Media Recovery Complete (primary_db)
Terminal Recovery: successful completion
Forcing ARSCN to IRSCN for TR 0:15922544
Mon Dec 23 17:13:22 2013
ARCH: Archival stopped, error occurred. Will continue retrying
ORACLE Instance primary_db - Archival Error
Attempt to set limbo arscn 0:15922544 irscn 0:15922544
ORA-16014: log 4 sequence# 7 not archived, no available destinations
ORA-00312: online log 4 thread 1: '/u02/oracle/fast_recovery_area/standby_db/onlinelog/o1_mf_4_9c3tk3dy_.log'
Resetting standby activation ID 2071848820 (0x7b7de774)
Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH
Mon Dec 23 17:13:33 2013
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH
Terminal Recovery: applying standby redo logs.
Terminal Recovery: thread 1 seq# 7 redo required
Terminal Recovery:
Recovery of Online Redo Log: Thread 1 Group 4 Seq 7 Reading mem 0
Mem# 0: /u02/oracle/fast_recovery_area/standby_db/onlinelog/o1_mf_4_9c3tk3dy_.log
Attempt to do a Terminal Recovery (primary_db)
Media Recovery Start: Managed Standby Recovery (primary_db)
started logmerger process
Mon Dec 23 17:13:33 2013
Managed Standby Recovery not using Real Time Apply
Media Recovery failed with error 16157
Recovery Slave PR00 previously exited with exception 283
ORA-283 signalled during: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH...
Mon Dec 23 17:13:34 2013
Shutting down instance (immediate)
Shutting down instance: further logons disabled
Stopping background process MMNL
Stopping background process MMON
License high water mark = 38
All dispatchers and shared servers shutdown
ALTER DATABASE CLOSE NORMAL
ORA-1109 signalled during: ALTER DATABASE CLOSE NORMAL...
ALTER DATABASE DISMOUNT
Shutting down archive processes
Archiving is disabled
Mon Dec 23 17:13:38 2013
Mon Dec 23 17:13:38 2013
Mon Dec 23 17:13:38 2013
ARCH shutting down
ARCH shutting down
ARCH shutting down
ARC0: Relinquishing active heartbeat ARCH role
ARC2: Archival stopped
ARC0: Archival stopped
ARC1: Archival stopped
Completed: ALTER DATABASE DISMOUNT
ARCH: Archival disabled due to shutdown: 1089
Shutting down archive processes
Archiving is disabled
Mon Dec 23 17:13:40 2013
Stopping background process VKTM
ARCH: Archival disabled due to shutdown: 1089
Shutting down archive processes
Archiving is disabled
Mon Dec 23 17:13:43 2013
Instance shutdown complete
Mon Dec 23 17:13:44 2013
Adjusting the default value of parameter parallel_max_servers
from 1280 to 470 due to the value of parameter processes (500)
Starting ORACLE instance (normal)
************************ Large Pages Information *******************
Per process system memlock (soft) limit = 64 KB
Total Shared Global Region in Large Pages = 0 KB (0%)
Large Pages used by this instance: 0 (0 KB)
Large Pages unused system wide = 0 (0 KB)
Large Pages configured system wide = 0 (0 KB)
Large Page size = 2048 KB
RECOMMENDATION:
Total System Global Area size is 3762 MB. For optimal performance,
prior to the next instance restart:
1. Increase the number of unused large pages by
at least 1881 (page size 2048 KB, total size 3762 MB) system wide to
get 100% of the System Global Area allocated with large pages
2. Large pages are automatically locked into physical memory.
Increase the per process memlock (soft) limit to at least 3770 MB to lock
100% System Global Area's large pages into physical memory
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Initial number of CPU is 32
Number of processor cores in the system is 16
Number of processor sockets in the system is 2
CELL communication is configured to use 0 interface(s):
CELL IP affinity details:
NUMA status: NUMA system w/ 2 process groups
cellaffinity.ora status: cannot find affinity map at '/etc/oracle/cell/network-config/cellaffinity.ora' (see trace file for details)
CELL communication will use 1 IP group(s):
Grp 0:
Picked latch-free SCN scheme 3
Autotune of undo retention is turned on.
IMODE=BR
ILAT =88
LICENSE_MAX_USERS = 0
SYS auditing is disabled
NUMA system with 2 nodes detected
Starting up:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options.
ORACLE_HOME = /u01/oracle/product/11.2.0.4
System name: Linux
Node name: node2.localdomain
Release: 2.6.32-131.0.15.el6.x86_64
Version: #1 SMP Tue May 10 15:42:40 EDT 2011
Machine: x86_64
Using parameter settings in server-side spfile /u01/oracle/product/11.2.0.4/dbs/spfileprimary_db.ora
System parameters with non-default values:
processes = 500
sga_target = 3760M
control_files = "/u02/oracle/orafiles/primary_db/control01.ctl"
control_files = "/u01/oracle/fast_recovery_area/primary_db/control02.ctl"
db_file_name_convert = "standby_db"
db_file_name_convert = "primary_db"
log_file_name_convert = "standby_db"
log_file_name_convert = "primary_db"
control_file_record_keep_time= 40
db_block_size = 8192
compatible = "11.2.0.4.0"
log_archive_dest_1 = "location=/u02/oracle/archivelogs/primary_db"
log_archive_dest_2 = "SERVICE=primary_db ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=primary_db"
log_archive_dest_state_2 = "ENABLE"
log_archive_min_succeed_dest= 1
fal_server = "primary_db"
log_archive_trace = 0
log_archive_config = "DG_CONFIG=(primary_db,standby_db)"
log_archive_format = "%t_%s_%r.dbf"
log_archive_max_processes= 3
db_recovery_file_dest = "/u02/oracle/fast_recovery_area"
db_recovery_file_dest_size= 30G
standby_file_management = "AUTO"
db_flashback_retention_target= 1440
undo_tablespace = "UNDOTBS1"
remote_login_passwordfile= "EXCLUSIVE"
db_domain = ""
dispatchers = "(PROTOCOL=TCP) (SERVICE=primary_dbXDB)"
job_queue_processes = 0
audit_file_dest = "/u01/oracle/admin/primary_db/adump"
audit_trail = "DB"
db_name = "primary_db"
db_unique_name = "standby_db"
open_cursors = 300
pga_aggregate_target = 1250M
dg_broker_start = FALSE
diagnostic_dest = "/u01/oracle"
Mon Dec 23 17:13:45 2013
PMON started with pid=2, OS id=29108
Mon Dec 23 17:13:45 2013
PSP0 started with pid=3, OS id=29110
Mon Dec 23 17:13:46 2013
VKTM started with pid=4, OS id=29125 at elevated priority
VKTM running at (1)millisec precision with DBRM quantum (100)ms
Mon Dec 23 17:13:46 2013
GEN0 started with pid=5, OS id=29129
Mon Dec 23 17:13:46 2013
DIAG started with pid=6, OS id=29131
Mon Dec 23 17:13:46 2013
DBRM started with pid=7, OS id=29133
Mon Dec 23 17:13:46 2013
DIA0 started with pid=8, OS id=29135
Mon Dec 23 17:13:46 2013
MMAN started with pid=9, OS id=29137
Mon Dec 23 17:13:46 2013
DBW0 started with pid=10, OS id=29139
Mon Dec 23 17:13:46 2013
DBW1 started with pid=11, OS id=29141
Mon Dec 23 17:13:46 2013
DBW2 started with pid=12, OS id=29143
Mon Dec 23 17:13:46 2013
DBW3 started with pid=13, OS id=29145
Mon Dec 23 17:13:46 2013
LGWR started with pid=14, OS id=29147
Mon Dec 23 17:13:46 2013
CKPT started with pid=15, OS id=29149
Mon Dec 23 17:13:46 2013
SMON started with pid=16, OS id=29151
Mon Dec 23 17:13:46 2013
RECO started with pid=17, OS id=29153
Mon Dec 23 17:13:46 2013
MMON started with pid=18, OS id=29155
Mon Dec 23 17:13:46 2013
MMNL started with pid=19, OS id=29157
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
starting up 1 shared server(s) ...
ORACLE_BASE from environment = /u01/oracle
Mon Dec 23 17:13:46 2013
ALTER DATABASE MOUNT
ARCH: STARTING ARCH PROCESSES
Mon Dec 23 17:13:50 2013
ARC0 started with pid=23, OS id=29210
ARC0: Archival started
ARCH: STARTING ARCH PROCESSES COMPLETE
ARC0: STARTING ARCH PROCESSES
Successful mount of redo thread 1, with mount id 2071851082
Mon Dec 23 17:13:51 2013
ARC1 started with pid=24, OS id=29212
Allocated 15937344 bytes in shared pool for flashback generation buffer
Mon Dec 23 17:13:51 2013
ARC2 started with pid=25, OS id=29214
Starting background process RVWR
ARC1: Archival started
ARC1: Becoming the 'no FAL' ARCH
ARC1: Becoming the 'no SRL' ARCH
Mon Dec 23 17:13:51 2013
RVWR started with pid=26, OS id=29216
Physical Standby Database mounted.
Lost write protection disabled
Completed: ALTER DATABASE MOUNT
Mon Dec 23 17:13:51 2013
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
USING CURRENT LOGFILE DISCONNECT FROM SESSION
Attempt to start background Managed Standby Recovery process (primary_db)
Mon Dec 23 17:13:51 2013
MRP0 started with pid=27, OS id=29219
MRP0: Background Managed Standby Recovery process started (primary_db)
ARC2: Archival started
ARC0: STARTING ARCH PROCESSES COMPLETE
ARC2: Becoming the heartbeat ARCH
ARC2: Becoming the active heartbeat ARCH
ARCH: Archival stopped, error occurred. Will continue retrying
ORACLE Instance primary_db - Archival Error
ORA-16014: log 4 sequence# 7 not archived, no available destinations
ORA-00312: online log 4 thread 1: '/u02/oracle/fast_recovery_area/standby_db/onlinelog/o1_mf_4_9c3tk3dy_.log'
At this moment, I've lost service and I have to wait until the primary server comes up again to receive the missing log.
This is the rest of the log:
Fatal NI connect error 12543, connecting to:
(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=node1)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=primary_db)(CID=(PROGRAM=oracle)(HOST=node2.localdomain)(USER=oracle))))
VERSION INFORMATION:
TNS for Linux: Version 11.2.0.4.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
Time: 23-DEC-2013 17:13:52
Tracing not turned on.
Tns error struct:
ns main err code: 12543
TNS-12543: TNS:destination host unreachable
ns secondary err code: 12560
nt main err code: 513
TNS-00513: Destination host unreachable
nt secondary err code: 113
nt OS err code: 0
Fatal NI connect error 12543, connecting to:
(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=node1)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=primary_db)(CID=(PROGRAM=oracle)(HOST=node2.localdomain)(USER=oracle))))
VERSION INFORMATION:
TNS for Linux: Version 11.2.0.4.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
Time: 23-DEC-2013 17:13:55
Tracing not turned on.
Tns error struct:
ns main err code: 12543
TNS-12543: TNS:destination host unreachable
ns secondary err code: 12560
nt main err code: 513
TNS-00513: Destination host unreachable
nt secondary err code: 113
nt OS err code: 0
started logmerger process
Mon Dec 23 17:13:56 2013
Managed Standby Recovery starting Real Time Apply
MRP0: Background Media Recovery terminated with error 16157
Errors in file /u01/oracle/diag/rdbms/standby_db/primary_db/trace/primary_db_pr00_29230.trc:
ORA-16157: media recovery not allowed following successful FINISH recovery
Managed Standby Recovery not using Real Time Apply
Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
USING CURRENT LOGFILE DISCONNECT FROM SESSION
Recovery Slave PR00 previously exited with exception 16157
MRP0: Background Media Recovery process shutdown (primary_db)
Fatal NI connect error 12543, connecting to:
(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=node1)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=primary_db)(CID=(PROGRAM=oracle)(HOST=node2.localdomain)(USER=oracle))))
VERSION INFORMATION:
TNS for Linux: Version 11.2.0.4.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
Time: 23-DEC-2013 17:13:58
Tracing not turned on.
Tns error struct:
ns main err code: 12543
TNS-12543: TNS:destination host unreachable
ns secondary err code: 12560
nt main err code: 513
TNS-00513: Destination host unreachable
nt secondary err code: 113
nt OS err code: 0
Mon Dec 23 17:14:01 2013
Fatal NI connect error 12543, connecting to:
(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=node1)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=primary_db)(CID=(PROGRAM=oracle)(HOST=node2.localdomain)(USER=oracle))))
VERSION INFORMATION:
TNS for Linux: Version 11.2.0.4.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
Time: 23-DEC-2013 17:14:01
Tracing not turned on.
Tns error struct:
ns main err code: 12543
TNS-12543: TNS:destination host unreachable
ns secondary err code: 12560
nt main err code: 513
TNS-00513: Destination host unreachable
nt secondary err code: 113
nt OS err code: 0
Error 12543 received logging on to the standby
FAL[client, ARC0]: Error 12543 connecting to primary_db for fetching gap sequence
Archiver process freed from errors. No longer stopped
Mon Dec 23 17:15:07 2013
Using STANDBY_ARCHIVE_DEST parameter default value as /u02/oracle/archivelogs/primary_db
Mon Dec 23 17:19:51 2013
ARCH: Archival stopped, error occurred. Will continue retrying
ORACLE Instance primary_db - Archival Error
ORA-16014: log 4 sequence# 7 not archived, no available destinations
ORA-00312: online log 4 thread 1: '/u02/oracle/fast_recovery_area/standby_db/onlinelog/o1_mf_4_9c3tk3dy_.log'
Mon Dec 23 17:26:18 2013
RFS[1]: Assigned to RFS process 31456
RFS[1]: No connections allowed during/after terminal recovery.
Mon Dec 23 17:26:47 2013
flashback database to scn 15921680
ORA-16157 signalled during: flashback database to scn 15921680...
Mon Dec 23 17:27:05 2013
alter database recover managed standby database using current logfile disconnect
Attempt to start background Managed Standby Recovery process (primary_db)
Mon Dec 23 17:27:05 2013
MRP0 started with pid=28, OS id=31481
MRP0: Background Managed Standby Recovery process started (primary_db)
started logmerger process
Mon Dec 23 17:27:10 2013
Managed Standby Recovery starting Real Time Apply
MRP0: Background Media Recovery terminated with error 16157
Errors in file /u01/oracle/diag/rdbms/standby_db/primary_db/trace/primary_db_pr00_31486.trc:
ORA-16157: media recovery not allowed following successful FINISH recovery
Managed Standby Recovery not using Real Time Apply
Completed: alter database recover managed standby database using current logfile disconnect
Recovery Slave PR00 previously exited with exception 16157
MRP0: Background Media Recovery process shutdown (primary_db)
Mon Dec 23 17:27:18 2013
RFS[2]: Assigned to RFS process 31492
RFS[2]: No connections allowed during/after terminal recovery.
Mon Dec 23 17:28:18 2013
RFS[3]: Assigned to RFS process 31614
RFS[3]: No connections allowed during/after terminal recovery.
Do you have any advice?
Thanks!
Alex.
Hello;
What's not clear to me in your question at this point:
"What I'm NOT being able to perform:
If I manually unplug the network cables from the primary site (all the network, not only the link between primary and standby node so, it's like a server unplug from the energy source).
Same situation happens if I manually disconnect the server from the power.
This is the alert logs I have:"
Are you trying a failover to the Standby?
Please advise.
Is it possible your "valid_for clause" is set incorrectly?
Would also review this:
ORA-16014 and ORA-00312 Messages in Alert.log of Physical Standby
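For reference, a typical pair of archive destinations with explicit VALID_FOR clauses, as they would appear on the primary, looks like the following; the service and unique names are illustrative, not taken from the poster's configuration:

```sql
-- Local archiving: valid for all log files in all roles.
ALTER SYSTEM SET LOG_ARCHIVE_DEST_1 =
  'LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=primary_db';

-- Remote redo shipping: active only while this database holds the primary role.
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 =
  'SERVICE=standby_db ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=standby_db';
```

A VALID_FOR clause that excludes the STANDBY_ROLE from local archiving of standby redo logs can produce exactly the ORA-16014 "no available destinations" symptom after a role transition.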
Best Regards
mseberg -
How do I find data loss in Data Guard?
We are using redo transport in ASYNC mode; the following is our setting:
SERVICE=xxx_sb max_failure=100 reopen=600 LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=xxx_sb
When I query V$MANAGED_STANDBY for DELAY_MINS, it's always zero, meaning there is no delay in copying a log. I have 2 questions:
1. How can I communicate to the business that in the worst case we will lose x minutes of data? It's an OLTP system where transactions take less than 2 minutes, though during the night there are some batch jobs where transactions are 60 minutes long.
2. Most of the time during peak hours there is a log switch every 10-15 minutes, but during non-peak hours it may not happen for a long period. Is it advisable to set ARCHIVE_LAG_TARGET to 10 minutes (600 seconds)? I'm not using the archiver; we are using the log writer for the standby.
Any explanation or pointer to documentation would be appreciated.
Thanks
Production databases running a fully configured Data Guard setup don't have any data loss, because the failover operation ensures zero data loss if Data Guard is configured in maximum protection mode or maximum availability mode at failover time.
http://www.dbazone.com/docs/oracle_10gDataGuard_overview.pdf
The above PDF is an Oracle white paper which also confirms this.
LGWR SYNC AFFIRM in Oracle Data Guard is used for zero data loss. How does one ensure zero data loss? Well, the redo block generated at the primary has to reach the standby across the network (that's where the SYNC part comes in - i.e. it is a synchronous network call), and then the block has to be written on disk on the standby (that's where the AFFIRM part comes in) - typically on a standby redo log.
Can you have LGWR SYNC NOAFFIRM? Yes, sure. Then you will have synchronous network transport, but the only thing you are guaranteed is that the block has reached the remote standby's memory; it has not been written to disk yet. So it is not really a zero-data-loss solution (e.g. what if the standby instance crashes before the disk I/O?).
To sum up: LGWR SYNC AFFIRM means primary transaction commits wait for network I/O + disk I/O acknowledgements. LGWR SYNC NOAFFIRM means primary transaction commits wait for network I/O only.
Source:http://www.dbasupport.com/forums/showthread.php?t=54467
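As a sketch, the difference shows up in the redo transport attributes of the archive destination; the destination number and service name below reuse the poster's xxx_sb, and the exact attribute list is an illustration, not their full setting:

```sql
-- Zero-data-loss transport: commit waits for the network send AND the
-- standby's disk write (standby redo log) to be acknowledged.
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 =
  'SERVICE=xxx_sb LGWR SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=xxx_sb';

-- Synchronous send only: commit waits for the network acknowledgement,
-- not the standby disk write.
-- ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 =
--   'SERVICE=xxx_sb LGWR SYNC NOAFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=xxx_sb';
```

With the poster's current LGWR ASYNC setting, the worst-case loss is roughly the redo not yet shipped at failure time, which is why SYNC AFFIRM (plus maximum availability or protection mode) is what the zero-data-loss claim rests on.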
HTH
Girish Sharma -
I have one problem with Data Guard. My archive log files are not applied.
I have one problem with Data Guard: my archive log files are not applied, even though I have received all archive log files on my physical standby db.
I have created a physical standby database on Oracle 10gR2 (Windows XP Professional). The primary database is on another computer.
In Enterprise Manager on the primary database it looks OK; I get the message "Data Guard status Normal".
But as I wrote above, the archive log files are not applied.
After I created the Physical Standby database, I have also done:
1. I connected to the Physical Standby database instance.
CONNECT SYS/SYS@luda AS SYSDBA
2. I started the Oracle instance at the Physical Standby database without mounting the database.
STARTUP NOMOUNT PFILE=C:\oracle\product\10.2.0\db_1\database\initluda.ora
3. I mounted the Physical Standby database:
ALTER DATABASE MOUNT STANDBY DATABASE
4. I started redo apply on Physical Standby database
alter database recover managed standby database disconnect from session
5. I switched the log files on Physical Standby database
alter system switch logfile
6. I verified the redo data was received and archived on Physical Standby database
select sequence#, first_time, next_time from v$archived_log order by sequence#
SEQUENCE# FIRST_TIME NEXT_TIME
3 2006-06-27 2006-06-27
4 2006-06-27 2006-06-27
5 2006-06-27 2006-06-27
6 2006-06-27 2006-06-27
7 2006-06-27 2006-06-27
8 2006-06-27 2006-06-27
7. I verified the archived redo log files were applied on Physical Standby database
select sequence#,applied from v$archived_log;
SEQUENCE# APP
4 NO
3 NO
5 NO
6 NO
7 NO
8 NO
8. on Physical Standby database
select * from v$archive_gap;
No rows
9. on Physical Standby database
SELECT MESSAGE FROM V$DATAGUARD_STATUS;
MESSAGE
ARC0: Archival started
ARC1: Archival started
ARC2: Archival started
ARC3: Archival started
ARC4: Archival started
ARC5: Archival started
ARC6: Archival started
ARC7: Archival started
ARC8: Archival started
ARC9: Archival started
ARCa: Archival started
ARCb: Archival started
ARCc: Archival started
ARCd: Archival started
ARCe: Archival started
ARCf: Archival started
ARCg: Archival started
ARCh: Archival started
ARCi: Archival started
ARCj: Archival started
ARCk: Archival started
ARCl: Archival started
ARCm: Archival started
ARCn: Archival started
ARCo: Archival started
ARCp: Archival started
ARCq: Archival started
ARCr: Archival started
ARCs: Archival started
ARCt: Archival started
ARC0: Becoming the 'no FAL' ARCH
ARC0: Becoming the 'no SRL' ARCH
ARC1: Becoming the heartbeat ARCH
Attempt to start background Managed Standby Recovery process
MRP0: Background Managed Standby Recovery process started
Managed Standby Recovery not using Real Time Apply
MRP0: Background Media Recovery terminated with error 1110
MRP0: Background Media Recovery process shutdown
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
RFS[1]: Assigned to RFS process 2148
RFS[1]: Identified database type as 'physical standby'
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
RFS[2]: Assigned to RFS process 2384
RFS[2]: Identified database type as 'physical standby'
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
RFS[3]: Assigned to RFS process 3188
RFS[3]: Identified database type as 'physical standby'
Primary database is in MAXIMUM PERFORMANCE mode
Primary database is in MAXIMUM PERFORMANCE mode
RFS[3]: No standby redo logfiles created
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
RFS[4]: Assigned to RFS process 3168
RFS[4]: Identified database type as 'physical standby'
RFS[4]: No standby redo logfiles created
Primary database is in MAXIMUM PERFORMANCE mode
RFS[3]: No standby redo logfiles created
10. on Physical Standby database
SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
PROCESS STATUS THREAD# SEQUENCE# BLOCK# BLOCKS
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
RFS IDLE 0 0 0 0
RFS IDLE 0 0 0 0
RFS IDLE 1 9 13664 2
RFS IDLE 0 0 0 0
10) on Primary database:
select message from v$dataguard_status;
MESSAGE
ARC0: Archival started
ARC1: Archival started
ARC2: Archival started
ARC3: Archival started
ARC4: Archival started
ARC5: Archival started
ARC6: Archival started
ARC7: Archival started
ARC8: Archival started
ARC9: Archival started
ARCa: Archival started
ARCb: Archival started
ARCc: Archival started
ARCd: Archival started
ARCe: Archival started
ARCf: Archival started
ARCg: Archival started
ARCh: Archival started
ARCi: Archival started
ARCj: Archival started
ARCk: Archival started
ARCl: Archival started
ARCm: Archival started
ARCn: Archival started
ARCo: Archival started
ARCp: Archival started
ARCq: Archival started
ARCr: Archival started
ARCs: Archival started
ARCt: Archival started
ARCm: Becoming the 'no FAL' ARCH
ARCm: Becoming the 'no SRL' ARCH
ARCd: Becoming the heartbeat ARCH
Error 1034 received logging on to the standby
Error 1034 received logging on to the standby
LGWR: Error 1034 creating archivelog file 'luda'
LNS: Failed to archive log 3 thread 1 sequence 7 (1034)
FAL[server, ARCh]: Error 1034 creating remote archivelog file 'luda'
11) on primary db
select name,sequence#,applied from v$archived_log;
NAME SEQUENCE# APP
C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00003_0594204176.001 3 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00004_0594204176.001 4 NO
Luda 4 NO
Luda 3 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00005_0594204176.001 5 NO
Luda 5 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00006_0594204176.001 6 NO
Luda 6 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00007_0594204176.001 7 NO
Luda 7 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00008_0594204176.001 8 NO
Luda 8 NO
12) on standby db
select name,sequence#,applied from v$archived_log;
NAME SEQUENCE# APP
C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00004_0594204176.001 4 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00003_0594204176.001 3 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00005_0594204176.001 5 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00006_0594204176.001 6 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00007_0594204176.001 7 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00008_0594204176.001 8 NO
13) my init.ora files
On standby db
irina.__db_cache_size=79691776
irina.__java_pool_size=4194304
irina.__large_pool_size=4194304
irina.__shared_pool_size=75497472
irina.__streams_pool_size=0
*.audit_file_dest='C:\oracle\product\10.2.0\admin\luda\adump'
*.background_dump_dest='C:\oracle\product\10.2.0\admin\luda\bdump'
*.compatible='10.2.0.1.0'
*.control_files='C:\oracle\product\10.2.0\oradata\luda\luda.ctl'
*.core_dump_dest='C:\oracle\product\10.2.0\admin\luda\cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_file_name_convert='luda','irina'
*.db_name='irina'
*.db_unique_name='luda'
*.db_recovery_file_dest='C:\oracle\product\10.2.0\flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.dispatchers='(PROTOCOL=TCP) (SERVICE=irinaXDB)'
*.fal_client='luda'
*.fal_server='irina'
*.job_queue_processes=10
*.log_archive_config='DG_CONFIG=(irina,luda)'
*.log_archive_dest_1='LOCATION=C:/oracle/product/10.2.0/oradata/luda/ VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=luda'
*.log_archive_dest_2='SERVICE=irina LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=irina'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_max_processes=30
*.log_file_name_convert='C:/oracle/product/10.2.0/oradata/irina/','C:/oracle/product/10.2.0/oradata/luda/'
*.open_cursors=300
*.pga_aggregate_target=16777216
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=167772160
*.standby_file_management='AUTO'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='C:\oracle\product\10.2.0\admin\luda\udump'
On primary db
irina.__db_cache_size=79691776
irina.__java_pool_size=4194304
irina.__large_pool_size=4194304
irina.__shared_pool_size=75497472
irina.__streams_pool_size=0
*.audit_file_dest='C:\oracle\product\10.2.0/admin/irina/adump'
*.background_dump_dest='C:\oracle\product\10.2.0/admin/irina/bdump'
*.compatible='10.2.0.1.0'
*.control_files='C:\oracle\product\10.2.0\oradata\irina\control01.ctl','C:\oracle\product\10.2.0\oradata\irina\control02.ctl','C:\oracle\product\10.2.0\oradata\irina\control03.ctl'
*.core_dump_dest='C:\oracle\product\10.2.0/admin/irina/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_file_name_convert='luda','irina'
*.db_name='irina'
*.db_recovery_file_dest='C:\oracle\product\10.2.0/flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.dispatchers='(PROTOCOL=TCP) (SERVICE=irinaXDB)'
*.fal_client='irina'
*.fal_server='luda'
*.job_queue_processes=10
*.log_archive_config='DG_CONFIG=(irina,luda)'
*.log_archive_dest_1='LOCATION=C:/oracle/product/10.2.0/oradata/irina/ VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=irina'
*.log_archive_dest_2='SERVICE=luda LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=luda'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_max_processes=30
*.log_file_name_convert='C:/oracle/product/10.2.0/oradata/luda/','C:/oracle/product/10.2.0/oradata/irina/'
*.open_cursors=300
*.pga_aggregate_target=16777216
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=167772160
*.standby_file_management='AUTO'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='C:\oracle\product\10.2.0/admin/irina/udump'
Please help me!!!!
Hi,
After several tries my redo logs are applied now. I think in my case it had to do with the tnsnames.ora. At this moment I have both databases in both tnsnames.ora files using the SID and not the SERVICE_NAME.
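For illustration, entries along these lines reflect the SID-based setup described above (the host names are placeholders, not taken from the original post):

```
# Hypothetical tnsnames.ora fragment, present on both machines.
IRINA =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary-host)(PORT = 1521))
    (CONNECT_DATA = (SID = irina)))

LUDA =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby-host)(PORT = 1521))
    (CONNECT_DATA = (SID = luda)))
```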
Now I want to use DGMGRL. Adding a configuration and a standby database works fine, but when I try to enable the configuration DGMGRL gives no feedback and appears to hang. The log, however, says that it succeeded.
In another session 'show configuration' results in the following, confirming that the enable succeeded.
DGMGRL> show configuration
Configuration
Name: avhtest
Enabled: YES
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
avhtest - Primary database
avhtestls53 - Physical standby database
Current status for "avhtest":
Warning: ORA-16610: command 'ENABLE CONFIGURATION' in progress
Is there anybody who has experienced the same problem and/or knows the solution to this?
With kind regards,
Martin Schaap -
What is wrong with my Data Guard system?
What is wrong with my Data Guard system(10g10.2.0 on OEL5.0)? What method should I take to diagnose it and repair it?
After shutting down last night, my Data Guard does not run normally today.
On the primary database I issue the following commands:
DGMGRL> show configuration verbose;
Configuration
Name: sdb10g
Enabled: YES
Protection Mode: MaxAvailability
Fast-Start Failover: ENABLED
Databases:
sdb10g - Primary database
stdby10g - Physical standby database
- Fast-Start Failover target
Fast-Start Failover
Threshold: 30 seconds
Observer: hostp
Current status for "sdb10g":
Warning: ORA-16607: one or more databases have failed
DGMGRL> show fast_start failover;
show fast_start failover;
Syntax error before or at "fast_start"
SQL> startup
ORACLE instance started.
Database mounted.
ORA-16649: database will open after Data Guard broker has evaluated Fast-Start
Failover status
On the physical standby database I issue the commands:
SQL> startup mount
ORACLE instance started.
Database mounted.
SQL> recover managed standby database disconnect;
Media recovery complete.
DGMGRL> show configuration verbose;
Configuration
Name: sdb10g
Enabled: YES
Protection Mode: MaxAvailability
Fast-Start Failover: ENABLED
Databases:
sdb10g - Primary database
stdby10g - Physical standby database
- Fast-Start Failover target
Fast-Start Failover
Threshold: 30 seconds
Observer: hostp
Current status for "sdb10g":
Warning: ORA-16607: one or more databases have failed
DGMGRL> disable fast_start failover;
Error: ORA-01034: ORACLE not available
Failed.
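For what it is worth, when the primary is stuck waiting on the broker's Fast-Start Failover evaluation, the commonly suggested escape hatch is to disable FSFO with the FORCE option from a database that is still reachable (a sketch only; the connect string is a placeholder):

```
DGMGRL> CONNECT sys/password@stdby10g
DGMGRL> DISABLE FAST_START FAILOVER FORCE;
```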
Message was edited by:
frank.qian
The primary database cannot be opened and fast-start failover cannot be disabled:
SQL> SQL> ALTER DATABASE open
ERROR at line 1:
ORA-16649: database will open after Data Guard broker has evaluated Fast-Start
Failover status
DGMGRL> disable fast_start failover;
Error: ORA-01034: ORACLE not available -
Data Guard Agent, Authentication Failure
I'm working with two Windows 2003 servers, attempting to use one as a standby and one as a primary database use Data Guard. However, I'm having a bit of trouble when trying to get one server to communicate through the Management Agent and Management Service. I've done Management Agent installs on about 20 XP workstations and they've also worked wonderfully with the Oracle Grid Control.
When the agent on my would-be standby database instance starts up I'm receiving the following errors in emagent.trc:
2005-11-01 15:16:54 Thread-3836 WARN main: clear collection state due to OMS_version difference
2005-11-01 15:16:54 Thread-3836 WARN command: Job Subsystem Timeout set at 600 seconds
2005-11-01 15:16:54 Thread-3836 WARN upload: Upload manager has no Failure script: disabled
2005-11-01 15:16:54 Thread-3836 WARN upload: Recovering left over xml files in upload directory
2005-11-01 15:16:54 Thread-3836 WARN upload: Recovered 0 left over xml files in upload directory
2005-11-01 15:16:54 Thread-3836 WARN metadata: Metric RuntimeLog does not have any data columns
2005-11-01 15:16:54 Thread-3836 WARN metadata: Metric collectSnapshot does not have any data columns
2005-11-01 15:16:54 Thread-3836 ERROR engine: [oracle_bc4j] CategoryProp NAME [VersionCategory] is not one of the valid choices
2005-11-01 15:16:54 Thread-3836 ERROR engine: ParseError: File=D:\oracle\product\10.1.0\dg\sysman\admin\metadata\oracle_bc4j.xml, Line=486, Msg=attribute NAME in <CategoryProp> cannot be NULL
2005-11-01 15:16:54 Thread-3836 WARN metadata: Column name EFFICIENCY__BYTES_SAVED_WITH_COMPRESSION__AVG_PER_SEC_SINCE_START too long, truncating to EFFICIENCY__BYTES_SAVED_WITH_COMPRESSION__AVG_PER_SEC_SINCE_STAR
2005-11-01 15:16:54 Thread-3836 WARN metadata: Column name ESI__ERRORS__ESI_DEFAULT_FRAGMENT_SERVED__AVG_PER_SEC_SINCE_START too long, truncating to ESI__ERRORS__ESI_DEFAULT_FRAGMENT_SERVED__AVG_PER_SEC_SINCE_STAR
2005-11-01 15:16:54 Thread-3836 WARN metadata: Column name SERVERS__APP_SRVR_STATS__SERVER__REQUESTS__AVG_PER_SEC_SINCE_START too long, truncating to SERVERS__APP_SRVR_STATS__SERVER__REQUESTS__AVG_PER_SEC_SINCE_STA
2005-11-01 15:16:54 Thread-3836 WARN metadata: Column name SERVERS__APP_SRVR_STATS__SERVER__LATENCY__MAX_PER_SEC_SINCE_START too long, truncating to SERVERS__APP_SRVR_STATS__SERVER__LATENCY__MAX_PER_SEC_SINCE_STAR
2005-11-01 15:16:54 Thread-3836 WARN metadata: Column name SERVERS__APP_SRVR_STATS__SERVER__LATENCY__AVG_PER_SEC_SINCE_START too long, truncating to SERVERS__APP_SRVR_STATS__SERVER__LATENCY__AVG_PER_SEC_SINCE_STAR
2005-11-01 15:16:54 Thread-3836 WARN metadata: Column name SERVERS__APP_SRVR_STATS__SERVER__OPEN_CONNECTIONS__MAX_SINCE_START too long, truncating to SERVERS__APP_SRVR_STATS__SERVER__OPEN_CONNECTIONS__MAX_SINCE_STA
2005-11-01 15:16:54 Thread-3836 WARN metadata: Column name SERVER__APP_SRVR_STATS__SERVER__REQUESTS__MAX_PER_SEC_SINCE_START too long, truncating to SERVER__APP_SRVR_STATS__SERVER__REQUESTS__MAX_PER_SEC_SINCE_STAR
2005-11-01 15:16:54 Thread-3836 WARN metadata: Column name SERVER__APP_SRVR_STATS__SERVER__REQUESTS__AVG_PER_SEC_SINCE_START too long, truncating to SERVER__APP_SRVR_STATS__SERVER__REQUESTS__AVG_PER_SEC_SINCE_STAR
2005-11-01 15:16:54 Thread-3836 WARN metadata: Column name SERVER__APP_SRVR_STATS__SERVER__OPEN_CONNECTIONS__MAX_SINCE_START too long, truncating to SERVER__APP_SRVR_STATS__SERVER__OPEN_CONNECTIONS__MAX_SINCE_STAR
2005-11-01 15:16:54 Thread-3836 WARN metadata: Metric Wireless_PID does not have any data columns
2005-11-01 15:16:54 Thread-3836 WARN metadata: Metric numberOfAppDownloadsOverInterval_instance does not have any data columns
2005-11-01 15:17:00 Thread-4172 WARN vpxoci: OCI Error -- ErrorCode(1017): ORA-01017: invalid username/password; logon denied
SQL = " OCISessionBegin"...
LOGIN = dbsnmp/<PW>@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=MY_DATABASE)(PORT=1521))(CONNECT_DATA=(SID=CPD2DB)))
2005-11-01 15:17:00 Thread-4172 ERROR vpxoci: ORA-01017: invalid username/password; logon denied
2005-11-01 15:17:00 Thread-4172 WARN vpxoci: Login 0xe8c220 failed, error=ORA-01017: invalid username/password; logon denied
2005-11-01 15:17:00 Thread-4172 WARN TargetManager: Exception in computing dynamic properties of {MY_DATABASE, oracle_database },MonitorConfigStatus::ORA-01017: invalid username/password; logon denied
2005-11-01 15:17:01 Thread-4172 WARN vpxoci: OCI Error -- ErrorCode(1017): ORA-01017: invalid username/password; logon denied
I've already toggled the Local Security Policy (Log On As Batch Job) setting in Windows, unlocked the Monitoring Profile account, etc. I've also tried to set the Preferred Host Credentials for the database, but it doesn't seem to want to authenticate the Windows 2003 Administrator user.
Anyone have any other suggestions?
Check the following:
Does the user have administrative privilege on the system?
Is the user running this part of ORA_DBA group?
Does the user have the local security policy "Logon as Batch Job"?
Have you set the OS Preferred Credential? If you are a domain user, this will be looking for domain\user name instead of just the user name.
On another note:
Have you doen any upgrades to the OMS repository?
If yes, is the new Repository compatible with the EM Console? -
Clarification on Data Guard(Physical Standyb db)
Hi guys,
I have been trying to set up Data Guard with a physical standby database for the past few weeks and I think I have managed to set it up and also perform a switchover. I have been reading a lot of websites and even the Oracle docs for this.
However I need clarification on the setup and whether or not it is working as expected.
My environment is Windows 32bit (Windows 2003)
Oracle 10.2.0.2 (Client/Server)
2 Physical machines
Here is what I have done.
Machine 1
1. Create a primary database using standard DBCA, hence the Oracle service(oradgp) and password file are also created along with the listener service.
2. Modify the pfile to include the following:-
oradgp.__db_cache_size=436207616
oradgp.__java_pool_size=4194304
oradgp.__large_pool_size=4194304
oradgp.__shared_pool_size=159383552
oradgp.__streams_pool_size=0
*.audit_file_dest='M:\oracle\product\10.2.0\admin\oradgp\adump'
*.background_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\bdump'
*.compatible='10.2.0.3.0'
*.control_files='M:\oracle\product\10.2.0\oradata\oradgp\control01.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control02.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control03.ctl'
*.core_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='oradgp'
*.db_recovery_file_dest='M:\oracle\product\10.2.0\flash_recovery_area'
*.db_recovery_file_dest_size=21474836480
*.fal_client='oradgp'
*.fal_server='oradgs'
*.job_queue_processes=10
*.log_archive_dest_1='LOCATION=E:\ArchLogs VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=oradgp'
*.log_archive_dest_2='SERVICE=oradgs LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=oradgs'
*.log_archive_format='ARC%S_%R.%T'
*.log_archive_max_processes=30
*.nls_territory='IRELAND'
*.open_cursors=300
*.pga_aggregate_target=203423744
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=612368384
*.standby_file_management='auto'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\udump'
*.service_names=oradgp
The locations on the harddisk are all available and archived redo are created (e:\archlogs)
3. I then add the necessary (4) standby logs on primary.
4. To replicate the db on the machine 2(standby db), I did an RMAN backup as:-
RMAN> run
{allocate channel d1 type disk format='M:\DGBackup\stby_%U.bak';
backup database plus archivelog delete input;}
5. I then copied over the standby~.bak files created from machine1 to machine2 to the same directory (M:\DBBackup) since I maintained the directory structure exactly the same between the 2 machines.
6. Then created a standby controlfile. (At this time the db was in open/write mode).
7. I then copied this standby ctl file to machine2 under the same directory structure (M:\oracle\product\10.2.0\oradata\oradgp) and replicated the same ctl file into 3 different files such as: CONTROL01.CTL, CONTROL02.CTL & CONTROL03.CTL
Machine2
8. I created an Oracle service called the same as primary (oradgp).
9. Created a listener also.
9. Set the Oracle Home & SID to the same name as primary (oradgp) <<<-- I am not sure about the sid one.
10. I then copied over the pfile from the primary to standby and created an spfile with this one.
It looks like this:-
oradgp.__db_cache_size=436207616
oradgp.__java_pool_size=4194304
oradgp.__large_pool_size=4194304
oradgp.__shared_pool_size=159383552
oradgp.__streams_pool_size=0
*.audit_file_dest='M:\oracle\product\10.2.0\admin\oradgp\adump'
*.background_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\bdump'
*.compatible='10.2.0.3.0'
*.control_files='M:\oracle\product\10.2.0\oradata\oradgp\control01.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control02.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control03.ctl'
*.core_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='oradgp'
*.db_recovery_file_dest='M:\oracle\product\10.2.0\flash_recovery_area'
*.db_recovery_file_dest_size=21474836480
*.fal_client='oradgs'
*.fal_server='oradgp'
*.job_queue_processes=10
*.log_archive_dest_1='LOCATION=E:\ArchLogs VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=oradgs'
*.log_archive_dest_2='SERVICE=oradgp LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=oradgp'
*.log_archive_format='ARC%S_%R.%T'
*.log_archive_max_processes=30
*.nls_territory='IRELAND'
*.open_cursors=300
*.pga_aggregate_target=203423744
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=612368384
*.standby_file_management='auto'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\udump'
*.service_names=oradgs
log_file_name_convert='junk','junk'
11. User RMAN to restore the db as:-
RMAN> startup mount;
RMAN> restore database;
Then RMAN created the datafiles.
12. I then added the same number (4) of standby redo logs to machine2.
13. Also added a tempfile though the temp tablespace was created per the restore via RMAN, I think the actual file (temp01.dbf) didn't get created, so I manually created the tempfile.
14. Ensuring the listener and Oracle service were running and that the database on machine2 was in MOUNT mode, I then started the redo apply using:-
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
It seems to have started the redo apply as I've checked the alert log and noticed that the sequence# was all "YES" for applied.
****However I noticed that in the alert log the standby was complaining about the online REDO log not being present****
So copied over the REDO logs from the primary machine and placed them in the same directory structure of the standby.
########Q1. I understand that the standby database does not need online REDO Logs but why is it reporting in the alert log then??########
I wanted to enable realtime apply so, I cancelled the recover by :-
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
and issued:-
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
This too was successful and I noticed that the recovery mode is set to MANAGED REAL TIME APPLY.
Checked this via the primary database also and it too reported that the DEST_2 is in MANAGED REAL TIME APPLY.
Also performed a log swith on primary and it got transported to the standby and was applied (YES).
Also ensured that there are no gaps via some queries where no rows were returned.
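The "no gaps" check mentioned above is usually this query (run on the standby; no rows returned means no gap is currently detected):

```sql
SELECT thread#, low_sequence#, high_sequence#
FROM   v$archive_gap;
```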
15. I now wanted to perform a switchover, hence issued:-
Primary_SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
All the archivers stopped as expected.
16. Now on machine2:
Stdby_SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
17. On machine1:
Primary_Now_Standby_SQL>SHUTDOWN IMMEDIATE;
Primary_Now_Standby_SQL>STARTUP MOUNT;
Primary_Now_Standby_SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
17. On machine2:
Stdby_Now_Primary_SQL>ALTER DATABASE OPEN;
Checked by switching the logfile on the new primary and ensured that the standby received this logfile and was applied (YES).
However, here are my questions for clarifications:-
Q1. There is a question about ONLINE REDO LOGS within "#" characters.
Q2. Do you see me doing anything wrong in regards to naming the directory structures? Should I have renamed the dbname directory in the Oracle Home to oradgs rather than oradgp?
Q3. When I enabled real time apply does that mean, that I am not in 'MANAGED' mode anymore? Is there an un-managed mode also?
Q4. After the switchover, I have noticed that the MRP0 process is "APPLYING LOG" status to a sequence# which is not even the latest sequence# as per v$archived_log. By this I mean:-
SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
MRP0 APPLYING_LOG 1 47 452 1024000
but :
SQL> select max(sequence#) from v$archived_log;
46
Why is that? Also I have noticed that one of the sequence#s is NOT applied but the later ones are:-
SQL> SELECT SEQUENCE#,APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
42 NO
43 YES
44 YES
45 YES
46 YES
What could be the possible reasons why sequence# 42 didn't get applied but the others did?
After reading several documents I am confused at this stage because I have read that you can setup standby databases using 'standby' logs but is there another method without using standby logs?
Q5. The log switch isn't happening automatically on the primary database where I could see the whole process happening on it own, such as generation of a new logfile, that being transported to the standby and then being applied on the standby.
Could this be due to inactivity on the primary database as I am not doing anything on it?
Sorry if I have missed out something guys but I tried to put in as much detail as I remember...
Thank you very much in advance.
Regards,
Bharath
Edited by: Bharath3 on Jan 22, 2010 2:13 AM
Parameters:
Missing on the Primary:
DB_UNIQUE_NAME=oradgp
LOG_ARCHIVE_CONFIG=DG_CONFIG=(oradgp, oradgs)
Missing on the Standby:
DB_UNIQUE_NAME=oradgs
LOG_ARCHIVE_CONFIG=DG_CONFIG=(oradgp, oradgs)
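A sketch of adding those missing parameters (the names are taken from the post; the SCOPE choices are the usual ones, since DB_UNIQUE_NAME is static and only takes effect after a restart):

```sql
-- On the primary (oradgp):
ALTER SYSTEM SET db_unique_name='oradgp' SCOPE=SPFILE;
ALTER SYSTEM SET log_archive_config='DG_CONFIG=(oradgp,oradgs)' SCOPE=BOTH;

-- On the standby (oradgs):
ALTER SYSTEM SET db_unique_name='oradgs' SCOPE=SPFILE;
ALTER SYSTEM SET log_archive_config='DG_CONFIG=(oradgp,oradgs)' SCOPE=BOTH;
```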
You said: Also added a tempfile though the temp tablespace was created per the restore via RMAN, I think the actual file (temp01.dbf) didn't get created, so I manually created the tempfile.
RMAN should have also added the temp file. Note that as of 11g RMAN duplicate for standby will also add the standby redo log files at the standby if they already existed on the Primary when you took the backup.
You said: ****However I noticed that in the alert log the standby was complaining about the online REDO log not being present****
That is just the weird error that the RDBMS returns when the database tries to find the online redo log files. You see that at the start of the MRP because it tries to open them and if it gets the error it will manually create them based on their file definition in the controlfile combined with LOG_FILE_NAME_CONVERT if they are in a different place from the Primary.
Your questions (Q1 answered above):
You said: Q2. Do you see me doing anything wrong in regards to naming the directory structures? Should I have renamed the dbname directory in the Oracle Home to oradgs rather than oradgp?
Up to you. Not a requirement.
You said: Q3. When I enabled real time apply does that mean, that I am not in 'MANAGED' mode anymore? Is there an un-managed mode also?
You are always in MANAGED mode when you use the RECOVER MANAGED STANDBY DATABASE command. If you use manual recovery "RECOVER STANDBY DATABASE" (NOT RECOMMENDED EVER ON A STANDBY DATABASE) then you are effectively in 'non-managed' mode although we do not call it that.
You said: Q4. After the switchover, I have noticed that the MRP0 process is "APPLYING LOG" status to a sequence# which is not even the latest sequence# as per v$archived_log. By this I mean:-
Log 46 (in your example) is the last FULL and ARCHIVED log hence that is the latest one to show up in V$ARCHIVED_LOG as that is a list of fully archived log files. Sequence 47 is the one that is current in the Primary online redo log and also current in the standby's standby redo log and as you are using real time apply that is the one it is applying.
You said: What could be the possible reasons why sequence# 42 didn't get applied but the others did?
42 was probably a gap. Select the FAL columns as well and it will probably say 'YES' for FAL. We do not update the Primary's controlfile every time we resolve a gap. Try the same command on the standby and you will see that 42 was indeed applied. Redo can never be applied out of order, so the max(sequence#) from v$archived_log where applied = 'YES' will tell you that every sequence before that number has to have been applied.
You said: After reading several documents I am confused at this stage because I have read that you can setup standby databases using 'standby' logs but is there another method without using standby logs?
Yes, If you do not have standby redo log files on the standby then we write directly to an archive log. Which means potential large data loss at failover and no real time apply. That was the old 9i method for ARCH. Don't do that. Always have standby redo logs (SRL)
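Adding a standby redo log group looks like this (the file path and group number are illustrative; a common rule of thumb is to size the SRLs at least as large as the largest online redo log):

```sql
ALTER DATABASE ADD STANDBY LOGFILE GROUP 5
  ('M:\oracle\product\10.2.0\oradata\oradgp\srl05.log') SIZE 50M;
```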
You said: Q5. The log switch isn't happening automatically on the primary database where I could see the whole process happening on it own, such as generation of a new logfile, that being transported to the standby and then being applied on the standby.
Could this be due to inactivity on the primary database as I am not doing anything on it?
Log switches on the Primary happen when the current log gets full, when a log switch has not happened for the number of seconds you specified in the ARCHIVE_LAG_TARGET parameter, or when you say ALTER SYSTEM SWITCH LOGFILE (or use one of the other methods for switching log files). The heartbeat redo will eventually fill up an online log file, but it is about 13 bytes, so you do the math on how long that would take :^)
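Those triggers can be exercised directly (900 seconds is an arbitrary example value):

```sql
-- Bound the time between log switches:
ALTER SYSTEM SET archive_lag_target = 900 SCOPE=BOTH;

-- Or force a switch immediately:
ALTER SYSTEM SWITCH LOGFILE;
```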
You are shipping redo with ASYNC, so we send the redo as it is committed; there is no wait for the log switch. And we are in real time apply, so there is no wait for the log switch to apply that redo. In theory you could create an online log file large enough to hold an entire day's worth of redo, never switch for the whole day, and the standby would still be caught up with the primary. -
Need suggestion on Active data guard or Logical Stand by
Hi All,
Need a suggestion of on below scenario.
We have a production database (Oracle version 11gR2) and are planning to have a logical standby or a physical standby (Active Data Guard). Our intended usage of the standby database is below.
1) Planning to run online reports (100+) 24x7. So might create additional indexes,materialized views etc.
2) daily data feed ( around 300+ data files ) to data warehouse. daily night, jobs will be scheduled to extract data and send to warehouse. Might need additional tables for jobs usage.
Please suggest which one is good.
Regards,
vara.
Hello,
Active Data Guard is a feature available from 11gR1 onwards.
If you choose Active Data Guard you have a couple of good options. You can use it for high availability of your production database, since the standby acts as an image copy of production; and, as you note, in 11g you have the further advantage that you can open the standby in read-only mode while MRP stays active, so you can redirect users to the standby for SELECT/reporting workloads and keep much of that load off production.
You can also perform a switchover for a role change, or a failover if your primary is completely lost. You can convert a physical standby to a logical standby, and you can configure FSFO.
You have plenty of options with Active Data Guard.
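As a sketch of the read-only-with-apply mode being described (11g physical standby; assumes managed recovery is currently running):

```sql
-- Stop redo apply, open read-only, then restart real-time apply
-- with the database open. Running apply while open read-only is
-- the combination that requires the Active Data Guard option.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE OPEN READ ONLY;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  USING CURRENT LOGFILE DISCONNECT;
```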
Refer http://www.orafaq.com/node/957
consider closing the thread if answered and keep the forum clean.
Edited by: CKPT on Mar 18, 2012 8:14 PM -
We are using Oracle 9.2, and the problem I am facing with Data Guard is that I want to know whether the logs have been applied or not. Below are the outputs.
We are using a manually managed Data Guard setup.
SELECT THREAD#, SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG;
no rows selected
When I fire the above query it does not show any result.
Please suggest.
SQL> show parameter stand
NAME TYPE VALUE
standby_archive_dest string /arch/log
standby_file_management string MANUAL
SQL> SELECT THREAD#, MAX(SEQUENCE#) AS "LAST_APPLIED_LOG"
2 FROM V$LOG_HISTORY
3 GROUP BY THREAD#;
THREAD# LAST_APPLIED_LOG
1 1724
2 1537
SELECT THREAD#, SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG;
no rows selected
We are using a manual standby database.
SQL> select DATABASE_ROLE, SWITCHOVER_STATUS,DATAGUARD_BROKER from v$database;
DATABASE_ROLE SWITCHOVER_STATUS DATAGUAR
PHYSICAL STANDBY SESSIONS ACTIVE DISABLED
SQL> show parameter stand
NAME TYPE VALUE
standby_file_management string MANUAL
So please suggest how I would know whether the archives have been applied or not,
although I have posted the query which I was using for the same.
And one more thing: I am getting an error in the alert log file also...
Sun Aug 10 12:28:09 2008
ORA-279 signalled during: ALTER DATABASE RECOVER CONTINUE DEFAULT ...
Sun Aug 10 12:28:09 2008
ALTER DATABASE RECOVER CONTINUE DEFAULT
Sun Aug 10 12:28:09 2008
Media Recovery Log /arch/log/1_1724.dbf
Sun Aug 10 12:31:09 2008
ORA-279 signalled during: ALTER DATABASE RECOVER CONTINUE DEFAULT ...
Sun Aug 10 12:31:09 2008
ALTER DATABASE RECOVER CONTINUE DEFAULT
Sun Aug 10 12:31:09 2008
Media Recovery Log /arch/log/1_1725.dbf
Errors with log /arch/log/1_1725.dbf
ORA-308 signalled during: ALTER DATABASE RECOVER CONTINUE DEFAULT ...
Sun Aug 10 12:31:09 2008
ALTER DATABASE RECOVER CANCEL
Sun Aug 10 12:31:09 2008
Media Recovery Cancelled
Completed: ALTER DATABASE RECOVER CANCEL
Sun Aug 10 12:33:09 2008
alter database open read only
Sun Aug 10 12:33:09 2008
SMON: enabling cache recovery
Sun Aug 10 12:33:09 2008
Database Characterset is WE8ISO8859P1
replication_dependency_tracking turned off (no async multimaster replication fou
nd)
Completed: alter database open read only
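For what it is worth: on a manually recovered 9.2 standby, V$ARCHIVED_LOG is typically empty because no RFS/ARCn process registers logs there. V$LOG_HISTORY, which is updated from the controlfile as each log is applied, is the usual check (as in the query already posted above):

```sql
-- On the standby: highest sequence applied per thread.
SELECT thread#, MAX(sequence#) AS last_applied
FROM   v$log_history
GROUP  BY thread#;

-- Compare with MAX(sequence#) per thread in V$ARCHIVED_LOG on the
-- primary; the difference is the apply lag in log files.
```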
-
What are the pros and cons using Active Data Guard vs Data Guard?
My understanding is that Active Data Guard is an additional database option for Oracle 11gR2 Enterprise Edition. I need to know the pros and cons using Active Data Guard vs Data Guard in order to decide whether to get pay extra for the Active Data Guard.
Thanks for any help.
Hemant K Chitale wrote:
Before jumping in to Active Data Guard, one needs to evaluate :
a. Is there really a need to run queries on the Standby ? The Standby could / should be at a remote site so queries are "across the network". Depending on the nature of the queries and the volume of output, the "performance" of the queries may not seem to be the same.
b. If the database is not in Maximum Protection mode, the data "seen" at the standby may not be in "real-time" synch
c. Not all applications are truely read-only when querying. Some applications use "jobs" that write to tables when querying. Such would not work with Active DataGuard. (example : EBusiness Suite). There are very complicated ways of handling this -- and one needs to consider if the complications can be introduced and supported.
Accessing the standby read-only over the network is really not a good idea, and I don't think anyone would compare performance between the primary and the standby.
But some users want to validate critical data, to check that it matches the primary; that is an added advantage of Active Data Guard.
Prior to that, you had to stop MRP and open the database before you could validate, so recovery was interrupted. I would say it is also an advantage that there is no interruption of recovery. -
What are the oracle processes involved in Data Guard Operation
Hi All,
I have a Primary and secondary physical standby database.
I want to know what are the oracle processes involved in the synchrnization between primary and secondary.
Thanks
Santosh
The best place to get this information is the Data Guard Concepts and Administration guide.
The link for 10g Release 2
http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14239/concepts.htm#i1039416
The link for 10g Release 1
http://download-west.oracle.com/docs/cd/B14117_01/server.101/b10823/concepts.htm#1039415
The link for Oracle 9i
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96653/concepts.htm#1027493