Using OEM to monitor Data Guard database
Can someone please send me a link or docs on how to use OEM to monitor Data Guard? Specifically, I would like to use OEM to monitor the standby site and confirm that logs are being applied there.
Any ideas?
Hello,
I will extend the document on the Fast Failover feature one of these days.
What you need to do is:
To use Fast-Start Failover, you need to configure the Data Guard observer process.
The configuration of this process can be done by selecting the Fast-Start Failover Disabled link on the Data Guard page of the database (Primary or Standby).
You will be automatically redirected to the Fast-Start Failover: Configure Page.
From this page you can configure the Data Guard observer process, which will then enable you to activate the Fast-Start Failover feature.
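If the broker is already configured, the same setup can be driven from the DGMGRL command line instead of the OEM pages (a sketch; the connect identifier sys@primary_db is a placeholder for your own TNS alias):

```
DGMGRL> CONNECT sys@primary_db
DGMGRL> ENABLE FAST_START FAILOVER;
DGMGRL> START OBSERVER;
DGMGRL> SHOW FAST_START FAILOVER;
```

Note that the observer should run on a host separate from both the primary and the standby.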
Regards
Rob
http://oemgc.wordpress.com
Similar Messages
-
Monitoring Data Guard with SNMP?
I have configured Data Guard within two Oracle environments and have written a small Perl script which monitors the applied log service and sends an email if something fails to be applied.
I am assuming this is not the most efficient way of monitoring the systems and would like to use SNMP.
Can anyone tell me if it is possible to monitor Data Guard using SNMP (traps)? If so would you know what documents are available?
Cheers!
Some of the parameters that you need with a physical standby database are:
*.background_dump_dest='/ford/app/oracle/admin/xchbot1/bdump'
*.compatible='9.2.0.7'
*.control_files='/home30/oradata/xchange/xchbot1/control01.ctl','/home30/oradata/xchange/xchbot1/control02.ctl','/home30/oradata/xchange/xchbot1/control03.ctl'
*.core_dump_dest='/ford/app/oracle/admin/xchbot1/cdump'
*.db_block_buffers=1024
*.db_block_size=8192
*.db_file_multiblock_read_count=8# SMALL
*.db_files=1000# SMALL
*.db_name='xchbot1'
*.global_names=TRUE
*.log_archive_dest_1='LOCATION=/home30/oradata/xchange/xchbot1/archivelog'
*.log_archive_dest_2='SERVICE=standby'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_format='arch_%t_%s.arc'
*.log_archive_start=true
*.log_buffer=16384000# SMALL
*.log_checkpoint_interval=10000
*.max_dump_file_size='10240'# limit trace file size to 5 Meg each
*.parallel_max_servers=5
*.parallel_min_servers=1
*.processes=50# SMALL
*.rollback_segments='rbs01','rbs02','rbs03','rbs04','rbs05','rbs06','rbs07','rbs08','rbs09','rbs10'
*.shared_pool_size=67108864
*.sort_area_retained_size=2048
*.sort_area_size=10240
*.user_dump_dest='/ford/app/oracle/admin/xchbot1/udump'
-
Creating a Data Guard Database with RMAN in 10&11g
I found these notes for 9i; I'm looking for the same for 10g & 11g.
183570.1 Creating a Data Guard Database with RMAN (Recovery Manager) using Duplicate Command
These notes just show duplicating DBs without Data Guard; I am looking for creating a Data Guard DB with RMAN in 10g & 11g.
-
Use OEM to monitor oracle database size
Hello All,
We currently have several databases which are monitored via OEM; however, I would appreciate it if anyone can tell me how I can monitor the size of my Oracle databases via OEM.
Thanks
Hi,
I'm not sure if OEM will monitor database size, but many 3rd party tools do.
There are many ways to achieve Oracle table monitoring, and many create specialized extension metadata tables to monitor Oracle table growth. In Oracle 10g you can perform Oracle tables monitoring with the dba_hist_seg_stat tables, specifically the space_used_total column.
On my databases, I create STATSPACK extension tables to track table and overall database growth:
http://www.dba-oracle.com/te_table_monitoring.htm
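For a quick point-in-time check without extension tables, the dictionary views alone will report database size (a sketch; run as a DBA user, and note that allocated and used space differ):

```sql
-- Allocated space (datafiles) vs. used space (segments), in GB
SELECT (SELECT SUM(bytes)/1024/1024/1024 FROM dba_data_files) AS allocated_gb,
       (SELECT SUM(bytes)/1024/1024/1024 FROM dba_segments)   AS used_gb
  FROM dual;
```

Scheduling a query like this and storing the results over time gives the growth history that the extension-table approach automates.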
The full scripts are in my Oracle Press book, Oracle tuning with STATSPACK . . .
http://www.amazon.com/Oracle9i-High-Performance-Tuning-STATSPACK-Burleson/dp/007222360X
Hope this helps . . .
Donald K. Burleson
Oracle Press author
Author of "Oracle Tuning: The Definitive Reference"
http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm
-
Shell scripts to monitor data guard
Hi All,
Please help me to have the shell scripts for monitoring the data guard.
Thanks,
Mahi
Here is the shell script we use to monitor Data Guard; it sends mail if there is a gap of more than 20 archive logs.
#set Oracle environment for Sql*Plus
#ORACLE_BASE=/oracle/app/oracle ; export ORACLE_BASE
ORACLE_HOME=/oracle/app/oracle/product/10.2.0 ; export ORACLE_HOME
ORACLE_SID=usagedb ; export ORACLE_SID
PATH=$PATH:/oracle/app/oracle/product/10.2.0/bin
#set working directory. script is located here..
cd /oracle/scripts
#Problem statement is constructed in the MESSAGE variable
MESSAGE=""
#hostname of the primary DB.. used in messages..
HOST_NAME=`/usr/bin/hostname`
#who will receive problem messages.. DBAs' e-mail addresses separated with spaces
DBA_GROUP='[email protected] '
#SQL statements to extract Data Guard info from DB
LOCAL_ARC_SQL='select archived_seq# from V$ARCHIVE_DEST_STATUS where dest_id=1; \n exit \n'
STBY_ARC_SQL='select archived_seq# from V$ARCHIVE_DEST_STATUS where dest_id=2; \n exit \n'
STBY_APPLY_SQL='select applied_seq# from V$ARCHIVE_DEST_STATUS where dest_id=2; \n exit \n'
#Get Data guard information to Unix shell variables...
LOCAL_ARC=`echo $LOCAL_ARC_SQL | sqlplus -S / as sysdba | tail -2|head -1`
STBY_ARC=`echo $STBY_ARC_SQL | sqlplus -S / as sysdba | tail -2|head -1`
STBY_APPLY=`echo $STBY_APPLY_SQL | sqlplus -S / as sysdba | tail -2|head -1`
#Allow 20 archive logs for transport and Apply latencies...
let "STBY_ARC_MARK=${STBY_ARC}+20"
let "STBY_APPLY_MARK= ${STBY_APPLY}+20"
if [ $LOCAL_ARC -gt $STBY_ARC_MARK ] ; then
MESSAGE=${MESSAGE}"$HOST_NAME Standby -log TRANSPORT- error! \n local_Arc_No=$LOCAL_ARC but stby_Arc_No=$STBY_ARC \n"
fi
if [ $STBY_ARC -gt $STBY_APPLY_MARK ] ; then
MESSAGE=${MESSAGE}"$HOST_NAME Standby -log APPLY- error! \n stby_Arc_No=$STBY_ARC but stby_Apply_no=$STBY_APPLY \n"
fi
if [ -n "$MESSAGE" ] ; then
MESSAGE=${MESSAGE}"\nWarning: dataguard error!!! \n .\n "
echo $MESSAGE | mailx -s "$HOST_NAME DataGuard error" $DBA_GROUP
fi
-
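The two threshold checks in the script above reduce to one comparison, which can be factored into a small helper that is easy to test in isolation (a sketch; the allowance of 20 logs matches the script):

```shell
#!/bin/sh
# Returns success (0) when current_seq has moved more than 'allow' archive
# logs ahead of behind_seq, i.e. transport or apply has fallen behind.
gap_exceeded() {
  current_seq=$1
  behind_seq=$2
  allow=$3
  [ "$current_seq" -gt "$((behind_seq + allow))" ]
}

# Example wiring (placeholders for the values the script extracts via sqlplus):
# gap_exceeded "$LOCAL_ARC" "$STBY_ARC"   20 && echo "log TRANSPORT lag"
# gap_exceeded "$STBY_ARC"  "$STBY_APPLY" 20 && echo "log APPLY lag"
```

Keeping the comparison in one place means the transport and apply checks cannot drift apart when the threshold is tuned.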
EM Job scheduling with Data Guard Database
Hi there,
I have an issue with efficiently scheduling Enterprise Manager (12.1.0.3) jobs against databases which are protected with Oracle Data Guard.
What I want:
A Job should be run against the database, which has the "Primary Database Role" from Data Guard point of view.
When I schedule the job with the Database System as the job target, where the primary and standby databases both exist, the job is executed against all databases.
I can put a check step as the first step in the job to check the role by SQL, but if I let 2 of 3 job executions (2 standby, 1 primary) fail, I will have to manage these failed jobs all the time.
Creating a dynamic group also doesn't help, as the database role property can't be used for defining the group membership.
I have taken a look at emcli, but there too I don't find a way to check for the database role.
Anyone have an idea on this topic?
br
Jörg
-
Use dedicated server for data guard?
Hi All,
I've heard from someone that it is possible to separate Data Guard from the database and put it on its own server, such that the Data Guard server would be dedicated to shipping logs to the standby site database, etc. I don't know if such an architecture would work. Could anyone please clarify? Thanks in advance.
An Introduction to Data Guard from the Oracle docs is here:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/concepts.htm#i1039416
-
HOW to use xml to transfer data from database?
Hi,
Would you please give me some suggestions on how to use XML to connect with a table in an Oracle/MS Access database?
Thanks a lot
Zhao
Have you already had a look at our online documentation here on OTN, or at a book like Building Oracle XML Applications? There are lots of examples in those.
If you have a specific question about using the APIs that's not working for you, we can help out here.
-
Should I use OEM to load data?
Hi,
OWB client/repository: 9.0.2
I have defined the source (flat file), target (Oracle table), and a mapping operator (direct mapping), and the target table was also "deploy"ed. I have also "generated" the mapping. When I view "Generated Scripts" for the mapping, it shows a loader control file (CTL) and a TCL file, and the "Validation messages" show the mapping "status" as valid.
But, I do not see "deploy" button on the "Generated script" screen for the mapping. My question is what do I need to do to extract from source file and load the target table.
Thank you for any advice,
Ganesh
Ganesh,
In 9.0.2, you have to save the scripts to the file system on the server in order to be executed. Where to store the files, is configured in the configuration properties of the target module (there is an entry for SQL loader control files). Put a folder path that is relative to the server (i.e. a Unix path for a Unix server; Windows path for a Windows server, etc.).
Then, if you have Oracle Enterprise Manager (OEM) installed and an Oracle Management Server (OMS) running (and intelligent agents as well), you can register the tcl script in OEM and execute/schedule from there.
In the 9.0.4 release this entire architecture has changed, which introduced deployment of SQL loader mappings as well.
Mark.
-
Use of Flashback Database in Data guard environments
11.2.0.3/RHEL 5.8
I've come across several docs which talk about configuring FLASHBACK DATABASE in Data Guard environments. We have several physical standby DBs (single instance & RAC) running in our shop.
I would like to know two or three major (common) uses of FLASHBACK DATABASE in Data Guard environments.
I understood one use mentioned in the below URL ie. recovering from a logical mistake
http://uhesse.com/2010/08/06/using-flashback-in-a-data-guard-environment/
I would like to know the other major/common uses of the Flashback Database feature in a Data Guard environment.
A couple of other uses:
1) You can use flashback to test your DR. So you can activate your standby. Test application/network connectivity and functionality on your DR site and when done revert this database back to a physical standby. You do however have to ensure that this is allowed in your environment. In some places I have worked this would be a big no no as there were zero data loss requirements. However some companies will allow this as long as the standby is back in place within a certain time period.
2) In the case that you have to do a failover for whatever reason, but then what was the primary site becomes available again, you can flash back what was your primary to make it the standby rather than re-instantiating the database from scratch.
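The reinstatement described in (2) boils down to a short command sequence (a sketch; it assumes Flashback Database was already enabled on the old primary, and the target SCN is taken from the new primary):

```sql
-- On the new primary: capture the SCN at which it became primary
SELECT standby_became_primary_scn FROM v$database;

-- On the old primary, started in MOUNT state:
FLASHBACK DATABASE TO SCN 1234567;  -- placeholder: substitute the SCN returned above
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
```

Once redo apply catches up, the old primary is a normal physical standby again.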
Eg. You have a power outage at your primary site, so you perform a failover and your standby becomes the primary. Once what was your primary site is back online, you can convert your previous primary into a standby by doing a full backup/restore (or whatever method you choose) to recreate your standby again. However, you also have the option of using flashback on this database and then converting it into a standby, as this would potentially be quicker than re-instantiating the standby.
-
DB link problem between active Data Guard and report application database
My database version in 11.2.0.2.0 and OS is Oracle Solaris 10 9/10.
I am facing a problem in my Active data guard Database for reporting purpose. Active Data guard information is as below.
SQL> select name, database_role, open_mode from v$database;
NAME DATABASE_ROLE OPEN_MODE
ORCL PHYSICAL STANDBY READ ONLY WITH APPLY
Problem detail is below
I have created a db link (Name: DATADB_LINK) between active data guard and report application database for reporting purpose.
SQL> create database link DATADB_LINK connect to HR identified by hr using 'DRFUNPD';
Database link created.
But when I run a query using db link from my report application database I got this below error.
ORA-01555: snapshot too old: rollback segment number 10 with name "_SYSSMU10_4261549777$" too small
ORA-02063: preceding line from DATADB_LINK
Then I checked the Active Data Guard database alert log file and got the below error
ORA-01555 caused by SQL statement below (SQL ID: 11yj3pucjguc8, Query Duration=1 sec, SCN: 0x0000.07c708c3):SELECT "A2"."BUSINESS_TRANSACTION_REFERENCE","A2"."BUSINESS_TRANSACTION_CODE",MAX(CASE "A1"."TRANS_DATA_KEY" WHEN 'feature' THEN "A1"."TRANS_DATA_VALUE" END ),MAX(CASE "A1"."TRANS_DATA_KEY" WHEN 'otherFeature' THEN "A1"."TRANS_DATA_VALUE" END )
But the interesting point is: if I run the report query directly in the Active Data Guard database, I never get the error.
So is it a problem with the DB link between Active Data Guard and the other database?
Check this note which is applicable for your environment
*ORA-01555 on Active Data Guard Standby Database [ID 1273808.1]*
also
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:8908307196113
-
Cloning a database from data guard
Hi Gurus,
I have a situation where I am using 11g R1 11.1.0.7
Primary database in server prodsrv and active data guard read only db in drsrv host.
I have to clone a new dev database on the drsrv host. Since 11g R1 supports cloning from an active database, I thought we could use the Active Data Guard standby to clone a new database.
Is that possible? My trial didn't work out.
released channel: s8
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 08/17/2010 01:32:36
RMAN-05531: a mounted database cannot be duplicated while data files are fuzzy
RMAN>
Is it possible to run an RMAN active clone script to clone from an Active Data Guard standby? Any help much appreciated.
duplicate target database to 'devdb' from active database;
thank you.
Hi,
Before doing any experiment, read the concepts and then proceed further. Can you please go through the below link:
http://www.databasejournal.com/features/oracle/article.php/3834931/Using-Oracle-11gs-Active-Data-Guard-and-Snapshot-Standby-Features.htm
Best regards,
Rafi.
http://rafioracledba.blogspot.com -
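One workaround often suggested for RMAN-05531 (hedged; verify against the documentation for your exact version) is to stop redo apply on the standby so its datafiles are no longer fuzzy before running the active duplicate, then restart apply afterwards:

```sql
-- On the standby being used as the duplicate source:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

-- From RMAN (connected to the standby as TARGET, devdb as AUXILIARY):
-- DUPLICATE TARGET DATABASE TO devdb FROM ACTIVE DATABASE;

-- When the duplicate completes, resume redo apply:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
```

While apply is stopped the standby falls behind the primary, so schedule the duplicate for a window where that lag is acceptable.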
Data Guard monitoring Scripts.
I am looking for scripts to monitor Data Guard?
Can someone help me with this please?
Thanks in advance.
These scripts are Unix-specific:
## THIS ONE IS CALLED BY THE NEXT
#!/bin/ksh
# last_log_applied.ksh <oracle_sid> [connect string]
if [ $# -lt 1 ]
then
echo "$0: <oracle_sid> [connect string]"
exit -1
fi
oracle_sid=$1
connect_string=$2
ORACLE_HOME=`grep $oracle_sid /var/opt/oracle/oratab | awk -F":" {'print $2'}`
export ORACLE_HOME
LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
PATH=$PATH:$ORACLE_HOME/bin
export PATH
ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export ORA_NLS33
ORACLE_SID=$oracle_sid
export ORACLE_SID
ofile="/tmp/${oracle_sid}_last_log_seq.log"
#### STANDBY SERVER NEEDS TO CONNECT VIA SYSDBA
if [ ${connect_string:="NULL"} = "NULL" ]
then
$ORACLE_HOME/bin/sqlplus -s /nolog << EOF >tmpfile 2>&1
set pagesize 0;
set echo off;
set feedback off;
set head off;
spool $ofile;
connect / as sysdba;
select max(sequence#) from v\$log_history;
EOF
#### PASS CONNECT STRING IN FOR PRIMARY SERVER
else
$ORACLE_HOME/bin/sqlplus -s $connect_string << EOF >tmpfile 2>&1
set pagesize 0;
set echo off;
set feedback off;
set head off;
spool $ofile;
select max(sequence#) from v\$log_history;
EOF
fi
tmp=`grep -v [^0-9,' '] $ofile | tr -d ' '`
rm $ofile tmpfile
echo "$tmp"
# standby_check.ksh
#!/bin/ksh
export STANDBY_DIR="/opt/oracle/admin/standby"
if [ $# -ne 1 ]
then
echo "Usage: $0: <ORACLE_SID>"
exit -1
fi
oracle_sid=$1
# Max allowable logs to be out of sync on standby
machine_name=`uname -a | awk {'print $2'}`
. $STANDBY_DIR/CONFIG/params.$oracle_sid.$machine_name
user_pass=`cat /opt/oracle/admin/.opass`
echo "Running standby check on $oracle_sid..."
standby_log_num=`$STANDBY_DIR/last_log_applied.ksh $oracle_sid`
primary_log_num=`$STANDBY_DIR/last_log_applied.ksh $oracle_sid ${user_pass}@${oracle_sid}`
echo "standby_log_num = $standby_log_num"
echo "primary_log_num = $primary_log_num"
log_difference=`expr $primary_log_num - $standby_log_num`
if [ $log_difference -ge $ALARM_DIFF ]
then
/bin/mailx -s "$oracle_sid: Standby is $log_difference behind primary." -r $FROM_EMAIL $EMAIL_LIST < $STANDBY_DIR/standby_warning_mail
# Page the DBA's if we're falling way behind
if [ $log_difference -ge $PAGE_DIFF ]
then
/bin/mailx -s "$oracle_sid: Standby is $log_difference behind primary." -r $FROM_EMAIL $PAGE_LIST < $STANDBY_DIR/standby_warning_mail
fi
else
echo "Standby is keeping up ok ($log_difference logs behind)"
fi
-
Data Guard Failover after primary site network failure or disconnect.
Hello Experts:
I'll try to be clear and specific with my issue:
Environment:
Two nodes with NO shared storage (I don't have an Observer running).
Veritas Cluster Server (VCS) with the Data Guard Agent. (I don't use the Broker; the Data Guard Agent "takes care" of the switchover and failover.)
Two single instance databases, one per node. NO RAC.
What I'm being able to perform with no issues:
Manual switch(over) of the primary database by running VCS command "hagrp -switch oraDG_group -to standby_node"
Automatic fail(over) when primary node is rebooted with "reboot" or "init"
Automatic fail(over) when primary node is shut down with "shutdown".
What I'm NOT being able to perform:
If I manually unplug the network cables from the primary site (the entire network, not only the link between the primary and standby nodes, so it's like the server being unplugged from its power source), the failover does not happen.
Same situation happens if I manually disconnect the server from the power.
This is the alert logs I have:
This is the portion of the alert log at Standby site when Real Time Replication is working fine:
Recovery of Online Redo Log: Thread 1 Group 4 Seq 7 Reading mem 0
Mem# 0: /u02/oracle/fast_recovery_area/standby_db/onlinelog/o1_mf_4_9c3tk3dy_.log
At this moment, node1 (primary) is completely disconnected from the network. See at the end that the database (the standby, which should be converted to PRIMARY) is not getting all the archived logs from the primary due to the abnormal network disconnect:
Identified End-Of-Redo (failover) for thread 1 sequence 7 at SCN 0xffff.ffffffff
Incomplete Recovery applied until change 15922544 time 12/23/2013 17:12:48
Media Recovery Complete (primary_db)
Terminal Recovery: successful completion
Forcing ARSCN to IRSCN for TR 0:15922544
Mon Dec 23 17:13:22 2013
ARCH: Archival stopped, error occurred. Will continue retrying
ORACLE Instance primary_db - Archival Error
Attempt to set limbo arscn 0:15922544 irscn 0:15922544
ORA-16014: log 4 sequence# 7 not archived, no available destinations
ORA-00312: online log 4 thread 1: '/u02/oracle/fast_recovery_area/standby_db/onlinelog/o1_mf_4_9c3tk3dy_.log'
Resetting standby activation ID 2071848820 (0x7b7de774)
Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH
Mon Dec 23 17:13:33 2013
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH
Terminal Recovery: applying standby redo logs.
Terminal Recovery: thread 1 seq# 7 redo required
Terminal Recovery:
Recovery of Online Redo Log: Thread 1 Group 4 Seq 7 Reading mem 0
Mem# 0: /u02/oracle/fast_recovery_area/standby_db/onlinelog/o1_mf_4_9c3tk3dy_.log
Identified End-Of-Redo (failover) for thread 1 sequence 7 at SCN 0xffff.ffffffff
Incomplete Recovery applied until change 15922544 time 12/23/2013 17:12:48
Media Recovery Complete (primary_db)
Terminal Recovery: successful completion
Forcing ARSCN to IRSCN for TR 0:15922544
Mon Dec 23 17:13:22 2013
ARCH: Archival stopped, error occurred. Will continue retrying
ORACLE Instance primary_db - Archival Error
Attempt to set limbo arscn 0:15922544 irscn 0:15922544
ORA-16014: log 4 sequence# 7 not archived, no available destinations
ORA-00312: online log 4 thread 1: '/u02/oracle/fast_recovery_area/standby_db/onlinelog/o1_mf_4_9c3tk3dy_.log'
Resetting standby activation ID 2071848820 (0x7b7de774)
Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH
Mon Dec 23 17:13:33 2013
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH
Attempt to do a Terminal Recovery (primary_db)
Media Recovery Start: Managed Standby Recovery (primary_db)
started logmerger process
Mon Dec 23 17:13:33 2013
Managed Standby Recovery not using Real Time Apply
Media Recovery failed with error 16157
Recovery Slave PR00 previously exited with exception 283
ORA-283 signalled during: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH...
Mon Dec 23 17:13:34 2013
Shutting down instance (immediate)
Shutting down instance: further logons disabled
Stopping background process MMNL
Stopping background process MMON
License high water mark = 38
All dispatchers and shared servers shutdown
ALTER DATABASE CLOSE NORMAL
ORA-1109 signalled during: ALTER DATABASE CLOSE NORMAL...
ALTER DATABASE DISMOUNT
Shutting down archive processes
Archiving is disabled
Mon Dec 23 17:13:38 2013
Mon Dec 23 17:13:38 2013
Mon Dec 23 17:13:38 2013
ARCH shutting down
ARCH shutting down
ARCH shutting down
ARC0: Relinquishing active heartbeat ARCH role
ARC2: Archival stopped
ARC0: Archival stopped
ARC1: Archival stopped
Completed: ALTER DATABASE DISMOUNT
ARCH: Archival disabled due to shutdown: 1089
Shutting down archive processes
Archiving is disabled
Mon Dec 23 17:13:40 2013
Stopping background process VKTM
ARCH: Archival disabled due to shutdown: 1089
Shutting down archive processes
Archiving is disabled
Mon Dec 23 17:13:43 2013
Instance shutdown complete
Mon Dec 23 17:13:44 2013
Adjusting the default value of parameter parallel_max_servers
from 1280 to 470 due to the value of parameter processes (500)
Starting ORACLE instance (normal)
************************ Large Pages Information *******************
Per process system memlock (soft) limit = 64 KB
Total Shared Global Region in Large Pages = 0 KB (0%)
Large Pages used by this instance: 0 (0 KB)
Large Pages unused system wide = 0 (0 KB)
Large Pages configured system wide = 0 (0 KB)
Large Page size = 2048 KB
RECOMMENDATION:
Total System Global Area size is 3762 MB. For optimal performance,
prior to the next instance restart:
1. Increase the number of unused large pages by
at least 1881 (page size 2048 KB, total size 3762 MB) system wide to
get 100% of the System Global Area allocated with large pages
2. Large pages are automatically locked into physical memory.
Increase the per process memlock (soft) limit to at least 3770 MB to lock
100% System Global Area's large pages into physical memory
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Initial number of CPU is 32
Number of processor cores in the system is 16
Number of processor sockets in the system is 2
CELL communication is configured to use 0 interface(s):
CELL IP affinity details:
NUMA status: NUMA system w/ 2 process groups
cellaffinity.ora status: cannot find affinity map at '/etc/oracle/cell/network-config/cellaffinity.ora' (see trace file for details)
CELL communication will use 1 IP group(s):
Grp 0:
Picked latch-free SCN scheme 3
Autotune of undo retention is turned on.
IMODE=BR
ILAT =88
LICENSE_MAX_USERS = 0
SYS auditing is disabled
NUMA system with 2 nodes detected
Starting up:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options.
ORACLE_HOME = /u01/oracle/product/11.2.0.4
System name: Linux
Node name: node2.localdomain
Release: 2.6.32-131.0.15.el6.x86_64
Version: #1 SMP Tue May 10 15:42:40 EDT 2011
Machine: x86_64
Using parameter settings in server-side spfile /u01/oracle/product/11.2.0.4/dbs/spfileprimary_db.ora
System parameters with non-default values:
processes = 500
sga_target = 3760M
control_files = "/u02/oracle/orafiles/primary_db/control01.ctl"
control_files = "/u01/oracle/fast_recovery_area/primary_db/control02.ctl"
db_file_name_convert = "standby_db"
db_file_name_convert = "primary_db"
log_file_name_convert = "standby_db"
log_file_name_convert = "primary_db"
control_file_record_keep_time= 40
db_block_size = 8192
compatible = "11.2.0.4.0"
log_archive_dest_1 = "location=/u02/oracle/archivelogs/primary_db"
log_archive_dest_2 = "SERVICE=primary_db ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=primary_db"
log_archive_dest_state_2 = "ENABLE"
log_archive_min_succeed_dest= 1
fal_server = "primary_db"
log_archive_trace = 0
log_archive_config = "DG_CONFIG=(primary_db,standby_db)"
log_archive_format = "%t_%s_%r.dbf"
log_archive_max_processes= 3
db_recovery_file_dest = "/u02/oracle/fast_recovery_area"
db_recovery_file_dest_size= 30G
standby_file_management = "AUTO"
db_flashback_retention_target= 1440
undo_tablespace = "UNDOTBS1"
remote_login_passwordfile= "EXCLUSIVE"
db_domain = ""
dispatchers = "(PROTOCOL=TCP) (SERVICE=primary_dbXDB)"
job_queue_processes = 0
audit_file_dest = "/u01/oracle/admin/primary_db/adump"
audit_trail = "DB"
db_name = "primary_db"
db_unique_name = "standby_db"
open_cursors = 300
pga_aggregate_target = 1250M
dg_broker_start = FALSE
diagnostic_dest = "/u01/oracle"
Mon Dec 23 17:13:45 2013
PMON started with pid=2, OS id=29108
Mon Dec 23 17:13:45 2013
PSP0 started with pid=3, OS id=29110
Mon Dec 23 17:13:46 2013
VKTM started with pid=4, OS id=29125 at elevated priority
VKTM running at (1)millisec precision with DBRM quantum (100)ms
Mon Dec 23 17:13:46 2013
GEN0 started with pid=5, OS id=29129
Mon Dec 23 17:13:46 2013
DIAG started with pid=6, OS id=29131
Mon Dec 23 17:13:46 2013
DBRM started with pid=7, OS id=29133
Mon Dec 23 17:13:46 2013
DIA0 started with pid=8, OS id=29135
Mon Dec 23 17:13:46 2013
MMAN started with pid=9, OS id=29137
Mon Dec 23 17:13:46 2013
DBW0 started with pid=10, OS id=29139
Mon Dec 23 17:13:46 2013
DBW1 started with pid=11, OS id=29141
Mon Dec 23 17:13:46 2013
DBW2 started with pid=12, OS id=29143
Mon Dec 23 17:13:46 2013
DBW3 started with pid=13, OS id=29145
Mon Dec 23 17:13:46 2013
LGWR started with pid=14, OS id=29147
Mon Dec 23 17:13:46 2013
CKPT started with pid=15, OS id=29149
Mon Dec 23 17:13:46 2013
SMON started with pid=16, OS id=29151
Mon Dec 23 17:13:46 2013
RECO started with pid=17, OS id=29153
Mon Dec 23 17:13:46 2013
MMON started with pid=18, OS id=29155
Mon Dec 23 17:13:46 2013
MMNL started with pid=19, OS id=29157
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
starting up 1 shared server(s) ...
ORACLE_BASE from environment = /u01/oracle
Mon Dec 23 17:13:46 2013
ALTER DATABASE MOUNT
ARCH: STARTING ARCH PROCESSES
Mon Dec 23 17:13:50 2013
ARC0 started with pid=23, OS id=29210
ARC0: Archival started
ARCH: STARTING ARCH PROCESSES COMPLETE
ARC0: STARTING ARCH PROCESSES
Successful mount of redo thread 1, with mount id 2071851082
Mon Dec 23 17:13:51 2013
ARC1 started with pid=24, OS id=29212
Allocated 15937344 bytes in shared pool for flashback generation buffer
Mon Dec 23 17:13:51 2013
ARC2 started with pid=25, OS id=29214
Starting background process RVWR
ARC1: Archival started
ARC1: Becoming the 'no FAL' ARCH
ARC1: Becoming the 'no SRL' ARCH
Mon Dec 23 17:13:51 2013
RVWR started with pid=26, OS id=29216
Physical Standby Database mounted.
Lost write protection disabled
Completed: ALTER DATABASE MOUNT
Mon Dec 23 17:13:51 2013
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
USING CURRENT LOGFILE DISCONNECT FROM SESSION
Attempt to start background Managed Standby Recovery process (primary_db)
Mon Dec 23 17:13:51 2013
MRP0 started with pid=27, OS id=29219
MRP0: Background Managed Standby Recovery process started (primary_db)
ARC2: Archival started
ARC0: STARTING ARCH PROCESSES COMPLETE
ARC2: Becoming the heartbeat ARCH
ARC2: Becoming the active heartbeat ARCH
ARCH: Archival stopped, error occurred. Will continue retrying
ORACLE Instance primary_db - Archival Error
ORA-16014: log 4 sequence# 7 not archived, no available destinations
ORA-00312: online log 4 thread 1: '/u02/oracle/fast_recovery_area/standby_db/onlinelog/o1_mf_4_9c3tk3dy_.log'
At this moment, I've lost service and I have to wait until the primary server comes up again to receive the missing log.
This is the rest of the log:
Fatal NI connect error 12543, connecting to:
(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=node1)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=primary_db)(CID=(PROGRAM=oracle)(HOST=node2.localdomain)(USER=oracle))))
VERSION INFORMATION:
TNS for Linux: Version 11.2.0.4.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
Time: 23-DEC-2013 17:13:52
Tracing not turned on.
Tns error struct:
ns main err code: 12543
TNS-12543: TNS:destination host unreachable
ns secondary err code: 12560
nt main err code: 513
TNS-00513: Destination host unreachable
nt secondary err code: 113
nt OS err code: 0
Fatal NI connect error 12543, connecting to:
(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=node1)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=primary_db)(CID=(PROGRAM=oracle)(HOST=node2.localdomain)(USER=oracle))))
VERSION INFORMATION:
TNS for Linux: Version 11.2.0.4.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
Time: 23-DEC-2013 17:13:55
Tracing not turned on.
Tns error struct:
ns main err code: 12543
TNS-12543: TNS:destination host unreachable
ns secondary err code: 12560
nt main err code: 513
TNS-00513: Destination host unreachable
nt secondary err code: 113
nt OS err code: 0
started logmerger process
Mon Dec 23 17:13:56 2013
Managed Standby Recovery starting Real Time Apply
MRP0: Background Media Recovery terminated with error 16157
Errors in file /u01/oracle/diag/rdbms/standby_db/primary_db/trace/primary_db_pr00_29230.trc:
ORA-16157: media recovery not allowed following successful FINISH recovery
Managed Standby Recovery not using Real Time Apply
Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
USING CURRENT LOGFILE DISCONNECT FROM SESSION
Recovery Slave PR00 previously exited with exception 16157
MRP0: Background Media Recovery process shutdown (primary_db)
Fatal NI connect error 12543, connecting to:
(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=node1)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=primary_db)(CID=(PROGRAM=oracle)(HOST=node2.localdomain)(USER=oracle))))
VERSION INFORMATION:
TNS for Linux: Version 11.2.0.4.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
Time: 23-DEC-2013 17:13:58
Tracing not turned on.
Tns error struct:
ns main err code: 12543
TNS-12543: TNS:destination host unreachable
ns secondary err code: 12560
nt main err code: 513
TNS-00513: Destination host unreachable
nt secondary err code: 113
nt OS err code: 0
Mon Dec 23 17:14:01 2013
Fatal NI connect error 12543, connecting to:
(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=node1)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=primary_db)(CID=(PROGRAM=oracle)(HOST=node2.localdomain)(USER=oracle))))
VERSION INFORMATION:
TNS for Linux: Version 11.2.0.4.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
Time: 23-DEC-2013 17:14:01
Tracing not turned on.
Tns error struct:
ns main err code: 12543
TNS-12543: TNS:destination host unreachable
ns secondary err code: 12560
nt main err code: 513
TNS-00513: Destination host unreachable
nt secondary err code: 113
nt OS err code: 0
Error 12543 received logging on to the standby
FAL[client, ARC0]: Error 12543 connecting to primary_db for fetching gap sequence
Archiver process freed from errors. No longer stopped
Mon Dec 23 17:15:07 2013
Using STANDBY_ARCHIVE_DEST parameter default value as /u02/oracle/archivelogs/primary_db
Mon Dec 23 17:19:51 2013
ARCH: Archival stopped, error occurred. Will continue retrying
ORACLE Instance primary_db - Archival Error
ORA-16014: log 4 sequence# 7 not archived, no available destinations
ORA-00312: online log 4 thread 1: '/u02/oracle/fast_recovery_area/standby_db/onlinelog/o1_mf_4_9c3tk3dy_.log'
Mon Dec 23 17:26:18 2013
RFS[1]: Assigned to RFS process 31456
RFS[1]: No connections allowed during/after terminal recovery.
Mon Dec 23 17:26:47 2013
flashback database to scn 15921680
ORA-16157 signalled during: flashback database to scn 15921680...
Mon Dec 23 17:27:05 2013
alter database recover managed standby database using current logfile disconnect
Attempt to start background Managed Standby Recovery process (primary_db)
Mon Dec 23 17:27:05 2013
MRP0 started with pid=28, OS id=31481
MRP0: Background Managed Standby Recovery process started (primary_db)
started logmerger process
Mon Dec 23 17:27:10 2013
Managed Standby Recovery starting Real Time Apply
MRP0: Background Media Recovery terminated with error 16157
Errors in file /u01/oracle/diag/rdbms/standby_db/primary_db/trace/primary_db_pr00_31486.trc:
ORA-16157: media recovery not allowed following successful FINISH recovery
Managed Standby Recovery not using Real Time Apply
Completed: alter database recover managed standby database using current logfile disconnect
Recovery Slave PR00 previously exited with exception 16157
MRP0: Background Media Recovery process shutdown (primary_db)
Mon Dec 23 17:27:18 2013
RFS[2]: Assigned to RFS process 31492
RFS[2]: No connections allowed during/after terminal recovery.
Mon Dec 23 17:28:18 2013
RFS[3]: Assigned to RFS process 31614
RFS[3]: No connections allowed during/after terminal recovery.
Do you have any advice?
Thanks!
Alex.

Hello;
What's not clear to me in your question at this point:
"What I'm NOT being able to perform:
If I manually unplug the network cables from the primary site (the entire network, not only the link between the primary and standby nodes, so it is as if the server had been unplugged from its power source).
The same situation happens if I manually disconnect the server from the power.
These are the alert logs I have:"
Are you trying a failover to the Standby?
Please advise.
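To help answer that question, a quick check on the standby can show whether it has already gone through terminal (FINISH) recovery. This is only a sketch, assuming an 11.2 physical standby; column names are from the standard V$ views:

```sql
-- Run on the standby: confirm the current role and open mode
SELECT database_role, open_mode FROM v$database;

-- Recent Data Guard messages, including any terminal-recovery errors
SELECT timestamp, message
FROM   v$dataguard_status
WHERE  severity IN ('Error', 'Fatal')
ORDER  BY timestamp DESC;
```

If DATABASE_ROLE no longer reads PHYSICAL STANDBY, or the messages show FINISH recovery completed, that would explain the ORA-16157 in the alert log above.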
Is it possible your "valid_for clause" is set incorrectly?
I would also review this:
ORA-16014 and ORA-00312 Messages in Alert.log of Physical Standby
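To check the VALID_FOR settings and destination errors suggested above, something along these lines can be run on either database. A sketch only; filter the destinations to match your own configuration:

```sql
-- Show each active destination's VALID_FOR attributes and current error, if any
SELECT dest_id, dest_name, status, valid_type, valid_role, error
FROM   v$archive_dest
WHERE  status <> 'INACTIVE';

-- The raw parameter values, for comparison with the intended configuration
SELECT name, value
FROM   v$parameter
WHERE  name LIKE 'log_archive_dest%';
```

A destination whose VALID_ROLE does not match the database's current role would explain ORA-16014 ("no available destinations").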
Best Regards
mseberg
-
Data Guard standby restart in recover mode
Hi,
I inherited a Data Guard database and found that the secondary DB had been out of sync with the primary. I performed an SCN-based recovery on the secondary, and now they are syncing fine. I found I had to manually put the standby into recover mode after opening it read-only on the one I worked on, which is our qual environment, whereas it is not like that in prod. Can anyone advise what I need to do to keep this standby in recovery mode even after the DB goes down for maintenance and comes back up?
Thanks for your input,
fakord

Hi,
If you are using the Data Guard broker, it will automatically start the apply, since the broker controls apply. If not, a shell script that starts the standby database followed by this line can probably help:
alter database recover managed standby database using current logfile disconnect from session;
HTH,
Pradeep
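Following Pradeep's suggestion, the SQL*Plus commands such a startup script would run on the standby might look like this. A sketch only, assuming an 11.2 physical standby managed without the broker:

```sql
-- Mount the standby after a restart
STARTUP MOUNT;

-- Resume Real-Time Apply in the background
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  USING CURRENT LOGFILE DISCONNECT FROM SESSION;

-- Verify that MRP0 came up and is applying redo
SELECT process, status, thread#, sequence#
FROM   v$managed_standby
WHERE  process LIKE 'MRP%';
```

With the broker in place this is unnecessary, as it restarts apply on its own when the standby instance comes back up.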