Data Guard monitoring
Hi All,
I wanted to know whether the difference between the outputs of the queries below would give me the log gap between the primary and standby databases in a Data Guard configuration.
SELECT MAX(SEQUENCE#) LOG_ARCHIVED FROM V$ARCHIVED_LOG WHERE DEST_ID=1 AND ARCHIVED='YES';
SELECT MAX(SEQUENCE#) LOG_APPLIED FROM V$ARCHIVED_LOG WHERE DEST_ID=2 AND APPLIED='YES';
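The two queries above can also be combined into a single gap query (a sketch; it assumes DEST_ID=1 is the local archive destination and DEST_ID=2 the standby, so adjust the DEST_IDs to your log_archive_dest_n setup):

```sql
-- Sketch: gap = last log archived locally minus last log applied on the standby
SELECT arch.seq - appl.seq AS log_gap
FROM (SELECT MAX(sequence#) seq FROM v$archived_log
       WHERE dest_id = 1 AND archived = 'YES') arch,
     (SELECT MAX(sequence#) seq FROM v$archived_log
       WHERE dest_id = 2 AND applied  = 'YES') appl;
```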
I am trying to write a Linux script to monitor Data Guard. Please suggest.
Thanks
What I would do is create an SQL file and call it from a Linux script. Here's a start:
SPOOL /tmp/quickcheck.lst
PROMPT
PROMPT Checking Size and usage in GB of Flash Recovery Area
PROMPT
SELECT
ROUND((A.SPACE_LIMIT / 1024 / 1024 / 1024), 2) AS FLASH_IN_GB,
ROUND((A.SPACE_USED / 1024 / 1024 / 1024), 2) AS FLASH_USED_IN_GB,
ROUND((A.SPACE_RECLAIMABLE / 1024 / 1024 / 1024), 2) AS FLASH_RECLAIMABLE_GB,
SUM(B.PERCENT_SPACE_USED) AS PERCENT_OF_SPACE_USED
FROM
V$RECOVERY_FILE_DEST A,
V$FLASH_RECOVERY_AREA_USAGE B
GROUP BY
SPACE_LIMIT,
SPACE_USED ,
SPACE_RECLAIMABLE ;
PROMPT
PROMPT Checking free space In Flash Recovery Area
PROMPT
column FILE_TYPE format a20
select * from v$flash_recovery_area_usage;
PROMPT
PROMPT Checking last sequence in v$archived_log
PROMPT
clear screen
set linesize 100
column STANDBY format a20
column applied format a10
SELECT name as STANDBY, SEQUENCE#, applied, completion_time from v$archived_log WHERE DEST_ID = 2 AND NEXT_TIME > SYSDATE -1;
prompt
prompt Checking Last log on Primary
prompt
select max(sequence#) from v$archived_log where NEXT_TIME > sysdate -1;
SPOOL OFF
Run this on your primary.
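The SQL file above can be driven from a small shell wrapper, along these lines (the SID, ORACLE_HOME, and script path are assumptions; adjust them to your environment):

```shell
#!/bin/sh
# Sketch of a wrapper for the quickcheck SQL file; paths and SID are assumptions.
: "${ORACLE_SID:=PRIMDB}"
: "${ORACLE_HOME:=/u01/app/oracle/product/11.2.0/dbhome_1}"
export ORACLE_SID ORACLE_HOME
PATH=$PATH:$ORACLE_HOME/bin

SQLFILE=${1:-/home/oracle/scripts/quickcheck.sql}

run_quickcheck() {
    # -s suppresses the SQL*Plus banner; the SQL file spools to /tmp/quickcheck.lst
    sqlplus -s "/ as sysdba" @"$1"
}

# Only attempt the connection when an Oracle client is actually on the PATH
if command -v sqlplus >/dev/null 2>&1; then
    run_quickcheck "$SQLFILE"
else
    echo "sqlplus not found in PATH; skipping check"
fi
```

Scheduled from cron, the spooled /tmp/quickcheck.lst can then be mailed out or grepped for problems.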
If you find this helpful please mark it so.
Best Regards
mseberg
Similar Messages
-
Data guard monitoring shell script
uname -a
Linux DG1 2.6.18-164.el5 #1 SMP Thu Sep 3 03:28:30 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
SQL> select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
Hi Guys,
I am looking for a shell script that I can run from cron, which monitors a Data Guard environment (10g and 11g) and sends email alerts if DR goes out of sync by, say, 10 or 15 logs.
I found a couple on the net but they are not working for some reason:
http://emrebaransel.blogspot.com/2009/07/shell-script-to-check-dataguard-status.html
If you have some, please share.
You are using a recent version of Oracle and want to plug an obsolete script into it?
Why not just monitor Data Guard with EM or Grid Control and set up email alerts there? It is far more reliable than anything else.
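For those who still want a cron-able script, a minimal sketch of the alert logic looks like this (the threshold and the way the sequence numbers are obtained are assumptions; the sqlplus and mailx calls are shown only as comments so the logic stands alone):

```shell
#!/bin/sh
# Sketch: alert when the standby falls more than ALARM_DIFF logs behind.
ALARM_DIFF=10

# In a real script the sequence numbers would come from sqlplus, e.g.:
#   primary_seq=$(echo "select max(sequence#) from v\$archived_log;" | sqlplus -s "/ as sysdba")
check_gap() {
    primary_seq=$1
    standby_seq=$2
    gap=$((primary_seq - standby_seq))
    if [ "$gap" -ge "$ALARM_DIFF" ]; then
        echo "ALERT: standby is $gap logs behind"
        # mailx -s "standby $gap logs behind" dba@example.com < /dev/null
    else
        echo "OK: gap is $gap"
    fi
}

check_gap 120 105   # prints "ALERT: standby is 15 logs behind"
check_gap 120 118   # prints "OK: gap is 2"
```

-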
Data Guard monitoring Scripts.
I am looking for scripts to monitor Data Guard?
Can someone help me with this please?
Thanks in advance.
These scripts are Unix-specific:
#!/bin/ksh
## THIS ONE IS CALLED BY THE NEXT
# last_log_applied.ksh <oracle_sid> [connect string]
if [ $# -lt 1 ]
then
echo "$0: <oracle_sid> [connect string]"
exit -1
fi
oracle_sid=$1
connect_string=$2
ORACLE_HOME=`grep $oracle_sid /var/opt/oracle/oratab | awk -F":" {'print $2'}`
export ORACLE_HOME
LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
PATH=$PATH:$ORACLE_HOME/bin
export PATH
ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export ORA_NLS33
ORACLE_SID=$oracle_sid
export ORACLE_SID
ofile="/tmp/${oracle_sid}_last_log_seq.log"
#### STANDBY SERVER NEEDS TO CONNECT VIA SYSDBA
if [ ${connect_string:="NULL"} = "NULL" ]
then
$ORACLE_HOME/bin/sqlplus -s /nolog << EOF >tmpfile 2>&1
set pagesize 0;
set echo off;
set feedback off;
set head off;
spool $ofile;
connect / as sysdba;
select max(sequence#) from v\$log_history;
EOF
#### PASS CONNECT STRING IN FOR PRIMARY SERVER
else
$ORACLE_HOME/bin/sqlplus -s $connect_string << EOF >tmpfile 2>&1
set pagesize 0;
set echo off;
set feedback off;
set head off;
spool $ofile;
select max(sequence#) from v\$log_history;
EOF
fi
tmp=`grep -v '[^0-9 ]' $ofile | tr -d ' '`
rm $ofile tmpfile
echo "$tmp"
#!/bin/ksh
# standby_check.ksh
export STANDBY_DIR="/opt/oracle/admin/standby"
if [ $# -ne 1 ]
then
echo "Usage: $0: <ORACLE_SID>"
exit -1
fi
oracle_sid=$1
# Max allowable logs to be out of sync on standby
machine_name=`uname -a | awk {'print $2'}`
. $STANDBY_DIR/CONFIG/params.$oracle_sid.$machine_name
user_pass=`cat /opt/oracle/admin/.opass`
echo "Running standby check on $oracle_sid..."
standby_log_num=`$STANDBY_DIR/last_log_applied.ksh $oracle_sid`
primary_log_num=`$STANDBY_DIR/last_log_applied.ksh $oracle_sid ${user_pass}@${oracle_sid}`
echo "standby_log_num = $standby_log_num"
echo "primary_log_num = $primary_log_num"
log_difference=`expr $primary_log_num - $standby_log_num`
if [ $log_difference -ge $ALARM_DIFF ]
then
/bin/mailx -s "$oracle_sid: Standby is $log_difference behind primary." -r $FROM_EMAIL $EMAIL_LIST < $STANDBY_DIR/standby_warning_mail
# Page the DBA's if we're falling way behind
if [ $log_difference -ge $PAGE_DIFF ]
then
/bin/mailx -s "$oracle_sid: Standby is $log_difference behind primary." -r $FROM_EMAIL $PAGE_LIST < $STANDBY_DIR/standby_warning_mail
fi
else
echo "Standby is keeping up ok ($log_difference logs behind)"
fi
-
Data Guard Broker - platform requirements
Hi there,
I've been checking the docs and haven't been able to find a definitive answer - my question is, if you have your primary and standby databases on a 64-bit architecture (HP-UX64 for example), can you have the broker that manages that configuration on a 32-bit architecture (Windows, Linux etc)?
Any advice would be greatly appreciated. If anyone has a setup like what I've described, please let me know.
Many thanks,
IM
Hi again, I've managed to answer my own question (from AskTom):
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:4111318776437#60461502943814
"observer"? you mean the data guard monitor. Yes, the standby must be on the same platform as the
primary - but the monitoring software (enterprise manager stuff) may be elsewhere.
Cheers,
IM
-
Best Practice for monitoring database targets configured for Data Guard
We are in the process of migrating our DB targets to 12c Cloud Control.
In our current 10g environment the Primary Targets are monitored and administered by OEM GC A, and the Standby Targets are monitored by OEM GC B. Originally, I believe this was because of proximity and network speed, and over time it evolved to a Primary/Standby separation. One of the greatest challenges in this configuration is keeping OEM jobs in sync on both sides (in case of switchover/failover).
For our new OEM CC environment we are setting up CC A and CC B. However, I would like to determine if it would be smarter to monitor all DB targets (Primary and Standby) from the same CC console. In other words, monitor and administer DB Primary and Standby from the same OEM CC Console. I am trying to determine the best practice. I am not sure whether administering a switchover from Cloud Control from Primary to Standby requires that both targets are monitored in the same environment or not.
I am interested in feedback. I am also interested in finding good reference materials (I have been looking at Oracle documentation and other documents online). Thanks for your input and thoughts. I am deliberately trying to keep this as concise as possible.
OMS is a tool; it is not needed just to monitor your primary and standby, which is what I meant by the comment.
The reason you need the same OMS to monitor both the primary and the standby is that the Data Guard administration screen will show both targets. You also have the option of doing switchovers and failovers, as well as converting the primary or standby. One of the options is also to move all the jobs scheduled against the primary over to the standby during a switchover or failover.
There is no document that states you need to have all targets on one OMS, but that is the best method, given the reason for having OMS: it is a tool to keep all targets in a central repository. If you start having different OMS servers and repositories, you will need to log into separate OMS consoles to administer the targets.
-
Monitoring failover - Data Guard Broker
Hi,
I work on an Oracle 10.2.0.4 database on Solaris 10. It is a 2-node RAC database with a physical standby configured.
I want to monitor failover (send a mail to myself) when it occurs (triggered by the Data Guard broker). I think I can monitor the failover using the alert log (which normally logs the command when we initiate a failover), but I am not sure whether the Data Guard broker does the same (writes the appropriate entries when a failover is triggered).
Is there any other way to know when a failover occurs? We can query database_role from v$database, but I am looking for a trigger that fires immediately when a failover is initiated.
Also, is it possible to monitor the observer, whether it is up or not?
Hi,
you have several possibilities to do that. The easiest is to use the predefined Grid Control events for it, or you may put a trigger on the event "after DB_ROLE_CHANGE on database".
Monitoring the observer can be done with dgmgrl like
connect sys/<pw>@<cd>;
show configuration verbose;
That shows you the presence and location of the observer.
I give you an example for the usage of the trigger that starts a service depending on the role of the database. You may customize it to send you an email.
begin
dbms_service.create_service('safe','safe');
end;
/
create trigger rollenwechsel after DB_ROLE_CHANGE on database
declare
vrole varchar2(30);
begin
select database_role into vrole from v$database;
if vrole = 'PRIMARY' then
DBMS_SERVICE.START_SERVICE('safe');
else
DBMS_SERVICE.STOP_SERVICE('safe');
end if;
end;
/
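To get an email instead of (or as well as) a service change, the same DB_ROLE_CHANGE trigger could call UTL_MAIL (a sketch; it assumes UTL_MAIL is installed and smtp_out_server is set, and the addresses are placeholders):

```sql
create or replace trigger notify_role_change after DB_ROLE_CHANGE on database
declare
  vrole varchar2(30);
begin
  select database_role into vrole from v$database;
  -- sender/recipient addresses below are placeholders
  utl_mail.send(sender     => 'oracle@example.com',
                recipients => 'dba@example.com',
                subject    => 'Data Guard role change on ' || sys_context('userenv', 'db_name'),
                message    => 'Database role is now ' || vrole);
end;
/
```

-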
Using OEM to monitor Data Guard database
Can someone please send me the link or docs on how to use OEM to monitor Data Guard? Specifically, I would like to use OEM to monitor and make sure logs are being applied to the standby site.
Any ideas?
Hello,
I will extend the document on the Fast Failover feature one of these days.
What you need to do is:
For using Fast Failover, you need to configure the Data Guard observer process.
The configuration of this process can be done by selecting the Fast-Start Failover Disabled link on the Data Guard page of the database (Primary or Standby).
You will be automatically redirected to the Fast-Start Failover: Configure Page.
From this page you can do the configuration of the DG observer process, that then will enable you to activate the Fast Failover feature.
Regards
Rob
http://oemgc.wordpress.com
-
Monitoring Data Guard with SNMP?
I have configured Data Guard within two Oracle environments and have written a small Perl script which monitors the applied log service and sends an email if something fails to be applied.
I am assuming this is not the most efficient way of monitoring the systems and would like to use SNMP.
Can anyone tell me if it is possible to monitor Data Guard using SNMP (traps)? If so would you know what documents are available?
Cheers!
Some of the parameters that you need with a physical standby database are:
*.background_dump_dest='/ford/app/oracle/admin/xchbot1/bdump'
*.compatible='9.2.0.7'
*.control_files='/home30/oradata/xchange/xchbot1/control01.ctl','/home30/oradata/xchange/xchbot1/control02.ctl','/home30/oradata/xchange/xchbot1/control03.ctl'
*.core_dump_dest='/ford/app/oracle/admin/xchbot1/cdump'
*.db_block_buffers=1024
*.db_block_size=8192
*.db_file_multiblock_read_count=8# SMALL
*.db_files=1000# SMALL
*.db_name='xchbot1'
*.global_names=TRUE
*.log_archive_dest_1='LOCATION=/home30/oradata/xchange/xchbot1/archivelog'
*.log_archive_dest_2='SERVICE=standby'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_format='arch_%t_%s.arc'
*.log_archive_start=true
*.log_buffer=16384000# SMALL
*.log_checkpoint_interval=10000
*.max_dump_file_size='10240'# limit trace file size to 5 Meg each
*.parallel_max_servers=5
*.parallel_min_servers=1
*.processes=50# SMALL
*.rollback_segments='rbs01','rbs02','rbs03','rbs04','rbs05','rbs06','rbs07','rbs08','rbs09','rbs10'
*.shared_pool_size=67108864
*.sort_area_retained_size=2048
*.sort_area_size=10240
*.user_dump_dest='/ford/app/oracle/admin/xchbot1/udump'
-
Data Guard Gap Monitoring script
Hello,
Can anyone please provide me a Data Guard gap monitoring script for databases (primary, standby) on RAC?
Oracle RDBMS 11.2.0.2(4-node RAC) on RHEL 5.6.
Thanks
Edited by: 951368 on Dec 26, 2012 9:21 AM
Use the script of mseberg; modify the v$ views to gv$ views (e.g. v$instance to gv$instance) for RAC.
-
Shell script to monitor the data guard
Hi,
Can anybody please provide shell scripts to monitor Data Guard in all scenarios, and to get an email when a problem occurs in Data Guard?
Thanks,
Mahipal
Sorry Mahi. Looks like all of the scripts I've got are for logical standbys and not physical. Have a look at the link ualual posted - easy enough to knock up a script from one or more of those data dictionary views. I just had a look on Metalink and there's what looks to be a good script in note 241438.1. It's a good starting point, definitely.
regards,
Mark -
Shell scripts to monitor data guard
Hi All,
Please help me to have the shell scripts for monitoring the data guard.
Thanks,
Mahi
Here is the shell script we use to monitor Data Guard; it sends mail if there is a gap of more than 20 archive logs.
#set Oracle environment for Sql*Plus
#ORACLE_BASE=/oracle/app/oracle ; export ORACLE_BASE
ORACLE_HOME=/oracle/app/oracle/product/10.2.0 ; export ORACLE_HOME
ORACLE_SID=usagedb ; export ORACLE_SID
PATH=$PATH:/oracle/app/oracle/product/10.2.0/bin
#set working directory. script is located here..
cd /oracle/scripts
#Problem statement is constructed in the MESSAGE variable
MESSAGE=""
#hostname of the primary DB.. used in messages..
HOST_NAME=`/usr/bin/hostname`
#who will receive problem messages.. DBAs' e-mail addresses separated with spaces
DBA_GROUP='[email protected] '
#SQL statements to extract Data Guard info from DB
LOCAL_ARC_SQL='select archived_seq# from V$ARCHIVE_DEST_STATUS where dest_id=1; \n exit \n'
STBY_ARC_SQL='select archived_seq# from V$ARCHIVE_DEST_STATUS where dest_id=2; \n exit \n'
STBY_APPLY_SQL='select applied_seq# from V$ARCHIVE_DEST_STATUS where dest_id=2; \n exit \n'
#Get Data guard information to Unix shell variables...
LOCAL_ARC=`echo $LOCAL_ARC_SQL | sqlplus -S / as sysdba | tail -2|head -1`
STBY_ARC=`echo $STBY_ARC_SQL | sqlplus -S / as sysdba | tail -2|head -1`
STBY_APPLY=`echo $STBY_APPLY_SQL | sqlplus -S / as sysdba | tail -2|head -1`
#Allow 20 archive logs for transport and Apply latencies...
let "STBY_ARC_MARK=${STBY_ARC}+20"
let "STBY_APPLY_MARK= ${STBY_APPLY}+20"
if [ $LOCAL_ARC -gt $STBY_ARC_MARK ] ; then
MESSAGE=${MESSAGE}"$HOST_NAME Standby -log TRANSPORT- error! \n local_Arc_No=$LOCAL_ARC but stby_Arc_No=$STBY_ARC \n"
fi
if [ $STBY_ARC -gt $STBY_APPLY_MARK ] ; then
MESSAGE=${MESSAGE}"$HOST_NAME Standby -log APPLY- error! \n stby_Arc_No=$STBY_ARC but stby_Apply_no=$STBY_APPLY \n"
fi
if [ -n "$MESSAGE" ] ; then
MESSAGE=${MESSAGE}"\nWarning: dataguard error!!! \n .\n "
echo $MESSAGE | mailx -s "$HOST_NAME DataGuard error" $DBA_GROUP
fi
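The script above can then be scheduled from cron, for example every 15 minutes (the path and script name below are assumptions; the script itself lives in /oracle/scripts per the cd at the top):

```
*/15 * * * * /oracle/scripts/dg_check.sh >> /oracle/scripts/dg_check.log 2>&1
```

-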
Data Guard Agent, Authentication Failure
I'm working with two Windows 2003 servers, attempting to use one as a standby and one as a primary database using Data Guard. However, I'm having a bit of trouble when trying to get one server to communicate through the Management Agent and Management Service. I've done Management Agent installs on about 20 XP workstations and they've worked wonderfully with Oracle Grid Control.
When the agent on my would-be standby database instance starts up I'm receiving the following errors in emagent.trc:
2005-11-01 15:16:54 Thread-3836 WARN main: clear collection state due to OMS_version difference
2005-11-01 15:16:54 Thread-3836 WARN command: Job Subsystem Timeout set at 600 seconds
2005-11-01 15:16:54 Thread-3836 WARN upload: Upload manager has no Failure script: disabled
2005-11-01 15:16:54 Thread-3836 WARN upload: Recovering left over xml files in upload directory
2005-11-01 15:16:54 Thread-3836 WARN upload: Recovered 0 left over xml files in upload directory
2005-11-01 15:16:54 Thread-3836 WARN metadata: Metric RuntimeLog does not have any data columns
2005-11-01 15:16:54 Thread-3836 WARN metadata: Metric collectSnapshot does not have any data columns
2005-11-01 15:16:54 Thread-3836 ERROR engine: [oracle_bc4j] CategoryProp NAME [VersionCategory] is not one of the valid choices
2005-11-01 15:16:54 Thread-3836 ERROR engine: ParseError: File=D:\oracle\product\10.1.0\dg\sysman\admin\metadata\oracle_bc4j.xml, Line=486, Msg=attribute NAME in <CategoryProp> cannot be NULL
2005-11-01 15:16:54 Thread-3836 WARN metadata: Column name EFFICIENCY__BYTES_SAVED_WITH_COMPRESSION__AVG_PER_SEC_SINCE_START too long, truncating to EFFICIENCY__BYTES_SAVED_WITH_COMPRESSION__AVG_PER_SEC_SINCE_STAR
2005-11-01 15:16:54 Thread-3836 WARN metadata: Column name ESI__ERRORS__ESI_DEFAULT_FRAGMENT_SERVED__AVG_PER_SEC_SINCE_START too long, truncating to ESI__ERRORS__ESI_DEFAULT_FRAGMENT_SERVED__AVG_PER_SEC_SINCE_STAR
2005-11-01 15:16:54 Thread-3836 WARN metadata: Column name SERVERS__APP_SRVR_STATS__SERVER__REQUESTS__AVG_PER_SEC_SINCE_START too long, truncating to SERVERS__APP_SRVR_STATS__SERVER__REQUESTS__AVG_PER_SEC_SINCE_STA
2005-11-01 15:16:54 Thread-3836 WARN metadata: Column name SERVERS__APP_SRVR_STATS__SERVER__LATENCY__MAX_PER_SEC_SINCE_START too long, truncating to SERVERS__APP_SRVR_STATS__SERVER__LATENCY__MAX_PER_SEC_SINCE_STAR
2005-11-01 15:16:54 Thread-3836 WARN metadata: Column name SERVERS__APP_SRVR_STATS__SERVER__LATENCY__AVG_PER_SEC_SINCE_START too long, truncating to SERVERS__APP_SRVR_STATS__SERVER__LATENCY__AVG_PER_SEC_SINCE_STAR
2005-11-01 15:16:54 Thread-3836 WARN metadata: Column name SERVERS__APP_SRVR_STATS__SERVER__OPEN_CONNECTIONS__MAX_SINCE_START too long, truncating to SERVERS__APP_SRVR_STATS__SERVER__OPEN_CONNECTIONS__MAX_SINCE_STA
2005-11-01 15:16:54 Thread-3836 WARN metadata: Column name SERVER__APP_SRVR_STATS__SERVER__REQUESTS__MAX_PER_SEC_SINCE_START too long, truncating to SERVER__APP_SRVR_STATS__SERVER__REQUESTS__MAX_PER_SEC_SINCE_STAR
2005-11-01 15:16:54 Thread-3836 WARN metadata: Column name SERVER__APP_SRVR_STATS__SERVER__REQUESTS__AVG_PER_SEC_SINCE_START too long, truncating to SERVER__APP_SRVR_STATS__SERVER__REQUESTS__AVG_PER_SEC_SINCE_STAR
2005-11-01 15:16:54 Thread-3836 WARN metadata: Column name SERVER__APP_SRVR_STATS__SERVER__OPEN_CONNECTIONS__MAX_SINCE_START too long, truncating to SERVER__APP_SRVR_STATS__SERVER__OPEN_CONNECTIONS__MAX_SINCE_STAR
2005-11-01 15:16:54 Thread-3836 WARN metadata: Metric Wireless_PID does not have any data columns
2005-11-01 15:16:54 Thread-3836 WARN metadata: Metric numberOfAppDownloadsOverInterval_instance does not have any data columns
2005-11-01 15:17:00 Thread-4172 WARN vpxoci: OCI Error -- ErrorCode(1017): ORA-01017: invalid username/password; logon denied
SQL = " OCISessionBegin"...
LOGIN = dbsnmp/<PW>@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=MY_DATABASE)(PORT=1521))(CONNECT_DATA=(SID=CPD2DB)))
2005-11-01 15:17:00 Thread-4172 ERROR vpxoci: ORA-01017: invalid username/password; logon denied
2005-11-01 15:17:00 Thread-4172 WARN vpxoci: Login 0xe8c220 failed, error=ORA-01017: invalid username/password; logon denied
2005-11-01 15:17:00 Thread-4172 WARN TargetManager: Exception in computing dynamic properties of {MY_DATABASE, oracle_database },MonitorConfigStatus::ORA-01017: invalid username/password; logon denied
2005-11-01 15:17:01 Thread-4172 WARN vpxoci: OCI Error -- ErrorCode(1017): ORA-01017: invalid username/password; logon denied
I've already toggled the Local Security Policy (Log On As Batch Job) setting in Windows, unlocked the Monitoring Profile account, etc. I've also tried to set the Preferred Host Credentials for the database, but it doesn't seem to want to authenticate the Windows 2003 Administrator user.
Anyone have any other suggestions?
Check the following:
Does the user have administrative privilege on the system?
Is the user running this a member of the ORA_DBA group?
Does the user have the local security policy "Logon as Batch Job"?
Have you set the OS Preferred Credential? If you are a domain user, this will be looking for domain\user name instead of just the user name.
On another note:
Have you done any upgrades to the OMS repository?
If yes, is the new Repository compatible with the EM Console?
-
Data Guard configuration for RAC database disappeared from Grid control
Primary Database Environment - Three node cluster
RAC Database 10.2.0.1.0
Linux Red Hat 4.0 2.6.9-22 64bit
ASM 10.2.0.1.0
Management Agent 10.2.0.2.0
Standby Database Environment - one Node database
Oracle Enterprise Edition 10.2.0.1.0 Single standby
Linux Red Hat 4.0 2.6.9-22 64bit
ASM 10.2.0.1.0
Management Agent 10.2.0.2.0
Grid Control 10.2.0.1.0 - Node separate from standby and cluster environments
Oracle 10.1.0.1.0
Grid Control 10.2.0.1.0
Red Hat 4.0 2.6.9-22 32bit
After adding a logical standby database through Grid Control for a RAC database, I noticed some time later that the Data Guard configuration had disappeared from Grid Control. Not sure why, but it is gone. I did notice that something went wrong with the standby creation, but I did not get much feedback from Grid Control. The last thing I did was to view the configuration; see the output below.
Initializing
Connected to instance qdcls0427:ELCDV3
Starting alert log monitor...
Updating Data Guard link on database homepage...
Data Protection Settings:
Protection mode : Maximum Performance
Log Transport Mode settings:
ELCDV.qdx.com: ARCH
ELXDV: ARCH
Checking standby redo log files.....OK
Checking Data Guard status
ELCDV.qdx.com : ORA-16809: multiple warnings detected for the database
ELXDV : Creation status unknown
Checking Inconsistent Properties
Checking agent status
ELCDV.qdx.com
qdcls0387.qdx.com ... OK
qdcls0388.qdx.com ... OK
qdcls0427.qdx.com ... OK
ELXDV ... WARNING: No credentials available for target ELXDV
Attempting agent ping ... OK
Switching log file 672.Done
WARNING: Skipping check for applied log on ELXDV : disabled
Processing completed.
Here are the steps followed to add the standby database in Grid Control
Maintenance tab
Setup and Manage Data Guard
Logged in as sys
Add standby database
Create a new logical standby database
Perform a live backup of the primary database
Specify backup directory for staging area
Specify standby database name and Oracle home location
Specify file location staging area on standby node
At the end am presented with a review of the selected options and then the standby database is created
Has anybody come across a similar issue?
Thanks,
Any resolution on this?
I just created a Logical Standby database and I'm getting the same warning (WARNING: No credentials available for target ...) when I do a 'Verify Configuration' from the Data Guard page.
Everything else seems to be working fine. Logs are being applied, etc.
I can't figure out what credentials it's looking for.
-
Data Guard Broker connecting to standby database fails
Hello everybody
I checked lots of pages but I'm not able to find a solution for my problem. I have already set up a primary and a standby database (prim = ALPHA1 / standby = ALPHA2).
After enabling my dgmgrl configuration I got two errors:
DGM-17016: failed to retrieve status for database "alpha2"
ORA-16664: unable to receive the result from a database
The dg log from ALPHA1 says:
06/04/2013 16:06:57
Site alpha2 returned ORA-16664.
Data Guard Broker Status Summary:
Type Name Severity Status
Configuration alphadgb Warning ORA-16607
Primary Database alpha1 Success ORA-00000
Physical Standby Database alpha2 Error ORA-16664
While the dg log from ALPHA2 (standby) says:
06/04/2013 16:43:28
SPFILE is missing value for property 'LogArchiveFormat' with sid='ALPHA2'
Warning: Property 'LogArchiveFormat' has inconsistent values:METADATA='arch_ALPHA2_%S_%t_%r.arc', SPFILE='(missing)', DATABASE='arch_ALPHA2_%S_%t_%r.arc'
Failed to connect to remote database alpha1. Error is ORA-12514
Failed to send message to site alpha1. Error code is ORA-12514.
How can I solve this issue? Every tnsping is successful. The sqlplus login from the primary to the standby database works, and the other way round works too! Therefore the tnsnames and listener data seem to be correct.
My configuration for ALPHA1 (primary db):
Listener
LISTENER_ALPHA1 =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.12.3.13)(PORT = 1521))
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
SID_LIST_LISTENER_ALPHA1 =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = ALPHA1_DGMGRL)
(ORACLE_HOME = /oracle/ALPHA1/orahome)
(SID_NAME = ALPHA1)
tnsnames.ora
ALPHA1.WORLD =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.12.3.13)(PORT = 1521))
(CONNECT_DATA =
(SID = ALPHA1)
ALPHA2.WORLD =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.12.3.13)(PORT = 1522))
(CONNECT_DATA =
(SID = ALPHA2)
DG_ALPHA1.WORLD =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.12.3.13)(PORT = 1521))
(CONNECT_DATA =
(SERVICE_NAME = ALPHA1_DGMGRL)
DG_ALPHA2.WORLD =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.12.3.13)(PORT = 1522))
(CONNECT_DATA =
(SERVICE_NAME = ALPHA2_DGMGRL)
Parameters
archive_lag_target integer 0
log_archive_config string DG_CONFIG=(ALPHA2,ALPHA1)
log_archive_dest string
log_archive_dest_1 string LOCATION=USE_DB_RECOVERY_FILE_DEST valid_for=(all_logfiles,all_roles) DB_UNIQUE_NAME=ALPHA2
log_archive_dest_2 string SERVICE=ALPHA1 SYNC valid_for=(online_logfiles,primary_role) DB_UNIQUE_NAME=ALPHA1
log_archive_format string arch_ALPHA2_%S_%t_%r.arc
log_archive_local_first boolean TRUE
log_archive_max_processes integer 4
log_archive_min_succeed_dest integer 1
log_archive_start boolean FALSE
log_archive_trace integer 0
standby_archive_dest string ?/dbs/arch
For the DG Broker configuration
DGMGRL> connect sys/dgalpha42@DG_ALPHA1
DGMGRL> create configuration ALPHADGB
DGMGRL> primary database is ALPHA1
DGMGRL> connect identifier is DG_ALPHA1
DGMGRL> ;
DGMGRL> add database ALPHA2
DGMGRL> connect identifier is DG_ALPHA2
DGMGRL> maintained as physical
DGMGRL> ;
There were no errors.
DGMGRL> show database verbose ALPHA1
Database - alpha1
Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
ALPHA1
Properties:
DGConnectIdentifier = 'dg_alpha1'
ObserverConnectIdentifier = ''
LogXptMode = 'ASYNC'
DelayMins = '0'
Binding = 'optional'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '30'
RedoCompression = 'DISABLE'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '4'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = 'ALPHA2, ALPHA1'
LogFileNameConvert = 'ALPHA2, ALPHA1'
FastStartFailoverTarget = ''
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
SidName = 'ALPHA1'
StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=oraprakt)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=ALPHA1_DGMGRL)(INSTANCE_NAME=ALPHA1)(SERVER=DEDICATED)))'
StandbyArchiveLocation = 'USE_DB_RECOVERY_FILE_DEST'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = 'arch_ALPHA1_%S_%t_%r.arc'
TopWaitEvents = '(monitor)'
Database Status:
SUCCESS
DGMGRL> show database verbose ALPHA2
Database - alpha2
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: (unknown)
Apply Lag: (unknown)
Real Time Query: OFF
Instance(s):
ALPHA2
Properties:
DGConnectIdentifier = 'dg_alpha2'
ObserverConnectIdentifier = ''
LogXptMode = 'SYNC'
DelayMins = '0'
Binding = 'OPTIONAL'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '30'
RedoCompression = 'DISABLE'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '4'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = 'ALPHA1, ALPHA2'
LogFileNameConvert = 'ALPHA1, ALPHA2'
FastStartFailoverTarget = ''
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
SidName = 'ALPHA2'
StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=oraprakt)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=ALPHA2_DGMGRL)(INSTANCE_NAME=ALPHA2)(SERVER=DEDICATED)))'
StandbyArchiveLocation = 'USE_DB_RECOVERY_FILE_DEST'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = 'arch_ALPHA2_%S_%t_%r.arc'
TopWaitEvents = '(monitor)'
Database Status:
DGM-17016: failed to retrieve status for database "alpha2"
ORA-16664: unable to receive the result from a database
Can anybody help me find a solution for this?
Hey,
thanks for the answer. I followed your recommendations but I got the same error again. I restored/recovered the old status and looked deeper into the dgmgrl configuration before enabling it. I found an interesting point (show database verbose ALPHAx):
Database - alpha1
Role: PRIMARY
Intended State: OFFLINE
Instance(s):
ALPHA1
Properties:
DGConnectIdentifier = 'dg_alpha1'
ObserverConnectIdentifier = ''
LogXptMode = 'ASYNC'
DelayMins = '0'
Binding = 'optional'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '30'
RedoCompression = 'DISABLE'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '4'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = 'ALPHA2, ALPHA1'
LogFileNameConvert = 'ALPHA2, ALPHA1'
FastStartFailoverTarget = ''
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
SidName = 'ALPHA1'
StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=oraprakt)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=ALPHA1_DGMGRL)(INSTANCE_NAME=ALPHA1)(SERVER=DEDICATED)))'
StandbyArchiveLocation = 'USE_DB_RECOVERY_FILE_DEST'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = 'arch_ALPHA1_%S_%t_%r.arc'
TopWaitEvents = '(monitor)'
Database Status:
DISABLED
Database - alpha2
Role: PHYSICAL STANDBY
Intended State: OFFLINE
Transport Lag: (unknown)
Apply Lag: (unknown)
Real Time Query: OFF
Instance(s):
ALPHA2
Properties:
DGConnectIdentifier = 'dg_alpha2'
ObserverConnectIdentifier = ''
LogXptMode = 'ASYNC'
DelayMins = '0'
Binding = 'OPTIONAL'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '30'
RedoCompression = 'DISABLE'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '4'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = 'ALPHA1, ALPHA2'
LogFileNameConvert = 'ALPHA1, ALPHA2'
FastStartFailoverTarget = ''
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
SidName = 'ALPHA2'
StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=oraprakt)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=ALPHA2_DGMGRL)(INSTANCE_NAME=ALPHA2)(SERVER=DEDICATED)))'
StandbyArchiveLocation = 'USE_DB_RECOVERY_FILE_DEST'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = 'arch_ALPHA2_%S_%t_%r.arc'
TopWaitEvents = '(monitor)'
Database Status:
DISABLED
As the listeners are configured, ALPHA1 (prim) should be on port 1521 while ALPHA2 (stby) should work on 1522. In the DGMGRL configuration only port 1521 appears (look at StaticConnectIdentifier). Is this maybe the reason for the networking problems with the DG Broker? How can I fix this?
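If the static identifier does point at the wrong port, it can be corrected from DGMGRL with EDIT DATABASE ... SET PROPERTY (a sketch; the connect data mirrors the port-1522 tnsnames entry above, so adjust host and service name to your environment):

```
DGMGRL> edit database alpha2 set property StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.12.3.13)(PORT=1522))(CONNECT_DATA=(SERVICE_NAME=ALPHA2_DGMGRL)(INSTANCE_NAME=ALPHA2)(SERVER=DEDICATED)))';
```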
Regards Mirko
Edited by: 1009733 on 04.06.2013 09:22 -
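If the standby instance really does listen on 1522, one possible fix is to point the broker at the right port by editing the instance-level StaticConnectIdentifier. A sketch only; the host, service name, and instance name are copied from the output above and may differ in your setup, and the static service (ALPHA2_DGMGRL) must also be registered on that port in the standby's listener.ora:

```
DGMGRL> EDIT INSTANCE 'ALPHA2' ON DATABASE alpha2 SET PROPERTY
  StaticConnectIdentifier =
  '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=oraprakt)(PORT=1522))(CONNECT_DATA=(SERVICE_NAME=ALPHA2_DGMGRL)(INSTANCE_NAME=ALPHA2)(SERVER=DEDICATED)))';
```

After the edit, `SHOW DATABASE VERBOSE alpha2` should reflect the new port.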
Setting up the standby side after a crash (Data Guard)
Hi,
I hope this is the right area to post my problem...
I can't find anything like code tags for the system output, so I'm sorry for the poor formatting.
I have a problem. I use Oracle 11.2.0.3.0 in a Data Guard environment. My primary database crashed and I activated the standby to become the new primary.
After the old primary was repaired I wanted to define it as the new standby. This didn't work because we had disabled flashback logging.
We created the new standby:
rman target sys/password@prim auxiliary sys/password@stby
duplicate target database for standby from active database;
After this we ran the following on the new standby:
alter database recover managed standby database disconnect from session;
It looks like there is now a working physical standby.
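A quick way to confirm that redo is actually being applied on the new standby is to check the managed recovery and receiver processes. A sketch; run it on the standby, where MRP0 should show APPLYING_LOG when recovery is healthy:

```
-- On the standby: MRP0 applying, RFS processes receiving redo
SELECT process, status, sequence#
FROM   v$managed_standby
WHERE  process LIKE 'MRP%' OR process LIKE 'RFS%';
```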
Now I look at the Data Guard configuration on the primary:
DGMGRL> show database verbose stby;
Database - stby
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: (unknown)
Apply Lag: (unknown)
Real Time Query: OFF
Instance(s):
dbuc4
Properties:
DGConnectIdentifier = 'stby'
ObserverConnectIdentifier = ''
LogXptMode = 'ASYNC'
DelayMins = '0'
Binding = 'optional'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '30'
RedoCompression = 'DISABLE'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '900'
LogArchiveMaxProcesses = '4'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = ''
LogFileNameConvert = ''
FastStartFailoverTarget = ''
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
SidName = 'dbuc4'
StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=stby)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=stby_DGMGRL.blubb.de)(INSTANCE_NAME=stby)(SERVER=DEDICATED)))'
StandbyArchiveLocation = 'USE_DB_RECOVERY_FILE_DEST'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = '%t_%s_%r.arc'
TopWaitEvents = '(monitor)'
Database Status:
ORA-16795: the standby database needs to be re-created
DGMGRL> show database verbose prim;
Database - prim
Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
dbuc4
dbuc4stby
Properties:
DGConnectIdentifier = 'prim'
ObserverConnectIdentifier = ''
LogXptMode = 'ASYNC'
DelayMins = '0'
Binding = 'OPTIONAL'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '30'
RedoCompression = 'DISABLE'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '4'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = ''
LogFileNameConvert = ''
FastStartFailoverTarget = ''
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
SidName(*)
StaticConnectIdentifier(*)
StandbyArchiveLocation(*)
AlternateLocation(*)
LogArchiveTrace(*)
LogArchiveFormat(*)
TopWaitEvents(*)
(*) - Please check specific instance for the property value
Database Status:
SUCCESS
DGMGRL> show database verbose dbuc4stby;
Database - dbuc4stby
Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
dbuc4
dbuc4stby
Properties:
DGConnectIdentifier = 'dbuc4stby'
ObserverConnectIdentifier = ''
LogXptMode = 'ASYNC'
DelayMins = '0'
Binding = 'OPTIONAL'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '30'
RedoCompression = 'DISABLE'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '4'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = ''
LogFileNameConvert = ''
FastStartFailoverTarget = ''
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
SidName(*)
StaticConnectIdentifier(*)
StandbyArchiveLocation(*)
AlternateLocation(*)
LogArchiveTrace(*)
LogArchiveFormat(*)
TopWaitEvents(*)
(*) - Please check specific instance for the property value
Database Status:
SUCCESS
DGMGRL> show configuration
Configuration - dg
Protection Mode: MaxPerformance
Databases:
prim - Primary database
stby - Physical standby database (disabled)
ORA-16795: the standby database needs to be re-created
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS
On the stby side it looks like:
DGMGRL> show configuration
ORA-16795: Die Standby-Datenbank muss neu erstellt werden (The standby database needs to be recreated)
Do I have to create a new Data Guard configuration?
I don't know how to get this to work.
Thx, fuechsin
Hi,
first of all: big thanks for your answer!
I think this was a bad idea, too. But there was not enough space, so we decided to turn it off without thinking of the consequences. :/
When the new hardware arrives I will enable flashback and never turn it off again.
The Data Guard log on the standby says:
01/23/2014 11:56:13
>> Starting Data Guard Broker bootstrap <<
Broker Configuration File Locations:
dg_broker_config_file1 = "/Daten/stby/stby_dgbroker1.dat"
dg_broker_config_file2 = "/Daten/stby/stby_dgbroker2.dat"
01/23/2014 11:56:17
Database needs to be reinstated or re-created, Data Guard broker ready
I want to try deleting the configuration on both the prim and stby sides and reconfiguring it. But I don't know whether there are side effects on the working prim side, since it is a production system.
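Removing and re-adding only the broken standby from the broker configuration is usually less invasive than dropping the whole configuration, since the primary's settings stay in place. A sketch, assuming the connect identifier 'stby' shown in the output above; verify on a test system before touching production:

```
DGMGRL> REMOVE DATABASE stby PRESERVE DESTINATIONS;
DGMGRL> ADD DATABASE stby AS CONNECT IDENTIFIER IS stby MAINTAINED AS PHYSICAL;
DGMGRL> ENABLE DATABASE stby;
```

PRESERVE DESTINATIONS keeps the redo transport destinations on the primary intact while the standby entry is recreated.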
Best regards,
fuechsin