Data Guard - dual configuration
Hi,
we are running a Data Guard configuration (Windows Server 2008 R2) with a physical standby database in place, using FSFO with DG Broker - the observer process runs on a separate machine (10.2.0.5). My question is whether it's safe and possible to create another, separate Data Guard configuration (11.2.0.3) on the same machines (2x database server - databases, 1x client PC - observer)?
Thank you,
t.
Hello;
I've done it, albeit on Linux, but that should not matter. I had two Oracle homes, one for each version, and a script to set the environment as needed. I ran into this during database upgrades too. Some of the databases were ready and some were not, so I was running 11.2.0.2 and 11.2.0.3 on the same server at the same time, both with Data Guard. No issues. The main thing was to use one listener and include the entries for the lower version in its file. I always start and stop the listener from the higher version.
A few years ago I still had some Oracle 10 databases and they worked just fine on a server with Oracle 11. Separate Oracle homes are the key. So on Linux I had no issue running Data Guard on the same server with two versions of Oracle.
Best Regards
mseberg
This is older, but worth a look: (Using Multiple Oracle Homes)
http://docs.oracle.com/cd/B10500_01/em.920/a96697/moh.htm
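For reference, running both versions behind one listener from the higher-version home can be sketched in listener.ora roughly like this - all paths, SIDs, and names below are illustrative, not taken from the post:

```
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = db11g)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0.3/dbhome_1)
      (SID_NAME = db11g)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = db10g)
      (ORACLE_HOME = /u01/app/oracle/product/10.2.0.5/dbhome_1)
      (SID_NAME = db10g)
    )
  )
```

With dynamic registration, the lower-version instances can instead point their local_listener parameter at this listener's address.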
Edited by: mseberg on May 31, 2013 7:49 AM
Similar Messages
-
Error: ORA-16532: Data Guard broker configuration does not exist
Hi there folks. Hope everyone is having a nice weekend.
Anyway, we have a 10.2.0.4 RAC primary and a 10.2.0.4 physical standby. We recently did a switchover and the DG broker files automatically got created in the ORACLE_HOME/dbs location of the primary. Now we need to move these files to the common ASM disk group. For this, I followed the steps from this doc:
How To Move Dataguard Broker Configuration File On ASM Filesystem (Doc ID 839794.1)
The only exception in my case is that I have to do this on the primary and not a standby, so I am disabling and enabling the primary (and not the standby as mentioned in the steps below).
To rename the broker configuration files on STANDBY to +FRA/MYSTD/broker1.dat and +FRA/MYSTD/broker2.dat, follow the steps below:
1. Disable the standby database from within the broker configuration
DGMGRL> disable database MYSTD;
2. Stop the broker on the standby
SQL> alter system set dg_broker_start = FALSE;
3. Set the dg_broker_config_file1 & 2 parameters on the standby to the appropriate location required.
SQL> alter system set dg_broker_config_file1 = '+FRA/MYSTD/broker1.dat';
SQL> alter system set dg_broker_config_file2 = '+FRA/MYSTD/broker2.dat';
4. Restart the broker on the standby
SQL> alter system set dg_broker_start = TRUE;
5. From the primary, enable the standby
DGMGRL> enable database MYSTD;
6. Broker configuration files will be created in the new ASM location.
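After step 6, it is worth confirming that the new locations actually took effect on every instance - a quick check along these lines (the view and parameter names are standard; the +FRA path is from the steps above):

```sql
-- Confirm both broker config file parameters now point into ASM,
-- on all RAC instances:
SELECT inst_id, name, value
  FROM gv$spparameter
 WHERE name LIKE 'dg_broker_config_file%';
-- From asmcmd, "ls +FRA/MYSTD" should then show broker1.dat and broker2.dat.
```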
I did so but when I try to enable the Primary back I get this:
Error: ORA-16532: Data Guard broker configuration does not exist
Configuration details cannot be determined by DGMGRL
From this link (Errors setting up DataGuard Broker) it would seem that I would need to recreate the configuration... Is that correct? If yes, then how come Metalink is missing this info about recreating the configuration... or is it that that scenario wouldn't be applicable in my case?
Thanks for your help.
Yes, I can confirm from the gv$spparameter view that the changes are effective for all 3 instances. From the alert log, the alter system didn't throw up any errors. I didn't restart the instances though, since I don't have the approvals yet. But I don't think that's required.
-
Remove Data Guard Broker configuration file
Hi,
We have a Data Guard configuration running on Oracle 10g RAC (2 nodes) with ASM. After a failover, we rebuild the old primary as a physical standby. But every time we enable the Data Guard broker, it creates problems. So the best solution for us seems to be to completely remove the Data Guard broker configuration and recreate it. Here are our steps:
On both primary and standby
1) alter system set DG_BROKER_START=FALSE
2) shutdown database
3) use asmcmd command to remove the broker directory under +DATA/dbname/DG_BROKER
4) start the database
5) alter system set DG_BROKER_START=TRUE
6) go through the many steps to recreate data guard broker configuration
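As a sketch, steps 1-5 above come down to the following (the +DATA path is as given in the post; the shutdown/startup syntax is standard SQL*Plus):

```sql
-- On both primary and standby:
ALTER SYSTEM SET dg_broker_start = FALSE;
SHUTDOWN IMMEDIATE
-- From an OS shell, remove the old broker files:
--   asmcmd rm -rf +DATA/dbname/DG_BROKER
STARTUP
ALTER SYSTEM SET dg_broker_start = TRUE;
```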
I am just curious whether this is the right way and whether there is any simple way?
Thanks in advance.
Hi, I saw the same issue when doing switchover testing in my lab environment. The prerequisite is that the primary and standby roles have been switched and logs can be applied without the Data Guard broker.
Here are the steps I used to resolve the issue:
1)on both primary and standby database
SQL> alter system set dg_broker_start=false;
on primary DB:
SQL>alter system set dg_broker_config_file1='?/dbs/dr1afterswichoverpry.dat';
SQL>alter system set dg_broker_config_file2='?/dbs/dr2afterswichoverpry.dat';
on standby DB:
SQL>alter system set dg_broker_config_file1='?/dbs/dr1afterswichoverstby.dat';
SQL>alter system set dg_broker_config_file2='?/dbs/dr2afterswichoverstby.dat';
2) enable dg_broker_start on both primary and standby DB
SQL> alter system set dg_broker_start=true;
3) on the primary database, create the configuration
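Step 3 above is missing its commands; a minimal DGMGRL sketch (the configuration and database names are placeholders, not from the post):

```sql
DGMGRL> CONNECT sys@primary
DGMGRL> CREATE CONFIGURATION 'dg_config' AS
          PRIMARY DATABASE IS 'primary'
          CONNECT IDENTIFIER IS 'primary';
DGMGRL> ADD DATABASE 'standby' AS
          CONNECT IDENTIFIER IS 'standby';
DGMGRL> ENABLE CONFIGURATION;
DGMGRL> SHOW CONFIGURATION;
```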
Hope this can help you!
email: [email protected] -
How to remove data guard broker configuration when ORA-16625?
I set up the Data Guard broker for a standby database. However, we recreated the database and re-set up the standby database, but found that the earlier broker configuration still exists. I cannot remove or disable the configuration or any database within the configuration. When I try to do so I get the error below, although all network settings are correct:
Error: ORA-16625: cannot reach the database
How to remove the configuration at this stage?
Thanks for help.
Hi, I saw the same issue when doing switchover testing in my lab environment. The prerequisite is that the primary and standby roles have been switched and logs can be applied without the Data Guard broker.
Here are the steps I used to resolve the issue:
1)on both primary and standby database
SQL> alter system set dg_broker_start=false;
on primary DB:
SQL>alter system set dg_broker_config_file1='?/dbs/dr1afterswichoverpry.dat';
SQL>alter system set dg_broker_config_file2='?/dbs/dr2afterswichoverpry.dat';
on standby DB:
SQL>alter system set dg_broker_config_file1='?/dbs/dr1afterswichoverstby.dat';
SQL>alter system set dg_broker_config_file2='?/dbs/dr2afterswichoverstby.dat';
2) enable dg_broker_start on both primary and standby DB
SQL> alter system set dg_broker_start=true;
3) on the primary database, create the configuration
Hope this can help you!
email: [email protected] -
Data Guard Broker configuration in oracle10g r2
Hi,
I am facing difficulties while configuring the Data Guard broker. Our setup is a RAC primary and a single standby database. Show configuration raises the following error...
ORA-16607: one or more databases have failed
The dcSTANDBY.log file shows the messages below...
NSV0: Failed to connect to remote database ajmprod. Error is ORA-12545
NSV0: Failed to send message to site ajmprod. Error code is ORA-12545.
I have checked that listener.ora and tnsnames.ora are fine. tnsping and a sqlplus connection work. I am confused about where the problem is.
Appreciate your suggestions.
Here is how I would look into this issue.
1. Check that the password file is set up on both sides, that the password is common between them, and that it matches the password of the SYS user in the database.
2. Check that the listener(s) are up and running and the status checks out.
3. Check that your hosts file or DNS resolves the hosts; you can verify this with a ping from each host to the other.
4. Check that tnsnames.ora has the proper connect setup for both instances in your Data Guard configuration.
5. Check tnsping to both instances from both hosts.
6. Check all TNS connectivity to both instances as SYS: sqlplus sys@db as sysdba. Do this connection from both hosts if using more than one host.
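Points 2-6 can be run through quickly from an OS shell on each host - a sketch (the alias ajmprod is from the post; the rest is standard Oracle tooling):

```
lsnrctl status                   # listener up, services registered
tnsping ajmprod                  # TNS resolution and reachability
sqlplus sys@ajmprod as sysdba    # SYSDBA over TNS; also exercises the password file
```

If tnsping works but the SYSDBA connection fails, the password file (point 1) is the usual suspect.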
If you have verified all this and it is still not working, let me know and we can move on to the next steps. -
Data Guard: Verify Configuration
If I verify the Data Guard configuration within EM (Additional Administration -> Verify Configuration), the check of the agents of the primary and the standby database results in a warning that something could not be written to a process and that, as a result, a switchover could fail. What does this warning mean?
I have found something in an Oracle book saying that during the agent check an SQL*Plus job is generated if credentials are available, or a ping is generated instead if the credentials are not available.
Is this verification logged in any file that could be analysed to see in detail why the check fails, or does anyone have an idea what is wrong in the configuration?
My problem is that I'm working on a German system, so I can provide the original warning only in German. But I have tried to translate it.
The detailed result window of the verification contains the following entry:
====================================
German warning:
Data Guard Status wird überprüft
PRIMDB: NORMAL
STBYDB: NORMAL
Inkonsistente Attribute werden überprüft
Agent-Status wird überprüft
PRIMDB ... Warning: Eingabe konnte nicht in Prozess geschrieben werden
Warning: Switchover oder Failover kann deshalb nicht erfolgreich durchgeführt werden
STBYDB ... Warning: Eingabe konnte nicht in Prozess geschrieben werden
Warning: Switchover oder Failover kann deshalb nicht erfolgreich durchgeführt werden
log-datei 76 wird gechselt ... Fertig
Angewendetes Log auf STBYDB wird überprüft ... OK
Verarbeitung abgeschlossen
=======================================
Try to translate it ;-)
Data Guard Status will be checked
PRIMDB: NORMAL
STBYDB: NORMAL
Inconsistent attributes will be checked
Agent-Status will be checked
PRIMDB ... Warning: Input could not be written to the process
Warning: Switchover or Failover can not be performed successfully
STBYDB ... Warning: Input could not be written to the process
Warning: Switchover or Failover can not be performed successfully
Log file 76 will be switched ... Finished
Applied Log on STBYDB will be checked ... OK
Processing finished
================================================
Any ideas?
Edited by: gromit on Aug 6, 2009 7:30 AM -
Error: ORA-16525: the Data Guard broker is not yet available
Hi ,
After upgrading GI/RDBMS from 11.2.0.1 to 11.2.0.3 on AIX on the standby (I have not upgraded the primary DB yet). I had set dg_broker_start=false and disabled the configuration before I started the upgrade.
Once the GI for Oracle Restart was upgraded, I upgraded the RDBMS binaries and brought the standby up in mount. While trying to enable the configuration it throws the error below. I had already started the broker process.
SQL> show parameter dg_
NAME TYPE VALUE
dg_broker_config_file1 string /u01/app/omvmxp1/product/11.2.
0/dbhome_2/dbs/dr1mvmxs2.dat
dg_broker_config_file2 string /u01/app/omvmxp1/product/11.2.
0/dbhome_2/dbs/dr2mvmxs2.dat
dg_broker_start boolean TRUE
DGMGRL> show configuration;
Configuration - Matrxrep_brkr
Protection Mode: MaxAvailability
Databases:
mvmxp2 - Primary database
mvmxs2 - Physical standby database
Error: ORA-16525: the Data Guard broker is not yet available
Fast-Start Failover: DISABLED
Configuration Status:
ERROR
from drcmvmxs2.log
Starting Data Guard Broker bootstrap
Broker Configuration File Locations:
dg_broker_config_file1 = "/u01/app/omvmxp1/product/11.2.0/dbhome_2/dbs/dr1mvmxs2.dat"
dg_broker_config_file2 = "/u01/app/omvmxp1/product/11.2.0/dbhome_2/dbs/dr2mvmxs2.dat"
12/19/2012 16:05:33
Data Guard Broker shutting down
DMON Process Shutdown
12/19/2012 16:10:20
Starting Data Guard Broker bootstrap
Broker Configuration File Locations:
dg_broker_config_file1 = "/u01/app/omvmxp1/product/11.2.0/dbhome_2/dbs/dr1mvmxs2.dat"
dg_broker_config_file2 = "/u01/app/omvmxp1/product/11.2.0/dbhome_2/dbs/dr2mvmxs2.dat"
Regards
Edited by: Monto on Dec 19, 2012 1:23 PM
Hi,
I removed the configuration and the broker files from the RAC primary (mvmxp2) and the single-instance standby (mvmxs2) and re-created them. I tried many times but keep getting error "ORA-16532". I need this standby back before I start upgrading the primary.
SQL> alter system set dg_broker_start=true scope=both;
System altered.
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options
palmer60:/u01/app/omvmxp1/product/11.2.0/dbhome_2/dbs>dgmgrl
DGMGRL for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - 64bit Production
Copyright (c) 2000, 2009, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys@mvmxp2
Password:
Connected.
DGMGRL> CREATE CONFIGURATION 'Matrxrep'
AS
PRIMARY DATABASE IS 'mvmxp2'
CONNECT IDENTIFIER IS 'mvmxp2';
Configuration "Matrxrep" created with primary database "mvmxp2"
DGMGRL> ADD DATABASE 'mvmxs2'
AS
CONNECT IDENTIFIER IS 'mvmxs2'
;
Database "mvmxs2" added
DGMGRL> SHOW CONFIGURATION;
Configuration - Matrxrep
Protection Mode: MaxPerformance
Databases:
mvmxp2 - Primary database
mvmxs2 - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
DISABLED
DGMGRL> ENABLE CONFIGURATION;
Enabled.
DGMGRL> SHOW DATABASE MVMXS2;
Database - mvmxs2
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: (unknown)
Apply Lag: (unknown)
Real Time Query: OFF
Instance(s):
mvmxs2
Database Status:
DGM-17016: failed to retrieve status for database "mvmxs2"
ORA-16532: Data Guard broker configuration does not exist
ORA-16625: cannot reach database "mvmxs2"
DGMGRL>
I tailed drcmvmxs2.log during stop and start of the broker:
palmer60:/u01/app/omvmxp1/diag/rdbms/mvmxs2/mvmxs2/trace>tail -f drcmvmxs2.log
12/19/2012 20:32:20
drcx: cannot open configuration file "/u01/app/omvmxp1/product/11.2.0/dbhome_2/dbs/dr1mvmxs2.dat"
ORA-27037: unable to obtain file status
IBM AIX RISC System/6000 Error: 2: No such file or directory
Additional information: 3
12/19/2012 20:32:55
drcx: cannot open configuration file "/u01/app/omvmxp1/product/11.2.0/dbhome_2/dbs/dr2mvmxs2.dat"
ORA-27037: unable to obtain file status
IBM AIX RISC System/6000 Error: 2: No such file or directory
Additional information: 3
12/19/2012 20:59:10
Data Guard Broker shutting down
DMON Process Shutdown
12/19/2012 20:59:35
Starting Data Guard Broker bootstrap
Broker Configuration File Locations:
dg_broker_config_file1 = "/u01/app/omvmxp1/product/11.2.0/dbhome_2/dbs/dr1mvmxs2.dat"
dg_broker_config_file2 = "/u01/app/omvmxp1/product/11.2.0/dbhome_2/dbs/dr2mvmxs2.dat"
Not sure how to fix this one.
Regards -
Data Guard as a pass through?
Scenario is . . .
Host A is the Primary
Host B is a Standby
Host C is a Standby
Now I know we can set up A->B and A->C
Can we set up A->B->C ?
Essentially using B as a pass through between A and C. Or you can see it as B being in a DMZ.
Would B have to be Active Data Guard or anything special?
I guess what I am really asking is: can a Standby be used as the source for another Standby?
See http://download.oracle.com/docs/cd/E11882_01/server.112/e10700/cascade_appx.htm#i638620 for more information on Cascaded Standby Destinations. There are a few restrictions:
Cascading has the following restrictions:
* Logical and snapshot standby databases cannot cascade primary database redo.
* SYNC destinations cannot cascade primary database redo in a Maximum Protection Data Guard configuration.
* Cascading is not supported in Data Guard configurations that contain an Oracle Real Applications Cluster (RAC) primary database.
* Cascading is not supported in Data Guard broker configurations.
Keep an eye on this chapter and Note 409013.1 "Cascaded Standby Databases" when the next patch set for 11.2 comes out :^)
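For completeness, on the intermediate standby (B in the scenario above) an 11.2 cascaded destination is configured roughly like this - the db_unique_names are placeholders, and note the STANDBY_ROLE valid_for, which is what makes B forward redo while it is a standby:

```sql
-- On standby B: forward the redo it receives from A on to C
ALTER SYSTEM SET log_archive_config = 'DG_CONFIG=(boston,chicago,denver)';
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=denver VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=denver';
```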
Larry -
Problem with logminer in Data Guard configuration
Hi all,
I am experiencing a strange problem with applying logs on the logical standby database side of a Data Guard configuration.
I set up the configuration step by step as described in the documentation (Oracle Data Guard Concepts and Administration, chapter 4).
Everything went fine until I issued
ALTER DATABASE START LOGICAL STANDBY APPLY;
I saw that log applying process was started by checking the output of
SELECT NAME, VALUE FROM V$LOGSTDBY_STATS WHERE NAME = 'coordinator state';
and
SELECT TYPE, HIGH_SCN, STATUS FROM V$LOGSTDBY;
but in a few minutes it stopped, and querying DBA_LOGSTDBY_EVENTS I saw the following records:
ORA-16111: log mining and apply setting up
ORA-01332: internal Logminer Dictionary error
Alert log says the following:
LOGSTDBY event: ORA-01332: internal Logminer Dictionary error
Wed Jan 21 16:57:57 2004
Errors in file /opt/oracle/admin/whouse/bdump/whouse_lsp0_5817.trc:
ORA-01332: internal Logminer Dictionary error
Here is the end of the whouse_lsp0_5817.trc
error 1332 detected in background process
OPIRIP: Uncaught error 447. Error stack:
ORA-00447: fatal error in background process
ORA-01332: internal Logminer Dictionary error
But the most useful info I found in one more trace file (whouse_p001_5821.trc):
krvxmrs: Leaving by exception: 604
ORA-00604: error occurred at recursive SQL level 1
ORA-01031: insufficient privileges
ORA-06512: at "SYS.LOGMNR_KRVRDREPDICT3", line 68
ORA-06512: at line 1
It seems that somewhere the correct privileges were not given, or something like this. By the way, I was doing all the operations under the SYS account (as SYSDBA).
Could somebody give me a clue where my mistake could be, or what was done in the wrong way?
Thank you in advance.
Which is your SSIS version?
Visakh -
Data Guard configuration for RAC database disappeared from Grid control
Primary Database Environment - Three node cluster
RAC Database 10.2.0.1.0
Linux Red Hat 4.0 2.6.9-22 64bit
ASM 10.2.0.1.0
Management Agent 10.2.0.2.0
Standby Database Environment - one Node database
Oracle Enterprise Edition 10.2.0.1.0 Single standby
Linux Red Hat 4.0 2.6.9-22 64bit
ASM 10.2.0.1.0
Management Agent 10.2.0.2.0
Grid Control 10.2.0.1.0 - Node separate from standby and cluster environments
Oracle 10.1.0.1.0
Grid Control 10.2.0.1.0
Red Hat 4.0 2.6.9-22 32bit
After adding a logical standby database through Grid Control for a RAC database, I noticed some time later that the Data Guard configuration had disappeared from Grid Control. Not sure why, but it is gone. I did notice that something went wrong with the standby creation, but I did not get much feedback from Grid Control. The last thing I did was view the configuration; see the output below.
Initializing
Connected to instance qdcls0427:ELCDV3
Starting alert log monitor...
Updating Data Guard link on database homepage...
Data Protection Settings:
Protection mode : Maximum Performance
Log Transport Mode settings:
ELCDV.qdx.com: ARCH
ELXDV: ARCH
Checking standby redo log files.....OK
Checking Data Guard status
ELCDV.qdx.com : ORA-16809: multiple warnings detected for the database
ELXDV : Creation status unknown
Checking Inconsistent Properties
Checking agent status
ELCDV.qdx.com
qdcls0387.qdx.com ... OK
qdcls0388.qdx.com ... OK
qdcls0427.qdx.com ... OK
ELXDV ... WARNING: No credentials available for target ELXDV
Attempting agent ping ... OK
Switching log file 672 ... Done
WARNING: Skipping check for applied log on ELXDV : disabled
Processing completed.
Here are the steps followed to add the standby database in Grid Control
Maintenance tab
Setup and Manage Data Guard
Logged in as sys
Add standby database
Create a new logical standby database
Perform a live backup of the primary database
Specify backup directory for staging area
Specify standby database name and Oracle home location
Specify file location staging area on standby node
At the end I am presented with a review of the selected options, and then the standby database is created.
Has anybody come across a similar issue?
Thanks.
Any resolution on this?
I just created a Logical Standby database and I'm getting the same warning (WARNING: No credentials available for target ...) when I do a 'Verify Configuration' from the Data Guard page.
Everything else seems to be working fine. Logs are being applied, etc.
I can't figure out what credentials it's looking for.
Best Practice for monitoring database targets configured for Data Guard
We are in the process of migrating our DB targets to 12c Cloud Control.
In our current 10g environment the Primary Targets are monitored and administered by OEM GC A, and the Standby Targets are monitored by OEM GC B. Originally, I believe this was because of proximity and network speed, and over time it evolved to a Primary/Standby separation. One of the greatest challenges in this configuration is keeping OEM jobs in sync on both sides (in case of switchover/failover).
For our new OEM CC environment we are setting up CC A and CC B. However, I would like to determine whether it would be smarter to monitor all DB targets (Primary and Standby) from the same CC console; in other words, monitor and administer DB Primary and Standby from the same OEM CC Console. I am trying to determine the best practice. I am not sure whether administering a switchover from Cloud Control from Primary to Standby requires that both targets are monitored in the same environment.
I am interested in feedback. I am also interested in finding good reference materials (I have been looking at Oracle documentation and other documents online). Thanks for your input and thoughts. I am deliberately trying to keep this as concise as possible.
OMS is a tool; it is not required to monitor your primary and standby from any particular console, which is what I meant by the comment.
The reason you need the same OMS to monitor both the primary and the standby is that the Data Guard administration screen will show both targets. You also have the option of doing switchovers and failovers, as well as converting the primary or standby. One of the options is also to move all the jobs scheduled against the primary over to the standby during a switchover or failover.
There is no document that states that you need to have all targets on one OMS, but it is the best method, given the reason for having OMS: it is a tool to keep all targets in a central repository. If you start having different OMS servers and repositories, you will need to log into separate OMS consoles to administer the targets.
Data Guard Transport and Real Time Apply Configuration
Hello,
I'm configuring a Data Guard setup using Oracle 12.1.0.2 and I have observed something in a maximum performance configuration, which is the default:
the system behaves like a SYNC configuration.
DGMGRL> show configuration
Configuration - DRSolution
Protection Mode: MaxPerformance
Members:
orcl - Primary database
STBY - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 53 seconds ago)
SQL> show parameters log_archive_Dest_2
NAME TYPE VALUE
log_archive_dest_2 string service="stby", ASYNC NOAFFIRM
delay=0 optional compression=
disable max_failure=0 max_conn
ections=1 reopen=300 db_unique
_name="STBY" net_timeout=30, v
alid_for=(online_logfile,all_r
oles)
However, if I insert one row in a test table on the primary database and commit, I immediately see the change in the standby database.
Is this behavior new in this version?
How can I revert to the classic max performance configuration, where the archive log is only sent when it is full?
Thanks
Arturo
> How can I revert to the classic max performance configuration, where the archive log is only sent when it is full?
That is ARCH shipping. The default in 11.2 and above is LGWR shipping, so LGWR ships the redo immediately. The only thing with ASYNC is that it doesn't wait for the standby to acknowledge before it returns a COMMIT to the user. In a low-load environment, if the standby can apply the redo fast enough, ASYNC is virtually the same as SYNC; that is, Maximum Performance becomes the same as Maximum Availability. Only when the redo load is higher than the available bandwidth, or the speed at which the standby can apply it, will you see a lag in the standby. (To force an explicit lag, there is a separate DELAY parameter.)
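The explicit delay mentioned above is exposed through the broker as the DelayMins database property - a sketch, with the database name taken as an assumption from this thread:

```sql
DGMGRL> EDIT DATABASE 'STBY' SET PROPERTY DelayMins = 30;
-- Note: a delay only takes effect when the standby is not using real-time apply.
```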
ARCH shipping is deprecated. -
Cloud 12.1.0.4: Creating an asymmetric Data Guard Configuration
Hello,
I'd like to create a Data Guard configuration from the Cloud Console.
The primary database is a two-node RAC, but the standby is a single node and I don't want to use ASM.
The wizard asks for the target node and the Standby Instance Name, but in step 3 it shows this error:
No ASM instance is being managed as an EM target on the selected standby host.
To create a standby database on the selected host, an ASM instance must be added as an EM target.
Is it not possible to use this mix of storage configurations from Cloud Control?
Thanks
Arturo
Hi,
It is recommended to use ASM on both sides. Normally OEM expects the same DB structures on the standby.
The procedure below should help you:
Data Guard – ASM primary to filesystem physical standby – using RMAN duplicate
https://pythianpang.wordpress.com/2009/07/07/data-guard-asm-primary-to-filesystem-physical-standby-using-rman-duplicate/
Regards
Krishnan -
Can I make a Data Guard configuration using EM console without Grid Control
Can I make a Data Guard configuration using EM console without Grid Console?
Can I download the Grid Control software from the Oracle website without cost?
Assuming this is for 10g,
You could use Oracle® Data Guard Broker
Even if you can download the Grid Control software for free from the Oracle site, you can't legally use it without a license. -
Data Guard configuration-Archivelogs not being transferred
Hi Gurus,
I have configured Data Guard on Linux with Oracle 10g, although I am new to this concept. tnsping works on both sides. I have issued 'alter database recover managed standby database using current logfile disconnect' on the standby site, but I am not receiving the archive logs on the standby site. I have attached both my pfiles below for your reference:
Primary database name: Chennai
Secondary database name: Mumbai
PRIMARY PFILE:
db_block_size=8192
db_file_multiblock_read_count=16
open_cursors=300
db_domain=""
background_dump_dest=/u01/app/oracle/product/10.2.0/db_1/admin/chennai/bdump
core_dump_dest=/u01/app/oracle/product/10.2.0/db_1/admin/chennai/cdump
user_dump_dest=/u01/app/oracle/product/10.2.0/db_1/admin/chennai/udump
db_create_file_dest=/u01/app/oracle/product/10.2.0/db_1/oradata
db_recovery_file_dest=/u01/app/oracle/product/10.2.0/db_1/flash_recovery_area
db_recovery_file_dest_size=2147483648
job_queue_processes=10
compatible=10.2.0.1.0
processes=150
sga_target=285212672
audit_file_dest=/u01/app/oracle/product/10.2.0/db_1/admin/chennai/adump
remote_login_passwordfile=EXCLUSIVE
dispatchers="(PROTOCOL=TCP) (SERVICE=chennaiXDB)"
pga_aggregate_target=94371840
undo_management=AUTO
undo_tablespace=UNDOTBS1
control_files=("/u01/app/oracle/product/10.2.0/db_1/oradata/CHENNAI/controlfile/o1_mf_82gl1b43_.ctl", "/u01/app/oracle/product/10.2.0/db_1/flash_recovery_area/CHENNAI/controlfile/o1_mf_82gl1bny_.ctl")
DB_NAME=chennai
DB_UNIQUE_NAME=chennai
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chennai,mumbai)'
LOG_ARCHIVE_DEST_1=
'LOCATION=/u01/app/oracle/product/10.2.0/db_1/oradata/CHENNAI/datafile/arch/
VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=chennai'
LOG_ARCHIVE_DEST_2=
'SERVICE=MUMBAI LGWR ASYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=mumbai'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=mumbai
FAL_CLIENT=chennai
DB_FILE_NAME_CONVERT=(/home/oracle/oracle/product/10.2.0/db_1/oradata/MUMBAI/datafile/,/u01/app/oracle/product/10.2.0/db_1/oradata/CHENNAI/datafile/)
LOG_FILE_NAME_CONVERT='/home/oracle/oracle/product/10.2.0/db_1/oradata/MUMBAI/onlinelog/','/u01/app/oracle/product/10.2.0/db_1/oradata/CHENNAI/onlinelog/','/home/oracle/oracle/product/10.2.0/db_1/flash_recovery_area/MUMBAI/onlinelog/','/u01/app/oracle/product/10.2.0/db_1/flash_recovery_area/CHENNAI/onlinelog/'
STANDBY_FILE_MANAGEMENT=AUTO
SECONDARY PFILE:
db_block_size=8192
db_file_multiblock_read_count=16
open_cursors=300
db_domain=""
db_name=chennai
background_dump_dest=/home/oracle/oracle/product/10.2.0/db_1/admin/mumbai/bdump
core_dump_dest=/home/oracle/oracle/product/10.2.0/db_1/admin/mumbai/cdump
user_dump_dest=/home/oracle/oracle/product/10.2.0/db_1/admin/mumbai/udump
db_recovery_file_dest=/home/oracle/oracle/product/10.2.0/db_1/flash_recovery_area/
db_create_file_dest=/home/oracle/oracle/product/10.2.0/db_1/oradata/
db_recovery_file_dest_size=2147483648
job_queue_processes=10
compatible=10.2.0.1.0
processes=150
sga_target=285212672
audit_file_dest=/home/oracle/oracle/product/10.2.0/db_1/admin/mumbai/adump
remote_login_passwordfile=EXCLUSIVE
dispatchers="(PROTOCOL=TCP) (SERVICE=mumbaiXDB)"
pga_aggregate_target=94371840
undo_management=AUTO
undo_tablespace=UNDOTBS1
control_files="/home/oracle/oracle/product/10.2.0/db_1/oradata/MUMBAI/controlfile/standby01.ctl","/home/oracle/oracle/product/10.2.0/db_1/flash_recovery_area/MUMBAI/controlfile/standby02.ctl"
DB_UNIQUE_NAME=mumbai
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chennai,mumbai)'
LOG_ARCHIVE_DEST_1='LOCATION=/home/oracle/oracle/product/10.2.0/db_1/oradata/MUMBAI/datafile/arch/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=mumbai'
LOG_ARCHIVE_DEST_2='SERVICE=chennai LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=chennai'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
FAL_SERVER=chennai
FAL_CLIENT=mumbai
DB_FILE_NAME_CONVERT=(/u01/app/oracle/product/10.2.0/db_1/oradata/CHENNAI/datafile/,/home/oracle/oracle/product/10.2.0/db_1/oradata/MUMBAI/datafile/)
LOG_FILE_NAME_CONVERT='/u01/app/oracle/product/10.2.0/db_1/oradata/CHENNAI/onlinelog/','/home/oracle/oracle/product/10.2.0/db_1/oradata/MUMBAI/onlinelog/','/u01/app/oracle/product/10.2.0/db_1/flash_recovery_area/CHENNAI/onlinelog/','/home/oracle/oracle/product/10.2.0/db_1/flash_recovery_area/MUMBAI/onlinelog/'
STANDBY_FILE_MANAGEMENT=AUTO
Any help would be greatly appreciated. Kindly help me, someone, please.
-Vimal.
Thanks Balazs, Mseberg, CKPT for all your replies...
CKPT... I just did what you said. Below are the primary output and the standby output...
PRIMARY_
SQL> set feedback off
SQL> set trimspool on
SQL> set line 500
SQL> set pagesize 50
SQL> column name for a30
SQL> column display_value for a30
SQL> column ID format 99
SQL> column "SRLs" format 99
SQL> column active format 99
SQL> col type format a4
SQL> column ID format 99
SQL> column "SRLs" format 99
SQL> column active format 99
SQL> col type format a4
SQL> col PROTECTION_MODE for a20
SQL> col RECOVERY_MODE for a20
SQL> col db_mode for a15
SQL> SELECT name, display_value FROM v$parameter WHERE name IN ('db_name','db_unique_name','log_archive_config','log_archive_dest_2','log_archive_dest_state_2','fal_client','fal_server','standby_file_management','standby_archive_dest','db_file_name_convert','log_file_name_convert','remote_login_passwordfile','local_listener','dg_broker_start','dg_broker_config_file1','dg_broker_config_file2','log_archive_max_processes') order by name;
NAME                       DISPLAY_VALUE
db_file_name_convert       /home/oracle/oracle/product/10.2.0/db_1/oradata/MUMBAI/datafile/, /u01/app/oracle/product/10.2.0/db_1/oradata/CHENNAI/datafile/
db_name                    chennai
db_unique_name             chennai
dg_broker_config_file1     /u01/app/oracle/product/10.2.0/db_1/dbs/dr1chennai.dat
dg_broker_config_file2     /u01/app/oracle/product/10.2.0/db_1/dbs/dr2chennai.dat
dg_broker_start            FALSE
fal_client                 chennai
fal_server                 mumbai
local_listener
log_archive_config         DG_CONFIG=(chennai,mumbai)
log_archive_dest_2         SERVICE=MUMBAI LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=mumbai
log_archive_dest_state_2   ENABLE
log_archive_max_processes  30
log_file_name_convert      /home/oracle/oracle/product/10.2.0/db_1/oradata/MUMBAI/onlinelog/, /u01/app/oracle/product/10.2.0/db_1/oradata/CHENNAI/onlinelog/, /home/oracle/oracle/product/10.2.0/db_1/flash_recovery_area/MUMBAI/onlinelog/, /u01/app/oracle/product/10.2.0/db_1/flash_recovery_area/CHENNAI/onlinelog/
remote_login_passwordfile  EXCLUSIVE
standby_archive_dest       ?/dbs/arch
standby_file_management    AUTO
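One thing that stands out in the parameter output above is dg_broker_start = FALSE on both databases, so DGMGRL cannot manage this configuration until the broker process is started. A minimal sketch, assuming you want the broker running on both sides (run on each database):

```sql
-- Start the Data Guard broker (DMON) process; SCOPE=BOTH makes the
-- change effective immediately and persists it in the spfile.
ALTER SYSTEM SET dg_broker_start=TRUE SCOPE=BOTH;
```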
SQL> col name for a10
SQL> col DATABASE_ROLE for a10
SQL> SELECT name,db_unique_name,protection_mode,DATABASE_ROLE,OPEN_MODE,switchover_status from v$database;
NAME     DB_UNIQUE_NAME  PROTECTION_MODE      DATABASE_ROLE  OPEN_MODE   SWITCHOVER_STATUS
CHENNAI  chennai         MAXIMUM PERFORMANCE  PRIMARY        READ WRITE  NOT ALLOWED
SQL> select thread#,max(sequence#) from v$archived_log group by thread#;
THREAD# MAX(SEQUENCE#)
1 210
SQL> SELECT ARCH.THREAD# "Thread", ARCH.SEQUENCE# "Last Sequence Received", APPL.SEQUENCE# "Last Sequence Applied", (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
2 FROM
3 (SELECT THREAD# ,SEQUENCE# FROM V$ARCHIVED_LOG WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
4 (SELECT THREAD# ,SEQUENCE# FROM V$LOG_HISTORY WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
5 WHERE ARCH.THREAD# = APPL.THREAD# ORDER BY 1;
Thread Last Sequence Received Last Sequence Applied Difference
1 210 210 0
SQL> col severity for a15
SQL> col message for a70
SQL> col timestamp for a20
SQL> select severity,error_code,to_char(timestamp,'DD-MON-YYYY HH24:MI:SS') "timestamp" , message from v$dataguard_status where dest_id=2;
SEVERITY ERROR_CODE timestamp MESSAGE
Error 16191 15-AUG-2012 12:46:02 LGWR: Error 16191 creating archivelog file 'MUMBAI'
Error 16191 15-AUG-2012 12:46:02 FAL[server, ARC1]: Error 16191 creating remote archivelog file 'MUMBAI'
Error 16191 15-AUG-2012 12:51:58 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is 16191.
[... the same PING[ARCb] heartbeat failure repeats roughly every five minutes through 15-AUG-2012 18:26:41, always with error 16191 ...]
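The repeating error above, ORA-16191 ("Primary log shipping client not logged on standby"), usually means the standby rejected the primary's SYS credentials, i.e. the password files on the two sides do not match. A hedged sketch of the usual checks and fix, assuming default dbs locations (the file names below are assumptions for this chennai/mumbai setup, not taken from the thread):

```sql
-- On both databases: confirm a password file is in use and SYS is in it.
SELECT * FROM v$pwfile_users;

-- A common fix is to copy the primary's password file to the standby's
-- $ORACLE_HOME/dbs, renamed for the standby instance, e.g. (OS command,
-- names are assumptions):
--   $ scp $ORACLE_HOME/dbs/orapwchennai \
--         standby_host:/home/oracle/oracle/product/10.2.0/db_1/dbs/orapwmumbai

-- Then bounce the destination on the primary to force a fresh logon:
ALTER SYSTEM SET log_archive_dest_state_2='DEFER';
ALTER SYSTEM SET log_archive_dest_state_2='ENABLE';
```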
SQL> select ds.dest_id id
2 , ad.status
3 , ds.database_mode db_mode
4 , ad.archiver type
5 , ds.recovery_mode
6 , ds.protection_mode
7 , ds.standby_logfile_count "SRLs"
8 , ds.standby_logfile_active active
9 , ds.archived_seq#
10 from v$archive_dest_status ds
11 , v$archive_dest ad
12 where ds.dest_id = ad.dest_id
13 and ad.status != 'INACTIVE'
14 order by
15 ds.dest_id;
ID STATUS DB_MODE TYPE RECOVERY_MODE PROTECTION_MODE SRLs ACTIVE ARCHIVED_SEQ#
1 VALID OPEN ARCH IDLE MAXIMUM PERFORMANCE 0 0 210
2 ERROR UNKNOWN LGWR UNKNOWN MAXIMUM PERFORMANCE 0 0 0
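The ERROR status on dest_id 2 in the output above can be examined directly; v$archive_dest keeps the last error text per destination, which should show the ORA-16191 here:

```sql
-- Show the stored error for the failing standby destination.
SELECT dest_id, status, error
FROM   v$archive_dest
WHERE  dest_id = 2;
```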
SQL> column FILE_TYPE format a20
SQL> col name format a60
SQL> select name
2 , floor(space_limit / 1024 / 1024) "Size MB"
3 , ceil(space_used / 1024 / 1024) "Used MB"
4 from v$recovery_file_dest
5 order by name;
NAME Size MB Used MB
/u01/app/oracle/product/10.2.0/db_1/flash_recovery_area 2048 896
SQL> spool offspool u01/app/oracle/vimal.log
SP2-0768: Illegal SPOOL command
Usage: SPOOL { <file> | OFF | OUT }
where <file> is file_name[.ext] [CRE[ATE]|REP[LACE]|APP[END]]
SQL> spool /u01/app/oracle/vimal.log
Standby output:
SQL> set feedback off
SQL> set trimspool on
SQL> set line 500
SQL> set pagesize 50
SQL> set linesize 200
SQL> column name for a30
SQL> column display_value for a30
SQL> col value for a10
SQL> col PROTECTION_MODE for a15
SQL> col DATABASE_Role for a15
SQL> SELECT name, display_value FROM v$parameter WHERE name IN ('db_name','db_unique_name','log_archive_config','log_archive_dest_2','log_archive_dest_state_2','fal_client','fal_server','standby_file_management','standby_archive_dest','db_file_name_convert','log_file_name_convert','remote_login_passwordfile','local_listener','dg_broker_start','dg_broker_config_file1','dg_broker_config_file2','log_archive_max_processes') order by name;
NAME                       DISPLAY_VALUE
db_file_name_convert       /u01/app/oracle/product/10.2.0/db_1/oradata/CHENNAI/datafile/, /home/oracle/oracle/product/10.2.0/db_1/oradata/MUMBAI/datafile/
db_name                    chennai
db_unique_name             mumbai
dg_broker_config_file1     /home/oracle/oracle/product/10.2.0/db_1/dbs/dr1mumbai.dat
dg_broker_config_file2     /home/oracle/oracle/product/10.2.0/db_1/dbs/dr2mumbai.dat
dg_broker_start            FALSE
fal_client                 mumbai
fal_server                 chennai
local_listener
log_archive_config         DG_CONFIG=(chennai,mumbai)
log_archive_dest_2         SERVICE=chennai LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=chennai
log_archive_dest_state_2   ENABLE
log_archive_max_processes  2
log_file_name_convert      /u01/app/oracle/product/10.2.0/db_1/oradata/CHENNAI/onlinelog/, /home/oracle/oracle/product/10.2.0/db_1/oradata/MUMBAI/onlinelog/, /u01/app/oracle/product/10.2.0/db_1/flash_recovery_area/CHENNAI/onlinelog/, /home/oracle/oracle/product/10.2.0/db_1/flash_recovery_area/MUMBAI/onlinelog/
remote_login_passwordfile  EXCLUSIVE
standby_archive_dest       ?/dbs/arch
standby_file_management    AUTO
SQL> col name for a10
SQL> col DATABASE_ROLE for a10
SQL> SELECT name,db_unique_name,protection_mode,DATABASE_ROLE,OPEN_MODE from v$database;
NAME     DB_UNIQUE_NAME  PROTECTION_MODE      DATABASE_ROLE     OPEN_MODE
CHENNAI  mumbai          MAXIMUM PERFORMANCE  PHYSICAL STANDBY  MOUNTED
SQL> select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread#;
SQL> select process, status,thread#,sequence# from v$managed_standby;
PROCESS STATUS THREAD# SEQUENCE#
ARCH CONNECTED 0 0
ARCH CONNECTED 0 0
MRP0 WAIT_FOR_LOG 1 152
SQL> col name for a30
SQL> select * from v$dataguard_stats;
NAME VALUE UNIT TIME_COMPUTED
apply finish time day(2) to second(1) interval
apply lag day(2) to second(0) interval
estimated startup time 10 second
standby has been open N
transport lag day(2) to second(0) interval
SQL> select * from v$archive_gap;
SQL> col name format a60
SQL> select name
2 , floor(space_limit / 1024 / 1024) "Size MB"
3 , ceil(space_used / 1024 / 1024) "Used MB"
4 from v$recovery_file_dest
5 order by name;
NAME Size MB Used MB
/home/oracle/oracle/product/10.2.0/db_1/flash_recovery_area/ 2048 150
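Note that MRP0 is waiting for sequence 152 while the primary has archived up to 210, yet v$archive_gap returned nothing; the gap view only populates once transport is working. A sketch of the follow-up after the ORA-16191 is fixed (the archive path and log name below are assumptions for illustration, not from this environment):

```sql
-- Once transport is restored, watch the standby catch up:
SELECT process, status, thread#, sequence# FROM v$managed_standby;

-- If FAL cannot resolve a remaining gap automatically, archived logs
-- copied over from the primary by hand can be registered manually:
ALTER DATABASE REGISTER PHYSICAL LOGFILE
  '/home/oracle/oracle/product/10.2.0/db_1/dbs/arch/1_153_XXXXXXXXX.dbf';
```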
SQL> spool off
-Vimal.