Data Guard data population issue
Hi ...
I have a Data Guard configuration in which a new user schema was just created (a couple of tablespaces and datafiles). The tablespaces are set up in NOLOGGING mode. After checking the physical standby server, I can see the datafiles were created there.
My questions:
1. How do I re-sync data from the primary to the physical standby, assuming the schema has been created but no data has been populated yet?
2. How do I drop the data for that schema on the physical standby so it matches the primary, and then carry out the re-sync from question 1?
My environment is AIX 5L with Oracle 9.2.0.4 and a Unicode character set.
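For reference, the usual approach to question 1 looks roughly like this; treat it as a sketch rather than a verified procedure, and note the tablespace name is a placeholder:

```sql
-- On the primary: find datafiles affected by NOLOGGING operations.
SELECT file#, name, unrecoverable_change#, unrecoverable_time
  FROM v$datafile
 WHERE unrecoverable_change# > 0;

-- Put the tablespace back into LOGGING mode so future changes generate redo.
-- "user_data" is a placeholder for the real tablespace name.
ALTER TABLESPACE user_data LOGGING;

-- Then take a fresh backup of the affected datafiles on the primary,
-- copy them over the standby's copies, and restart managed recovery:
RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```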
Thanks in advance for any information.
Cheers.
BAM
Hi Justin.
Thanks for your reply.
I have switched the tablespaces back to LOGGING mode, re-created the schema, and re-imported the data into it. When I did the database switchover, the primary site successfully changed to standby, and the opposite direction works well too.
However, after querying these:
SELECT NAME, UNRECOVERABLE_CHANGE# FROM V$DATAFILE;
SELECT UNRECOVERABLE_CHANGE#,
TO_CHAR(UNRECOVERABLE_TIME, 'mm-dd-yyyy hh:mi:ss')
FROM V$DATAFILE;
on my primary site, one of the datafiles showed an UNRECOVERABLE_CHANGE# of 5591564 (an SCN, not a byte count) at 05-06-2004 02:44:48. Then at 05-05-2004 10:51:34 the UNRECOVERABLE_CHANGE# for the same datafile was gone (it became 0).
According to the documentation, UNRECOVERABLE_CHANGE# is populated when a data block error like the following occurs:
ORA-01578: ORACLE data block corrupted (file # 1, block # 2521)
ORA-01110: data file 1: '/oracle/dbs/stdby/tbs_1.dbf'
ORA-26040: Data block was loaded using the NOLOGGING option
I've checked the primary and standby alert logs and the Data Guard switchover logs -- no error similar to the above appears.
Any idea what happened in the above scenario?
Thanks.
BAM
Similar Messages
-
Data Guard - MRP stuck issues on a physical standby database
Hi,
Oracle 11.2.0.3 DG running. When I do a switchover, the physical standby database fails with the following errors:
ARC0: LGWR is actively archiving destination LOG_ARCHIVE_DEST_2
ARCH: Archival stopped, error occurred. Will continue retrying
ORACLE Instance <primaryDB> - Archival Error
On standby DB
SQL>select process, thread#, sequence#, status from v$managed_standby where process='MRP0';
PROCESS THREAD# SEQUENCE# STATUS
MRP0 1 548 APPLYING_LOG
So, per an Oracle Support link, I executed the following:
SQL>recover managed standby database cancel;
SQL>recover automatic standby database;
The above seems to resolve the issue. What is causing this?
Hello again;
Those both look perfect. I combed through my notes and found nothing like this for your version.
Yes, I would open an SR since it appears you have done everything correct.
ORA-600 [3020] "Stuck Recovery" [ID 30866.1]
The "Known Bugs" section of the above has a few 11.2.0.3 entries.
Generally the MRP gets stuck because Data Guard thinks there's a gap, you run out of room in the FRA on the standby, or the redo logs are too small and the system is switching very fast.
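Each of the three causes above can be checked from SQL; a minimal sketch, assuming nothing about the environment:

```sql
-- 1. Gap detection, run on the standby:
SELECT thread#, low_sequence#, high_sequence# FROM v$archive_gap;

-- 2. FRA pressure on the standby:
SELECT name,
       space_limit / 1024 / 1024 AS limit_mb,
       space_used  / 1024 / 1024 AS used_mb
  FROM v$recovery_file_dest;

-- 3. Log switch rate on the primary (small, fast-switching redo logs):
SELECT thread#, COUNT(*) AS switches_last_24h
  FROM v$log_history
 WHERE first_time > SYSDATE - 1
 GROUP BY thread#;
```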
Best Regards
mseberg
Later
I never asked you, but note that "log_archive_max_processes" can be set as high as 30.
Edited by: mseberg on Jul 16, 2012 8:01 PM
h2. Still later
Found this, which is closer:
Bug 13553883 - Archiver stuck but no ORA-19xxx error in alert log (messages need changing) [ID 13553883.8] -
MC Service Guard + Data Guard + Data Broker?
Hi guys,
A 2-node cluster on HP-UX with MC/ServiceGuard (active/standby) has a running Oracle 10g database on one of the nodes. I am planning to implement Data Guard on the second node. But MC/ServiceGuard is responsible for failover of the nodes, which I think is the same role the Data Guard Broker plays. Would this cause a conflict between the MC/ServiceGuard and Broker failover roles? Should I not implement the Broker? Could anyone give me some ideas? Thanks in advance.
YES!! You can install 10g Grid Control under HP MC/ServiceGuard. I have done it with some undocumented tweaking. As this is an undocumented feature, use it at your own risk.
Problem :
Installer picks up the hostname and the Management service is installed on the host instead of the package. When the package flips over to the other host the management service breaks.
Solution :
Copy the CD/DVD to the host or, better still, download the media from edelivery.oracle.com to your host. (I personally do not mount CD/DVD on HP servers because of the crazy pfs mounting. It tends to hang the server if you don't do it right.) You will have to install from the copied media since you will be modifying one of the supplied ini files.
Edit oraparam.ini under Disk1/install in the copied installation media. Change SHOW_HOSTNAME=NEVER_SHOW to SHOW_HOSTNAME=ALWAYS_SHOW and save the file
Start the installer. During the installation there will be a screen to enter the hostname. Change the hostname to the package name. (The labels in the hostname screen did not display properly. I use Exceed. If you see a screen with your hostname in a text box, just change it to the package name and continue).
Towards the end of the installation, the opmn configuration will fail. When it does, open $ORACLE_HOME/webcache/webcache.xml, find the line "HOSTNAME=package_name", and change it to "HOSTNAME=hostname". Retry the configuration step after making the change and your install should proceed and complete successfully.
Web Cache does not like packages. You should have multiple copies of webcache.xml, one for each host in the package (webcache.host.xml), with the corresponding hostname in each file. As part of the package startup, the appropriate webcache.host.xml should be copied to webcache.xml before "opmnctl startall".
Given Oracle's direction and attitude, these solutions will be short-lived. They want you to use RAC and Application Server HA instead of ServiceGuard. -
Our primary database has cascading DML (insert, update, delete) in triggers. With a physical standby database kept in sync by redo log processing, is there duplicated processing (both the redo of the original SQL and the redo of the cascade triggers) on the physical standby database?
If the answer is yes, we need to disable these triggers.
Thanks for explaining in advance!!
Oradb
Thanks for your explanation.
A physical standby keeps in sync by applying redo blocks. What I mean is: on the primary, an insert into a table fires the cascade trigger, which modifies other tables, and the redo for all of it ships to the physical standby. So won't applying the insert on the standby fire the cascade trigger again and modify the other tables a second time?
If these cascade triggers are not disabled, it seems there would be duplicated DML processing on the standby database.
Am I wrong somewhere?
Thanks for your clarification!!
oradb -
Data Guard - dual configuration
Hi,
we are running Data Guard configuration (Windows Server 2008 R2) with physical standby database in place, using FSFO configuration with DG Broker - observer process on separate machine (10.2.0.5). My question is, if it's safe and possible to create another, separate Data Guard configuration (11.2.0.3) on the same machines (2x database server - databases, 1x client PC - observer)?
Thank you,
t.
Hello;
I've done it, albeit on Linux, but that should not matter. I had two Oracle homes, one for each version, and a script to set the environment as needed. I ran into this during database upgrades too. Some of the databases were ready and some were not, so I was running 11.2.0.2 and 11.2.0.3 on the same server at the same time, both with Data Guard. No issues. The main thing was to use one listener and include the entries for the lower version in its file. I always start and stop the listener from the higher version.
A few years ago I still had some Oracle 10 databases and they worked just fine on a server with Oracle 11. Separate Oracle homes are the key. So on Linux I had no issue running Data Guard on the same server with two versions of Oracle.
Best Regards
mseberg
This is older, but worth a look (Using Multiple Oracle Homes):
http://docs.oracle.com/cd/B10500_01/em.920/a96697/moh.htm
Edited by: mseberg on May 31, 2013 7:49 AM -
Data Guard Configuration Issue / ORA-16047
So last night I decided to setup a test Physical Standby database. I had everything working correctly and when I started playing around with the Data Guard Broker I started having some problems. Now I can't get the logs to ship from the primary to the standby.
Version: Primary and Standby
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
OS: Primary and Standby
[oracle@dgdb0 trace]$ uname -a
Linux dgdb0.localdomain 2.6.32-100.28.5.el6.x86_64 #1 SMP Wed Feb 2 18:40:23 EST 2011 x86_64 x86_64 x86_64 GNU/Linux
I first noticed a problem with a large gap in sequence numbers.
Standby
SQL> SELECT sequence#, applied from v$archived_log order by sequence#;
SEQUENCE# APPLIED
8 YES
9 YES
10 YES
11 YES
12 YES
13 YES
14 YES
7 rows selected.
Primary
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 37
Next log sequence to archive 39
Current log sequence 39
Here is some of the configuration information on the primary:
SQL> show parameter db_name
NAME TYPE VALUE
db_name string dgdb0
SQL> show parameter db_unique_name
NAME TYPE VALUE
db_unique_name string dgdb0
SQL> show parameter log_archive_config
NAME TYPE VALUE
log_archive_config string dg_config=(dgdb0,dgdb1)
SQL> show parameter log_archive_dest_2
NAME TYPE VALUE
log_archive_dest_2 string service=dgdb1 async valid_for=
(online_logfile,primary_role)
db_unique_name=dgdb1
Standby parameters
SQL> show parameter db_name
NAME TYPE VALUE
db_name string dgdb0
SQL> show parameter db_unique_name
NAME TYPE VALUE
db_unique_name string dgdb1
So I proceeded to run this query:
SQL> SELECT error from v$archive_dest WHERE dest_name='LOG_ARCHIVE_DEST_2';
ERROR
ORA-16047: DGID mismatch between destination setting and target
database
The error description is:
Cause: The DB_UNIQUE_NAME specified for the destination does not match the DB_UNIQUE_NAME at the destination.
Action: Make sure the DB_UNIQUE_NAME specified in the LOG_ARCHIVE_DEST_n parameter defined for the destination matches the DB_UNIQUE_NAME parameter defined at the destination.
As you can see from above, the DB_UNIQUE_NAME in the LOG_ARCHIVE_DEST_2 parameter matches that of the standby database.
Also DG_BROKER_START is set to false on both the primary and standby databases.
Finally, I've removed all the drc* files from the $ORACLE_HOME/dbs directories on both the primary and standby servers to ensure the broker is not configured.
Where did I go wrong? How can I get the standby caught up and working correctly again?
I apologize if I missed anything. I'm relatively new to standby databases.
Centinul;
I have noticed a couple things
1. If you are running the query below from the standby, you will probably always get the results you posted:
SELECT sequence#, applied from v$archived_log order by sequence#;
What I do is run this from the primary, and I add the DEST_ID column to the query.
2. You might have better luck finding GAPS using these queries:
select max(sequence#) from v$archived_log where applied='YES';
select process,status from v$managed_standby;
SELECT * FROM V$ARCHIVE_GAP;
3. You are mixing SQL results with the Data Guard Broker, and that can bite you. Not sure where you went wrong, but I would create PFILE versions at both ends before trying the Broker. Then you can review each setting and avoid issues before adding the Broker. The Broker will take control, and you may even find it adds entries to your parameter file.
The ORA-16047 is probably database-parameter related, and this should at least help answer the question. For example, you might be missing log_archive_config on the standby or something. Comparing the two PFILEs should narrow this down.
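A quick way to do that comparison; the PFILE path is illustrative, and these are the three settings that most often explain an ORA-16047:

```sql
-- Run on each side to dump the spfile to a readable pfile for diffing:
CREATE PFILE='/tmp/params_this_side.ora' FROM SPFILE;

-- Settings that must line up for redo shipping:
SHOW PARAMETER db_unique_name
SHOW PARAMETER log_archive_config   -- must list BOTH unique names on BOTH sides
SHOW PARAMETER log_archive_dest_2
```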
I checked my Data Guard Broker notes but did not find an ORA-16047; I did manage ORA-01031, ORA-16675, ORA-12514, and ORA-16608.
For me, I decided it was a good idea to run Data Guard without the Broker at first, until I got the feel of it using SQL.
Last of all, if you have not already, consider buying Larry Carpenter's "Oracle Data Guard 11g Handbook". In my humble opinion it's worth every penny and more.
Best Regards
mseberg -
Info on licensing issues of standby databases/Oracle Data Guard
Hi All,
Could anyone possibly give me some information on licensing issues for standby databases / Oracle Data Guard? Links to some electronic articles or journals would be useful. I am unable to find the appropriate info and need it quite urgently for my dissertation, as my deadline is approaching.
Thanks in advance
Paul Drake posted a reply to a similar question on the Oracle-L mailing list that pointed out that the License Agreement
http://oraclestore.oracle.com/OA_HTML/ibeCCtpSctDspRte.jsp?media=os_local_license_agreement&section=11365&minisite=10021&respid=22372&grp=STORE&language=US
states, in part,
"Failover: Your license for the following programs, Oracle Database (Enterprise Edition, Standard Edition or Standard Edition One) and Oracle Internet Application Server (Enterprise Edition, Standard Edition, Standard Edition One or Java Edition) includes the right to run the licensed program(s) on an unlicensed spare computer in a failover environment for up to a total of ten separate days in any given calendar year. Any use beyond the right granted in the previous sentence must be licensed separately and the same license metric must be used when licensing the program(s)."
which seems to open a bit of wiggle room depending on how the term "run" is defined. This is probably also an area where your Oracle sales rep may have a little more flexibility to negotiate.
Justin
Distributed Database Consulting, Inc.
http://www.ddbcinc.com/askDDBC -
Hi All Gurus,
I'm having a problem configuring Oracle Data Guard Broker 10g, as follows:
I installed the Oracle 10g client on a different machine from the primary and standby.
The TNSNAMES.ora entries are as follows:
PRIMARY =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 172.29.3.135)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = Primary)
    )
  )
PRIMARY_DGMGRL =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 172.29.3.135)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = Primary)
    )
  )
STANDBY =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 172.29.3.137)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = Standby)
    )
  )
STANDBY_DGMGRL =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 172.29.3.137)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = STANDBY)
    )
  )
The tnsping utility is working well. Now I'm trying to enable Fast-Start Failover using DGMGRL on a third machine. The configuration of the primary is as follows:
DGMGRL> SHOW DATABASE VERBOSE 'PRIMARY';
Database
Name: PRIMARY
Role: PRIMARY
Enabled: YES
Intended State: ONLINE
Instance(s):
primary
Properties:
InitialConnectIdentifier = 'PRIMARY'
LogXptMode = 'SYNC'
Standby Database:
DGMGRL> SHOW DATABASE VERBOSE 'STANDBY';
Database
Name: STANDBY
Role: PHYSICAL STANDBY
Enabled: YES
Intended State: ONLINE
Instance(s):
standby
Properties:
InitialConnectIdentifier = 'STANDBY'
LogXptMode = 'ARCH'
My primary database is running in Maximum Performance mode, and I also tried to change the standby LogXptMode from ARCH to SYNC, but it gives me the following error:
DGMGRL> EDIT DATABASE 'STANDBY' SET PROPERTY LOGXPTMODE='SYNC';
Error: ORA-16789: missing standby redo logs
Failed.
Please tell me how I can change LogXptMode from ARCH to SYNC. Please help me. Thanks
Regards,
Imran
Thanks for the reply, man. I have completed all of the steps for the observer (Broker) on the third machine. Thanks a lot.
These were the steps after creating the standby redo logs:
DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MAXAVAILABILITY;
Operation requires shutdown of instance "primary" on database "PRIMARY"
Shutting down instance "primary"...
Database closed.
Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "primary" on database "PRIMARY"
Starting instance "primary"...
ORACLE instance started.
Database mounted.
DGMGRL> EDIT DATABASE 'STANDBY' SET PROPERTY LOGXPTMODE='SYNC';
Property "logxptmode" updated
SQL> alter system set undo_retention=3600 scope=both;
System altered.
SQL> show parameter undo
NAME TYPE VALUE
undo_management string AUTO
undo_retention integer 3600
undo_tablespace string UNDOTBS1
After completing the required steps, I enabled Fast-Start Failover:
DGMGRL> ENABLE FAST_START FAILOVER;
Enabled.
But when I run the following command:
DGMGRL> START OBSERVER;
Observer started
I disconnected the session and logged in again; it then showed me the following configuration:
DGMGRL> SHOW CONFIGURATION VERBOSE
Configuration
Name: DGCONFIG1
Enabled: YES
Protection Mode: MaxAvailability
Fast-Start Failover: ENABLED
Databases:
PRIMARY - Primary database
STANDBY - Physical standby database
- Fast-Start Failover target
Fast-Start Failover
Threshold: 30 seconds
Observer: Node5
Current status for "DGCONFIG1":
SUCCESS
Kindly tell me whether it is working or I have to do more. Thanks
Regards,
Imran -
Database issues after starting data guard
Hi. We run OEM12c in Linux. All is working well and we monitor several targets and DBs.
In our QA DB server (Sun Solaris 11 running Oracle 10g) we started to test a Data Guard configuration for several instances running there. Our standby server is called STDBY. All works fine, but now we have a problem managing some aspects of Data Guard with OEM12c.
The first problem is that the instances appear as SID_IPaddress of the server, as opposed to DB_UNIQUE_NAME.DB_DOMAIN. Curiously enough, the standby copy appears correctly (SID_sbyp.db_domain)... This is very puzzling...
The second problem: when we try to access the Data Guard performance pane, it fails (on both the primary and secondary DBs), showing:
Data Guard Internal Error : See the OMS log for details.
No clue where to look for this problem.
All other functions (TOP, performance home, etc.) look fine.
Hi,
Regarding "Data Guard Internal Error : See the OMS log for details"
Follow the below steps
On the Data Guard page, run the 'Verify Configuration' option twice. The first execution will show output like:
Initializing
Connected to instance test.oracle.com:mydb
Starting alert log monitor...
Updating Data Guard link on database homepage...
WARNING: Broker name (mytest) and target name (mydb) do not match.
WARNING: The broker name will be renamed to match the target name.
Skipping verification of fast-start failover static services check.
Data Protection Settings:
Protection mode : Maximum Performance
Redo Transport Mode settings:
pnjpcep1: ASYNC
cnjpcep1: ASYNC
Checking standby redo log files.....not checked due to broker name mismatch. Run verify again.
Checking Data Guard status
mydb : Normal
my11g : Normal
The second execution does not show this warning any more, i.e. it got fixed during the first execution. Now it's possible to access the Data Guard performance page without errors and you can see the statistics.
Ref
Cloud Control: "Data Guard Internal Error" raised on Data Guard Performance Page (Doc ID 1484028.1)
Regards,
Rahul -
Oracle 9i Backup/Restore with Data Guard Issues
Greetings,
I am currently using Oracle 9i Data Guard to have a Primary/Standby model.
The current implementation compares the backup piece ID from the backed-up image (taken from the primary) with the backup piece ID on the standby DB. If the numbers are not equal, the restore will fail.
1- Is this the best implementation for restoring on a standby?
2- Would it be better to permit a gap range between the backup piece IDs of the two databases? If so, what would be an acceptable range?
3- If the gap range discussed in #2 is allowed, when the standby is started as primary and the primary is set to be standby, would Data Guard automatically try to retrieve all the missing archive/redo log files from the primary DB?
Thanks
Hi,
I am not sure I am following you here. Are you trying to create a physical standby, by using a backup taken from the primary?
If this is what you're trying to do, you can take any hot backup from the primary and use it to sync the standby with your primary. The older the backup you're using to create the standby, the more archive logs you'll need to get synchronized.
As long as you have all the archive logs needed in the archive_log_dest, data guard will automatically retrieve all the missing logs, and apply them on the standby. Again, only if they are available on the primary.
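The automatic gap fetch described above is driven by the FAL settings on the standby; a quick sanity check (a sketch, nothing environment-specific assumed):

```sql
-- On the standby: where gap resolution fetches missing logs from.
SHOW PARAMETER fal_server
SHOW PARAMETER fal_client

-- Any gap the standby has currently detected:
SELECT thread#, low_sequence#, high_sequence# FROM v$archive_gap;
```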
Not sure If this is what you were asking...
Idan. -
Data Guard adding new data files to a tablespace.
In the past, if you were manually updating an Oracle physical standby database, there were issues with adding a data file to a tablespace. It was suggested that the data file be created small and the small physical file copied to the standby database. Once the small data file was in place, it would be resized on the primary database, and replication would then change the size on the standby.
My question is: does Data Guard take care of this automatically for a physical standby? I can't find any specific reference on how it handles a new datafile.
Never mind, I found the answer.
STANDBY_FILE_MANAGEMENT=auto
Set on the standby database, this will create the datafiles automatically. -
Problem with logminer in Data Guard configuration
Hi all,
I am experiencing a strange problem with log apply on the logical standby database side of a Data Guard configuration.
I've set up the configuration step by step as it is described in documentation (Oracle Data Guard Concepts and Administration, chapter 4).
Everything went fine until I issued
ALTER DATABASE START LOGICAL STANDBY APPLY;
I saw that log applying process was started by checking the output of
SELECT NAME, VALUE FROM V$LOGSTDBY_STATS WHERE NAME = 'coordinator state';
and
SELECT TYPE, HIGH_SCN, STATUS FROM V$LOGSTDBY;
but in a few minutes it stopped, and querying DBA_LOGSTDBY_EVENTS I saw the following records:
ORA-16111: log mining and apply setting up
ORA-01332: internal Logminer Dictionary error
Alert log says the following:
LOGSTDBY event: ORA-01332: internal Logminer Dictionary error
Wed Jan 21 16:57:57 2004
Errors in file /opt/oracle/admin/whouse/bdump/whouse_lsp0_5817.trc:
ORA-01332: internal Logminer Dictionary error
Here is the end of the whouse_lsp0_5817.trc
error 1332 detected in background process
OPIRIP: Uncaught error 447. Error stack:
ORA-00447: fatal error in background process
ORA-01332: internal Logminer Dictionary error
But the most useful info I found in one more trace file (whouse_p001_5821.trc):
krvxmrs: Leaving by exception: 604
ORA-00604: error occurred at recursive SQL level 1
ORA-01031: insufficient privileges
ORA-06512: at "SYS.LOGMNR_KRVRDREPDICT3", line 68
ORA-06512: at line 1
Seems that somewhere the correct privileges were not granted, or something like that. By the way, I was doing all the operations under the SYS account (as SYSDBA).
Could somebody give me a clue where my mistake could be, or what was done the wrong way?
Thank you in advance.
Which is your SSIS version?
Please Mark This As Answer if it solved your issue
Please Vote This As Helpful if it helps to solve your issue
Visakh
My MSDN Page
My Personal Blog
My Facebook Page -
Data not getting populated in Payslip in ESS Portal
Hi All
I am trying to display the payslip in the Portal. I have done all the necessary configuration in Benefits and Payments -> Salary Statement -> HRFOR/EDTIN features.
The correct payslip form is visible, but data is not getting populated in the payslip.
I have tested the payslip in the PC00_M40_CEDT transaction with the variant I set up for the HRFOR/EDTIN features, and the payslip data is displayed correctly.
I have checked the PZ11_PDF transaction, but I get a message saying it cannot be accessed through Easy Access.
Can anyone please let me know what might be the reason for data not getting populated in the payslip in the Portal?
What is the role of the PZ11_PDF transaction in payslip display in the Portal?
Regards
Asha
Hello,
To execute the PZ11_PDF transaction, please follow these steps:
1. Log in to the SAP system with the same user ID you are using on the Portal. Once logged in, put "/N" in the command box, then enter the transaction "PZ11_PDF" and execute it; it will call the salary statement.
Or:
Once you log in to the SAP system, enter the transaction "/nsbwp", then the transaction "PZ11_PDF"; it will call the salary statement.
Give your inputs once you are done.
... If the issue is with authorizations, please check them:
Add this object to the ESS role: "S_SERVICE"
and this object to the ESS role: "P_PERNR" (infotype 0008)
Edited by: Vivek D Jadhav on Jun 15, 2009 11:49 AM -
Data not getting populated in ESS Payslip in portal
Hi All
I am trying to display the payslip in the Portal. I have done all the necessary configuration in Benefits and Payments -> Salary Statement -> HRFOR/EDTIN features.
The correct payslip form is visible, but data is not getting populated in the payslip.
I have tested the payslip in the PC00_M40_CEDT transaction with the variant I set up for the HRFOR/EDTIN features, and the payslip data is displayed correctly.
I have checked the PZ11_PDF transaction, but I get a message saying it cannot be accessed through Easy Access.
Can anyone please let me know what might be the reason for data not getting populated in the payslip in the Portal?
What is the role of the PZ11_PDF transaction in payslip display in the Portal?
Regards
Asha
Asha,
Maintain feature EDPDF, which determines the Smart Form used to make the payslip available to employees. This is more of an HR-related issue, and I believe if you post it in the ESS or HR forum you will be able to resolve it.
Good Luck!
Sandeep Tudumu -
Data Guard Failover after primary site network failure or disconnect.
Hello Experts:
I'll try to be clear and specific with my issue:
Environment:
Two nodes with NO shared storage (I don't have an Observer running).
Veritas Cluster Server (VCS) with the Data Guard agent. (I don't use the Broker; the Data Guard agent "takes care" of switchover and failover.)
Two single instance databases, one per node. NO RAC.
What I'm being able to perform with no issues:
Manual switch(over) of the primary database by running VCS command "hagrp -switch oraDG_group -to standby_node"
Automatic fail(over) when primary node is rebooted with "reboot" or "init"
Automatic fail(over) when primary node is shut down with "shutdown".
What I'm NOT being able to perform:
If I manually unplug the network cables from the primary site (the entire network, not only the link between the primary and standby nodes, so it's like unplugging the server from its power source), the failover does not occur.
The same thing happens if I manually disconnect the server from the power.
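For context, the failover the VCS agent is expected to drive corresponds roughly to this manual sequence on the standby (a sketch of the documented physical standby failover; which branch applies depends on whether all redo could be recovered):

```sql
-- On the standby, after the primary is lost:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH FORCE;

-- If FINISH succeeds:
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

-- If FINISH fails (accepting possible data loss):
-- ALTER DATABASE ACTIVATE PHYSICAL STANDBY DATABASE;

ALTER DATABASE OPEN;
```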
These are the alert logs I have:
This is the portion of the alert log at Standby site when Real Time Replication is working fine:
Recovery of Online Redo Log: Thread 1 Group 4 Seq 7 Reading mem 0
Mem# 0: /u02/oracle/fast_recovery_area/standby_db/onlinelog/o1_mf_4_9c3tk3dy_.log
At this moment, node1 (the primary) is completely disconnected from the network. See at the end how the database (the standby, which should be converted to primary) does not get all the archived logs from the primary due to the abnormal network disconnect:
Identified End-Of-Redo (failover) for thread 1 sequence 7 at SCN 0xffff.ffffffff
Incomplete Recovery applied until change 15922544 time 12/23/2013 17:12:48
Media Recovery Complete (primary_db)
Terminal Recovery: successful completion
Forcing ARSCN to IRSCN for TR 0:15922544
Mon Dec 23 17:13:22 2013
ARCH: Archival stopped, error occurred. Will continue retrying
ORACLE Instance primary_db - Archival Error
Attempt to set limbo arscn 0:15922544 irscn 0:15922544
ORA-16014: log 4 sequence# 7 not archived, no available destinations
ORA-00312: online log 4 thread 1: '/u02/oracle/fast_recovery_area/standby_db/onlinelog/o1_mf_4_9c3tk3dy_.log'
Resetting standby activation ID 2071848820 (0x7b7de774)
Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH
Mon Dec 23 17:13:33 2013
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH
Terminal Recovery: applying standby redo logs.
Terminal Recovery: thread 1 seq# 7 redo required
Terminal Recovery:
Recovery of Online Redo Log: Thread 1 Group 4 Seq 7 Reading mem 0
Mem# 0: /u02/oracle/fast_recovery_area/standby_db/onlinelog/o1_mf_4_9c3tk3dy_.log
Attempt to do a Terminal Recovery (primary_db)
Media Recovery Start: Managed Standby Recovery (primary_db)
started logmerger process
Mon Dec 23 17:13:33 2013
Managed Standby Recovery not using Real Time Apply
Media Recovery failed with error 16157
Recovery Slave PR00 previously exited with exception 283
ORA-283 signalled during: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH...
Mon Dec 23 17:13:34 2013
Shutting down instance (immediate)
Shutting down instance: further logons disabled
Stopping background process MMNL
Stopping background process MMON
License high water mark = 38
All dispatchers and shared servers shutdown
ALTER DATABASE CLOSE NORMAL
ORA-1109 signalled during: ALTER DATABASE CLOSE NORMAL...
ALTER DATABASE DISMOUNT
Shutting down archive processes
Archiving is disabled
Mon Dec 23 17:13:38 2013
Mon Dec 23 17:13:38 2013
Mon Dec 23 17:13:38 2013
ARCH shutting down
ARCH shutting down
ARCH shutting down
ARC0: Relinquishing active heartbeat ARCH role
ARC2: Archival stopped
ARC0: Archival stopped
ARC1: Archival stopped
Completed: ALTER DATABASE DISMOUNT
ARCH: Archival disabled due to shutdown: 1089
Shutting down archive processes
Archiving is disabled
Mon Dec 23 17:13:40 2013
Stopping background process VKTM
ARCH: Archival disabled due to shutdown: 1089
Shutting down archive processes
Archiving is disabled
Mon Dec 23 17:13:43 2013
Instance shutdown complete
Mon Dec 23 17:13:44 2013
Adjusting the default value of parameter parallel_max_servers
from 1280 to 470 due to the value of parameter processes (500)
Starting ORACLE instance (normal)
************************ Large Pages Information *******************
Per process system memlock (soft) limit = 64 KB
Total Shared Global Region in Large Pages = 0 KB (0%)
Large Pages used by this instance: 0 (0 KB)
Large Pages unused system wide = 0 (0 KB)
Large Pages configured system wide = 0 (0 KB)
Large Page size = 2048 KB
RECOMMENDATION:
Total System Global Area size is 3762 MB. For optimal performance,
prior to the next instance restart:
1. Increase the number of unused large pages by
at least 1881 (page size 2048 KB, total size 3762 MB) system wide to
get 100% of the System Global Area allocated with large pages
2. Large pages are automatically locked into physical memory.
Increase the per process memlock (soft) limit to at least 3770 MB to lock
100% System Global Area's large pages into physical memory
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Initial number of CPU is 32
Number of processor cores in the system is 16
Number of processor sockets in the system is 2
CELL communication is configured to use 0 interface(s):
CELL IP affinity details:
NUMA status: NUMA system w/ 2 process groups
cellaffinity.ora status: cannot find affinity map at '/etc/oracle/cell/network-config/cellaffinity.ora' (see trace file for details)
CELL communication will use 1 IP group(s):
Grp 0:
Picked latch-free SCN scheme 3
Autotune of undo retention is turned on.
IMODE=BR
ILAT =88
LICENSE_MAX_USERS = 0
SYS auditing is disabled
NUMA system with 2 nodes detected
Starting up:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options.
ORACLE_HOME = /u01/oracle/product/11.2.0.4
System name: Linux
Node name: node2.localdomain
Release: 2.6.32-131.0.15.el6.x86_64
Version: #1 SMP Tue May 10 15:42:40 EDT 2011
Machine: x86_64
Using parameter settings in server-side spfile /u01/oracle/product/11.2.0.4/dbs/spfileprimary_db.ora
System parameters with non-default values:
processes = 500
sga_target = 3760M
control_files = "/u02/oracle/orafiles/primary_db/control01.ctl"
control_files = "/u01/oracle/fast_recovery_area/primary_db/control02.ctl"
db_file_name_convert = "standby_db"
db_file_name_convert = "primary_db"
log_file_name_convert = "standby_db"
log_file_name_convert = "primary_db"
control_file_record_keep_time= 40
db_block_size = 8192
compatible = "11.2.0.4.0"
log_archive_dest_1 = "location=/u02/oracle/archivelogs/primary_db"
log_archive_dest_2 = "SERVICE=primary_db ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=primary_db"
log_archive_dest_state_2 = "ENABLE"
log_archive_min_succeed_dest= 1
fal_server = "primary_db"
log_archive_trace = 0
log_archive_config = "DG_CONFIG=(primary_db,standby_db)"
log_archive_format = "%t_%s_%r.dbf"
log_archive_max_processes= 3
db_recovery_file_dest = "/u02/oracle/fast_recovery_area"
db_recovery_file_dest_size= 30G
standby_file_management = "AUTO"
db_flashback_retention_target= 1440
undo_tablespace = "UNDOTBS1"
remote_login_passwordfile= "EXCLUSIVE"
db_domain = ""
dispatchers = "(PROTOCOL=TCP) (SERVICE=primary_dbXDB)"
job_queue_processes = 0
audit_file_dest = "/u01/oracle/admin/primary_db/adump"
audit_trail = "DB"
db_name = "primary_db"
db_unique_name = "standby_db"
open_cursors = 300
pga_aggregate_target = 1250M
dg_broker_start = FALSE
diagnostic_dest = "/u01/oracle"
Mon Dec 23 17:13:45 2013
PMON started with pid=2, OS id=29108
Mon Dec 23 17:13:45 2013
PSP0 started with pid=3, OS id=29110
Mon Dec 23 17:13:46 2013
VKTM started with pid=4, OS id=29125 at elevated priority
VKTM running at (1)millisec precision with DBRM quantum (100)ms
Mon Dec 23 17:13:46 2013
GEN0 started with pid=5, OS id=29129
Mon Dec 23 17:13:46 2013
DIAG started with pid=6, OS id=29131
Mon Dec 23 17:13:46 2013
DBRM started with pid=7, OS id=29133
Mon Dec 23 17:13:46 2013
DIA0 started with pid=8, OS id=29135
Mon Dec 23 17:13:46 2013
MMAN started with pid=9, OS id=29137
Mon Dec 23 17:13:46 2013
DBW0 started with pid=10, OS id=29139
Mon Dec 23 17:13:46 2013
DBW1 started with pid=11, OS id=29141
Mon Dec 23 17:13:46 2013
DBW2 started with pid=12, OS id=29143
Mon Dec 23 17:13:46 2013
DBW3 started with pid=13, OS id=29145
Mon Dec 23 17:13:46 2013
LGWR started with pid=14, OS id=29147
Mon Dec 23 17:13:46 2013
CKPT started with pid=15, OS id=29149
Mon Dec 23 17:13:46 2013
SMON started with pid=16, OS id=29151
Mon Dec 23 17:13:46 2013
RECO started with pid=17, OS id=29153
Mon Dec 23 17:13:46 2013
MMON started with pid=18, OS id=29155
Mon Dec 23 17:13:46 2013
MMNL started with pid=19, OS id=29157
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
starting up 1 shared server(s) ...
ORACLE_BASE from environment = /u01/oracle
Mon Dec 23 17:13:46 2013
ALTER DATABASE MOUNT
ARCH: STARTING ARCH PROCESSES
Mon Dec 23 17:13:50 2013
ARC0 started with pid=23, OS id=29210
ARC0: Archival started
ARCH: STARTING ARCH PROCESSES COMPLETE
ARC0: STARTING ARCH PROCESSES
Successful mount of redo thread 1, with mount id 2071851082
Mon Dec 23 17:13:51 2013
ARC1 started with pid=24, OS id=29212
Allocated 15937344 bytes in shared pool for flashback generation buffer
Mon Dec 23 17:13:51 2013
ARC2 started with pid=25, OS id=29214
Starting background process RVWR
ARC1: Archival started
ARC1: Becoming the 'no FAL' ARCH
ARC1: Becoming the 'no SRL' ARCH
Mon Dec 23 17:13:51 2013
RVWR started with pid=26, OS id=29216
Physical Standby Database mounted.
Lost write protection disabled
Completed: ALTER DATABASE MOUNT
Mon Dec 23 17:13:51 2013
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
USING CURRENT LOGFILE DISCONNECT FROM SESSION
Attempt to start background Managed Standby Recovery process (primary_db)
Mon Dec 23 17:13:51 2013
MRP0 started with pid=27, OS id=29219
MRP0: Background Managed Standby Recovery process started (primary_db)
ARC2: Archival started
ARC0: STARTING ARCH PROCESSES COMPLETE
ARC2: Becoming the heartbeat ARCH
ARC2: Becoming the active heartbeat ARCH
ARCH: Archival stopped, error occurred. Will continue retrying
ORACLE Instance primary_db - Archival Error
ORA-16014: log 4 sequence# 7 not archived, no available destinations
ORA-00312: online log 4 thread 1: '/u02/oracle/fast_recovery_area/standby_db/onlinelog/o1_mf_4_9c3tk3dy_.log'
At this moment, I've lost service and I have to wait until the primary server comes back up to receive the missing log.
This is the rest of the log:
Fatal NI connect error 12543, connecting to:
(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=node1)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=primary_db)(CID=(PROGRAM=oracle)(HOST=node2.localdomain)(USER=oracle))))
VERSION INFORMATION:
TNS for Linux: Version 11.2.0.4.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
Time: 23-DEC-2013 17:13:52
Tracing not turned on.
Tns error struct:
ns main err code: 12543
TNS-12543: TNS:destination host unreachable
ns secondary err code: 12560
nt main err code: 513
TNS-00513: Destination host unreachable
nt secondary err code: 113
nt OS err code: 0
Fatal NI connect error 12543, connecting to:
(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=node1)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=primary_db)(CID=(PROGRAM=oracle)(HOST=node2.localdomain)(USER=oracle))))
VERSION INFORMATION:
TNS for Linux: Version 11.2.0.4.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
Time: 23-DEC-2013 17:13:55
Tracing not turned on.
Tns error struct:
ns main err code: 12543
TNS-12543: TNS:destination host unreachable
ns secondary err code: 12560
nt main err code: 513
TNS-00513: Destination host unreachable
nt secondary err code: 113
nt OS err code: 0
started logmerger process
Mon Dec 23 17:13:56 2013
Managed Standby Recovery starting Real Time Apply
MRP0: Background Media Recovery terminated with error 16157
Errors in file /u01/oracle/diag/rdbms/standby_db/primary_db/trace/primary_db_pr00_29230.trc:
ORA-16157: media recovery not allowed following successful FINISH recovery
Managed Standby Recovery not using Real Time Apply
Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
USING CURRENT LOGFILE DISCONNECT FROM SESSION
Recovery Slave PR00 previously exited with exception 16157
MRP0: Background Media Recovery process shutdown (primary_db)
Fatal NI connect error 12543, connecting to:
(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=node1)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=primary_db)(CID=(PROGRAM=oracle)(HOST=node2.localdomain)(USER=oracle))))
VERSION INFORMATION:
TNS for Linux: Version 11.2.0.4.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
Time: 23-DEC-2013 17:13:58
Tracing not turned on.
Tns error struct:
ns main err code: 12543
TNS-12543: TNS:destination host unreachable
ns secondary err code: 12560
nt main err code: 513
TNS-00513: Destination host unreachable
nt secondary err code: 113
nt OS err code: 0
Mon Dec 23 17:14:01 2013
Fatal NI connect error 12543, connecting to:
(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=node1)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=primary_db)(CID=(PROGRAM=oracle)(HOST=node2.localdomain)(USER=oracle))))
VERSION INFORMATION:
TNS for Linux: Version 11.2.0.4.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
Time: 23-DEC-2013 17:14:01
Tracing not turned on.
Tns error struct:
ns main err code: 12543
TNS-12543: TNS:destination host unreachable
ns secondary err code: 12560
nt main err code: 513
TNS-00513: Destination host unreachable
nt secondary err code: 113
nt OS err code: 0
Error 12543 received logging on to the standby
FAL[client, ARC0]: Error 12543 connecting to primary_db for fetching gap sequence
Archiver process freed from errors. No longer stopped
Mon Dec 23 17:15:07 2013
Using STANDBY_ARCHIVE_DEST parameter default value as /u02/oracle/archivelogs/primary_db
Mon Dec 23 17:19:51 2013
ARCH: Archival stopped, error occurred. Will continue retrying
ORACLE Instance primary_db - Archival Error
ORA-16014: log 4 sequence# 7 not archived, no available destinations
ORA-00312: online log 4 thread 1: '/u02/oracle/fast_recovery_area/standby_db/onlinelog/o1_mf_4_9c3tk3dy_.log'
Mon Dec 23 17:26:18 2013
RFS[1]: Assigned to RFS process 31456
RFS[1]: No connections allowed during/after terminal recovery.
Mon Dec 23 17:26:47 2013
flashback database to scn 15921680
ORA-16157 signalled during: flashback database to scn 15921680...
Mon Dec 23 17:27:05 2013
alter database recover managed standby database using current logfile disconnect
Attempt to start background Managed Standby Recovery process (primary_db)
Mon Dec 23 17:27:05 2013
MRP0 started with pid=28, OS id=31481
MRP0: Background Managed Standby Recovery process started (primary_db)
started logmerger process
Mon Dec 23 17:27:10 2013
Managed Standby Recovery starting Real Time Apply
MRP0: Background Media Recovery terminated with error 16157
Errors in file /u01/oracle/diag/rdbms/standby_db/primary_db/trace/primary_db_pr00_31486.trc:
ORA-16157: media recovery not allowed following successful FINISH recovery
Managed Standby Recovery not using Real Time Apply
Completed: alter database recover managed standby database using current logfile disconnect
Recovery Slave PR00 previously exited with exception 16157
MRP0: Background Media Recovery process shutdown (primary_db)
Mon Dec 23 17:27:18 2013
RFS[2]: Assigned to RFS process 31492
RFS[2]: No connections allowed during/after terminal recovery.
Mon Dec 23 17:28:18 2013
RFS[3]: Assigned to RFS process 31614
RFS[3]: No connections allowed during/after terminal recovery.
Do you have any advice?
Thanks!
Alex.
Hello;
What's not clear to me in your question at this point:
"What I'm NOT able to perform:
If I manually unplug the network cables from the primary site (the whole network, not just the link between the primary and standby nodes -- so it's like unplugging the server from its power source).
The same thing happens if I physically disconnect the server from the power.
These are the alert logs I have:"
Are you trying a failover to the Standby?
Please advise.
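One way to tell whether a terminal (FINISH) recovery has already run on that standby -- which is what the ORA-16157 in your log suggests -- is to query V$DATABASE on the mounted standby. This is just a diagnostic sketch; interpret the result against your own configuration:

```sql
-- Run on the standby instance while it is mounted.
-- If STANDBY_BECAME_PRIMARY_SCN is non-zero, a FINISH/terminal recovery has
-- completed on this copy, and ordinary managed recovery can no longer be
-- restarted on it (which is exactly what ORA-16157 is reporting).
SELECT database_role,
       open_mode,
       standby_became_primary_scn
FROM   v$database;
```

If a FINISH recovery did complete, the usual options are to activate the standby as the new primary, or to rebuild/flash it back and reinstate it once the old primary is reachable again.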
Is it possible your "valid_for clause" is set incorrectly?
Would also review this:
ORA-16014 and ORA-00312 Messages in Alert.log of Physical Standby
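For the ORA-16014 itself ("no available destinations"), it is worth checking whether the local destination or the fast recovery area is full or deferred before anything else. A rough diagnostic sketch (the spill-over path in the comment is purely hypothetical -- substitute your own location):

```sql
-- ORA-16014 means no destination could accept log 4 sequence# 7.
-- First check fast recovery area quota vs. space used:
SELECT * FROM v$recovery_file_dest;

-- Then check per-destination status and errors:
SELECT dest_id, status, error
FROM   v$archive_dest_status
WHERE  status <> 'INACTIVE';

-- If a local destination is simply full, either clear space, re-enable a
-- deferred destination, or temporarily re-point it, e.g. (hypothetical path):
-- ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=/u03/arch_spill' SCOPE=MEMORY;
```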
Best Regards
mseberg