Clarification on Data Guard (Physical Standby DB)

Hi guys,
I have been trying to set up Data Guard with a physical standby database for the past few weeks and I think I have managed to set it up and also perform a switchover. I have been reading a lot of websites and even the Oracle docs for this.
However I need clarification on the setup and whether or not it is working as expected.
My environment is Windows 32bit (Windows 2003)
Oracle 10.2.0.2 (Client/Server)
2 Physical machines
Here is what I have done.
Machine 1
1. Create a primary database using standard DBCA; hence the Oracle service (oradgp) and password file are also created, along with the listener service.
2. Modify the pfile to include the following:-
oradgp.__db_cache_size=436207616
oradgp.__java_pool_size=4194304
oradgp.__large_pool_size=4194304
oradgp.__shared_pool_size=159383552
oradgp.__streams_pool_size=0
*.audit_file_dest='M:\oracle\product\10.2.0\admin\oradgp\adump'
*.background_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\bdump'
*.compatible='10.2.0.3.0'
*.control_files='M:\oracle\product\10.2.0\oradata\oradgp\control01.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control02.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control03.ctl'
*.core_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='oradgp'
*.db_recovery_file_dest='M:\oracle\product\10.2.0\flash_recovery_area'
*.db_recovery_file_dest_size=21474836480
*.fal_client='oradgp'
*.fal_server='oradgs'
*.job_queue_processes=10
*.log_archive_dest_1='LOCATION=E:\ArchLogs VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=oradgp'
*.log_archive_dest_2='SERVICE=oradgs LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=oradgs'
*.log_archive_format='ARC%S_%R.%T'
*.log_archive_max_processes=30
*.nls_territory='IRELAND'
*.open_cursors=300
*.pga_aggregate_target=203423744
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=612368384
*.standby_file_management='auto'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\udump'
*.service_names=oradgp
The locations on the hard disk are all available and archived redo logs are being created in E:\ArchLogs.
3. I then add the necessary (4) standby logs on primary.
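For anyone following along, a standby redo log group is typically added with something along these lines (the group number, file path and 50M size are just examples; the size should match the online redo logs):
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4
  ('M:\oracle\product\10.2.0\oradata\oradgp\SRL01.LOG') SIZE 50M;
-- repeated for groups 5, 6 and 7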
4. To replicate the db on machine 2 (standby db), I did an RMAN backup as:-
RMAN> run
{allocate channel d1 type disk format='M:\DGBackup\stby_%U.bak';
backup database plus archivelog delete input;}
5. I then copied over the stby_*.bak files created on machine1 to machine2, into the same directory (M:\DGBackup), since I maintained the directory structure exactly the same between the 2 machines.
6. Then created a standby controlfile. (At this time the db was in open/write mode).
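(For anyone following along, a standby controlfile is typically created with something like the following; the target path/filename here is just an example:)
SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS 'M:\DGBackup\stby_control.ctl';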
7. I then copied this standby ctl file to machine2 under the same directory structure (M:\oracle\product\10.2.0\oradata\oradgp) and replicated the same ctl file into 3 different files such as: CONTROL01.CTL, CONTROL02.CTL & CONTROL03.CTL
Machine2
8. I created an Oracle service with the same name as the primary (oradgp).
9. Created a listener also.
10. Set the Oracle Home & SID to the same name as the primary (oradgp) <<<-- I am not sure about the SID one.
11. I then copied over the pfile from the primary to the standby and created an spfile from it.
It looks like this:-
oradgp.__db_cache_size=436207616
oradgp.__java_pool_size=4194304
oradgp.__large_pool_size=4194304
oradgp.__shared_pool_size=159383552
oradgp.__streams_pool_size=0
*.audit_file_dest='M:\oracle\product\10.2.0\admin\oradgp\adump'
*.background_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\bdump'
*.compatible='10.2.0.3.0'
*.control_files='M:\oracle\product\10.2.0\oradata\oradgp\control01.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control02.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control03.ctl'
*.core_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='oradgp'
*.db_recovery_file_dest='M:\oracle\product\10.2.0\flash_recovery_area'
*.db_recovery_file_dest_size=21474836480
*.fal_client='oradgs'
*.fal_server='oradgp'
*.job_queue_processes=10
*.log_archive_dest_1='LOCATION=E:\ArchLogs VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=oradgs'
*.log_archive_dest_2='SERVICE=oradgp LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=oradgp'
*.log_archive_format='ARC%S_%R.%T'
*.log_archive_max_processes=30
*.nls_territory='IRELAND'
*.open_cursors=300
*.pga_aggregate_target=203423744
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=612368384
*.standby_file_management='auto'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\udump'
*.service_names=oradgs
log_file_name_convert='junk','junk'
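(The spfile can be created from that pfile with something like the command below; the pfile path is just an example and depends on where you copied it:)
SQL> CREATE SPFILE FROM PFILE='M:\oracle\product\10.2.0\db_1\database\initoradgp.ora';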
12. Used RMAN to restore the db as:-
RMAN> startup mount;
RMAN> restore database;
Then RMAN created the datafiles.
13. I then added the same number (4) of standby redo logs to machine2.
14. Also added a tempfile. Although the temp tablespace was created as part of the RMAN restore, the actual file (temp01.dbf) didn't seem to get created, so I manually created the tempfile.
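(For the record, once the standby can be opened, e.g. read-only or after a role change, a tempfile can be added with something like this; path and size are just examples:)
SQL> ALTER TABLESPACE TEMP ADD TEMPFILE 'M:\oracle\product\10.2.0\oradata\oradgp\TEMP01.DBF' SIZE 200M AUTOEXTEND ON;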
15. Ensuring the listener and Oracle service were running and that the database on machine2 was in MOUNT mode, I then started the redo apply using:-
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
It seems to have started the redo apply, as I checked the alert log and noticed that the sequence#s were all "YES" for applied.
****However I noticed that in the alert log the standby was complaining about the online REDO log not being present****
So I copied over the REDO logs from the primary machine and placed them in the same directory structure on the standby.
########Q1. I understand that the standby database does not need online REDO Logs but why is it reporting in the alert log then??########
I wanted to enable real-time apply, so I cancelled the recovery with:-
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
and issued:-
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
This too was successful and I noticed that the recovery mode is set to MANAGED REAL TIME APPLY.
Checked this via the primary database also and it too reported that the DEST_2 is in MANAGED REAL TIME APPLY.
Also performed a log switch on the primary and it got transported to the standby and was applied (YES).
Also ensured that there are no gaps via some queries where no rows were returned.
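(For example, a typical gap check on the standby looks like this:)
Stdby_SQL> SELECT * FROM V$ARCHIVE_GAP;
-- no rows selected means there is no gap
Stdby_SQL> SELECT MAX(SEQUENCE#) FROM V$ARCHIVED_LOG WHERE APPLIED='YES';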
16. I now wanted to perform a switchover, hence issued:-
Primary_SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
All the archivers stopped as expected.
17. Now on machine2:
Stdby_SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
18. On machine1:
Primary_Now_Standby_SQL>SHUTDOWN IMMEDIATE;
Primary_Now_Standby_SQL>STARTUP MOUNT;
Primary_Now_Standby_SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
19. On machine2:
Stdby_Now_Primary_SQL>ALTER DATABASE OPEN;
Checked by switching the logfile on the new primary and ensured that the standby received this logfile and was applied (YES).
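(For example, a minimal end-to-end check after the switchover:)
Stdby_Now_Primary_SQL> ALTER SYSTEM SWITCH LOGFILE;
Primary_Now_Standby_SQL> SELECT SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;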
However, here are my questions for clarifications:-
Q1. See my question about the ONLINE REDO LOGS between the '#' characters above.
Q2. Do you see me doing anything wrong in regards to naming the directory structures? Should I have renamed the dbname directory in the Oracle Home to oradgs rather than oradgp?
Q3. When I enabled real time apply does that mean, that I am not in 'MANAGED' mode anymore? Is there an un-managed mode also?
Q4. After the switchover, I have noticed that the MRP0 process is "APPLYING LOG" status to a sequence# which is not even the latest sequence# as per v$archived_log. By this I mean:-
SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
MRP0 APPLYING_LOG 1 47 452 1024000
but :
SQL> select max(sequence#) from v$archived_log;
46
Why is that? Also I have noticed that one of the sequence#s is NOT applied but the later ones are:-
SQL> SELECT SEQUENCE#,APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
42 NO
43 YES
44 YES
45 YES
46 YES
What could be the possible reasons why sequence# 42 didn't get applied but the others did?
After reading several documents I am confused at this stage, because I have read that you can set up standby databases using 'standby' logs, but is there another method without using standby logs?
Q5. The log switch isn't happening automatically on the primary database, so I can't see the whole process happening on its own: generation of a new logfile, its transport to the standby, and then its application on the standby.
Could this be due to inactivity on the primary database, as I am not doing anything on it?
Sorry if I have missed out something guys but I tried to put in as much detail as I remember...
Thank you very much in advance.
Regards,
Bharath
Edited by: Bharath3 on Jan 22, 2010 2:13 AM

Parameters:
Missing on the Primary:
DB_UNIQUE_NAME=oradgp
LOG_ARCHIVE_CONFIG=DG_CONFIG=(oradgp, oradgs)
Missing on the Standby:
DB_UNIQUE_NAME=oradgs
LOG_ARCHIVE_CONFIG=DG_CONFIG=(oradgp, oradgs)
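As a sketch of how those could be set (LOG_ARCHIVE_CONFIG is dynamic; DB_UNIQUE_NAME is static, so it needs SCOPE=SPFILE and a restart, or just add both lines to the pfile; substitute oradgs on the standby):
SQL> ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(oradgp,oradgs)' SCOPE=BOTH;
SQL> ALTER SYSTEM SET DB_UNIQUE_NAME='oradgp' SCOPE=SPFILE;
-- restart the instance for DB_UNIQUE_NAME to take effect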
You said: Also added a tempfile. Although the temp tablespace was created as part of the RMAN restore, the actual file (temp01.dbf) didn't seem to get created, so I manually created the tempfile.
RMAN should have also added the temp file. Note that as of 11g RMAN duplicate for standby will also add the standby redo log files at the standby if they already existed on the Primary when you took the backup.
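(In 11g that is typically an active duplicate along these lines, run with RMAN connected to both the target and the auxiliary instance:)
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE DORECOVER NOFILENAMECHECK;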
You said: ****However I noticed that in the alert log the standby was complaining about the online REDO log not being present****
That is just the weird error that the RDBMS returns when the database tries to find the online redo log files. You see that at the start of the MRP because it tries to open them and if it gets the error it will manually create them based on their file definition in the controlfile combined with LOG_FILE_NAME_CONVERT if they are in a different place from the Primary.
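(If you would rather not copy the files over, a commonly used alternative is to let the standby recreate each group itself while managed recovery is cancelled; a rough sketch, group numbers depend on your setup:)
Stdby_SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
Stdby_SQL> ALTER DATABASE CLEAR LOGFILE GROUP 1;
-- repeat for each online log group, then restart managed recovery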
Your questions (Q1 answered above):
You said: Q2. Do you see me doing anything wrong in regards to naming the directory structures? Should I have renamed the dbname directory in the Oracle Home to oradgs rather than oradgp?
Up to you. Not a requirement.
You said: Q3. When I enabled real time apply does that mean, that I am not in 'MANAGED' mode anymore? Is there an un-managed mode also?
You are always in MANAGED mode when you use the RECOVER MANAGED STANDBY DATABASE command. If you use manual recovery "RECOVER STANDBY DATABASE" (NOT RECOMMENDED EVER ON A STANDBY DATABASE) then you are effectively in 'non-managed' mode although we do not call it that.
You said: Q4. After the switchover, I have noticed that the MRP0 process is "APPLYING LOG" status to a sequence# which is not even the latest sequence# as per v$archived_log. By this I mean:-
Log 46 (in your example) is the last FULL and ARCHIVED log hence that is the latest one to show up in V$ARCHIVED_LOG as that is a list of fully archived log files. Sequence 47 is the one that is current in the Primary online redo log and also current in the standby's standby redo log and as you are using real time apply that is the one it is applying.
You said: What could be the possible reasons why sequence# 42 didn't get applied but the others did?
42 was probably a gap. Select the FAL columns as well and it will probably say 'YES' for FAL. We do not update the Primary's controlfile every time we resolve a gap. Try the same query on the standby and you will see that 42 was indeed applied. Redo can never be applied out of order, so the max(sequence#) from v$archived_log where applied = 'YES' tells you that every sequence before that number has been applied.
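(On the standby, a query along these lines shows it; the FAL column flags logs that were fetched to resolve a gap:)
Stdby_SQL> SELECT SEQUENCE#, APPLIED, REGISTRAR, FAL FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;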
You said: After reading several documents I am confused at this stage, because I have read that you can set up standby databases using 'standby' logs, but is there another method without using standby logs?
Yes. If you do not have standby redo log files on the standby then we write directly to an archive log, which means potentially large data loss at failover and no real time apply. That was the old 9i ARCH method. Don't do that. Always have standby redo logs (SRLs).
You said: Q5. The log switch isn't happening automatically on the primary database, so I can't see the whole process happening on its own: generation of a new logfile, its transport to the standby, and then its application on the standby.
Could this be due to inactivity on the primary database, as I am not doing anything on it?
Log switches on the Primary happen when the current log gets full, when a log switch has not happened for the number of seconds you specified in the ARCHIVE_LAG_TARGET parameter, or when you say ALTER SYSTEM SWITCH LOGFILE (or use one of the other methods for switching log files). The heartbeat redo will eventually fill up an online log file, but it is about 13 bytes, so you do the math on how long that would take :^)
You are shipping redo with ASYNC, so we send the redo as it is committed; there is no wait for the log switch. And we are in real time apply, so there is no wait for the log switch to apply that redo. In theory you could create an online log file large enough to hold an entire day's worth of redo and never switch for the whole day, and the standby would still be caught up with the primary.
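(If you do want regular switches anyway, e.g. for testing, one option is ARCHIVE_LAG_TARGET on the primary, in seconds:)
SQL> ALTER SYSTEM SET ARCHIVE_LAG_TARGET=1800 SCOPE=BOTH;
-- forces a log switch at least every 30 minutes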

Similar Messages

  • Data Guard Physical Standby Failover

    I need clarification on Physical Standby failovers. Say I have a Primary db (A) and a physical standby (B); (A) fails and (B) is now the primary. Can (A) be set up to become a standby of (B)? I've read 2 different statements in Oracle 10g Data Guard Concepts and Admin:
    "During failovers involving a physical standby database: In all cases, after a failover, the original primary database can no longer participate in the Data Guard configuration."
    vice
    "After a fast-start failover occurs, the old primary database will automatically reconfigure itself as a new standby database upon reconnection to the configuration."
    Much thanks.

    In all cases, after a failover, the original primary database can no longer participate in the Data Guard configuration.
    TRUE! But this doesn't mean that you cannot make your failed primary a new standby database!
    You can achieve this using FLASHBACK DATABASE option:
    From the documentation:
    After a failover occurs, the original primary database can no longer participate in the Data Guard configuration until it is repaired and established as a standby database in the new configuration. To do this, you can use the Flashback Database feature to recover the failed primary database to a point in time before the failover occurred, and then convert it into a physical or logical standby database in the new configuration.
    Read the following documents:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/role_management.htm#sthref995
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/scenarios.htm#i1049997
    In addition to this, I'd like to mention that you can achieve the same thing by doing an incomplete recovery instead of using FLASHBACK DATABASE.
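    (Roughly, the Flashback route looks like this; treat it as a sketch and follow the documentation above for the full procedure:)
    New_Primary_SQL> SELECT TO_CHAR(STANDBY_BECAME_PRIMARY_SCN) FROM V$DATABASE;
    Failed_Primary_SQL> STARTUP MOUNT;
    Failed_Primary_SQL> FLASHBACK DATABASE TO SCN <standby_became_primary_scn>;
    Failed_Primary_SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
    -- then restart in MOUNT mode and start managed recovery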
    "After a fast-start failover occurs, the old primary database will automatically reconfigure itself as a new standby database upon reconnection to the configuration."
    TRUE. If you enable Fast-Start Failover:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14230/sofo.htm#CHDDFFEC
    Cheers!

  • Oracle 10gR2 Data guard physical or logical standby server?

    Hi
    We are planning to implement an Oracle 10gR2 Data Guard standby server for DR purposes. I found out that there are two types of standby server, logical and physical. I want to know which one is preferable in terms of complexity of setup and maintenance?
    regards

    Well it depends on what you mean by maintenance. I found the physical standby to be very little trouble at all; however, the logical standby has restrictions on it that the physical standby does not. In essence the physical standby merely digests archive logs, whereas the logical standby uses LogMiner-like functionality to process SQL statements, much like Oracle Streams.
    Hope that helps,
    -JR jr.

  • About Data Guard - Physical Standby Database

    Dear All,
    I have read many documents regading Data Guard.
    I am about to set up Data Guard in our current environment but want to clear up a few things.
    I have a confusion between physical and logical standby database.
    What I have read from different documents is:
    -Physical Standby Database (in 11g)
    1) It is the most efficient
    2) It can be in either mount or open state
    3) Select queries and reporting can be run against it, reducing the load on the primary database
    4) The schema on the primary and standby database is always the same
    -- Logical Standby Database
    1) You can create additional tables, indexes, etc.
    2) Always in open mode
    3) Select queries and reporting can be run against it, reducing the load on the primary database
    Now our scenario is that we have one server at the moment; the OS is Linux and the database is 11g. We want to set up another server in another country, also 11g on Linux, so that it acts as a standby/backup server. The schema and data are always the same. In case of unavailability of the primary server, the standby server acts as the primary server (this has to be automated). The reason for unavailability could be anything, such as maintenance work on the primary server, or a network or hardware failure at the primary server. The last and most important thing is that users from the country where we will set up the standby database will insert/update data on the primary server BUT queries and reporting will be done from this newly created standby database.
    Kindly recommend the best Data Guard option in this scenario and kindly correct me where I am wrong.
    Thanks, Imran

    A logical standby has various limitations on things like data types. It's also a much more complex architecture, which makes it more likely that something will break periodically and require attention. Applying redo to a physical standby is code that has been around forever and is as close to bullet-proof as you'll get. And you would generally prefer to fail over to a physical standby-- if you do things like create new objects in the logical standby, you may have to get rid of those objects during a failover to get acceptable OLTP performance.
    Justin

  • Read-only agent synching to a Data Guard physical standby?

    Hi all,
    we are trying to use TimesTen 11.2.2.4.1 as a read-only memory cache for an Oracle 11.2.3.0.7 schema on Linux RedHat 6.3, while using Oracle Data Guard to replicate the Oracle instance across geographically remote sites. On each site we would like to have two TT instances synchronizing with the local Oracle 11g instance. This works fine against the master DB, but are the TT agents going to be able to synchronize against physical standby instances?
    The problem, it seems, is that the TT agent uses dedicated structures in the Oracle master instance (related to the cache grid), which are going to be replicated into the standby instances. Is the TT agent able to use the read-only, replicated structures to complete synchronization, or is this approach unworkable? What would be your advice on how to achieve this?
    Thanks for your help,
    Chris

    Hi again,
    so after testing a little it appears that this approach does indeed work, at least against a limited number of manual DML operations. What I needed to do on the slave instance to get it working is the following:
    1 - Entirely exclude TTADMIN and TIMESTEN schemas from the Data Guard replication:
    ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    execute dbms_logstdby.skip(stmt => 'SCHEMA_DDL', schema_name => 'TTADMIN', object_name => '%');
    execute dbms_logstdby.skip(stmt => 'SCHEMA_DDL', schema_name => 'TIMESTEN', object_name => '%');
    execute dbms_logstdby.skip(stmt => 'DML', schema_name => 'TTADMIN', object_name => '%');
    execute dbms_logstdby.skip(stmt => 'DML', schema_name => 'TIMESTEN', object_name => '%');
    ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
    2 - Erase both schemas from the local instance:
    DROP USER TTADMIN CASCADE;
    DROP USER TIMESTEN CASCADE;
    CREATE USER TTADMIN etc
    3 - Temporarily disable the database guard while creating the local ttCache structures, as the scripts seem to need to set a table-level lock on the source table:
    ALTER DATABASE GUARD NONE;
    ttIsql> CREATE READONLY CACHE GROUP etc
    ALTER DATABASE GUARD STANDBY;
    4 - Unset the "Fire_Once_Only" property for the local TTADMIN triggers:
    execute dbms_ddl.set_trigger_firing_property(trig_owner=> 'TTADMIN', trig_name=> 'TT_06_70560_T', fire_once => FALSE);
    At that point the cache seems to replicate properly in the most simple cases. I will try to test with some substantial load and against DG failovers to see how this behaves.
    Regards,
    Chris

  • Using Flashback with Data Guard (Physical)

    We are running Oracle 11g R2 on RHEL 5.5 with a simple Physical and Standby Data Guard configuration.
    When you flashback the Primary database, does the Standby also perform a flashback database?

    Hi damorgan,
    Sorry to reopen this thread, but can you please point to the documentation you refer to?
    I went through this scenario on my 11gR1 data guard setup and had a different outcome.
    insert into tabb values (1,'aedfgadg');
    insert into tabb values (2,'adfkafgafdadf');
    commit;
    create restore point restore_me;
    insert into tabb values (3,'sabjadfjdfjasdvgjasdfav');
    insert into tabb values (4,'asd,fbadmfbadbfadbfafa');
    insert into tabb values (5,'dddddddddddddddddddd');
    commit;
    Then I flashed back to restore_me, and after OPEN RESETLOGS I queried the database. Only the first two rows are there. Also, media recovery is broken.
    I opened the standby read-only and queried the table. All five rows are there. Obviously the flashback has not affected the standby.
    I logged a call with Oracle and they advised that I have to manually flash back the standby database to that point in time. The same applies to 11gR2.
    Thanks,
    Ali

  • 11g Data Guard: questions about configuring a physical standby database

    General information
    OS:red hat Linux 2.6.32-200.13.1.el5uek x86_64(primary,standby)
    Home version:11.2.0.3(primary,standby)
    Situation:
    This is a test database for learning Data Guard. I cloned a HOME on another server and followed the official documentation at http://docs.oracle.com/cd/E11882_01/server.112/e25608/create_ps.htm#i1225703
    All steps were done per the official documentation. At the end, when verifying with the standby mounted, select sequence# from v$archived_log returned no records. The error log is as follows (excerpt starting from the mount):
    standby alert_dbacoe.log
    Error logs
    alter database mount
    Completed: alter database mount
    Error 604 received logging on to the standby
    FAL[client, ARC2]: Error 604 connecting to PRIMARYSV for fetching gap sequence
    Errors in file /u05/oracle/app/oracle/diag/rdbms/standby/dbacoe/trace/dbacoe_lgwr_1912.trc:
    ORA-00313: open failed for members of log group 1 of thread 1
    ORA-00312: online log 1 thread 1: '/u04/app/oracle/redundancy/dbacoe/redo01_s.log'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    ORA-00312: online log 1 thread 1: '/u03/app/oracle/oradata/dbacoe/redo01.log'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    Errors in file /u05/oracle/app/oracle/diag/rdbms/standby/dbacoe/trace/dbacoe_lgwr_1912.trc:
    ORA-00313: open failed for members of log group 1 of thread 1
    ORA-00312: online log 1 thread 1: '/u04/app/oracle/redundancy/dbacoe/redo01_s.log'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    ORA-00312: online log 1 thread 1: '/u03/app/oracle/oradata/dbacoe/redo01.log'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    Errors in file /u05/oracle/app/oracle/diag/rdbms/standby/dbacoe/trace/dbacoe_lgwr_1912.trc:
    ORA-00313: open failed for members of log group 2 of thread 1
    ORA-00312: online log 2 thread 1: '/u04/app/oracle/redundancy/dbacoe/redo02_s.log'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    ORA-00312: online log 2 thread 1: '/u03/app/oracle/oradata/dbacoe/redo02.log'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    Errors in file /u05/oracle/app/oracle/diag/rdbms/standby/dbacoe/trace/dbacoe_lgwr_1912.trc:
    ORA-00313: open failed for members of log group 2 of thread 1
    ORA-00312: online log 2 thread 1: '/u04/app/oracle/redundancy/dbacoe/redo02_s.log'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    ORA-00312: online log 2 thread 1: '/u03/app/oracle/oradata/dbacoe/redo02.log'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    Errors in file /u05/oracle/app/oracle/diag/rdbms/standby/dbacoe/trace/dbacoe_lgwr_1912.trc:
    ORA-00313: open failed for members of log group 3 of thread 1
    ORA-00312: online log 3 thread 1: '/u04/app/oracle/redundancy/dbacoe/redo03_s.log'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    ORA-00312: online log 3 thread 1: '/u03/app/oracle/oradata/dbacoe/redo03.log'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    Errors in file /u05/oracle/app/oracle/diag/rdbms/standby/dbacoe/trace/dbacoe_lgwr_1912.trc:
    ORA-00313: open failed for members of log group 3 of thread 1
    ORA-00312: online log 3 thread 1: '/u04/app/oracle/redundancy/dbacoe/redo03_s.log'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    ORA-00312: online log 3 thread 1: '/u03/app/oracle/oradata/dbacoe/redo03.log'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    ARC3: Archival started
    ARC0: STARTING ARCH PROCESSES COMPLETE
    Addition:
    Because this is a test database, the layout is a bit odd; roughly as follows:
    Primary
    Data files are under the oradata directory of the ORACLE_BASE, i.e. under /u03/app/oracle,
    each redo log group has two members, one under oradata and the other on the same machine in /u04/app/oracle/redundancy (named with an _s suffix),
    and archive logs are in /u04/app/oracle/fast_recovery_area
    standby
    Data files are under /u05/oracle/app/oracle/oradata,
    the redo files that were under the redundancy directory are placed in the red directory under oradata,
    and the archive log directory is /u05/oracle/app/oracle/fast_recovery_area
    Next are the corresponding init parameters and TNS entries.
    Main init parameters on the Primary side:
    db_unique_name='PRIMARY'
    fal_client='PRIMARYSV'
    fal_server='STANDBYSV'
    log_archive_config='DG_CONFIG=(PRIMARY,STANDBY)'
    log_archive_dest_1='location=/u04/app/oracle/fast_recovery_area/DBACOE/archivelog VALID_FOR=(ALL_LOGFILES,ALL_ROLES) db_unique_name=PRIMARY'
    log_archive_dest_2='service=STANDBY ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) db_unique_name=STANDBY'
    log_archive_dest_state_1='enable'
    log_archive_dest_state_2='enable'
    db_file_name_convert='/u05/oracle/app/oracle/oradata/dbacoe','/u03/app/oracle/oradata/dbacoe','/u05/oracle/app/oracle/oradata/red','/u04/app/oracle/redundancy'
    log_file_name_convert='/u05/oracle/app/oracle/fast_recovery_area/DBACOE/archivelog','/u04/app/oracle/fast_recovery_area/DBACOE/archivelog'
    Init parameters on the Standby side:
    log_archive_config='DG_CONFIG=(PRIMARY,STANDBY)'
    db_unique_name='STANDBY'
    db_file_name_convert='/u03/app/oracle/oradata/dbacoe','/u05/oracle/app/oracle/oradata/dbacoe','/u04/app/oracle/redundancy','/u05/oracle/app/oracle/oradata/red'
    log_file_name_convert='/u04/app/oracle/fast_recovery_area/DBACOE/archivelog','/u05/oracle/app/oracle/fast_recovery_area/DBACOE/archivelog'
    log_archive_dest_1='location=/u05/oracle/app/oracle/fast_recovery_area/DBACOE/archivelog VALID_FOR=(ALL_LOGFILES,ALL_ROLES) db_unique_name=STANDBY'
    log_archive_dest_state_1=enable
    log_archive_format=log%t_%s_%r.arc
    log_archive_dest_2='SERVICE=PRIMARYSV ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
    log_archive_dest_state_2=enable
    The TNS entries are as follows (both can be reached with tnsping):
    PRIMARYSV =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = db1.ad.xxxxxx.com)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = dbacoe)
        )
      )
    STANDBYSV =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = db2.ad.xxxxx.com)(PORT = 1529))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = dbacoe)
        )
      )
    PS: Please bear with this rather long post. I have only just started learning Data Guard and am still unfamiliar with the parameter settings (I simply followed the documentation; did I miss any redo-log-related step?). Any advice is appreciated.
    Edited by: 961394 on Dec 10, 2012 1:52 AM
    Edited by: 961394 on 2012-12-10 4:58 AM

    Thread closed. Thanks to Jesse Lui from the Maclean group for helping me solve the problem.

  • Oracle Data Guard: Physical and Logical

    I have a Primary database and have created a Physical Standby on another node. The physical standby is kept in sync via Redo Apply - online redo logs.
    QUESTION: is it possible to create a Logical Standby off of the Physical Standby? I don't think so, since the logical is kept in sync from a primary via SQL Apply. CAN SOMEONE PLEASE CONFIRM.
    I thought that a logical standby MUST be created from a Primary and not from a Physical.
    Thanks!!

    Documentation is your friend; Oracle does not hide the information on how to create a logical standby:
    http://download.oracle.com/docs/cd/E11882_01/server.112/e10700/create_ls.htm#g105412
    Werner

  • Data Guard - Physical Standby

    I have two questions here, or I would say two cases here to discuss.
    1. I shut down my standby db and unmounted the archive location from the server for 2 hours. After that I mounted the file system again and brought up the standby DB. As we use ASYNC, the standby recovered successfully, but my worry is that we didn't get any of the physical archive files in the standby location for those 2 hours. Can the standby automatically get these from the Primary DB, or do we have to copy them manually from the Primary server? Though we won't require them, as the Standby DB is in sync. Kindly suggest.
    2. Why do we have to create 4 standby redo logs on the Primary server? Can't we create fewer standby redo logs on the Primary?

    1) Yes, the archives that don't get transferred will be retried until they succeed. You could look into Oracle archive log gap detection and resolution for more info.
    http://www.sc.ehu.es/siwebso/KZCC/Oracle_10g_Documentacion/server.101/b10823/log_transport.htm
    2) According to Oracle they have to be exactly the same size as the online redo logs, and the minimum requirement is to have one more standby redo log group than the primary has online redo log groups.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/create_ps.htm#SBYDB00426
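    (A quick way to compare counts and sizes, as a sketch to run on each database:)
    SQL> SELECT GROUP#, THREAD#, BYTES/1024/1024 AS MB FROM V$LOG;
    SQL> SELECT GROUP#, THREAD#, BYTES/1024/1024 AS MB FROM V$STANDBY_LOG;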

  • Data Guard - Physical Standby Setup by RMAN

    I am trying to set up a physical standby using RMAN. After I brought up the standby db, I failed to turn on redo apply. I found out that the standby redo log files were not copied over to the standby.
    The back up script I use is as follows.
    run {
    allocate channel ch2 type disk;
    backup current controlfile for standby format '/backups/sitv/cf_%d_t%t_s%s_p%p';
    backup format '/backups/sitv/%U.bkup' database;
    sql 'alter system archive log current';
    backup format '/backups/sitv/al_t%t_s%s_p%p.bkup' archivelog all;
    release channel ch2;
    }
    What command should I use to be able to make a backup of the standby log files as well?
    Thanks.

    Hi Sybrand,
    Thanks for the reply. Let me give a little more detail on what I did.
    I followed the documentation as follows to set up the physical standby
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/create_ps.htm#g88234
    I created the standby log files per section 3.1.3 on the primary db before the standby was created.
    I used the following command to restore the standby on a different server.
    run {
    allocate auxiliary channel ch1 device type disk;
    duplicate target database for standby dorecover nofilenamecheck;}
    My question is that the standby log files did not show up on the standby server, although they do show up in the v$logfile view. After I turned on redo apply, the changes I made in the primary db did not appear in the standby db.
    Any idea on how to fix it?
    Thanks.

  • Problem with the Data Guard "Creating a Physical Standby Database" tutorial

    There is a tutorial on creating a Data Guard Physical Standby Database:
    http://www.oracle.com/technology/obe/11gr1_db/ha/dataguard/physstby/physstdby.htm
    I tried to install it on two servers: one for the primary database, the second for the physical standby.
    I get an error at "C. Creating the standby database over the network", action #6:
    "On the standby system, set the ORACLE_SID environment variable to your <physical standby SID> (i.e. orclsby1) and start the instance in NOMOUNT mode with the text initialization parameter file."
    When I try to connect to the idle instance, an error pops up:
    C:\>sqlplus / as sysdba
    SQL*Plus: Release 11.1.0.7.0 - Production on Thu May 21 16:28:10 2009
    Copyright (c) 1982, 2008, Oracle. All rights reserved.
    ERROR:
    ORA-12560: TNS:protocol adapter error
    I've checked the listener and it is running. There is no service for the database because there is no database yet.
    The question is: has anyone installed a data guard configuration using this tutorial? Are there any errors in it? What should I do to finish this installation?

    On Windhose for every instance a service must have been created using the oradim command.
    Oracle tutorials are usually Unix-centric, as Windhose is an odd man out, so they don't discuss that bit.
    'Kindly do the needful' and create the service prior to starting the instance in nomount mode
    Hint: oradim is documented and has a help=y clause.
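    (A minimal sketch, assuming the SID from the tutorial:)
    C:\> oradim -NEW -SID orclsby1 -STARTMODE manual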
    IIRC there is an option in database control (in the maintenance part) which automates everything.
    Sybrand Bakker
    Senior Oracle DBA
    Experts: those who do read documentation

  • A Data Guard Question

    Dear experts,
    This time I have a question regarding Data Guard.
    Database:      Oracle 10g Release 2 (10.2.0.3)
    OS:          IBM - AIX 5.3 - ML-5
    Data Guard:     Physical Standby
    We have multiple Data Guard configurations in place and all of them are configured in "MAXIMUM PERFORMANCE" mode.
    Currently, we have a separate mount point for archive logs (say /dbarch) on both primary and standby servers.
    Once log is archived on primary, it is shipped to standby server and applied.
    I think we are wasting space by allocating /dbarch on the standby server; instead we could share the primary's /dbarch with the standby using NFS.
    I remember reading such a document. I tried to search the Oracle documentation, Google, and Metalink for it but failed :((
    Any help in this regard will be very helpful.
    Thanks in advance.
    Regards

    From a DR perspective, this sounds like a recipe for losing data.
    If your primary site has a disaster and there are logs that have not been applied to the standby, then you will never be able to apply them, as they will have been lost in the crash.
    The point of having the standby is to eliminate a single point of failure - and this mechanism is reintroducing it!
    jason.
    http://jarneil.wordpress.com

  • Configure listener for data guard

    HI everyone,
    I am currently setting up Data Guard (physical standby database) for my database, but I am having trouble configuring the listener on both servers. Can anyone provide an example?
    Oracle: 10g R2
    O/S: Windows
    Primary database ken10g
    standby database: ken10gbk
    Following is the content of my current listener files on both of servers:
    Primary server:
    # listener.ora Network Configuration File: C:\oracle\product\10.2.0\db_1\network\admin\listener.ora
    # Generated by Oracle configuration tools.
    SID_LIST_LISTENER =
      (SID_LIST =
        (SID_DESC =
          (SID_NAME = PLSExtProc)
          (ORACLE_HOME = C:\oracle\product\10.2.0\db_1)
          (PROGRAM = extproc)
        )
      )
    LISTENER =
      (DESCRIPTION_LIST =
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
          (ADDRESS = (PROTOCOL = TCP)(HOST = Primary_server)(PORT = 1521))
        )
      )
    Standby Server:
    # listener.ora Network Configuration File: C:\oracle\product\10.2.0\db_1\NETWORK\ADMIN\listener.ora
    # Generated by Oracle configuration tools.
    SID_LIST_LISTENER =
      (SID_LIST =
        (SID_DESC =
          (SID_NAME = PLSExtProc)
          (ORACLE_HOME = C:\oracle\product\10.2.0\db_1)
          (PROGRAM = extproc)
        )
      )
    LISTENER =
      (DESCRIPTION_LIST =
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
        )
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = TCP)(HOST = standby_server)(PORT = 1521))
        )
      )
    Thanks in advance.
    Ken

    Hi Ken,
    You need to configure this on both the primary and the standby; I would have kept different listener names on the primary and the standby. Also, if you are going to use the Data Guard broker, you will need to set GLOBAL_DBNAME in your listener.ora file.
    I have given sample entries for tnsnames.ora and listener.ora below.
    TNSNAMES.ORA on primary
    STNDBY =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = node2-prv)(PORT = 10521))
        )
        (CONNECT_DATA =
          (SERVICE_NAME = STNDBY)
        )
      )
    PRIM =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = node1-prv)(PORT = 10521))
      )
    PRIMARY =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = node1-prv)(PORT = 10521))
        )
        (CONNECT_DATA =
          (SERVICE_NAME = PRIMARY)
        )
      )
    EXTPROC_CONNECTION_DATA =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
        )
        (CONNECT_DATA =
          (SID = PLSExtProc)
          (PRESENTATION = RO)
        )
      )
    Copy the same file to the standby server and adjust it to match that server's listener.ora. Also update the listener.ora file so that it listens for the SIDs mentioned in the tnsnames.ora file.
    Listener.ora
    LISTENER_STBY =
      (DESCRIPTION_LIST =
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = TCP)(HOST = node2-prv)(PORT = 10521))
          (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
        )
      )
    SID_LIST_LISTENER_STBY =
      (SID_LIST =
        (SID_DESC =
          (SID_NAME = PLSExtProc)
          (ORACLE_HOME = /u01/app/oracle/product/10.2.0/db10g)
          (PROGRAM = extproc)
        )
        (SID_DESC =
          (SID_NAME = stndby)
          (GLOBAL_DBNAME = stndby_DGMGRL)
          (ORACLE_HOME = /u01/app/oracle/product/10.2.0/db10g)
        )
      )

  • Applying recommended patch bundles to a Data Guard environment.

    I've learned that there are 3 recommended patch bundles for 10.2.0.4 Data Guard, as follows:
    7937113 - 10.2.0.4 Data Guard Logical Recommended Patch Bundle #1
    7936993 - 10.2.0.4 Data Guard Physical Recommended Patch Bundle #1
    7936793 - 10.2.0.4 Data Guard Broker Recommended Patch Bundle #1
    My question is this...should I apply the logical and physical patch bundles to the primary database as well?
    Thanks,
    Michael Anderson
    OCP - Bank of the West

    That's what I suspected. Now, in my environment, I could never conceive of a situation where I would switch roles between the primary and the logical standby database, so I would only apply the DG physical patch bundle to the primary database but not the DG Logical patch bundle. Agree?

  • Data Guard logical standby Versus Streams

    I'm referring to both Oracle 10g/9i
    If a Data Guard logical standby database uses similar technology to Streams (LogMiner and SQL Apply), why can't you stand up a standby database on a different platform? At least, I have found nothing on the subject.
    But in 11g Oracle Data Guard (physical standby database) is a solution for same endianness platform migration.
    I will appreciate any insight on the subject.

    Yes... that's true... both use the same technology:
    SQL --(REDO)--> BLOCK LEVEL CHANGES --(LOGMINER)--> SQL
    But there are serious implementation differences:
    1) Oracle Data Guard is designed for protecting from data failure and disasters.
    Streams is designed for information sharing and distribution but can also provide a very efficient high availability solution.
    2) Streams is configured from the bottom up — individual tables, schemas, capture processes, apply processes, queues.
    Logical Standby is configured from the top down — start with entire database, then specify only what you don’t want.
    Because a logical standby is top down and changes are captured at the remote location (the logical standby db), the archive logs need to be shipped to it using FAL client/server; and to ship archive logs in a Data Guard configuration, all members must be running on the same platform.
    As said before, Streams is configured from the bottom up: it starts with tables -> schemas -> database, and we can capture changes at the local or remote location. If we capture changes locally, the target Streams db can be on a different platform. But downstream capture requires the same platform, just as a logical standby does, so that archive logs can be transported from the source to the downstream db.
