Using Flashback with Data Guard (Physical)

We are running Oracle 11g R2 on RHEL 5.5 with a simple primary and physical standby Data Guard configuration.
When you flash back the primary database, does the standby also perform a flashback database?

Hi damorgan,
Sorry to reopen this thread, but can you please point to the documentation you refer to?
I went through this scenario on my 11gR1 Data Guard setup and had a different outcome.
insert into tabb values (1,'aedfgadg');
insert into tabb values (2,'adfkafgafdadf');
commit;
create restore point restore_me;
insert into tabb values (3,'sabjadfjdfjasdvgjasdfav');
insert into tabb values (4,'asd,fbadmfbadbfadbfafa');
insert into tabb values (5,'dddddddddddddddddddd');
commit;
Then I flash back to restore_me and, after OPEN RESETLOGS, I query the database. Only the first two rows are there. Also, media recovery is broken.
I open the standby read-only and query the table. All five rows are there. Obviously the flashback has not affected the standby.
I logged a call with Oracle and they advised that I have to manually flash back the standby database to that point in time; the same applies to 11gR2.
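For reference, here is a minimal sketch of that manual standby flashback, loosely following the "flashing back a physical standby" scenario in the Data Guard documentation; the RESETLOGS_CHANGE# - 2 offset and the exact commands are my assumptions, so verify them against your release:
-- On the flashed-back primary: find the SCN the standby must be wound back to
SELECT TO_CHAR(RESETLOGS_CHANGE# - 2) AS flashback_scn FROM V$DATABASE;
-- On the standby: stop redo apply, flash back, then restart redo apply
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
FLASHBACK STANDBY DATABASE TO SCN <flashback_scn>;  -- substitute the value from the primary
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;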
Thanks,
Ali

Similar Messages

  • Problem with Data Guard "Creating a Physical Standby Database" tutorial

    There is a tutorial on creating a Data Guard physical standby database:
    http://www.oracle.com/technology/obe/11gr1_db/ha/dataguard/physstby/physstdby.htm
    I tried to install it on two servers: one for the primary database, the second for the physical standby.
    I get an error at section C (Creating the standby database over the network), action #6:
    "On the standby system, set the ORACLE_SID environment variable to your <physical standby SID> (i.e. orclsby1) and start the instance in NOMOUNT mode with the text initialization parameter file."
    When I try to connect to the idle instance, an error pops up:
    C:\>sqlplus / as sysdba
    SQL*Plus: Release 11.1.0.7.0 - Production on Thu May 21 16:28:10 2009
    Copyright (c) 1982, 2008, Oracle. All rights reserved.
    ERROR:
    ORA-12560: TNS:protocol adapter error
    I've checked the listener and it is running. There is no service for the database because there is no database yet.
    The question is: has anyone installed a Data Guard configuration using this tutorial? Are there any errors in it? What should I do to finish this installation?

    On Windows, a service must be created for every instance using the oradim command.
    Oracle tutorials are usually Unix-centric, as Windows is the odd one out, so they don't discuss that bit.
    'Kindly do the needful' and create the service prior to starting the instance in NOMOUNT mode; a short sketch follows at the end of this reply.
    Hint: oradim is documented and has a help=y clause.
    IIRC there is an option in database control (in the maintenance part) which automates everything.
    Sybrand Bakker
    Senior Oracle DBA
    Experts: those who do read documentation
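    A minimal sketch of that oradim step, assuming the tutorial's SID orclsby1 (the pfile path below is a placeholder, not from the tutorial):
    C:\> oradim -NEW -SID orclsby1 -STARTMODE manual -PFILE C:\oracle\admin\orclsby1\pfile\initorclsby1.ora
    C:\> set ORACLE_SID=orclsby1
    C:\> sqlplus / as sysdba
    SQL> STARTUP NOMOUNT PFILE='C:\oracle\admin\orclsby1\pfile\initorclsby1.ora'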

  • I have one problem with Data Guard. My archive log files are not applied.

    I have one problem with Data Guard: my archive log files are not applied, even though all of them have been received on my physical standby database.
    I have created a physical standby database on Oracle 10gR2 (Windows XP Professional). The primary database is on another computer.
    In Enterprise Manager on the primary database it looks OK; I get the message “Data Guard status Normal”.
    But, as I wrote above, the archive log files are not applied.
    After I created the Physical Standby database, I have also done:
    1. I connected to the Physical Standby database instance.
    CONNECT SYS/SYS@luda AS SYSDBA
    2. I started the Oracle instance at the Physical Standby database without mounting the database.
    STARTUP NOMOUNT PFILE=C:\oracle\product\10.2.0\db_1\database\initluda.ora
    3. I mounted the Physical Standby database:
    ALTER DATABASE MOUNT STANDBY DATABASE
    4. I started redo apply on Physical Standby database
    alter database recover managed standby database disconnect from session
    5. I switched the log files on Physical Standby database
    alter system switch logfile
    6. I verified the redo data was received and archived on Physical Standby database
    select sequence#, first_time, next_time from v$archived_log order by sequence#
    SEQUENCE# FIRST_TIME NEXT_TIME
    3 2006-06-27 2006-06-27
    4 2006-06-27 2006-06-27
    5 2006-06-27 2006-06-27
    6 2006-06-27 2006-06-27
    7 2006-06-27 2006-06-27
    8 2006-06-27 2006-06-27
    7. I verified the archived redo log files were applied on Physical Standby database
    select sequence#,applied from v$archived_log;
    SEQUENCE# APP
    4 NO
    3 NO
    5 NO
    6 NO
    7 NO
    8 NO
    8. on Physical Standby database
    select * from v$archive_gap;
    No rows
    9. on Physical Standby database
    SELECT MESSAGE FROM V$DATAGUARD_STATUS;
    MESSAGE
    ARC0: Archival started
    ARC1: Archival started
    ARC2: Archival started
    ARC3: Archival started
    ARC4: Archival started
    ARC5: Archival started
    ARC6: Archival started
    ARC7: Archival started
    ARC8: Archival started
    ARC9: Archival started
    ARCa: Archival started
    ARCb: Archival started
    ARCc: Archival started
    ARCd: Archival started
    ARCe: Archival started
    ARCf: Archival started
    ARCg: Archival started
    ARCh: Archival started
    ARCi: Archival started
    ARCj: Archival started
    ARCk: Archival started
    ARCl: Archival started
    ARCm: Archival started
    ARCn: Archival started
    ARCo: Archival started
    ARCp: Archival started
    ARCq: Archival started
    ARCr: Archival started
    ARCs: Archival started
    ARCt: Archival started
    ARC0: Becoming the 'no FAL' ARCH
    ARC0: Becoming the 'no SRL' ARCH
    ARC1: Becoming the heartbeat ARCH
    Attempt to start background Managed Standby Recovery process
    MRP0: Background Managed Standby Recovery process started
    Managed Standby Recovery not using Real Time Apply
    MRP0: Background Media Recovery terminated with error 1110
    MRP0: Background Media Recovery process shutdown
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[1]: Assigned to RFS process 2148
    RFS[1]: Identified database type as 'physical standby'
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[2]: Assigned to RFS process 2384
    RFS[2]: Identified database type as 'physical standby'
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[3]: Assigned to RFS process 3188
    RFS[3]: Identified database type as 'physical standby'
    Primary database is in MAXIMUM PERFORMANCE mode
    Primary database is in MAXIMUM PERFORMANCE mode
    RFS[3]: No standby redo logfiles created
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[4]: Assigned to RFS process 3168
    RFS[4]: Identified database type as 'physical standby'
    RFS[4]: No standby redo logfiles created
    Primary database is in MAXIMUM PERFORMANCE mode
    RFS[3]: No standby redo logfiles created
    10. on Physical Standby database
    SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
    PROCESS STATUS THREAD# SEQUENCE# BLOCK# BLOCKS
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    RFS IDLE 0 0 0 0
    RFS IDLE 0 0 0 0
    RFS IDLE 1 9 13664 2
    RFS IDLE 0 0 0 0
    11) on Primary database:
    select message from v$dataguard_status;
    MESSAGE
    ARC0: Archival started
    ARC1: Archival started
    ARC2: Archival started
    ARC3: Archival started
    ARC4: Archival started
    ARC5: Archival started
    ARC6: Archival started
    ARC7: Archival started
    ARC8: Archival started
    ARC9: Archival started
    ARCa: Archival started
    ARCb: Archival started
    ARCc: Archival started
    ARCd: Archival started
    ARCe: Archival started
    ARCf: Archival started
    ARCg: Archival started
    ARCh: Archival started
    ARCi: Archival started
    ARCj: Archival started
    ARCk: Archival started
    ARCl: Archival started
    ARCm: Archival started
    ARCn: Archival started
    ARCo: Archival started
    ARCp: Archival started
    ARCq: Archival started
    ARCr: Archival started
    ARCs: Archival started
    ARCt: Archival started
    ARCm: Becoming the 'no FAL' ARCH
    ARCm: Becoming the 'no SRL' ARCH
    ARCd: Becoming the heartbeat ARCH
    Error 1034 received logging on to the standby
    Error 1034 received logging on to the standby
    LGWR: Error 1034 creating archivelog file 'luda'
    LNS: Failed to archive log 3 thread 1 sequence 7 (1034)
    FAL[server, ARCh]: Error 1034 creating remote archivelog file 'luda'
    12) on primary db
    select name,sequence#,applied from v$archived_log;
    NAME SEQUENCE# APP
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00003_0594204176.001 3 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00004_0594204176.001 4 NO
    Luda 4 NO
    Luda 3 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00005_0594204176.001 5 NO
    Luda 5 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00006_0594204176.001 6 NO
    Luda 6 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00007_0594204176.001 7 NO
    Luda 7 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00008_0594204176.001 8 NO
    Luda 8 NO
    13) on standby db
    select name,sequence#,applied from v$archived_log;
    NAME SEQUENCE# APP
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00004_0594204176.001 4 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00003_0594204176.001 3 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00005_0594204176.001 5 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00006_0594204176.001 6 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00007_0594204176.001 7 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00008_0594204176.001 8 NO
    14) my init.ora files
    On standby db
    irina.__db_cache_size=79691776
    irina.__java_pool_size=4194304
    irina.__large_pool_size=4194304
    irina.__shared_pool_size=75497472
    irina.__streams_pool_size=0
    *.audit_file_dest='C:\oracle\product\10.2.0\admin\luda\adump'
    *.background_dump_dest='C:\oracle\product\10.2.0\admin\luda\bdump'
    *.compatible='10.2.0.1.0'
    *.control_files='C:\oracle\product\10.2.0\oradata\luda\luda.ctl'
    *.core_dump_dest='C:\oracle\product\10.2.0\admin\luda\cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_file_name_convert='luda','irina'
    *.db_name='irina'
    *.db_unique_name='luda'
    *.db_recovery_file_dest='C:\oracle\product\10.2.0\flash_recovery_area'
    *.db_recovery_file_dest_size=2147483648
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=irinaXDB)'
    *.fal_client='luda'
    *.fal_server='irina'
    *.job_queue_processes=10
    *.log_archive_config='DG_CONFIG=(irina,luda)'
    *.log_archive_dest_1='LOCATION=C:/oracle/product/10.2.0/oradata/luda/ VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=luda'
    *.log_archive_dest_2='SERVICE=irina LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=irina'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_max_processes=30
    *.log_file_name_convert='C:/oracle/product/10.2.0/oradata/irina/','C:/oracle/product/10.2.0/oradata/luda/'
    *.open_cursors=300
    *.pga_aggregate_target=16777216
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=167772160
    *.standby_file_management='AUTO'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='C:\oracle\product\10.2.0\admin\luda\udump'
    On primary db
    irina.__db_cache_size=79691776
    irina.__java_pool_size=4194304
    irina.__large_pool_size=4194304
    irina.__shared_pool_size=75497472
    irina.__streams_pool_size=0
    *.audit_file_dest='C:\oracle\product\10.2.0/admin/irina/adump'
    *.background_dump_dest='C:\oracle\product\10.2.0/admin/irina/bdump'
    *.compatible='10.2.0.1.0'
    *.control_files='C:\oracle\product\10.2.0\oradata\irina\control01.ctl','C:\oracle\product\10.2.0\oradata\irina\control02.ctl','C:\oracle\product\10.2.0\oradata\irina\control03.ctl'
    *.core_dump_dest='C:\oracle\product\10.2.0/admin/irina/cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_file_name_convert='luda','irina'
    *.db_name='irina'
    *.db_recovery_file_dest='C:\oracle\product\10.2.0/flash_recovery_area'
    *.db_recovery_file_dest_size=2147483648
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=irinaXDB)'
    *.fal_client='irina'
    *.fal_server='luda'
    *.job_queue_processes=10
    *.log_archive_config='DG_CONFIG=(irina,luda)'
    *.log_archive_dest_1='LOCATION=C:/oracle/product/10.2.0/oradata/irina/ VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=irina'
    *.log_archive_dest_2='SERVICE=luda LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=luda'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_max_processes=30
    *.log_file_name_convert='C:/oracle/product/10.2.0/oradata/luda/','C:/oracle/product/10.2.0/oradata/irina/'
    *.open_cursors=300
    *.pga_aggregate_target=16777216
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=167772160
    *.standby_file_management='AUTO'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='C:\oracle\product\10.2.0/admin/irina/udump'
    Please help me!!!!

    Hi,
    After several tries my redo logs are applied now. I think in my case it had to do with the tnsnames.ora. At the moment I have both databases in both tnsnames.ora files using the SID and not the SERVICE_NAME.
    Now I want to use DGMGRL. Adding a configuration and a standby database works fine, but when I try to enable the configuration, DGMGRL gives no feedback and it looks like it is hanging. The log, however, says that it succeeded.
    In another session 'show configuration' results in the following, confirming that the enable succeeded.
    DGMGRL> show configuration
    Configuration
    Name: avhtest
    Enabled: YES
    Protection Mode: MaxPerformance
    Fast-Start Failover: DISABLED
    Databases:
    avhtest - Primary database
    avhtestls53 - Physical standby database
    Current status for "avhtest":
    Warning: ORA-16610: command 'ENABLE CONFIGURATION' in progress
    Is there anybody who has experienced the same problem and/or knows the solution to this?
    With kind regards,
    Martin Schaap
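    When ENABLE CONFIGURATION appears to hang like this, a hedged next step (the database name comes from the output above; adapt to your setup) is to ask the broker and the primary's remote destination what they think is happening:
    DGMGRL> show configuration verbose
    DGMGRL> show database verbose 'avhtestls53'
    -- On the primary, check the state of the remote archive destination:
    SQL> SELECT DEST_ID, STATUS, ERROR FROM V$ARCHIVE_DEST WHERE DEST_ID = 2;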

  • Oracle 9i Backup/Restore with Data Guard Issues

    Greetings,
    I am currently using Oracle 9i Data Guard to have a Primary/Standby model.
    The current implementation compares the backup piece id from the backed-up image (taken from the primary) with the backup piece id on the standby db. If the numbers are not equal, the restore will fail.
    1- Is this the best implementation for restoring on a standby?
    2- Would it be better to allow a permitted gap range between the backup piece ids of the two databases? If so, what would be an acceptable range?
    3- If the gap range discussed in #2 above is allowed, when the standby is started as primary and the primary is set to be the standby, would Data Guard automatically try to retrieve all the missing archive/redo log files from the primary db?
    Thanks

    Hi,
    I am not sure I am following you here. Are you trying to create a physical standby, by using a backup taken from the primary?
    If this is what you're trying to do, you can take any hot backup from the primary and use it to sync the standby with your primary. The older the backup you're using to create the standby, the more archive logs you'll need to get synchronized.
    As long as you have all the archive logs needed in the archive_log_dest, data guard will automatically retrieve all the missing logs, and apply them on the standby. Again, only if they are available on the primary.
    Not sure if this is what you were asking...
    Idan.
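    As a hedged illustration of that automatic gap handling, this is roughly how to check whether the standby is missing logs and, if a log had to be copied across by hand, register it (the file name is a placeholder):
    -- On the standby:
    SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;
    ALTER DATABASE REGISTER PHYSICAL LOGFILE '/arch/ARC00042.001';  -- only for a file you copied manually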

  • Implemeting 11gR2 RAC with Data Guard

    Hi ,
    Could anyone provide the steps for setting up an 11gR2 two-node RAC with Data Guard? Can 11gR2 active database duplication be used to set up the standby?
    I just need the order of steps to be followed to set up the environment.
    Thanks,
    shashi.

    Hi Fiedi,
    Thanks for the reply.
    I know how to build Oracle Data Guard, but I'm looking for the order of steps I need to follow to build 11gR2 RAC with Data Guard:
    1] Set up the Grid Infrastructure for the 2-node RAC.
    2] Create the database.
    3] Modify the init.ora parameters to make the database created above the primary.
    4] Set up the Grid Infrastructure for the 2-node RAC on the DR site.
    5] Create the standby database using 11gR2 active database duplication.
    Is the above order correct? If not, let me know the correct order of steps to follow to set up 11gR2 RAC with Data Guard.
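    On the active-duplication question, a minimal sketch (the net service names prim and sby are placeholders; the standby instance is assumed to be started NOMOUNT beforehand, with a static listener entry and a copied password file):
    RMAN> connect target sys@prim
    RMAN> connect auxiliary sys@sby
    RMAN> duplicate target database for standby from active database
            spfile
              set db_unique_name='sby'
            nofilenamecheck;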

  • Repository with Data Guard

    Can I use a database with Data Guard for HA, with Fast Connection Failover, as the EM repository database?

    Yes, it is the preferred method. Take advantage of Data Guard's ability to execute a trigger on switchover/failover to move the service used by the OMS from the primary to the standby.
    Please check the advanced configuration chapter.
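    A hedged sketch of that trigger approach (the service name em_repos is a placeholder; the idea is that the OMS connects only through this role-based service):
    BEGIN
      DBMS_SERVICE.CREATE_SERVICE(service_name => 'em_repos', network_name => 'em_repos');
    END;
    /
    CREATE OR REPLACE TRIGGER manage_em_service AFTER STARTUP ON DATABASE
    DECLARE
      v_role VARCHAR2(30);
    BEGIN
      SELECT database_role INTO v_role FROM v$database;
      IF v_role = 'PRIMARY' THEN
        DBMS_SERVICE.START_SERVICE('em_repos');  -- start the service only where the database is currently primary
      END IF;
    END;
    /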

  • How to use Count with Date Parameters

    Hello,
    I am having issues using the Count() function in conjunction with date parameters.
    This is a Siebel report, and in my report I have 2 date parameters (From Date, To Date). In a nutshell I am basically trying to count Opportunities that have a start date within the given date period. However I don't see a reasonable way to put my date parameters within the Count() function. The reason is that I would need a huge chunk of code to convert the dates into a common format that can be compared, and it won't even fit within the code block in my RTF template. I am not even sure how to put multiple conditional statements inside a Count() function, since all the examples I have seen are very simple.
    Anyone have a suggestion on how to use Count() with date parameters?
    Thanks.

    Any chance you can get the dates in the correct format from Siebel?
    I don't know Siebel - so I can't help you with that. If you get the correct format it is just
    <?count(row[(date>=FromDate) and (date<=ToDate)])?>
    Otherwise the approach would probably be to use string functions to get year/month/day from the date,
    store them in a variable, and compare later the same way:
    <?variable@incontext:from; ....?>
    <?variable@incontext:to; ...?>
    <?count(row[(date>=$from) and (date<=$to)])?>
    Potentially you can use the date functions such as xdofx:to_date to do the conversion
    [http://download.oracle.com/docs/cd/E12844_01/doc/bip.1013/e12187/T421739T481158.htm]
    But I am not sure if they are available in your Siebel implementation.
    Hope that helps

  • Rolling upgrade with Data Guard

    I'm interested in whether this table of possible upgrades from the documentation (http://download.oracle.com/docs/cd/B28359_01/server.111/b28300/preup.htm#i1007814) holds true if I plan to do a rolling upgrade with Data Guard.

    I think not; that table does not appear to have rolling upgrade information.
    If I read the document correctly, you can only do this with a logical standby database in Data Guard.
    Larry Carpenter's book has some information on this in Chapter 11.
    There is also a separate Data Guard section here where you might find more information.
    Data Guard
    Best Regards
    mseberg

  • Data Guard Physical Standby Failover

    I need clarification on physical standby failovers. Say I have a primary db (A) and a physical standby (B); (A) fails and (B) is now the primary. Can (A) be set up to become a standby of (B)? I've read 2 different statements in Oracle 10g Data Guard Concepts and Administration:
    "During failovers involving a physical standby database: In all cases, after a failover, the original primary database can no longer participate in the Data Guard configuration."
    vice
    "After a fast-start failover occurs, the old primary database will automatically reconfigure itself as a new standby database upon reconnection to the configuration."
    Much thanks.

    In all cases, after a failover, the original primary database can no longer participate in the Data Guard configuration.
    TRUE! But this doesn't mean that you cannot make your failed primary a new standby database.
    You can achieve this using FLASHBACK DATABASE option:
    From the documentation:
    After a failover occurs, the original primary database can no longer participate in the Data Guard configuration until it is repaired and established as a standby database in the new configuration. To do this, you can use the Flashback Database feature to recover the failed primary database to a point in time before the failover occurred, and then convert it into a physical or logical standby database in the new configuration.
    Read the following documents:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/role_management.htm#sthref995
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/scenarios.htm#i1049997
    In addition to this I'd like to mention that you can achieve the same by doing an incomplete recovery instead of using FLASHBACK DATABASE.
    "After a fast-start failover occurs, the old primary database will automatically reconfigure itself as a new standby database upon reconnection to the configuration."
    TRUE. If you enable Fast-Start Failover:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14230/sofo.htm#CHDDFFEC
    Cheers!
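    A condensed sketch of that documented flashback reinstatement sequence (the SCN is a placeholder taken from the new primary; verify the exact steps for your release):
    -- On the new primary:
    SELECT TO_CHAR(STANDBY_BECAME_PRIMARY_SCN) FROM V$DATABASE;
    -- On the failed (old) primary:
    SHUTDOWN IMMEDIATE;
    STARTUP MOUNT;
    FLASHBACK DATABASE TO SCN <standby_became_primary_scn>;
    ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
    SHUTDOWN IMMEDIATE;
    STARTUP MOUNT;
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;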

  • Clarification on Data Guard (Physical Standby db)

    Hi guys,
    I have been trying to set up Data Guard with a physical standby database for the past few weeks, and I think I have managed to set it up and also perform a switchover. I have been reading a lot of websites and even the Oracle docs for this.
    However, I need clarification on the setup and whether or not it is working as expected.
    My environment is Windows 32bit (Windows 2003)
    Oracle 10.2.0.2 (Client/Server)
    2 Physical machines
    Here is what I have done.
    Machine 1
    1. Create a primary database using standard DBCA, hence the Oracle service (oradgp) and password file are also created along with the listener service.
    2. Modify the pfile to include the following:-
    oradgp.__db_cache_size=436207616
    oradgp.__java_pool_size=4194304
    oradgp.__large_pool_size=4194304
    oradgp.__shared_pool_size=159383552
    oradgp.__streams_pool_size=0
    *.audit_file_dest='M:\oracle\product\10.2.0\admin\oradgp\adump'
    *.background_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\bdump'
    *.compatible='10.2.0.3.0'
    *.control_files='M:\oracle\product\10.2.0\oradata\oradgp\control01.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control02.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control03.ctl'
    *.core_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_name='oradgp'
    *.db_recovery_file_dest='M:\oracle\product\10.2.0\flash_recovery_area'
    *.db_recovery_file_dest_size=21474836480
    *.fal_client='oradgp'
    *.fal_server='oradgs'
    *.job_queue_processes=10
    *.log_archive_dest_1='LOCATION=E:\ArchLogs VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=oradgp'
    *.log_archive_dest_2='SERVICE=oradgs LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=oradgs'
    *.log_archive_format='ARC%S_%R.%T'
    *.log_archive_max_processes=30
    *.nls_territory='IRELAND'
    *.open_cursors=300
    *.pga_aggregate_target=203423744
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=612368384
    *.standby_file_management='auto'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\udump'
    *.service_names=oradgp
    The locations on the harddisk are all available and archived redo are created (e:\archlogs)
    3. I then add the necessary (4) standby logs on primary.
    4. To replicate the db on machine 2 (standby db), I did an RMAN backup as:-
    RMAN> run
    {allocate channel d1 type disk format='M:\DGBackup\stby_%U.bak';
    backup database plus archivelog delete input;}
    5. I then copied over the standby~.bak files created from machine1 to machine2 to the same directory (M:\DBBackup) since I maintained the directory structure exactly the same between the 2 machines.
    6. Then created a standby controlfile. (At this time the db was in open/write mode).
    7. I then copied this standby ctl file to machine2 under the same directory structure (M:\oracle\product\10.2.0\oradata\oradgp) and replicated the same ctl file into 3 different files such as: CONTROL01.CTL, CONTROL02.CTL & CONTROL03.CTL
    Machine2
    8. I created an Oracle service called the same as primary (oradgp).
    9. Created a listener also.
    9. Set the Oracle Home & SID to the same name as primary (oradgp) <<<-- I am not sure about the sid one.
    10. I then copied over the pfile from the primary to standby and created an spfile with this one.
    It looks like this:-
    oradgp.__db_cache_size=436207616
    oradgp.__java_pool_size=4194304
    oradgp.__large_pool_size=4194304
    oradgp.__shared_pool_size=159383552
    oradgp.__streams_pool_size=0
    *.audit_file_dest='M:\oracle\product\10.2.0\admin\oradgp\adump'
    *.background_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\bdump'
    *.compatible='10.2.0.3.0'
    *.control_files='M:\oracle\product\10.2.0\oradata\oradgp\control01.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control02.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control03.ctl'
    *.core_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_name='oradgp'
    *.db_recovery_file_dest='M:\oracle\product\10.2.0\flash_recovery_area'
    *.db_recovery_file_dest_size=21474836480
    *.fal_client='oradgs'
    *.fal_server='oradgp'
    *.job_queue_processes=10
    *.log_archive_dest_1='LOCATION=E:\ArchLogs VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=oradgs'
    *.log_archive_dest_2='SERVICE=oradgp LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=oradgp'
    *.log_archive_format='ARC%S_%R.%T'
    *.log_archive_max_processes=30
    *.nls_territory='IRELAND'
    *.open_cursors=300
    *.pga_aggregate_target=203423744
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=612368384
    *.standby_file_management='auto'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\udump'
    *.service_names=oradgs
    log_file_name_convert='junk','junk'
    11. User RMAN to restore the db as:-
    RMAN> startup mount;
    RMAN> restore database;
    Then RMAN created the datafiles.
    12. I then added the same number (4) of standby redo logs to machine2.
    13. Also added a tempfile: though the temp tablespace was created as part of the RMAN restore, the actual file (temp01.dbf) didn't get created, so I created the tempfile manually.
    14. Ensuring the listener and Oracle service were running and that the database on machine2 was in MOUNT mode, I then started the redo apply using:-
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
    It seems to have started the redo apply as I've checked the alert log and noticed that the sequence# was all "YES" for applied.
    ****However I noticed that in the alert log the standby was complaining about the online REDO log not being present****
    So I copied over the REDO logs from the primary machine and placed them in the same directory structure on the standby.
    ########Q1. I understand that the standby database does not need online REDO Logs but why is it reporting in the alert log then??########
    I wanted to enable realtime apply so, I cancelled the recover by :-
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    and issued:-
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
    This too was successful and I noticed that the recovery mode is set to MANAGED REAL TIME APPLY.
    Checked this via the primary database also and it too reported that the DEST_2 is in MANAGED REAL TIME APPLY.
    Also performed a log switch on the primary and it got transported to the standby and was applied (YES).
    Also ensured that there are no gaps via some queries where no rows were returned.
    15. I now wanted to perform a switchover, hence issued:-
    Primary_SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
    All the archivers stopped as expected.
    16. Now on machine2:
    Stdby_SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
    17. On machine1:
    Primary_Now_Standby_SQL>SHUTDOWN IMMEDIATE;
    Primary_Now_Standby_SQL>STARTUP MOUNT;
    Primary_Now_Standby_SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
    17. On machine2:
    Stdby_Now_Primary_SQL>ALTER DATABASE OPEN;
    Checked by switching the logfile on the new primary and ensured that the standby received this logfile and was applied (YES).
    However, here are my questions for clarifications:-
    Q1. There is a question about ONLINE REDO LOGS within "#" characters.
    Q2. Do you see me doing anything wrong in regards to naming the directory structures? Should I have renamed the dbname directory in the Oracle Home to oradgs rather than oradgp?
    Q3. When I enabled real time apply does that mean, that I am not in 'MANAGED' mode anymore? Is there an un-managed mode also?
    Q4. After the switchover, I have noticed that the MRP0 process is "APPLYING LOG" status to a sequence# which is not even the latest sequence# as per v$archived_log. By this I mean:-
    SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
    MRP0 APPLYING_LOG 1 47 452 1024000
    but :
    SQL> select max(sequence#) from v$archived_log;
    46
    Why is that? Also I have noticed that one of the sequence#s is NOT applied but the later ones are:-
    SQL> SELECT SEQUENCE#,APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
    42 NO
    43 YES
    44 YES
    45 YES
    46 YES
    What could be the possible reasons why sequence# 42 didn't get applied but the others did?
    After reading several documents I am confused at this stage because I have read that you can set up standby databases using 'standby' logs, but is there another method without using standby logs?
    Q5. The log switch isn't happening automatically on the primary database where I could see the whole process happening on it own, such as generation of a new logfile, that being transported to the standby and then being applied on the standby.
    Could this be due to inactivity on the primary database as I am not doing anything on it?
    Sorry if I have missed out something guys but I tried to put in as much detail as I remember...
    Thank you very much in advance.
    Regards,
    Bharath
    Edited by: Bharath3 on Jan 22, 2010 2:13 AM

    Parameters:
    Missing on the Primary:
    DB_UNIQUE_NAME=oradgp
    LOG_ARCHIVE_CONFIG=DG_CONFIG=(oradgp, oradgs)
    Missing on the Standby:
    DB_UNIQUE_NAME=oradgs
    LOG_ARCHIVE_CONFIG=DG_CONFIG=(oradgp, oradgs)
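    A hedged sketch of adding those, using the names from this thread (this assumes an spfile is in use, otherwise edit the pfiles; DB_UNIQUE_NAME is static, so it only takes effect after a restart):
    -- On the primary (oradgp):
    ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(oradgp,oradgs)' SCOPE=BOTH;
    ALTER SYSTEM SET DB_UNIQUE_NAME='oradgp' SCOPE=SPFILE;  -- static parameter, restart required
    -- On the standby (oradgs):
    ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(oradgp,oradgs)' SCOPE=BOTH;
    ALTER SYSTEM SET DB_UNIQUE_NAME='oradgs' SCOPE=SPFILE;  -- static parameter, restart required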
    You said: Also added a tempfile though the temp tablespace was created per the restore via RMAN, I think the actual file (temp01.dbf) didn't get created, so I manually created the tempfile.
    RMAN should have also added the temp file. Note that as of 11g RMAN duplicate for standby will also add the standby redo log files at the standby if they already existed on the Primary when you took the backup.
    You said: ****However I noticed that in the alert log the standby was complaining about the online REDO log not being present****
    That is just the weird error that the RDBMS returns when the database tries to find the online redo log files. You see that at the start of the MRP because it tries to open them and if it gets the error it will manually create them based on their file definition in the controlfile combined with LOG_FILE_NAME_CONVERT if they are in a different place from the Primary.
    Your questions (Q1 answered above):
    You said: Q2. Do you see me doing anything wrong in regards to naming the directory structures? Should I have renamed the dbname directory in the Oracle Home to oradgs rather than oradgp?
    Up to you. Not a requirement.
    You said: Q3. When I enabled real time apply does that mean, that I am not in 'MANAGED' mode anymore? Is there an un-managed mode also?
    You are always in MANAGED mode when you use the RECOVER MANAGED STANDBY DATABASE command. If you use manual recovery "RECOVER STANDBY DATABASE" (NOT RECOMMENDED EVER ON A STANDBY DATABASE) then you are effectively in 'non-managed' mode although we do not call it that.
    You said: Q4. After the switchover, I have noticed that the MRP0 process is "APPLYING LOG" status to a sequence# which is not even the latest sequence# as per v$archived_log. By this I mean:-
    Log 46 (in your example) is the last FULL and ARCHIVED log hence that is the latest one to show up in V$ARCHIVED_LOG as that is a list of fully archived log files. Sequence 47 is the one that is current in the Primary online redo log and also current in the standby's standby redo log and as you are using real time apply that is the one it is applying.
    You said: What could be the possible reasons why sequence# 42 didn't get applied but the others did?
    42 was probably a gap. Select the FAL columns as well and it will probably say 'YES' for FAL. We do not update the Primary's controlfile every time we resolve a gap. Try the same command on the standby and you will see that 42 was indeed applied. Redo can never be applied out of order, so the max(sequence#) from v$archived_log where applied = 'YES' will tell you that every sequence before that number has to have been applied.
    You said: After reading several documents I am confused at this stage because I have read that you can setup standby databases using 'standby' logs but is there another method without using standby logs?
    Yes, if you do not have standby redo log files on the standby then we write directly to an archive log. Which means potential large data loss at failover and no real time apply. That was the old 9i method for ARCH. Don't do that. Always have standby redo logs (SRL).
    You said: Q5. The log switch isn't happening automatically on the primary database where I could see the whole process happening on it own, such as generation of a new logfile, that being transported to the standby and then being applied on the standby.
    Could this be due to inactivity on the primary database as I am not doing anything on it?
    Log switches on the Primary happen when the current log gets full, when a log switch has not happened for the number of seconds you specified in the ARCHIVE_LAG_TARGET parameter, or when you say ALTER SYSTEM SWITCH LOGFILE (or use the other methods for switching log files). The heartbeat redo will eventually fill up an online log file but it is about 13 bytes, so you do the math on how long that would take :^)
    You are shipping redo with ASYNC so we send the redo as it is committed; there is no wait for the log switch. And we are in real time apply so there is no wait for the log switch to apply that redo. In theory you could create an online log file large enough to hold an entire day's worth of redo and never switch for the whole day, and the standby would still be caught up with the primary.
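    For completeness, a small hedged sketch of the checks described above, run on the standby:
    -- Everything at or below this sequence has been applied
    SELECT MAX(SEQUENCE#) FROM V$ARCHIVED_LOG WHERE APPLIED = 'YES';
    -- Per-log detail, including whether a log arrived via FAL gap resolution
    SELECT SEQUENCE#, APPLIED, FAL FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
    -- What the MRP is doing right now
    SELECT PROCESS, STATUS, SEQUENCE#, BLOCK# FROM V$MANAGED_STANDBY WHERE PROCESS LIKE 'MRP%';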

  • Upgrade 11.1 to 11.2 with Data Guard In Place

    I am planning on upgrading my primary and standby servers to 11gR2 and was wondering what the best steps are. I do not plan on doing a switchover during the process because downtime is only about an hour and we can afford it.
    Here is what I plan on doing:
    1. On Primary - ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=DEFER SCOPE=BOTH;
    2. On Standby - ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    3. Upgrade the standby database then the primary database
    4. On Standby - ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
    5. On Primary - ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;
    Let me know if these steps don't sound right. But my question is: should I disable archiving during the upgrade process? If I do, could this mess up sending archive logs over because there will be missing sequence numbers?
    And my second question is: do I need to do anything to the Data Guard broker after the upgrade? I currently use Enterprise Manager to manage the setup.
    Thanks in advance for any responses. Oh and I am running on Server 2003 if it matters.
    Jeff

    Is this a physical or logical standby? If logical, the steps in MOS Doc 437276.1 (Upgrading Oracle Database with a Logical Standby Database In Place) may be of help.
    HTH
    Srini
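    If it is a physical standby, a hedged sketch of the equivalent pause/resume of redo apply around the upgrade would be roughly:
    -- On the standby, before the upgrade: stop redo apply
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    -- On the primary: defer shipping (as in step 1 of the plan above)
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=DEFER SCOPE=BOTH;
    -- After both sides are upgraded:
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE SCOPE=BOTH;              -- on the primary
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;  -- on the standby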

  • Read-only agent synching to a Data Guard physical standby?

    Hi all,
    we are trying to use TimesTen 11.2.2.4.1 as a read-only memory cache for an Oracle 11.2.3.0.7 schema on Linux Red Hat 6.3, while using Oracle Data Guard to replicate the Oracle instance over geographically remote sites. On each site we would like to have two TT instances synchronizing with the local Oracle 11g instance. This works fine against the master DB, but are the TT agents going to be able to synchronize against physical standby instances?
    The problem, it seems, is that the TT agent uses dedicated structures in the Oracle master instance (related to the cache grid), which are going to be replicated into the standby instances. Is the TT agent able to use the read-only, replicated structures to complete synchronization, or is this approach unworkable? What would be your advice on how to achieve this?
    Thanks for your help,
    Chris

    Hi again,
    so after testing a little bit it appears that this approach does indeed work, at least against a limited number of manual DML operations. What I needed to do on the slave instance to get it working is the following:
    1 - Entirely exclude TTADMIN and TIMESTEN schemas from the Data Guard replication:
    ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    execute dbms_logstdby.skip(stmt => 'SCHEMA_DDL', schema_name => 'TTADMIN', object_name => '%');
    execute dbms_logstdby.skip(stmt => 'SCHEMA_DDL', schema_name => 'TIMESTEN', object_name => '%');
    execute dbms_logstdby.skip(stmt => 'DML', schema_name => 'TTADMIN', object_name => '%');
    execute dbms_logstdby.skip(stmt => 'DML', schema_name => 'TIMESTEN', object_name => '%');
    ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
    2 - Erase both schemas from the local instance:
    DROP USER TTADMIN CASCADE;
    DROP USER TIMESTEN CASCADE;
    CREATE USER TTADMIN etc
    3 - Temporarily disable the database guard while creating the local ttCache structures, as the scripts seem to need to set a table-level lock on the source table:
    ALTER DATABASE GUARD NONE;
    ttIsql> CREATE READONLY CACHE GROUP etc
    ALTER DATABASE GUARD STANDBY;
    4 - Unset the "Fire_Once_Only" property for the local TTADMIN triggers:
    execute dbms_ddl.set_trigger_firing_property(trig_owner=> 'TTADMIN', trig_name=> 'TT_06_70560_T', fire_once => FALSE);
    At that point the cache seems to replicate properly in the most simple cases. I will try to test with some substantial load and against DG failovers to see how this behaves.
    Regards,
    Chris

  • Oracle 10gR2 Data guard physical or logical standby server?

    Hi
    We are planning to implement an Oracle 10gR2 Data Guard standby server for DR purposes. I found out that there are two types of standby server, logical and physical. I want to know which one is preferable in terms of complexity of setup and maintenance.
    regards

    Well, it depends on what you mean by maintenance. I found the physical standby to be very little trouble at all; however, the logical standby has restrictions on it that the physical standby does not. In essence the physical standby merely digests archive logs, whereas the logical standby uses LogMiner-like functionality to process SQL statements, much like Oracle Streams.
    Hope that helps,
    -JR jr.

  • 9i RAC to single standby DB with Data guard

    Hello gurus,
    I have a question: I'm trying to set up a standby database.
    The primary DB would be a two-node 9i R2 RAC, going to a single standby DB.
    I have done this already. My question is whether I can use Data Guard with this type of configuration. I have tried to configure it with OEM and the GUI does not support this.
    Does anyone know if this can be achieved using Data Guard's CLI?
    thanks

    Well, that is something out of my reach for the moment.
    I have to do everything on 9i.
    I have the standby environment running already; I just want to make my life easier by administering this environment using Data Guard.

  • What is the major plus with Data Guard compared to a standby

    Hi,
    We don't need active-active replication between our prod and DR; our DR can have a 20-minute RPO (Recovery Point Objective). So what is the main advantage of configuring and installing Oracle Data Guard compared to a simple standby server?
    My understanding of Data Guard is that Oracle will ship your logs automatically to the DR and apply them for me, instead of me doing it with 2 scripts (one on the primary server that ships the logs over, the second on the standby that applies the archive logs).

    1. If your primary site gets hit by something, you can fail over to the standby.
    2. If you have a large group of "reader" users on the primary, you can switch them to the standby using Active Data Guard.
    3. You can do a switchover and avoid an outage if your primary server needs work of any kind.
    4. You can perform backups at either site, taking even more load off your primary if needed.
    Or as the book says:
    Disaster recovery, data protection, and high availability.
    Down Side
    1. Cost
    2. Network Load (or additional load)
    Edited by: mseberg on Feb 7, 2011 10:23 AM
