Exadata to non-Exadata Data Guard

I was wondering if I can configure Data Guard between an Exadata RAC primary and a non-Exadata single-node standby.
I have set up RAC to single-node Data Guard configurations without problems, but I wonder if it is possible with Exadata. Since
Exadata is Intel-based, maybe I can save some thousands by building the DR site on non-Exadata hardware.
FJA

In theory, it should work. However, I suggest that you ask Exadata Support, or raise a query in the Exadata forum space.
Just realised that if you use Exadata-specific features (EHCC -- Hybrid Columnar Compression), you'd have difficulties.
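A quick way to check whether HCC is actually in use on the primary (a sketch against the 11.2 dictionary; adjust for your release):
SELECT owner, table_name, compress_for
  FROM dba_tables
 WHERE compress_for IN ('QUERY LOW','QUERY HIGH','ARCHIVE LOW','ARCHIVE HIGH');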
Hemant K Chitale

Similar Messages

  • Data Guard and EHCC on Exadata over to *Non*-Exadata box -- does it work?

Storage folks are proposing to DG the Exadata over to a non-Exadata box for backup purposes. I have been told it won't work. Seems to make sense that it wouldn't. Any confirmation out there in the docs?
    Daryl.

    Dan.Norris wrote:
    While true that the base 11.2.0.1 release was not able to decompress EHCC tables with alter table move, that functionality was made available to 11.2.0.1 starting with Exadata BP4 for 11.2.0.1 (via bug 9074066). MOS note 1316026.1 shows that 11.2.0.1 BP4 is when that functionality was added to 11.2.0.1. Technically, it wasn't "added" functionality, but rather the bug was fixed :)
    Thanks for the insight on this, Dan. I hadn't tested it again after BP4 came out. That's great to hear that they were able to get that fixed up.
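For anyone finding this later: once the standby is opened on non-Exadata storage at a bug-fix level that permits it, decompressing an HCC segment is done with a move (the table name below is illustrative; partitioned tables need MOVE PARTITION per partition):
ALTER TABLE sales_hist MOVE NOCOMPRESS;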

  • Exadata + Physical Standby (non-exadata) + Backups

Some folks are asking me for my opinion on a backup strategy - I don't think it's possible/feasible, but I will put it out here for comments.
Proposal:
Create a physical DG database (non-Exadata) on NetApp storage from an Exadata (full RAC V2).
Create RMAN backups of the physical DG using NetApp snapshot technologies.
I can't see how this would work. Sure, we can take the snaps on the PDG and restore the PDG, but ...
a) What if we lose a datafile on the Exadata?
It's ASM - what RMAN command would you run to restore the datafile? A simple RESTORE DATAFILE? I would think it would be more complicated.
b) What if we lost all of the Exadata - corruption, physical, logical, whatever ...
An RMAN RESTORE DATABASE wouldn't work, would it? Would it know to restore from the backups done on the PDG?
Either way - it seems that you are inevitably restoring upwards of ~100 TB from the PDG back to the Exadata via some pipe (NIC/IB).
I see one comment that says you can use PDG backups if they went to tape, but not if they went to disk. I assume the NetApp snaps would be considered disk backups.
    Thoughts?
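For what it's worth on (a): a plain RESTORE DATAFILE works the same whether the file lives in ASM or on a filesystem, since RMAN resolves the ASM names itself. A minimal sketch, with an illustrative file number:
RMAN> RESTORE DATAFILE 7;
RMAN> RECOVER DATAFILE 7;
RMAN> SQL 'ALTER DATABASE DATAFILE 7 ONLINE';
Scenario (b) is the harder one - wherever the backups are catalogued, the restore still has to move ~100 TB over the pipe, as noted above.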

Objectives -- quick restore times (< 8 hrs) - no reliance on tapes - full backups - reduced impact on the source db
Size being upwards of 50-100 TB
The vendor has outlined the product, which "seems" like a great thing - but will it work with Exadata?
    See this thread on the RAC-RMAN with snapshot on NetApp

  • Data Guard Failover after primary site network failure or disconnect.

    Hello Experts:
    I'll try to be clear and specific with my issue:
    Environment:
    Two nodes with NO shared storage (I don't have an Observer running).
Veritas Cluster Server (VCS) with the Data Guard agent. (I don't use the Broker; the Data Guard agent "takes care" of switchover and failover.)
Two single-instance databases, one per node. NO RAC.
What I am able to perform with no issues:
Manual switchover of the primary database by running the VCS command "hagrp -switch oraDG_group -to standby_node"
Automatic failover when the primary node is rebooted with "reboot" or "init"
Automatic failover when the primary node is shut down with "shutdown"
What I am NOT able to perform:
A failover when I manually unplug the network cables from the primary site (the whole network, not only the link between the primary and standby nodes - so it's as if the server had been unplugged from its power source).
The same happens if I physically disconnect the server from the power.
These are the alert logs I have:
    This is the portion of the alert log at Standby site when Real Time Replication is working fine:
    Recovery of Online Redo Log: Thread 1 Group 4 Seq 7 Reading mem 0
      Mem# 0: /u02/oracle/fast_recovery_area/standby_db/onlinelog/o1_mf_4_9c3tk3dy_.log
At this moment, node1 (primary) is completely disconnected from the network. See at the end how the database (the standby, which should be converted to PRIMARY) does not get all the archived logs from the primary due to the abnormal network disconnect:
    Identified End-Of-Redo (failover) for thread 1 sequence 7 at SCN 0xffff.ffffffff
    Incomplete Recovery applied until change 15922544 time 12/23/2013 17:12:48
    Media Recovery Complete (primary_db)
    Terminal Recovery: successful completion
    Forcing ARSCN to IRSCN for TR 0:15922544
    Mon Dec 23 17:13:22 2013
    ARCH: Archival stopped, error occurred. Will continue retrying
    ORACLE Instance primary_db - Archival ErrorAttempt to set limbo arscn 0:15922544 irscn 0:15922544
    ORA-16014: log 4 sequence# 7 not archived, no available destinations
    ORA-00312: online log 4 thread 1: '/u02/oracle/fast_recovery_area/standby_db/onlinelog/o1_mf_4_9c3tk3dy_.log'
    Resetting standby activation ID 2071848820 (0x7b7de774)
    Completed:  ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH
    Mon Dec 23 17:13:33 2013
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH
    Terminal Recovery: applying standby redo logs.
    Terminal Recovery: thread 1 seq# 7 redo required
    Terminal Recovery:
    Recovery of Online Redo Log: Thread 1 Group 4 Seq 7 Reading mem 0
      Mem# 0: /u02/oracle/fast_recovery_area/standby_db/onlinelog/o1_mf_4_9c3tk3dy_.log
    Attempt to do a Terminal Recovery (primary_db)
    Media Recovery Start: Managed Standby Recovery (primary_db)
    started logmerger process
    Mon Dec 23 17:13:33 2013
    Managed Standby Recovery not using Real Time Apply
    Media Recovery failed with error 16157
    Recovery Slave PR00 previously exited with exception 283
    ORA-283 signalled during:  ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH...
    Mon Dec 23 17:13:34 2013
    Shutting down instance (immediate)
    Shutting down instance: further logons disabled
    Stopping background process MMNL
    Stopping background process MMON
    License high water mark = 38
    All dispatchers and shared servers shutdown
    ALTER DATABASE CLOSE NORMAL
    ORA-1109 signalled during: ALTER DATABASE CLOSE NORMAL...
    ALTER DATABASE DISMOUNT
    Shutting down archive processes
    Archiving is disabled
    Mon Dec 23 17:13:38 2013
    Mon Dec 23 17:13:38 2013
    Mon Dec 23 17:13:38 2013
ARCH shutting down
ARCH shutting down
    ARCH shutting down
    ARC0: Relinquishing active heartbeat ARCH role
    ARC2: Archival stopped
    ARC0: Archival stopped
    ARC1: Archival stopped
    Completed: ALTER DATABASE DISMOUNT
    ARCH: Archival disabled due to shutdown: 1089
    Shutting down archive processes
    Archiving is disabled
    Mon Dec 23 17:13:40 2013
    Stopping background process VKTM
    ARCH: Archival disabled due to shutdown: 1089
    Shutting down archive processes
    Archiving is disabled
    Mon Dec 23 17:13:43 2013
    Instance shutdown complete
    Mon Dec 23 17:13:44 2013
    Adjusting the default value of parameter parallel_max_servers
    from 1280 to 470 due to the value of parameter processes (500)
    Starting ORACLE instance (normal)
    ************************ Large Pages Information *******************
    Per process system memlock (soft) limit = 64 KB
    Total Shared Global Region in Large Pages = 0 KB (0%)
    Large Pages used by this instance: 0 (0 KB)
    Large Pages unused system wide = 0 (0 KB)
    Large Pages configured system wide = 0 (0 KB)
    Large Page size = 2048 KB
    RECOMMENDATION:
      Total System Global Area size is 3762 MB. For optimal performance,
      prior to the next instance restart:
      1. Increase the number of unused large pages by
    at least 1881 (page size 2048 KB, total size 3762 MB) system wide to
      get 100% of the System Global Area allocated with large pages
      2. Large pages are automatically locked into physical memory.
    Increase the per process memlock (soft) limit to at least 3770 MB to lock
    100% System Global Area's large pages into physical memory
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Initial number of CPU is 32
    Number of processor cores in the system is 16
    Number of processor sockets in the system is 2
    CELL communication is configured to use 0 interface(s):
    CELL IP affinity details:
        NUMA status: NUMA system w/ 2 process groups
        cellaffinity.ora status: cannot find affinity map at '/etc/oracle/cell/network-config/cellaffinity.ora' (see trace file for details)
    CELL communication will use 1 IP group(s):
        Grp 0:
    Picked latch-free SCN scheme 3
    Autotune of undo retention is turned on.
    IMODE=BR
    ILAT =88
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    NUMA system with 2 nodes detected
    Starting up:
    Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options.
    ORACLE_HOME = /u01/oracle/product/11.2.0.4
    System name:    Linux
    Node name:      node2.localdomain
    Release:        2.6.32-131.0.15.el6.x86_64
    Version:        #1 SMP Tue May 10 15:42:40 EDT 2011
    Machine:        x86_64
    Using parameter settings in server-side spfile /u01/oracle/product/11.2.0.4/dbs/spfileprimary_db.ora
    System parameters with non-default values:
      processes                = 500
      sga_target               = 3760M
      control_files            = "/u02/oracle/orafiles/primary_db/control01.ctl"
      control_files            = "/u01/oracle/fast_recovery_area/primary_db/control02.ctl"
      db_file_name_convert     = "standby_db"
      db_file_name_convert     = "primary_db"
      log_file_name_convert    = "standby_db"
      log_file_name_convert    = "primary_db"
      control_file_record_keep_time= 40
      db_block_size            = 8192
      compatible               = "11.2.0.4.0"
      log_archive_dest_1       = "location=/u02/oracle/archivelogs/primary_db"
      log_archive_dest_2       = "SERVICE=primary_db ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=primary_db"
      log_archive_dest_state_2 = "ENABLE"
      log_archive_min_succeed_dest= 1
      fal_server               = "primary_db"
      log_archive_trace        = 0
      log_archive_config       = "DG_CONFIG=(primary_db,standby_db)"
      log_archive_format       = "%t_%s_%r.dbf"
      log_archive_max_processes= 3
      db_recovery_file_dest    = "/u02/oracle/fast_recovery_area"
      db_recovery_file_dest_size= 30G
      standby_file_management  = "AUTO"
      db_flashback_retention_target= 1440
      undo_tablespace          = "UNDOTBS1"
      remote_login_passwordfile= "EXCLUSIVE"
      db_domain                = ""
      dispatchers              = "(PROTOCOL=TCP) (SERVICE=primary_dbXDB)"
      job_queue_processes      = 0
      audit_file_dest          = "/u01/oracle/admin/primary_db/adump"
      audit_trail              = "DB"
      db_name                  = "primary_db"
      db_unique_name           = "standby_db"
      open_cursors             = 300
      pga_aggregate_target     = 1250M
      dg_broker_start          = FALSE
      diagnostic_dest          = "/u01/oracle"
    Mon Dec 23 17:13:45 2013
    PMON started with pid=2, OS id=29108
    Mon Dec 23 17:13:45 2013
    PSP0 started with pid=3, OS id=29110
    Mon Dec 23 17:13:46 2013
    VKTM started with pid=4, OS id=29125 at elevated priority
    VKTM running at (1)millisec precision with DBRM quantum (100)ms
    Mon Dec 23 17:13:46 2013
    GEN0 started with pid=5, OS id=29129
    Mon Dec 23 17:13:46 2013
    DIAG started with pid=6, OS id=29131
    Mon Dec 23 17:13:46 2013
    DBRM started with pid=7, OS id=29133
    Mon Dec 23 17:13:46 2013
    DIA0 started with pid=8, OS id=29135
    Mon Dec 23 17:13:46 2013
    MMAN started with pid=9, OS id=29137
    Mon Dec 23 17:13:46 2013
    DBW0 started with pid=10, OS id=29139
    Mon Dec 23 17:13:46 2013
    DBW1 started with pid=11, OS id=29141
    Mon Dec 23 17:13:46 2013
    DBW2 started with pid=12, OS id=29143
    Mon Dec 23 17:13:46 2013
    DBW3 started with pid=13, OS id=29145
    Mon Dec 23 17:13:46 2013
    LGWR started with pid=14, OS id=29147
    Mon Dec 23 17:13:46 2013
    CKPT started with pid=15, OS id=29149
    Mon Dec 23 17:13:46 2013
    SMON started with pid=16, OS id=29151
    Mon Dec 23 17:13:46 2013
    RECO started with pid=17, OS id=29153
    Mon Dec 23 17:13:46 2013
    MMON started with pid=18, OS id=29155
    Mon Dec 23 17:13:46 2013
    MMNL started with pid=19, OS id=29157
    starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
    starting up 1 shared server(s) ...
    ORACLE_BASE from environment = /u01/oracle
    Mon Dec 23 17:13:46 2013
    ALTER DATABASE   MOUNT
    ARCH: STARTING ARCH PROCESSES
    Mon Dec 23 17:13:50 2013
    ARC0 started with pid=23, OS id=29210
    ARC0: Archival started
    ARCH: STARTING ARCH PROCESSES COMPLETE
    ARC0: STARTING ARCH PROCESSES
    Successful mount of redo thread 1, with mount id 2071851082
    Mon Dec 23 17:13:51 2013
    ARC1 started with pid=24, OS id=29212
    Allocated 15937344 bytes in shared pool for flashback generation buffer
    Mon Dec 23 17:13:51 2013
    ARC2 started with pid=25, OS id=29214
    Starting background process RVWR
    ARC1: Archival started
    ARC1: Becoming the 'no FAL' ARCH
    ARC1: Becoming the 'no SRL' ARCH
    Mon Dec 23 17:13:51 2013
    RVWR started with pid=26, OS id=29216
    Physical Standby Database mounted.
    Lost write protection disabled
    Completed: ALTER DATABASE   MOUNT
    Mon Dec 23 17:13:51 2013
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
             USING CURRENT LOGFILE DISCONNECT FROM SESSION
    Attempt to start background Managed Standby Recovery process (primary_db)
    Mon Dec 23 17:13:51 2013
    MRP0 started with pid=27, OS id=29219
    MRP0: Background Managed Standby Recovery process started (primary_db)
    ARC2: Archival started
    ARC0: STARTING ARCH PROCESSES COMPLETE
    ARC2: Becoming the heartbeat ARCH
    ARC2: Becoming the active heartbeat ARCH
    ARCH: Archival stopped, error occurred. Will continue retrying
    ORACLE Instance primary_db - Archival Error
    ORA-16014: log 4 sequence# 7 not archived, no available destinations
    ORA-00312: online log 4 thread 1: '/u02/oracle/fast_recovery_area/standby_db/onlinelog/o1_mf_4_9c3tk3dy_.log'
At this moment, I've lost service and I have to wait until the primary server goes up again to receive the missing log.
    This is the rest of the log:
    Fatal NI connect error 12543, connecting to:
    (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=node1)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=primary_db)(CID=(PROGRAM=oracle)(HOST=node2.localdomain)(USER=oracle))))
      VERSION INFORMATION:
            TNS for Linux: Version 11.2.0.4.0 - Production
            TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
      Time: 23-DEC-2013 17:13:52
      Tracing not turned on.
      Tns error struct:
        ns main err code: 12543
    TNS-12543: TNS:destination host unreachable
        ns secondary err code: 12560
        nt main err code: 513
    TNS-00513: Destination host unreachable
        nt secondary err code: 113
        nt OS err code: 0
    Fatal NI connect error 12543, connecting to:
    (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=node1)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=primary_db)(CID=(PROGRAM=oracle)(HOST=node2.localdomain)(USER=oracle))))
      VERSION INFORMATION:
            TNS for Linux: Version 11.2.0.4.0 - Production
            TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
      Time: 23-DEC-2013 17:13:55
      Tracing not turned on.
      Tns error struct:
        ns main err code: 12543
    TNS-12543: TNS:destination host unreachable
        ns secondary err code: 12560
        nt main err code: 513
    TNS-00513: Destination host unreachable
        nt secondary err code: 113
        nt OS err code: 0
    started logmerger process
    Mon Dec 23 17:13:56 2013
    Managed Standby Recovery starting Real Time Apply
    MRP0: Background Media Recovery terminated with error 16157
    Errors in file /u01/oracle/diag/rdbms/standby_db/primary_db/trace/primary_db_pr00_29230.trc:
    ORA-16157: media recovery not allowed following successful FINISH recovery
    Managed Standby Recovery not using Real Time Apply
    Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
             USING CURRENT LOGFILE DISCONNECT FROM SESSION
    Recovery Slave PR00 previously exited with exception 16157
    MRP0: Background Media Recovery process shutdown (primary_db)
    Fatal NI connect error 12543, connecting to:
    (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=node1)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=primary_db)(CID=(PROGRAM=oracle)(HOST=node2.localdomain)(USER=oracle))))
      VERSION INFORMATION:
            TNS for Linux: Version 11.2.0.4.0 - Production
            TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
      Time: 23-DEC-2013 17:13:58
      Tracing not turned on.
      Tns error struct:
        ns main err code: 12543
    TNS-12543: TNS:destination host unreachable
        ns secondary err code: 12560
        nt main err code: 513
    TNS-00513: Destination host unreachable
        nt secondary err code: 113
        nt OS err code: 0
    Mon Dec 23 17:14:01 2013
    Fatal NI connect error 12543, connecting to:
    (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=node1)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=primary_db)(CID=(PROGRAM=oracle)(HOST=node2.localdomain)(USER=oracle))))
      VERSION INFORMATION:
            TNS for Linux: Version 11.2.0.4.0 - Production
            TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
      Time: 23-DEC-2013 17:14:01
      Tracing not turned on.
      Tns error struct:
        ns main err code: 12543
    TNS-12543: TNS:destination host unreachable
        ns secondary err code: 12560
        nt main err code: 513
    TNS-00513: Destination host unreachable
        nt secondary err code: 113
        nt OS err code: 0
    Error 12543 received logging on to the standby
    FAL[client, ARC0]: Error 12543 connecting to primary_db for fetching gap sequence
    Archiver process freed from errors. No longer stopped
    Mon Dec 23 17:15:07 2013
    Using STANDBY_ARCHIVE_DEST parameter default value as /u02/oracle/archivelogs/primary_db
    Mon Dec 23 17:19:51 2013
    ARCH: Archival stopped, error occurred. Will continue retrying
    ORACLE Instance primary_db - Archival Error
    ORA-16014: log 4 sequence# 7 not archived, no available destinations
    ORA-00312: online log 4 thread 1: '/u02/oracle/fast_recovery_area/standby_db/onlinelog/o1_mf_4_9c3tk3dy_.log'
    Mon Dec 23 17:26:18 2013
    RFS[1]: Assigned to RFS process 31456
    RFS[1]: No connections allowed during/after terminal recovery.
    Mon Dec 23 17:26:47 2013
    flashback database to scn 15921680
    ORA-16157 signalled during: flashback database to scn 15921680...
    Mon Dec 23 17:27:05 2013
    alter database recover managed standby database using current logfile disconnect
    Attempt to start background Managed Standby Recovery process (primary_db)
    Mon Dec 23 17:27:05 2013
    MRP0 started with pid=28, OS id=31481
    MRP0: Background Managed Standby Recovery process started (primary_db)
    started logmerger process
    Mon Dec 23 17:27:10 2013
    Managed Standby Recovery starting Real Time Apply
    MRP0: Background Media Recovery terminated with error 16157
    Errors in file /u01/oracle/diag/rdbms/standby_db/primary_db/trace/primary_db_pr00_31486.trc:
    ORA-16157: media recovery not allowed following successful FINISH recovery
    Managed Standby Recovery not using Real Time Apply
    Completed: alter database recover managed standby database using current logfile disconnect
    Recovery Slave PR00 previously exited with exception 16157
    MRP0: Background Media Recovery process shutdown (primary_db)
    Mon Dec 23 17:27:18 2013
    RFS[2]: Assigned to RFS process 31492
    RFS[2]: No connections allowed during/after terminal recovery.
    Mon Dec 23 17:28:18 2013
    RFS[3]: Assigned to RFS process 31614
    RFS[3]: No connections allowed during/after terminal recovery.
    Do you have any advice?
    Thanks!
    Alex.

    Hello;
    What's not clear to me in your question at this point:
"What I am NOT able to perform:
A failover when I manually unplug the network cables from the primary site (the whole network, not only the link between the primary and standby nodes - so it's as if the server had been unplugged from its power source).
The same happens if I physically disconnect the server from the power.
These are the alert logs I have:"
    Are you trying a failover to the Standby?
    Please advise.
    Is it possible your "valid_for clause" is set incorrectly?
I would also review this:
    ORA-16014 and ORA-00312 Messages in Alert.log of Physical Standby
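For reference, the manual failover sequence that the agent would be driving on the standby looks like this (a generic sketch, not specific to this setup):
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
or, if terminal recovery cannot complete:
SQL> ALTER DATABASE ACTIVATE PHYSICAL STANDBY DATABASE;
The ORA-16157 in your log says a FINISH had already completed successfully, so the standby was left waiting for the switchover-to-primary step.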
    Best Regards
    mseberg

How do I find data loss in Data Guard?

We are using redo transport in ASYNC mode; the following is our setting:
SERVICE=xxx_sb max_failure=100 reopen=600 LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=xxx_sb
When I query V$MANAGED_STANDBY for DELAY_MINS, it's always zero, meaning there is no delay in copying a log. I have 2 questions:
1. How can I communicate to the business that in the worst case we will lose x minutes of data? It's an OLTP system where transactions are less than 2 minutes, though during the night there are some batch jobs where the transactions are 60 minutes long.
2. Most of the time during peak hours there is a log switch happening every 10-15 minutes, but during non-peak hours it may not happen for a long period. Is it advisable to set ARCHIVE_LAG_TARGET to 10 minutes, given that I'm not using the archiver - we are using the log writer for the standby?
Any explanation or pointer to documentation would be appreciated,
    Thanks,

Production databases running a properly configured Data Guard setup don't have any data loss, because the failover operation ensures zero data loss if Data Guard is configured in maximum protection or maximum availability mode at failover time.
http://www.dbazone.com/docs/oracle_10gDataGuard_overview.pdf
The above PDF is an Oracle white paper which confirms it too.
LGWR SYNC AFFIRM in Oracle Data Guard is used for zero data loss. How does one ensure zero data loss? Well, the redo block generated at the primary has to reach the standby across the network (that's where the SYNC part comes in - i.e. it is a synchronous network call), and then the block has to be written to disk on the standby (that's where the AFFIRM part comes in) - typically to a standby redo log.
Can you have LGWR SYNC NOAFFIRM? Yes, sure. Then you will have synchronous network transport, but the only thing you are guaranteed is that the block has reached the remote standby's memory. It has not been written to disk yet. So it is not really a zero data loss solution (e.g. what if the standby instance crashes before the disk I/O?).
To sum up -> LGWR SYNC AFFIRM means primary transaction commits wait for network I/O + disk I/O acknowledgements. LGWR SYNC NOAFFIRM means primary transaction commits wait for network I/O only.
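Two concrete things to go with that. To quantify the current exposure with ASYNC, query the standby (v$dataguard_stats is available from 10g onward):
SQL> SELECT name, value FROM v$dataguard_stats WHERE name IN ('transport lag', 'apply lag');
And if zero data loss is the actual requirement, the destination has to be SYNC AFFIRM - a sketch reusing the xxx_sb service from the question:
SERVICE=xxx_sb LGWR SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=xxx_sb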
    Source:http://www.dbasupport.com/forums/showthread.php?t=54467
    HTH
    Girish Sharma

Clarification on Data Guard (Physical Standby DB)

    Hi guys,
I have been trying to set up Data Guard with a physical standby database for the past few weeks and I think I have managed to set it up and also perform a switchover. I have been reading a lot of websites and even the Oracle docs for this.
However, I need clarification on the setup and whether or not it is working as expected.
    My environment is Windows 32bit (Windows 2003)
    Oracle 10.2.0.2 (Client/Server)
    2 Physical machines
    Here is what I have done.
    Machine 1
1. Created a primary database using standard DBCA; hence the Oracle service (oradgp) and password file were also created, along with the listener service.
    2. Modify the pfile to include the following:-
    oradgp.__db_cache_size=436207616
    oradgp.__java_pool_size=4194304
    oradgp.__large_pool_size=4194304
    oradgp.__shared_pool_size=159383552
    oradgp.__streams_pool_size=0
    *.audit_file_dest='M:\oracle\product\10.2.0\admin\oradgp\adump'
    *.background_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\bdump'
    *.compatible='10.2.0.3.0'
    *.control_files='M:\oracle\product\10.2.0\oradata\oradgp\control01.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control02.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control03.ctl'
    *.core_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_name='oradgp'
    *.db_recovery_file_dest='M:\oracle\product\10.2.0\flash_recovery_area'
    *.db_recovery_file_dest_size=21474836480
    *.fal_client='oradgp'
    *.fal_server='oradgs'
    *.job_queue_processes=10
    *.log_archive_dest_1='LOCATION=E:\ArchLogs VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=oradgp'
    *.log_archive_dest_2='SERVICE=oradgs LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=oradgs'
    *.log_archive_format='ARC%S_%R.%T'
    *.log_archive_max_processes=30
    *.nls_territory='IRELAND'
    *.open_cursors=300
    *.pga_aggregate_target=203423744
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=612368384
    *.standby_file_management='auto'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\udump'
    *.service_names=oradgp
    The locations on the harddisk are all available and archived redo are created (e:\archlogs)
3. I then added the necessary (4) standby redo logs on the primary.
4. To replicate the db on machine 2 (the standby db), I did an RMAN backup as:-
RMAN> run
{allocate channel d1 type disk format='M:\DGBackup\stby_%U.bak';
backup database plus archivelog delete input;
}
5. I then copied the stby~.bak files created on machine1 over to machine2, into the same directory (M:\DGBackup), since I maintained exactly the same directory structure between the 2 machines.
    6. Then created a standby controlfile. (At this time the db was in open/write mode).
    7. I then copied this standby ctl file to machine2 under the same directory structure (M:\oracle\product\10.2.0\oradata\oradgp) and replicated the same ctl file into 3 different files such as: CONTROL01.CTL, CONTROL02.CTL & CONTROL03.CTL
Machine 2
    8. I created an Oracle service called the same as primary (oradgp).
9. Created a listener also, and set the Oracle Home & SID to the same name as the primary (oradgp) <<<-- I am not sure about the sid one.
    10. I then copied over the pfile from the primary to standby and created an spfile with this one.
    It looks like this:-
    oradgp.__db_cache_size=436207616
    oradgp.__java_pool_size=4194304
    oradgp.__large_pool_size=4194304
    oradgp.__shared_pool_size=159383552
    oradgp.__streams_pool_size=0
    *.audit_file_dest='M:\oracle\product\10.2.0\admin\oradgp\adump'
    *.background_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\bdump'
    *.compatible='10.2.0.3.0'
    *.control_files='M:\oracle\product\10.2.0\oradata\oradgp\control01.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control02.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control03.ctl'
    *.core_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_name='oradgp'
    *.db_recovery_file_dest='M:\oracle\product\10.2.0\flash_recovery_area'
    *.db_recovery_file_dest_size=21474836480
    *.fal_client='oradgs'
    *.fal_server='oradgp'
    *.job_queue_processes=10
    *.log_archive_dest_1='LOCATION=E:\ArchLogs VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=oradgs'
    *.log_archive_dest_2='SERVICE=oradgp LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=oradgp'
    *.log_archive_format='ARC%S_%R.%T'
    *.log_archive_max_processes=30
    *.nls_territory='IRELAND'
    *.open_cursors=300
    *.pga_aggregate_target=203423744
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=612368384
    *.standby_file_management='auto'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\udump'
    *.service_names=oradgs
    log_file_name_convert='junk','junk'
11. Used RMAN to restore the db as:-
    RMAN> startup mount;
    RMAN> restore database;
    Then RMAN created the datafiles.
    12. I then added the same number (4) of standby redo logs to machine2.
13. Also added a tempfile: though the temp tablespace was created as part of the RMAN restore, the actual file (temp01.dbf) didn't get created, so I created the tempfile manually.
    14. Ensuring the listener and Oracle service were running and that the database on machine2 was in MOUNT mode, I then started the redo apply using:-
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
It seems to have started the redo apply; I checked the alert log and noticed that the sequence#s were all "YES" for applied.
    ****However I noticed that in the alert log the standby was complaining about the online REDO log not being present****
    So copied over the REDO logs from the primary machine and placed them in the same directory structure of the standby.
    ########Q1. I understand that the standby database does not need online REDO Logs but why is it reporting in the alert log then??########
I wanted to enable real-time apply, so I cancelled the recovery with:-
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    and issued:-
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
    This too was successful and I noticed that the recovery mode is set to MANAGED REAL TIME APPLY.
    Checked this via the primary database also and it too reported that the DEST_2 is in MANAGED REAL TIME APPLY.
Also performed a log switch on the primary and it got transported to the standby and was applied (YES).
    Also ensured that there are no gaps via some queries where no rows were returned.
    15. I now wanted to perform a switchover, hence issued:-
    Primary_SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
    All the archivers stopped as expected.
    16. Now on machine2:
    Stdby_SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
    17. On machine1:
    Primary_Now_Standby_SQL>SHUTDOWN IMMEDIATE;
    Primary_Now_Standby_SQL>STARTUP MOUNT;
    Primary_Now_Standby_SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
18. On machine2:
    Stdby_Now_Primary_SQL>ALTER DATABASE OPEN;
    Checked by switching the logfile on the new primary and ensured that the standby received this logfile and was applied (YES).
    However, here are my questions for clarifications:-
    Q1. There is a question about ONLINE REDO LOGS within "#" characters.
    Q2. Do you see me doing anything wrong in regards to naming the directory structures? Should I have renamed the dbname directory in the Oracle Home to oradgs rather than oradgp?
    Q3. When I enabled real time apply does that mean, that I am not in 'MANAGED' mode anymore? Is there an un-managed mode also?
Q4. After the switchover, I have noticed that the MRP0 process is in "APPLYING_LOG" status for a sequence# which is not even the latest sequence# as per v$archived_log. By this I mean:-
SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
    MRP0 APPLYING_LOG 1 47 452 1024000
    but :
    SQL> select max(sequence#) from v$archived_log;
    46
    Why is that? Also I have noticed that one of the sequence#s is NOT applied but the later ones are:-
    SQL> SELECT SEQUENCE#,APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
    42 NO
    43 YES
    44 YES
    45 YES
    46 YES
    What could be the possible reasons why sequence# 42 didn't get applied but the others did?
After reading several documents I am confused at this stage, because I have read that you can set up standby databases using 'standby' logs - but is there another method without using standby logs?
Q5. The log switch isn't happening automatically on the primary database, where I could see the whole process happening on its own, such as generation of a new logfile, that being transported to the standby and then being applied on the standby.
Could this be due to inactivity on the primary database, as I am not doing anything on it?
    Sorry if I have missed out something guys but I tried to put in as much detail as I remember...
    Thank you very much in advance.
    Regards,
    Bharath
    Edited by: Bharath3 on Jan 22, 2010 2:13 AM

    Parameters:
    Missing on the Primary:
    DB_UNIQUE_NAME=oradgp
    LOG_ARCHIVE_CONFIG=DG_CONFIG=(oradgp, oradgs)
    Missing on the Standby:
    DB_UNIQUE_NAME=oradgs
    LOG_ARCHIVE_CONFIG=DG_CONFIG=(oradgp, oradgs)
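Assuming an spfile, these can be set as follows (shown for the standby; the primary is analogous with oradgp). Note that DB_UNIQUE_NAME is not dynamic, so it needs SCOPE=SPFILE and a restart:
SQL> ALTER SYSTEM SET log_archive_config='DG_CONFIG=(oradgp,oradgs)' SCOPE=BOTH;
SQL> ALTER SYSTEM SET db_unique_name='oradgs' SCOPE=SPFILE;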
    You said: Also added a tempfile though the temp tablespace was created per the restore via RMAN, I think the actual file (temp01.dbf) didn't get created, so I manually created the tempfile.
    RMAN should have also added the temp file. Note that as of 11g RMAN duplicate for standby will also add the standby redo log files at the standby if they already existed on the Primary when you took the backup.
    You said: ****However I noticed that in the alert log the standby was complaining about the online REDO log not being present****
    That is just the weird error that the RDBMS returns when the database tries to find the online redo log files. You see that at the start of the MRP because it tries to open them and if it gets the error it will manually create them based on their file definition in the controlfile combined with LOG_FILE_NAME_CONVERT if they are in a different place from the Primary.
    Your questions (Q1 answered above):
    You said: Q2. Do you see me doing anything wrong in regards to naming the directory structures? Should I have renamed the dbname directory in the Oracle Home to oradgs rather than oradgp?
    Up to you. Not a requirement.
    You said: Q3. When I enabled real time apply does that mean, that I am not in 'MANAGED' mode anymore? Is there an un-managed mode also?
    You are always in MANAGED mode when you use the RECOVER MANAGED STANDBY DATABASE command. If you use manual recovery "RECOVER STANDBY DATABASE" (NOT RECOMMENDED EVER ON A STANDBY DATABASE) then you are effectively in 'non-managed' mode although we do not call it that.
    You said: Q4. After the switchover, I have noticed that the MRP0 process is "APPLYING LOG" status to a sequence# which is not even the latest sequence# as per v$archived_log. By this I mean:-
    Log 46 (in your example) is the last FULL and ARCHIVED log hence that is the latest one to show up in V$ARCHIVED_LOG as that is a list of fully archived log files. Sequence 47 is the one that is current in the Primary online redo log and also current in the standby's standby redo log and as you are using real time apply that is the one it is applying.
    You said: What could be the possible reasons why sequence# 42 didn't get applied but the others did?
42 was probably a gap. Select the FAL columns as well and it will probably say 'YES' for FAL. We do not update the Primary's controlfile every time we resolve a gap. Try the same command on the standby and you will see that 42 was indeed applied. Redo can never be applied out of order, so the max(sequence#) from v$archived_log where applied = 'YES' will tell you that every sequence before that number has to have been applied.
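Something along these lines, run on the standby (V$ARCHIVED_LOG carries a FAL column in 10g):
SQL> SELECT sequence#, applied, fal FROM v$archived_log ORDER BY sequence#;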
    You said: After reading several documents I am confused at this stage because I have read that you can setup standby databases using 'standby' logs but is there another method without using standby logs?
Yes, if you do not have standby redo log files on the standby then we write directly to an archive log, which means potentially large data loss at failover and no real-time apply. That was the old 9i method for ARCH. Don't do that. Always have standby redo logs (SRLs).
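A sketch of adding one SRL on this setup - the path and size are illustrative; size them the same as the online logs and create one more group per thread than the online log count:
SQL> ALTER DATABASE ADD STANDBY LOGFILE ('M:\oracle\product\10.2.0\oradata\oradgp\srl01.log') SIZE 50M;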
    You said: Q5. The log switch isn't happening automatically on the primary database where I could see the whole process happening on it own, such as generation of a new logfile, that being transported to the standby and then being applied on the standby.
    Could this be due to inactivity on the primary database as I am not doing anything on it?
Log switches on the Primary happen when the current log gets full, when a log switch has not happened for the number of seconds you specified in the ARCHIVE_LAG_TARGET parameter, or when you say ALTER SYSTEM SWITCH LOGFILE (or use one of the other methods for switching log files). The heartbeat redo will eventually fill up an online log file, but it is about 13 bytes, so you do the math on how long that would take :^)
You are shipping redo with ASYNC, so we send the redo as it is committed; there is no wait for the log switch. And we are in real time apply, so there is no wait for the log switch to apply that redo. In theory you could create an online log file large enough to hold an entire day's worth of redo and never switch for the whole day, and the standby would still be caught up with the primary.

  • Delay=60 not working in 11.2.0.2 Data Guard

    Hi Friends,
    I am using 11.2.0.2 Data Guard.
I set DELAY=60 for the standby database in the init parameters of the primary database and bounced both the primary and standby DBs.
But as soon as I perform a log switch on the primary DB, it is applied on the standby DB immediately, ignoring my DELAY parameter.
The physical standby is mounted and redo apply is enabled.
    Please let me know the reason.
    Parameters:
    LOG_ARCHIVE_DEST_1=
    'LOCATION=/data/dg/arch1/chicago/
    VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
    DB_UNIQUE_NAME=chicago'
    LOG_ARCHIVE_DEST_2=
    'SERVICE=boston ASYNC DELAY=60
    VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
    DB_UNIQUE_NAME=boston'
    LOG_ARCHIVE_DEST_STATE_1=ENABLE
    LOG_ARCHIVE_DEST_STATE_2=ENABLE
    Standby:
    SEQUENCE# APPLIED
    1208 YES
    1209 YES
    1210 YES
    1211 YES
    1212 YES
    1213 YES
    1214 YES
    1215 YES
    1216 IN-MEMORY
    Regards,
    DB

    Hello;
    There must be some small mistake.
    Test
    Release 11.2.0.3.0
    Test of Sync before
    DB_NAME    HOSTNAME       LOG_ARCHIVED LOG_APPLIED APPLIED_TIME   LOG_GAP
    PRIMARY    MYHOST                  221         221 20-MAR/08:33         0
    1 row selected.
    Setting of log_archive_dest_n
log_archive_dest_2='SERVICE=STANDBY LGWR ASYNC DELAY=90 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
Perform several log switches
    Check the time
    SQL> !date
    Wed Mar 20 08:44:15 CDT 2013
    Changes to Standby = None
    Query used to check apply :
    http://www.visi.com/~mseberg/data_guard/monitor_data_guard_transport.html
    Check the time again
    SQL> !date
    Wed Mar 20 08:49:45 CDT 2013
    Test of Sync after
    DB_NAME    HOSTNAME       LOG_ARCHIVED LOG_APPLIED APPLIED_TIME   LOG_GAP
    PRIMARY    MYHOST                  226         221 20-MAR/08:33         5
1 row selected.
Works.
    Test after 10 minutes
    SQL> !date
    Wed Mar 20 08:54:54 CDT 2013
    DB_NAME    HOSTNAME       LOG_ARCHIVED LOG_APPLIED APPLIED_TIME   LOG_GAP
    PRIMARY    MYHOST                  226         221 20-MAR/08:33         5
    1 row selected.
    half hour check
    SQL> !date
    Wed Mar 20 09:16:25 CDT 2013
    DB_NAME    HOSTNAME       LOG_ARCHIVED LOG_APPLIED APPLIED_TIME   LOG_GAP
PRIMARY    MYHOST                  226         221 20-MAR/08:33         5
    1 row selected.
    Much later after the delay has past the logs are applied as expected.
    SQL> !date
    Wed Mar 20 12:12:53 CDT 2013
    DB_NAME    HOSTNAME       LOG_ARCHIVED LOG_APPLIED APPLIED_TIME   LOG_GAP
    PRIMARY    MYHOST                   226         226 20-MAR/08:44         0
    1 row selected.
    Standby alert log
    Media Recovery Delayed for 88 minute(s) (thread 1 sequence 222)
    Wed Mar 20 10:01:12 2013
    Media Recovery Log /u01/app/oracle/flash_recovery_area/STANDBY/archivelog/2013_03_20/o1_mf_1_222_8nmgnnjk_.arc
    Media Recovery Log /u01/app/oracle/flash_recovery_area/STANDBY/archivelog/2013_03_20/o1_mf_1_223_8nmgno26_.arc
Media Recovery Delayed for 89 minute(s) (thread 1 sequence 224)
Best Regards
    mseberg
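One documented behavior worth ruling out here, though the thread does not confirm it as the cause: real-time apply ignores the DELAY attribute. A sketch to check (query on the primary, recovery commands on the standby):
SQL> SELECT dest_id, recovery_mode FROM v$archive_dest_status WHERE dest_id = 2;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;
If RECOVERY_MODE reports MANAGED REAL TIME APPLY, DELAY=60 will be ignored until apply is restarted without USING CURRENT LOGFILE, as above.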

  • Data Guard Administration Question.... (10gR2)

After considerable trial and error, I have a running logical standby between two 10gR2 databases.
1) During the install of the primary database, I didn't comply fully with the OFA standard (I was slightly off on the placement of my database devices). During the Data Guard configuration, the option of "converting to OFA" was selected (per a Metalink article that I read regarding a problem choosing to keep the filenames the same as the primary's). Of course, now I have an issue creating a tablespace on the Primary when keeping the non-OFA directory structure: when it attempts to do the same on the Standby, I get the error that it cannot create the datafile. Makes sense, but what should I do in the future? Create the non-OFA directory structure on the Standby (assuming it would then create the file)? Isn't there a filename conversion parameter that handles this as well?
2) I got myself into a pinch this afternoon, partly due to #1. I am importing a file from another instance onto the Primary to begin testing reports on the Secondary. Prior to the import I created a tablespace (which is what got me to problem #1), proceeded to create the owner of the schema that's going to be imported, then performed the import. Now the apply process is erroring and going offline every few seconds as it works its way through the "cannot create table" errors that the import is running into on the Secondary. How do I handle a large batch of transactions like this? Ultimately I would like to get back to square 1... no user and no imported data on the Primary, and the apply process online.
    Thanks:
    Chris

So what I finally did was turn DG offline, create the tablespace on the secondary, then the user, and then turn apply back online. The import proceeded fairly smoothly. Problem resolved.
However, I still need some insight as to exactly how the DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT parameters work. I have LOG_FILE_NAME_CONVERT set up (correctly, I think) but I get a warning message in DG that says the configuration is inconsistent with the actual setup.
    Here's the way things are setup:
    I have 3 redo logs:
    primary (non-ofa):
    /opt/oracle10/product/oradata/ICCORE10G2/redo01.log
    ... redo02.log
    ... redo03.log
    secondary (ofa):
    /opt/oracle10/product/10.2.0.1.0/oradata/ICCDG2/redo01.log
    ... redo02.log
    ... redo03.log
    LOG_FILE_NAME_CONVERT=('/opt/oracle10/product/oradata/ICCORE10G2/', '/opt/oracle10/product/10.2.0.1.0/oradata/ICCDG2/')
    Is the above parameter set correctly?
DB_FILE_NAME_CONVERT is unset as of now, but the directory structure above is the same. I assume the parameter needs to be set just like LOG_FILE_NAME_CONVERT above.
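Assuming the datafile trees mirror the redo layout above, the matching setting would look like:
DB_FILE_NAME_CONVERT=('/opt/oracle10/product/oradata/ICCORE10G2/', '/opt/oracle10/product/10.2.0.1.0/oradata/ICCDG2/')
Like LOG_FILE_NAME_CONVERT it is a static 'primary-pattern','standby-pattern' pair, so it goes in the spfile/pfile and takes effect on restart.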
    Thanks

  • Oracle EBS R12 - DR setup using Oracle Data Guard

    Hi,
Our customer has implemented Oracle EBS R12 (12.0.6) on a two-node RAC on an HP-UX environment. The application tier also has two nodes (Admin Tier and Web Tier).
    Oracle Clusterware - OCR and Vote Disk are on Raw Devices and the EBS database is on ASM.
The customer wants to implement a DR solution with Oracle Data Guard using only 2 servers - 1 for the database tier and 1 for the application tier at the DR site.
I would like to know if this could be done by following Note 452056.1. I would also like to know if there are other useful docs.
    Thanks.
    Thiru

    Hi,
The customer escalated this issue to Oracle and they came up with this reply:
They can implement a disaster recovery solution from RAC to non-RAC using Solution A, which uses RMAN utilities for backup and recovery.
While AMP (Application Management Pack) now gives the capability to build a non-RAC environment from a RAC environment, in future AMP will also be capable of supporting cloning of a DB with Data Guard.
    Details
    Solution A - Using RMAN:
For Release 12 customers, you can clone from RAC to RAC (like to like) or RAC to non-RAC. This is done using RMAN scripts to take a copy of the db while it's in archivelog mode. If you're leveraging any disaster recovery tools like Data Guard, the above solution should work fine… perhaps with some fine tuning of the procedures.
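The RMAN approach in Solution A amounts to a standby duplicate. A minimal sketch, assuming 10g/11g RMAN with the standby instance started NOMOUNT as the auxiliary (Note 452056.1 covers the EBS-specific steps):
RMAN> run {
  allocate channel d1 type disk;
  allocate auxiliary channel a1 type disk;
  duplicate target database for standby dorecover nofilenamecheck;
}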
    Solution B - Using AMP in the future release
For Release 11i and Release 12.x customers, AMP would be offering a new cloning solution that wouldn't have a dependency on Rapid Clone. This solution leverages the strengths of EM Grid Control in provisioning and cloning targets such as databases. This would be an advanced solution that would support:
    o A full-fledged scale down cloning
    o Cloning of EBS deployed on Shared File System
    o Hot and Cold mode cloning
    o Quantifiable reduction in clone time for the entire EBS system
    This solution would be leveraging EM grid Control’s DB provisioning pack’s clone utilities, that are quite advanced and support cloning of DB with Data Guard.
The recommendation is to start trialing AMP version 3.0, making the purchase and implementing it within the enterprise. As we release the new version of AMP (release 3.0.1), the customer would be in a better position to quickly implement the latest features.
Can anyone let me know if the suggested Solution A will work?
    Rgds,
    Thiru

10.2 EM: Wrong host in Data Guard page

    Hi.
I've installed 10.2 OEM on my host to manage my Oracle environment.
I have a failover cluster on 2 Solaris 9 hosts and some 10.1 instances on 2 logical hosts.
I've configured one agent per physical host and one agent per logical host on every physical host.
All works.
Next, I have another host on which there are some standby databases; on this host I've also installed an agent.
Now the problem:
On my logical host I have a primary database. The OEM database instance page says that the database is on the logical host, but if I go into the Data Guard page I find that the primary database is on the physical host, and if I click the link for the primary database I get an error page that says:
Database ... is not discovered.
The link to the home page is not available.
I've tried removing the database from OEM and rediscovering it, but nothing changed.
Suggestions?

If it is RAC, the listener should be configured not from the Database home but from the Grid home. You can use the 'srvctl add listener' command to add the listener to the cluster registry.
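Illustrative 11.2 syntax - the listener name, port and home are placeholders:
srvctl add listener -l LISTENER_DG -p 1521 -o $GRID_HOME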

  • Data Guard - Grid Control - Standby database

Hey,
now I have two RAC clusters. The next step would be to set up a standby database to prepare for a graceful switchover.
But what are the next steps?
I am currently installing Grid Control on a separate host with the option: Enterprise Manager 10g Grid Control Using an Existing Database.
    The installation is still running....
    What are the steps ? Any documentation ? Any tutorials ?
    Do I need to install agents on each client in the cluster ?
    What about the name resolution ?
    Any listener configuration necessary ?
    Is the configuration done via data guard command line ?
    Christian

Bear in mind that the standby creation wizard in Grid Control will only create a non-RAC standby. Once you get to Grid Control 10.2.0.5 and your databases are on 11g, you will be able to use the Grid Control convert-to-RAC wizard on your standby. Prior to that, you will have to convert the standby to RAC manually, or follow the above-mentioned paper to create the standby by hand and then import it into Grid Control.
    Larry

Oracle Data Guard configuration for primary and standby db_name

    I am working on configuring an active data guard for one primary DB and one standby DB. I have a few questions:
1. Can I use different db_name, db_unique_name and instance_name values for the primary and standby? For example: primary (db_name, db_unique_name and instance_name) = chicago. When I create the standby DB with an RMAN backup and a copy of the pfile and control file from the primary DB, or use Grid Control to create the standby database, the Oracle documentation and Grid Control both keep the standby db_name=chicago and only make the standby db_unique_name and instance_name=boston. Due to my application system's constraints, I want to make db_name=boston, not keep it the same as the primary's chicago. Is this a valid configuration?
2. In the primary's datafiles, the application system generates datafile names like this: hr_chicago_01.dbf, fn_chicago_01.dbf. When I move the datafiles to the standby server, if I plan to use db_name=boston for the standby DB, can I rename the datafiles as
hr_boston_01.dbf, fn_boston_01.dbf? In this way, the datafile names match up with the db_name. But I will create the standby log groups and members on the primary and standby identically, so that in a future switchover the DB will not have problems.
3. If I don't use a primary DB backup, and instead copy all datafiles and redo log files (no control files) to the standby, then "alter database backup controlfile to trace" on the primary and also "create pfile='/xxx/initSTANDBY.ora'" from the primary, then modify the init.ora and controlfile and run control.sql to bring the standby DB up, and after that configure redo log shipping and apply with Data Guard or SQL - is this an acceptable way to create a physical standby DB?
    Please advise your comments. Thanks in advance.

"I want to make db_name=boston, not keep it as the same as primary=chicago. Is this valid configuration?"
NO. DB_NAME must be the same ("chicago") at both sites. The Standby will be using a different DB_UNIQUE_NAME (e.g. "boston") and can be using a different Instance name / SID (e.g. "boston").
"can I rename datafiles"
Yes. The database file names can be changed.
"If I don't use primary DB backup. Instead, I copy all datafiles, redo_log files (no control files) to standby"
What is the difference between the first sentence (a backup of the primary) and the second sentence (a copy of the primary)? A copy is a backup.
Are you intending to differentiate between an RMAN backup and a user-managed (aka "scripted") backup?
Normally, for Data Guard, you can use non-RMAN methods to copy the database, but there's no value added in this.
You'd still have to set up Data Guard! (And I wonder if you'd have complications setting up Active Data Guard.)
But remember that you MUST create the Standby controlfile from the Primary and copy it over to your Standby -- particularly as you are planning to use Data Guard. This is not created by 'alter database backup controlfile to trace', but by 'ALTER DATABASE CREATE STANDBY CONTROLFILE AS ''filename'''.
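For example (the path is illustrative):
SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/standby.ctl';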
    Hemant K Chitale

  • Oracle9i Data Guard - Filtering for a Logical Standby DB?

    Hello All
1) When using Oracle9i Data Guard with a logical standby database, is it possible to "screen" the SQL statements that are executed? For example, if I don't want any "delete" commands to be replicated on the standby box, can I filter them out?
2) Are there any unlogged transactions that don't get "replicated"? Can I get a list of these commands/transactions?
    Any insight would be greatly appreciated
    Thanks in advance
    ...anik
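On (1): as far as I know there is no supported way to skip only DELETEs while applying other DML - the DBMS_LOGSTDBY.SKIP interface works at the statement-class/object level, so the closest option is skipping all DML for a given table. A sketch (stop apply first; schema and table names are illustrative):
SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
SQL> EXECUTE DBMS_LOGSTDBY.SKIP('DML', 'SCOTT', 'EMP');
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;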


  • Recommendation for setting up Data Guard for EBS R12 12.1.3

    Hello Experts,
I would like to get some expert opinions and recommendations on setting up Data Guard for EBS R12.1.3 running on Linux-based systems.
Firstly, I want to let you all know that I am just a geek in this subject matter, so do excuse me for any mistakes.
While we are planning on setting up Data Guard for R12, what are the best practices to consider?
We have a 2-node RAC-enabled database and want a non-RAC physical standby; based on this, I believe that having the DB tier at the remote/standby site is enough?
If the DB node is enough for the standby location, what needs to be done at failover/switchover (role transition to standby) to point the app node from the primary site at the DB in the standby location - or do we have to have both the apps and DB tiers at the standby location?
However, having an apps tier/node at the standby will be of no use, as we are planning to run the DB in mount state - meaning not an Active Data Guard setup.
Please do reply with your recommendations/suggestions/pointers.
    Thanks in Advance.

    Service Provider Access resulted in exception 'oracle.apps.fnd.soa.util.SOAException: SystemError: Error while sending message to server. http://ec2-107-22-78-224.compute-1.amazonaws.com:8000/webservices/SOAProvider/EbizAuth?Generate=1132&soa_ticket=Tu2z7GYoWPwq-VdgimKRFg..' when attempting to perform 'GENERATE'. Please view Service Provider logs for more details
    Can you find any details about the error in the log?
    I have looked at the following Note as well:
Error using the Generate WSDL Button in Oracle E-Business Suite Integrated SOA Gateway Release 12.1.1 [ID 1090946.1]
Did the doc help?
    Any one know what may be causing this issue? Do we have to do additional setup for SOA gateway after the standard install of the OVM? I tried to follow the steps in the note above but I do not see any entry for "<jdbc_url oa_var="s_apps_jdbc_connect_descriptor"/>" in the file data-sources.xml.
Anyone have any ideas?
Have you reviewed these docs?
    Oracle E-Business Suite Integrated SOA Gateway Troubleshooting Guide, Release 12 [ID 726414.1]
    Oracle E-Business Suite Integrated SOA Gateway 12.1.1 Consolidated One-Off [ID 815196.1]
    Thanks,
    Hussein

  • 11.2 DB & Data Guard : ORA-16014 how to archive a sequence?

    Hi,
I've installed 11.2 Oracle Database on my laptop with Oracle Enterprise Linux 5.3, and I have created two databases, orcl (primary) and orclstby (physical standby). I performed a switchover to orclstby; consequently, orclstby was the new primary and orcl the physical standby. I checked those values with SQL*Plus by executing the select database_role from v$database statement, so there were no problems during the switchover. I also shut down and started both the primary and standby databases to check that all was fine.
Today, I tried to start the environment again and encountered the following problem during the startup of the primary database:
    [oracle@mredon-es ~]$ sqlplus / as sysdba
    SQL*Plus: Release 11.2.0.1.0 Production on Wed Oct 21 17:34:21 2009
    Copyright (c) 1982, 2009, Oracle. All rights reserved.
    Connected to an idle instance.
    SQL> startup
    ORACLE instance started.
    Total System Global Area 839282688 bytes
    Fixed Size 2217992 bytes
    Variable Size 507512824 bytes
    Database Buffers 322961408 bytes
    Redo Buffers 6590464 bytes
    Database mounted.
    ORA-03113: end-of-file on communication channel
    Process ID: 5562
    Session ID: 9 Serial number: 3
    I've checked the newest log created and this is its content:
    Dump file /u01/app/oracle/product/11.2.0/dbhome_1/rdbms/log/orclstby_ora_5382.trc
    *** 2009-10-21 17:34:30.467
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1
    System name: Linux
    Node name: mredon-es.us.oracle.com
    Release: 2.6.18-128.el5
    Version: #1 SMP Wed Jan 21 08:45:05 EST 2009
    Machine: x86_64
    Instance name: orclstby
    Redo thread mounted by this instance: 0 <none>
    Oracle process number: 0
    Unix process pid: 5382, image: [email protected]
    *** 2009-10-21 17:34:30.526
    2009-10-21 17:34:30.457: [ default]utgdv:2:ocr loc file /etc/oracle/olr.loc cannot be opened. errno 2
    2009-10-21 17:34:30.527: [ default]utgdv:2:ocr loc file /etc/oracle/ocr.loc cannot be opened. errno 2
    I've searched in Metalink information about ocr.loc and olr.loc and they seem to be part of an Oracle RAC installation, I don't know why the database needs these files to start if I am using single instance...
    Any idea would be kindly appreciated, because I'm a bit confused and don't really know what steps I'm supposed to take to solve this problem.
    Thanks in advance.
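    (An illustrative first step -- mount the database without opening it, which should survive long enough to investigate, and confirm the current role and open state:)
    SQL> STARTUP MOUNT;
    SQL> SELECT name, database_role, open_mode FROM v$database;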

    sorry for the delay, here is the alert log:
    <msg time='2009-10-21T23:03:09.576+02:00' org_id='oracle' comp_id='rdbms'
    client_id='' type='UNKNOWN' level='16'
    host_id='mredon-es.us.oracle.com' host_addr='192.168.237.130' module='[email protected] (TNS V1-V3)'
    pid='6582'>
    <txt>Data Guard: version check completed
    </txt>
    </msg>
    <msg time='2009-10-21T23:03:09.642+02:00' org_id='oracle' comp_id='rdbms'
    client_id='' type='UNKNOWN' level='16'
    host_id='mredon-es.us.oracle.com' host_addr='192.168.237.130' module=''
    pid='6486'>
    <txt>LGWR: STARTING ARCH PROCESSES
    </txt>
    </msg>
    <msg time='2009-10-21T23:03:09.709+02:00' org_id='oracle' comp_id='rdbms'
    msg_id='ksbrdp:3833:3697353022' type='NOTIFICATION' group='process start'
    level='16' host_id='mredon-es.us.oracle.com' host_addr='192.168.237.130'
    pid='6588'>
    <txt>RSM0 started with pid=23, OS id=6588
    </txt>
    </msg>
    <msg time='2009-10-21T23:03:09.831+02:00' org_id='oracle' comp_id='rdbms'
    msg_id='ksbrdp:3833:3697353022' type='NOTIFICATION' group='process start'
    level='16' host_id='mredon-es.us.oracle.com' host_addr='192.168.237.130'
    pid='6590'>
    <txt>ARC0 started with pid=24, OS id=6590
    </txt>
    </msg>
    <msg time='2009-10-21T23:03:10.832+02:00' org_id='oracle' comp_id='rdbms'
    client_id='' type='UNKNOWN' level='16'
    host_id='mredon-es.us.oracle.com' host_addr='192.168.237.130' module=''
    pid='6486'>
    <txt>ARC0: Archival started
    </txt>
    </msg>
    <msg time='2009-10-21T23:03:10.832+02:00' org_id='oracle' comp_id='rdbms'
    client_id='' type='UNKNOWN' level='16'
    host_id='mredon-es.us.oracle.com' host_addr='192.168.237.130' module=''
    pid='6486'>
    <txt>LGWR: STARTING ARCH PROCESSES COMPLETE
    </txt>
    </msg>
    <msg time='2009-10-21T23:03:10.833+02:00' org_id='oracle' comp_id='rdbms'
    client_id='' type='UNKNOWN' level='16'
    host_id='mredon-es.us.oracle.com' host_addr='192.168.237.130' module=''
    pid='6590'>
    <txt>ARC0: STARTING ARCH PROCESSES
    </txt>
    </msg>
    <msg time='2009-10-21T23:03:11.180+02:00' org_id='oracle' comp_id='rdbms'
    client_id='' type='UNKNOWN' level='16'
    host_id='mredon-es.us.oracle.com' host_addr='192.168.237.130' module='[email protected] (TNS V1-V3)'
    pid='6582'>
    <txt>ARCH: LGWR is scheduled to archive destination LOG_ARCHIVE_DEST_2 after log switch
    </txt>
    </msg>
    <msg time='2009-10-21T23:03:11.183+02:00' org_id='oracle' comp_id='rdbms'
    client_id='' type='UNKNOWN' level='16'
    host_id='mredon-es.us.oracle.com' host_addr='192.168.237.130' module='[email protected] (TNS V1-V3)'
    pid='6582'>
    <txt>ARCH: LGWR is scheduled to archive destination LOG_ARCHIVE_DEST_1 after log switch
    </txt>
    </msg>
    <msg time='2009-10-21T23:03:11.258+02:00' org_id='oracle' comp_id='rdbms'
    client_id='' type='UNKNOWN' level='16'
    host_id='mredon-es.us.oracle.com' host_addr='192.168.237.130' module='[email protected] (TNS V1-V3)'
    pid='6582'>
    <txt>Errors in file /u01/app/oracle/product/diag/rdbms/orclstby/orclstby/trace/orclstby_ora_6582.trc:
    ORA-16014: log 1 sequence# 27 not archived, no available destinations
    ORA-00312: online log 1 thread 1: '/u01/app/oracle/product/oradata/orclstby/redo01.log'
    </txt>
    </msg>
    <msg time='2009-10-21T23:03:11.263+02:00' org_id='oracle' comp_id='rdbms'
    client_id='' type='UNKNOWN' level='16'
    host_id='mredon-es.us.oracle.com' host_addr='192.168.237.130' module='[email protected] (TNS V1-V3)'
    pid='6582'>
    <txt>USER (ospid: 6582): terminating the instance due to error 16014 </txt>
    </msg>
    <msg time='2009-10-21T23:03:11.429+02:00' org_id='oracle' comp_id='rdbms'
    msg_id='ksbrdp:3833:3697353022' type='NOTIFICATION' group='process start'
    level='16' host_id='mredon-es.us.oracle.com' host_addr='192.168.237.130'
    pid='6592'>
    <txt>ARC1 started with pid=25, OS id=6592
    </txt>
    </msg>
    <msg time='2009-10-21T23:03:12.728+02:00' org_id='oracle' comp_id='rdbms'
    client_id='' type='UNKNOWN' level='16'
    host_id='mredon-es.us.oracle.com' host_addr='192.168.237.130' module='[email protected] (TNS V1-V3)'
    pid='6582'>
    <txt>Instance terminated by USER, pid = 6582
    </txt>
    </msg>
    I've highlighted the errors which may be causing the problem (ORA-16014 and the instance termination). What do you think?
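    A minimal recovery sketch for this error (the destination and log group numbers are illustrative; run after STARTUP MOUNT):
    -- see which archive destinations are failing and why
    SQL> SELECT dest_id, status, error FROM v$archive_dest WHERE status NOT IN ('INACTIVE', 'VALID');
    -- if a remote destination is unreachable after the switchover, defer it so the instance can open
    SQL> ALTER SYSTEM SET log_archive_dest_state_2 = DEFER;
    -- last resort: discard the stuck online log (the other site cannot recover past it until re-synchronized)
    SQL> ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 1;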
