DB_FILE_NAME_CONVERT in DATA GUARD

Hi All,
When can I use the scenario below in Data Guard (standby parameter file)?
DB_FILE_NAME_CONVERT='<DB_UNIQUE_NAME of PRIMARY>','<DB_UNIQUE_NAME of STANDBY>'
LOG_FILE_NAME_CONVERT='<DB_UNIQUE_NAME of PRIMARY>','<DB_UNIQUE_NAME of STANDBY>'
Thanks

Hi,
No worries. In the same doc it states:
Specify the path name and filename location of the primary database datafiles followed by the standby location. This parameter converts the path names of the primary database datafiles to the standby datafile path names. If the standby database is on the same system as the primary database or if the directory structure where the datafiles are located on the standby site is different from the primary site, then this parameter is required. Note that this parameter is used only to convert path names for physical standby databases. Multiple pairs of paths may be specified by this parameter.
I've got Data Guard running on two sites which are physically separate from the primary site. At one site we have the same paths for the data files, so we have nothing for the db_file_name_convert parameter. At the other site we have different paths, so we specify the full path of the primary followed by the full path of the standby, just the way I have in the example above.
Have you configured your standby yet? If I were you I would keep the file paths the same if possible. It leads to fewer issues in the long run. If that's not possible, for whatever reason, put the full path of the primary followed by the full path of the standby.
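For example, if the primary keeps its datafiles under /u01/oradata/prod and the standby keeps them under /u01/oradata/stdby (paths made up purely to illustrate), the standby spfile would contain something like:
DB_FILE_NAME_CONVERT='/u01/oradata/prod/','/u01/oradata/stdby/'
LOG_FILE_NAME_CONVERT='/u01/oradata/prod/','/u01/oradata/stdby/'
You can list several pairs if the files are spread over more than one directory; the value is always the primary path followed by the matching standby path.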
Hope that makes sense?
Rob

Similar Messages

  • Data Guard: db_file_name_convert/log_file_name_convert when using ASM/OMF

    All,
    I have a call currently open with Oracle regarding the setting of the parameters db_file_name_convert and log_file_name_convert in a data guard environment. We use ASM / OMF for storage and file naming and my question is basically do these parameters have to be set. The documentation says they do where the file structure is different between PRIMARY and STANDBY.
    I have successfully tested failover and switchover without these parameters. I have also added a new tablespace on the PRIMARY and watched it create a new OMF datafile on standby when the logs are switched.
    I just can't see a reason for setting them when using ASM / OMF.
    I'm hoping someone can enlighten me here because I'm getting nowhere with support. The following is our Data Guard setup:
    PRIMARY
    DB_NAME=IBSLIVE
    DB_UNIQUE_NAME=IBSLIVE
    ASM Disk Groups:
    +PRODDATA (Data Files, Control Files, Redo Logs)
    +PRODFLASH (Archive Logs, Flashback Logs, RMAN backups)
    +PRODLOGS (Multiplexed Control & Redo Logs)
    STANDBY
    DB_NAME=IBSLIVE
    DB_UNIQUE_NAME=IBSDR
    ASM Disk Groups:
    +DRDATA (Data Files, Control Files, Redo Logs)
    +DRFLASH (Archive Logs, Flashback Logs, RMAN backups)
    +DRREDO (Multiplexed Control & Redo Logs)
    Many Thanks,
    Ian.

    Ian,
    I'm having similar thoughts.
    I have created a new instance with files in ASM under +datadisk/obosact (this is the same name as the primary).
    I then modify the db_unique_name from obosact to obosactdr, as is required for the standby to work.
    When I recover (duplicate target database for standby; ) I find that the files are in +datadisk/obosactdr, not in the +datadisk/obosact area.
    I found this reference http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10g_RACPrimaryRACPhysicalStandby.pdf
    4. Connect to the ASM instance on one standby host, and create a directory within the DATA disk group that has the same name as the DB_UNIQUE_NAME of the standby database. For example: SQL> ALTER DISKGROUP data ADD DIRECTORY '+DATA/BOSTON';
    This step seems to indicate that the location of the files is determined by the db_unique_name, not by the db_file_name_convert parameter.
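    In other words, with OMF on ASM the path is built from the disk group plus the db_unique_name, so (file name and numbers made up here) a setup like:
    db_create_file_dest='+DATADISK'
    db_unique_name='obosactdr'
    produces datafiles such as +DATADISK/obosactdr/DATAFILE/users.259.734562881, which matches what you and I are both seeing, whatever the convert parameters say.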
    Did you ever resolve the issue?

  • Data Guard Failover after primary site network failure or disconnect.

    Hello Experts:
    I'll try to be clear and specific with my issue:
    Environment:
    Two nodes with NO shared storage (I don't have an Observer running).
    Veritas Cluster Server (VCS) with the Data Guard Agent. (I don't use the Broker; the Data Guard agent "takes care" of the switchover and failover.)
    Two single instance databases, one per node. NO RAC.
    What I can perform with no issues:
    Manual switch(over) of the primary database by running VCS command "hagrp -switch oraDG_group -to standby_node"
    Automatic fail(over) when primary node is rebooted with "reboot" or "init"
    Automatic fail(over) when primary node is shut down with "shutdown".
    What I'm NOT able to perform:
    Automatic failover if I manually unplug the network cables from the primary site (the whole network, not only the link between the primary and standby nodes, so it's like unplugging the server from its power source).
    The same situation happens if I manually disconnect the server from the power.
    These are the alert logs I have:
    This is the portion of the alert log at the standby site when real-time apply is working fine:
    Recovery of Online Redo Log: Thread 1 Group 4 Seq 7 Reading mem 0
      Mem# 0: /u02/oracle/fast_recovery_area/standby_db/onlinelog/o1_mf_4_9c3tk3dy_.log
    At this moment, node1 (Primary) is completely disconnected from the network. See at the end how the database (the standby, which should be converted to PRIMARY) does not get all the archived logs from the primary because of the abnormal network disconnection:
    Identified End-Of-Redo (failover) for thread 1 sequence 7 at SCN 0xffff.ffffffff
    Incomplete Recovery applied until change 15922544 time 12/23/2013 17:12:48
    Media Recovery Complete (primary_db)
    Terminal Recovery: successful completion
    Forcing ARSCN to IRSCN for TR 0:15922544
    Mon Dec 23 17:13:22 2013
    ARCH: Archival stopped, error occurred. Will continue retrying
    ORACLE Instance primary_db - Archival ErrorAttempt to set limbo arscn 0:15922544 irscn 0:15922544
    ORA-16014: log 4 sequence# 7 not archived, no available destinations
    ORA-00312: online log 4 thread 1: '/u02/oracle/fast_recovery_area/standby_db/onlinelog/o1_mf_4_9c3tk3dy_.log'
    Resetting standby activation ID 2071848820 (0x7b7de774)
    Completed:  ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH
    Mon Dec 23 17:13:33 2013
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH
    Terminal Recovery: applying standby redo logs.
    Terminal Recovery: thread 1 seq# 7 redo required
    Terminal Recovery:
    Recovery of Online Redo Log: Thread 1 Group 4 Seq 7 Reading mem 0
      Mem# 0: /u02/oracle/fast_recovery_area/standby_db/onlinelog/o1_mf_4_9c3tk3dy_.log
    Identified End-Of-Redo (failover) for thread 1 sequence 7 at SCN 0xffff.ffffffff
    Incomplete Recovery applied until change 15922544 time 12/23/2013 17:12:48
    Media Recovery Complete (primary_db)
    Terminal Recovery: successful completion
    Forcing ARSCN to IRSCN for TR 0:15922544
    Mon Dec 23 17:13:22 2013
    ARCH: Archival stopped, error occurred. Will continue retrying
    ORACLE Instance primary_db - Archival ErrorAttempt to set limbo arscn 0:15922544 irscn 0:15922544
    ORA-16014: log 4 sequence# 7 not archived, no available destinations
    ORA-00312: online log 4 thread 1: '/u02/oracle/fast_recovery_area/standby_db/onlinelog/o1_mf_4_9c3tk3dy_.log'
    Resetting standby activation ID 2071848820 (0x7b7de774)
    Completed:  ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH
    Mon Dec 23 17:13:33 2013
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH
    Attempt to do a Terminal Recovery (primary_db)
    Media Recovery Start: Managed Standby Recovery (primary_db)
    started logmerger process
    Mon Dec 23 17:13:33 2013
    Managed Standby Recovery not using Real Time Apply
    Media Recovery failed with error 16157
    Recovery Slave PR00 previously exited with exception 283
    ORA-283 signalled during:  ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH...
    Mon Dec 23 17:13:34 2013
    Shutting down instance (immediate)
    Shutting down instance: further logons disabled
    Stopping background process MMNL
    Stopping background process MMON
    License high water mark = 38
    All dispatchers and shared servers shutdown
    ALTER DATABASE CLOSE NORMAL
    ORA-1109 signalled during: ALTER DATABASE CLOSE NORMAL...
    ALTER DATABASE DISMOUNT
    Shutting down archive processes
    Archiving is disabled
    Mon Dec 23 17:13:38 2013
    Mon Dec 23 17:13:38 2013
    Mon Dec 23 17:13:38 2013
    ARCH shutting downARCH shutting down
    ARCH shutting down
    ARC0: Relinquishing active heartbeat ARCH role
    ARC2: Archival stopped
    ARC0: Archival stopped
    ARC1: Archival stopped
    Completed: ALTER DATABASE DISMOUNT
    ARCH: Archival disabled due to shutdown: 1089
    Shutting down archive processes
    Archiving is disabled
    Mon Dec 23 17:13:40 2013
    Stopping background process VKTM
    ARCH: Archival disabled due to shutdown: 1089
    Shutting down archive processes
    Archiving is disabled
    Mon Dec 23 17:13:43 2013
    Instance shutdown complete
    Mon Dec 23 17:13:44 2013
    Adjusting the default value of parameter parallel_max_servers
    from 1280 to 470 due to the value of parameter processes (500)
    Starting ORACLE instance (normal)
    ************************ Large Pages Information *******************
    Per process system memlock (soft) limit = 64 KB
    Total Shared Global Region in Large Pages = 0 KB (0%)
    Large Pages used by this instance: 0 (0 KB)
    Large Pages unused system wide = 0 (0 KB)
    Large Pages configured system wide = 0 (0 KB)
    Large Page size = 2048 KB
    RECOMMENDATION:
      Total System Global Area size is 3762 MB. For optimal performance,
      prior to the next instance restart:
      1. Increase the number of unused large pages by
    at least 1881 (page size 2048 KB, total size 3762 MB) system wide to
      get 100% of the System Global Area allocated with large pages
      2. Large pages are automatically locked into physical memory.
    Increase the per process memlock (soft) limit to at least 3770 MB to lock
    100% System Global Area's large pages into physical memory
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Initial number of CPU is 32
    Number of processor cores in the system is 16
    Number of processor sockets in the system is 2
    CELL communication is configured to use 0 interface(s):
    CELL IP affinity details:
        NUMA status: NUMA system w/ 2 process groups
        cellaffinity.ora status: cannot find affinity map at '/etc/oracle/cell/network-config/cellaffinity.ora' (see trace file for details)
    CELL communication will use 1 IP group(s):
        Grp 0:
    Picked latch-free SCN scheme 3
    Autotune of undo retention is turned on.
    IMODE=BR
    ILAT =88
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    NUMA system with 2 nodes detected
    Starting up:
    Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options.
    ORACLE_HOME = /u01/oracle/product/11.2.0.4
    System name:    Linux
    Node name:      node2.localdomain
    Release:        2.6.32-131.0.15.el6.x86_64
    Version:        #1 SMP Tue May 10 15:42:40 EDT 2011
    Machine:        x86_64
    Using parameter settings in server-side spfile /u01/oracle/product/11.2.0.4/dbs/spfileprimary_db.ora
    System parameters with non-default values:
      processes                = 500
      sga_target               = 3760M
      control_files            = "/u02/oracle/orafiles/primary_db/control01.ctl"
      control_files            = "/u01/oracle/fast_recovery_area/primary_db/control02.ctl"
      db_file_name_convert     = "standby_db"
      db_file_name_convert     = "primary_db"
      log_file_name_convert    = "standby_db"
      log_file_name_convert    = "primary_db"
      control_file_record_keep_time= 40
      db_block_size            = 8192
      compatible               = "11.2.0.4.0"
      log_archive_dest_1       = "location=/u02/oracle/archivelogs/primary_db"
      log_archive_dest_2       = "SERVICE=primary_db ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=primary_db"
      log_archive_dest_state_2 = "ENABLE"
      log_archive_min_succeed_dest= 1
      fal_server               = "primary_db"
      log_archive_trace        = 0
      log_archive_config       = "DG_CONFIG=(primary_db,standby_db)"
      log_archive_format       = "%t_%s_%r.dbf"
      log_archive_max_processes= 3
      db_recovery_file_dest    = "/u02/oracle/fast_recovery_area"
      db_recovery_file_dest_size= 30G
      standby_file_management  = "AUTO"
      db_flashback_retention_target= 1440
      undo_tablespace          = "UNDOTBS1"
      remote_login_passwordfile= "EXCLUSIVE"
      db_domain                = ""
      dispatchers              = "(PROTOCOL=TCP) (SERVICE=primary_dbXDB)"
      job_queue_processes      = 0
      audit_file_dest          = "/u01/oracle/admin/primary_db/adump"
      audit_trail              = "DB"
      db_name                  = "primary_db"
      db_unique_name           = "standby_db"
      open_cursors             = 300
      pga_aggregate_target     = 1250M
      dg_broker_start          = FALSE
      diagnostic_dest          = "/u01/oracle"
    Mon Dec 23 17:13:45 2013
    PMON started with pid=2, OS id=29108
    Mon Dec 23 17:13:45 2013
    PSP0 started with pid=3, OS id=29110
    Mon Dec 23 17:13:46 2013
    VKTM started with pid=4, OS id=29125 at elevated priority
    VKTM running at (1)millisec precision with DBRM quantum (100)ms
    Mon Dec 23 17:13:46 2013
    GEN0 started with pid=5, OS id=29129
    Mon Dec 23 17:13:46 2013
    DIAG started with pid=6, OS id=29131
    Mon Dec 23 17:13:46 2013
    DBRM started with pid=7, OS id=29133
    Mon Dec 23 17:13:46 2013
    DIA0 started with pid=8, OS id=29135
    Mon Dec 23 17:13:46 2013
    MMAN started with pid=9, OS id=29137
    Mon Dec 23 17:13:46 2013
    DBW0 started with pid=10, OS id=29139
    Mon Dec 23 17:13:46 2013
    DBW1 started with pid=11, OS id=29141
    Mon Dec 23 17:13:46 2013
    DBW2 started with pid=12, OS id=29143
    Mon Dec 23 17:13:46 2013
    DBW3 started with pid=13, OS id=29145
    Mon Dec 23 17:13:46 2013
    LGWR started with pid=14, OS id=29147
    Mon Dec 23 17:13:46 2013
    CKPT started with pid=15, OS id=29149
    Mon Dec 23 17:13:46 2013
    SMON started with pid=16, OS id=29151
    Mon Dec 23 17:13:46 2013
    RECO started with pid=17, OS id=29153
    Mon Dec 23 17:13:46 2013
    MMON started with pid=18, OS id=29155
    Mon Dec 23 17:13:46 2013
    MMNL started with pid=19, OS id=29157
    starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
    starting up 1 shared server(s) ...
    ORACLE_BASE from environment = /u01/oracle
    Mon Dec 23 17:13:46 2013
    ALTER DATABASE   MOUNT
    ARCH: STARTING ARCH PROCESSES
    Mon Dec 23 17:13:50 2013
    ARC0 started with pid=23, OS id=29210
    ARC0: Archival started
    ARCH: STARTING ARCH PROCESSES COMPLETE
    ARC0: STARTING ARCH PROCESSES
    Successful mount of redo thread 1, with mount id 2071851082
    Mon Dec 23 17:13:51 2013
    ARC1 started with pid=24, OS id=29212
    Allocated 15937344 bytes in shared pool for flashback generation buffer
    Mon Dec 23 17:13:51 2013
    ARC2 started with pid=25, OS id=29214
    Starting background process RVWR
    ARC1: Archival started
    ARC1: Becoming the 'no FAL' ARCH
    ARC1: Becoming the 'no SRL' ARCH
    Mon Dec 23 17:13:51 2013
    RVWR started with pid=26, OS id=29216
    Physical Standby Database mounted.
    Lost write protection disabled
    Completed: ALTER DATABASE   MOUNT
    Mon Dec 23 17:13:51 2013
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
             USING CURRENT LOGFILE DISCONNECT FROM SESSION
    Attempt to start background Managed Standby Recovery process (primary_db)
    Mon Dec 23 17:13:51 2013
    MRP0 started with pid=27, OS id=29219
    MRP0: Background Managed Standby Recovery process started (primary_db)
    ARC2: Archival started
    ARC0: STARTING ARCH PROCESSES COMPLETE
    ARC2: Becoming the heartbeat ARCH
    ARC2: Becoming the active heartbeat ARCH
    ARCH: Archival stopped, error occurred. Will continue retrying
    ORACLE Instance primary_db - Archival Error
    ORA-16014: log 4 sequence# 7 not archived, no available destinations
    ORA-00312: online log 4 thread 1: '/u02/oracle/fast_recovery_area/standby_db/onlinelog/o1_mf_4_9c3tk3dy_.log'
    At this moment, I've lost service and I have to wait until the primary server comes up again to receive the missing log.
    This is the rest of the log:
    Fatal NI connect error 12543, connecting to:
    (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=node1)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=primary_db)(CID=(PROGRAM=oracle)(HOST=node2.localdomain)(USER=oracle))))
      VERSION INFORMATION:
            TNS for Linux: Version 11.2.0.4.0 - Production
            TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
      Time: 23-DEC-2013 17:13:52
      Tracing not turned on.
      Tns error struct:
        ns main err code: 12543
    TNS-12543: TNS:destination host unreachable
        ns secondary err code: 12560
        nt main err code: 513
    TNS-00513: Destination host unreachable
        nt secondary err code: 113
        nt OS err code: 0
    Fatal NI connect error 12543, connecting to:
    (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=node1)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=primary_db)(CID=(PROGRAM=oracle)(HOST=node2.localdomain)(USER=oracle))))
      VERSION INFORMATION:
            TNS for Linux: Version 11.2.0.4.0 - Production
            TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
      Time: 23-DEC-2013 17:13:55
      Tracing not turned on.
      Tns error struct:
        ns main err code: 12543
    TNS-12543: TNS:destination host unreachable
        ns secondary err code: 12560
        nt main err code: 513
    TNS-00513: Destination host unreachable
        nt secondary err code: 113
        nt OS err code: 0
    started logmerger process
    Mon Dec 23 17:13:56 2013
    Managed Standby Recovery starting Real Time Apply
    MRP0: Background Media Recovery terminated with error 16157
    Errors in file /u01/oracle/diag/rdbms/standby_db/primary_db/trace/primary_db_pr00_29230.trc:
    ORA-16157: media recovery not allowed following successful FINISH recovery
    Managed Standby Recovery not using Real Time Apply
    Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
             USING CURRENT LOGFILE DISCONNECT FROM SESSION
    Recovery Slave PR00 previously exited with exception 16157
    MRP0: Background Media Recovery process shutdown (primary_db)
    Fatal NI connect error 12543, connecting to:
    (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=node1)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=primary_db)(CID=(PROGRAM=oracle)(HOST=node2.localdomain)(USER=oracle))))
      VERSION INFORMATION:
            TNS for Linux: Version 11.2.0.4.0 - Production
            TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
      Time: 23-DEC-2013 17:13:58
      Tracing not turned on.
      Tns error struct:
        ns main err code: 12543
    TNS-12543: TNS:destination host unreachable
        ns secondary err code: 12560
        nt main err code: 513
    TNS-00513: Destination host unreachable
        nt secondary err code: 113
        nt OS err code: 0
    Mon Dec 23 17:14:01 2013
    Fatal NI connect error 12543, connecting to:
    (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=node1)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=primary_db)(CID=(PROGRAM=oracle)(HOST=node2.localdomain)(USER=oracle))))
      VERSION INFORMATION:
            TNS for Linux: Version 11.2.0.4.0 - Production
            TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
      Time: 23-DEC-2013 17:14:01
      Tracing not turned on.
      Tns error struct:
        ns main err code: 12543
    TNS-12543: TNS:destination host unreachable
        ns secondary err code: 12560
        nt main err code: 513
    TNS-00513: Destination host unreachable
        nt secondary err code: 113
        nt OS err code: 0
    Error 12543 received logging on to the standby
    FAL[client, ARC0]: Error 12543 connecting to primary_db for fetching gap sequence
    Archiver process freed from errors. No longer stopped
    Mon Dec 23 17:15:07 2013
    Using STANDBY_ARCHIVE_DEST parameter default value as /u02/oracle/archivelogs/primary_db
    Mon Dec 23 17:19:51 2013
    ARCH: Archival stopped, error occurred. Will continue retrying
    ORACLE Instance primary_db - Archival Error
    ORA-16014: log 4 sequence# 7 not archived, no available destinations
    ORA-00312: online log 4 thread 1: '/u02/oracle/fast_recovery_area/standby_db/onlinelog/o1_mf_4_9c3tk3dy_.log'
    Mon Dec 23 17:26:18 2013
    RFS[1]: Assigned to RFS process 31456
    RFS[1]: No connections allowed during/after terminal recovery.
    Mon Dec 23 17:26:47 2013
    flashback database to scn 15921680
    ORA-16157 signalled during: flashback database to scn 15921680...
    Mon Dec 23 17:27:05 2013
    alter database recover managed standby database using current logfile disconnect
    Attempt to start background Managed Standby Recovery process (primary_db)
    Mon Dec 23 17:27:05 2013
    MRP0 started with pid=28, OS id=31481
    MRP0: Background Managed Standby Recovery process started (primary_db)
    started logmerger process
    Mon Dec 23 17:27:10 2013
    Managed Standby Recovery starting Real Time Apply
    MRP0: Background Media Recovery terminated with error 16157
    Errors in file /u01/oracle/diag/rdbms/standby_db/primary_db/trace/primary_db_pr00_31486.trc:
    ORA-16157: media recovery not allowed following successful FINISH recovery
    Managed Standby Recovery not using Real Time Apply
    Completed: alter database recover managed standby database using current logfile disconnect
    Recovery Slave PR00 previously exited with exception 16157
    MRP0: Background Media Recovery process shutdown (primary_db)
    Mon Dec 23 17:27:18 2013
    RFS[2]: Assigned to RFS process 31492
    RFS[2]: No connections allowed during/after terminal recovery.
    Mon Dec 23 17:28:18 2013
    RFS[3]: Assigned to RFS process 31614
    RFS[3]: No connections allowed during/after terminal recovery.
    Do you have any advice?
    Thanks!
    Alex.

    Hello;
    What's not clear to me in your question at this point:
    What I'm NOT able to perform:
    Automatic failover if I manually unplug the network cables from the primary site (the whole network, not only the link between the primary and standby nodes, so it's like unplugging the server from its power source).
    The same situation happens if I manually disconnect the server from the power.
    These are the alert logs I have:"
    Are you trying a failover to the Standby?
    Please advise.
    Is it possible your "valid_for clause" is set incorrectly?
    Would also review this:
    ORA-16014 and ORA-00312 Messages in Alert.log of Physical Standby
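    As a rough sketch only (path and names taken from the spfile dump you posted), a local destination that stays valid in the standby role would look something like:
    log_archive_dest_1='LOCATION=/u02/oracle/archivelogs/primary_db VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=standby_db'
    so the standby can always archive its own standby redo logs locally, whatever happens to the connection to the primary.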
    Best Regards
    mseberg

  • I have one problem with Data Guard. My archive log files are not applied.

    I have one problem with Data Guard. My archive log files are not applied. However, I have received all archive log files on my physical standby DB.
    I have created a Physical Standby database on Oracle 10gR2 (Windows XP professional). Primary database is on another computer.
    In Enterprise Manager on Primary database it looks ok. I get the following message “Data Guard status Normal”
    But as I wrote above, "the archive log files are not applied".
    After I created the Physical Standby database, I have also done:
    1. I connected to the Physical Standby database instance.
    CONNECT SYS/SYS@luda AS SYSDBA
    2. I started the Oracle instance at the Physical Standby database without mounting the database.
    STARTUP NOMOUNT PFILE=C:\oracle\product\10.2.0\db_1\database\initluda.ora
    3. I mounted the Physical Standby database:
    ALTER DATABASE MOUNT STANDBY DATABASE
    4. I started redo apply on Physical Standby database
    alter database recover managed standby database disconnect from session
    5. I switched the log files on Physical Standby database
    alter system switch logfile
    6. I verified the redo data was received and archived on Physical Standby database
    select sequence#, first_time, next_time from v$archived_log order by sequence#
    SEQUENCE# FIRST_TIME NEXT_TIME
    3 2006-06-27 2006-06-27
    4 2006-06-27 2006-06-27
    5 2006-06-27 2006-06-27
    6 2006-06-27 2006-06-27
    7 2006-06-27 2006-06-27
    8 2006-06-27 2006-06-27
    7. I verified the archived redo log files were applied on Physical Standby database
    select sequence#,applied from v$archived_log;
    SEQUENCE# APP
    4 NO
    3 NO
    5 NO
    6 NO
    7 NO
    8 NO
    8. on Physical Standby database
    select * from v$archive_gap;
    No rows
    9. on Physical Standby database
    SELECT MESSAGE FROM V$DATAGUARD_STATUS;
    MESSAGE
    ARC0: Archival started
    ARC1: Archival started
    ARC2: Archival started
    ARC3: Archival started
    ARC4: Archival started
    ARC5: Archival started
    ARC6: Archival started
    ARC7: Archival started
    ARC8: Archival started
    ARC9: Archival started
    ARCa: Archival started
    ARCb: Archival started
    ARCc: Archival started
    ARCd: Archival started
    ARCe: Archival started
    ARCf: Archival started
    ARCg: Archival started
    ARCh: Archival started
    ARCi: Archival started
    ARCj: Archival started
    ARCk: Archival started
    ARCl: Archival started
    ARCm: Archival started
    ARCn: Archival started
    ARCo: Archival started
    ARCp: Archival started
    ARCq: Archival started
    ARCr: Archival started
    ARCs: Archival started
    ARCt: Archival started
    ARC0: Becoming the 'no FAL' ARCH
    ARC0: Becoming the 'no SRL' ARCH
    ARC1: Becoming the heartbeat ARCH
    Attempt to start background Managed Standby Recovery process
    MRP0: Background Managed Standby Recovery process started
    Managed Standby Recovery not using Real Time Apply
    MRP0: Background Media Recovery terminated with error 1110
    MRP0: Background Media Recovery process shutdown
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[1]: Assigned to RFS process 2148
    RFS[1]: Identified database type as 'physical standby'
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[2]: Assigned to RFS process 2384
    RFS[2]: Identified database type as 'physical standby'
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[3]: Assigned to RFS process 3188
    RFS[3]: Identified database type as 'physical standby'
    Primary database is in MAXIMUM PERFORMANCE mode
    Primary database is in MAXIMUM PERFORMANCE mode
    RFS[3]: No standby redo logfiles created
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[4]: Assigned to RFS process 3168
    RFS[4]: Identified database type as 'physical standby'
    RFS[4]: No standby redo logfiles created
    Primary database is in MAXIMUM PERFORMANCE mode
    RFS[3]: No standby redo logfiles created
    10. on Physical Standby database
    SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
    PROCESS STATUS THREAD# SEQUENCE# BLOCK# BLOCKS
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    RFS IDLE 0 0 0 0
    RFS IDLE 0 0 0 0
    RFS IDLE 1 9 13664 2
    RFS IDLE 0 0 0 0
    10) on Primary database:
    select message from v$dataguard_status;
    MESSAGE
    ARC0: Archival started
    ARC1: Archival started
    ARC2: Archival started
    ARC3: Archival started
    ARC4: Archival started
    ARC5: Archival started
    ARC6: Archival started
    ARC7: Archival started
    ARC8: Archival started
    ARC9: Archival started
    ARCa: Archival started
    ARCb: Archival started
    ARCc: Archival started
    ARCd: Archival started
    ARCe: Archival started
    ARCf: Archival started
    ARCg: Archival started
    ARCh: Archival started
    ARCi: Archival started
    ARCj: Archival started
    ARCk: Archival started
    ARCl: Archival started
    ARCm: Archival started
    ARCn: Archival started
    ARCo: Archival started
    ARCp: Archival started
    ARCq: Archival started
    ARCr: Archival started
    ARCs: Archival started
    ARCt: Archival started
    ARCm: Becoming the 'no FAL' ARCH
    ARCm: Becoming the 'no SRL' ARCH
    ARCd: Becoming the heartbeat ARCH
    Error 1034 received logging on to the standby
    Error 1034 received logging on to the standby
    LGWR: Error 1034 creating archivelog file 'luda'
    LNS: Failed to archive log 3 thread 1 sequence 7 (1034)
    FAL[server, ARCh]: Error 1034 creating remote archivelog file 'luda'
    11)on primary db
    select name,sequence#,applied from v$archived_log;
    NAME SEQUENCE# APP
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00003_0594204176.001 3 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00004_0594204176.001 4 NO
    Luda 4 NO
    Luda 3 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00005_0594204176.001 5 NO
    Luda 5 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00006_0594204176.001 6 NO
    Luda 6 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00007_0594204176.001 7 NO
    Luda 7 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00008_0594204176.001 8 NO
    Luda 8 NO
    12) on standby db
    select name,sequence#,applied from v$archived_log;
    NAME SEQUENCE# APP
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00004_0594204176.001 4 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00003_0594204176.001 3 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00005_0594204176.001 5 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00006_0594204176.001 6 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00007_0594204176.001 7 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00008_0594204176.001 8 NO
    13) my init.ora files
    On standby db
    irina.__db_cache_size=79691776
    irina.__java_pool_size=4194304
    irina.__large_pool_size=4194304
    irina.__shared_pool_size=75497472
    irina.__streams_pool_size=0
    *.audit_file_dest='C:\oracle\product\10.2.0\admin\luda\adump'
    *.background_dump_dest='C:\oracle\product\10.2.0\admin\luda\bdump'
    *.compatible='10.2.0.1.0'
    *.control_files='C:\oracle\product\10.2.0\oradata\luda\luda.ctl'
    *.core_dump_dest='C:\oracle\product\10.2.0\admin\luda\cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_file_name_convert='luda','irina'
    *.db_name='irina'
    *.db_unique_name='luda'
    *.db_recovery_file_dest='C:\oracle\product\10.2.0\flash_recovery_area'
    *.db_recovery_file_dest_size=2147483648
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=irinaXDB)'
    *.fal_client='luda'
    *.fal_server='irina'
    *.job_queue_processes=10
    *.log_archive_config='DG_CONFIG=(irina,luda)'
    *.log_archive_dest_1='LOCATION=C:/oracle/product/10.2.0/oradata/luda/ VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=luda'
    *.log_archive_dest_2='SERVICE=irina LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=irina'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_max_processes=30
    *.log_file_name_convert='C:/oracle/product/10.2.0/oradata/irina/','C:/oracle/product/10.2.0/oradata/luda/'
    *.open_cursors=300
    *.pga_aggregate_target=16777216
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=167772160
    *.standby_file_management='AUTO'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='C:\oracle\product\10.2.0\admin\luda\udump'
    On primary db
    irina.__db_cache_size=79691776
    irina.__java_pool_size=4194304
    irina.__large_pool_size=4194304
    irina.__shared_pool_size=75497472
    irina.__streams_pool_size=0
    *.audit_file_dest='C:\oracle\product\10.2.0/admin/irina/adump'
    *.background_dump_dest='C:\oracle\product\10.2.0/admin/irina/bdump'
    *.compatible='10.2.0.1.0'
    *.control_files='C:\oracle\product\10.2.0\oradata\irina\control01.ctl','C:\oracle\product\10.2.0\oradata\irina\control02.ctl','C:\oracle\product\10.2.0\oradata\irina\control03.ctl'
    *.core_dump_dest='C:\oracle\product\10.2.0/admin/irina/cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_file_name_convert='luda','irina'
    *.db_name='irina'
    *.db_recovery_file_dest='C:\oracle\product\10.2.0/flash_recovery_area'
    *.db_recovery_file_dest_size=2147483648
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=irinaXDB)'
    *.fal_client='irina'
    *.fal_server='luda'
    *.job_queue_processes=10
    *.log_archive_config='DG_CONFIG=(irina,luda)'
    *.log_archive_dest_1='LOCATION=C:/oracle/product/10.2.0/oradata/irina/ VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=irina'
    *.log_archive_dest_2='SERVICE=luda LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=luda'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_max_processes=30
    *.log_file_name_convert='C:/oracle/product/10.2.0/oradata/luda/','C:/oracle/product/10.2.0/oradata/irina/'
    *.open_cursors=300
    *.pga_aggregate_target=16777216
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=167772160
    *.standby_file_management='AUTO'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='C:\oracle\product\10.2.0/admin/irina/udump'
    Please help me!!!!

    Hi,
    After several tries my redo logs are applied now. I think in my case it had to do with the tnsnames.ora. At the moment I have both databases in both tnsnames.ora files using the SID and not the SERVICE_NAME.
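    Roughly like this in tnsnames.ora (the host name is a placeholder and the SID is just assumed from the database name):
    AVHTEST =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = primary-host)(PORT = 1521))
        (CONNECT_DATA = (SID = avhtest))
      )
    with a matching entry for the standby on both servers.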
    Now I want to use DGMGRL. Adding a configuration and a standby database works fine, but when I try to enable the configuration DGMGRL gives no feedback and it looks like it is hanging. The log, however, says that it succeeded.
    In another session 'show configuration' results in the following, confirming that the enable succeeded.
    DGMGRL> show configuration
    Configuration
    Name: avhtest
    Enabled: YES
    Protection Mode: MaxPerformance
    Fast-Start Failover: DISABLED
    Databases:
    avhtest - Primary database
    avhtestls53 - Physical standby database
    Current status for "avhtest":
    Warning: ORA-16610: command 'ENABLE CONFIGURATION' in progress
    Is there anybody who has experienced the same problem and/or knows the solution to this?
    With kind regards,
    Martin Schaap

  • Data guard: OMF and directory structure

    hi everybody,
    I guess this problem is not too complicated, but maybe I'm missing something.
    assumption:
    - 10.2.0.4
    - data guard with physical standby (primary: node_a, standby: node_b)
    - primary db_unique_name=primary, standby db_unique_name=standby
    - using OMF, primary db_create_file_dest=<myDir>/oradata
    - standby_file_management is set to AUTO
    - want to use same directory structure for data files on both nodes (<myDir>/oradata/PRIMARY/)
    Data Guard is working as expected so far; data files on both nodes are created in <myDir>/oradata/PRIMARY/datafile/.
    next assumption:
    - failover is initiated
    - physical standby is recreated on former primary node (node_a)
    From now on, data files are created in <myDir>/oradata/STANDBY/datafile/, while the old data files remain in <myDir>/oradata/PRIMARY/datafile/.
    Is there a way to avoid a second directory (and still use the benefits of OMF)? At least at the current standby node it's possible to avoid this by setting db_file_name_convert, but what about the new primary?
    thanks for your input,
    peter
    Edited by: priffert on Sep 14, 2009 3:07 AM

    I have a similar setup, with the exception that I'm using ASM for datafiles. The issue I'm having with OMF is that if I create a datafile within a disk group that is not in the location specified by db_create_file_dest, then on the standby it's created in the db_create_file_dest. Apparently this will not give me the ability to maintain the exact configuration on both primary and standby without requiring modification after role changes.

  • WLS 10.3.x setup in Data Guard environment

    Hi all,
    We are setting up the following environment:
    Oracle Database 11.2.0.2 EE RAC primary and physical standby
    WebLogic Server (WLS) 10.3.0
    We are using a MultiDataSource (MDS) per the documentation, which is fine. However, this setup requires the data sources to specify the RAC hostnames (VIPs), instance names, and the service name. Per various documents we are not using the SCAN hostname. Considering this is a physical Data Guard environment, how do we set this up so we can have the most 'seamless' transfer of the WLS environment when a switchover or failover occurs from the primary to the standby database?
    Thus far we've discussed using, instead of the RAC virtual hostname (racprd1), a DNS CNAME (wls-racnode1) that points to racprd1; we would then modify the DNS entry to point to racstby1 when a DG switchover or failover occurs.
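    So each multi data source member would end up with a JDBC URL along these lines (the service and instance names below are placeholders):
    jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=wls-racnode1)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=app_svc)(INSTANCE_NAME=prod1)))
    and only the DNS record behind wls-racnode1 would change during a role transition.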
    Any thoughts or recommendations otherwise?
    Thanks.

  • Data Guard Administration Question.... (10gR2)

    After considerable trial and error, I have a running logical standby between 2 10gR2 databases.
    1) During the install of the primary database, I didn't comply fully with the OFA standard (I was slightly off on the placement of my database devices). During the Data Guard configuration, the option of "converting to OFA" was selected (per a Metalink article that I read regarding a problem choosing to keep the filenames of the primary the same). Of course now I have an issue creating a tablespace on the primary when keeping the non-OFA directory structure. When it attempts to do the same on the standby I'm getting the error that it cannot create the datafile. Makes sense, but what should I do in the future? Create the non-OFA directory structure on the standby (assuming it would then create the file)? Isn't there a filename conversion parameter that handles this as well?
    2) I got myself into a pinch this afternoon, partly due to #1. I am importing a file from another instance onto the primary to begin testing reports on the secondary. Prior to the import I created a tablespace (which is what got me to problem #1), proceeded to create the owner of the schema that's going to be imported, then performed the import. Now the apply process is erroring and going offline every few seconds as it works its way through the "cannot create table" errors that the import is running into on the secondary. How do I handle a large batch of transactions like this? Ultimately I would like to get back to square one: no user and no imported data on the primary, and the apply process online.
    Thanks:
    Chris

    So what I finally did was turn DG offline, create the tablespace on the secondary, then the user, and then turn apply back online. The import proceeded fairly smoothly. Problem resolved.
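    In SQL terms it was roughly the following on the logical standby (the tablespace, file, and user names here are made up, adjust for your environment):
    ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    CREATE TABLESPACE report_data DATAFILE '/opt/oracle10/product/10.2.0.1.0/oradata/ICCDG2/report_data01.dbf' SIZE 500M;
    CREATE USER report_owner IDENTIFIED BY some_password DEFAULT TABLESPACE report_data;
    ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;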
    However, I still need some insight as to exactly how the DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT parameters work. I have LOG_FILE_NAME_CONVERT set up (correctly, I think), but I get a warning message in DG that says the configuration is inconsistent with the actual setup.
    Here's the way things are setup:
    I have 3 redo logs:
    primary (non-ofa):
    /opt/oracle10/product/oradata/ICCORE10G2/redo01.log
    ... redo02.log
    ... redo03.log
    secondary (ofa):
    /opt/oracle10/product/10.2.0.1.0/oradata/ICCDG2/redo01.log
    ... redo02.log
    ... redo03.log
    LOG_FILE_NAME_CONVERT=('/opt/oracle10/product/oradata/ICCORE10G2/', '/opt/oracle10/product/10.2.0.1.0/oradata/ICCDG2/')
    Is the above parameter set correctly?
    DB_FILE_NAME_CONVERT is unset as of now, but the directory structure above is the same. I assume the parameter needs to be set just like LOG_FILE_NAME_CONVERT above.
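    I.e. presumably something like the following, assuming the datafiles live under the same two directories as the redo logs:
    DB_FILE_NAME_CONVERT=('/opt/oracle10/product/oradata/ICCORE10G2/', '/opt/oracle10/product/10.2.0.1.0/oradata/ICCDG2/')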
    Thanks

  • Data guard sid

    Dear Gurus
    I need to implement Data Guard in SAP. The client is asking that on the standby the SID be the same as on the primary, because SAP uses it.
    So is it possible to configure Data Guard with the same SIDs on primary and standby?
    Also, as I keep the SID the same, the directory structure would be the same in that case, like:
    on primary: E:\oracle\db\ppm
    on standby: E:\oracle\db\ppm
    so there would be no need to use the db_file_name_convert and log_file_name_convert parameters.
    So would that be a fine Data Guard configuration?
    OS--Windows2008
    Oracle 11g

    user11221081 wrote:
    Dear Gurus
    I need to implement Data Guard in SAP. The client is asking that on the standby the SID be the same as on the primary, because SAP uses it.
    So is it possible to configure Data Guard with the same SIDs on primary and standby?
    Also, as I keep the SID the same, the directory structure would be the same in that case, like:
    on primary: E:\oracle\db\ppm
    on standby: E:\oracle\db\ppm
    so there would be no need to use the db_file_name_convert and log_file_name_convert parameters.
    So would that be a fine Data Guard configuration?
    OS--Windows2008
    Oracle 11g
    I already replied in an earlier thread; see my post here: Re: Data guard in sap
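    In short, yes: DB_NAME and the ORACLE_SID can be identical on both sides as long as DB_UNIQUE_NAME differs. A minimal sketch (the standby DB_UNIQUE_NAME here is only an example):
    on primary: db_name='ppm', db_unique_name='ppm'
    on standby: db_name='ppm', db_unique_name='ppm_stby'
    With the same E:\oracle\db\ppm layout on both servers, db_file_name_convert and log_file_name_convert can indeed be left unset.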

  • Multiple Location of datafiles in Data Guard Environment

    I have a Data Guard setup in my Oracle 10g database. Up till now, all the datafiles were at the same location, so the db_file parameter setting in the pfile was fine. Now I want to move a few of the datafiles to another location. Say, a few of my datafiles would be in F:\ and some of them in G:\. How do I now set the db_file parameters in the pfile?

    OK...
    ALTER DATABASE RENAME should be OK as long as the directory structures on both the primary and the standby are the same, or if you have set the db_file_name_convert parameter.
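    For example (drive letters from your post, file names made up), after moving a datafile on the primary you would run something like:
    ALTER DATABASE RENAME FILE 'F:\oradata\users01.dbf' TO 'G:\oradata\users01.dbf';
    and on the standby either keep the same F:\ and G:\ layout or set pairs such as:
    db_file_name_convert='F:\oradata\','G:\oradata\'
    Multiple pairs can be listed if several locations are involved.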
    While asking a question, you should post all the necessary information for us to help (if possible):
    1. Oracle version
    2. OS version
    Thread specific:
    3. Directory structures of both primary and standby
    4. init.ora of both PRIMARY and STANDBY

  • Data Guard Error

    Hi Guys,
    Can someone help me on this error? ORA-16057: DGID from server not in Data Guard configuration.
    Here are the configs of primary and standby. I just want to find out what I'm missing.
    Primary config:
    icts001.__db_cache_size=20250099712
    icts001.__java_pool_size=16777216
    icts001.__large_pool_size=16777216
    icts001.__shared_pool_size=1056964608
    icts001.__streams_pool_size=117440512
    *.aq_tm_processes=6
    *.archive_lag_target=0
    *.audit_file_dest='/data/oradata/admin/icts001/adump'
    *.audit_trail='DB'
    *.background_dump_dest='/data/oradata/admin/icts001/bbdump'
    *.compatible='10.2.0.1.0'
    *.control_file_record_keep_time=30
    *.control_files='/data/oradata/icts001/control01.ctl','/dbworkspc01/multiplex/control02.ctl','/dbworkspc02/multiplex/control03.ctl'
    *.core_dump_dest='/data/oradata/admin/icts001/cdump'
    *.cursor_sharing='SIMILAR'
    *.db_block_size=8192
    *.db_cache_size=4194304000
    *.db_domain=''
    *.db_file_multiblock_read_count=8
    *.db_name='icts001'
    *.db_recovery_file_dest='/dbworkspc02/flash_recovery_area'
    *.db_recovery_file_dest_size=16106127360
    *.db_unique_name='ICTS001'
    *.db_writer_processes=4
    *.dbwr_io_slaves=4
    *.dg_broker_start=FALSE
    *.dispatchers=''
    *.fal_client='icts001'
    *.fal_server='drs001','SMS'
    *.fast_start_mttr_target=30
    *.global_names=TRUE
    *.job_queue_processes=10
    *.log_archive_config='DG_CONFIG=(ICTS001,SMS,drcs001)'
    icts001.log_archive_dest_1='location="/EMC_HD/oradata/archlog"','valid_for=(ONLINE_LOGFILE,ALL_ROLES)'
    *.log_archive_dest_1='location=/EMC_HD/oradata/archlog valid_for=(ONLINE_LOGFILE,ALL_ROLES)'
    *.log_archive_dest_2='SERVICE=drcs001 LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=drcs001'
    *.log_archive_dest_3='SERVICE=ASM LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=SMS'
    *.log_archive_dest_state_10='DEFER'
    icts001.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='DEFER'
    *.log_archive_dest_state_3='ENABLE'
    *.log_archive_dest_state_4='DEFER'
    *.log_archive_dest_state_5='DEFER'
    *.log_archive_dest_state_6='DEFER'
    *.log_archive_dest_state_7='DEFER'
    *.log_archive_dest_state_8='DEFER'
    *.log_archive_dest_state_9='DEFER'
    *.log_archive_format='arch%t_%s_%r.arc'
    icts001.log_archive_format='arch%t_%s_%r.arc'
    *.log_archive_max_processes=15
    *.log_archive_min_succeed_dest=1
    icts001.log_archive_trace=0
    *.log_checkpoint_timeout=0
    *.log_checkpoints_to_alert=TRUE
    *.nls_date_format='YYYY-MM-DD HH24:MI:SS'
    *.open_cursors=8000
    *.parallel_max_servers=13
    *.parallel_min_servers=10
    *.parallel_threads_per_cpu=6
    *.pga_aggregate_target=15032385536
    *.processes=1500
    *.recovery_parallelism=6
    *.remote_login_passwordfile='EXCLUSIVE'
    *.resource_limit=FALSE
    *.service_names='icts001'
    *.session_cached_cursors=200
    *.sessions=1500
    *.sga_max_size=25769803776
    *.sga_target=21474836480
    *.shared_pool_size=1048576000
    *.shared_servers=0
    icts001.standby_archive_dest=''
    *.standby_file_management='AUTO'
    *.streams_pool_size=117440512
    Standby Config:
    icts001.__db_cache_size=754974720
    icts001.__java_pool_size=16777216
    icts001.__large_pool_size=16777216
    icts001.__shared_pool_size=436207616
    icts001.__streams_pool_size=0
    *.audit_file_dest='/u01/app/oracle/product/10.2.0/db_1/admin/adump'
    *.background_dump_dest='/u01/app/oracle/product/10.2.0/db_1/admin/bdump'
    *.compatible='10.2.0.1.0'
    *.control_files='/u02/oradata/controlfile/control01.ctl','/u02/flash_recovery_area/controlfile/control02.ctl','/u02/flash_recovery_area/controlfile/control03.ctl'
    *.core_dump_dest='/u01/app/oracle/product/10.2.0/db_1/admin/cdump'
    *.db_block_size=8192
    *.db_file_multiblock_read_count=16
    *.db_file_name_convert='/data/oradata/icts001','/u02/oradata','/data3/data2c/oradata/icts001','/u02/oradata','/data1/oradata/icts001','/u02/oradata'
    *.db_name='icts001'
    *.db_recovery_file_dest='/u02/flash_recovery_area'
    *.db_recovery_file_dest_size=47185920000
    *.db_unique_name='SMS'
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=smsXDB)'
    *.fal_client='SMS'
    *.fal_server='PROD'
    *.instance_name='icts001'
    *.job_queue_processes=10
    *.log_archive_config='dg_config=(PROD,SMS)'
    *.log_archive_dest_1='LOCATION=use_db_recovery_file_dest VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=SMS'
    *.log_archive_dest_2='service=PROD valid_for=(online_logfiles,primary_role) db_unique_name=icts001'
    *.log_archive_dest_state_1='enable'
    *.log_archive_dest_state_2='ENABLE'
    *.log_file_name_convert='/data/oradata/icts001','/u02/flash_recovery_area/onlinelog','/dbworkspc01/multiplex','/u02/flash_recovery_area/onlinelog','/data3/data2c/oradata/icts001','/u02/flash_recovery_area/standbylog'
    *.open_cursors=300
    *.pga_aggregate_target=409993216
    *.processes=5000
    *.remote_login_passwordfile='exclusive'
    *.service_names='SMS'
    *.sessions=5505
    *.sga_target=1231028224
    *.standby_file_management='auto'
    *.thread=1
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='/u01/app/oracle/product/10.2.0/db_1/admin/udump'
    Regards,
    cmadiam

    The parameter log_archive_config is wrongly configured on the standby database.
    Add the database unique name (db_unique_name) of the primary database to the log_archive_config of the standby database
    On the standby, your log_archive_config should be something like
    log_archive_config='DG_CONFIG=(icts001,sms)';
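    A minimal sketch of applying that change on the standby (the value is taken from the reply above; use SCOPE=BOTH only if the standby runs on an spfile):
    -- run on the standby instance
    ALTER SYSTEM SET log_archive_config='DG_CONFIG=(icts001,sms)' SCOPE=BOTH;
    The primary's log_archive_config already lists SMS, so only the standby side needs the change.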
    It is sad to see that the forum has been of no help to you. :(
    Please mark your threads as answered once you have a solution, to keep the forums clean. If not, then reply to your own questions so that you get an answer, rather than leaving them unanswered.

  • Data Guard configuration-Archivelogs not being transferred

    Hi Gurus,
    I have configured Data Guard on Linux with Oracle 10g, although I am new to this concept. tnsping works fine in both directions. I have issued the ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT command on the standby site, but I am not receiving the archive logs on the standby site. I have attached both pfiles below for your reference:
    Primary database name: Chennai
    Secondary database name: Mumbai
    PRIMARY PFILE:
    db_block_size=8192
    db_file_multiblock_read_count=16
    open_cursors=300
    db_domain=""
    background_dump_dest=/u01/app/oracle/product/10.2.0/db_1/admin/chennai/bdump
    core_dump_dest=/u01/app/oracle/product/10.2.0/db_1/admin/chennai/cdump
    user_dump_dest=/u01/app/oracle/product/10.2.0/db_1/admin/chennai/udump
    db_create_file_dest=/u01/app/oracle/product/10.2.0/db_1/oradata
    db_recovery_file_dest=/u01/app/oracle/product/10.2.0/db_1/flash_recovery_area
    db_recovery_file_dest_size=2147483648
    job_queue_processes=10
    compatible=10.2.0.1.0
    processes=150
    sga_target=285212672
    audit_file_dest=/u01/app/oracle/product/10.2.0/db_1/admin/chennai/adump
    remote_login_passwordfile=EXCLUSIVE
    dispatchers="(PROTOCOL=TCP) (SERVICE=chennaiXDB)"
    pga_aggregate_target=94371840
    undo_management=AUTO
    undo_tablespace=UNDOTBS1
    control_files=("/u01/app/oracle/product/10.2.0/db_1/oradata/CHENNAI/controlfile/o1_mf_82gl1b43_.ctl", "/u01/app/oracle/product/10.2.0/db_1/flash_recovery_area/CHENNAI/controlfile/o1_mf_82gl1bny_.ctl")
    DB_NAME=chennai
    DB_UNIQUE_NAME=chennai
    LOG_ARCHIVE_CONFIG='DG_CONFIG=(chennai,mumbai)'
    LOG_ARCHIVE_DEST_1=
    'LOCATION=/u01/app/oracle/product/10.2.0/db_1/oradata/CHENNAI/datafile/arch/
    VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
    DB_UNIQUE_NAME=chennai'
    LOG_ARCHIVE_DEST_2=
    'SERVICE=MUMBAI LGWR ASYNC
    VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
    DB_UNIQUE_NAME=mumbai'
    LOG_ARCHIVE_DEST_STATE_1=ENABLE
    LOG_ARCHIVE_DEST_STATE_2=ENABLE
    REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
    LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
    LOG_ARCHIVE_MAX_PROCESSES=30
    FAL_SERVER=mumbai
    FAL_CLIENT=chennai
    DB_FILE_NAME_CONVERT=(/home/oracle/oracle/product/10.2.0/db_1/oradata/MUMBAI/datafile/,/u01/app/oracle/product/10.2.0/db_1/oradata/CHENNAI/datafile/)
    LOG_FILE_NAME_CONVERT='/home/oracle/oracle/product/10.2.0/db_1/oradata/MUMBAI/onlinelog/','/u01/app/oracle/product/10.2.0/db_1/oradata/CHENNAI/onlinelog/','/home/oracle/oracle/product/10.2.0/db_1/flash_recovery_area/MUMBAI/onlinelog/','/u01/app/oracle/product/10.2.0/db_1/flash_recovery_area/CHENNAI/onlinelog/'
    STANDBY_FILE_MANAGEMENT=AUTO
    SECONDARY PFILE:
    db_block_size=8192
    db_file_multiblock_read_count=16
    open_cursors=300
    db_domain=""
    db_name=chennai
    background_dump_dest=/home/oracle/oracle/product/10.2.0/db_1/admin/mumbai/bdump
    core_dump_dest=/home/oracle/oracle/product/10.2.0/db_1/admin/mumbai/cdump
    user_dump_dest=/home/oracle/oracle/product/10.2.0/db_1/admin/mumbai/udump
    db_recovery_file_dest=/home/oracle/oracle/product/10.2.0/db_1/flash_recovery_area/
    db_create_file_dest=/home/oracle/oracle/product/10.2.0/db_1/oradata/
    db_recovery_file_dest_size=2147483648
    job_queue_processes=10
    compatible=10.2.0.1.0
    processes=150
    sga_target=285212672
    audit_file_dest=/home/oracle/oracle/product/10.2.0/db_1/admin/mumbai/adump
    remote_login_passwordfile=EXCLUSIVE
    dispatchers="(PROTOCOL=TCP) (SERVICE=mumbaiXDB)"
    pga_aggregate_target=94371840
    undo_management=AUTO
    undo_tablespace=UNDOTBS1
    control_files="/home/oracle/oracle/product/10.2.0/db_1/oradata/MUMBAI/controlfile/standby01.ctl","/home/oracle/oracle/product/10.2.0/db_1/flash_recovery_area/MUMBAI/controlfile/standby02.ctl"
    DB_UNIQUE_NAME=mumbai
    LOG_ARCHIVE_CONFIG='DG_CONFIG=(chennai,mumbai)'
    LOG_ARCHIVE_DEST_1='LOCATION=/home/oracle/oracle/product/10.2.0/db_1/oradata/MUMBAI/datafile/arch/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=mumbai'
    LOG_ARCHIVE_DEST_2='SERVICE=chennai LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=chennai'
    LOG_ARCHIVE_DEST_STATE_1=ENABLE
    LOG_ARCHIVE_DEST_STATE_2=ENABLE
    REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
    LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
    FAL_SERVER=chennai
    FAL_CLIENT=mumbai
    DB_FILE_NAME_CONVERT=(/u01/app/oracle/product/10.2.0/db_1/oradata/CHENNAI/datafile/,/home/oracle/oracle/product/10.2.0/db_1/oradata/MUMBAI/datafile/)
    LOG_FILE_NAME_CONVERT='/u01/app/oracle/product/10.2.0/db_1/oradata/CHENNAI/onlinelog/','/home/oracle/oracle/product/10.2.0/db_1/oradata/MUMBAI/onlinelog/','/u01/app/oracle/product/10.2.0/db_1/flash_recovery_area/CHENNAI/onlinelog/','/home/oracle/oracle/product/10.2.0/db_1/flash_recovery_area/MUMBAI/onlinelog/'
    STANDBY_FILE_MANAGEMENT=AUTO
    Any help would be greatly appreciated. Kindly, help me someone please..
    -Vimal.
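    As a quick sketch of how one might confirm whether redo transport, rather than apply, is the failing piece here, the standard views below can be queried; dest_id 2 is assumed to be the remote destination, as in the pfiles above:
    -- on the primary: ask the archiver why the remote destination is failing
    SELECT dest_id, status, error FROM v$archive_dest WHERE dest_id = 2;
    -- on the standby: check which sequences have actually arrived
    SELECT thread#, MAX(sequence#) FROM v$archived_log GROUP BY thread#;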

    Thanks Balazs, Mseberg, CKPT for all your replies...
    CKPT, I just did what you said. The primary and standby outputs are below...
    PRIMARY OUTPUT:
    SQL> set feedback off
    SQL> set trimspool on
    SQL> set line 500
    SQL> set pagesize 50
    SQL> column name for a30
    SQL> column display_value for a30
    SQL> column ID format 99
    SQL> column "SRLs" format 99
    SQL> column active format 99
    SQL> col type format a4
    SQL> column ID format 99
    SQL> column "SRLs" format 99
    SQL> column active format 99
    SQL> col type format a4
    SQL> col PROTECTION_MODE for a20
    SQL> col RECOVERY_MODE for a20
    SQL> col db_mode for a15
    SQL> SELECT name, display_value FROM v$parameter WHERE name IN ('db_name','db_unique_name','log_archive_config','log_archive_dest_2','log_archive_dest_state_2','fal_client','fal_server','standby_file_management','standby_archive_dest','db_file_name_convert','log_file_name_convert','remote_login_passwordfile','local_listener','dg_broker_start','dg_broker_config_file1','dg_broker_config_file2','log_archive_max_processes') order by name;
    NAME DISPLAY_VALUE
    db_file_name_convert /home/oracle/oracle/product/10
    .2.0/db_1/oradata/MUMBAI/dataf
    ile/, /u01/app/oracle/product/
    10.2.0/db_1/oradata/CHENNAI/da
    tafile/
    db_name chennai
    db_unique_name chennai
    dg_broker_config_file1 /u01/app/oracle/product/10.2.0
    /db_1/dbs/dr1chennai.dat
    dg_broker_config_file2 /u01/app/oracle/product/10.2.0
    /db_1/dbs/dr2chennai.dat
    dg_broker_start FALSE
    fal_client chennai
    fal_server mumbai
    local_listener
    log_archive_config DG_CONFIG=(chennai,mumbai)
    log_archive_dest_2 SERVICE=MUMBAI LGWR ASYNC
    VALID_FOR=(ONLINE_LOGFILES,P
    RIMARY_ROLE)
    DB_UNIQUE_NAME=mumbai
    log_archive_dest_state_2 ENABLE
    log_archive_max_processes 30
    log_file_name_convert /home/oracle/oracle/product/10
    .2.0/db_1/oradata/MUMBAI/onlin
    elog/, /u01/app/oracle/product
    /10.2.0/db_1/oradata/CHENNAI/o
    nlinelog/, /home/oracle/oracle
    /product/10.2.0/db_1/flash_rec
    overy_area/MUMBAI/onlinelog/,
    /u01/app/oracle/product/10.2.0
    /db_1/flash_recovery_area/CHEN
    NAI/onlinelog/
    remote_login_passwordfile EXCLUSIVE
    standby_archive_dest ?/dbs/arch
    standby_file_management AUTO
    SQL> col name for a10
    SQL> col DATABASE_ROLE for a10
    SQL> SELECT name,db_unique_name,protection_mode,DATABASE_ROLE,OPEN_MODE,switchover_status from v$database;
    NAME DB_UNIQUE_NAME PROTECTION_MODE DATABASE_R OPEN_MODE SWITCHOVER_STATUS
    CHENNAI chennai MAXIMUM PERFORMANCE PRIMARY READ WRITE NOT ALLOWED
    SQL> select thread#,max(sequence#) from v$archived_log group by thread#;
    THREAD# MAX(SEQUENCE#)
    1 210
    SQL> SELECT ARCH.THREAD# "Thread", ARCH.SEQUENCE# "Last Sequence Received", APPL.SEQUENCE# "Last Sequence Applied", (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
    2 FROM
    3 (SELECT THREAD# ,SEQUENCE# FROM V$ARCHIVED_LOG WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
    4 (SELECT THREAD# ,SEQUENCE# FROM V$LOG_HISTORY WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
    5 WHERE ARCH.THREAD# = APPL.THREAD# ORDER BY 1;
    Thread Last Sequence Received Last Sequence Applied Difference
    1 210 210 0
    SQL> col severity for a15
    SQL> col message for a70
    SQL> col timestamp for a20
    SQL> select severity,error_code,to_char(timestamp,'DD-MON-YYYY HH24:MI:SS') "timestamp" , message from v$dataguard_status where dest_id=2;
    SEVERITY ERROR_CODE timestamp MESSAGE
    Error 16191 15-AUG-2012 12:46:02 LGWR: Error 16191 creating archivelog file 'MUMBAI'
    Error 16191 15-AUG-2012 12:46:02 FAL[server, ARC1]: Error 16191 creating remote archivelog file 'MUMBAI
    Error 16191 15-AUG-2012 12:51:58 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
    16191.
    Error 16191 15-AUG-2012 12:56:58 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
    16191.
    Error 16191 15-AUG-2012 13:01:58 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
    16191.
    Error 16191 15-AUG-2012 13:06:58 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
    16191.
    Error 16191 15-AUG-2012 13:11:58 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
    16191.
    Error 16191 15-AUG-2012 13:16:59 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
    16191.
    Error 16191 15-AUG-2012 13:21:59 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
    16191.
    Error 16191 15-AUG-2012 13:26:59 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
    16191.
    Error 16191 15-AUG-2012 13:31:59 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
    16191.
    Error 16191 15-AUG-2012 13:36:59 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
    16191.
    Error 16191 15-AUG-2012 13:41:59 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
    16191.
    Error 16191 15-AUG-2012 13:47:00 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
    16191.
    Error 16191 15-AUG-2012 13:52:00 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
    16191.
    Error 16191 15-AUG-2012 13:57:00 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
    16191.
    Error 16191 15-AUG-2012 14:02:00 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
    SEVERITY ERROR_CODE timestamp MESSAGE
    16191.
    Error 16191 15-AUG-2012 14:07:00 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
    16191.
    Error 16191 15-AUG-2012 14:12:01 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
    16191.
    Error 16191 15-AUG-2012 14:17:01 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
    16191.
    Error 16191 15-AUG-2012 14:22:01 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
    16191.
    Error 16191 15-AUG-2012 14:27:01 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
    16191.
    Error 16191 15-AUG-2012 14:32:01 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
    16191.
    Error 16191 15-AUG-2012 14:37:03 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
    16191.
    Error 16191 15-AUG-2012 18:21:40 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
    16191.
    Error 16191 15-AUG-2012 18:26:41 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is
    16191.
    SQL> select ds.dest_id id
    2 , ad.status
    3 , ds.database_mode db_mode
    4 , ad.archiver type
    5 , ds.recovery_mode
    6 , ds.protection_mode
    7 , ds.standby_logfile_count "SRLs"
    8 , ds.standby_logfile_active active
    9 , ds.archived_seq#
    10 from v$archive_dest_status ds
    11 , v$archive_dest ad
    12 where ds.dest_id = ad.dest_id
    13 and ad.status != 'INACTIVE'
    14 order by
    15 ds.dest_id;
    ID STATUS DB_MODE TYPE RECOVERY_MODE PROTECTION_MODE SRLs ACTIVE ARCHIVED_SEQ#
    1 VALID OPEN ARCH IDLE MAXIMUM PERFORMANCE 0 0 210
    2 ERROR UNKNOWN LGWR UNKNOWN MAXIMUM PERFORMANCE 0 0 0
    SQL> column FILE_TYPE format a20
    SQL> col name format a60
    SQL> select name
    2 , floor(space_limit / 1024 / 1024) "Size MB"
    3 , ceil(space_used / 1024 / 1024) "Used MB"
    4 from v$recovery_file_dest
    5 order by name;
    NAME Size MB Used MB
    /u01/app/oracle/product/10.2.0/db_1/flash_recovery_area 2048 896
    SQL> spool offspool u01/app/oracle/vimal.log
    SP2-0768: Illegal SPOOL command
    Usage: SPOOL { <file> | OFF | OUT }
    where <file> is file_name[.ext] [CRE[ATE]|REP[LACE]|APP[END]]
    SQL> spool /u01/app/oracle/vimal.log
    STANDBY OUTPUT:
    SQL> set feedback off
    SQL> set trimspool on
    SQL> set line 500
    SQL> set pagesize 50
    SQL> set linesize 200
    SQL> column name for a30
    SQL> column display_value for a30
    SQL> col value for a10
    SQL> col PROTECTION_MODE for a15
    SQL> col DATABASE_Role for a15
    SQL> SELECT name, display_value FROM v$parameter WHERE name IN ('db_name','db_unique_name','log_archive_config','log_archive_dest_2','log_archive_dest_state_2','fal_client','fal_server','standby_file_management','standby_archive_dest','db_file_name_convert','log_file_name_convert','remote_login_passwordfile','local_listener','dg_broker_start','dg_broker_config_file1','dg_broker_config_file2','log_archive_max_processes') order by name;
    NAME DISPLAY_VALUE
    db_file_name_convert /u01/app/oracle/product/10.2.0
    /db_1/oradata/CHENNAI/datafile
    /, /home/oracle/oracle/product
    /10.2.0/db_1/oradata/MUMBAI/da
    tafile/
    db_name chennai
    db_unique_name mumbai
    dg_broker_config_file1 /home/oracle/oracle/product/10
    .2.0/db_1/dbs/dr1mumbai.dat
    dg_broker_config_file2 /home/oracle/oracle/product/10
    .2.0/db_1/dbs/dr2mumbai.dat
    dg_broker_start FALSE
    fal_client mumbai
    fal_server chennai
    local_listener
    log_archive_config DG_CONFIG=(chennai,mumbai)
    log_archive_dest_2 SERVICE=chennai LGWR ASYNC VAL
    ID_FOR=(ONLINE_LOGFILES,PRIMAR
    Y_ROLE) DB_UNIQUE_NAME=chennai
    log_archive_dest_state_2 ENABLE
    log_archive_max_processes 2
    log_file_name_convert /u01/app/oracle/product/10.2.0
    /db_1/oradata/CHENNAI/onlinelo
    g/, /home/oracle/oracle/produc
    t/10.2.0/db_1/oradata/MUMBAI/o
    nlinelog/, /u01/app/oracle/pro
    duct/10.2.0/db_1/flash_recover
    y_area/CHENNAI/onlinelog/, /ho
    me/oracle/oracle/product/10.2.
    0/db_1/flash_recovery_area/MUM
    BAI/onlinelog/
    remote_login_passwordfile EXCLUSIVE
    standby_archive_dest ?/dbs/arch
    standby_file_management AUTO
    SQL> col name for a10
    SQL> col DATABASE_ROLE for a10
    SQL> SELECT name,db_unique_name,protection_mode,DATABASE_ROLE,OPEN_MODE from v$database;
    NAME DB_UNIQUE_NAME PROTECTION_MODE DATABASE_R OPEN_MODE
    CHENNAI mumbai MAXIMUM PERFORM PHYSICAL S MOUNTED
    ANCE TANDBY
    SQL> select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread#;
    SQL> select process, status,thread#,sequence# from v$managed_standby;
    PROCESS STATUS THREAD# SEQUENCE#
    ARCH CONNECTED 0 0
    ARCH CONNECTED 0 0
    MRP0 WAIT_FOR_LOG 1 152
    SQL> col name for a30
    SQL> select * from v$dataguard_stats;
    NAME VALUE UNIT TIME_COMPUTED
    apply finish time day(2) to second(1) interval
    apply lag day(2) to second(0) interval
    estimated startup time 10 second
    standby has been open N
    transport lag day(2) to second(0) interval
    SQL> select * from v$archive_gap;
    SQL> col name format a60
    SQL> select name
    2 , floor(space_limit / 1024 / 1024) "Size MB"
    3 , ceil(space_used / 1024 / 1024) "Used MB"
    4 from v$recovery_file_dest
    5 order by name;
    NAME Size MB Used MB
    /home/oracle/oracle/product/10.2.0/db_1/flash_recovery_area/ 2048 150
    SQL> spool off
    -Vimal.

  • 11g Data Guard: issues configuring a physical standby database

    General information
    OS: Red Hat Linux 2.6.32-200.13.1.el5uek x86_64 (primary and standby)
    Home version: 11.2.0.3 (primary and standby)
    Situation:
    This is a test database that I set up to learn Data Guard. I cloned a HOME onto another server and ran through the official documentation at http://docs.oracle.com/cd/E11882_01/server.112/e25608/create_ps.htm#i1225703
    step by step. At the end, with the standby mounted, the verification query select sequence# from v$archived_log returned no rows. The error log is below (excerpt starting from the mount).
    standby alert_dbacoe.log
    Error logs
    alter database mount
    Completed: alter database mount
    Error 604 received logging on to the standby
    FAL[client, ARC2]: Error 604 connecting to PRIMARYSV for fetching gap sequence
    Errors in file /u05/oracle/app/oracle/diag/rdbms/standby/dbacoe/trace/dbacoe_lgwr_1912.trc:
    ORA-00313: open failed for members of log group 1 of thread 1
    ORA-00312: online log 1 thread 1: '/u04/app/oracle/redundancy/dbacoe/redo01_s.log'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    ORA-00312: online log 1 thread 1: '/u03/app/oracle/oradata/dbacoe/redo01.log'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    Errors in file /u05/oracle/app/oracle/diag/rdbms/standby/dbacoe/trace/dbacoe_lgwr_1912.trc:
    ORA-00313: open failed for members of log group 1 of thread 1
    ORA-00312: online log 1 thread 1: '/u04/app/oracle/redundancy/dbacoe/redo01_s.log'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    ORA-00312: online log 1 thread 1: '/u03/app/oracle/oradata/dbacoe/redo01.log'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    Errors in file /u05/oracle/app/oracle/diag/rdbms/standby/dbacoe/trace/dbacoe_lgwr_1912.trc:
    ORA-00313: open failed for members of log group 2 of thread 1
    ORA-00312: online log 2 thread 1: '/u04/app/oracle/redundancy/dbacoe/redo02_s.log'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    ORA-00312: online log 2 thread 1: '/u03/app/oracle/oradata/dbacoe/redo02.log'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    Errors in file /u05/oracle/app/oracle/diag/rdbms/standby/dbacoe/trace/dbacoe_lgwr_1912.trc:
    ORA-00313: open failed for members of log group 2 of thread 1
    ORA-00312: online log 2 thread 1: '/u04/app/oracle/redundancy/dbacoe/redo02_s.log'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    ORA-00312: online log 2 thread 1: '/u03/app/oracle/oradata/dbacoe/redo02.log'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    Errors in file /u05/oracle/app/oracle/diag/rdbms/standby/dbacoe/trace/dbacoe_lgwr_1912.trc:
    ORA-00313: open failed for members of log group 3 of thread 1
    ORA-00312: online log 3 thread 1: '/u04/app/oracle/redundancy/dbacoe/redo03_s.log'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    ORA-00312: online log 3 thread 1: '/u03/app/oracle/oradata/dbacoe/redo03.log'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    Errors in file /u05/oracle/app/oracle/diag/rdbms/standby/dbacoe/trace/dbacoe_lgwr_1912.trc:
    ORA-00313: open failed for members of log group 3 of thread 1
    ORA-00312: online log 3 thread 1: '/u04/app/oracle/redundancy/dbacoe/redo03_s.log'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    ORA-00312: online log 3 thread 1: '/u03/app/oracle/oradata/dbacoe/redo03.log'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
    ARC3: Archival started
    ARC0: STARTING ARCH PROCESSES COMPLETE
    Addition:
    Because this is a test database, the layout is a bit odd. Roughly, it is as follows:
    Primary
    Datafiles are under the oradata directory of the ORACLE_BASE, i.e. under /u03/app/oracle.
    Each redo log group has two members: one under oradata and one on the same machine in /u04/app/oracle/redundancy (named with an _s suffix).
    Archived logs are in /u04/app/oracle/fast_recovery_area.
    Standby
    Datafiles are under /u05/oracle/app/oracle/oradata.
    The redo members from the redundancy directory were placed, along with that directory, in red under oradata.
    The archived log directory is /u05/oracle/app/oracle/fast_recovery_area.
    The corresponding init files and TNS entries follow.
    Main init parameters on the primary:
    db_unique_name='PRIMARY'
    fal_client='PRIMARYSV'
    fal_server='STANDBYSV'
    log_archive_config='DG_CONFIG=(PRIMARY,STANDBY)'
    log_archive_dest_1='location=/u04/app/oracle/fast_recovery_area/DBACOE/archivelog VALID_FOR=(ALL_LOGFILES,ALL_ROLES) db_unique_name=PRIMARY'
    log_archive_dest_2='service=STANDBY ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) db_unique_name=STANDBY'
    log_archive_dest_state_1='enable'
    log_archive_dest_state_2='enable'
    db_file_name_convert='/u05/oracle/app/oracle/oradata/dbacoe','/u03/app/oracle/oradata/dbacoe',/u05/oracle/app/oracle/oradata/red','/u04/app/oracle/redundancy'
    log_file_name_convert='/u05/oracle/app/oracle/fast_recovery_area/DBACOE/archivelog','/u04/app/oracle/fast_recovery_area/DBACOE/archivelog'
    Init parameters on the standby:
    log_archive_config='DG_CONFIG=(PRIMARY,STANDBY)'
    db_unique_name='STANDBY'
    db_file_name_convert='/u03/app/oracle/oradata/dbacoe','/u05/oracle/app/oracle/oradata/dbacoe','/u04/app/oracle/redundancy','/u05/oracle/app/oracle/oradata/red'
    log_file_name_convert='/u04/app/oracle/fast_recovery_area/DBACOE/archivelog','/u05/oracle/app/oracle/fast_recovery_area/DBACOE/archivelog'
    log_archive_dest_1='location=/u05/oracle/app/oracle/fast_recovery_area/DBACOE/archivelog VALID_FOR=(ALL_LOGFILES,ALL_ROLES) db_unique_name=STANDBY'
    log_archive_dest_state_1=enable
    log_archive_format=log%t_%s_%r.arc
    log_archive_dest_2='SERVICE=PRIMARYSV ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
    log_archive_dest_state_2=enable
    The TNS entries are as follows (both can be reached with tnsping):
    PRIMARYSV =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = db1.ad.xxxxxx.com)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = dbacoe)
        )
      )
    STANDBYSV =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = db2.ad.xxxxx.com)(PORT = 1529))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = dbacoe)
        )
      )
    PS: Please bear with this lengthy post. I have only just started learning Data Guard and am still unfamiliar with the parameter settings. (I simply followed the documentation; did I miss any step related to the redo logs?) Any advice is appreciated.
    Edited by: 961394 on Dec 10, 2012 1:52 AM
    Edited by: 961394 on Dec 10, 2012 4:58 AM

    Closing the thread. Thanks to Jesse Lui from the Maclean group for helping me solve the problem.
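    The resolution itself is not posted above, so purely as an illustration of how the convert parameters could also cover the online redo directories named in the ORA-00312 errors (paths are taken from the post; treating this as the actual fix is an assumption), the standby side might map:
    # hypothetical standby-side mappings for the online redo members shown in the errors
    log_file_name_convert='/u03/app/oracle/oradata','/u05/oracle/app/oracle/oradata','/u04/app/oracle/redundancy','/u05/oracle/app/oracle/oradata/red'
    The pairs are simple prefix replacements, so '/u04/app/oracle/redundancy/dbacoe/redo01_s.log' would resolve to '/u05/oracle/app/oracle/oradata/red/dbacoe/redo01_s.log' on the standby.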

  • Error during implementing Data Guard

    I am implementing Data Guard on two different PCs by following this URL:
    http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/11g/r2/prod/ha/dataguard/physstby/physstdby.htm
    In the last step, when I execute the RMAN script, it gives errors. I need your help in resolving this.
    RMAN> run {
    allocate channel prmy1 type disk;
    allocate channel prmy2 type disk;
    allocate channel prmy3 type disk;
    allocate channel prmy4 type disk;
    allocate auxiliary channel stby type disk;
    duplicate target database for standby from active database
    spfile
    parameter_value_convert 'tmdb','tbdb'
    set db_unique_name='tbdb'
    set db_file_name_convert='/tmdb/','/tbdb/'
    set log_file_name_convert='/tmdb/','/tbdb/'
    set control_files='/u01/app/oracle/oradata/tbdb/tbdb1.ctl'
    set log_archive_max_processes='5'
    set fal_client='tbdb'
    set fal_server='tmdb'
    set standby_file_management='AUTO'
    set log_archive_config='dg_config=(tmdb,tbdb)'
    set log_archive_dest_2='service=tmdb ASYNC
    valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE) db_unique_name=tmdb'
    Output of Script
    using target database control file instead of recovery catalog
    allocated channel: prmy1
    channel prmy1: SID=16 device type=DISK
    allocated channel: prmy2
    channel prmy2: SID=21 device type=DISK
    allocated channel: prmy3
    channel prmy3: SID=152 device type=DISK
    allocated channel: prmy4
    channel prmy4: SID=24 device type=DISK
    allocated channel: stby
    channel stby: SID=135 device type=DISK
    Starting Duplicate Db at 19-APR-11
    contents of Memory Script:
    backup as copy reuse
    targetfile '/u01/app/oracle/product/11.2.0/dbhome_1/dbs/orapwtmdb' auxiliary format
    '/u01/app/oracle/product/11.2.0/dbhome_1/dbs/orapwtbdb' targetfile
    '/u01/app/oracle/product/11.2.0/dbhome_1/dbs/spfiletmdb.ora' auxiliary format
    '/u01/app/oracle/product/11.2.0/dbhome_1/dbs/spfiletbdb.ora' ;
    sql clone "alter system set spfile= ''/u01/app/oracle/product/11.2.0/dbhome_1/dbs/spfiletbdb.ora''";
    executing Memory Script
    Starting backup at 19-APR-11
    RMAN-03009: failure of backup command on prmy1 channel at 04/19/2011 14:46:28
    ORA-17627: ORA-12577: Message 12577 not found; product=RDBMS; facility=ORA
    continuing other job steps, job failed will not be re-run
    released channel: prmy1
    released channel: prmy2
    released channel: prmy3
    released channel: prmy4
    released channel: stby
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of Duplicate Db command at 04/19/2011 14:46:29
    RMAN-03015: error occurred in stored script Memory Script
    RMAN-03009: failure of backup command on prmy2 channel at 04/19/2011 14:46:28
    ORA-17627: ORA-12577: Message 12577 not found; product=RDBMS; facility=ORA

    https://supporthtml.oracle.com/ep/faces/secure/km/BugDisplay.jspx?id=8406972&bugProductSource=Oracle&h=Y
    WORKAROUND:
    Use an RMAN duplicate database based on a backup instead of an active database duplicate (a minimal sketch follows at the end of this reply).
    https://supporthtml.oracle.com/ep/faces/secure/km/BugDisplay.jspx?id=10339515&bugProductSource=Oracle&h=Y
    WORKAROUND:
    Make sure that the raw partition on the standby is the same size as the one on the primary.
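    A minimal sketch of the first workaround, assuming a full backup of the primary plus archived logs is already accessible from the standby host and the spfile and password file have been put in place manually (names follow the tmdb/tbdb convention used in the script above):
    # backup-based standby duplicate -- note there is no FROM ACTIVE DATABASE clause
    run {
      allocate auxiliary channel stby type disk;
      duplicate target database for standby dorecover nofilenamecheck;
    }
    Because the files come from backups rather than being copied over the network from the live primary, this sidesteps the step where the ORA-17627/ORA-12577 errors were raised.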

  • Data Guard ora-00314 and ora-00312

    Good afternoon. I have configured Data Guard on the same server and I am sure the conversion parameters are configured correctly, as follows:
    *.db_file_name_convert='C:\Oracle\product\10.2.0\oradata\test2','C:\Oracle\product\10.2.0\oradata\test1'
    log_file_name_convert='C:\Oracle\product\10.2.0\oradata\test2','C:\Oracle\product\10.2.0\oradata\test1'
    Thanks

    The primary database was OK before I started the standby database. I have tried many times but failed to open the primary database after I successfully created and opened the standby database. Did the standby database corrupt the redo logs of the primary? Why?
    The error shown when I try to open the primary database:
    ORA-00314: log 2 of thread 1, expected sequence# 11 doesn't match 0
    ORA-00312: online log 2 thread 1: '/u01/oradata/DB01/redo_log02.dbf'
    Part of my initDB01.ora:
    DB_NAME=DB01
    DB_UNIQUE_NAME=DB01
    LOG_ARCHIVE_CONFIG='DG_CONFIG=(DB01,DB02)'
    LOG_ARCHIVE_DEST_1=
    'LOCATION=/u01/oradata/DB01/arc/ MANDATORY
    VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
    DB_UNIQUE_NAME=DB01'
    LOG_ARCHIVE_DEST_2=
    'SERVICE=DB02
    VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
    DB_UNIQUE_NAME=DB02'
    LOG_ARCHIVE_DEST_STATE_1=ENABLE
    LOG_ARCHIVE_DEST_STATE_2=ENABLE
    LOG_ARCHIVE_MAX_PROCESSES=30
    FAL_SERVER=DB02
    FAL_CLIENT=DB01
    DB_FILE_NAME_CONVERT='/u01/oradata/DB02/','/u01/oradata/DB01/'
    LOG_FILE_NAME_CONVERT='/u01/oradata/DB02/arc/','/u01/oradata/DB01/arc/'
    STANDBY_FILE_MANAGEMENT=AUTO
    Part of my initDB02.ora
    CONTROL_FILES='/opt/oracle/oradata/DB02/control_primary.ctl'
    DB_NAME=DB01
    DB_UNIQUE_NAME=DB02
    LOG_ARCHIVE_CONFIG='DG_CONFIG=(DB01,DB02)'
    LOG_ARCHIVE_DEST_1=
    'LOCATION=/u01/oradata/DB02/arc/ MANDATORY
    VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
    DB_UNIQUE_NAME=DB02'
    LOG_ARCHIVE_DEST_2=
    'SERVICE=DB01
    VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
    DB_UNIQUE_NAME=DB01'
    LOG_ARCHIVE_DEST_STATE_1=ENABLE
    LOG_ARCHIVE_DEST_STATE_2=ENABLE
    STANDBY_FILE_MANAGEMENT=AUTO
    FAL_SERVER=DB01
    FAL_CLIENT=DB02
    DB_FILE_NAME_CONVERT='/u01/oradata/DB01/','/u01/oradata/DB02/'
    LOG_FILE_NAME_CONVERT='/u01/oradata/DB01/arc/','/u01/oradata/DB02/arc/'
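    The thread does not show a resolution. As a purely illustrative diagnostic (standard views, nothing specific to this setup), one could compare the sequence the primary's control file expects for each online log group with the members actually on disk before deciding how to repair the group reported in ORA-00314:
    -- on the primary, mounted: sequence the control file expects per group
    SELECT group#, thread#, sequence#, status FROM v$log;
    -- member file names for each group
    SELECT group#, member FROM v$logfile ORDER BY group#;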
