DBCONSOLE Problem for Data Guard

Dear All,
I am using an Oracle 10.2.0 physical standby Data Guard configuration and have performed a failover successfully, but I am unable to create the DBConsole service. Can anyone guide me on how to configure dbconsole (web OEM)?
Thanks in advance

Naeem,
If you want to (and can afford to), you can set up Database Control on the standby after the failover in the same manner you did on the primary. Here is what you can do to set it up; follow the interactive prompts:
# su - oracle
-- Then set your sid
$> export ORACLE_SID=<sidname>
-- to create
$>emca -config dbcontrol db -repos create
-- to drop the old one
$>emca -deconfig dbcontrol db
Regards
OrionNet
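For reference, a minimal sketch of verifying the console afterwards (the SID below is a placeholder; emctl reports the Database Control URL and whether it is running):
$> export ORACLE_SID=<sidname>
-- check whether Database Control is up and note its URL
$> emctl status dbconsole
-- start it if it is not already running
$> emctl start dbconsole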

Similar Messages

  • I have one problem with Data Guard. My archive log files are not applied.

    I have a problem with Data Guard: my archive log files are not applied, although all of the archive log files have been received by my physical standby database.
    I have created a Physical Standby database on Oracle 10gR2 (Windows XP professional). Primary database is on another computer.
    In Enterprise Manager on the primary database it looks OK; I get the message “Data Guard status Normal”.
    But, as I wrote above, the archive log files are not applied.
    After I created the Physical Standby database, I also did the following:
    1. I connected to the Physical Standby database instance.
    CONNECT SYS/SYS@luda AS SYSDBA
    2. I started the Oracle instance at the Physical Standby database without mounting the database.
    STARTUP NOMOUNT PFILE=C:\oracle\product\10.2.0\db_1\database\initluda.ora
    3. I mounted the Physical Standby database:
    ALTER DATABASE MOUNT STANDBY DATABASE
    4. I started redo apply on Physical Standby database
    alter database recover managed standby database disconnect from session
    5. I switched the log files on Physical Standby database
    alter system switch logfile
    6. I verified the redo data was received and archived on Physical Standby database
    select sequence#, first_time, next_time from v$archived_log order by sequence#
    SEQUENCE# FIRST_TIME NEXT_TIME
    3 2006-06-27 2006-06-27
    4 2006-06-27 2006-06-27
    5 2006-06-27 2006-06-27
    6 2006-06-27 2006-06-27
    7 2006-06-27 2006-06-27
    8 2006-06-27 2006-06-27
    7. I verified the archived redo log files were applied on Physical Standby database
    select sequence#,applied from v$archived_log;
    SEQUENCE# APP
    4 NO
    3 NO
    5 NO
    6 NO
    7 NO
    8 NO
    8. on Physical Standby database
    select * from v$archive_gap;
    No rows
    9. on Physical Standby database
    SELECT MESSAGE FROM V$DATAGUARD_STATUS;
    MESSAGE
    ARC0: Archival started
    ARC1: Archival started
    ARC2: Archival started
    ARC3: Archival started
    ARC4: Archival started
    ARC5: Archival started
    ARC6: Archival started
    ARC7: Archival started
    ARC8: Archival started
    ARC9: Archival started
    ARCa: Archival started
    ARCb: Archival started
    ARCc: Archival started
    ARCd: Archival started
    ARCe: Archival started
    ARCf: Archival started
    ARCg: Archival started
    ARCh: Archival started
    ARCi: Archival started
    ARCj: Archival started
    ARCk: Archival started
    ARCl: Archival started
    ARCm: Archival started
    ARCn: Archival started
    ARCo: Archival started
    ARCp: Archival started
    ARCq: Archival started
    ARCr: Archival started
    ARCs: Archival started
    ARCt: Archival started
    ARC0: Becoming the 'no FAL' ARCH
    ARC0: Becoming the 'no SRL' ARCH
    ARC1: Becoming the heartbeat ARCH
    Attempt to start background Managed Standby Recovery process
    MRP0: Background Managed Standby Recovery process started
    Managed Standby Recovery not using Real Time Apply
    MRP0: Background Media Recovery terminated with error 1110
    MRP0: Background Media Recovery process shutdown
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[1]: Assigned to RFS process 2148
    RFS[1]: Identified database type as 'physical standby'
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[2]: Assigned to RFS process 2384
    RFS[2]: Identified database type as 'physical standby'
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[3]: Assigned to RFS process 3188
    RFS[3]: Identified database type as 'physical standby'
    Primary database is in MAXIMUM PERFORMANCE mode
    Primary database is in MAXIMUM PERFORMANCE mode
    RFS[3]: No standby redo logfiles created
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[4]: Assigned to RFS process 3168
    RFS[4]: Identified database type as 'physical standby'
    RFS[4]: No standby redo logfiles created
    Primary database is in MAXIMUM PERFORMANCE mode
    RFS[3]: No standby redo logfiles created
    10. on Physical Standby database
    SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
    PROCESS STATUS THREAD# SEQUENCE# BLOCK# BLOCKS
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    RFS IDLE 0 0 0 0
    RFS IDLE 0 0 0 0
    RFS IDLE 1 9 13664 2
    RFS IDLE 0 0 0 0
    11) on Primary database:
    select message from v$dataguard_status;
    MESSAGE
    ARC0: Archival started
    ARC1: Archival started
    ARC2: Archival started
    ARC3: Archival started
    ARC4: Archival started
    ARC5: Archival started
    ARC6: Archival started
    ARC7: Archival started
    ARC8: Archival started
    ARC9: Archival started
    ARCa: Archival started
    ARCb: Archival started
    ARCc: Archival started
    ARCd: Archival started
    ARCe: Archival started
    ARCf: Archival started
    ARCg: Archival started
    ARCh: Archival started
    ARCi: Archival started
    ARCj: Archival started
    ARCk: Archival started
    ARCl: Archival started
    ARCm: Archival started
    ARCn: Archival started
    ARCo: Archival started
    ARCp: Archival started
    ARCq: Archival started
    ARCr: Archival started
    ARCs: Archival started
    ARCt: Archival started
    ARCm: Becoming the 'no FAL' ARCH
    ARCm: Becoming the 'no SRL' ARCH
    ARCd: Becoming the heartbeat ARCH
    Error 1034 received logging on to the standby
    Error 1034 received logging on to the standby
    LGWR: Error 1034 creating archivelog file 'luda'
    LNS: Failed to archive log 3 thread 1 sequence 7 (1034)
    FAL[server, ARCh]: Error 1034 creating remote archivelog file 'luda'
    12) on primary db
    select name,sequence#,applied from v$archived_log;
    NAME SEQUENCE# APP
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00003_0594204176.001 3 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00004_0594204176.001 4 NO
    Luda 4 NO
    Luda 3 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00005_0594204176.001 5 NO
    Luda 5 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00006_0594204176.001 6 NO
    Luda 6 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00007_0594204176.001 7 NO
    Luda 7 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00008_0594204176.001 8 NO
    Luda 8 NO
    13) on standby db
    select name,sequence#,applied from v$archived_log;
    NAME SEQUENCE# APP
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00004_0594204176.001 4 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00003_0594204176.001 3 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00005_0594204176.001 5 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00006_0594204176.001 6 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00007_0594204176.001 7 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00008_0594204176.001 8 NO
    14) my init.ora files
    On standby db
    irina.__db_cache_size=79691776
    irina.__java_pool_size=4194304
    irina.__large_pool_size=4194304
    irina.__shared_pool_size=75497472
    irina.__streams_pool_size=0
    *.audit_file_dest='C:\oracle\product\10.2.0\admin\luda\adump'
    *.background_dump_dest='C:\oracle\product\10.2.0\admin\luda\bdump'
    *.compatible='10.2.0.1.0'
    *.control_files='C:\oracle\product\10.2.0\oradata\luda\luda.ctl'
    *.core_dump_dest='C:\oracle\product\10.2.0\admin\luda\cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_file_name_convert='luda','irina'
    *.db_name='irina'
    *.db_unique_name='luda'
    *.db_recovery_file_dest='C:\oracle\product\10.2.0\flash_recovery_area'
    *.db_recovery_file_dest_size=2147483648
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=irinaXDB)'
    *.fal_client='luda'
    *.fal_server='irina'
    *.job_queue_processes=10
    *.log_archive_config='DG_CONFIG=(irina,luda)'
    *.log_archive_dest_1='LOCATION=C:/oracle/product/10.2.0/oradata/luda/ VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=luda'
    *.log_archive_dest_2='SERVICE=irina LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=irina'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_max_processes=30
    *.log_file_name_convert='C:/oracle/product/10.2.0/oradata/irina/','C:/oracle/product/10.2.0/oradata/luda/'
    *.open_cursors=300
    *.pga_aggregate_target=16777216
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=167772160
    *.standby_file_management='AUTO'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='C:\oracle\product\10.2.0\admin\luda\udump'
    On primary db
    irina.__db_cache_size=79691776
    irina.__java_pool_size=4194304
    irina.__large_pool_size=4194304
    irina.__shared_pool_size=75497472
    irina.__streams_pool_size=0
    *.audit_file_dest='C:\oracle\product\10.2.0/admin/irina/adump'
    *.background_dump_dest='C:\oracle\product\10.2.0/admin/irina/bdump'
    *.compatible='10.2.0.1.0'
    *.control_files='C:\oracle\product\10.2.0\oradata\irina\control01.ctl','C:\oracle\product\10.2.0\oradata\irina\control02.ctl','C:\oracle\product\10.2.0\oradata\irina\control03.ctl'
    *.core_dump_dest='C:\oracle\product\10.2.0/admin/irina/cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_file_name_convert='luda','irina'
    *.db_name='irina'
    *.db_recovery_file_dest='C:\oracle\product\10.2.0/flash_recovery_area'
    *.db_recovery_file_dest_size=2147483648
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=irinaXDB)'
    *.fal_client='irina'
    *.fal_server='luda'
    *.job_queue_processes=10
    *.log_archive_config='DG_CONFIG=(irina,luda)'
    *.log_archive_dest_1='LOCATION=C:/oracle/product/10.2.0/oradata/irina/ VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=irina'
    *.log_archive_dest_2='SERVICE=luda LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=luda'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_max_processes=30
    *.log_file_name_convert='C:/oracle/product/10.2.0/oradata/luda/','C:/oracle/product/10.2.0/oradata/irina/'
    *.open_cursors=300
    *.pga_aggregate_target=16777216
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=167772160
    *.standby_file_management='AUTO'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='C:\oracle\product\10.2.0/admin/irina/udump'
    Please help me!!!!

    Hi,
    After several tries my redo logs are being applied now. I think in my case it had to do with the tnsnames.ora; at the moment I have both databases in both tnsnames.ora files using the SID and not the SERVICE_NAME.
    Now I want to use DGMGRL. Adding a configuration and a standby database works fine, but when I try to enable the configuration, DGMGRL gives no feedback and it looks like it is hanging. The log, however, says that it succeeded.
    In another session, 'show configuration' results in the following, confirming that the enable succeeded.
    DGMGRL> show configuration
    Configuration
    Name: avhtest
    Enabled: YES
    Protection Mode: MaxPerformance
    Fast-Start Failover: DISABLED
    Databases:
    avhtest - Primary database
    avhtestls53 - Physical standby database
    Current status for "avhtest":
    Warning: ORA-16610: command 'ENABLE CONFIGURATION' in progress
    Is there anybody who has experienced the same problem and/or knows the solution to this?
    With kind regards,
    Martin Schaap
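    For reference, a minimal sketch of the kind of tnsnames.ora entry described above, using the SID rather than the SERVICE_NAME (the host names and port are placeholders, not taken from the thread; irina/luda are the database names used earlier):
    IRINA =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = primary_host)(PORT = 1521))
        (CONNECT_DATA = (SID = irina))
      )
    LUDA =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = standby_host)(PORT = 1521))
        (CONNECT_DATA = (SID = luda))
      )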

  • Database is configured for Data Guard

    I am running a UTF8 conversion on a development database that has been cloned from Data Guard. There is a warning in Migration Status: "Database is configured for Data Guard". What is the DMU looking at to determine this? The database is open in read-write mode and is behaving like a primary database (I run the DMU scan and run updates to fix invalid representations). I would like to know what settings I need to update.
    Is this preventing me from converting tables using CTAS? When I try to select this for all tables I get the message, "The DMU does not support the conversion method "Copy data using CREATE TABLE AS SELECT" for tables that are involved in an Oracle Streams process, like capture or apply. Use another available conversion method for the table."
    Thanks,
    Ben

    The DMU checks if the parameter DG_BROKER_START is set to true.
    The problem with CTAS is independent of Data Guard. The DMU checks for tables that:
    - are source of asynchronous Streams capture, or
    - have update conflict handlers, or
    - have DML handlers, or
    - have conflict resolution parameters
    The above tables are considered configured for Oracle Streams and are not supported by CTAS conversion method. This is because the CTAS method creates a converted copy of the table and drops the original. The DMU is not capable of moving the Streams configuration information from the old table to the new one.
    Thanks,
    Sergiusz
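    A quick way to see what the DMU is reacting to is to check that parameter from SQL*Plus; a minimal sketch (run as a privileged user, and only disable the broker if it is genuinely unused on the cloned database):
    SQL> show parameter dg_broker_start
    SQL> alter system set dg_broker_start=false;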

  • Separate listener for Data Guard

    I am putting together a best practice for using a dedicated listener for Data Guard. The idea is to maintain full Data Guard functionality while the application team asks for the listener service to be brought down (according to a business requirement). I need your opinion on these points:
    1. I understand that there may be only a very small chance that the listener is required by Data Guard, but I see no harm in doing this. Do you agree with me?
    2. In a RAC environment, we can only have one VIP to be used in listener.ora. I am thinking of using the same IP but different port numbers for the different listeners. Any better idea than this?
    Many thanks

    It is never a bad practice to use separate listeners at the primary and at the standby for Data Guard's use. A listener at the standby is required by Data Guard to make a connection to that standby. A listener at the primary is required for Data Guard to make a reverse connection from the standby to the primary for some kinds of gap resolution (a missing log file that the primary thinks it already sent, a corrupted log file, etc.). And of course, when you switch roles.
    To answer the second question could you please tell me what version of Oracle you are using and if you plan on using the Data Guard Broker or not?
    Thanks.
    Larry
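    As an illustration only, a dedicated Data Guard listener on the standby might look like the sketch below (the listener name, host, port, SID and ORACLE_HOME are placeholders, not taken from the post; GLOBAL_DBNAME follows the <db_unique_name>_DGMGRL convention used by the broker):
    LISTENER_DG =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = standby_host)(PORT = 1522))
      )
    SID_LIST_LISTENER_DG =
      (SID_LIST =
        (SID_DESC =
          (SID_NAME = orcl)
          (GLOBAL_DBNAME = orcl_DGMGRL)
          (ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
        )
      )
    It would then be started with lsnrctl start LISTENER_DG, and the redo transport service (log_archive_dest_n) would point at the address it listens on.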

  • Configure listener for data guard

    Hi everyone,
    I am currently setting up Data Guard (physical standby database) for my database, but I am having problems configuring the listener on both servers. Can anyone provide me with an example?
    Oracle: 10g R2
    O/S: Windows
    Primary database: ken10g
    standby database: ken10gbk
    Following is the content of my current listener.ora files on both servers:
    Primary server:
    # listener.ora Network Configuration File: C:\oracle\product\10.2.0\db_1\network\admin\listener.ora
    # Generated by Oracle configuration tools.
    SID_LIST_LISTENER =
      (SID_LIST =
        (SID_DESC =
          (SID_NAME = PLSExtProc)
          (ORACLE_HOME = C:\oracle\product\10.2.0\db_1)
          (PROGRAM = extproc)
        )
      )
    LISTENER =
      (DESCRIPTION_LIST =
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
          (ADDRESS = (PROTOCOL = TCP)(HOST = Primary_server)(PORT = 1521))
        )
      )
    Standby Server:
    # listener.ora Network Configuration File: C:\oracle\product\10.2.0\db_1\NETWORK\ADMIN\listener.ora
    # Generated by Oracle configuration tools.
    SID_LIST_LISTENER =
      (SID_LIST =
        (SID_DESC =
          (SID_NAME = PLSExtProc)
          (ORACLE_HOME = C:\oracle\product\10.2.0\db_1)
          (PROGRAM = extproc)
        )
      )
    LISTENER =
      (DESCRIPTION_LIST =
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
        )
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = TCP)(HOST = standby_server)(PORT = 1521))
        )
      )
    Thanks in advance.
    Ken

    Hi Ken,
    You need to configure this on both the primary and the standby; I would keep different listener names on the primary and the standby. Also, if you are going to use the Data Guard broker, you will need to set GLOBAL_DBNAME in your listener.ora file.
    I have given sample entries for tnsnames.ora and listener.ora below.
    TNSNAMES.ORA on primary
    STNDBY =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = node2-prv)(PORT = 10521))
        )
        (CONNECT_DATA =
          (SERVICE_NAME = STNDBY)
        )
      )
    PRIM =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = node1-prv)(PORT = 10521))
      )
    PRIMARY =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = node1-prv)(PORT = 10521))
        )
        (CONNECT_DATA =
          (SERVICE_NAME = PRIMARY)
        )
      )
    EXTPROC_CONNECTION_DATA =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
        )
        (CONNECT_DATA =
          (SID = PLSExtProc)
          (PRESENTATION = RO)
        )
      )
    Copy the same file to the standby server and adjust it based on the listener.ora file. Also update the listener.ora file so that it listens for the SIDs mentioned in the tnsnames.ora file.
    Listener.ora
    LISTENER_STBY =
      (DESCRIPTION_LIST =
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = TCP)(HOST = node2-prv)(PORT = 10521))
          (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
        )
      )
    SID_LIST_LISTENER_STBY =
      (SID_LIST =
        (SID_DESC =
          (SID_NAME = PLSExtProc)
          (ORACLE_HOME = /u01/app/oracle/product/10.2.0/db10g)
          (PROGRAM = extproc)
        )
        (SID_DESC =
          (SID_NAME = stndby)
          (GLOBAL_DBNAME = stndby_DGMGRL)
          (ORACLE_HOME = /u01/app/oracle/product/10.2.0/db10g)
        )
      )
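    Adapting the sample above to the names in the question, the standby side might look roughly like this (host names and the port are placeholders; ken10g/ken10gbk come from the original post, and the same tnsnames.ora entries would go on both servers):
    # listener.ora on the standby server
    SID_LIST_LISTENER =
      (SID_LIST =
        (SID_DESC =
          (SID_NAME = ken10gbk)
          (GLOBAL_DBNAME = ken10gbk_DGMGRL)
          (ORACLE_HOME = C:\oracle\product\10.2.0\db_1)
        )
      )
    # tnsnames.ora on both servers
    KEN10G =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = Primary_server)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = ken10g))
      )
    KEN10GBK =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = standby_server)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = ken10gbk))
      )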

  • Best Practice for monitoring database targets configured for Data Guard

    We are in the process of migrating our DB targets to 12c Cloud Control. 
    In our current 10g environment the Primary Targets are monitored and administered by OEM GC A, and the Standby Targets are monitored by OEM GC B.  Originally, I believe this was because of proximity and network speed, and over time it evolved to a Primary/Standby separation.  One of the greatest challenges in this configuration is keeping OEM jobs in sync on both sides (in case of switchover/failover).
    For our new OEM CC environment we are setting up CC A and CC B. However, I would like to determine whether it would be smarter to monitor all DB targets (Primary and Standby) from the same CC console, in other words, to monitor and administer DB Primary and Standby from the same OEM CC console. I am trying to determine the best practice. I am not sure whether administering a switchover from Cloud Control from Primary to Standby requires that both targets be monitored in the same environment.
    I am interested in feedback.   I am also interested in finding good reference materials (I have been looking at Oracle documentation and other documents online).   Thanks for your input and thoughts.  I am deliberately trying to keep this as concise as possible.

    OMS is a tool; it is not that you need it to monitor your primary and standby, which is what I meant by the comment.
    The reason you need the same OMS to monitor both the primary and the standby is that the Data Guard administration screen will then show both targets. You will also have the option of performing switchovers and failovers, as well as converting the primary or standby. One of the options is also to move all the jobs scheduled against the primary over to the standby during a switchover or failover.
    There is no document that states that you need to have all targets on one OMS, but that is the best approach given the reason for having OMS in the first place: OMS is a tool for keeping all targets in a central repository. If you start having different OMS servers and OMS repositories, you will need to log into separate OMS consoles to administer the targets.

  • Question on db_unique_name in init.ora for Data Guard

    I need to set up only one physical standby on a different box (at a different location) for the primary db in production.
    OS: Sun Sparc Solaris 10
    Oracle: 10.2.0.3
    Can I use the same db_unique_name in init.ora for both primary and standby DBs?
    What are the minimal parameters required by Data Guard I have to specify in the init.ora in my case?
    Could anyone please post an example of init.ora for both primary and standby DBs?
    Thanks very much in advance.

    http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/create_ps.htm#i63561
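    No, the DB_UNIQUE_NAME must be different for the two databases (it is what identifies each database in the configuration), while DB_NAME stays the same. As an illustration only, a minimal sketch of the Data Guard related parameters, with prod_primary/prod_standby, paths and service names as placeholders rather than anything from the post:
    # primary init.ora (sketch)
    db_name='prod'
    db_unique_name='prod_primary'
    log_archive_config='DG_CONFIG=(prod_primary,prod_standby)'
    log_archive_dest_1='LOCATION=/u01/arch VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=prod_primary'
    log_archive_dest_2='SERVICE=prod_standby LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=prod_standby'
    fal_server='prod_standby'
    fal_client='prod_primary'
    standby_file_management='AUTO'
    remote_login_passwordfile='EXCLUSIVE'
    # standby init.ora (sketch): same db_name, different db_unique_name, FAL and redo transport reversed
    db_name='prod'
    db_unique_name='prod_standby'
    log_archive_config='DG_CONFIG=(irina,luda)'
    log_archive_config='DG_CONFIG=(prod_primary,prod_standby)'
    log_archive_dest_1='LOCATION=/u01/arch VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=prod_standby'
    log_archive_dest_2='SERVICE=prod_primary LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=prod_primary'
    fal_server='prod_primary'
    fal_client='prod_standby'
    standby_file_management='AUTO'
    remote_login_passwordfile='EXCLUSIVE'
    Add db_file_name_convert and log_file_name_convert on both sides if the file paths differ between the boxes.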

  • Steps for data guard RAC primary RAC standby

    Hi,
    I have some doubts about configuring Data Guard for a RAC DR against a RAC primary.
    1) In RAC DR, the physical standby will be RAC before starting Data Guard for the RAC primary, right?
    2) While configuring RAC DR, should one node be down on the standby?
    3) After creating the auxiliary database on the standby, I mean restoring the RMAN backup from the staging directory in DR,
    do we have to register the services (database, instance, ASM, etc.) with CRS (OCR)? If yes, why?
    4) After the DR configuration is over, shall I shut down one node? Should all the nodes in DR be up while applying logs to
    the standby?
    If anybody has made this setup and has a prepared document, I request you to share it with me.
    Thank you very much,
    Sunand

    Hi Sunand,
    Please follow the following My Oracle Support (MOS) Document ID:
    MAA - Creating a RAC Physical Standby for a RAC Primary [ID 380449.1]
    For further details, see the documents referred to in the previous post. Try practicing on VMware because it will give you the flexibility of creating snapshots to save your work. Please see the answers to your questions below:
    Q1. In RAC DR, the physical standby will be RAC before starting Data Guard for the RAC primary, right?
    Answer: Yes, this is recommended but not mandatory. You can add a second node later on.
    Q2. While configuring RAC DR, should one node be down on the standby?
    Answer: Yes, there are some database configuration commands that should be run in exclusive mode. You can start the remaining instance(s) after DB creation. Do not forget to set the CLUSTER_DATABASE parameter back to TRUE (see the sketch at the end of this reply). Also, all instances except one should be closed while performing a switchover/failover.
    Q3. After creating the auxiliary database on the standby (restoring the RMAN backup from the staging directory in DR), do we have to register the services (database, instance, ASM, etc.) with CRS (OCR)? If yes, why?
    Answer: Yes, this is recommended for a RAC configuration in order to take advantage of the high availability services.
    Q4. After the DR configuration is over, shall I shut down one node? Should all the nodes in DR be up while applying logs to the standby?
    Answer: You can start up all the instances after DR creation. In that case, if one node goes down, log apply services will continue to apply changes on the DR.
    Hope this helps to clear up your questions.
    Regards,
    Shahid
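    As a small illustration of the point in Q2, the standby is typically created and configured as a single instance and clustering is re-enabled afterwards; a sketch of the relevant commands only (the exact instance handling varies by setup):
    SQL> alter system set cluster_database=false scope=spfile sid='*';
    -- restart one instance, perform the exclusive-mode configuration steps, then
    SQL> alter system set cluster_database=true scope=spfile sid='*';
    SQL> shutdown immediate
    SQL> startup mount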

  • Problem with data guard Creating a Physical Standby Database turorial

    There is a tutorial on creating a Data Guard physical standby database:
    http://www.oracle.com/technology/obe/11gr1_db/ha/dataguard/physstby/physstdby.htm
    I tried to install it on two servers, one for the primary database and a second for the physical standby.
    I get an error at C. Creating the standby database over the network, action #6:
    "On the standby system, set the ORACLE_SID environment variable to your <physical standby SID> (i.e. orclsby1) and start the instance in NOMOUNT mode with the text initialization parameter file."
    When I try to connect to the idle instance, an error pops up:
    C:\>sqlplus / as sysdba
    SQL*Plus: Release 11.1.0.7.0 - Production on Thu May 21 16:28:10 2009
    Copyright (c) 1982, 2008, Oracle. All rights reserved.
    ERROR:
    ORA-12560: TNS:protocol adapter error
    I've checked the listener and it is running. There is no service for the database because there is no database yet.
    The question is: has anyone installed a Data Guard configuration using this tutorial? Are there any errors in it? What should I do to finish this installation?

    On Windows, for every instance a service must have been created using the oradim command.
    Oracle tutorials are usually Unix-centric, and Windows is the odd man out, so they don't discuss that bit.
    'Kindly do the needful' and create the service prior to starting the instance in NOMOUNT mode.
    Hint: oradim is documented and has a help=y clause.
    IIRC there is an option in database control (in the maintenance part) which automates everything.
    Sybrand Bakker
    Senior Oracle DBA
    Experts: those who do read documentation
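    For example, a minimal sketch of creating the service and starting the standby instance (orclsby1 is the SID used in the tutorial; the pfile path is left as a placeholder):
    C:\> oradim -NEW -SID orclsby1 -STARTMODE manual
    C:\> set ORACLE_SID=orclsby1
    C:\> sqlplus / as sysdba
    SQL> startup nomount pfile='<path to the text initialization parameter file>'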

  • Steps for Data Guard with one primary and 2 standby

    Hi,
    Database :10.2.0.4, 11.2.0.1
    Os: Windows , Unix
    A ----------------> Primary database
    B ----------------> Standby Database 1
    C ----------------> Standby Database 2
    I want to configure 2 standby databases for a single primary database.
    Let's say A, B and C are my machines. My Data Guard configuration will be such that archive logs will be moving from A to B and from A to C.
    If I do a switchover between A and B, then B is primary and the remaining A and C are standby databases. At this stage also, archive logs should move from B to A and from B to C. The same should happen from C to A and from C to B if I do a switchover between B and C. If everything is fine, I will then switch back to the main primary database (A).
    How do I have to set up the PFILE on each machine, with parameters like
    LOG_ARCHIVE_DEST_1=LOCATION=<PATH> -- LOCAL ARCHIVE PATH
    LOG_ARCHIVE_DEST_2=SERVICE=
    LOG_ARCHIVE_DEST_3=SERVICE=
    FAL_SERVER=
    FAL_CLIENT=
    STANDBY_FILE_MANAGEMENT=
    In my tnsnames.ora, primary, standby1 and standby2 are my service entries, and these are the same on all of my machines.
    Please suggest how I can configure my pfiles on all of the machines.
    Thanks,
    Sunand

    Not yet, but now you have me interested.
    Please consider Flashback.
    I still have to test but here's my take:
    PRIMARY SETTINGS
    *.FAL_SERVER=STANDBY
    *.FAL_CLIENT=PRIMARY
    *.STANDBY_FILE_MANAGEMENT=AUTO
    *.DB_UNIQUE_NAME=PRIMARY
    *.LOG_FILE_NAME_CONVERT='STANDBY','PRIMARY'
    *.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
    *.log_archive_dest_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
    *.log_archive_dest_3='SERVICE=STANDBY2 LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY2'
    *.LOG_ARCHIVE_DEST_STATE_1=ENABLE
    *.LOG_ARCHIVE_DEST_STATE_2=ENABLE
    *.LOG_ARCHIVE_DEST_STATE_3=ENABLE
    *.LOG_ARCHIVE_MAX_PROCESSES=30
    STANDBY 1 SETTINGS
    *.FAL_SERVER=PRIMARY
    *.FAL_CLIENT=STANDBY
    *.STANDBY_FILE_MANAGEMENT=AUTO
    *.DB_UNIQUE_NAME=STANDBY
    *.LOG_FILE_NAME_CONVERT='PRIMARY','STANDBY'
    *.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
    *.log_archive_dest_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
    *.log_archive_dest_3='SERVICE=STANDBY2 LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY2'
    *.LOG_ARCHIVE_DEST_STATE_1=ENABLE
    *.LOG_ARCHIVE_DEST_STATE_2=DEFER
    *.LOG_ARCHIVE_DEST_STATE_3=DEFER
    *.LOG_ARCHIVE_MAX_PROCESSES=30
    STANDBY2 SETTINGS
    *.FAL_SERVER=PRIMARY
    *.FAL_CLIENT=STANDBY2
    *.STANDBY_FILE_MANAGEMENT=AUTO
    *.DB_UNIQUE_NAME=STANDBY2
    *.LOG_FILE_NAME_CONVERT='PRIMARY','STANDBY2'
    *.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY2'
    *.log_archive_dest_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
    *.log_archive_dest_3='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
    *.LOG_ARCHIVE_DEST_STATE_1=ENABLE
    *.LOG_ARCHIVE_DEST_STATE_2=DEFER
    *.LOG_ARCHIVE_DEST_STATE_3=DEFER
    *.LOG_ARCHIVE_MAX_PROCESSES=30
    Edited by: mseberg on Nov 29, 2010 9:39 AM
    The first test slapped me. Looking at 409013.1 Cascaded Standby Databases
    Edited by: mseberg on Nov 29, 2010 12:49 PM
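    For completeness, a sketch of the three tnsnames.ora entries the settings above rely on (hosts and port are placeholders; the same file can be copied to all three machines):
    PRIMARY =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = host_a)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = PRIMARY))
      )
    STANDBY =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = host_b)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = STANDBY))
      )
    STANDBY2 =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = host_c)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = STANDBY2))
      )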

  • Use dedicated server for data guard?

    Hi All,
    I've heard from someone that it is possible to separate Data Guard from the database and put it on its own server, such that the Data Guard server would be dedicated to shipping logs to the standby site, etc. I don't know if such an architecture would work. Could anyone please clarify? Thanks in advance.

    An Introduction to Data Guard from Oracle Doc's is here:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/concepts.htm#i1039416

  • License for Data Guard 11g - Observer

    Hi,
    I would like to know if fast-start failover with the Observer requires any additional licensing beyond Data Guard itself.
    Thanks in advance.

    My first stop would be the Oracle 11g Licensing Information guide: http://download.oracle.com/docs/cd/B28359_01/license.111/b28287/toc.htm. After that I would look at the Data Guard documentation.
    HTH!

  • Mapping problem for date field

    Hi XI Friends,
    In my file-to-IDoc scenario,
    I have a date field value: 2006-10-10T14:10:10
    I have to convert the above field into two fields, idate: 20061010
    and itime: 141010.
    I used substring and date transform functions,
    but in the static test of the message mapping I am getting the value 021010 for itime.
    If the time is before 12:00:00 it converts properly; after 12:00:00 it only uses the 12-hour format.
    please guide me..
    regards
    ram

    Hi Ram,
    Is this still a problem?
    I think the hint will also work in this case:
    sourcefield --> replaceString(sourceField/constant[T]/constant[]) --> TransformDate(yyyy-mm-ddHH:MM:SS --> HH:MM:SS) --> targetFieldTime
    sourcefield --> replaceString(sourceField/constant[T]/constant[]) --> TransformDate(yyyy-mm-ddHH:MM:SS --> YYYYMMDD) --> targetFieldDATE
    Daniel

  • Update rule problem for date in Prod

    My scenario is like this:
    ODS 2 is loaded from ODS 1. In ODS 1 there is a data field calendar day (DATS, time characteristic), and there is a data field posting date (DATS, characteristic) in ODS 2. In the update rule, the posting date is updated from the calendar day by a formula source.
    The problem is that the posting date data field is updated into ODS 2 correctly in the development box, but it is not updated (it is blank) in the production box. I can't figure out the cause; hopefully someone can give me some help. Thanks.
    Cheers!
    Cecil

    The only difference is that the development system has this part of code in the update rules program compared to the production system. Do I need to compile the update rules formula manually?
    *This ABAP Code was generated automatically          *
    *Formula Calculator                                  *
    *Generated :2008:09:12-10:47
    *User: XXX
    *Calculation:
    result = COMM_STRUCTURE-CALDAY.
      ENDCATCH.
      if sy-subrc <> 0.
        perform error_message using 'RSAU' 'E' '507'
                'ROUTINE_0004' g_s_is-recno
                rs_c_false rs_c_false g_s_is-recno
                changing c_abort.
      endif.
    Cheers!

  • Force logging for data guard

    It is my understanding that, new in version 11g, force logging is not required on the database? So we could set up a tablespace for staging tables as NOLOGGING and not generate as much redo to be shipped over to the standby?
    Is this true, or was that feature added in 10g? I have hunted around on OTN for some info on the subject. Could anyone provide a link?
    Thanks

    I've seen nothing that indicates "force logging is not required", and the idea that somehow there is an advantage to NOLOGGING is grossly overstated.
    I'd suggest that you go to http://asktom.oracle.com and read Tom's comments about it.
    If your system is so close to falling over that a little bit of logging is going to tip it over, you have huge problems you'd best deal with immediately.
    Consider, for example, setting up your staging tables as global temporary tables.
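    For reference, force logging works the same way in 10g and 11g and can be checked and enabled directly; a sketch, together with the global temporary table suggestion above (the table name and columns are made up for illustration):
    SQL> select force_logging from v$database;
    SQL> alter database force logging;
    SQL> create global temporary table stage_gtt (id number, payload varchar2(100)) on commit preserve rows;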
