Data Guard Test cases

Hi All, we are in the process of implementing a Data Guard (Physical Standby) setup. Everything has been done. Now we need some good test cases to validate our setup. Can you please provide some input on testing this?

A simple test would be to create a few test tables with some data on the primary and check that they are applied at your DR site by Data Guard.
For testing failover & failback, or switchover & switchback, you would need to go through the Oracle documentation or refer to the following link.
http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10gR2_SwitchoverFailoverBestPractices.pdf
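For example, a minimal smoke test of redo transport and apply could look like the following (the table name is made up, and on 10g/11g without Active Data Guard you have to pause recovery and open the standby read-only to check the result):
-- On the primary, as a test schema user:
CREATE TABLE dg_smoke_test (id NUMBER, created DATE);
INSERT INTO dg_smoke_test VALUES (1, SYSDATE);
COMMIT;
ALTER SYSTEM SWITCH LOGFILE;
-- On the standby, pause redo apply and open read-only to verify the row arrived:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE OPEN READ ONLY;
SELECT COUNT(*) FROM dg_smoke_test;
-- Then return the standby to managed recovery:
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;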
sbs

Similar Messages

  • My MTM version is not allowing Data Driven Test Case UI to be created???

From within MTM I am able to use the UI Builder to create (R&P) a test case (a standard login case), but when I then try to use the same process to create a data-driven test case for login, the screens that allow that feature are not showing up. I am using the Ultimate package, but is there something else I should be doing?

    Hi,
What do you mean by "the screens to allow that feature are not showing up"?
In MTM, we can add parameters to a manual test case to run it multiple times with different data.
For more information, please refer to:
    # Add Parameters to a Manual Test Case To Run Multiple Times with Different Data
    https://msdn.microsoft.com/en-us/library/vstudio/dd997832(v=vs.110).aspx
    Regards
    Starain

  • Can anybody explain me difference between test cases and test data

    Hi All,
    Can anybody explain me difference between test cases and test data.
Also, what is the testing procedure for an FS (Functional Spec)?
    Thanks & Regards,
    Smitha

    Hi,
A test case is a procedure describing how to test a particular piece of functionality. It contains the data to be supplied when testing a particular requirement of the given Functional Spec, and it also records the result, i.e. whether the desired functionality is fulfilled or not.
    Regards
    Pratap

  • Use of Flashback Database in Data guard environments

    11.2.0.3/RHEL 5.8
I've come across several docs that talk about configuring FLASHBACK DATABASE in Data Guard environments. We have several
physical standby DBs (single instance & RAC) running in our shop. I would like to know two or three major (common) uses of FLASHBACK DATABASE in Data Guard environments.
I understand one use mentioned in the URL below, i.e. recovering from a logical mistake:
http://uhesse.com/2010/08/06/using-flashback-in-a-data-guard-environment/
What are the other major/common uses of the Flashback Database feature in a Data Guard environment?

    A couple of other uses:
1) You can use flashback to test your DR: activate your standby, test application/network connectivity and functionality on your DR site, and when done revert the database back to a physical standby. You do, however, have to ensure that this is allowed in your environment. In some places I have worked this would be a big no-no, as there were zero data loss requirements; however, some companies will allow it as long as the standby is back in place within a certain time period.
2) If you have to do a failover for whatever reason, but what was the primary site later becomes available again, you can flash back what was your primary and make it the standby rather than re-instantiating the database from scratch.
E.g. you have a power outage at your primary site, so you perform a failover and your standby becomes the primary. Once your old primary site is back online, you could convert the previous primary into a standby by doing a full backup/restore (or whatever method you choose) to recreate the standby. However, you also have the option of using Flashback Database on that database and then converting it into a standby, which is potentially much quicker than re-instantiating the standby, as sketched below.
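A rough sketch of option 2, following the standard Oracle 11g "convert a failed primary into a standby using Flashback Database" procedure (check the Data Guard documentation for your exact release, and note that Flashback Database must already have been enabled on the old primary):
-- On the new primary, find the SCN at which the standby became primary:
SELECT standby_became_primary_scn FROM v$database;
-- On the old primary, mount it and flash back to that SCN:
SHUTDOWN IMMEDIATE
STARTUP MOUNT
FLASHBACK DATABASE TO SCN 1234567;   -- substitute the SCN returned by the query above
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;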

  • Grid control agent and data guard in mount mode

    Hello,
I would like to know how you manage your Data Guard standby databases with the Grid Control agents when you do not have the license for Active Data Guard. The standby database is in mount mode, so the agents cannot query the database.
What do you do in such cases? Remove the agent? Or wait until a switchover?

    Check this out with a mounted database
    [lo***p02].oracle:/home/oracle > sqlplus dbsnmp
    SQL*Plus: Release 11.2.0.3.0 Production on Mon Jun 4 15:52:11 2012
    Copyright (c) 1982, 2011, Oracle.  All rights reserved.
    Enter password:
    ERROR:
    ORA-01033: ORACLE initialization or shutdown in progress
    Process ID: 0
Session ID: 0 Serial number: 0
It cannot connect.
    And then
    [lo****p02].oracle:/home/oracle > sqlplus / as sysdba
    SQL*Plus: Release 11.2.0.3.0 Production on Mon Jun 4 15:57:49 2012
    Copyright (c) 1982, 2011, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
    Data Mining and Real Application Testing options
    SQL> select open_mode from v$database;
    OPEN_MODE
    MOUNTED
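One common workaround (just a sketch - check whether it is acceptable under your security policy) is to let the agent connect with SYSDBA privileges, since a SYSDBA connection does work against a mounted database:
-- On the primary, add dbsnmp to the password file (the password file must also be copied to the standby host):
GRANT SYSDBA TO dbsnmp;
-- Verify on the standby host that dbsnmp can now connect while the database is only mounted:
sqlplus dbsnmp AS SYSDBA
-- Finally, in Grid Control, change the target's monitoring credentials to use dbsnmp with the SYSDBA role.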

  • ACLs in a Failsafe/data guard environment

    Hi,
    We have recently upgraded to Oracle Enterprise Edition 11.2.0.3
We had a slight issue with a few batch processes which send success emails - we hadn't configured the ACL permissions (we knew about it, and it was in the log, but it somehow got missed). It is all set up fine now in our test environment, but I have a couple of queries before I set it up in production.
I had a look in the documentation and tried searching.
The ACLs, I believe, are managed as an XML file on the database server, correct?
If so, how do we manage this in both a clustered and a Data Guard environment? Or in the case of a database restore? Is the ACL file automagically regenerated out of some database metadata? Or do we need to recreate it in each environment?
    Any ideas?
    Cheers,
    Carl

    Hi,
there are two kinds of ACLs (Access Control Lists): one at the DB level, and one used at the UNIX level for file permissions. As you mentioned you have a permission issue in batch execution, you can check the link below:
http://www.dartmouth.edu/~rc/help/faq/permissions.html
I am not sure which ACL you need.
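For the database-level network ACLs: in 11.2 these are stored inside the database itself (in the XDB repository), not as a standalone OS file, so they follow the database through Data Guard and through a restore. As a minimal sketch of how such an ACL is typically created for sending mail (the ACL file name, user and SMTP host below are made-up examples):
BEGIN
  DBMS_NETWORK_ACL_ADMIN.CREATE_ACL(
    acl         => 'mail_server.xml',
    description => 'Allow batch user to send mail',
    principal   => 'BATCH_USER',        -- hypothetical application user
    is_grant    => TRUE,
    privilege   => 'connect');
  DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL(
    acl        => 'mail_server.xml',
    host       => 'smtp.example.com',   -- hypothetical SMTP host
    lower_port => 25,
    upper_port => 25);
  COMMIT;
END;
/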

  • Read-only agent synching to a Data Guard physical standby?

    Hi all,
we are trying to use TimesTen 11.2.2.4.1 as a read-only memory cache for an Oracle 11.2.3.0.7 schema on Linux RedHat 6.3, while using Oracle Data Guard to replicate the Oracle instance across geographically remote sites. On each site we would like to have two TT instances synchronizing with the local Oracle 11g instance. This works fine against the master DB, but will the TT agents be able to synchronize against physical standby instances?
The problem, it seems, is that the TT agent uses dedicated structures in the Oracle master instance (related to the cache grid), which are replicated into the standby instances. Is the TT agent able to use the read-only, replicated structures to complete synchronization, or is this approach unworkable? What would be your advice on how to achieve this?
    Thanks for your help,
    Chris

    Hi again,
    so after testing a little bit it appears that this approach works indeed, at least against a limited number of manual DML operations. What I needed to do on the slave instance to have it working is the following:
    1 - Entirely exclude TTADMIN and TIMESTEN schemas from the Data Guard replication:
    ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    execute dbms_logstdby.skip(stmt => 'SCHEMA_DDL', schema_name => 'TTADMIN', object_name => '%');
    execute dbms_logstdby.skip(stmt => 'SCHEMA_DDL', schema_name => 'TIMESTEN', object_name => '%');
    execute dbms_logstdby.skip(stmt => 'DML', schema_name => 'TTADMIN', object_name => '%');
    execute dbms_logstdby.skip(stmt => 'DML', schema_name => 'TIMESTEN', object_name => '%');
    ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
    2 - Erase both schemas from the local instance:
    DROP USER TTADMIN CASCADE;
    DROP USER TIMESTEN CASCADE;
    CREATE USER TTADMIN etc
    3 - Temporarily disable the database guard while creating the local ttCache structures, as the scripts seem to need to set a table-level lock on the source table:
    ALTER DATABASE GUARD NONE;
    ttIsql> CREATE READONLY CACHE GROUP etc
    ALTER DATABASE GUARD STANDBY;
    4 - Unset the "Fire_Once_Only" property for the local TTADMIN triggers:
    execute dbms_ddl.set_trigger_firing_property(trig_owner=> 'TTADMIN', trig_name=> 'TT_06_70560_T', fire_once => FALSE);
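As a quick sanity check that step 4 took effect, DBMS_DDL also provides the companion function IS_TRIGGER_FIRE_ONCE (a small sketch; the trigger name is the one from step 4):
SET SERVEROUTPUT ON
BEGIN
  IF dbms_ddl.is_trigger_fire_once('TTADMIN', 'TT_06_70560_T') THEN
    dbms_output.put_line('Trigger is still fire-once');
  ELSE
    dbms_output.put_line('Trigger will also fire for apply/replicated changes');
  END IF;
END;
/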
    At that point the cache seems to replicate properly in the most simple cases. I will try to test with some substantial load and against DG failovers to see how this behaves.
    Regards,
    Chris

Test Cases required for BW Statistics to test in QA and DEV.

    HI All,
I am currently working on a support project. My client completed installing BW Statistics in DEV and transported it to QA back in 2006. Currently, before moving the BI Statistics data to PRD, we have to test it in DEV and QA.
How do we prepare sample test cases for testing it in DEV and QA? Please suggest.

    Hi,
    this forum is for the SAP BusinessObjects BI Solution architecture. I would suggest you post your question to the BW forum.
    ingo

  • Data guard synchronization after link down b/w primary and physical standby

    Hi All,
I have configured Data Guard on an Oracle 11gR2 DB. Normally switchover between my primary and physical standby happens smoothly and the apply lag is zero. Recently we had to test a scenario where the network link between the primary and physical standby is completely down and the physical standby is isolated for more than half an hour.
When we brought the link back up, everything worked smoothly, but the apply lag started increasing from 0 to around 3 hrs, and then it started reducing back to 0. Currently apply lag and transport lag show 0.
Is this normal behaviour for Oracle Data Guard - that when the link between primary and physical standby is completely down, it requires 3-4 hrs for resynchronization, even when very few transactions happened on the primary database during the isolation?
Are there any documents available for this scenario?
    Thanks

Hi, after the link is up, if there were some transactions and archive logs were produced, it is normal for the resync to take some time. To check whether 3-4 hours is normal or not, you can repeat the scenario and this time check:
- how many archive logs the primary produces in this period;
- whether archive log transfer from primary to standby starts immediately after the link is up, and whether the primary is able to send these archive logs in parallel;
- whether there is anything wrong with the apply process.
Check the primary & standby alert log files, and run this query on the standby to check the transport and apply processes:
    SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
    regards
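On 11gR2 you can also read the lag figures directly on the standby; a small sketch using the standard views:
SELECT name, value, time_computed FROM v$dataguard_stats WHERE name IN ('transport lag', 'apply lag');
-- and the highest applied sequence per thread:
SELECT thread#, MAX(sequence#) AS last_applied FROM v$archived_log WHERE applied = 'YES' GROUP BY thread#;
Comparing these values during the resync should make it clear whether the 3-4 hours are spent on transport (shipping the backlog of archive logs) or on apply.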

  • Data Guard Failover after primary site network failure or disconnect.

    Hello Experts:
    I'll try to be clear and specific with my issue:
    Environment:
Two nodes with NO shared storage (I don't have an Observer running).
Veritas Cluster Server (VCS) with the Data Guard agent. (I don't use the Broker; the Data Guard agent "takes care" of the switchover and failover.)
Two single-instance databases, one per node. NO RAC.
What I am able to perform with no issues:
Manual switchover of the primary database by running the VCS command "hagrp -switch oraDG_group -to standby_node".
Automatic failover when the primary node is rebooted with "reboot" or "init".
Automatic failover when the primary node is shut down with "shutdown".
What I am NOT able to perform:
Failover when I manually unplug the network cables from the primary site (all of the network, not only the link between the primary and standby node - so it is like unplugging the server from the power source).
The same situation happens if I manually disconnect the server from the power.
    This is the alert logs I have:
    This is the portion of the alert log at Standby site when Real Time Replication is working fine:
    Recovery of Online Redo Log: Thread 1 Group 4 Seq 7 Reading mem 0
      Mem# 0: /u02/oracle/fast_recovery_area/standby_db/onlinelog/o1_mf_4_9c3tk3dy_.log
At this moment, node1 (primary) is completely disconnected from the network. See at the end how the database (the standby, which should be converted to PRIMARY) does not get all the archived logs from the primary because of the abnormal network disconnect:
    Identified End-Of-Redo (failover) for thread 1 sequence 7 at SCN 0xffff.ffffffff
    Incomplete Recovery applied until change 15922544 time 12/23/2013 17:12:48
    Media Recovery Complete (primary_db)
    Terminal Recovery: successful completion
    Forcing ARSCN to IRSCN for TR 0:15922544
    Mon Dec 23 17:13:22 2013
    ARCH: Archival stopped, error occurred. Will continue retrying
ORACLE Instance primary_db - Archival Error
Attempt to set limbo arscn 0:15922544 irscn 0:15922544
    ORA-16014: log 4 sequence# 7 not archived, no available destinations
    ORA-00312: online log 4 thread 1: '/u02/oracle/fast_recovery_area/standby_db/onlinelog/o1_mf_4_9c3tk3dy_.log'
    Resetting standby activation ID 2071848820 (0x7b7de774)
    Completed:  ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH
    Mon Dec 23 17:13:33 2013
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH
    Terminal Recovery: applying standby redo logs.
    Terminal Recovery: thread 1 seq# 7 redo required
    Terminal Recovery:
    Recovery of Online Redo Log: Thread 1 Group 4 Seq 7 Reading mem 0
      Mem# 0: /u02/oracle/fast_recovery_area/standby_db/onlinelog/o1_mf_4_9c3tk3dy_.log
    Identified End-Of-Redo (failover) for thread 1 sequence 7 at SCN 0xffff.ffffffff
    Incomplete Recovery applied until change 15922544 time 12/23/2013 17:12:48
    Media Recovery Complete (primary_db)
    Terminal Recovery: successful completion
    Forcing ARSCN to IRSCN for TR 0:15922544
    Mon Dec 23 17:13:22 2013
    ARCH: Archival stopped, error occurred. Will continue retrying
ORACLE Instance primary_db - Archival Error
Attempt to set limbo arscn 0:15922544 irscn 0:15922544
    ORA-16014: log 4 sequence# 7 not archived, no available destinations
    ORA-00312: online log 4 thread 1: '/u02/oracle/fast_recovery_area/standby_db/onlinelog/o1_mf_4_9c3tk3dy_.log'
    Resetting standby activation ID 2071848820 (0x7b7de774)
    Completed:  ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH
    Mon Dec 23 17:13:33 2013
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH
    Attempt to do a Terminal Recovery (primary_db)
    Media Recovery Start: Managed Standby Recovery (primary_db)
    started logmerger process
    Mon Dec 23 17:13:33 2013
    Managed Standby Recovery not using Real Time Apply
    Media Recovery failed with error 16157
    Recovery Slave PR00 previously exited with exception 283
    ORA-283 signalled during:  ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH...
    Mon Dec 23 17:13:34 2013
    Shutting down instance (immediate)
    Shutting down instance: further logons disabled
    Stopping background process MMNL
    Stopping background process MMON
    License high water mark = 38
    All dispatchers and shared servers shutdown
    ALTER DATABASE CLOSE NORMAL
    ORA-1109 signalled during: ALTER DATABASE CLOSE NORMAL...
    ALTER DATABASE DISMOUNT
    Shutting down archive processes
    Archiving is disabled
    Mon Dec 23 17:13:38 2013
    Mon Dec 23 17:13:38 2013
    Mon Dec 23 17:13:38 2013
ARCH shutting down
ARCH shutting down
    ARCH shutting down
    ARC0: Relinquishing active heartbeat ARCH role
    ARC2: Archival stopped
    ARC0: Archival stopped
    ARC1: Archival stopped
    Completed: ALTER DATABASE DISMOUNT
    ARCH: Archival disabled due to shutdown: 1089
    Shutting down archive processes
    Archiving is disabled
    Mon Dec 23 17:13:40 2013
    Stopping background process VKTM
    ARCH: Archival disabled due to shutdown: 1089
    Shutting down archive processes
    Archiving is disabled
    Mon Dec 23 17:13:43 2013
    Instance shutdown complete
    Mon Dec 23 17:13:44 2013
    Adjusting the default value of parameter parallel_max_servers
    from 1280 to 470 due to the value of parameter processes (500)
    Starting ORACLE instance (normal)
    ************************ Large Pages Information *******************
    Per process system memlock (soft) limit = 64 KB
    Total Shared Global Region in Large Pages = 0 KB (0%)
    Large Pages used by this instance: 0 (0 KB)
    Large Pages unused system wide = 0 (0 KB)
    Large Pages configured system wide = 0 (0 KB)
    Large Page size = 2048 KB
    RECOMMENDATION:
      Total System Global Area size is 3762 MB. For optimal performance,
      prior to the next instance restart:
      1. Increase the number of unused large pages by
    at least 1881 (page size 2048 KB, total size 3762 MB) system wide to
      get 100% of the System Global Area allocated with large pages
      2. Large pages are automatically locked into physical memory.
    Increase the per process memlock (soft) limit to at least 3770 MB to lock
    100% System Global Area's large pages into physical memory
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Initial number of CPU is 32
    Number of processor cores in the system is 16
    Number of processor sockets in the system is 2
    CELL communication is configured to use 0 interface(s):
    CELL IP affinity details:
        NUMA status: NUMA system w/ 2 process groups
        cellaffinity.ora status: cannot find affinity map at '/etc/oracle/cell/network-config/cellaffinity.ora' (see trace file for details)
    CELL communication will use 1 IP group(s):
        Grp 0:
    Picked latch-free SCN scheme 3
    Autotune of undo retention is turned on.
    IMODE=BR
    ILAT =88
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    NUMA system with 2 nodes detected
    Starting up:
    Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options.
    ORACLE_HOME = /u01/oracle/product/11.2.0.4
    System name:    Linux
    Node name:      node2.localdomain
    Release:        2.6.32-131.0.15.el6.x86_64
    Version:        #1 SMP Tue May 10 15:42:40 EDT 2011
    Machine:        x86_64
    Using parameter settings in server-side spfile /u01/oracle/product/11.2.0.4/dbs/spfileprimary_db.ora
    System parameters with non-default values:
      processes                = 500
      sga_target               = 3760M
      control_files            = "/u02/oracle/orafiles/primary_db/control01.ctl"
      control_files            = "/u01/oracle/fast_recovery_area/primary_db/control02.ctl"
      db_file_name_convert     = "standby_db"
      db_file_name_convert     = "primary_db"
      log_file_name_convert    = "standby_db"
      log_file_name_convert    = "primary_db"
      control_file_record_keep_time= 40
      db_block_size            = 8192
      compatible               = "11.2.0.4.0"
      log_archive_dest_1       = "location=/u02/oracle/archivelogs/primary_db"
      log_archive_dest_2       = "SERVICE=primary_db ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=primary_db"
      log_archive_dest_state_2 = "ENABLE"
      log_archive_min_succeed_dest= 1
      fal_server               = "primary_db"
      log_archive_trace        = 0
      log_archive_config       = "DG_CONFIG=(primary_db,standby_db)"
      log_archive_format       = "%t_%s_%r.dbf"
      log_archive_max_processes= 3
      db_recovery_file_dest    = "/u02/oracle/fast_recovery_area"
      db_recovery_file_dest_size= 30G
      standby_file_management  = "AUTO"
      db_flashback_retention_target= 1440
      undo_tablespace          = "UNDOTBS1"
      remote_login_passwordfile= "EXCLUSIVE"
      db_domain                = ""
      dispatchers              = "(PROTOCOL=TCP) (SERVICE=primary_dbXDB)"
      job_queue_processes      = 0
      audit_file_dest          = "/u01/oracle/admin/primary_db/adump"
      audit_trail              = "DB"
      db_name                  = "primary_db"
      db_unique_name           = "standby_db"
      open_cursors             = 300
      pga_aggregate_target     = 1250M
      dg_broker_start          = FALSE
      diagnostic_dest          = "/u01/oracle"
    Mon Dec 23 17:13:45 2013
    PMON started with pid=2, OS id=29108
    Mon Dec 23 17:13:45 2013
    PSP0 started with pid=3, OS id=29110
    Mon Dec 23 17:13:46 2013
    VKTM started with pid=4, OS id=29125 at elevated priority
    VKTM running at (1)millisec precision with DBRM quantum (100)ms
    Mon Dec 23 17:13:46 2013
    GEN0 started with pid=5, OS id=29129
    Mon Dec 23 17:13:46 2013
    DIAG started with pid=6, OS id=29131
    Mon Dec 23 17:13:46 2013
    DBRM started with pid=7, OS id=29133
    Mon Dec 23 17:13:46 2013
    DIA0 started with pid=8, OS id=29135
    Mon Dec 23 17:13:46 2013
    MMAN started with pid=9, OS id=29137
    Mon Dec 23 17:13:46 2013
    DBW0 started with pid=10, OS id=29139
    Mon Dec 23 17:13:46 2013
    DBW1 started with pid=11, OS id=29141
    Mon Dec 23 17:13:46 2013
    DBW2 started with pid=12, OS id=29143
    Mon Dec 23 17:13:46 2013
    DBW3 started with pid=13, OS id=29145
    Mon Dec 23 17:13:46 2013
    LGWR started with pid=14, OS id=29147
    Mon Dec 23 17:13:46 2013
    CKPT started with pid=15, OS id=29149
    Mon Dec 23 17:13:46 2013
    SMON started with pid=16, OS id=29151
    Mon Dec 23 17:13:46 2013
    RECO started with pid=17, OS id=29153
    Mon Dec 23 17:13:46 2013
    MMON started with pid=18, OS id=29155
    Mon Dec 23 17:13:46 2013
    MMNL started with pid=19, OS id=29157
    starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
    starting up 1 shared server(s) ...
    ORACLE_BASE from environment = /u01/oracle
    Mon Dec 23 17:13:46 2013
    ALTER DATABASE   MOUNT
    ARCH: STARTING ARCH PROCESSES
    Mon Dec 23 17:13:50 2013
    ARC0 started with pid=23, OS id=29210
    ARC0: Archival started
    ARCH: STARTING ARCH PROCESSES COMPLETE
    ARC0: STARTING ARCH PROCESSES
    Successful mount of redo thread 1, with mount id 2071851082
    Mon Dec 23 17:13:51 2013
    ARC1 started with pid=24, OS id=29212
    Allocated 15937344 bytes in shared pool for flashback generation buffer
    Mon Dec 23 17:13:51 2013
    ARC2 started with pid=25, OS id=29214
    Starting background process RVWR
    ARC1: Archival started
    ARC1: Becoming the 'no FAL' ARCH
    ARC1: Becoming the 'no SRL' ARCH
    Mon Dec 23 17:13:51 2013
    RVWR started with pid=26, OS id=29216
    Physical Standby Database mounted.
    Lost write protection disabled
    Completed: ALTER DATABASE   MOUNT
    Mon Dec 23 17:13:51 2013
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
             USING CURRENT LOGFILE DISCONNECT FROM SESSION
    Attempt to start background Managed Standby Recovery process (primary_db)
    Mon Dec 23 17:13:51 2013
    MRP0 started with pid=27, OS id=29219
    MRP0: Background Managed Standby Recovery process started (primary_db)
    ARC2: Archival started
    ARC0: STARTING ARCH PROCESSES COMPLETE
    ARC2: Becoming the heartbeat ARCH
    ARC2: Becoming the active heartbeat ARCH
    ARCH: Archival stopped, error occurred. Will continue retrying
    ORACLE Instance primary_db - Archival Error
    ORA-16014: log 4 sequence# 7 not archived, no available destinations
    ORA-00312: online log 4 thread 1: '/u02/oracle/fast_recovery_area/standby_db/onlinelog/o1_mf_4_9c3tk3dy_.log'
At this moment, I've lost service and I have to wait until the primary server comes up again to receive the missing log.
    This is the rest of the log:
    Fatal NI connect error 12543, connecting to:
    (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=node1)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=primary_db)(CID=(PROGRAM=oracle)(HOST=node2.localdomain)(USER=oracle))))
      VERSION INFORMATION:
            TNS for Linux: Version 11.2.0.4.0 - Production
            TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
      Time: 23-DEC-2013 17:13:52
      Tracing not turned on.
      Tns error struct:
        ns main err code: 12543
    TNS-12543: TNS:destination host unreachable
        ns secondary err code: 12560
        nt main err code: 513
    TNS-00513: Destination host unreachable
        nt secondary err code: 113
        nt OS err code: 0
    Fatal NI connect error 12543, connecting to:
    (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=node1)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=primary_db)(CID=(PROGRAM=oracle)(HOST=node2.localdomain)(USER=oracle))))
      VERSION INFORMATION:
            TNS for Linux: Version 11.2.0.4.0 - Production
            TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
      Time: 23-DEC-2013 17:13:55
      Tracing not turned on.
      Tns error struct:
        ns main err code: 12543
    TNS-12543: TNS:destination host unreachable
        ns secondary err code: 12560
        nt main err code: 513
    TNS-00513: Destination host unreachable
        nt secondary err code: 113
        nt OS err code: 0
    started logmerger process
    Mon Dec 23 17:13:56 2013
    Managed Standby Recovery starting Real Time Apply
    MRP0: Background Media Recovery terminated with error 16157
    Errors in file /u01/oracle/diag/rdbms/standby_db/primary_db/trace/primary_db_pr00_29230.trc:
    ORA-16157: media recovery not allowed following successful FINISH recovery
    Managed Standby Recovery not using Real Time Apply
    Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
             USING CURRENT LOGFILE DISCONNECT FROM SESSION
    Recovery Slave PR00 previously exited with exception 16157
    MRP0: Background Media Recovery process shutdown (primary_db)
    Fatal NI connect error 12543, connecting to:
    (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=node1)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=primary_db)(CID=(PROGRAM=oracle)(HOST=node2.localdomain)(USER=oracle))))
      VERSION INFORMATION:
            TNS for Linux: Version 11.2.0.4.0 - Production
            TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
      Time: 23-DEC-2013 17:13:58
      Tracing not turned on.
      Tns error struct:
        ns main err code: 12543
    TNS-12543: TNS:destination host unreachable
        ns secondary err code: 12560
        nt main err code: 513
    TNS-00513: Destination host unreachable
        nt secondary err code: 113
        nt OS err code: 0
    Mon Dec 23 17:14:01 2013
    Fatal NI connect error 12543, connecting to:
    (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=node1)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=primary_db)(CID=(PROGRAM=oracle)(HOST=node2.localdomain)(USER=oracle))))
      VERSION INFORMATION:
            TNS for Linux: Version 11.2.0.4.0 - Production
            TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
      Time: 23-DEC-2013 17:14:01
      Tracing not turned on.
      Tns error struct:
        ns main err code: 12543
    TNS-12543: TNS:destination host unreachable
        ns secondary err code: 12560
        nt main err code: 513
    TNS-00513: Destination host unreachable
        nt secondary err code: 113
        nt OS err code: 0
    Error 12543 received logging on to the standby
    FAL[client, ARC0]: Error 12543 connecting to primary_db for fetching gap sequence
    Archiver process freed from errors. No longer stopped
    Mon Dec 23 17:15:07 2013
    Using STANDBY_ARCHIVE_DEST parameter default value as /u02/oracle/archivelogs/primary_db
    Mon Dec 23 17:19:51 2013
    ARCH: Archival stopped, error occurred. Will continue retrying
    ORACLE Instance primary_db - Archival Error
    ORA-16014: log 4 sequence# 7 not archived, no available destinations
    ORA-00312: online log 4 thread 1: '/u02/oracle/fast_recovery_area/standby_db/onlinelog/o1_mf_4_9c3tk3dy_.log'
    Mon Dec 23 17:26:18 2013
    RFS[1]: Assigned to RFS process 31456
    RFS[1]: No connections allowed during/after terminal recovery.
    Mon Dec 23 17:26:47 2013
    flashback database to scn 15921680
    ORA-16157 signalled during: flashback database to scn 15921680...
    Mon Dec 23 17:27:05 2013
    alter database recover managed standby database using current logfile disconnect
    Attempt to start background Managed Standby Recovery process (primary_db)
    Mon Dec 23 17:27:05 2013
    MRP0 started with pid=28, OS id=31481
    MRP0: Background Managed Standby Recovery process started (primary_db)
    started logmerger process
    Mon Dec 23 17:27:10 2013
    Managed Standby Recovery starting Real Time Apply
    MRP0: Background Media Recovery terminated with error 16157
    Errors in file /u01/oracle/diag/rdbms/standby_db/primary_db/trace/primary_db_pr00_31486.trc:
    ORA-16157: media recovery not allowed following successful FINISH recovery
    Managed Standby Recovery not using Real Time Apply
    Completed: alter database recover managed standby database using current logfile disconnect
    Recovery Slave PR00 previously exited with exception 16157
    MRP0: Background Media Recovery process shutdown (primary_db)
    Mon Dec 23 17:27:18 2013
    RFS[2]: Assigned to RFS process 31492
    RFS[2]: No connections allowed during/after terminal recovery.
    Mon Dec 23 17:28:18 2013
    RFS[3]: Assigned to RFS process 31614
    RFS[3]: No connections allowed during/after terminal recovery.
    Do you have any advice?
    Thanks!
    Alex.

    Hello;
    What's not clear to me in your question at this point:
    What I'm NOT being able to perform:
    If I manually unplug the network cables from the primary site (all the network, not only the link between primary and standby node so, it's like a server unplug from the energy source).
    Same situation happens if I manually disconnect the server from the power.
    This is the alert logs I have:"
    Are you trying a failover to the Standby?
    Please advise.
    Is it possible your "valid_for clause" is set incorrectly?
    Would also review this:
    ORA-16014 and ORA-00312 Messages in Alert.log of Physical Standby
    Best Regards
    mseberg
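To check the VALID_FOR settings and the last error reported for the remote destination, a query like this on the primary may help (dest_id 2 is assumed to be the standby destination, as is common):
SELECT dest_id, status, valid_type, valid_role, error
  FROM v$archive_dest
 WHERE dest_id = 2;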

How do I find data loss in Data Guard?

We are using redo transport in ASYNC mode; the following is our setting.
    SERVICE=xxx_sb max_failure=100 reopen=600 LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=xxx_sb
When I query V$MANAGED_STANDBY for DELAY_MINS, it is always zero, meaning there is no delay in copying a log. I have two questions:
1. How can I communicate to the business that in the worst case we will lose x minutes of data? It's an OLTP system where transactions take less than 2 minutes, but during the night there are some batch jobs where the transactions are 60 minutes long.
2. Most of the time during peak hours a log switch happens every 10-15 minutes, but during off-peak hours it may not happen for a long period. Is it advisable to set ARCHIVE_LAG_TARGET to 10 minutes, given that I am not using the archiver but the log writer for the standby?
    any explanation or point to documentation would be appreciated,
    Thanks,

Production databases running with a fully and properly configured Data Guard don't have any data loss, because the failover operation ensures zero data loss if Data Guard is configured in maximum protection or maximum availability mode at failover time.
http://www.dbazone.com/docs/oracle_10gDataGuard_overview.pdf
The above PDF is an Oracle white paper which confirms this as well.
    LGWR SYNC AFFIRM in Oracle Data Guard is used for zero data loss. How does one ensure zero data loss? Well, the redo block generated at the primary has to reach the standby across the network (that's where the SYNC part comes in - i.e. it is a synchronous network call), and then the block has to be written on disk on the standby (that's where the AFFIRM part comes in) - typically on a standby redo log.
    Can you have LGWR SYNC NOAFFIRM? Yes sure. Then you will have synchronous network transport, but the only thing you are guaranteed is that the block has reached the remote standby's memory. It has not been written on to disk yet. So not really a zero data loss solution (e.g. what if the standby instance crashes before the disk I/O).
To sum up -> LGWR SYNC AFFIRM means primary transaction commits wait for network I/O + disk I/O acknowledgements. LGWR SYNC NOAFFIRM means primary transaction commits wait for network I/O only.
    Source:http://www.dbasupport.com/forums/showthread.php?t=54467
    HTH
    Girish Sharma
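Regarding the second question: the parameter is ARCHIVE_LAG_TARGET and it is specified in seconds, so a 10-minute upper bound on how long redo can sit in an unswitched online log would be set roughly like this:
ALTER SYSTEM SET ARCHIVE_LAG_TARGET = 600 SCOPE=BOTH;
Note that with LGWR ASYNC the redo stream is shipped continuously anyway, so this mainly limits your exposure for periods when transport to the standby is interrupted.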

  • Error encountered while running a test case using MTM

    Hello,
Has anyone encountered this error while running a test case using MTM 2013?
When I click on OK, the message is shown again until at some point the popup error is no longer shown.
When I try to continue to the next pages, where a modal is opened and I need to enter some data in a text field, IE stops working and seems to crash.
    Thanks!

    Hi Kiyaruh,
Based on the error message, the error is thrown by Internet Explorer.
Do you have the same issue when you access that web site/page in IE directly, or does it only occur when running tests through MTM?
What is the version of your browser? You may try another browser and check whether it has the same issue.
    Regards
    Starain

  • I have one problem with Data Guard. My archive log files are not applied.

I have one problem with Data Guard: my archive log files are not applied, although I have received all archive log files on my physical standby DB.
I have created a physical standby database on Oracle 10gR2 (Windows XP Professional). The primary database is on another computer.
In Enterprise Manager on the primary database it looks OK; I get the message "Data Guard status Normal".
But as I wrote above, the archive log files are not applied.
    After I created the Physical Standby database, I have also done:
    1. I connected to the Physical Standby database instance.
    CONNECT SYS/SYS@luda AS SYSDBA
    2. I started the Oracle instance at the Physical Standby database without mounting the database.
    STARTUP NOMOUNT PFILE=C:\oracle\product\10.2.0\db_1\database\initluda.ora
    3. I mounted the Physical Standby database:
    ALTER DATABASE MOUNT STANDBY DATABASE
    4. I started redo apply on Physical Standby database
    alter database recover managed standby database disconnect from session
    5. I switched the log files on Physical Standby database
    alter system switch logfile
    6. I verified the redo data was received and archived on Physical Standby database
    select sequence#, first_time, next_time from v$archived_log order by sequence#
    SEQUENCE# FIRST_TIME NEXT_TIME
    3 2006-06-27 2006-06-27
    4 2006-06-27 2006-06-27
    5 2006-06-27 2006-06-27
    6 2006-06-27 2006-06-27
    7 2006-06-27 2006-06-27
    8 2006-06-27 2006-06-27
    7. I verified the archived redo log files were applied on Physical Standby database
    select sequence#,applied from v$archived_log;
    SEQUENCE# APP
    4 NO
    3 NO
    5 NO
    6 NO
    7 NO
    8 NO
    8. on Physical Standby database
    select * from v$archive_gap;
    No rows
    9. on Physical Standby database
    SELECT MESSAGE FROM V$DATAGUARD_STATUS;
    MESSAGE
    ARC0: Archival started
    ARC1: Archival started
    ARC2: Archival started
    ARC3: Archival started
    ARC4: Archival started
    ARC5: Archival started
    ARC6: Archival started
    ARC7: Archival started
    ARC8: Archival started
    ARC9: Archival started
    ARCa: Archival started
    ARCb: Archival started
    ARCc: Archival started
    ARCd: Archival started
    ARCe: Archival started
    ARCf: Archival started
    ARCg: Archival started
    ARCh: Archival started
    ARCi: Archival started
    ARCj: Archival started
    ARCk: Archival started
    ARCl: Archival started
    ARCm: Archival started
    ARCn: Archival started
    ARCo: Archival started
    ARCp: Archival started
    ARCq: Archival started
    ARCr: Archival started
    ARCs: Archival started
    ARCt: Archival started
    ARC0: Becoming the 'no FAL' ARCH
    ARC0: Becoming the 'no SRL' ARCH
    ARC1: Becoming the heartbeat ARCH
    Attempt to start background Managed Standby Recovery process
    MRP0: Background Managed Standby Recovery process started
    Managed Standby Recovery not using Real Time Apply
    MRP0: Background Media Recovery terminated with error 1110
    MRP0: Background Media Recovery process shutdown
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[1]: Assigned to RFS process 2148
    RFS[1]: Identified database type as 'physical standby'
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[2]: Assigned to RFS process 2384
    RFS[2]: Identified database type as 'physical standby'
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[3]: Assigned to RFS process 3188
    RFS[3]: Identified database type as 'physical standby'
    Primary database is in MAXIMUM PERFORMANCE mode
    Primary database is in MAXIMUM PERFORMANCE mode
    RFS[3]: No standby redo logfiles created
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[4]: Assigned to RFS process 3168
    RFS[4]: Identified database type as 'physical standby'
    RFS[4]: No standby redo logfiles created
    Primary database is in MAXIMUM PERFORMANCE mode
    RFS[3]: No standby redo logfiles created
    10. on Physical Standby database
    SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
    PROCESS STATUS THREAD# SEQUENCE# BLOCK# BLOCKS
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    RFS IDLE 0 0 0 0
    RFS IDLE 0 0 0 0
    RFS IDLE 1 9 13664 2
    RFS IDLE 0 0 0 0
    10) on Primary database:
    select message from v$dataguard_status;
    MESSAGE
    ARC0: Archival started
    ARC1: Archival started
    ARC2: Archival started
    ARC3: Archival started
    ARC4: Archival started
    ARC5: Archival started
    ARC6: Archival started
    ARC7: Archival started
    ARC8: Archival started
    ARC9: Archival started
    ARCa: Archival started
    ARCb: Archival started
    ARCc: Archival started
    ARCd: Archival started
    ARCe: Archival started
    ARCf: Archival started
    ARCg: Archival started
    ARCh: Archival started
    ARCi: Archival started
    ARCj: Archival started
    ARCk: Archival started
    ARCl: Archival started
    ARCm: Archival started
    ARCn: Archival started
    ARCo: Archival started
    ARCp: Archival started
    ARCq: Archival started
    ARCr: Archival started
    ARCs: Archival started
    ARCt: Archival started
    ARCm: Becoming the 'no FAL' ARCH
    ARCm: Becoming the 'no SRL' ARCH
    ARCd: Becoming the heartbeat ARCH
    Error 1034 received logging on to the standby
    Error 1034 received logging on to the standby
    LGWR: Error 1034 creating archivelog file 'luda'
    LNS: Failed to archive log 3 thread 1 sequence 7 (1034)
    FAL[server, ARCh]: Error 1034 creating remote archivelog file 'luda'
    11)on primary db
    select name,sequence#,applied from v$archived_log;
    NAME SEQUENCE# APP
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00003_0594204176.001 3 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00004_0594204176.001 4 NO
    Luda 4 NO
    Luda 3 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00005_0594204176.001 5 NO
    Luda 5 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00006_0594204176.001 6 NO
    Luda 6 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00007_0594204176.001 7 NO
    Luda 7 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00008_0594204176.001 8 NO
    Luda 8 NO
    12) on standby db
    select name,sequence#,applied from v$archived_log;
    NAME SEQUENCE# APP
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00004_0594204176.001 4 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00003_0594204176.001 3 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00005_0594204176.001 5 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00006_0594204176.001 6 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00007_0594204176.001 7 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00008_0594204176.001 8 NO
    13) my init.ora files
    On standby db
    irina.__db_cache_size=79691776
    irina.__java_pool_size=4194304
    irina.__large_pool_size=4194304
    irina.__shared_pool_size=75497472
    irina.__streams_pool_size=0
    *.audit_file_dest='C:\oracle\product\10.2.0\admin\luda\adump'
    *.background_dump_dest='C:\oracle\product\10.2.0\admin\luda\bdump'
    *.compatible='10.2.0.1.0'
    *.control_files='C:\oracle\product\10.2.0\oradata\luda\luda.ctl'
    *.core_dump_dest='C:\oracle\product\10.2.0\admin\luda\cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_file_name_convert='luda','irina'
    *.db_name='irina'
    *.db_unique_name='luda'
    *.db_recovery_file_dest='C:\oracle\product\10.2.0\flash_recovery_area'
    *.db_recovery_file_dest_size=2147483648
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=irinaXDB)'
    *.fal_client='luda'
    *.fal_server='irina'
    *.job_queue_processes=10
    *.log_archive_config='DG_CONFIG=(irina,luda)'
    *.log_archive_dest_1='LOCATION=C:/oracle/product/10.2.0/oradata/luda/ VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=luda'
    *.log_archive_dest_2='SERVICE=irina LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=irina'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_max_processes=30
    *.log_file_name_convert='C:/oracle/product/10.2.0/oradata/irina/','C:/oracle/product/10.2.0/oradata/luda/'
    *.open_cursors=300
    *.pga_aggregate_target=16777216
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=167772160
    *.standby_file_management='AUTO'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='C:\oracle\product\10.2.0\admin\luda\udump'
    On primary db
    irina.__db_cache_size=79691776
    irina.__java_pool_size=4194304
    irina.__large_pool_size=4194304
    irina.__shared_pool_size=75497472
    irina.__streams_pool_size=0
    *.audit_file_dest='C:\oracle\product\10.2.0/admin/irina/adump'
    *.background_dump_dest='C:\oracle\product\10.2.0/admin/irina/bdump'
    *.compatible='10.2.0.1.0'
    *.control_files='C:\oracle\product\10.2.0\oradata\irina\control01.ctl','C:\oracle\product\10.2.0\oradata\irina\control02.ctl','C:\oracle\product\10.2.0\oradata\irina\control03.ctl'
    *.core_dump_dest='C:\oracle\product\10.2.0/admin/irina/cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_file_name_convert='luda','irina'
    *.db_name='irina'
    *.db_recovery_file_dest='C:\oracle\product\10.2.0/flash_recovery_area'
    *.db_recovery_file_dest_size=2147483648
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=irinaXDB)'
    *.fal_client='irina'
    *.fal_server='luda'
    *.job_queue_processes=10
    *.log_archive_config='DG_CONFIG=(irina,luda)'
    *.log_archive_dest_1='LOCATION=C:/oracle/product/10.2.0/oradata/irina/ VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=irina'
    *.log_archive_dest_2='SERVICE=luda LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=luda'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_max_processes=30
    *.log_file_name_convert='C:/oracle/product/10.2.0/oradata/luda/','C:/oracle/product/10.2.0/oradata/irina/'
    *.open_cursors=300
    *.pga_aggregate_target=16777216
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=167772160
    *.standby_file_management='AUTO'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='C:\oracle\product\10.2.0/admin/irina/udump'
    Please help me!!!!

    Hi,
After several tries my redo logs are being applied now. I think in my case it had to do with the tnsnames.ora. At this moment I have both databases in both tnsnames.ora files using the SID and not the SERVICE_NAME.
Now I want to use DGMGRL. Adding a configuration and a standby database works fine, but when I try to enable the configuration, DGMGRL gives no feedback and looks like it is hanging. The log, however, says that it succeeded.
In another session 'show configuration' results in the following, confirming that the enable succeeded:
    DGMGRL> show configuration
    Configuration
    Name: avhtest
    Enabled: YES
    Protection Mode: MaxPerformance
    Fast-Start Failover: DISABLED
    Databases:
    avhtest - Primary database
    avhtestls53 - Physical standby database
    Current status for "avhtest":
    Warning: ORA-16610: command 'ENABLE CONFIGURATION' in progress
Is there anybody who has experienced the same problem and/or knows the solution to this?
    With kind regards,
    Martin Schaap
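When ORA-16610 lingers like this, one possible next step (just a suggestion) is to ask the broker itself what it is doing on both sides - the database names below are the ones from the configuration shown above - and to check the broker log (drc<SID>.log) in the trace directory on each host:
DGMGRL> show configuration verbose
DGMGRL> show database verbose 'avhtest'
DGMGRL> show database verbose 'avhtestls53'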

Error in JUnit test case for struts2 action class using StrutsSpringTestCase

    Hi
I am getting an error when running my Struts2 action class JUnit test case using the StrutsSpringTestCase class (which is in the struts2-junit-plugin, version 2.1.8).
    Here is the code....
    package ipl.admin.action.test;
    import java.sql.Timestamp;
    import java.util.Date;
    import org.apache.struts2.StrutsSpringTestCase;
    import ipl.admin.action.RoleMasterNewAction;
    import ipl.admin.beans.RoleMasterNewBeanRemote;
    import com.opensymphony.xwork2.ObjectFactory;
public class RoleMasterNewActionTest extends StrutsSpringTestCase {
     //XmlBeanFactory bf = new XmlBeanFactory(new ClassPathResource("applicationContext.xml", getClass()));
     RoleMasterNewAction action = new RoleMasterNewAction();
     RoleMasterNewBeanRemote roleMasterBeanNewRemote;
     public RoleMasterNewBeanRemote getRoleMasterBeanNewRemote() {
          return roleMasterBeanNewRemote;
     }
     public void setRoleMasterBeanNewRemote(
               RoleMasterNewBeanRemote roleMasterBeanNewRemote) {
          this.roleMasterBeanNewRemote = roleMasterBeanNewRemote;
     }
     public void setUp() throws Exception {
          super.setUp();
          ObjectFactory.setObjectFactory(new ObjectFactory());
     }
     public void testDoSomeThing() throws Exception {
          //System.out.println(bf.getBean("loginIntercepter"));
          assertTrue(action.doSomeThing());
     }
}
I am getting an error for this code. Below are the error details.
    2010-05-03 17:28:14,671 INFO [com.opensymphony.xwork2.config.providers.XmlConfigurationProvider] - Parsing configuration file [struts-default.xml]
    2010-05-03 17:28:14,859 INFO [com.opensymphony.xwork2.config.providers.XmlConfigurationProvider] - Parsing configuration file [struts-plugin.xml]
    2010-05-03 17:28:14,999 INFO [com.opensymphony.xwork2.config.providers.XmlConfigurationProvider] - Parsing configuration file [struts.xml]
    2010-05-03 17:28:15,015 WARN [org.apache.struts2.config.Settings] - Settings: Could not parse struts.locale setting, substituting default VM locale
    2010-05-03 17:28:15,015 INFO [org.apache.struts2.config.BeanSelectionProvider] - Loading global messages from ipl.comm.resources.comman-Lookup
    2010-05-03 17:28:15,015 INFO [org.apache.struts2.config.BeanSelectionProvider] - Loading global messages from ipl.comm.resources.comman-label
    2010-05-03 17:28:15,015 INFO [org.apache.struts2.config.BeanSelectionProvider] - Loading global messages from ipl.comm.resources.comman-headings
    2010-05-03 17:28:15,015 INFO [org.apache.struts2.config.BeanSelectionProvider] - Loading global messages from ipl.comm.resources.comman-messages
    2010-05-03 17:28:15,046 INFO [org.apache.struts2.config.BeanSelectionProvider] - Loading global messages from ipl.comm.resources.comman-setup
    2010-05-03 17:28:15,046 INFO [org.apache.struts2.config.BeanSelectionProvider] - Loading global messages from ipl.comm.resources.common-errors
    2010-05-03 17:28:15,046 INFO [org.apache.struts2.config.BeanSelectionProvider] - Loading global messages from ipl.admin.resources.admin-label
    2010-05-03 17:28:15,046 INFO [org.apache.struts2.config.BeanSelectionProvider] - Loading global messages from ipl.admin.resources.admin-lookup
    2010-05-03 17:28:15,046 INFO [org.apache.struts2.config.BeanSelectionProvider] - Loading global messages from ipl.admin.resources.admin-headings
    2010-05-03 17:28:15,046 INFO [org.apache.struts2.config.BeanSelectionProvider] - Loading global messages from ipl.admin.resources.admin-jndinames
    2010-05-03 17:28:15,046 INFO [org.apache.struts2.config.BeanSelectionProvider] - Loading global messages from ipl.birthCertificate.resources.birth-jndinames
    2010-05-03 17:28:15,046 INFO [org.apache.struts2.config.BeanSelectionProvider] - Loading global messages from ipl.birthCertificate.resources.birth-labels
    2010-05-03 17:28:15,046 INFO [org.apache.struts2.config.BeanSelectionProvider] - Loading global messages from ipl.birthCertificate.resources.birth-headings
    2010-05-03 17:28:15,046 INFO [org.apache.struts2.config.BeanSelectionProvider] - Loading global messages from ipl.admin.resources.admin-alert
    2010-05-03 17:28:15,046 INFO [org.apache.struts2.config.BeanSelectionProvider] - Loading global messages from ipl.comm.resources.form
    2010-05-03 17:28:15,140 INFO [org.apache.struts2.spring.StrutsSpringObjectFactory] - Initializing Struts-Spring integration...
    2010-05-03 17:28:15,171 FATAL [org.apache.struts2.spring.StrutsSpringObjectFactory] - ********** FATAL ERROR STARTING UP STRUTS-SPRING INTEGRATION **********
    Looks like the Spring listener was not configured for your web app!
    Nothing will work until WebApplicationContextUtils returns a valid ApplicationContext.
    You might need to add the following to web.xml:
        <listener>
            <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
        </listener>
But I already have the listener configuration in my web.xml.
    Thanks in advance...

I guess it isn't reading the right web.xml file.
You can prove it by deleting your web.xml file (don't forget to make a backup) and then running the test again to see whether the error is the same.

Need suggestion on Active Data Guard or Logical Standby

    Hi All,
Need a suggestion on the scenario below.
We have a production database (Oracle version 11gR2) and are planning to have either a logical standby or a physical standby (Active Data Guard). Our intended usage of the standby database is as follows.
1) Planning to run online reports (100+) 24x7, so we might create additional indexes, materialized views, etc.
2) Daily data feed (around 300+ data files) to a data warehouse. Every night, jobs will be scheduled to extract data and send it to the warehouse. We might need additional tables for the jobs' usage.
Please suggest which one is better.
    Regards,
    vara.

    Hello,
Active Data Guard is a feature available from 11g onwards.
If you choose Active Data Guard you have a couple of good options. You can use it for high availability of your production database, since the standby acts as an image copy of production. And, as you are on 11g, you have the additional advantage that you can open the standby in read-only mode while MRP stays active, so you can redirect users to the standby to run SELECTs for reporting purposes and reduce the load on production, as sketched below.
You can also perform a switchover in case of a role change, or perform a failover if your primary is completely lost. You can also convert the physical standby to a logical standby, and you can configure FSFO (fast-start failover).
You have plenty of options with Active Data Guard.
    Refer http://www.orafaq.com/node/957
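As an illustration of the read-only-with-apply point above, this is the usual way real-time query is enabled on a physical standby in 11g (note that running it this way in production requires the Active Data Guard licence):
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE OPEN READ ONLY;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
Reporting users can then run their SELECTs on the standby while redo is still being applied.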
    consider closing the thread if answered and keep the forum clean.
    Edited by: CKPT on Mar 18, 2012 8:14 PM
