Physical DataGuard issue

Hello,
- Primary and standby database versions: 11.2.0.3, on Red Hat Linux 5.
- Physical standby database created using the RMAN active duplication command.
- Archive logs are shipping and being applied on the standby.
I am trying to set up the broker to manage the Data Guard environment.
From the primary, I connected to dgmgrl and ran:
create configuration 'DGConfigCTT' as primary database is 'cttdb' connect identifier is cttdb;
add database 'cttstby' as connect identifier is cttstby;
Error: I cannot enable the standby database from dgmgrl:
dgmgrl
DGMGRL for Linux: Version 11.2.0.3.0 - 64bit Production
Copyright (c) 2000, 2009, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys
Password:
Connected.
DGMGRL>  show database cttstby 'StatusReport';
Error: ORA-16548: database not enabled
DGMGRL> enable database cttstby;
Enabled.
DGMGRL> show database cttstby 'StatusReport';
Error: ORA-16548: database not enabled
DGMGRL>
Please help.

DBA wrote:
Yes, I did enable it.
Give the results of:
DGMGRL> show configuration;
You should enable the configuration; later, if you add any database, your steps are of course applicable. Below is a sample:
DGMGRL> enable configuration;
Enabled.
DGMGRL>
DGMGRL> show configuration;
Configuration - CKPT
  Protection Mode: MaxPerformance
  Databases:
    PRIM - Primary database
    STAND  - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS
DGMGRL>
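For the original configuration in this thread ('DGConfigCTT' with databases cttdb and cttstby), a minimal sequence would look like the sketch below; it assumes dg_broker_start=true is already set on both primary and standby and that the cttstby connect identifier resolves from the primary host:
DGMGRL> enable configuration;
DGMGRL> show configuration;
DGMGRL> show database 'cttstby' 'StatusReport';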

Similar Messages

  • Physical query issued by Obiee when cache is on is different and slow

    When the same report runs in OBIEE 10g with cache OFF, it takes less than 1 minute to get results. If cache is turned ON, the physical query issued by OBIEE is totally different and it takes 2 hours to get results. Has anyone experienced some queries performing poorly with cache on?
    Thanks,
    Tatjana

    We are using BI Apps Order Management and Fulfillment Analytics and all tables are cached anyway. The dimensions used are not that huge, up to 40K rows. What should I check when it comes to the DB query? As I said, it is different from the one generated when cache is disabled, although both have almost the same explain plan.

  • Dataguard 9iR2 using same SID for primary and physical standby issues

    Hello, I have been following the Creating Physical Standby Database document from Oracle, setting up a physical clone for a 9iR2 database in a Solaris 9 environment between two identical workstations. I want the two databases to have the same SID for relatively seamless failover; however, the redo logs are not pushed to the physical clone. When I switch a logfile on the primary and then query v$archive_dest for dest_id=2, I get a status of ERROR, and the error is the ORA-12154 TNS service name error.
    Here is a summary of my setup:
    Primary - SID = PROD1, located on WS001
    Clone - SID = PROD1, located on WS002
    SPFILE settings on Primary:
    *.db_name='PROD1'
    *.dg_broker_start=true
    *.fal_client='PROD1_node2'
    *.fal_server='PROD1_node1'
    *.log_archive_dest_2='service=PROD1_node2 optional lgwr async=20480 noaffirm reopen=15 max_failure=10 delay=30 net_timeout=30'
    tnsnames.ora on Primary:
    PROD1_node2 =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP) (HOST = WS002) (PORT = 1521))
        )
        (CONNECT_DATA =
          (SID = PROD1)
        )
      )
    listener.ora on secondary:
    PROD1_node2 =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP) (HOST = WS002) (PORT = 1521))
      )
    SID_LIST_PROD1_node2 =
      (SID_LIST =
        (SID_DESC =
          (ORACLE_HOME = /export/oracle/oracle_9.2/PROD1)
          (SID_NAME = PROD1)
        )
      )
    Any input would be appreciated. I can provide whatever else you might need. I thought this was the most pertinent to this problem.
    Thanks.
    Jim

    I have been looking at the sqlnet.ora files for the two databases
    for WS001:
    Names.Default_Domain = WS001.example.com
    Names.Directory_Path = (TNSNAMES, ONAMES,HOSTNAME)
    for WS002:
    Names.Default_Domain = WS002.example.com
    Names.Directory_Path = (TNSNAMES, ONAMES,HOSTNAME)
    I cannot connect with sqlplus from WS001 to WS002 on the other machine.
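    A minimal connectivity check from WS001, as a sketch (the password below is a placeholder): note that with Names.Default_Domain set in sqlnet.ora, an unqualified alias such as PROD1_node2 has that domain appended during lookup, so the alias may need to be fully qualified or the default domain commented out.
    $ tnsping PROD1_node2
    $ sqlplus 'sys/<password>@PROD1_node2 as sysdba'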

  • Dataguard Issue

    Hello Everyone,
    I am facing an issue with a Data Guard setup. Following is the description:
    Purpose:
    Set up Data Guard between the production and DR (physical standby) databases.
    Problem Statement:
    When network connectivity is interrupted between the primary database and the physical standby database, the primary database is unable to respond to the application servers. This issue occurs while log shipment is running. However, if Data Guard log shipment is stopped, the production database/system works fine even when the connectivity between the primary and the physical standby is interrupted.
    The standby database is configured in maximum performance mode.
    Environment:
    Database Software Primary and Standby Server – Oracle10g Enterprise with Partition option, 64 bit, Version – 10.2.0.4
    Primary Database server is configured with Two Sun M5000 nodes in OS cluster environment, Active and Passive Mode, Sun Cluster Suite 3.2 and OS Solaris 10
    Standby Database Server is configured, Server – V890, OS Solaris 10
    Multiple Java-based applications connect to the primary database using a JDBC type 4 driver to process requests.
    Two independent IPMP groups are configured on the primary database server, one for the application network and a second for the Data Guard network.
    The application network is configured on a dedicated switch, and the Data Guard network is connected to a different switch.
    A single listener is configured on the physical IP, and the application connects to the database through a virtual IP dynamically assigned by the cluster service.
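    One thing worth verifying, as a sketch only: if LOG_ARCHIVE_DEST_2 is effectively synchronous, the primary's LGWR can end up waiting on the standby when the network drops. An explicitly asynchronous, non-blocking setting (service name stndby is taken from the parameter listing further down; the REOPEN value is just an example) would look like:
    SQL> alter system set log_archive_dest_2='SERVICE=stndby LGWR ASYNC NOAFFIRM REOPEN=60' scope=both;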

    SQL> SELECT PROTECTION_MODE, PROTECTION_LEVEL, DATABASE_ROLE FROM V$DATABASE;
    PROTECTION_MODE PROTECTION_LEVEL DATABASE_ROLE
    MAXIMUM PERFORMANCE MAXIMUM PERFORMANCE PRIMARY
    SQL> SELECT PROTECTION_MODE, PROTECTION_LEVEL, DATABASE_ROLE FROM V$DATABASE;
    PROTECTION_MODE PROTECTION_LEVEL DATABASE_ROLE
    MAXIMUM PERFORMANCE MAXIMUM PERFORMANCE PHYSICAL STANDBY
    And this is the Alert log file snapshot and all other necessary information.
    Errors in file /oracle/admin/prtp/udump/prtp_rfs_3634.trc:
    ORA-16009: remote archive log destination must be a STANDBY database
    Sat Aug 28 00:01:49 2010
    Errors in file /oracle/admin/prtp/bdump/prtp_arc0_29429.trc:
    ORA-16009: remote archive log destination must be a STANDBY database
    Sat Aug 28 00:01:49 2010
    FAL[server, ARC0]: Error 16009 creating remote archivelog file 'prtp'
    FAL[server, ARC0]: FAL archive failed, see trace file.
    Sat Aug 28 00:01:49 2010
    Errors in file /oracle/admin/prtp/bdump/prtp_arc0_29429.trc:
    ORA-16055: FAL request rejected
    ARCH: FAL archive failed. Archiver continuing
    Sat Aug 28 00:01:49 2010
    ORACLE Instance prtp - Archival Error. Archiver continuing.
    Sat Aug 28 00:01:49 2010
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[2]: Assigned to RFS process 3636
    RFS[2]: Not using real application clusters
    Sat Aug 28 00:01:49 2010
    Errors in file /oracle/admin/prtp/udump/prtp_rfs_3636.trc:
    ORA-16009: remote archive log destination must be a STANDBY database
    Sat Aug 28 01:29:41 2010
    Thread 1 advanced to log sequence 24582 (LGWR switch)
    Current log# 6 seq# 24582 mem# 0: /oradata1/prtp/redo-log/redo06_1.log
    Current log# 6 seq# 24582 mem# 1: /oradata2/prtp/redo-log/redo06_2.log
    LGWR: Standby redo logfile selected for thread 1 sequence 24583 for destination LOG_ARCHIVE_DEST_2
    Sat Aug 28 01:29:42 2010
    Thread 1 advanced to log sequence 24583 (LGWR switch)
    Current log# 7 seq# 24583 mem# 0: /oradata1/prtp/redo-log/redo07_1.log
    Current log# 7 seq# 24583 mem# 1: /oradata2/prtp/redo-log/redo07_2.log
    Sat Aug 28 01:44:38 2010
    LGWR: Standby redo logfile selected for thread 1 sequence 24584 for destination LOG_ARCHIVE_DEST_2
    Sat Aug 28 01:44:38 2010
    Thread 1 advanced to log sequence 24584 (LGWR switch)
    Current log# 8 seq# 24584 mem# 0: /oradata1/prtp/redo-log/redo08_1.log
    Current log# 8 seq# 24584 mem# 1: /oradata2/prtp/redo-log/redo08_2.log
    Sat Aug 28 01:59:39 2010
    LGWR: Standby redo logfile selected for thread 1 sequence 24585 for destination LOG_ARCHIVE_DEST_2
    Sat Aug 28 01:59:39 2010
    Thread 1 advanced to log sequence 24585 (LGWR switch)
    Current log# 1 seq# 24585 mem# 0: /oradata1/prtp/redo-log/redo01_1.log
    Current log# 1 seq# 24585 mem# 1: /oradata2/prtp/redo-log/redo01_2.log
    Sat Aug 28 02:14:38 2010
    LGWR: Standby redo logfile selected for thread 1 sequence 24586 for destination LOG_ARCHIVE_DEST_2
    Sat Aug 28 02:14:38 2010
    Thread 1 advanced to log sequence 24586 (LGWR switch)
    Current log# 2 seq# 24586 mem# 0: /oradata1/prtp/redo-log/redo02_1.log
    Current log# 2 seq# 24586 mem# 1: /oradata2/prtp/redo-log/redo02_2.log
    Sat Aug 28 02:29:39 2010
    LGWR: Standby redo logfile selected for thread 1 sequence 24587 for destination LOG_ARCHIVE_DEST_2
    Sat Aug 28 02:29:39 2010
    Thread 1 advanced to log sequence 24587 (LGWR switch)
    Current log# 3 seq# 24587 mem# 0: /oradata1/prtp/redo-log/redo03_1.log
    Current log# 3 seq# 24587 mem# 1: /oradata2/prtp/redo-log/redo03_2.log
    Sat Aug 28 02:44:38 2010
    LGWR: Standby redo logfile selected for thread 1 sequence 24588 for destination LOG_ARCHIVE_DEST_2
    Errors in file /oracle/admin/prtp/udump/prtp_rfs_9611.trc:
    ORA-16009: remote archive log destination must be a STANDBY database
    Sat Aug 28 01:27:56 2010
    Errors in file /oracle/admin/prtp/bdump/prtp_arc0_29429.trc:
    ORA-16009: remote archive log destination must be a STANDBY database
    Sat Aug 28 01:27:56 2010
    FAL[server, ARC0]: Error 16009 creating remote archivelog file 'prtp'
    FAL[server, ARC0]: FAL archive failed, see trace file.
    Sat Aug 28 01:27:56 2010
    Errors in file /oracle/admin/prtp/bdump/prtp_arc0_29429.trc:
    ORA-16055: FAL request rejected
    ARCH: FAL archive failed. Archiver continuing
    Sat Aug 28 01:27:56 2010
    ORACLE Instance prtp - Archival Error. Archiver continuing.
    Sat Aug 28 01:27:56 2010
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[18]: Assigned to RFS process 9613
    RFS[18]: Not using real application clusters
    Sat Aug 28 01:27:56 2010
    Errors in file /oracle/admin/prtp/udump/prtp_rfs_9613.trc:
    ORA-16009: remote archive log destination must be a STANDBY database
    Sat Aug 28 01:27:56 2010
    Errors in file /oracle/admin/prtp/bdump/prtp_arc0_29429.trc:
    ORA-16009: remote archive log destination must be a STANDBY database
    Sat Aug 28 01:27:56 2010
    FAL[server, ARC0]: Error 16009 creating remote archivelog file 'prtp'
    FAL[server, ARC0]: FAL archive failed, see trace file.
    Sat Aug 28 01:27:56 2010
    Errors in file /oracle/admin/prtp/bdump/prtp_arc0_29429.trc:
    ORA-16055: FAL request rejected
    ARCH: FAL archive failed. Archiver continuing
    Sat Aug 28 01:27:56 2010
    ORACLE Instance prtp - Archival Error. Archiver continuing.
    Sat Aug 28 01:29:39 2010
    Thread 1 cannot allocate new log, sequence 24581
    Private strand flush not complete
    Current log# 4 seq# 24580 mem# 0: /oradata1/prtp/redo-log/redo04_1.log
    Current log# 4 seq# 24580 mem# 1: /oradata2/prtp/redo-log/redo04_2.log
    NAME TYPE VALUE
    O7_DICTIONARY_ACCESSIBILITY boolean FALSE
    active_instance_count integer
    aq_tm_processes integer 1
    archive_lag_target integer 900
    asm_diskgroups string
    asm_diskstring string
    asm_power_limit integer 1
    audit_file_dest string /oracle/ora10g/rdbms/audit
    audit_sys_operations boolean FALSE
    audit_syslog_level string
    audit_trail string NONE
    background_core_dump string partial
    background_dump_dest string /oracle/admin/prtp/bdump
    backup_tape_io_slaves boolean FALSE
    bitmap_merge_area_size integer 1048576
    blank_trimming boolean FALSE
    buffer_pool_keep string
    buffer_pool_recycle string
    circuits integer
    cluster_database boolean FALSE
    cluster_database_instances integer 1
    cluster_interconnects string
    commit_point_strength integer 1
    commit_write string
    compatible string 10.2.0
    control_file_record_keep_time integer 7
    control_files string /oradata1/prtp/control/control
    01.ctl, /oradata2/prtp/control
    /control02.ctl, /oradata3/prtp
    /control/control03.ctl
    core_dump_dest string /oracle/admin/prtp/cdump
    cpu_count integer 48
    create_bitmap_area_size integer 8388608
    create_stored_outlines string
    cursor_sharing string FORCE
    cursor_space_for_time boolean TRUE
    db_16k_cache_size big integer 0
    db_2k_cache_size big integer 0
    db_32k_cache_size big integer 0
    db_4k_cache_size big integer 0
    db_8k_cache_size big integer 0
    db_block_buffers integer 0
    db_block_checking string FALSE
    db_block_checksum string TRUE
    db_block_size integer 8192
    db_cache_advice string ON
    db_cache_size big integer 6G
    db_create_file_dest string
    db_create_online_log_dest_1 string
    db_create_online_log_dest_2 string
    db_create_online_log_dest_3 string
    db_create_online_log_dest_4 string
    db_create_online_log_dest_5 string
    db_domain string
    db_file_multiblock_read_count integer 16
    db_file_name_convert string
    db_files integer 200
    db_flashback_retention_target integer 0
    db_keep_cache_size big integer 0
    db_name string prtp
    db_recovery_file_dest string
    db_recovery_file_dest_size big integer 0
    db_recycle_cache_size big integer 0
    db_unique_name string prtp
    db_writer_processes integer 6
    dbwr_io_slaves integer 0
    ddl_wait_for_locks boolean FALSE
    dg_broker_config_file1 string /oracle/ora10g/dbs/dr1prtp.dat
    dg_broker_config_file2 string /oracle/ora10g/dbs/dr2prtp.dat
    dg_broker_start boolean FALSE
    disk_asynch_io boolean TRUE
    dispatchers string
    distributed_lock_timeout integer 60
    dml_locks integer 19380
    drs_start boolean FALSE
    event string 10511 trace name context forev
    er, level 2
    fal_client string prtp
    fal_server string stndby
    fast_start_io_target integer 0
    fast_start_mttr_target integer 600
    fast_start_parallel_rollback string LOW
    file_mapping boolean FALSE
    fileio_network_adapters string
    filesystemio_options string asynch
    fixed_date string
    gc_files_to_locks string
    gcs_server_processes integer 0
    global_context_pool_size string
    global_names boolean FALSE
    hash_area_size integer 131072
    hi_shared_memory_address integer 0
    hs_autoregister boolean TRUE
    ifile file
    instance_groups string
    instance_name string prtp
    instance_number integer 0
    instance_type string RDBMS
    java_max_sessionspace_size integer 0
    java_pool_size big integer 160M
    java_soft_sessionspace_limit integer 0
    job_queue_processes integer 10
    large_pool_size big integer 560M
    ldap_directory_access string NONE
    license_max_sessions integer 0
    license_max_users integer 0
    license_sessions_warning integer 0
    local_listener string
    lock_name_space string
    lock_sga boolean FALSE
    log_archive_config string
    log_archive_dest string
    log_archive_dest_1 string location=/archive/archive-log/
    MANDATORY
    log_archive_dest_10 string
    log_archive_dest_2 string service=stndby LGWR
    log_archive_dest_3 string
    log_archive_dest_4 string
    log_archive_dest_5 string
    log_archive_dest_6 string
    log_archive_dest_7 string
    log_archive_dest_8 string
    log_archive_dest_9 string
    log_archive_dest_state_1 string enable
    log_archive_dest_state_10 string enable
    log_archive_dest_state_2 string ENABLE
    log_archive_dest_state_3 string enable
    log_archive_dest_state_4 string enable
    log_archive_dest_state_5 string enable
    log_archive_dest_state_6 string enable
    log_archive_dest_state_7 string enable
    log_archive_dest_state_8 string enable
    log_archive_dest_state_9 string enable
    log_archive_duplex_dest string
    log_archive_format string arc_%t_%s_%r.arc
    log_archive_local_first boolean TRUE
    log_archive_max_processes integer 2
    log_archive_min_succeed_dest integer 1
    log_archive_start boolean FALSE
    log_archive_trace integer 0
    log_buffer integer 20971520
    log_checkpoint_interval integer 0
    log_checkpoint_timeout integer 1800
    log_checkpoints_to_alert boolean FALSE
    log_file_name_convert string
    logmnr_max_persistent_sessions integer 1
    max_commit_propagation_delay integer 0
    max_dispatchers integer
    max_dump_file_size string UNLIMITED
    max_enabled_roles integer 150
    max_shared_servers integer
    nls_calendar string
    nls_comp string
    nls_currency string
    nls_date_format string
    nls_date_language string
    nls_dual_currency string
    nls_iso_currency string
    nls_language string AMERICAN
    nls_length_semantics string BYTE
    nls_nchar_conv_excp string FALSE
    nls_numeric_characters string
    nls_sort string
    nls_territory string AMERICA
    nls_time_format string
    nls_time_tz_format string
    nls_timestamp_format string
    nls_timestamp_tz_format string
    object_cache_max_size_percent integer 10
    object_cache_optimal_size integer 102400
    olap_page_pool_size big integer 0
    open_cursors integer 4500
    open_links integer 30
    open_links_per_instance integer 30
    optimizer_dynamic_sampling integer 2
    optimizer_features_enable string 10.2.0.4
    optimizer_index_caching integer 0
    optimizer_index_cost_adj integer 100
    optimizer_mode string ALL_ROWS
    optimizer_secure_view_merging boolean TRUE
    os_authent_prefix string ops$
    os_roles boolean FALSE
    parallel_adaptive_multi_user boolean TRUE
    parallel_automatic_tuning boolean FALSE
    parallel_execution_message_size integer 2152
    parallel_instance_group string
    parallel_max_servers integer 960
    parallel_min_percent integer 0
    parallel_min_servers integer 0
    parallel_server boolean FALSE
    parallel_server_instances integer 1
    parallel_threads_per_cpu integer 2
    pga_aggregate_target big integer 3G
    plsql_ccflags string
    plsql_code_type string INTERPRETED
    plsql_compiler_flags string INTERPRETED, NON_DEBUG
    plsql_debug boolean FALSE
    NAME TYPE VALUE
    plsql_native_library_dir string
    plsql_native_library_subdir_count integer 0
    plsql_optimize_level integer 2
    plsql_v2_compatibility boolean FALSE
    plsql_warnings string DISABLE:ALL
    pre_11g_enable_capture boolean FALSE
    pre_page_sga boolean FALSE
    processes integer 4000
    query_rewrite_enabled string TRUE
    query_rewrite_integrity string enforced
    rdbms_server_dn string
    read_only_open_delayed boolean FALSE
    recovery_parallelism integer 0
    recyclebin string OFF
    remote_archive_enable string true
    remote_dependencies_mode string TIMESTAMP
    remote_listener string
    remote_login_passwordfile string EXCLUSIVE
    remote_os_authent boolean FALSE
    remote_os_roles boolean FALSE
    replication_dependency_tracking boolean TRUE
    resource_limit boolean TRUE
    resource_manager_plan string
    resumable_timeout integer 0
    rollback_segments string
    serial_reuse string disable
    service_names string prtp
    session_cached_cursors integer 0
    session_max_open_files integer 10
    sessions integer 4405
    sga_max_size big integer 20G
    sga_target big integer 20G
    shadow_core_dump string partial
    shared_memory_address integer 0
    shared_pool_reserved_size big integer 214748364
    shared_pool_size big integer 4G
    shared_server_sessions integer
    shared_servers integer 0
    skip_unusable_indexes boolean TRUE
    smtp_out_server string smtp.banglalinkgsm.com
    sort_area_retained_size integer 0
    sort_area_size integer 65536
    spfile string /oradata1/prtp/pfile/spfileprt
    p.ora
    sql92_security boolean FALSE
    sql_trace boolean FALSE
    sql_version string NATIVE
    sqltune_category string DEFAULT
    standby_archive_dest string ?/dbs/arch
    standby_file_management string AUTO
    star_transformation_enabled string FALSE
    statistics_level string TYPICAL
    streams_pool_size big integer 0
    tape_asynch_io boolean TRUE
    thread integer 0
    timed_os_statistics integer 0
    timed_statistics boolean TRUE
    trace_enabled boolean TRUE
    tracefile_identifier string
    transactions integer 4845
    transactions_per_rollback_segment integer 5
    undo_management string AUTO
    undo_retention integer 15000
    undo_tablespace string UNDOTBS
    use_indirect_data_buffers boolean FALSE
    user_dump_dest string /oracle/admin/prtp/udump
    utl_file_dir string
    workarea_size_policy string AUTO
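    For the recurring ORA-16009 above, a couple of sanity checks (a sketch; service and view names are standard, values come from this listing) would be to confirm where the stndby service actually resolves and what the destination reports:
    $ tnsping stndby
    SQL> select dest_id, status, error from v$archive_dest where dest_id = 2;
    SQL> select database_role, db_unique_name from v$database;
    Since the error text says the remote destination must be a STANDBY database, it is also worth confirming that the stndby alias does not resolve back to the primary itself and that the receiving database is mounted as a physical standby.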

  • Next log sequence to archive in Standby Database (RAC Dataguard Issue)

    Hi All,
    I just implemented Data Guard on our server. My primary database is RAC-configured, but it is only a single node; the other instance was removed and converted to a development instance. The reason I kept the primary as RAC is that when I implement Data Guard for real, my primary database will be a RAC with 7 nodes.
    The first test was successful, and I was able to switch over from my primary to the standby. I failed in the failover test.
    I restored my primary server and redid the setup.
    BTW, my standby DB is a physical standby.
    When I try to switch over again and issue 'archive log list', below is my output.
    SQL> archive log list;
    Database log mode Archive Mode
    Automatic archival Enabled
    Archive destination USE_DB_RECOVERY_FILE_DEST
    Oldest online log sequence 38
    Next log sequence to archive 0
    Current log sequence 38
    SQL> select open_mode, database_role from v$database;
    OPEN_MODE DATABASE_ROLE
    MOUNTED PHYSICAL STANDBY
    ===============================================
    SQL> archive log list;
    Database log mode Archive Mode
    Automatic archival Enabled
    Archive destination USE_DB_RECOVERY_FILE_DEST
    Oldest online log sequence 38
    Next log sequence to archive 38
    Current log sequence 38
    SQL> select open_mode, database_role from v$database;
    OPEN_MODE DATABASE_ROLE
    READ WRITE PRIMARY
    In my first switchover attempt, before I failed the failover test, I also issued 'archive log list' on both the primary and the standby database, and if I remember right, the next log sequence on both should be identical. Am I right on this?
    Thanks in Advance.
    Jay A

    Or am I just overthinking this?
    Is Data Guard only looking at the current and oldest log sequence?
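    As a side note (a sketch, not from the original thread): rather than relying on 'archive log list' on a mounted standby, the apply progress can be checked with queries like the following on the standby:
    SQL> select process, status, sequence# from v$managed_standby;
    SQL> select max(sequence#) from v$archived_log where applied = 'YES';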

  • Oracle Dataguard issue

    Hi
    Could you please help me on this...
    We started an Oracle Data Guard setup for a production database where the primary is located in the Europe region and the standby is located in Latin America.
    We configured Data Guard parameters on the primary and performed a cold backup of the DB, which is nearly one terabyte in size. At the same time we kept the standby server ready.
    But it took 15 days for the cold backup to reach the standby location and get restored. In these 15 days we manually shipped to the standby all the archive logs generated on the primary.
    After the restore on the standby server, today I created a standby control file on the primary, transferred it to the standby using scp, and then copied it to the standby control file locations. As per the standby database setup procedure I performed all the steps.
    I am able to mount the database in standby mode. After that I issued the command "RECOVER STANDBY DATABASE;" to apply all the logs that were shipped manually in these 15 days. I am getting the error given below:
    Physical Standby Database mounted.
    Completed: alter database mount standby database
    Mon Jun 28 07:53:33 2010
    Starting Data Guard Broker (DMON)
    INSV started with pid=22, OS id=1246
    Mon Jun 28 07:54:35 2010
    ALTER DATABASE RECOVER standby database
    Mon Jun 28 07:54:35 2010
    Media Recovery Start
    Managed Standby Recovery not using Real Time Apply
    Mon Jun 28 07:54:35 2010
    Errors in file /oracle/P19/saptrace/background/p19_dbw0_1173.trc:
    ORA-01157: cannot identify/lock data file 322 - see DBWR trace file
    ORA-01110: data file 322: '/oracle/P19/sapdata1/sr3_289/sr3.data289'
    ORA-27037: unable to obtain file status
    HPUX-ia64 Error: 2: No such file or directory
    Additional information: 3
    Mon Jun 28 07:54:35 2010
    Errors in file /oracle/P19/saptrace/background/p19_dbw0_1173.trc:
    ORA-01157: cannot identify/lock data file 323 - see DBWR trace file
    ORA-01110: data file 323: '/oracle/P19/sapdata1/sr3_290/sr3.data290'
    ORA-27037: unable to obtain file status
    HPUX-ia64 Error: 2: No such file or directory
    Additional information: 3
    Mon Jun 28 07:54:35 2010
    Errors in file /oracle/P19/saptrace/background/p19_dbw0_1173.trc:
    ORA-01157: cannot identify/lock data file 324 - see DBWR trace file
    ORA-01110: data file 324: '/oracle/P19/sapdata2/sr3_291/sr3.data291'
    ORA-27037: unable to obtain file status
    HPUX-ia64 Error: 2: No such file or directory
    Additional information: 3
    Mon Jun 28 07:54:35 2010
    Errors in file /oracle/P19/saptrace/background/p19_dbw0_1173.trc:
    ORA-01157: cannot identify/lock data file 325 - see DBWR trace file
    ORA-01110: data file 325: '/oracle/P19/sapdata3/sr3_292/sr3.data292'
    ORA-27037: unable to obtain file status
    HPUX-ia64 Error: 2: No such file or directory
    The above datafiles were added after the cold backup on the primary. I am getting this error because of the latest standby controlfile used on the standby, so I used the commands below on the standby database:
    alter database datafile '/oracle/P19/sapdata1/sr3_289/sr3.data289' offline drop;
    alter database datafile '/oracle/P19/sapdata1/sr3_290/sr3.data290' offline drop;
    alter database datafile '/oracle/P19/sapdata2/sr3_291/sr3.data291' offline drop;
    alter database datafile '/oracle/P19/sapdata3/sr3_292/sr3.data292' offline drop;
    and then recovery started applying the logs. Please find the details from the alert log file below:
    Media Recovery Log /oracle/P19/oraarch/bkp_arch_dest/P19arch1_401780_624244340.dbf
    Mon Jun 28 08:37:22 2010
    ORA-279 signalled during: ALTER DATABASE RECOVER CONTINUE DEFAULT ...
    Mon Jun 28 08:37:22 2010
    ALTER DATABASE RECOVER CONTINUE DEFAULT
    Mon Jun 28 08:37:22 2010
    Media Recovery Log /oracle/P19/oraarch/bkp_arch_dest/P19arch1_401781_624244340.dbf
    Mon Jun 28 08:38:02 2010
    ORA-279 signalled during: ALTER DATABASE RECOVER CONTINUE DEFAULT ...
    Mon Jun 28 08:38:02 2010
    ALTER DATABASE RECOVER CONTINUE DEFAULT
    Mon Jun 28 08:38:02 2010
    Media Recovery Log /oracle/P19/oraarch/bkp_arch_dest/P19arch1_401782_624244340.dbf
    Mon Jun 28 08:38:32 2010
    ORA-279 signalled during: ALTER DATABASE RECOVER CONTINUE DEFAULT ...
    Mon Jun 28 08:38:32 2010
    ALTER DATABASE RECOVER CONTINUE DEFAULT
    Mon Jun 28 08:38:32 2010
    Media Recovery Log /oracle/P19/oraarch/bkp_arch_dest/P19arch1_401783_624244340.dbf
    Mon Jun 28 08:39:05 2010
    ORA-279 signalled during: ALTER DATABASE RECOVER CONTINUE DEFAULT ...
    Mon Jun 28 08:39:05 2010
    ALTER DATABASE RECOVER CONTINUE DEFAULT
    After the manually shipped logs finished applying, I started the Data Guard setup. Logs are now shipping and applying perfectly.
    Media Recovery Waiting for thread 1 sequence 407421
    Fetching gap sequence in thread 1, gap sequence 407421-407506
    Thu Jul 1 00:26:41 2010
    RFS[2]: Archived Log: '/oracle/P19/oraarch/P19arch1_407529_624244340.dbf'
    Thu Jul 1 00:26:49 2010
    RFS[3]: Archived Log: '/oracle/P19/oraarch/P19arch1_407530_624244340.dbf'
    Thu Jul 1 00:27:17 2010
    RFS[1]: Archived Log: '/oracle/P19/oraarch/P19arch1_407531_624244340.dbf'
    Thu Jul 1 00:28:41 2010
    RFS[2]: Archived Log: '/oracle/P19/oraarch/P19arch1_407532_624244340.dbf'
    Thu Jul 1 00:29:14 2010
    RFS[3]: Archived Log: '/oracle/P19/oraarch/P19arch1_407421_624244340.dbf'
    Thu Jul 1 00:29:19 2010
    Media Recovery Log /oracle/P19/oraarch/P19arch1_407421_624244340.dbf
    Thu Jul 1 00:29:24 2010
    RFS[1]: Archived Log: '/oracle/P19/oraarch/P19arch1_407422_624244340.dbf'
    Thu Jul 1 00:29:51 2010
    Media Recovery Log /oracle/P19/oraarch/P19arch1_407422_624244340.dbf
    But the files above are showing RECOVER as their status. Could you please tell me how to go ahead with this?
    NAME                                        STATUS
    /oracle/P19/sapdata1/sr3_289/sr3.data289    RECOVER
    /oracle/P19/sapdata1/sr3_290/sr3.data290    RECOVER
    /oracle/P19/sapdata2/sr3_291/sr3.data291    RECOVER
    /oracle/P19/sapdata3/sr3_292/sr3.data292    RECOVER
    Can I recover these files in standby mount mode? Is there any other solution? All archive logs are applied, and log shipping and applying is ongoing.
    Thank You....
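    A small verification sketch for the question above, using standard views (nothing here is specific to this system): on the standby, the files still needing recovery and the reason can be listed with
    SQL> select file#, error, change#, time from v$recover_file;
    SQL> select file#, name, status from v$datafile where status = 'RECOVER';
    Datafiles that were added on the primary after the cold backup and exist only in the standby controlfile generally need to be restored or re-created on the standby before managed recovery can bring them current.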

    Try this out:
    1. On the primary server issue this command:
    SQL> alter database backup controlfile to trace;
    2. Go to your udump directory and look for the trace file that has been generated by this command.
    3. This file will contain the CREATE CONTROLFILE statement. It will have two sets of statements, one with RESETLOGS and another without RESETLOGS. Use the RESETLOGS option of the CREATE CONTROLFILE statement. Copy and paste the statement into a file; let it be c.sql.
    4. Now open the file c.sql in a text editor and set the database name from [example: ica] to [example: prod], as shown in the example below:
    CREATE CONTROLFILE
       SET DATABASE prod   
       LOGFILE GROUP 1 ('/u01/oracle/ica/redo01_01.log',
                        '/u01/oracle/ica/redo01_02.log'),
               GROUP 2 ('/u01/oracle/ica/redo02_01.log',
                        '/u01/oracle/ica/redo02_02.log'),
               GROUP 3 ('/u01/oracle/ica/redo03_01.log',
                        '/u01/oracle/ica/redo03_02.log')
       RESETLOGS
       DATAFILE '/u01/oracle/ica/system01.dbf' SIZE 3M,
                '/u01/oracle/ica/rbs01.dbs' SIZE 5M,
                '/u01/oracle/ica/users01.dbs' SIZE 5M,
                '/u01/oracle/ica/temp01.dbs' SIZE 5M
       MAXLOGFILES 50
       MAXLOGMEMBERS 3
       MAXLOGHISTORY 400
       MAXDATAFILES 200
       MAXINSTANCES 6
       ARCHIVELOG;
    5. Start the database in NOMOUNT state:
    SQL> startup nomount;
    6. Now execute the script to create a new control file:
    SQL> @/u01/oracle/c.sql
    7. Now open the database:
    SQL> alter database open resetlogs;
    IMPORTANT NOTE: Before implementing this suggested solution, try it out on your laptop or PC if possible.

  • Physical keyboard issues

    I have had my D2 replaced 4 times since the GB update. Not because of the software. It has been replaced due to the physical keyboard tearing, peeling, and bubbling. Certified Preowned. I am tired of having the phones replaced. 

        Hi xerinx!
    I hate to learn of an issue with using the keyboard on your Droid 3! I have some troubleshooting steps for you. Please use the following steps:
    Menu > Settings > Applications > Manage Applications > "All" tab. Then, scroll down to Multi touch keyboard and push "Force Stop" and "Clear Cache." Once these steps are completed, re-test and post back with your results. Thanks!
    Christina B
    VZW Support
    Follow us on Twitter @VZWSupport

  • Physical Filename Issue ( got past this ) and New Invalid Guideline Issue

    I had a problem ( from before ) getting a file called --> test_oracle.edi into the system. I am using a set up of EDI X12 over Generic Exchange together with FILE 1.0 protocol.
    The file has a sender of ACME and a receiver of GLOBALCHIPS, which appear to be parsed correctly according to the b2b.log file; however, the system had problems finding the TPA, since at this point its FROM PARTY was set to "test".
    I renamed the file from above to --> ACME_oracle.edi, and I now got past this issue, but now have a new --> invalid guideline problem, as seen in the b2b.log file.
    Note that a filename of --> ACME.edi also fails.
    Can someone tell me if there is a restriction on filenames and secondly can someone help me out on this --> "invalid guideline" issue.
    I have emailed SANKAR my EDI file together with the b2b.log file.
    I need to get past this stage, and into SOA.
    Thanks as always.
    Arthur (203-921-5925)

    Arthur,
    Some findings that I can share with you on this-
    When using Generic Protocols like File and FTP you may want to set the following property in tip.properties file
    oracle.tip.adapter.b2b.allTPInOneDirectory - if set to true, it identifies the trading partner based on the file name; if false, it identifies trading partners based on the folder name.
    The logic behind it was explained well by Ramesh in an old post; you could probably dig it up on the forum.
    But it's surprising that you are getting an invalid guideline error; ideally it should throw a "trading partner agreement not found" error. You might also want to look at this thread - Re: EDI over FTP
    To confirm the issue, please rename your sample file using this naming convention: <TPNAME>_<DOCTYPE>_<DOCREVISION>.dat. If this error is because of a file naming issue, it should get resolved by using this convention.
    A technote from Oracle (B2B_TN_010_Transport_File_FTP_Internal.pdf) is available on OTN, which details the file naming conventions.
    Hope this helps,
    Shailesh
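    A small illustration of the two items mentioned above (the document type and revision in the sample file name are hypothetical, not values from this thread):
    # tip.properties
    oracle.tip.adapter.b2b.allTPInOneDirectory=true
    # sample inbound file following <TPNAME>_<DOCTYPE>_<DOCREVISION>.dat
    ACME_850_4010.dat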

  • Re: View physical query issued by Obiee 11g

    Guys,
    I want to view the physical query generated by the report in OBIEE 11g, but in the logs I can only see these queries (just writing the first lines to illustrate):
    SELECT
    0 s_0,
    "Marketing Contact List"."- Account Address"."Billing Abode Name" s_1,
    And this query:
    RqList distinct
    0 as c1 GB,
    Account.Billing Abode Name as c2 GB,
    Both these queries are NOT the physical ones.
    I have the appropriate log level, and even set it manually from the front end, but I'm unable to view the physical query. Please help!

    A.Budd,
    After running a report in OBIEE, click the Administration link in the top right corner. A new window appears; select "Manage Sessions". Look for the entry whose execution time matches when you ran your report and click View Log. Scroll almost to the bottom of the log.
    What you are looking for is the generated logic that starts with "WITH":
    WITH
    SAWITH0 AS (select sum(T443244.AGING030) as c1,
    sum(T443244.AGING6190) as c2,
    sum(case when T443244.CURRENCYCODE = 'C
    Copy the whole logic from WITH to the end (it typically ends with ORDER BY), not the whole log, and paste it into your query tool. This is based on the physical layer.
    Antexity
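    An additional, hedged suggestion that is not from the reply above: if the physical SQL still does not appear in the session log, one option is to raise the log level for a single request by adding a prefix on the report's Advanced tab, for example:
    SET VARIABLE LOGLEVEL=7;
    With a higher log level, the section of the log that sends the query to the physical data source should contain the physical SQL.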

  • Consistent gets/physical reads issue

    Dear all,
    There is no DML on table tb_hxl_user.
    First Select:
    SQL> select count(1) from tb_hxl_user;
      COUNT(1)                                                                     
    286435658                                                                     
    Elapsed: 00:04:03.67                                                       
    Statistics
            357  recursive calls                                                   
              0  db block gets                                                     
        2106478  consistent gets                                                   
        2106316  physical reads                                                    
              0  redo size                                                         
            422  bytes sent via SQL*Net to client                                  
            416  bytes received via SQL*Net from client                            
              2  SQL*Net roundtrips to/from client                                 
              6  sorts (memory)                                                    
              0  sorts (disk)                                                      
              1  rows processed
    Second Select:
    SQL> select count(1) from tb_hxl_user;
      COUNT(1)                                                                     
    286435658                                                                     
    Elapsed: 00:03:02.29
    Statistics
              0  recursive calls                                                   
              0  db block gets                                                     
        2106395  consistent gets                                                   
        2106273  physical reads                                                    
              0  redo size                                                         
            422  bytes sent via SQL*Net to client                                  
            416  bytes received via SQL*Net from client                            
              2  SQL*Net roundtrips to/from client                                 
              0  sorts (memory)                                                    
              0  sorts (disk)                                                      
              1  rows processed
    After the first select, I know that all the blocks have been read into the SGA,
    but the second select still generated 2,106,395 consistent gets and 2,106,273 physical reads.
    Why?

    For what exactly a consistent get is, read the link below:
    http://jonathanlewis.wordpress.com/2009/06/12/consistent-gets-2/
    which is similar to the docs in clarity and authenticity.
    For what exactly a physical read is, read the link below:
    http://download.oracle.com/docs/cd/B16240_01/doc/doc.102/e16282/oracle_database_help/oracle_database_instance_throughput_physreads_ps.html
    Regards
    Girish Sharma
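    As a rough way to check whether caching can even be expected here (a sketch; the object name comes from the thread, the 8 KB block size is an assumption): about 2.1 million blocks at an 8 KB block size is roughly 16 GB of table data, so unless the buffer cache is at least that large, a second full scan will still read most blocks from disk. The sizes can be compared with:
    SQL> select blocks from dba_tables where table_name = 'TB_HXL_USER';
    SQL> show parameter db_block_size
    SQL> show parameter db_cache_size
    On some versions, large full table scans may also use direct path reads and bypass the buffer cache entirely, which would likewise keep the physical reads high on every run.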

  • OCM RAC Physical dataguard setup

    Hi
    I am preparing for the OCM examination and have gone through the detailed course outline, but I am not clear on the following things.
    The OCM exam topics are very broad and the course outline does not cover them in detail.
    Suppose the examiner asks me to set up a RAC physical standby for a RAC primary database; this is going to take a lot of time, and without my toolkit it is going to take even longer.
    Currently the strategy I want to use for this kind of exam is to have a lot of things automated, including shell tricks like aliasing (for example, alias 'startup' so the database gets started, etc.). But I guess these tools would not be allowed in the OCM exam.
    I agree peeking into the documentation is allowed, but my college professor used to call these open-book exams, and many candidates who opened the book felt they never had time to complete the exam, so I am kind of skeptical on this.
    So do I use Oracle Enterprise Manager Grid Control to do things quickly?
    Should I practice on my home laptop with Enterprise Manager Grid Control (GC) to make things faster?
    If I need to install RAC and GC then I probably don't have a choice, but if the question is to set up Streams then GC is probably faster; so, confronted with a Streams environment, do I go ahead and use GC? Is that legal?
    Is that a good strategy?
    regards
    P.S.: Is it okay to ask such a tactical question on this forum?

    hrishy wrote:
    My curiosity about the OCM exam stems from the fact that setting up a RAC physical standby with a RAC primary would be a time-consuming task and may not be feasible within the given time frame if I am ever asked to do so in the exam.
    You can be pretty confident that the exam will provide adequate time to do everything that is needed - if you are comfortable with the process and the commands, based on experience.
    If you need to read (study) the provided documentation at exam time to find out basic command syntax or to figure out methods on the fly, then you will certainly run out of time. In other words, you should not ordinarily need to refer to the documentation by then, other than to verify something that you may have confused due to pressure/stress.
    Not sure what you are saying about pre-canned scripts. You can rest assured that you will not be allowed to
    I was not going to comment about the specific scenario you mention. However, I do reiterate that they do not expect OCMs to be superhuman - just very competent and experienced. And I do question where you get the "Data Guard on a RAC DB" connection? (Data Guard, yes ... but tied to RAC?)

  • Dataguard Issues

    OS :Windows 2003
    DB version:10.2.0.4.0- primary
    DB version:10.2.0.4.0- Physical stand by
    When I configure Data Guard, I am getting the error below.
    Can you please suggest what to do?
    DGMGRL> show database tan statusreport;
    STATUS REPORT
    INSTANCE_NAME SEVERITY ERROR_TEXT
    tan ERROR ORA-16737: the redo transport service for standby database "venus" has an error
    tan WARNING ORA-16714: the value of property StandbyFileManagement is inconsistent with the database setting
    tan WARNING ORA-16714: the value of property DbFileNameConvert is inconsistent with the database setting
    tan WARNING ORA-16714: the value of property LogFileNameConvert is inconsistent with the database setting
    DGMGRL> show database venus statusreport;
    STATUS REPORT
    INSTANCE_NAME SEVERITY ERROR_TEXT
    * ERROR ORA-16816: incorrect database role
    * ERROR ORA-16700: the standby database has diverged from the primary database
    venus WARNING ORA-16714: the value of property StandbyFileManagement is inconsistent with the database setting
    venus WARNING ORA-16714: the value of property ArchiveLagTarget is inconsistent with the database setting
    venus WARNING ORA-16714: the value of property LogArchiveMinSucceedDest is inconsistent with the database setting
    venus WARNING ORA-16714: the value of property StandbyArchiveLocation is inconsistent with the database setting
    venus WARNING ORA-16714: the value of property AlternateLocation is inconsistent with the database setting
    venus WARNING ORA-16714: the value of property LogArchiveTrace is inconsistent with the database setting
    venus WARNING ORA-16714: the value of property LogArchiveFormat is inconsistent with the database setting
    * ERROR ORA-16766: Redo Apply unexpectedly offline
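    For the ORA-16714 property warnings, a hedged sketch of how a single property is brought back in line from DGMGRL (the value 'AUTO' here is only an example, not taken from this configuration):
    DGMGRL> edit database 'venus' set property 'StandbyFileManagement'='AUTO';
    DGMGRL> show database venus statusreport;
    The ORA-16700 error, on the other hand, indicates (as its text says) that the standby has diverged from the primary, which typically means the standby needs to be re-synchronized (for example, re-created or flashed back) rather than just having properties edited.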


  • Physical inventory issue

    Hi,
    We have defined split Valuation category in Material Master, with two Valuation type.
    Now when I'm creating a physical inventory document, it's giving me an error - Batch for plant, storage location, material does not exist.
    Please let me know what exactly I need to do.
    Regards,
    Aditya

    Dear,
    Kindly go into transaction "MMSC".
    Here you first define, for your plant, which storage location you are performing the activity in.
    Check there whether a "1" indicator is assigned, because this will not allow you to perform physical inventory, or this stock won't be considered. Remove this "1" indicator.
    Then go into transaction "MSC1N" and enter your material number, plant, and storage location.
    This will get it solved; we have done this many times.
    regards,
    rewa

  • EWM: Annual Physical Inventory Issue

    Hello together!
    I'm looking for a neat (SAP-default) solution for the following problem:
    For an EWM warehouse in Turkey it's possible to do ad-hoc physical inventory for products during the year. Nevertheless, it's necessary to do a complete annual physical inventory due to legal regulations. But SAP by default gives the message "Storage bin XY was already inventoried in the physical inventory year" when trying to start the annual physical inventory.
    Is it possible to avoid this message and proceed with the annual physical inventory? Is it possible to delete the ad-hoc physical inventory results so the annual physical inventory can be started? I found transaction "/SCWM/PI_COMPL_DEL" to delete completeness data sets?!
    Thank you very much for your help!
    best regards
    Alexander

    Hi Alexander!
    When you create PI document in /SCWM/PI_DOC_CREATE set the checkbox "Include Inventoried Objects"
    BR, Alex

  • Dataguard issue:- URGENT!!!!!!!!!!!!!!!

    Dear all,
    Please help me in solving a Data Guard problem. The problem I am facing is that whenever an archive log is created it is first applied on the STANDBY system and then on the PRIMARY system, and if the STANDBY connection is lost the primary hangs.
    I am doing this Data Guard setup on Oracle 9i on HP-UX.
    If I set the value of log_archive_dest_2 to 'SERVICE=STANDBY OPTIONAL ARCH ASYNC NOAFFIRM REGISTER' then I get the error ORA-16025: parameter LOG_ARCHIVE_DEST_2 contains repeated or conflicting attributes.
    The protection mode is maximum performance.
    Can anyone help me out of this problem?
    The parameters i have used for primary and standby are given below.
    ## ADDED PARAMETERS FOR DATAGUARD on PRIMARY
    dg_broker_start=true
    standby_file_management='AUTO'
    log_archive_dest_state_1=ENABLE
    log_archive_dest_state_2=ENABLE
    log_archive_dest_1='LOCATION=/oracle/C30/saparch/C30arch MANDATORY'
    log_archive_dest_2='SERVICE=STANDBY ARCH NOAFFIRM'
    remote_archive_enable=true
    standby_archive_dest='/oracle/C30/saparch/C30arch'
    fal_server='SERVICE=STANDBY'
    fal_client='SERVICE=PRIMARY'
    ## ADDED PARAMETERS FOR DATAGUARD ON STANDBY
    dg_broker_start=true
    standby_file_management='AUTO'
    log_archive_dest_state_2='defer'
    log_archive_dest_1='LOCATION=/oracle/C30/saparch/C30arch'
    #log_archive_dest_2='SERVICE=PRIMARY OPTIONAL ARCH ASYNC NOAFFIRM REGISTER'
    standby_archive_dest='/oracle/C30/saparch/C30arch'
    fal_client='SERVICE=STANDBY'
    fal_server='SERVICE=PRIMARY'
    Please help me out!!!!!!!!!!!
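    A hedged note on the ORA-16025 above: ASYNC is a log-writer (LGWR) transport attribute, so combining it with ARCH is the kind of conflict the error flags. Keeping the other attributes from the post unchanged, a non-conflicting variant would look like one of the following (a sketch, not a verified configuration):
    log_archive_dest_2='SERVICE=STANDBY OPTIONAL ARCH NOAFFIRM REGISTER'
    log_archive_dest_2='SERVICE=STANDBY OPTIONAL LGWR ASYNC NOAFFIRM REGISTER'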

    Thanks, Kamaljeet, for the reply.
    The protection mode is MAXIMUM PERFORMANCE.
    Please help me sort out this problem.
