Data Guard with EBS R12.1.1
Hi,
We are planning to implement Data Guard with EBS R12.1.1.
What is the procedure?
Regards,
Muthu
806099 wrote:
Hi,
We are planning to implement Data Guard with EBS R12.1.1.
What is the procedure?
Business Continuity for Oracle E-Business Release 12 Using Oracle 11g Physical Standby Database [ID 1070033.1]
Using Active Data Guard Reporting with Oracle E-Business Suite Release 12.1 and Oracle Database 11g [ID 1070491.1]
Business Continuity for Oracle Applications Release 12 on Database Release 10gR2 - Single Instance and RAC [ID 452056.1]
Thanks,
Hussein
Similar Messages
-
We have an 11gR2 database with a physical standby (Data Guard with Fast-Start Failover) and are considering adding an Active Data Guard configuration to this setup so the standby database can be used for reporting.
Has anyone configured Active Data Guard in combination with FSFO? Any pointers on lessons learned or limitations of this combination would be greatly appreciated.
Thank you very much for your time.
An issue with Active Data Guard is the design of service level agreements. In the event of a failover, service will degrade, perhaps dramatically. Clients are often very keen to bring the standby into use because it appears to be a way of saving money, but what will you do if it is activated? Your SLA will have to state either that the query service will be disabled, or that both the read/write and read-only services will continue to be available but at greatly reduced performance (which you must quantify). You need to get the client's agreement and document this.
-
Data Guard and RMAN
Got Active Data Guard on a primary database, quite happily sending its archived logs to its standby.
I can quite happily use the Broker and switchover between them.
Now if I take RMAN backups of the primary database and then have to fail over to the standby, I'm going to lose all those backups.
Well, I can restore the whole database from the backup, because I can restore the control file from the backup and therefore restore the whole database.
But if I want to restore a single tablespace, I won't be able to, because the db_unique_name values are different and the DBIDs will be different.
Same goes if I use a recovery catalog....
So how do I failover/switchover without losing my previous RMAN backups?
If I were backing up to disk, I would come along later and back up the backups to tape.
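One common way to keep RMAN backups usable across a role change is a shared recovery catalog: primary and standby are one database with one DBID, so backups recorded in the catalog are visible from either side. A minimal sketch, where the TNS aliases prim_tns, stby_tns, and rcat are hypothetical:

```sql
-- Hypothetical aliases: prim_tns / stby_tns / rcat are illustrations only.
RMAN> CONNECT TARGET sys@prim_tns
RMAN> CONNECT CATALOG rman/rman@rcat
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;

-- After a switchover or failover, point the same catalog at the new primary:
RMAN> CONNECT TARGET sys@stby_tns
RMAN> CONNECT CATALOG rman/rman@rcat
RMAN> LIST BACKUP SUMMARY;   -- earlier backups are still cataloged under the shared DBID
```

The catalog tells the two sites apart by DB_UNIQUE_NAME while grouping them under the one DBID, which is what keeps the pre-failover backups restorable.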
Since a standby and primary are the same database with the same DBID, you can back up from either server. -
TAF on 11g Data Guard with FSFO
Hi All,
Today, I created TAF for my 11.1.0.7 Oracle database running on RHEL 5.4 64-bit.
I have configured my databases with FSFO enabled (MaxAvailability with LGWR SYNC AFFIRM transport).
I set up TAF with the following entries:
1) Created and enabled a dedicated service in primary :
I have local_listener parameter set in my primary and standby databases.
begin
  dbms_service.create_service('taf_test', 'taf_test');
end;
/
begin
  dbms_service.start_service('taf_test');
end;
/
begin
  dbms_service.modify_service(
    'taf_test',
    failover_method  => 'BASIC',
    failover_type    => 'SELECT',
    failover_retries => 200,
    failover_delay   => 1);
end;
/
2) Made sure that the listener is listening for the service created above:
lsnrctl services l_payee1fe_dg_001
LSNRCTL for Linux: Version 11.1.0.7.0 - Production on 21-JAN-2010 09:52:27
Copyright (c) 1991, 2008, Oracle. All rights reserved.
Services Summary...
Service "taf_test.us" has 1 instance(s).
Instance "payee1fe", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:13 refused:0 state:ready
LOCAL SERVER
3) TNS entry:
taf_test.us =
  (DESCRIPTION =
    (SDU = 32767)
    (ENABLE = BROKEN)
    (ADDRESS_LIST =
      (FAILOVER = ON)
      (LOAD_BALANCE = YES)
      (ADDRESS = (PROTOCOL = TCP)(HOST = payee1fe-orasvr.db.us.com)(PORT = 58003))
      (ADDRESS = (PROTOCOL = TCP)(HOST = payee1fe-orasvr.db.us.com)(PORT = 58003))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = taf_test.us)
    )
  )
4) Trigger for setting the service on the appropriate primary database:
create or replace trigger taf_test after startup on database
declare
  v_role varchar2(30);
begin
  select database_role into v_role from v$database;
  if v_role = 'PRIMARY' then
    dbms_service.start_service('taf_test');
  else
    dbms_service.stop_service('taf_test');
  end if;
end;
/
5) Ran a big SELECT on the primary and, after 5 seconds, killed the primary's pmon to force a failover.
6) After some time, I saw that the SELECT hung for a while and, once DGMGRL had opened the new primary (the original standby), the startup trigger fired and the SELECT resumed fetching rows from the point where it had hung.
In that way, my TAF was working as expected.
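(For completeness: the failover attributes configured server-side with dbms_service in step 1 can also be expressed client-side through a FAILOVER_MODE clause in the TNS entry. A sketch reusing the service above; the alias name taf_test_client.us is made up:)

```text
taf_test_client.us =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (FAILOVER = ON)
      (ADDRESS = (PROTOCOL = TCP)(HOST = payee1fe-orasvr.db.us.com)(PORT = 58003))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = taf_test.us)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 200)
        (DELAY = 1)
      )
    )
  )
```

TYPE=SELECT asks the client to resume in-flight fetches after reconnecting; METHOD=BASIC means the backup connection is made only at failure time.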
My question is :
How did the new primary get the session state of the SELECT that originated on the old primary and then failed over to the new primary? I confirmed that the SELECT was NOT re-executed on the new primary; it resumed fetching rows from the row where it had hung at the time of the failover.
Since both databases have different caches and controlfiles, I want to understand how TAF works with Data Guard.
In RAC, there is always a GRD through which session state can be coordinated between instances. But this is not the case in DG.
I did not find anything in the PRIMARY's alert log, though the STANDBY's alert log contained the statement below:
ALTER SYSTEM SET service_names='taf_test' SCOPE=MEMORY SID='payee1fe';
Can anyone shed light on the internal workings of TAF in DG?
Regards,
Bhavik Desai
Hi,
Just now I tested SELECT failover for a SELECT with a PARALLEL (4) hint. I got the message below and the SELECT did not continue on the new primary:
ERROR:
ORA-25401: can not continue fetches
4350 rows selected.
However, I observed that the session did fail over to the new primary. After getting the above message in the SQL*Plus window, when I checked the number of slaves given to my new session, I got:
SQL> select * from v$pq_tqstat;
DFO_NUMBER TQ_ID SERVER_TYP NUM_ROWS BYTES OPEN_TIME AVG_LATENCY WAITS TIMEOUTS PROCES INSTANCE
1 0 Producer 19800 65296 0 0 3 0 P002 1
1 0 Producer 19800 65296 0 0 4 1 P001 1
1 0 Producer 19800 65294 0 0 3 0 P000 1
1 0 Producer 19800 65296 0 0 3 0 P003 1
1 0 Consumer 4351 65296 0 0 8 0 QC 1
Does this mean that SELECT failover is only supported for serialized SELECTs? Or is there another way to achieve PARALLEL SELECT failover?
Regards,
Bhavik Desai
Edited by: BhavikDe on Jan 24, 2010 11:30 PM -
We have a client presentation for Active Data Guard with 11i EBS. Is this certified with 11i EBS?
Please let me know some more information.
Hi,
Yes, it's certified. Please check the threads below for notes and steps:
Data Guard setup with 11i apps
Dataguard for EBS db
Implement Standby Instance for EBS
Also see:
Business Continuity for Oracle E-Business Release 11i Using Oracle 11g Physical Standby Database - Single Instance and Oracle RAC [ID 1068913.1]
Regards,
Helios -
Using Data Guard with applications built with APEX
Hi all,
I have questions about APEX with Data Guard. Can I develop an application with APEX and then use Data Guard? Can the application switch to the standby database automatically when there is a failover event?
If it is possible, can you explain?
Thanks in advance
Vincenzo
APEX should work fine in a physical standby, as it's an exact copy of your database. A logical standby should work too, but you need to make sure the FLOWS_ schemas are being replicated and no transformation is done to them. I'm also not sure whether there are any datatypes in APEX that would not work in a logical standby, but you should be able to look those up in the Data Guard docs and then query all_tab_columns where owner like 'FLOWS_%' to verify. A physical standby would be the safer bet.
You want to make sure you install your HTTP server on a different machine so it doesn't go down with your database if the OS shuts down. Then in the TNS entry for your database you can add failover=on (see http://download.oracle.com/docs/cd/B19306_01/network.102/b14213/tnsnames.htm#sthref676) and the additional IP address(es). This will automatically fail over when your primary database is no longer accessible. Personally, I would have a second HTTP server installed at your backup site and handle the failover at the network level. If a sprinkler head pops or your air conditioner fails at your primary site, everything is going to stop there. You can easily have two HTTP servers accessing the same APEX install (on one database) at the same time, so it's easy to test and verify the state of the second HTTP server. You'll probably want to mirror the /i/ directory and any image directories you have for your applications from your primary site to your failover site.
"Automatically"? Yes, all of this can be configured to happen automatically, but most people (Tom Kyte included) recommend that you only failover manually as it's best to investigate the underlying issues with your primary site before failing over. You should be able to tell within the first 5 minutes whether the problem was a simple network outage or unintentional OS reboot, or a more serious problem such as a corrupt block, Air Conditioning failure, hardware issues, etc. Keep in mind to do automatic failover you need a 3rd site as an observer of your 2 sites to decide which site is "still alive".
Varad, I believe TAF is only a RAC concept, not Data Guard. Furthermore, I'm pretty sure that TAF is not built into APEX. So, in an APEX / RAC installation, if you're running a page that takes, say, 4 seconds (a long-running report) and 2 seconds in, the database node on which you are running fails, the end user is likely going to see a "404 not found" or some other error; then when they refresh the page it will hit another RAC node and return the results. I'm pretty sure the TAF logic is not built into APEX or mod_plsql to migrate the database session to a new surviving node when it detects a failure. Again, I could be wrong, but I'm pretty sure I'm not in this particular case (someone please prove me wrong).
Thanks,
Tyler -
Configuring maximum protection mode in Data Guard with Oracle 10g
Dear All,
I am facing a big problem with my configuration of Oracle Data Guard in maximum protection mode. As per the Oracle documentation, I have done the following.
On the primary database I configured the following parameter:
LOG_ARCHIVE_DEST_2='SERVICE=CDER LGWR SYNC AFFIRM
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=CDER'
On the standby I configured the following parameters:
LOG_ARCHIVE_DEST_2='SERVICE=REDC LGWR SYNC AFFIRM
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=REDC'
I created standby redo logs on the standby database as per the documentation.
I shut down the primary database, started it in mount state, and executed the following command:
SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PROTECTION;
After the database was successfully altered, I executed the following command to open the database:
SQL> ALTER DATABASE OPEN;
What is happening is that I am receiving an end-of-communication-channel error, and when I look at the log file the following error appears:
Thu Jul 22 23:33:37 2010
Errors in file c:\oracle\product\10.2.0\admin\redc\bdump\redc_psp0_1088.trc:
ORA-16072: a minimum of one standby database destination is required
Though when I reset the Data Guard configuration to maximize performance, it works successfully and the database opens.
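(Aside: before raising the protection mode, it can help to confirm that the primary actually sees a synchronized standby destination and that standby redo logs are in place. A sketch of the checks using standard views, not a definitive diagnosis:)

```sql
-- On the primary (mounted): is the standby destination valid?
SELECT dest_id, status, error
FROM   v$archive_dest_status
WHERE  dest_id = 2;

-- On the standby: do standby redo logs exist to receive the SYNC redo?
SELECT group#, bytes, status FROM v$standby_log;
```

ORA-16072 at open generally means no usable standby destination was available at the moment the stricter protection mode took effect.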
Please, guys, guide me through this.
You got it:
redc.__db_cache_size=1056964608
redc.__java_pool_size=16777216
redc.__large_pool_size=16777216
redc.__shared_pool_size=318767104
redc.__streams_pool_size=0
*.audit_file_dest='C:\oracle\product\10.2.0\admin\REDC\adump'
*.background_dump_dest='C:\oracle\product\10.2.0\admin\REDC\bdump'
*.compatible='10.2.0.3.0'
*.control_files='C:\oracle\product\10.2.0\oradata\REDC\control01.ctl','C:\oracle\product\10.2.0\oradata\REDC\control02.ctl','C:\oracle\product\10.2.0\oradata\REDC\control03.ctl'
*.core_dump_dest='C:\oracle\product\10.2.0\admin\REDC\cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=8
*.DB_FILE_NAME_CONVERT='C:\oracle\product\10.2.0\oradata\CDER','C:\oracle\product\10.2.0\oradata\REDC','D:\oracle\oradata\CDER','D:\oracle\oradata\REDC'
*.db_name='REDC'
*.DB_UNIQUE_NAME='REDC'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=REDCXDB)'
*.FAL_CLIENT='REDC'
*.FAL_SERVER='CDER'
*.job_queue_processes=10
*.LOG_ARCHIVE_CONFIG='DG_CONFIG=(REDC,CDER)'
*.LOG_ARCHIVE_DEST_1='LOCATION=D:\oracle\Archives
VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=REDC'
*.LOG_ARCHIVE_DEST_2='SERVICE=CDER LGWR SYNC AFFIRM
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=CDER'
*.LOG_ARCHIVE_DEST_STATE_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.LOG_ARCHIVE_FORMAT='%t_%s_%r.arc'
*.LOG_ARCHIVE_MAX_PROCESSES=5
*.LOG_FILE_NAME_CONVERT='D:\oracle\Archives','D:\oracle\Archives'
*.open_cursors=300
*.pga_aggregate_target=471859200
*.processes=150
*.REMOTE_LOGIN_PASSWORDFILE='EXCLUSIVE'
*.sga_target=1417674752
*.STANDBY_FILE_MANAGEMENT='AUTO'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='C:\oracle\product\10.2.0\admin\REDC\udump' -
Hello All,
I need to know the configuration steps that need to be applied on the primary database and the standby database in a Data Guard environment.
Are there any quick links, instead of reading the whole Data Guard administration guide?
I need to know the steps that need to be applied in general.
Regards,
Check the link below if you are using Oracle 11g:
http://neeraj-dba.blogspot.com/2011/10/active-standby-database-in-oracle-11g.html
--neeraj -
Dataguard with 11g Standard Edition
Hi, we have 11g Standard Edition. I wish to provide high availability. From the license document, we are able to set up a basic standby database (manually managed). What is the difference between Data Guard and a basic standby? How can I configure it? What are the requirements? Are there any other possible ways?
Thanks. That document is very useful for me. It covers versions 8.1.7.4 to 10.2, but I have 11g. Is it applicable to 11g also?
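For a manually managed standby on Standard Edition (no managed redo transport or broker), the cycle is essentially: copy the archived logs over yourself, then apply them. A rough sketch of the apply side, assuming the logs have already been shipped into the standby's archive destination:

```sql
-- On the standby instance:
STARTUP NOMOUNT
ALTER DATABASE MOUNT STANDBY DATABASE;
-- Apply whatever archived logs are available, then repeat
-- the copy + recover cycle on a schedule (e.g. from cron):
RECOVER AUTOMATIC STANDBY DATABASE;
```

The same idea still works on 11g; what Standard Edition lacks compared with Data Guard is the automated transport, apply, and monitoring.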
Edited by: ziya on Apr 24, 2009 2:55 PM -
Hi everybody!
I have a problem. Some weeks ago I opened a post related to this issue. We have two Data Guard setups with Data Guard broker. One of them is resynced (thanks to mseberg and this forum) and now I have problems with the other.
Once I had learned how to configure and start/stop the Data Guard broker, I have a more basic problem, which is to resync it. I follow a process where I back up the primary with RMAN, copy the RMAN files to the other server along with the controlfile, and then restore with RMAN.
The problem is that the database is too big, about 2 hours to back up, and when I restore it, no archived logs appear to be synchronized.
I have followed the same process as for the other one and I can't resync it. I think there is something in my parameters, or something new in the 11g version...
SQL> show parameters
NAME TYPE VALUE
O7_DICTIONARY_ACCESSIBILITY boolean FALSE
active_instance_count integer
aq_tm_processes integer 0
archive_lag_target integer 0
asm_diskgroups string
asm_diskstring string
asm_power_limit integer 1
asm_preferred_read_failure_groups string
audit_file_dest string /opt/oracle/admin/MN122010P/ad
ump
audit_sys_operations boolean FALSE
audit_syslog_level string
audit_trail string DB
background_core_dump string partial
background_dump_dest string /opt/oracle/diag/rdbms/mn12201
0p/MN122010P/trace
backup_tape_io_slaves boolean FALSE
bitmap_merge_area_size integer 1048576
blank_trimming boolean FALSE
buffer_pool_keep string
buffer_pool_recycle string
cell_offload_compaction string ADAPTIVE
cell_offload_parameters string
cell_offload_plan_display string AUTO
cell_offload_processing boolean TRUE
cell_partition_large_extents string DEFAULT
circuits integer
client_result_cache_lag big integer 3000
client_result_cache_size big integer 0
cluster_database boolean FALSE
cluster_database_instances integer 1
cluster_interconnects string
commit_logging string
commit_point_strength integer 1
commit_wait string
commit_write string
compatible string 11.1.0.0.0
control_file_record_keep_time integer 7
control_files string /opt/oracle/oradata/MN122010P/
controlfile/control01.ctl, /op
t/oracle/oradata1/MN122010P/co
ntrolfile/control02.ctl
control_management_pack_access string DIAGNOSTIC+TUNING
core_dump_dest string /opt/oracle/diag/rdbms/mn12201
0p/MN122010P/cdump
cpu_count integer 4
create_bitmap_area_size integer 8388608
create_stored_outlines string
cursor_sharing string EXACT
cursor_space_for_time boolean FALSE
db_16k_cache_size big integer 0
db_2k_cache_size big integer 0
db_32k_cache_size big integer 0
db_4k_cache_size big integer 0
db_8k_cache_size big integer 0
db_block_buffers integer 0
db_block_checking string FALSE
db_block_checksum string TYPICAL
db_block_size integer 8192
db_cache_advice string ON
db_cache_size big integer 0
db_create_file_dest string /opt/oracle/oradata
db_create_online_log_dest_1 string /opt/oracle/oradata
db_create_online_log_dest_2 string /opt/oracle/oradata1
db_create_online_log_dest_3 string
db_create_online_log_dest_4 string
db_create_online_log_dest_5 string
db_domain string domain.es
db_file_multiblock_read_count integer 69
db_file_name_convert string
db_files integer 200
db_flashback_retention_target integer 1440
db_keep_cache_size big integer 0
db_lost_write_protect string NONE
db_name string MN122010
db_recovery_file_dest string /opt/oracle/oradata/flash_reco
very_area
db_recovery_file_dest_size big integer 100G
db_recycle_cache_size big integer 0
db_securefile string PERMITTED
db_ultra_safe string OFF
db_unique_name string MN122010P
db_writer_processes integer 1
dbwr_io_slaves integer 0
ddl_lock_timeout integer 0
dg_broker_config_file1 string /opt/oracle/product/db111/dbs/
dr1MN122010P.dat
dg_broker_config_file2 string /opt/oracle/product/db111/dbs/
dr2MN122010P.dat
dg_broker_start boolean FALSE
diagnostic_dest string /opt/oracle
disk_asynch_io boolean TRUE
dispatchers string (PROTOCOL=TCP) (SERVICE=MN1220
10PXDB)
distributed_lock_timeout integer 60
dml_locks integer 844
drs_start boolean FALSE
enable_ddl_logging boolean FALSE
event string
fal_client string
fal_server string
fast_start_io_target integer 0
fast_start_mttr_target integer 0
fast_start_parallel_rollback string LOW
file_mapping boolean FALSE
fileio_network_adapters string
filesystemio_options string none
fixed_date string
gc_files_to_locks string
gcs_server_processes integer 0
global_context_pool_size string
global_names boolean FALSE
global_txn_processes integer 1
hash_area_size integer 131072
hi_shared_memory_address integer 0
hs_autoregister boolean TRUE
ifile file
instance_groups string
instance_name string MN122010P
instance_number integer 0
instance_type string RDBMS
java_jit_enabled boolean TRUE
java_max_sessionspace_size integer 0
java_pool_size big integer 0
java_soft_sessionspace_limit integer 0
job_queue_processes integer 1000
large_pool_size big integer 0
ldap_directory_access string NONE
ldap_directory_sysauth string no
license_max_sessions integer 0
license_max_users integer 0
license_sessions_warning integer 0
local_listener string LISTENER_MN122010P
lock_name_space string
lock_sga boolean FALSE
log_archive_config string dg_config=(MN122010P,MN122010R
,MN12201R)
log_archive_dest string
log_archive_dest_1 string location="USE_DB_RECOVERY_FILE
_DEST", valid_for=(ALL_LOGFIL
ES,ALL_ROLES)
log_archive_dest_10 string
log_archive_dest_2 string service=MN12201R, LGWR SYNC AF
FIRM delay=0 OPTIONAL compress
ion=DISABLE max_failure=0 max_
connections=1 reopen=300 db_
unique_name=MN12201R net_timeo
ut=30 valid_for=(online_logfi
le,primary_role)
log_archive_dest_3 string
log_archive_dest_4 string
log_archive_dest_5 string
log_archive_dest_6 string
log_archive_dest_7 string
log_archive_dest_8 string
log_archive_dest_9 string
log_archive_dest_state_1 string ENABLE
log_archive_dest_state_10 string enable
log_archive_dest_state_2 string ENABLE
log_archive_dest_state_3 string ENABLE
log_archive_dest_state_4 string enable
log_archive_dest_state_5 string enable
log_archive_dest_state_6 string enable
log_archive_dest_state_7 string enable
log_archive_dest_state_8 string enable
log_archive_dest_state_9 string enable
log_archive_duplex_dest string
log_archive_format string %t_%s_%r.dbf
log_archive_local_first boolean TRUE
log_archive_max_processes integer 4
log_archive_min_succeed_dest integer 1
log_archive_start boolean FALSE
log_archive_trace integer 0
log_buffer integer 7668736
log_checkpoint_interval integer 0
log_checkpoint_timeout integer 1800
log_checkpoints_to_alert boolean FALSE
log_file_name_convert string
max_commit_propagation_delay integer 0
max_dispatchers integer
max_dump_file_size string unlimited
max_enabled_roles integer 150
max_shared_servers integer
memory_max_target big integer 512M
memory_target big integer 512M
nls_calendar string
nls_comp string BINARY
nls_currency string
nls_date_format string
nls_date_language string
nls_dual_currency string
nls_iso_currency string
nls_language string AMERICAN
nls_length_semantics string BYTE
nls_nchar_conv_excp string FALSE
nls_numeric_characters string
nls_sort string
nls_territory string AMERICA
nls_time_format string
nls_time_tz_format string
nls_timestamp_format string
nls_timestamp_tz_format string
object_cache_max_size_percent integer 10
object_cache_optimal_size integer 102400
olap_page_pool_size big integer 0
open_cursors integer 300
open_links integer 4
open_links_per_instance integer 4
optimizer_capture_sql_plan_baselines boolean FALSE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 11.1.0.7
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
optimizer_use_invisible_indexes boolean FALSE
optimizer_use_pending_statistics boolean FALSE
optimizer_use_sql_plan_baselines boolean TRUE
os_authent_prefix string ops$
os_roles boolean FALSE
parallel_adaptive_multi_user boolean TRUE
parallel_automatic_tuning boolean FALSE
parallel_execution_message_size integer 2152
parallel_instance_group string
parallel_io_cap_enabled boolean FALSE
parallel_max_servers integer 40
parallel_min_percent integer 0
parallel_min_servers integer 0
parallel_server boolean FALSE
parallel_server_instances integer 1
parallel_threads_per_cpu integer 2
pga_aggregate_target big integer 0
plscope_settings string IDENTIFIERS:NONE
plsql_ccflags string
plsql_code_type string INTERPRETED
plsql_debug boolean FALSE
plsql_native_library_dir string
plsql_native_library_subdir_count integer 0
plsql_optimize_level integer 2
plsql_v2_compatibility boolean FALSE
plsql_warnings string DISABLE:ALL
pre_page_sga boolean FALSE
processes integer 170
query_rewrite_enabled string TRUE
query_rewrite_integrity string enforced
rdbms_server_dn string
read_only_open_delayed boolean FALSE
recovery_parallelism integer 0
recyclebin string on
redo_transport_user string
remote_dependencies_mode string TIMESTAMP
remote_listener string
remote_login_passwordfile string EXCLUSIVE
remote_os_authent boolean FALSE
remote_os_roles boolean FALSE
replication_dependency_tracking boolean TRUE
resource_limit boolean FALSE
resource_manager_cpu_allocation integer 4
resource_manager_plan string
result_cache_max_result integer 5
result_cache_max_size big integer 1312K
result_cache_mode string MANUAL
result_cache_remote_expiration integer 0
resumable_timeout integer 0
rollback_segments string
sec_case_sensitive_logon boolean TRUE
sec_max_failed_login_attempts integer 10
sec_protocol_error_further_action string CONTINUE
sec_protocol_error_trace_action string TRACE
sec_return_server_release_banner boolean FALSE
serial_reuse string disable
service_names string MN122010P.domain.es
session_cached_cursors integer 50
session_max_open_files integer 10
sessions integer 192
sga_max_size big integer 512M
sga_target big integer 0
shadow_core_dump string partial
shared_memory_address integer 0
shared_pool_reserved_size big integer 10066329
shared_pool_size big integer 0
shared_server_sessions integer
shared_servers integer 1
skip_unusable_indexes boolean TRUE
smtp_out_server string
sort_area_retained_size integer 0
sort_area_size integer 65536
spfile string /opt/oracle/product/db111/dbs/
spfileMN122010P.ora
sql92_security boolean FALSE
sql_trace boolean FALSE
sql_version string NATIVE
sqltune_category string DEFAULT
standby_archive_dest string ?/dbs/arch
standby_file_management string AUTO
star_transformation_enabled string FALSE
statistics_level string TYPICAL
streams_pool_size big integer 0
tape_asynch_io boolean TRUE
thread integer 0
timed_os_statistics integer 0
timed_statistics boolean TRUE
trace_enabled boolean TRUE
tracefile_identifier string
transactions integer 211
transactions_per_rollback_segment integer 5
undo_management string AUTO
undo_retention integer 900
undo_tablespace string UNDOTBS1
use_indirect_data_buffers boolean FALSE
user_dump_dest string /opt/oracle/diag/rdbms/mn12201
0p/MN122010P/trace
utl_file_dir string
workarea_size_policy string AUTO
xml_db_events string enable
I have tested the connectivity between them and it's OK. I recreated the password file.
[oracle@servername01 MN122010P]$ sqlplus "sys/[email protected] as sysdba"
SQL> select * from v$instance;
INSTANCE_NUMBER INSTANCE_NAME
HOST_NAME
VERSION STARTUP_T STATUS PAR THREAD# ARCHIVE LOG_SWITCH_WAIT
LOGINS SHU DATABASE_STATUS INSTANCE_ROLE ACTIVE_ST BLO
1 MN122010P
servername01
11.1.0.7.0 09-OCT-11 OPEN NO 1 STARTED
ALLOWED NO ACTIVE PRIMARY_INSTANCE NORMAL NO
[oracle@servername01 MN122010P]$ sqlplus "sys/[email protected] as sysdba"
SQL> select * from v$instance;
INSTANCE_NUMBER INSTANCE_NAME
HOST_NAME
VERSION STARTUP_T STATUS PAR THREAD# ARCHIVE LOG_SWITCH_WAIT
LOGINS SHU DATABASE_STATUS INSTANCE_ROLE ACTIVE_ST BLO
1 MN12201R
servername02
11.1.0.7.0 28-NOV-11 MOUNTED NO 1 STARTED
ALLOWED NO ACTIVE PRIMARY_INSTANCE NORMAL NO
Recovery Manager: Release 11.1.0.7.0 - Production on Thu Dec 1 10:16:23 2011
Copyright (c) 1982, 2007, Oracle. All rights reserved.
RMAN> connect target /
connected to target database: MN122010 (DBID=2440111267)
RMAN> run{
ALLOCATE CHANNEL d1 DEVICE TYPE DISK FORMAT '/opt/oracle/oradata/BACKUPS_01/MN122010P/backup_%d_t%t_s%s_p%p';
BACKUP DATABASE PLUS ARCHIVELOG;
2> 3> 4>
using target database control file instead of recovery catalog
allocated channel: d1
channel d1: SID=140 device type=DISK
Starting backup at 01-DEC-11
current log archived
channel d1: starting archived log backup set
channel d1: specifying archived log(s) in backup set
input archived log thread=1 sequence=4117 RECID=7260 STAMP=766935608
input archived log thread=1 sequence=4118 RECID=7261 STAMP=766935619
input archived log thread=1 sequence=4119 RECID=7262 STAMP=766935630
input archived log thread=1 sequence=4120 RECID=7263 STAMP=766935635
....List of archives....
Starting backup at 01-DEC-11
channel d1: starting full datafile backup set
channel d1: specifying datafile(s) in backup set
input datafile file number=00010 name=/opt/oracle/oradata/MN122010P/TBCESPANDM_01.DBF
input datafile file number=00009 name=/opt/oracle/oradata/MN122010P/CESPAROUTING_01.DBF
input datafile file number=00007 name=/opt/oracle/oradata/MN122010P/TBCESPACALLEJERO_01.DBF
input datafile file number=00008 name=/opt/oracle/oradata/MN122010P/CESPAGEOCODER_01.DBF
input datafile file number=00001 name=/opt/oracle/oradata/MN122010P/system01.dbf
input datafile file number=00002 name=/opt/oracle/oradata/MN122010P/sysaux01.dbf
input datafile file number=00003 name=/opt/oracle/oradata/MN122010P/undotbs01.dbf
input datafile file number=00006 name=/opt/oracle/oradata/MN122010P/TBCESPAFONDO_01.DBF
input datafile file number=00005 name=/opt/oracle/oradata/MN122010P/TBCESPAPOIS_01.DBF
input datafile file number=00004 name=/opt/oracle/oradata/MN122010P/users01.dbf
channel d1: starting piece 1 at 01-DEC-11
channel d1: finished piece 1 at 01-DEC-11
piece handle=/opt/oracle/oradata/BACKUPS_01/MN122010P/backup_MN122010_t768739341_s768_p1 tag=TAG20111201T104221 comment=NONE
channel d1: backup set complete, elapsed time: 00:39:26
Finished backup at 01-DEC-11
Starting backup at 01-DEC-11
current log archived
channel d1: starting archived log backup set
channel d1: specifying archived log(s) in backup set
input archived log thread=1 sequence=4256 RECID=7399 STAMP=768741707
channel d1: starting piece 1 at 01-DEC-11
channel d1: finished piece 1 at 01-DEC-11
piece handle=/opt/oracle/oradata/BACKUPS_01/MN122010P/backup_MN122010_t768741708_s769_p1 tag=TAG20111201T112148 comment=NONE
channel d1: backup set complete, elapsed time: 00:00:01
Finished backup at 01-DEC-11
Starting Control File and SPFILE Autobackup at 01-DEC-11
piece handle=/opt/oracle/product/db111/dbs/c-2440111267-20111201-00 comment=NONE
Finished Control File and SPFILE Autobackup at 01-DEC-11
released channel: d1
I ran ALTER DATABASE CREATE STANDBY CONTROLFILE AS on the primary, and then on the standby:
SQL> shutdown immediate;
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
SQL> startup nomount;
ORACLE instance started.
Total System Global Area 2937555928 bytes
Fixed Size 744408 bytes
Variable Size 1862270976 bytes
Database Buffers 1073741824 bytes
Redo Buffers 798720 bytes
Copied the controlfile to the standby controlfile locations, then:
SQL> STARTUP NOMOUNT
SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
And restoring with rman
Restoring
List of Archived Logs in backup set 616
Thrd Seq Low SCN Low Time Next SCN Next Time
1 4256 27049296 01-DEC-11 27052551 01-DEC-11
RMAN> run{
2> allocate channel c1 type disk format '/opt/oracle/oradata/BACKUPS_01/MN122010P/backup_%d_t%t_s%s_p%p';
3> restore database;
4> recover database until sequence 4256 thread 1;
5> sql 'alter database recover managed standby database disconnect from session';
6> release channel c1;
7> }
allocated channel: c1
channel c1: SID=164 device type=DISK
Starting restore at 01-DEC-11
Starting implicit crosscheck backup at 01-DEC-11
Crosschecked 115 objects
Finished implicit crosscheck backup at 01-DEC-11
Starting implicit crosscheck copy at 01-DEC-11
Crosschecked 24 objects
Finished implicit crosscheck copy at 01-DEC-11
searching for all files in the recovery area
cataloging files...
no files cataloged
channel c1: starting datafile backup set restore
channel c1: specifying datafile(s) to restore from backup set
channel c1: restoring datafile 00001 to /opt/oracle/oradata/MN122010P/system01.dbf
channel c1: restoring datafile 00002 to /opt/oracle/oradata/MN122010P/sysaux01.dbf
channel c1: restoring datafile 00003 to /opt/oracle/oradata/MN122010P/undotbs01.dbf
channel c1: restoring datafile 00004 to /opt/oracle/oradata/MN122010P/users01.dbf
channel c1: restoring datafile 00005 to /opt/oracle/oradata/MN122010P/TBCESPAPOIS_01.DBF
channel c1: restoring datafile 00006 to /opt/oracle/oradata/MN122010P/TBCESPAFONDO_01.DBF
channel c1: restoring datafile 00007 to /opt/oracle/oradata/MN122010P/TBCESPACALLEJERO_01.DBF
channel c1: restoring datafile 00008 to /opt/oracle/oradata/MN122010P/CESPAGEOCODER_01.DBF
channel c1: restoring datafile 00009 to /opt/oracle/oradata/MN122010P/CESPAROUTING_01.DBF
channel c1: restoring datafile 00010 to /opt/oracle/oradata/MN122010P/TBCESPANDM_01.DBF
channel c1: reading from backup piece /opt/oracle/oradata/BACKUPS_01/MN122010P/backup_MN122010_t768739341_s768_p1
After the restore, I found on the standby that no archived logs had been applied:
SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME,APPLIED
FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#
/ 2 3
no rows selected
SQL> select * from v$Instance;
INSTANCE_NUMBER INSTANCE_NAME
HOST_NAME
VERSION STARTUP_T STATUS PAR THREAD# ARCHIVE LOG_SWITCH_WAIT
LOGINS SHU DATABASE_STATUS INSTANCE_ROLE ACTIVE_ST BLO
1 MN12201R
server02
11.1.0.7.0 01-DEC-11 MOUNTED NO 1 STARTED
ALLOWED NO ACTIVE PRIMARY_INSTANCE NORMAL NO
SQL> select message from v$dataguard_status;
MESSAGE
ARC0: Archival started
ARC1: Archival started
ARC2: Archival started
ARC3: Archival started
ARC0: Becoming the 'no FAL' ARCH
ARC0: Becoming the 'no SRL' ARCH
ARC1: Becoming the heartbeat ARCH
7 rows selected.
On primary
MESSAGE
ARC3: Beginning to archive thread 1 sequence 4258 (27056314-27064244)
ARC3: Completed archiving thread 1 sequence 4258 (27056314-27064244)
ARC0: Beginning to archive thread 1 sequence 4259 (27064244-27064251)
ARC0: Completed archiving thread 1 sequence 4259 (27064244-27064251)
ARC2: Beginning to archive thread 1 sequence 4260 (27064251-27064328)
ARC2: Completed archiving thread 1 sequence 4260 (27064251-27064328)
ARC3: Beginning to archive thread 1 sequence 4261 (27064328-27064654)
ARC3: Completed archiving thread 1 sequence 4261 (27064328-27064654)
Edited by: user8898355 on 01-Dec-2011 7:02
I'm seeing these errors at the primary:
LNSb started with pid=20, OS id=30141
LGWR: Attempting destination LOG_ARCHIVE_DEST_2 network reconnect (16086)
LGWR: Destination LOG_ARCHIVE_DEST_2 network reconnect abandoned
trace file:
*** 2011-12-02 09:52:17.164
*** SESSION ID:(183.1) 2011-12-02 09:52:17.164
*** CLIENT ID:() 2011-12-02 09:52:17.164
*** SERVICE NAME:(SYS$BACKGROUND) 2011-12-02 09:52:17.164
*** MODULE NAME:() 2011-12-02 09:52:17.164
*** ACTION NAME:() 2011-12-02 09:52:17.164
*** TRACE FILE RECREATED AFTER BEING REMOVED ***
*** 2011-12-02 09:52:17.164 6465 krsu.c
Initializing NetServer[LNSb] for dest=MN12201R.domain.es mode SYNC
LNSb is not running anymore.
New SYNC LNSb needs to be started
Waiting for subscriber count on LGWR-LNSb channel to go to zero
Subscriber count went to zero - time now is <12/02/2011 09:52:17>
Starting LNSb ...
Waiting for LNSb [pid 30141] to initialize itself
*** 2011-12-02 09:52:20.185
*** 2011-12-02 09:52:20.185 6828 krsu.c
Netserver LNSb [pid 30141] for mode SYNC has been initialized
Performing a channel reset to ignore previous responses
Successfully started LNSb [pid 30141] for dest MN12201R.domain.es mode SYNC ocis=0x2ba2cb1fece8
*** 2011-12-02 09:52:20.185 2880 krsu.c
Making upiahm request to LNSb [pid 30141]: Begin Time is <12/02/2011 09:52:17>. NET_TIMEOUT = <30> seconds
Waiting for LNSb to respond to upiahm
*** 2011-12-02 09:52:20.262 3044 krsu.c
upiahm connect done status is 0
Receiving message from LNSb
Receiving message from LNSb
LGWR: Failed
rfsp: 0x2ba2ca55c328
rfsmod: 2
rfsver: 3
rfsflag: 0x24882
-
Dataguard in a replicated environment
Folks,
Has anyone implemented dataguard(standby database) in a replicated environment
or
worked in an environment where replication (updateable snapshots) is already in place along with Data Guard?
Are there any complications I need to be aware of whilst setting-up dataguard with replication on?
Thanks
Amit
That is entirely due to the checkpoint delay. Depending on variations in hardware,
I/O configuration and workload, it can take clients longer to flush their caches than
the master. You can adjust the delay, which is 30 seconds by default, by calling
the DB_ENV->rep_set_timeout API with the DB_REP_CHECKPOINT_DELAY flag.
If you set it to 0, there will be no delay.
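Assuming a Berkeley DB replication client with an already-opened DB_ENV handle (the environment setup is omitted here), the adjustment described above might be sketched as:

```c
#include <db.h>

/* Sketch: remove the default 30-second checkpoint delay on a
 * replication client. 'dbenv' is an already-opened DB_ENV handle.
 * The timeout argument is in microseconds; 0 disables the delay. */
int disable_checkpoint_delay(DB_ENV *dbenv)
{
    return dbenv->rep_set_timeout(dbenv, DB_REP_CHECKPOINT_DELAY, 0);
}
```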
Sue LoVerso
Oracle
-
Oracle Streams - Dataguard Configuration
Dataguard<------Streams<----Production------> Dataguard
I'm planning to implement a 4-way system where my production database, with its own physical standby, will stream (via a Streams database) to a reporting database with its own physical standby. Effectively, my production database, and especially its redo logs, will be put under severe load. I would like some light on the feasibility of such a setup. What parameters can I take care of so as to make it a profitable high-availability, high-performance system?
Any suggestions and advice will be highly appreciated.
Remember that Streams checks the source DB name of the LCR. Thus the db_name of each standby must be the same as that of the open DB, or the remote DB will reject the LCRs of the standby when it is activated.
Also, Streams, Data Guard and crashes don't fit together so well with respect to Streams consistency. At crash time, some transactions will be lost that have already been sent by Streams, since Streams reacts within a second. Thus when you activate the standby, with its loss of some data, you are going to miss some source transactions that have already been replicated. You may end up with errors on the target site, either duplicate values on transactions or OLD values in the target that do not match the new value in the LCR.
You can't avoid this 100%, but you can reduce its extent. Use 'LGWR ASYNC' as the Data Guard transport method:
LOG_ARCHIVE_DEST_2='SERVICE=boston LGWR ASYNC'
http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/log_transport.htm#i1265762
This requires creating standby redo logs on the Data Guard DB (and also on the source DB, since it may itself become the standby) so that LGWR updates the remote redo as soon as it can ('ASYNC'; 'SYNC' means that the commit on the source completes only AFTER the commit into the standby, and you don't want that).
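A hedged sketch of the standby redo log setup described above (group numbers and sizes are illustrative; SRLs should match the online redo log size, with one more SRL group than online log groups):

```sql
-- On both primary and standby (the primary may itself become a standby):
ALTER DATABASE ADD STANDBY LOGFILE GROUP 10 SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 11 SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 12 SIZE 50M;

-- ASYNC redo transport, as in the example destination above:
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=boston LGWR ASYNC' SCOPE=BOTH;
```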
From my own observation, the 'LGWR ASYNC' lag is usually under 1 second behind production, which is very good.
-
Monitoring Dataguard on 10g through OEM
Please,
Do I need to install a version of OEM to monitor my Data Guard on 10g?
I configured Data Guard with Oracle 10g running on Windows, but the OEM GUI doesn't show a tab related to Data Guard.
How can I get it? Is there a link that would help me resolve this issue?
Thanks
we monitor from console using SYS user
You have to monitor it using a SYSDBA user, not necessarily SYS. So you can create a user, e.g. SYS_MONITOR, which has a different policy than SYS. It is also best to change that password on a regular basis, maybe every 3 months.
So is there any way we can change the SYS password on the console using a command-line utility?
I don't get your question. Are you asking if it is possible to change the monitoring username/password using some command-line utility? If so, I haven't heard of such a tool. But there could be a package like MGMT_TARGET through which you can manipulate the credentials, if you know how to use it.
-
Hi, I'm interested in learning RAC and Data Guard with real-time knowledge. Please help me find a training institution or people with real-time knowledge.
Thanks
The term "real-time" has a very specific meaning in Information Technology. See http://en.wikipedia.org/wiki/Real-time_computing for the basic details.
Oracle RAC is not a real-time database... nor does the term "real-time knowledge" make much sense, IMO.
It is always difficult to get to grips with the nuances of the English language when it is your 2nd or even 3rd language. But it is important that we speak the same language and understand the same terms and concepts in Information Technology.
Oracle dataguard configuration command
Hello,
Can anyone provide me the commands used to configure Data Guard so that the primary database is connected to its standby database?
Anyway, how do you configure a database into Data Guard with a standby database?
What SQL commands are used to configure Data Guard?
How do you configure Data Guard using Enterprise Manager?
Thanks
Alain
Perhaps you could find the answers you seek in the fine Oracle® Data Guard Concepts and Administration guide.
:p
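For what it's worth, a minimal broker-based configuration (all database and service names here are hypothetical; see the guide for the prerequisite init parameters and listener entries) might look like:

```
DGMGRL> CONNECT sys@prima
DGMGRL> CREATE CONFIGURATION 'dg_conf' AS
>         PRIMARY DATABASE IS 'prima' CONNECT IDENTIFIER IS prima;
DGMGRL> ADD DATABASE 'stand' AS CONNECT IDENTIFIER IS stand;
DGMGRL> ENABLE CONFIGURATION;
DGMGRL> SHOW CONFIGURATION;
```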