Advantage of fal_client/fal_server in 9.2.0.7?
Hi
I have been setting up standby databases, and just have a (hopefully) stupid question ...
What is the advantage of using fal_client/fal_server over automatic gap resolution in 9.2.0.7?
All my gaps seem to get resolved without the fal* parameters set ...
Please can someone point out the obvious for me.
Cheers
Ian
Thanks for that, but it's a 10.1 document.
Just read Metalink doc 232649.1 again; I still can't see the advantage of setting the fal* parameters. I have tested, and my gaps seem to get resolved without them.
Having said that, the 10.1 doc does say "Prior to Oracle Database 10g Release 1, the FAL client and server were used to resolve gaps from the primary database", which contradicts the Metalink doc slightly.
I'll keep reading to see if I can understand, but in the meantime can someone please point out the obvious for me!
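For what it's worth, a minimal sketch of how the fal* parameters are typically set on a 9.2 physical standby (the TNS alias names PRIMARY_TNS and STANDBY_TNS are placeholders; substitute your own):

```sql
-- In the standby's spfile (or init.ora):
-- fal_server = TNS alias of the database the standby fetches missing archives from
-- fal_client = TNS alias the FAL server uses to ship the archives back to this standby
ALTER SYSTEM SET fal_server = 'PRIMARY_TNS' SCOPE = BOTH;
ALTER SYSTEM SET fal_client = 'STANDBY_TNS' SCOPE = BOTH;
```

The usual argument for setting them explicitly is that FAL lets the standby pull a specific missing sequence on demand, rather than relying on the primary's archiver to notice the gap and re-ship.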
Similar Messages
-
fal_client/fal_server triggering when applying with a delay
Hi,
I'm on 10.2.0.3 RAC with a physical standby, an old-fashioned config with no standby redo logs, so shipping happens only after a log switch.
We've got a 10h apply delay, and it looks like gap resolution via FAL client/server is triggered only when the actual archive log
needs to be recovered and is not found on the standby site, not when it simply fails to ship.
So, is that a feature or a bug?
In my understanding, gap resolution should trigger as soon as a gap is detected ...
It looks like the delay we are using has messed things up.
Regards.
Greg
Hi,
Well, the shipment of archive files is done by the primary database's background processes. The standby database is not involved in the shipment of archive log files.
Hence the standby database cannot detect a missing archived log file except while recovering. If, while trying to recover, it finds an archive log file missing, it sends a request to the primary for that particular log file, and the primary database sends the file again.
No, it's not a bug, rather a necessity, as the standby database does not know which sequence# has been generated into an archive log on the primary.
Yes, identifying the missing archived log only while recovering will increase the delay. Below is an example:
Primary generates seq#: 100
Recovery process at DR is at seq#: 90
Files transferred to DR: 90, 91, 92, 93, 94, 95, 96, [97 missed], 98, 99, ... and so on.
Here, until the standby recovers seq# 96, it won't know that seq# 97 is missing, hence the delay.
To compensate, we have to increase the recovery speed by the other documented means.
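Rather than waiting for managed recovery to stall on the missing sequence, the standby can be polled for gaps; a minimal sketch using the standard dynamic views, run on the standby:

```sql
-- Highest archive log sequence received per thread on the standby
SELECT thread#, MAX(sequence#) AS last_received
FROM   v$archived_log
GROUP  BY thread#;

-- Any gap the standby has already detected (thread plus missing sequence range)
SELECT thread#, low_sequence#, high_sequence#
FROM   v$archive_gap;
```

Comparing last_received against ARCHIVE LOG LIST on the primary shows how far shipping lags, independently of the apply delay.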
The link below is the Oracle 10g best-practices paper: Data Guard Redo Apply and Media Recovery.
http://www.oracle.com/technetwork/database/features/availability/maa-wp-10grecoverybestpractices-129577.pdf
Hope this answers your question.
regards,
Sajjad -
TIP 04: Duplicating a Database in 10g by Joel Pérez
Hi OTN Readers!
Every day I connect to the Internet, and one of the first things I do is open the OTN main page to look for any new article or news about Oracle technology. Then I open the OTN Forums main page and check which answers I can write to help people working with Oracle technology, and I have decided to begin writing some threads to help DBAs and developers learn the new features of 10g.
I hope you can take advantage of them; they will be published here in this forum. For any comment you can write to me directly at: [email protected]. Apart from your comments, you can suggest any topic for me to write an article like this one.
Please do not reply to this thread; if you have any question related to it, I recommend you open a new post. Thanks!
The tip of this thread is: Duplicating a Database in 10g
Joel Pérez
http://otn.oracle.com/experts
Step 6: Editing the generated file
The generated file will look like this:
Dump file f:\ora9i\admin\copy1\udump\copy1_ora_912.trc
Thu May 20 16:27:37 2004
ORACLE V9.2.0.1.0 - Production vsnsta=0
vsnsql=12 vsnxtr=3
Windows 2000 Version 5.0 Service Pack 4, CPU type 586
Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production
Windows 2000 Version 5.0 Service Pack 4, CPU type 586
Instance name: copy1
Redo thread mounted by this instance: 1
Oracle process number: 10
Windows thread id: 912, image: ORACLE.EXE
*** SESSION ID:(9.38) 2004-05-20 16:27:37.000
*** 2004-05-20 16:27:37.000
# The following are current System-scope REDO Log Archival related
# parameters and can be included in the database initialization file.
# LOG_ARCHIVE_DEST=''
# LOG_ARCHIVE_DUPLEX_DEST=''
# LOG_ARCHIVE_FORMAT=ARC%S.%T
# REMOTE_ARCHIVE_ENABLE=TRUE
# LOG_ARCHIVE_MAX_PROCESSES=2
# STANDBY_FILE_MANAGEMENT=MANUAL
# STANDBY_ARCHIVE_DEST=%ORACLE_HOME%\RDBMS
# FAL_CLIENT=''
# FAL_SERVER=''
# LOG_ARCHIVE_DEST_1='LOCATION=f:\ora9i\RDBMS'
# LOG_ARCHIVE_DEST_1='MANDATORY NOREOPEN NODELAY'
# LOG_ARCHIVE_DEST_1='ARCH NOAFFIRM SYNC'
# LOG_ARCHIVE_DEST_1='NOREGISTER NOALTERNATE NODEPENDENCY'
# LOG_ARCHIVE_DEST_1='NOMAX_FAILURE NOQUOTA_SIZE NOQUOTA_USED'
# LOG_ARCHIVE_DEST_STATE_1=ENABLE
# Below are two sets of SQL statements, each of which creates a new
# control file and uses it to open the database. The first set opens
# the database with the NORESETLOGS option and should be used only if
# the current versions of all online logs are available. The second
# set opens the database with the RESETLOGS option and should be used
# if online logs are unavailable.
# The appropriate set of statements can be copied from the trace into
# a script file, edited as necessary, and executed when there is a
# need to re-create the control file.
# Set #1. NORESETLOGS case
# The following commands will create a new control file and use it
# to open the database.
# Data used by the recovery manager will be lost. Additional logs may
# be required for media recovery of offline data files. Use this
# only if the current version of all online logs are available.
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "COPY1" NORESETLOGS NOARCHIVELOG
-- SET STANDBY TO MAXIMIZE PERFORMANCE
MAXLOGFILES 50
MAXLOGMEMBERS 5
MAXDATAFILES 100
MAXINSTANCES 1
MAXLOGHISTORY 226
LOGFILE
GROUP 1 'C:\COPY1\COPY1\REDO01.LOG' SIZE 10M,
GROUP 2 'C:\COPY1\COPY1\REDO02.LOG' SIZE 10M,
GROUP 3 'C:\COPY1\COPY1\REDO03.LOG' SIZE 10M
-- STANDBY LOGFILE
DATAFILE
'C:\COPY1\COPY1\SYSTEM01.DBF',
'C:\COPY1\COPY1\UNDOTBS01.DBF',
'C:\COPY1\COPY1\CWMLITE01.DBF',
'C:\COPY1\COPY1\DRSYS01.DBF',
'C:\COPY1\COPY1\EXAMPLE01.DBF',
'C:\COPY1\COPY1\INDX01.DBF',
'C:\COPY1\COPY1\ODM01.DBF',
'C:\COPY1\COPY1\TOOLS01.DBF',
'C:\COPY1\COPY1\USERS01.DBF',
'C:\COPY1\COPY1\XDB01.DBF'
CHARACTER SET WE8ISO8859P1
# Recovery is required if any of the datafiles are restored backups,
# or if the last shutdown was not normal or immediate.
RECOVER DATABASE
# Database can now be opened normally.
ALTER DATABASE OPEN;
# Commands to add tempfiles to temporary tablespaces.
# Online tempfiles have complete space information.
# Other tempfiles may require adjustment.
ALTER TABLESPACE TEMP ADD TEMPFILE 'C:\COPY1\COPY1\TEMP01.DBF'
SIZE 41943040 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;
# End of tempfile additions.
# Set #2. RESETLOGS case
# The following commands will create a new control file and use it
# to open the database.
# The contents of online logs will be lost and all backups will
# be invalidated. Use this only if online logs are damaged.
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "COPY1" RESETLOGS NOARCHIVELOG
-- SET STANDBY TO MAXIMIZE PERFORMANCE
MAXLOGFILES 50
MAXLOGMEMBERS 5
MAXDATAFILES 100
MAXINSTANCES 1
MAXLOGHISTORY 226
LOGFILE
GROUP 1 'C:\COPY1\COPY1\REDO01.LOG' SIZE 10M,
GROUP 2 'C:\COPY1\COPY1\REDO02.LOG' SIZE 10M,
GROUP 3 'C:\COPY1\COPY1\REDO03.LOG' SIZE 10M
-- STANDBY LOGFILE
DATAFILE
'C:\COPY1\COPY1\SYSTEM01.DBF',
'C:\COPY1\COPY1\UNDOTBS01.DBF',
'C:\COPY1\COPY1\CWMLITE01.DBF',
'C:\COPY1\COPY1\DRSYS01.DBF',
'C:\COPY1\COPY1\EXAMPLE01.DBF',
'C:\COPY1\COPY1\INDX01.DBF',
'C:\COPY1\COPY1\ODM01.DBF',
'C:\COPY1\COPY1\TOOLS01.DBF',
'C:\COPY1\COPY1\USERS01.DBF',
'C:\COPY1\COPY1\XDB01.DBF'
CHARACTER SET WE8ISO8859P1
# Recovery is required if any of the datafiles are restored backups,
# or if the last shutdown was not normal or immediate.
RECOVER DATABASE USING BACKUP CONTROLFILE
# Database can now be opened zeroing the online logs.
ALTER DATABASE OPEN RESETLOGS;
# Commands to add tempfiles to temporary tablespaces.
# Online tempfiles have complete space information.
# Other tempfiles may require adjustment.
ALTER TABLESPACE TEMP ADD TEMPFILE 'C:\COPY1\COPY1\TEMP01.DBF'
SIZE 41943040 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;
# End of tempfile additions.
As you can see, there are different ways to recreate the control file. In our case, we are going to recreate the control file as follows:
STARTUP NOMOUNT
CREATE CONTROLFILE SET DATABASE "COPY2" RESETLOGS NOARCHIVELOG
-- SET STANDBY TO MAXIMIZE PERFORMANCE
MAXLOGFILES 50
MAXLOGMEMBERS 5
MAXDATAFILES 100
MAXINSTANCES 1
MAXLOGHISTORY 226
LOGFILE
GROUP 1 'C:\COPY2\COPY2\REDO01.LOG' SIZE 10M,
GROUP 2 'C:\COPY2\COPY2\REDO02.LOG' SIZE 10M,
GROUP 3 'C:\COPY2\COPY2\REDO03.LOG' SIZE 10M
-- STANDBY LOGFILE
DATAFILE
'C:\COPY2\COPY2\SYSTEM01.DBF',
'C:\COPY2\COPY2\UNDOTBS01.DBF',
'C:\COPY2\COPY2\CWMLITE01.DBF',
'C:\COPY2\COPY2\DRSYS01.DBF',
'C:\COPY2\COPY2\EXAMPLE01.DBF',
'C:\COPY2\COPY2\INDX01.DBF',
'C:\COPY2\COPY2\ODM01.DBF',
'C:\COPY2\COPY2\TOOLS01.DBF',
'C:\COPY2\COPY2\USERS01.DBF',
'C:\COPY2\COPY2\XDB01.DBF'
CHARACTER SET WE8ISO8859P1
Note: two important things to note in the statement above are the word "SET" instead of "REUSE", and that the control file must be recreated with RESETLOGS because the database must be opened with RESETLOGS.
If you use the word "REUSE" instead of "SET", opening the database will request recovery of the datafile of the SYSTEM tablespace.
So, apply the following to recreate the control file:
- Start the Windows service for the database COPY2
- Connect through SQL*Plus as SYS
- Shut down the database with SHUTDOWN ABORT
- Start the database in NOMOUNT stage
- Run the statement to recreate the control file.
C:\>SET ORACLE_SID=COPY2
C:\>sqlplus /nolog
SQL*Plus: Release 9.2.0.1.0 - Production on Thu May 20 16:46:49 2004
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
SQL> conn sys as sysdba
Enter password:
Connected.
SQL>
SQL> shutdown abort
ORACLE instance shut down.
SQL>
SQL>
SQL> startup nomount
ORACLE instance started.
Total System Global Area 135338868 bytes
Fixed Size 453492 bytes
Variable Size 109051904 bytes
Database Buffers 25165824 bytes
Redo Buffers 667648 bytes
SQL>
SQL>
SQL> CREATE CONTROLFILE SET DATABASE "COPY2" RESETLOGS NOARCHIVELOG
2 -- SET STANDBY TO MAXIMIZE PERFORMANCE
3 MAXLOGFILES 50
4 MAXLOGMEMBERS 5
5 MAXDATAFILES 100
6 MAXINSTANCES 1
7 MAXLOGHISTORY 226
8 LOGFILE
9 GROUP 1 'F:\COPY2\COPY2\REDO01.LOG' SIZE 10M,
10 GROUP 2 'F:\COPY2\COPY2\REDO02.LOG' SIZE 10M,
11 GROUP 3 'F:\COPY2\COPY2\REDO03.LOG' SIZE 10M
12 -- STANDBY LOGFILE
13 DATAFILE
14 'F:\COPY2\COPY2\SYSTEM01.DBF',
15 'F:\COPY2\COPY2\UNDOTBS01.DBF',
16 'F:\COPY2\COPY2\CWMLITE01.DBF',
17 'F:\COPY2\COPY2\DRSYS01.DBF',
18 'F:\COPY2\COPY2\EXAMPLE01.DBF',
19 'F:\COPY2\COPY2\INDX01.DBF',
20 'F:\COPY2\COPY2\ODM01.DBF',
21 'F:\COPY2\COPY2\TOOLS01.DBF',
22 'F:\COPY2\COPY2\USERS01.DBF',
23 'F:\COPY2\COPY2\XDB01.DBF'
24 CHARACTER SET WE8ISO8859P1
25 ;
Control file created.
SQL>
Joel Pérez
http://otn.oracle.com/experts -
Help needed with hash_area_size setting for a data warehouse environment
We have an Oracle 10g data warehouse environment running on a 3-node RAC
with 16 GB RAM and 4 CPUs per node, roughly 200 users, and night jobs running on this DW.
We find that the query performance of all ETL processes and joins is quite slow.
How much should we increase the value of the hash_area_size parameter for this data warehouse environment? This is a production database, Oracle Database 10g Enterprise Edition Release 10.1.0.5.0.
We use the OWB 10g tool for this DW, and we need to change hash_area_size to increase the performance of the ETL processes.
The Oracle init parameter settings used are shown below:
Kindly suggest.
Thanks & best regards,
===========================================================
ORBIT
__db_cache_size 1073741824
__java_pool_size 67108864
__large_pool_size 318767104
__shared_pool_size 1744830464
_optimizer_cost_based_transformation OFF
active_instance_count
aq_tm_processes 1
archive_lag_target 0
asm_diskgroups
asm_diskstring
asm_power_limit 1
audit_file_dest /dboracle/orabase/product/10.1.0/rdbms/audit
audit_sys_operations FALSE
audit_trail NONE
background_core_dump partial
background_dump_dest /dborafiles/orbit/ORBIT01/admin/bdump
backup_tape_io_slaves TRUE
bitmap_merge_area_size 1048576
blank_trimming FALSE
buffer_pool_keep
buffer_pool_recycle
circuits
cluster_database TRUE
cluster_database_instances 3
cluster_interconnects
commit_point_strength 1
compatible 10.1.0
control_file_record_keep_time 90
control_files #NAME?
core_dump_dest /dborafiles/orbit/ORBIT01/admin/cdump
cpu_count 4
create_bitmap_area_size 8388608
create_stored_outlines
cursor_sharing EXACT
cursor_space_for_time FALSE
db_16k_cache_size 0
db_2k_cache_size 0
db_32k_cache_size 0
db_4k_cache_size 0
db_8k_cache_size 0
db_block_buffers 0
db_block_checking FALSE
db_block_checksum TRUE
db_block_size 8192
db_cache_advice ON
db_cache_size 1073741824
db_create_file_dest #NAME?
db_create_online_log_dest_1 #NAME?
db_create_online_log_dest_2 #NAME?
db_create_online_log_dest_3
db_create_online_log_dest_4
db_create_online_log_dest_5
db_domain
db_file_multiblock_read_count 64
db_file_name_convert
db_files 999
db_flashback_retention_target 1440
db_keep_cache_size 0
db_name ORBIT
db_recovery_file_dest #NAME?
db_recovery_file_dest_size 2.62144E+11
db_recycle_cache_size 0
db_unique_name ORBIT
db_writer_processes 1
dbwr_io_slaves 0
ddl_wait_for_locks FALSE
dg_broker_config_file1 /dboracle/orabase/product/10.1.0/dbs/dr1ORBIT.dat
dg_broker_config_file2 /dboracle/orabase/product/10.1.0/dbs/dr2ORBIT.dat
dg_broker_start FALSE
disk_asynch_io TRUE
dispatchers
distributed_lock_timeout 60
dml_locks 9700
drs_start FALSE
enqueue_resources 10719
event
fal_client
fal_server
fast_start_io_target 0
fast_start_mttr_target 0
fast_start_parallel_rollback LOW
file_mapping FALSE
fileio_network_adapters
filesystemio_options asynch
fixed_date
gc_files_to_locks
gcs_server_processes 2
global_context_pool_size
global_names FALSE
hash_area_size 131072
hi_shared_memory_address 0
hpux_sched_noage 0
hs_autoregister TRUE
ifile
instance_groups
instance_name ORBIT01
instance_number 1
instance_type RDBMS
java_max_sessionspace_size 0
java_pool_size 67108864
java_soft_sessionspace_limit 0
job_queue_processes 10
large_pool_size 318767104
ldap_directory_access NONE
license_max_sessions 0
license_max_users 0
license_sessions_warning 0
local_listener
lock_name_space
lock_sga FALSE
log_archive_config
log_archive_dest
log_archive_dest_1 LOCATION=+ORBT_A06635_DATA1_ASM/ORBIT/ARCHIVELOG/
log_archive_dest_10
log_archive_dest_2
log_archive_dest_3
log_archive_dest_4
log_archive_dest_5
log_archive_dest_6
log_archive_dest_7
log_archive_dest_8
log_archive_dest_9
log_archive_dest_state_1 enable
log_archive_dest_state_10 enable
log_archive_dest_state_2 enable
log_archive_dest_state_3 enable
log_archive_dest_state_4 enable
log_archive_dest_state_5 enable
log_archive_dest_state_6 enable
log_archive_dest_state_7 enable
log_archive_dest_state_8 enable
log_archive_dest_state_9 enable
log_archive_duplex_dest
log_archive_format %t_%s_%r.arc
log_archive_local_first TRUE
log_archive_max_processes 2
log_archive_min_succeed_dest 1
log_archive_start FALSE
log_archive_trace 0
log_buffer 1167360
log_checkpoint_interval 0
log_checkpoint_timeout 1800
log_checkpoints_to_alert FALSE
log_file_name_convert
logmnr_max_persistent_sessions 1
max_commit_propagation_delay 700
max_dispatchers
max_dump_file_size UNLIMITED
max_enabled_roles 150
max_shared_servers
nls_calendar
nls_comp
nls_currency #
nls_date_format DD-MON-RRRR
nls_date_language ENGLISH
nls_dual_currency ?
nls_iso_currency UNITED KINGDOM
nls_language ENGLISH
nls_length_semantics BYTE
nls_nchar_conv_excp FALSE
nls_numeric_characters
nls_sort
nls_territory UNITED KINGDOM
nls_time_format HH24.MI.SSXFF
nls_time_tz_format HH24.MI.SSXFF TZR
nls_timestamp_format DD-MON-RR HH24.MI.SSXFF
nls_timestamp_tz_format DD-MON-RR HH24.MI.SSXFF TZR
O7_DICTIONARY_ACCESSIBILITY FALSE
object_cache_max_size_percent 10
object_cache_optimal_size 102400
olap_page_pool_size 0
open_cursors 1024
open_links 4
open_links_per_instance 4
optimizer_dynamic_sampling 2
optimizer_features_enable 10.1.0.5
optimizer_index_caching 0
optimizer_index_cost_adj 100
optimizer_mode ALL_ROWS
os_authent_prefix ops$
os_roles FALSE
parallel_adaptive_multi_user TRUE
parallel_automatic_tuning TRUE
parallel_execution_message_size 4096
parallel_instance_group
parallel_max_servers 80
parallel_min_percent 0
parallel_min_servers 0
parallel_server TRUE
parallel_server_instances 3
parallel_threads_per_cpu 2
pga_aggregate_target 8589934592
plsql_code_type INTERPRETED
plsql_compiler_flags INTERPRETED
plsql_debug FALSE
plsql_native_library_dir
plsql_native_library_subdir_count 0
plsql_optimize_level 2
plsql_v2_compatibility FALSE
plsql_warnings DISABLE:ALL
pre_page_sga FALSE
processes 600
query_rewrite_enabled TRUE
query_rewrite_integrity enforced
rdbms_server_dn
read_only_open_delayed FALSE
recovery_parallelism 0
remote_archive_enable TRUE
remote_dependencies_mode TIMESTAMP
remote_listener
remote_login_passwordfile EXCLUSIVE
remote_os_authent FALSE
remote_os_roles FALSE
replication_dependency_tracking TRUE
resource_limit FALSE
resource_manager_plan
resumable_timeout 0
rollback_segments
serial_reuse disable
service_names ORBIT
session_cached_cursors 0
session_max_open_files 10
sessions 2205
sga_max_size 3221225472
sga_target 3221225472
shadow_core_dump partial
shared_memory_address 0
shared_pool_reserved_size 102760448
shared_pool_size 318767104
shared_server_sessions
shared_servers 0
skip_unusable_indexes TRUE
smtp_out_server
sort_area_retained_size 0
sort_area_size 65536
sp_name ORBIT
spfile #NAME?
sql_trace FALSE
sql_version NATIVE
sql92_security FALSE
sqltune_category DEFAULT
standby_archive_dest ?/dbs/arch
standby_file_management MANUAL
star_transformation_enabled TRUE
statistics_level TYPICAL
streams_pool_size 0
tape_asynch_io TRUE
thread 1
timed_os_statistics 0
timed_statistics TRUE
trace_enabled TRUE
tracefile_identifier
transactions 2425
transactions_per_rollback_segment 5
undo_management AUTO
undo_retention 7200
undo_tablespace UNDOTBS1
use_indirect_data_buffers FALSE
user_dump_dest /dborafiles/orbit/ORBIT01/admin/udump
utl_file_dir /orbit_serial/oracle/utl_out
workarea_size_policy AUTO
The parameters are already unset in the environment, but they do show up in v$parameter, much like shared_pool_size is visible in v$parameter despite only sga_target being set.
SQL> show parameter sort
NAME TYPE VALUE
_sort_elimination_cost_ratio integer 5
nls_sort string binary
sort_area_retained_size integer 0
sort_area_size integer 65536
SQL> show parameter hash
NAME TYPE VALUE
hash_area_size integer 131072
SQL> exit
hash_area_size and sort_area_size should only be set when not using automatic PGA memory management (workarea_size_policy = AUTO); manual workarea management is not supported for EBS databases.
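To confirm which work-area regime is actually in effect, query v$parameter; a quick sketch (with workarea_size_policy = AUTO, the *_area_size parameters are ignored for dedicated sessions and pga_aggregate_target governs the work areas):

```sql
SELECT name, value
FROM   v$parameter
WHERE  name IN ('workarea_size_policy', 'pga_aggregate_target',
                'hash_area_size', 'sort_area_size');

-- Under AUTO policy, raise pga_aggregate_target instead of hash_area_size, e.g.:
-- ALTER SYSTEM SET pga_aggregate_target = 10G SCOPE = BOTH;
```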
Database Initialization Parameters for Oracle Applications 11i
http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=216205.1 -
I installed two portals and one infrastructure,
and configured multiple middle tiers with a load-balancing router successfully.
The original portal version is 10.1.2.
Now I want to upgrade to 10.1.4, but something went wrong after I entered the upgrade command.
The error message is java.sql.SQLException: Io exception: Connection reset.
Then I ran the upgrade command again and got a different error message:
### ERROR: OracleAS Portal 10.1.4 upgrade precheck failed. See /raid/product/OraHome_1/upgrade/temp/portal/precheck.log for details.
Error: Component upgrade failed PORTAL
Error: PORTAL component version is: 10.1.2.0.2 INVALID
FAILURE: Some OracleAS plug-ins report failure during upgrade.
The Portal upgrade precheck log follows:
-- Portal Upgrade release information: 10.1.4 Release 1
Upgrade Started in -precheck -force mode at Wed Oct 18 21:22:19 2006
### PHASE 1: Initial setup
Existing temporary directory /raid/product/OraHome_1/upgrade/temp/portal/prechktmp renamed to /raid/product/OraHome_1/upgrade/temp/portal/prechktmp.Wed-Oct-18-21.14.10-2006
Existing log file /raid/product/OraHome_1/upgrade/temp/portal/precheck.log renamed to /raid/product/OraHome_1/upgrade/temp/portal/precheck.log.Wed-Oct-18-21.14.10-2006
Creating /raid/product/OraHome_1/upgrade/temp/portal/prechktmp directory
Creating /raid/product/OraHome_1/upgrade/temp/portal/prechktmp/gen directory
Welcome to the Oracle Portal Production Upgrade
The script will lead you through the upgrade step by step.
For questions asked in this script that have appropriate defaults
those defaults will be shown in square brackets after the question.
To accept a default value, simply hit the Return key.
### Set New Variables and Validate Environment Variables
Step started at Wed Oct 18 21:22:19 2006
PERL5LIB set to ../../../perl/lib/site_perl/5.6.1/i686-linux:../../../perl/lib:../../../perl/lib/5.6.1
Check SQL*Plus version
Running upg/frwk/upchkpls.sql
### Log shared and environment variables
Step started at Wed Oct 18 21:22:19 2006
Log file: /raid/product/OraHome_1/upgrade/temp/portal/precheck.log
Log dir: /raid/product/OraHome_1/upgrade/temp/portal/prechktmp
Profile dir: /raid/product/OraHome_1/upgrade/temp/portal/prechktmp/gen
Verbose flag: 0
Debug mode: 0
Force flag: 1
Nosave flag: 0
Save flag: 1
Repos flag: 0
Compile flag: 0
Oldver flag: 0
isPatch flag: 0
Will save Tables: 1
Environment variables:
===========================================================
DISPLAY: :0
G_BROKEN_FILENAMES: 1
HISTSIZE: 1000
HOME: /home/oracle
HOSTNAME: portal1.bizmatch.com.cn
IBPATH: /usr/bin
INPUTRC: /etc/inputrc
KDEDIR: /usr
LANG: en_US.UTF-8
LC_CTYPE: en_US.UTF-8
LD_ASSUME_KERNEL: 2.4.19
LD_LIBRARY_PATH: /raid/product/OraHome_1/lib32:/raid/product/OraHome_1/lib:/raid/tmp/jdk/jre/lib/i386/client:/raid/tmp/jdk/jre/lib/i386:/raid/tmp/jdk/jre/../lib/i386:/raid/product/OraHome_1/lib32:/raid/product/OraHome_1/network/lib32:/raid/product/OraHome_1/lib:/raid/product/OraHome_1/network/lib:/raid/product/OraHome_1/lib:/usr/lib:/usr/local/lib
LD_LIBRARY_PATH_64: /raid/product/OraHome_1/lib:/raid/product/OraHome_1/lib32:/raid/product/OraHome_1/network/lib32:/raid/product/OraHome_1/lib:/raid/product/OraHome_1/network/lib:
LESSOPEN: |/usr/bin/lesspipe.sh %s
LIBPATH: /raid/product/OraHome_1/lib32:/raid/product/OraHome_1/lib
LOGNAME: oracle
LS_COLORS: no=00:fi=00:di=00;34:ln=00;36:pi=40;33:so=00;35:bd=40;33;01:cd=40;33;01:or=01;05;37;41:mi=01;05;37;41:ex=00;32:*.cmd=00;32:*.exe=00;32:*.com=00;32:*.btm=00;32:*.bat=00;32:*.sh=00;32:*.csh=00;32:*.tar=00;31:*.tgz=00;31:*.arj=00;31:*.taz=00;31:*.lzh=00;31:*.zip=00;31:*.z=00;31:*.Z=00;31:*.gz=00;31:*.bz2=00;31:*.bz=00;31:*.tz=00;31:*.rpm=00;31:*.cpio=00;31:*.jpg=00;35:*.gif=00;35:*.bmp=00;35:*.xbm=00;35:*.xpm=00;35:*.png=00;35:*.tif=00;35:
MAIL: /var/spool/mail/oracle
NLSPATH: /usr/dt/lib/nls/msg/%L/%N.cat
ORACLE_BASE: /raid/product
ORACLE_HOME: /raid/product/OraHome_1
ORACLE_SID:
PATH: /raid/product/OraHome_1/bin:../../../perl/bin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/usr/X11R6/bin:/home/oracle/bin:/raid/product/OraHome_1/bin:/usr/local/sbin:/usr/bin/X11:/usr/X11R6/bin:/home/oracle/bin:/bin:/sbin:/usr/bin
PERL5LIB: ../../../perl/lib/site_perl/5.6.1/i686-linux:../../../perl/lib:../../../perl/lib/5.6.1
PWD: /raid/tmp/mrua
QTDIR: /usr/lib/qt-3.3
REPCA_ORACLE_HOME: /raid/tmp
SHELL: /bin/bash
SHLIB_PATH: /raid/product/OraHome_1/lib32:/raid/product/OraHome_1/lib
SHLVL: 3
SQLPATH: .:owa:/raid/product/OraHome_1/upgrade/temp/portal/prechktmp/gen:upg/frwk:sql:wwc
SSH_ASKPASS: /usr/libexec/openssh/gnome-ssh-askpass
TERM: xterm
USER: oracle
XAUTHORITY: /root/.Xauthority
XFILESEARCHPATH: /usr/dt/app-defaults/%L/Dt
_: /raid/tmp/jdk/bin/java
### PHASE 2: User inputs
Upgrade phase started at Wed Oct 18 21:22:19 2006
Processing Metadata File: upg/common/inputchk/inputchk.met
Running upg/common/inputchk/inputchk.pl
### Verify that the database has been backed up
Step started at Wed Oct 18 21:22:19 2006
Before beginning the upgrade, it is important that you backup your database.
Have you backed up your database (y/n)? [y]: y
Ask user for schema and database details
Enter the name of schema that you would like to upgrade [PORTAL]: portal
Enter the password for the schema that you would like to upgrade [portal]:
Enter the password for the SYS user of your database [CHANGE_ON_INSTALL]:
Enter the TNS connect string to connect to the database [ORCL]: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=portaldb.bizmatch.com.cn)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=IASDB.bizmatch.com.cn)))
Responses to the above questions will now be recorded in the file
upgrade.in. Placeholders are recorded instead of actual passwords,
for security reasons. If you wish, this file can be edited and used
as the standard input for a subsequent run of upgrade.pl.
### Verify database connection information.
Step started at Wed Oct 18 21:22:19 2006
Validating the connection information supplied by the user
Running CheckConnections()
Check connection to the Portal repository.
Check connection as SYS to the Portal repository.
Ending CheckConnections() Wed Oct 18 21:22:19 2006
### PHASE 3: Setup
Upgrade phase started at Wed Oct 18 21:22:19 2006
Processing Metadata File: upg/common/setup/setup.met
Running upg/common/setup/setup.pl
Analyzing the product schema
Running upg/common/setup/upgettbs.sql
Portal SQL script started at Wed Oct 18 21:22:19 2006
Connected.
### Install messaging framework
Step started at Wed Oct 18 21:22:19 2006
Portal SQL script started at Wed Oct 18 21:22:19 2006
Connected.
No errors.
No errors.
Creating sequence 'wwpof_output_id_seq'
Creating sequence 'wwpof_output_script_run_id_seq'
Creating table 'wwpof_output$'
Creating table 'wwpof_msg$'
Creating index 'wwpof_output_idx1' in tablespace PORTAL
Creating index 'wwpof_output_idx2' in tablespace PORTAL
Creating index 'wwpof_output_idx3' in tablespace PORTAL
Creating index 'wwpof_msg_uk1' in tablespace PORTAL
No errors.
No errors.
Granting privileges on POF objects to SYS
Loading /raid/product/OraHome_1/upgrade/temp/portal/prechktmp/upgus.ctl using sqlldr
Copying scripts to de-install message objects.
Get Portal version and determine upgrade sequence
Running upg/common/setup/upgetver.sql
Portal SQL script started at Wed Oct 18 21:22:22 2006
Connected.
Upgrading to version 10.1.4.0.0
Version directories to be traversed: upg/10140
Running upg/common/setup/setseq.pl
Set the correct Traversal Sequence
### PHASE 4: Pre upgrade checks
Upgrade phase started at Wed Oct 18 21:22:22 2006
Processing Metadata File: upg/common/prechk/prechk.met
Running upg/common/prechk/prechk.pl
Set up subscriber iteration
Running upg/common/prechk/upgetsub.sql
Portal SQL script started at Wed Oct 18 21:22:22 2006
Connected.
### Perform pre upgrade checks
Step started at Wed Oct 18 21:22:22 2006
Running upg/frwk/utlchvpd.sql
Portal SQL script started at Wed Oct 18 21:22:22 2006
Connected.
Calling DoPreChecks()
Starting precheck at Wed Oct 18 21:22:23 2006
Calling upg/common/prechk/sysuppre.sql
Connected.
Running upg/common/prechk/upgtabs.sql
Portal SQL script started at Wed Oct 18 21:22:23 2006
Connected.
### ERROR: WWU-00013: Tables with UPG_ prefix were found in the OracleAS Portal
### schema.
### Table Name
### UPG_PTL_OBJECTS$
### UPG_PTL_TABLES$
### UPG_WWV_DOCINFO
### CAUSE: The upgrade is terminated when UPG_ prefix tables are present in the
### OracleAS Portal schema.
### ACTION: Back up all tables with the UPG_ prefix, then delete them from the
### OracleAS Portal schema. The script
### /raid/product/OraHome_1/upgrade/temp/portal/prechktmp/dropupg.sql can
### be used for this purpose.
### Check Failed at Wed Oct 18 21:22:23 2006 Continuing as PreCheck mode is specified
Calling upg/common/prechk/wwvcheck.sql
Portal SQL script started at Wed Oct 18 21:22:23 2006
Connected.
# Beginning outer script: prechk/wwvcheck
# Check for invalid Portlet Builder (webview) components.
# Checking if there are too many archive components.
# Checking for missing application schemas.
# Ending outer script: prechk/wwvcheck, 0.31 seconds
Ending precheck at Wed Oct 18 21:22:24 2006
Running upg/common/prechk/upchkobj.sql
Portal SQL script started at Wed Oct 18 21:22:24 2006
Connected.
Running upg/common/prechk/chkmrreg.sql
Portal SQL script started at Wed Oct 18 21:22:24 2006
Connected.
# Beginning outer script: prechk/chkmrreg
# Pre-check to determine that OracleAS Portal is registered with OracleAS Internet Directory
# OracleAS Portal has been wired with OracleAS Internet Directory
# Ending outer script: prechk/chkmrreg, 0.10 seconds
### Connect to OID as Application Entry
Running upg/common/prechk/bindapp.sql . Portal SQL script started at Wed Oct 18 21:22:24 2006
Connected.
# Beginning outer script: prechk/bindapp
#-- Beginning inner script: prechk/bindapp
# Pre-check to test bind to OracleAS Internet Directory Server
# Connecting to OracleAS Internet Directory as the Application Entry
# Connecting to OracleAS Internet Directory as the Application Entry was successful
#-- Ending inner script: prechk/bindapp, 0.14 seconds
# Ending outer script: prechk/bindapp, 0.20 seconds
### Display Tablespace and Parameter Settings
Running upg/common/prechk/../../frwk/upshoset.sql .
Subscriber independent Processing.
Portal SQL script started at Wed Oct 18 21:22:24 2006
Connected.
# Beginning outer script: prechk/upshoset
Tablespace Usage
TABLESPACE BYTES_USED BYTES_FREE TOTAL BYTES CREATE_BYTES AUT FILE STAT STATUS ENABLED BLOCKS BLOCK_SIZE FILE_NAME
B2B_DT 63766528 20054016 83886080 0 YES AVAILABLE ONLINE READ WRITE 10240 8192 /raid/product/oradata/IASDB/b2b_dt.dbf
B2B_IDX 14942208 26935296 41943040 0 YES AVAILABLE ONLINE READ WRITE 5120 8192 /raid/product/oradata/IASDB/b2b_idx.dbf
B2B_LOB 11141120 30736384 41943040 0 YES AVAILABLE ONLINE READ WRITE 5120 8192 /raid/product/oradata/IASDB/b2b_lob.dbf
B2B_RT 39780352 12582912 52428800 0 YES AVAILABLE ONLINE READ WRITE 6400 8192 /raid/product/oradata/IASDB/b2b_rt.dbf
BAM 6553600 3866624 10485760 0 YES AVAILABLE ONLINE READ WRITE 1280 8192 /raid/product/oradata/IASDB/bam.dbf
DCM 237174784 20709376 257949696 0 YES AVAILABLE ONLINE READ WRITE 31488 8192 /raid/product/oradata/IASDB/dcm.dbf
DISCO_PTM5_CACHE 1310720 1769472 3145728 0 YES AVAILABLE ONLINE READ WRITE 384 8192 /raid/product/oradata/IASDB/discopltc1.dbf
DISCO_PTM5_META 1310720 1769472 3145728 0 YES AVAILABLE ONLINE READ WRITE 384 8192 /raid/product/oradata/IASDB/discopltm1.dbf
DSGATEWAY_TAB 5701632 1572864 7340032 0 YES AVAILABLE ONLINE READ WRITE 896 8192 /raid/product/oradata/IASDB/oss_sys01.dbf
IAS_META 210567168 30539776 241172480 0 YES AVAILABLE ONLINE READ WRITE 29440 8192 /raid/product/oradata/IASDB/ias_meta01.dbf
OCATS 1769472 5505024 7340032 0 YES AVAILABLE ONLINE READ WRITE 896 8192 /raid/product/oradata/IASDB/oca.dbf
OLTS_ATTRSTORE 2555904 917504 3538944 0 YES AVAILABLE ONLINE READ WRITE 432 8192 /raid/product/oradata/IASDB/attrs1_oid.dbf
OLTS_BATTRSTORE 262144 131072 516096 0 YES AVAILABLE ONLINE READ WRITE 63 8192 /raid/product/oradata/IASDB/battrs1_oid.dbf
OLTS_DEFAULT 3997696 851968 4915200 0 YES AVAILABLE ONLINE READ WRITE 600 8192 /raid/product/oradata/IASDB/gdefault1_oid.dbf
ORABPEL 11993088 29884416 41943040 0 YES AVAILABLE ONLINE READ WRITE 5120 8192 /raid/product/oradata/IASDB/orabpel.dbf
PORTAL 74383360 6946816 78643200 0 YES AVAILABLE ONLINE READ WRITE 9600 8192 /raid/product/oradata/IASDB/portal.dbf
PORTAL_DOC 851968 3276800 4194304 0 YES AVAILABLE ONLINE READ WRITE 512 8192 /raid/product/oradata/IASDB/ptldoc.dbf
PORTAL_IDX 11206656 41156608 52428800 0 YES AVAILABLE ONLINE READ WRITE 6400 8192 /raid/product/oradata/IASDB/ptlidx.dbf
PORTAL_LOG 262144 3866624 4194304 0 YES AVAILABLE ONLINE READ WRITE 512 8192 /raid/product/oradata/IASDB/ptllog.dbf
SYSAUX 235732992 5373952 241172480 0 YES AVAILABLE ONLINE READ WRITE 29440 8192 /raid/product/oradata/IASDB/sysaux01.dbf
SYSTEM 834404352 4390912 838860800 0 YES AVAILABLE SYSTEM READ WRITE 102400 8192 /raid/product/oradata/IASDB/system01.dbf
TEMP 6291456 18874368 25165824 25165824 YES AVAILABLE ONLINE READ WRITE 3072 8192 /raid/product/oradata/IASDB/temp01.dbf
UDDISYS_TS 19988480 28180480 48234496 0 YES AVAILABLE ONLINE READ WRITE 5888 8192 /raid/product/oradata/IASDB/uddisys01.dbf
UNDOTBS1 247201792 4390912 251658240 0 YES AVAILABLE ONLINE READ WRITE 30720 8192 /raid/product/oradata/IASDB/undotbs01.dbf
USERS 327680 4849664 5242880 0 YES AVAILABLE ONLINE READ WRITE 640 8192 /raid/product/oradata/IASDB/users01.dbf
WCRSYS_TS 1703936 15007744 16777216 0 YES AVAILABLE ONLINE READ WRITE 2048 8192 /raid/product/oradata/IASDB/wcrsys01.dbf
Sort Segment Data
TABLESPACE EXTENT_SIZE TOTAL_EXTENTS USED_EXTENTS FREE_EXTENTS MAX_USED_SIZE
TEMP 128 5 0 5 1
SGA Allocation Stats
POOL NAME BYTES
java pool free memory 67108864
Total 67108864
SGA Allocation Stats
POOL NAME BYTES
large pool free memory 8388608
Total 8388608
SGA Allocation Stats
POOL NAME BYTES
shared pool fixed allocation callback 344
shared pool pl/sql source 1156
shared pool table definiti 1712
shared pool alert threshol 2648
shared pool trigger inform 3048
shared pool joxs heap 4220
shared pool policy hash ta 4220
shared pool trigger defini 5980
shared pool KQR S SO 7176
shared pool PLS non-lib hp 12208
shared pool trigger source 18652
shared pool KQR L SO 44032
shared pool repository 76264
shared pool KQR M SO 81408
shared pool parameters 105696
shared pool type object de 194164
shared pool KQR S PO 207136
shared pool VIRTUAL CIRCUITS 649340
shared pool FileOpenBlock 746704
shared pool kmgsb circular statistics 821248
shared pool KSXR pending messages que 841036
shared pool KSXR receive buffers 1032500
shared pool KQR M PO 1675892
shared pool sessions 1835204
shared pool PL/SQL DIANA 2910560
shared pool private strands 2928640
shared pool KTI-UNDO 3019632
shared pool KGLS heap 3105320
shared pool PL/SQL MPCODE 3705484
shared pool row cache 3707272
shared pool ASH buffers 4194304
shared pool event statistics per sess 9094400
shared pool sql area 9234624
shared pool library cache 10361688
shared pool miscellaneous 15820288
shared pool free memory 74540744
Total 150994944
SGA Allocation Stats
POOL NAME BYTES
log_buffer 524288
fixed_sga 778968
buffer_cache 50331648
Total 51634904
Database Parameters
NAME VALUE
O7_DICTIONARY_ACCESSIBILITY FALSE
active_instance_count
aq_tm_processes 1
archive_lag_target 0
asm_diskgroups
asm_diskstring
asm_power_limit 1
audit_file_dest /raid/product/OraHome_1/rdbms/audit
audit_sys_operations FALSE
audit_trail NONE
background_core_dump partial
background_dump_dest /raid/product/admin/IASDB/bdump
backup_tape_io_slaves FALSE
bitmap_merge_area_size 1048576
blank_trimming FALSE
buffer_pool_keep
buffer_pool_recycle
circuits
cluster_database FALSE
cluster_database_instances 1
cluster_interconnects
commit_point_strength 1
compatible 10.1.0.2.0
control_file_record_keep_time 7
control_files /raid/product/oradata/IASDB/control01.ctl, /raid/product/oradata/IASDB/control02.ctl, /raid/product/oradata/IASDB/control03.ctl
core_dump_dest /raid/product/admin/IASDB/cdump
cpu_count 2
create_bitmap_area_size 8388608
create_stored_outlines
cursor_sharing EXACT
cursor_space_for_time FALSE
db_16k_cache_size 0
db_2k_cache_size 0
db_32k_cache_size 0
db_4k_cache_size 0
db_8k_cache_size 0
db_block_buffers 0
db_block_checking FALSE
db_block_checksum TRUE
db_block_size 8192
db_cache_advice ON
db_cache_size 50331648
db_create_file_dest
db_create_online_log_dest_1
db_create_online_log_dest_2
db_create_online_log_dest_3
db_create_online_log_dest_4
db_create_online_log_dest_5
db_domain bizmatch.com.cn
db_file_multiblock_read_count 16
db_file_name_convert
db_files 200
db_flashback_retention_target 1440
db_keep_cache_size 0
db_name IASDB
db_recovery_file_dest /raid/product/flash_recovery_area
db_recovery_file_dest_size 2147483648
db_recycle_cache_size 0
db_unique_name IASDB
db_writer_processes 1
dbwr_io_slaves 0
ddl_wait_for_locks FALSE
dg_broker_config_file1 /raid/product/OraHome_1/dbs/dr1IASDB.dat
dg_broker_config_file2 /raid/product/OraHome_1/dbs/dr2IASDB.dat
dg_broker_start FALSE
disk_asynch_io TRUE
dispatchers (PROTOCOL=TCP)(PRE=oracle.aurora.server.GiopServer), (PROTOCOL=TCP)(PRE=oracle.aurora.server.SGiopServer)
distributed_lock_timeout 60
dml_locks 1760
drs_start FALSE
enqueue_resources 1980
event
fal_client
fal_server
fast_start_io_target 0
fast_start_mttr_target 0
fast_start_parallel_rollback LOW
file_mapping FALSE
fileio_network_adapters
filesystemio_options none
fixed_date
gc_files_to_locks
gcs_server_processes 0
global_context_pool_size
global_names FALSE
hash_area_size 131072
hi_shared_memory_address 0
hs_autoregister TRUE
ifile
instance_groups
instance_name IASDB
instance_number 0
instance_type RDBMS
java_max_sessionspace_size 0
java_pool_size 67108864
java_soft_sessionspace_limit 0
job_queue_processes 5
large_pool_size 8388608
ldap_directory_access NONE
license_max_sessions 0
license_max_users 0
license_sessions_warning 0
local_listener
lock_name_space
lock_sga FALSE
log_archive_config
log_archive_dest
log_archive_dest_1
log_archive_dest_10
log_archive_dest_2
log_archive_dest_3
log_archive_dest_4
log_archive_dest_5
log_archive_dest_6
log_archive_dest_7
log_archive_dest_8
log_archive_dest_9
log_archive_dest_state_1 enable
log_archive_dest_state_10 enable
log_archive_dest_state_2 enable
log_archive_dest_state_3 enable
log_archive_dest_state_4 enable
log_archive_dest_state_5 enable
log_archive_dest_state_6 enable
log_archive_dest_state_7 enable
log_archive_dest_state_8 enable
log_archive_dest_state_9 enable
log_archive_duplex_dest
log_archive_format %t_%s_%r.dbf
log_archive_local_first TRUE
log_archive_max_processes 2
log_archive_min_succeed_dest 1
log_archive_start FALSE
log_archive_trace 0
log_buffer 524288
log_checkpoint_interval 0
log_checkpoint_timeout 1800
log_checkpoints_to_alert FALSE
log_file_name_convert
logmnr_max_persistent_sessions 1
max_commit_propagation_delay 0
max_dispatchers
max_dump_file_size UNLIMITED
max_enabled_roles 150
max_shared_servers
nls_calendar
nls_comp
nls_currency
nls_date_format
nls_date_language
nls_dual_currency
nls_iso_currency
nls_language AMERICAN
nls_length_semantics BYTE
nls_nchar_conv_excp FALSE
nls_numeric_characters
nls_sort
nls_territory AMERICA
nls_time_format
nls_time_tz_format
nls_timestamp_format
nls_timestamp_tz_format
object_cache_max_size_percent 10
object_cache_optimal_size 102400
olap_page_pool_size 0
open_cursors 300
open_links 4
open_links_per_instance 4
optimizer_dynamic_sampling 2
optimizer_features_enable 10.1.0.5
optimizer_index_caching 0
optimizer_index_cost_adj 100
optimizer_mode ALL_ROWS
os_authent_prefix ops$
os_roles FALSE
parallel_adaptive_multi_user TRUE
parallel_automatic_tuning FALSE
parallel_execution_message_size 2148
parallel_instance_group
parallel_max_servers 40
parallel_min_percent 0
parallel_min_servers 0
parallel_server FALSE
parallel_server_instances 1
parallel_threads_per_cpu 2
pga_aggregate_target 33554432
plsql_code_type INTERPRETED
plsql_compiler_flags INTERPRETED, NON_DEBUG
plsql_debug FALSE
plsql_native_library_dir
plsql_native_library_subdir_count 0
plsql_optimize_level 2
plsql_v2_compatibility FALSE
plsql_warnings DISABLE:ALL
pre_page_sga FALSE
processes 150
query_rewrite_enabled TRUE
query_rewrite_integrity enforced
rdbms_server_dn
read_only_open_delayed FALSE
recovery_parallelism 0
remote_archive_enable true
remote_dependencies_mode TIMESTAMP
remote_listener
remote_login_passwordfile EXCLUSIVE
remote_os_authent FALSE
remote_os_roles FALSE
replication_dependency_tracking TRUE
resource_limit FALSE
resource_manager_plan
resumable_timeout 0
rollback_segments
serial_reuse disable
service_names IASDB.bizmatch.com.cn
session_cached_cursors 0
session_max_open_files 10
sessions 400
sga_max_size 281018368
sga_target 0
shadow_core_dump partial
shared_memory_address 0
shared_pool_reserved_size 7549747
shared_pool_size 150994944
shared_server_sessions
shared_servers 1
skip_unusable_indexes TRUE
smtp_out_server
sort_area_retained_size 0
sort_area_size 65536
sp_name IASDB
spfile /raid/product/OraHome_1/dbs/spfileIASDB.ora
sql92_security FALSE
sql_trace FALSE
sql_version NATIVE
sqltune_category DEFAULT
standby_archive_dest ?/dbs/arch
standby_file_management MANUAL
star_transformation_enabled FALSE
statistics_level TYPICAL
streams_pool_size 0
tape_asynch_io TRUE
thread 0
timed_os_statistics 0
timed_statistics TRUE
trace_enabled TRUE
tracefile_identifier
transactions 440
transactions_per_rollback_segment 5
undo_management AUTO
undo_retention 900
undo_tablespace UNDOTBS1
use_indirect_data_buffers FALSE
user_dump_dest /raid/product/admin/IASDB/udump
utl_file_dir
workarea_size_policy AUTO
All Portal DBMS jobs
JOB LOG_USER PRIV_USER SCHEMA_USER
17 PORTAL PORTAL PORTAL
18 PORTAL PORTAL PORTAL
27 PORTAL PORTAL PORTAL
28 PORTAL PORTAL PORTAL
43 PORTAL PORTAL PORTAL
Details of all Portal DBMS jobs
JOB WHAT
17 begin execute immediate 'begin wwctx_sso.cleanup_sessions( p_hours_old => 168 ); end;' ; exception when others then null; end;
18 wwsec_api_private.rename_users;
27 wwv_context.sync;
28 wwv_context.optimize(CTX_DDL.OPTLEVEL_FULL,1440,null);
43 begin execute immediate 'begin wwutl_cache_sys.process_background_inval; end;' ; exception when others then wwlog_api.log(p_domain=>'utl', p_subdomain=>'cache', p_name=>'background', p_action=>'process_background_inval', p_information => 'Error in process_background_inval '|| sqlerrm);end;
Database version details
BANNER
Oracle Database 10g Enterprise Edition Release 10.1.0.4.2 - Prod
PL/SQL Release 10.1.0.4.2 - Production
CORE 10.1.0.4.0 Production
TNS for Linux: Version 10.1.0.4.0 - Production
NLSRTL Version 10.1.0.4.2 - Production
# Ending outer script: prechk/upshoset, 1.83 seconds
### Log invalid DB objects in the temporary directory.
List count of invalid objects in the database in /raid/product/OraHome_1/upgrade/temp/portal/prechktmp/dbinvob1.log
Running upg/frwk/dbinvobj.sql
Portal SQL script started at Wed Oct 18 21:22:26 2006
Connected.
### Install Schema Validation Utility
Running upg/common/prechk/svuver.sql . Portal SQL script started at Wed Oct 18 21:22:26 2006
Connected.
# Beginning outer script: prechk/svuver
#-- Beginning inner script: prechk/svuver
# Portal Schema Version = 10.1.2.0.2
# Version of schema validation utility being installed = 101202
# Load the Schema Validation Utility
Installed version of schema validation utility: 10.1.2.0.6
Schema Validation Utility version: 10.1.2.0.6 will be installed.
No errors.
No errors.
No errors.
No errors.
No errors.
No errors.
No errors.
No errors.
No errors.
No errors.
No errors.
No errors.
No errors.
No errors.
No errors.
No errors.
No errors.
Warning: Package Body created with compilation errors.
Errors for PACKAGE BODY WWUTL_SCHEMA_COMMON:
44/9 PL/SQL: Statement ignored
44/16 PLS-00905: object PORTAL.WWSBR_SITE_DB is invalid
70/9 PL/SQL: Statement ignored
70/17 PLS-00905: object PORTAL.WWPOB_API_PAGE is invalid
96/10 PL/SQL: Statement ignored
96/18 PLS-00905: object PORTAL.WWV_THINGDB is invalid
122/10 PL/SQL: Statement ignored
122/18 PLS-00905: object PORTAL.WWV_THINGDB is invalid
No errors.
Warning: Package Body created with compilation errors.
Errors for PACKAGE BODY WWUTL_ATTR_VALIDATION:
740/9 PL/SQL: SQL Statement ignored
778/26 PL/SQL: ORA-06575: Package or function WWSBR_SITE_DB is in an
invalid state
Warning: Package Body created with compilation errors.
Errors for PACKAGE BODY WWUTL_PAGE_GROUP_VALIDATION:
366/9 PL/SQL: SQL Statement ignored
372/31 PL/SQL: ORA-06575: Package or function WWSBR_SITE_DB is in an
invalid state
1447/13 PL/SQL: SQL Statement ignored
1458/28 PL/SQL: ORA-06575: Package or function WWSBR_SITE_DB is in an
invalid state
1624/13 PL/SQL: SQL Statement ignored
1635/28 PL/SQL: ORA-06575: Package or function WWSBR_SITE_DB is in an
invalid state
1783/13 PL/SQL: SQL Statement ignored
1794/28 PL/SQL: ORA-06575: Package or function WWSBR_SITE_DB is in an
invalid state
1896/13 PL/SQL: SQL Statement ignored
1907/28 PL/SQL: ORA-06575: Package or function WWSBR_SITE_DB is in an
invalid state
2009/13 PL/SQL: SQL Statement ignored
2020/28 PL/SQL: ORA-06575: Package or function WWSBR_SITE_DB is in an
invalid state
2126/13 PL/SQL: SQL Statement ignored
2137/28 PL/SQL: ORA-06575: Package or function WWSBR_SITE_DB is in an
invalid state
No errors.
No errors.
Warning: Package Body created with compilation errors.
Errors for PACKAGE BODY WWUTL_PAGE_VALIDATION:
137/9 PL/SQL: SQL Statement ignored
144/53 PL/SQL: ORA-06575: Package or function WWSBR_SITE_DB is in an
invalid state
258/21 PL/SQL: SQL Statement ignored
260/43 PL/SQL: ORA-06575: Package or function WWSBR_SITE_DB is in an
invalid state
277/17 PL/SQL: SQL Statement ignored
279/39 PL/SQL: ORA-06575: Package or function WWSBR_SITE_DB is in an
invalid state
322/9 PL/SQL: SQL Statement ignored
327/25 PL/SQL: ORA-06575: Package or function WWSBR_SITE_DB is in an
invalid state
353/25 PL/SQL: Statement ignored
353/40 PLS-00905: object PORTAL.WWPOB_API_PAGE is invalid
362/29 PL/SQL: SQL Statement ignored
369/42 PL/SQL: ORA-06575: Package or function WWPOB_API_PAGE is in an
invalid state
376/33 PL/SQL: Statement ignored
376/48 PLS-00905: object PORTAL.WWPOB_API_PAGE is invalid
385/21 PL/SQL: SQL Statement ignored
388/39 PL/SQL: ORA-06575: Package or function WWSBR_SITE_DB is in an
invalid state
1214/21 PL/SQL: SQL Statement ignored
1216/42 PL/SQL: ORA-06575: Package or function WWSBR_SITE_DB is in an
invalid state
1262/9 PL/SQL: SQL Statement ignored
1266/30 PL/SQL: ORA-06575: Package or function WWPOB_API_PAGE is in an
invalid state
Warning: Package Body created with compilation errors.
Errors for PACKAGE BODY WWUTL_REGION_VALIDATION:
237/9 PL/SQL: SQL Statement ignored
249/41 PL/SQL: ORA-06575: Package or function WWSBR_SITE_DB is in an
invalid state
Warning: Package Body created with compilation errors.
Errors for PACKAGE BODY WWUTL_STYLE_VALIDATION:
414/9 PL/SQL: SQL Statement ignored
424/44 PL/SQL: ORA-06575: Package or function WWSBR_SITE_DB is in an
invalid state
584/21 PL/SQL: Statement ignored
584/52 PLS-00905: object PORTAL.WWSBR_SITE_DB is invalid
Warning: Package Body created with compilation errors.
Errors for PACKAGE BODY WWUTL_THING_VALIDATION:
2129/32 PL/SQL: Item ignored
2130/13 PLS-00905: object PORTAL.WWSBR_THING_TYPES is invalid
2131/32 PL/SQL: Item ignored
2132/13 PLS-00905: object PORTAL.WWSBR_THING_TYPES is invalid
2133/32 PL/SQL: Item ignored
2134/13 PLS-00905: object PORTAL.WWSBR_THING_TYPES is invalid
2135/31 PL/SQL: Item ignored
2136/13 PLS-00905: object PORTAL.WWSBR_THING_TYPES is invalid
2137/31 PL/SQL: Item ignored
2138/13 PLS-00905: object PORTAL.WWSBR_THING_TYPES is invalid
2139/31 PL/SQL: Item ignored
2140/13 PLS-00905: object PORTAL.WWSBR_THING_TYPES is invalid
2151/32 PL/SQL: Item ignored
2151/42 PLS-00320: the declaration of the type of this expression is
incomplete or malformed
2152/32 PL/SQL: Item ignored
2152/42 PLS-00320: the declaration of the type of this expression is
incomplete or malformed
2153/32 PL/SQL: Item ignored
2153/42 PLS-00320: the declaration of the type of this expression is
incomplete or malformed
2202/17 PL/SQL: SQL Statement ignored
2212/27 PL/SQL: ORA-06575: Package or function WWSBR_THING_TYPES is in an
invalid state
Warning: Package Body created with compilation errors.
Errors for PACKAGE BODY WWUTL_ITEM_VALIDATION:
322/9 PL/SQL: SQL Statement ignored
338/30 PL/SQL: ORA-06575: Package or function WWPOB_API_PAGE is in an
invalid state
Warning: Package Body created with compilation errors.
Errors for PACKAGE BODY WWUTL_PORTLET_VALIDATION:
155/13 PL/SQL: SQL Statement ignored
163/36 PL/SQL: ORA-06575: Package or function WWSBR_SITE_DB is in an
invalid state
170/13 PL/SQL: SQL Statement ignored
178/36 PL/SQL: ORA-06575: Package or function WWSBR_SITE_DB is in an
invalid state
459/9 PL/SQL: SQL Statement ignored
467/31 PL/SQL: ORA-06575: Package or function WWSBR_SITEBUILDER_PROVIDER
is in an invalid state
581/9 PL/SQL: SQL Statement ignored
584/21 PL/SQL: ORA-06575: Package or function WWSBR_SITEBUILDER_PROVIDER
is in an invalid state
588/9 PL/SQL: SQL Statement ignored
591/29 PL/SQL: ORA-06575: Package or function WWSBR_SITEBUILDER_PROVIDER
is in an invalid state
Warning: Package Body created with compilation errors.
Errors for PACKAGE BODY WWUTL_DBPROV_VALIDATION:
341/25 PL/SQL: Item ignored
341/45 PLS-00302: component 'URL' must be declared
559/17 PL/SQL: Statement ignored
559/17 PLS-00320: the declaration of the type of this expression is
incomplete or malformed
562/17 PL/SQL: Statement ignored
564/40 PLS-00320: the declaration of the type of this expression is
incomplete or malformed
No errors.
No errors.
### Invoke Schema Validation Utility in Report Mode
Running upg/common/prechk/../../frwk/svurun.sql . Portal SQL script started at Wed Oct 18 21:22:31 2006
Connected.
# Beginning outer script: prechk/svurun
#-- Beginning inner script: frwk/svurun
declare
ERROR at line 1:
ORA-20000:
ORA-06512: at "PORTAL.WWPOF", line 440
ORA-06512: at line 45
ORA-20000:
ORA-06512: at "PORTAL.WWPOF", line 440
ORA-06512: at "PORTAL.WWUTL_SCHEMA_VALIDATION", line 263
ORA-04063: package body "PORTAL.WWUTL_PAGE_GROUP_VALIDATION" has errors
ORA-06508: PL/SQL: could not find program unit being called
Connected.
# Run the report mode of the schema validation utility
#---- Beginning inner script: wwutl_schema_validation.validate_all
# Running the validation in report mode
# Schema Validation Utility Version = 10.1.2.0.6
# Validate Page Groups
# Handling exception
# ERROR: When executing schema validation utility
# ERROR: ORA-06508: PL/SQL: could not find program unit being called
# ----- PL/SQL Call Stack -----
object line object
handle number name
0x5c4fef28 434 package body PORTAL.WWPOF
0x5bb96e20 263 package body PORTAL.WWUTL_SCHEMA_VALIDATION
0x5bb96e20 297 package body PORTAL.WWUTL_SCHEMA_VALIDATION
0x5b885fe4 18 anonymous block
# Handling exception
# ERROR: When running the schema validation utility
# ERROR: ORA-20000:
ORA-06512: at "PORTAL.WWPOF", line 440
ORA-06512: at "PORTAL.WWUTL_SCHEMA_VALIDATION", line 263
ORA-04063: package body "PORTAL.WWUTL_PAGE_GROUP_VALIDATION" has errors
ORA-06508: PL/SQL: could not find program unit being called
# ----- PL/SQL Call Stack -----
object line object
handle number name
0x5c4fef28 434 package body PORTAL.WWPOF
0x5b885fe4 45 anonymous block
### ERROR: Exception Executing upg/common/prechk/../../frwk/svurun.sql REPORT PRECHK for Subscriber: 1
### Check Failed at Wed Oct 18 21:22:31 2006 Continuing as PreCheck mode is specified
### PHASE 5: Version specific user inputs
Upgrade phase started at Wed Oct 18 21:22:31 2006
Processing Metadata File: upg/10140/inputchk/inputchk.met ###
### PHASE 6: Version specific pre upgrade checks
Upgrade phase started at Wed Oct 18 21:22:31 2006
Processing Metadata File: upg/10140/prechk/prechk.met ###
### PHASE 7: Pre upgrade common information gathering
Upgrade phase started at Wed Oct 18 21:22:31 2006
Processing Metadata File: upg/common/info/info.met ### Log portal configuration info in the temporary directory.
Running upg/common/info/ptlinfo.sql . Portal SQL script started at Wed Oct 18 21:22:31 2006
Connected.
# Beginning outer script: info/ptlinfo
# Ending outer script: info/ptlinfo, 0.13 seconds
Metadata File upg/10140/info/info.met does not exist.
### PHASE 8: Verify user inputs
Upgrade phase started at Wed Oct 18 21:22:32 2006
Processing Metadata File: upg/common/verfyinp/verfyinp.met
Running upg/common/verfyinp/verfyinp.pl
The following details have been determined:
General Details
===========================================================
Log File Name : /raid/product/OraHome_1/upgrade/temp/portal/precheck.log
RDBMS Version : 10.1.0
Product Version : 10.1.2.0.2
Oracle PL/SQL Toolkit Schema : SYS
Oracle PL/SQL Toolkit version : 10.1.2.0.2
O7 accessibility : FALSE
Schema Details
===========================================================
Name : portal
Connect String : (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=portaldb.bizmatch.com.cn)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=IASDB.bizmatch.com.cn)))
Tablespace Details
===========================================================
Default Tablespace : PORTAL
Temporary Tablespace : TEMP
Document Tablespace : PORTAL_DOC
Logging Tablespace : PORTAL_LOG
Index Tablespace : PORTAL
### ERROR: WWU-00030: Pre-Check mode encountered the following errors:
### 184 : ### ERROR: WWU-00013: Tables with UPG_ prefix were found in the OracleAS Portal
### 706 : 44/16 PLS-00905: object PORTAL.WWSBR_SITE_DB is invalid
### 708 : 70/17 PLS-00905: object PORTAL.WWPOB_API_PAGE is invalid
### 710 : 96/18 PLS-00905: object PORTAL.WWV_THINGDB is invalid
### 712 : 122/18 PLS-00905: object PORTAL.WWV_THINGDB is invalid
### 719 : 778/26 PL/SQL: ORA-06575: Package or function WWSBR_SITE_DB is in an
### 727 : 372/31 PL/SQL: ORA-06575: Package or function WWSBR_SITE_DB is in an
### 731 : 1458/28 PL/SQL: ORA-06575: Package or function WWSBR_SITE_DB is in an
### 735 : 1635/28 PL/SQL: ORA-06575: Package or function WWSBR_SITE_DB is in an
### 739 : 1794/28 PL/SQL: ORA-06575: Package or function WWSBR_SITE_DB is in an
### 743 : 1907/28 PL/SQL: ORA-06575: Package or function WWSBR_SITE_DB is in an
### 747 : 2020/28 PL/SQL: ORA-06575: Package or function WWSBR_SITE_DB is in an
### 751 : 2137/28 PL/SQL: ORA-06575: Package or function WWSBR_SITE_DB is in an
### 761 : 144/53 PL/SQL: ORA-06575: Package or function WWSBR_SITE_DB is in an
### 765 : 260/43 PL/SQL: ORA-06575: Package or function WWSBR_SITE_DB is in an
### 769 : 279/39 PL/SQL: ORA-06575: Package or function WWSBR_SITE_DB is in an
### 773 : 327/25 PL/SQL: ORA-06575: Package or function WWSBR_SITE_DB is in an
### 777 : 353/40 PLS-00905: object PORTAL.WWPOB_API_PAGE is invalid
### 779 : 369/42 PL/SQL: ORA-06575: Package or function WWPOB_API_PAGE is in an
### 783 : 376/48 PLS-00905: object PORTAL.WWPOB_API_PAGE is invalid
### 785 : 388/39 PL/SQL: ORA-06575: Package or function WWSBR_SITE_DB is in an
### 789 : 1216/42 PL/SQL: ORA-06575: Package or function WWSBR_SITE_DB is in an
### 793 : 1266/30 PL/SQL: ORA-06575: Package or function WWPOB_API_PAGE is in an
### 801 : 249/41 PL/SQL: ORA-06575: Package or function WWSBR_SITE_DB is in an
### 809 : 424/44 PL/SQL: ORA-06575: Package or function WWSBR_SITE_DB is in an
### 813 : 584/52 PLS-00905: object PORTAL.WWSBR_SITE_DB is invalid
### 819 : 2130/13 PLS-00905: object PORTAL.WWSBR_THING_TYPES is invalid
### 821 : 2132/13 PLS-00905: object PORTAL.WWSBR_THING_TYPES is invalid
### 823 : 2134/13 PLS-00905: object PORTAL.WWSBR_THING_TYPES is invalid
### 825 : 2136/13 PLS-00905: object PORTAL.WWSBR_THING_TYPES is invalid
### 827 : 2138/13 PLS-00905: object PORTAL.WWSBR_THING_TYPES is invalid
### 829 : 2140/13 PLS-00905: object PORTAL.WWSBR_THING_TYPES is invalid
### 831 : 2151/42 PLS-00320: the declaration of the type of this expression is
### 835 : 2152/42 PLS-00320: the declaration of the type of this expression is
### 839 : 2153/42 PLS-00320: the declaration of the type of this expression is
### 843 : 2212/27 PL/SQL: ORA-06575: Package or function WWSBR_THING_TYPES is in an
### 851 : 338/30 PL/SQL: ORA-06575: Package or function WWPOB_API_PAGE is in an
### 859 : 163/36 PL/SQL: ORA-06575: Package or function WWSBR_SITE_DB is in an
### 863 : 178/36 PL/SQL: ORA-06575: Package or function WWSBR_SITE_DB is in an
### 867 : 467/31 PL/SQL: ORA-06575: Package or function WWSBR_SITEBUILDER_PROVIDER
### 871 : 584/21 PL/SQL: ORA-06575: Package or function WWSBR_SITEBUILDER_PROVIDER
### 875 : 591/29 PL/SQL: ORA-06575: Package or function WWSBR_SITEBUILDER_PROVIDER
### 883 : 341/45 PLS-00302: component 'URL' must be declared
### 885 : 559/17 PLS-00320: the declaration of the type of this expression is
### 889 : 564/40 PLS-00320: the declaration of the type of this expression is
### 904 : ERROR at line 1:
### 905 : ORA-20000:
### 906 : ORA-06512: at "PORTAL.WWPOF", line 440
### 907 : ORA-06512: at line 45
### 908 : ORA-20000:
### 909 : ORA-06512: at "PORTAL.WWPOF", line 440
### 910 : ORA-06512: at "PORTAL.WWUTL_SCHEMA_VALIDATION", line 263
### 911 : ORA-04063: package body "PORTAL.WWUTL_PAGE_GROUP_VALIDATION" has errors
### 912 : ORA-06508: PL/SQL: could not find program unit being called
### 922 : # ERROR: When executing schema validation utility
### 923 : # ERROR: ORA-06508: PL/SQL: could not find program unit being called
### 933 : # ERROR: When running the schema validation utility
### 934 : # ERROR: ORA-20000:
### 935 : ORA-06512: at "PORTAL.WWPOF", line 440
### 936 : ORA-06512: at "PORTAL.WWUTL_SCHEMA_VALIDATION", line 263
### 937 : ORA-04063: package body "PORTAL.WWUTL_PAGE_GROUP_VALIDATION" has errors
### 938 : ORA-06508: PL/SQL: could not find program unit being called
### 947 : ### ERROR: Exception Executing upg/common/prechk/../../frwk/svurun.sql REPORT PRECHK for Subscriber: 1
### Check Failed at Wed Oct 18 21:22:32 2006 Continuing as PreCheck mode is specified
Pre-Check Completed at Wed Oct 18 21:22:32 2006
Hi,
It's good that you pasted the complete log file. In your environment you have to run this upgrade tool only once, from any one of the middle tiers.
As for the error you got in the precheck, it is quite simple to fix. All you have to do is run the following script while connected to the PORTAL schema using SQL*Plus.
Run dropupg.sql
Location: /raid/product/OraHome_1/upgrade/temp/portal/prechktmp/dropupg.sql
Then re-run the upgrade tool and let me know the status.
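For example, the script can be run from the database server like this (just a sketch; <portal_password> is a placeholder for the real PORTAL schema password, and the connect identifier assumes the local IASDB service):
sqlplus portal/<portal_password>@IASDB @/raid/product/OraHome_1/upgrade/temp/portal/prechktmp/dropupg.sql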
Good luck
Tanmai -
Physical Standby archive log gap
An archive log gap occurred. The reason: before the logs could be shipped to the standby location, they were deleted by an RMAN backup. So I restored those archives on the primary database site. The old logs from the gap are not getting shipped to the standby site, but the newly generated ones are.
Can someone help with what action I have to take to resolve the gap? And how can I find out what is preventing the shipping?
Or shall I manually ship these gap archive logs to the standby site?
1) Yep, running 9i, but still it's not shipping... Are the FAL_CLIENT and FAL_SERVER parameters defined at the standby level?
If not, define them at the standby level. Those parameters allow the standby to fetch missing (gap) archives from the primary database.
2) If shipped manually, do I have to register the archive logs? Just copy them from primary to standby; there is no need to register the gap. That requirement was in 8i, before there was a background media recovery process (MRP). If the standby database is in automatic media recovery, it will automatically apply all the archived logs.
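As a sketch, the standby-side init.ora entries might look like the following (primary_tns and standby_tns are hypothetical placeholders for your actual TNS aliases):
fal_server=primary_tns   # where this standby fetches missing archives from
fal_client=standby_tns   # the TNS alias the primary uses to reach this standby
After setting them, the MRP can request any missing sequence from the primary automatically.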
Jaffar -
Steps to do switchover / switchback in RAC environment
Hi folks,
I m having setup with 2 node RAC primary and 2 node RAC Dataguard on 10.2.0.4.0. Dataguard setup is working fine. Dataguard is setup with Standby Redo log group with managed recovery. There is no problem with transferring archives & applying on standby.
Now I want to do Switchover/Switchback between Primary and Standby for RAC. I am familiar with Single instance Switchover and Switchback but never did RAC environment Switchover/Switchback. Can anybody please elaborate steps or suggest any link for me??
regards,
manish
Hi Guys,
Today I performed a RAC switchover / switchback for a 2-node primary with a 2-node standby on OEL. I expected some issues, but it was totally smooth. Here are the steps, so they will be useful to you. This is also my first contribution to the Oracle forums.
DB Name DB Unique Name Host Name Instance Name
live live linux1 live1
live live linux2 live2
live livestdby linux3 livestdby1
live livestdby linux4 livestdby2
Verify that each database is properly configured for the role it is about to assume and the standby database is in mounted state.
(Verify all Dataguard parameters on each node for Primary & Standby)
Like,
Log_archive_dest_1
Log_archive_dest_2
Log_archive_dest_state_1
Log_archive_dest_state_2
Fal_client
Fal_server
Local_listener
Remote_listener
Standby_archive_Dest
Standby_archive_management
service_names
db_unique_name
instance_name
db_file_name_convert
log_file_name_convert
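As a sketch, the parameters listed above can be spot-checked on each instance with a query like this (run in SQL*Plus on every node; the name list is illustrative, not exhaustive):
select name, value from v$parameter
 where name in ('log_archive_dest_1','log_archive_dest_2',
                'fal_client','fal_server','local_listener','remote_listener',
                'standby_archive_dest','db_unique_name',
                'db_file_name_convert','log_file_name_convert');
(or simply: show parameter fal)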
Verify that both Primary RAC & Dataguard RAC are functioning properly and both are in Sync
On Primary,
Select thread#,max(sequence#) from v$archived_log group by thread#;
On Standby,
Select thread#,max(sequence#) from v$log_history group by thread#;
Before performing a switchover from a RAC primary, shut down all but one primary instance (they can be restarted after the switchover has completed).
./srvctl stop instance -d live -i live1
Before performing a switchover or a failover to a RAC standby, shut down all but one standby instance (they can be restarted after the role transition has completed).
./srvctl stop instance -d live -i livestdby1
On the primary database initiate the switchover:
alter database commit to switchover to physical standby with session shutdown;
Shutdown former Primary database & Startup in Mount State.
Shut immediate;
Startup mount;
select name,db_unique_name, log_mode,open_mode,controlfile_type,switchover_status,database_role from v$database;
Make log_Archive_Dest_state_2 to DEFER
alter system set log_archive_dest_state_2='DEFER' sid='*';
On the (old) standby database,
select name,log_mode,open_mode,controlfile_type,switchover_status,database_role from v$database;
On the (old) standby database switch to new primary role:
alter database commit to switchover to primary;
shut immediate;
startup;
On new Primary database,
select name,log_mode,open_mode,controlfile_type,switchover_status,database_role from v$database;
Make log_Archive_Dest_state_2 to ENABLE
alter system set log_archive_dest_state_2='ENABLE' sid='*';
Add tempfiles in New Primary database.
Do some archivelog switches on new primary database & verify that archives are getting transferred to Standby database.
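A sketch of forcing those switches from SQL*Plus on the new primary:
alter system archive log current;
Repeating this a couple of times generates fresh sequences whose arrival can then be checked on the standby.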
On new primary,
select error from v$archive_Dest_status;
select max(sequence#) from v$archived_log;
On new Standby, Start Redo Apply
alter database recover managed standby database using current logfile disconnect;
Select max(sequence#) from v$log_history; (should be matching with Primary)
Now Start RAC databases services (both Primary – in open & Standby – in mount)
On new Primary Server.
./srvctl start instance -d live -i livestdby2
Verify using ./crs_stat -t
Check that database is opened in R/W mode.
On new Standby Server.
./srvctl start instance -d live -i live2 -o mount
Now add TAF services on new Primary (former Standby) Server.
By Command Prompt,
./srvctl add service -d live -s srvc_livestdby -r livestdby1,livestdby2 -P BASIC
OR
By GUI,
dbca -> Oracle Read Application Cluster database -> Service Management -> select database -> add services, details (Preferred / Available), TAF Policy (Basic / Preconnect) - > Finish
Start the services,
./srvctl start service -d live
Verify the same,
./crs_stat -t
Perform TAF testing to make sure load balancing and failover work as expected.
regards,
manish
Email: [email protected]
Edited by: Manish Nashikkar on Aug 31, 2010 7:41 AM
Data Guard configuration-Archivelogs not being transferred
Hi Gurus,
I have configured Data Guard on Linux with Oracle 10g, although I am new to this concept. tnsping works fine on both sides. I have issued alter database recover managed standby database using current logfile disconnect on the standby site, but I am not receiving the archive logs at the standby site. I have attached both my pfiles below for your reference:
Primary database name: Chennai
Secondary database name: Mumbai
PRIMARY PFILE:
db_block_size=8192
db_file_multiblock_read_count=16
open_cursors=300
db_domain=""
background_dump_dest=/u01/app/oracle/product/10.2.0/db_1/admin/chennai/bdump
core_dump_dest=/u01/app/oracle/product/10.2.0/db_1/admin/chennai/cdump
user_dump_dest=/u01/app/oracle/product/10.2.0/db_1/admin/chennai/udump
db_create_file_dest=/u01/app/oracle/product/10.2.0/db_1/oradata
db_recovery_file_dest=/u01/app/oracle/product/10.2.0/db_1/flash_recovery_area
db_recovery_file_dest_size=2147483648
job_queue_processes=10
compatible=10.2.0.1.0
processes=150
sga_target=285212672
audit_file_dest=/u01/app/oracle/product/10.2.0/db_1/admin/chennai/adump
remote_login_passwordfile=EXCLUSIVE
dispatchers="(PROTOCOL=TCP) (SERVICE=chennaiXDB)"
pga_aggregate_target=94371840
undo_management=AUTO
undo_tablespace=UNDOTBS1
control_files=("/u01/app/oracle/product/10.2.0/db_1/oradata/CHENNAI/controlfile/o1_mf_82gl1b43_.ctl", "/u01/app/oracle/product/10.2.0/db_1/flash_recovery_area/CHENNAI/controlfile/o1_mf_82gl1bny_.ctl")
DB_NAME=chennai
DB_UNIQUE_NAME=chennai
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chennai,mumbai)'
LOG_ARCHIVE_DEST_1=
'LOCATION=/u01/app/oracle/product/10.2.0/db_1/oradata/CHENNAI/datafile/arch/
VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=chennai'
LOG_ARCHIVE_DEST_2=
'SERVICE=MUMBAI LGWR ASYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=mumbai'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=mumbai
FAL_CLIENT=chennai
DB_FILE_NAME_CONVERT=(/home/oracle/oracle/product/10.2.0/db_1/oradata/MUMBAI/datafile/,/u01/app/oracle/product/10.2.0/db_1/oradata/CHENNAI/datafile/)
LOG_FILE_NAME_CONVERT='/home/oracle/oracle/product/10.2.0/db_1/oradata/MUMBAI/onlinelog/','/u01/app/oracle/product/10.2.0/db_1/oradata/CHENNAI/onlinelog/','/home/oracle/oracle/product/10.2.0/db_1/flash_recovery_area/MUMBAI/onlinelog/','/u01/app/oracle/product/10.2.0/db_1/flash_recovery_area/CHENNAI/onlinelog/'
STANDBY_FILE_MANAGEMENT=AUTO
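One thing worth adding to a config like this (optional, but needed for real-time apply with "using current logfile"): standby redo logs on both sides. A sketch — group numbers are illustrative, and the SRL size must match the online redo log size, with the usual rule of thumb of one more SRL group than ORL groups per thread:

```sql
-- On both databases (they swap roles on switchover):
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;
```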
SECONDARY PFILE:
db_block_size=8192
db_file_multiblock_read_count=16
open_cursors=300
db_domain=""
db_name=chennai
background_dump_dest=/home/oracle/oracle/product/10.2.0/db_1/admin/mumbai/bdump
core_dump_dest=/home/oracle/oracle/product/10.2.0/db_1/admin/mumbai/cdump
user_dump_dest=/home/oracle/oracle/product/10.2.0/db_1/admin/mumbai/udump
db_recovery_file_dest=/home/oracle/oracle/product/10.2.0/db_1/flash_recovery_area/
db_create_file_dest=/home/oracle/oracle/product/10.2.0/db_1/oradata/
db_recovery_file_dest_size=2147483648
job_queue_processes=10
compatible=10.2.0.1.0
processes=150
sga_target=285212672
audit_file_dest=/home/oracle/oracle/product/10.2.0/db_1/admin/mumbai/adump
remote_login_passwordfile=EXCLUSIVE
dispatchers="(PROTOCOL=TCP) (SERVICE=mumbaiXDB)"
pga_aggregate_target=94371840
undo_management=AUTO
undo_tablespace=UNDOTBS1
control_files="/home/oracle/oracle/product/10.2.0/db_1/oradata/MUMBAI/controlfile/standby01.ctl","/home/oracle/oracle/product/10.2.0/db_1/flash_recovery_area/MUMBAI/controlfile/standby02.ctl"
DB_UNIQUE_NAME=mumbai
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chennai,mumbai)'
LOG_ARCHIVE_DEST_1='LOCATION=/home/oracle/oracle/product/10.2.0/db_1/oradata/MUMBAI/datafile/arch/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=mumbai'
LOG_ARCHIVE_DEST_2='SERVICE=chennai LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=chennai'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
FAL_SERVER=chennai
FAL_CLIENT=mumbai
DB_FILE_NAME_CONVERT=(/u01/app/oracle/product/10.2.0/db_1/oradata/CHENNAI/datafile/,/home/oracle/oracle/product/10.2.0/db_1/oradata/MUMBAI/datafile/)
LOG_FILE_NAME_CONVERT='/u01/app/oracle/product/10.2.0/db_1/oradata/CHENNAI/onlinelog/','/home/oracle/oracle/product/10.2.0/db_1/oradata/MUMBAI/onlinelog/','/u01/app/oracle/product/10.2.0/db_1/flash_recovery_area/CHENNAI/onlinelog/','/home/oracle/oracle/product/10.2.0/db_1/flash_recovery_area/MUMBAI/onlinelog/'
STANDBY_FILE_MANAGEMENT=AUTO
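With both pfiles in place, a quick end-to-end transport test looks like this — run the switch on the primary and the query on the standby; a sketch only, your sequence numbers will differ:

```sql
-- On the primary:
ALTER SYSTEM SWITCH LOGFILE;
-- On the standby, confirm the new sequence arrived and was applied:
SELECT sequence#, applied
FROM   v$archived_log
ORDER  BY sequence#;
```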
Any help would be greatly appreciated. Kindly help me, someone, please..
-Vimal.

Thanks Balazs, Mseberg, CKPT for all your replies...
CKPT, I just did what you said. The primary output & standby output come below...
PRIMARY_
SQL> set feedback off
SQL> set trimspool on
SQL> set line 500
SQL> set pagesize 50
SQL> column name for a30
SQL> column display_value for a30
SQL> column ID format 99
SQL> column "SRLs" format 99
SQL> column active format 99
SQL> col type format a4
SQL> column ID format 99
SQL> column "SRLs" format 99
SQL> column active format 99
SQL> col type format a4
SQL> col PROTECTION_MODE for a20
SQL> col RECOVERY_MODE for a20
SQL> col db_mode for a15
SQL> SELECT name, display_value FROM v$parameter WHERE name IN ('db_name','db_unique_name','log_archive_config','log_archive_dest_2','log_archive_dest_state_2','fal_client','fal_server','standby_file_management','standby_archive_dest','db_file_name_convert','log_file_name_convert','remote_login_passwordfile','local_listener','dg_broker_start','dg_broker_config_file1','dg_broker_config_file2','log_archive_max_processes') order by name;
NAME DISPLAY_VALUE
db_file_name_convert /home/oracle/oracle/product/10
.2.0/db_1/oradata/MUMBAI/dataf
ile/, /u01/app/oracle/product/
10.2.0/db_1/oradata/CHENNAI/da
tafile/
db_name chennai
db_unique_name chennai
dg_broker_config_file1 /u01/app/oracle/product/10.2.0
/db_1/dbs/dr1chennai.dat
dg_broker_config_file2 /u01/app/oracle/product/10.2.0
/db_1/dbs/dr2chennai.dat
dg_broker_start FALSE
fal_client chennai
fal_server mumbai
local_listener
log_archive_config DG_CONFIG=(chennai,mumbai)
log_archive_dest_2 SERVICE=MUMBAI LGWR ASYNC
VALID_FOR=(ONLINE_LOGFILES,P
RIMARY_ROLE)
DB_UNIQUE_NAME=mumbai
log_archive_dest_state_2 ENABLE
log_archive_max_processes 30
log_file_name_convert /home/oracle/oracle/product/10
.2.0/db_1/oradata/MUMBAI/onlin
elog/, /u01/app/oracle/product
/10.2.0/db_1/oradata/CHENNAI/o
nlinelog/, /home/oracle/oracle
/product/10.2.0/db_1/flash_rec
overy_area/MUMBAI/onlinelog/,
/u01/app/oracle/product/10.2.0
/db_1/flash_recovery_area/CHEN
NAI/onlinelog/
remote_login_passwordfile EXCLUSIVE
standby_archive_dest ?/dbs/arch
standby_file_management AUTO
SQL> col name for a10
SQL> col DATABASE_ROLE for a10
SQL> SELECT name,db_unique_name,protection_mode,DATABASE_ROLE,OPEN_MODE,switchover_status from v$database;
NAME DB_UNIQUE_NAME PROTECTION_MODE DATABASE_R OPEN_MODE SWITCHOVER_STATUS
CHENNAI chennai MAXIMUM PERFORMANCE PRIMARY READ WRITE NOT ALLOWED
SQL> select thread#,max(sequence#) from v$archived_log group by thread#;
THREAD# MAX(SEQUENCE#)
1 210
SQL> SELECT ARCH.THREAD# "Thread", ARCH.SEQUENCE# "Last Sequence Received", APPL.SEQUENCE# "Last Sequence Applied", (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
2 FROM
3 (SELECT THREAD# ,SEQUENCE# FROM V$ARCHIVED_LOG WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
4 (SELECT THREAD# ,SEQUENCE# FROM V$LOG_HISTORY WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
5 WHERE ARCH.THREAD# = APPL.THREAD# ORDER BY 1;
Thread Last Sequence Received Last Sequence Applied Difference
1 210 210 0
SQL> col severity for a15
SQL> col message for a70
SQL> col timestamp for a20
SQL> select severity,error_code,to_char(timestamp,'DD-MON-YYYY HH24:MI:SS') "timestamp" , message from v$dataguard_status where dest_id=2;
SEVERITY ERROR_CODE timestamp MESSAGE
Error 16191 15-AUG-2012 12:46:02 LGWR: Error 16191 creating archivelog file 'MUMBAI'
Error 16191 15-AUG-2012 12:46:02 FAL[server, ARC1]: Error 16191 creating remote archivelog file 'MUMBAI'
Error 16191 15-AUG-2012 12:51:58 PING[ARCb]: Heartbeat failed to connect to standby 'MUMBAI'. Error is 16191.
[... the same PING[ARCb] ORA-16191 heartbeat error repeats roughly every 5 minutes through 15-AUG-2012 18:26:41 ...]
SQL> select ds.dest_id id
2 , ad.status
3 , ds.database_mode db_mode
4 , ad.archiver type
5 , ds.recovery_mode
6 , ds.protection_mode
7 , ds.standby_logfile_count "SRLs"
8 , ds.standby_logfile_active active
9 , ds.archived_seq#
10 from v$archive_dest_status ds
11 , v$archive_dest ad
12 where ds.dest_id = ad.dest_id
13 and ad.status != 'INACTIVE'
14 order by
15 ds.dest_id;
ID STATUS DB_MODE TYPE RECOVERY_MODE PROTECTION_MODE SRLs ACTIVE ARCHIVED_SEQ#
1 VALID OPEN ARCH IDLE MAXIMUM PERFORMANCE 0 0 210
2 ERROR UNKNOWN LGWR UNKNOWN MAXIMUM PERFORMANCE 0 0 0
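The ERROR status on dest_id 2 with ORA-16191 ("primary log shipping client not logged on standby") is almost always a password-file mismatch: the redo transport session cannot authenticate as SYS on the standby. A hedged fix — the file names and placeholder password below are illustrative for this chennai/mumbai setup:

```sql
-- First recreate the password file on the primary with the same SYS password:
--   orapwd file=$ORACLE_HOME/dbs/orapwchennai password=<sys_password> force=y
-- copy it to the standby's $ORACLE_HOME/dbs as orapwmumbai, then re-test
-- transport by bouncing the destination on the primary:
ALTER SYSTEM SET log_archive_dest_state_2 = 'DEFER';
ALTER SYSTEM SET log_archive_dest_state_2 = 'ENABLE';
ALTER SYSTEM SWITCH LOGFILE;
```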
SQL> column FILE_TYPE format a20
SQL> col name format a60
SQL> select name
2 , floor(space_limit / 1024 / 1024) "Size MB"
3 , ceil(space_used / 1024 / 1024) "Used MB"
4 from v$recovery_file_dest
5 order by name;
NAME Size MB Used MB
/u01/app/oracle/product/10.2.0/db_1/flash_recovery_area 2048 896
SQL> spool offspool u01/app/oracle/vimal.log
SP2-0768: Illegal SPOOL command
Usage: SPOOL { <file> | OFF | OUT }
where <file> is file_name[.ext] [CRE[ATE]|REP[LACE]|APP[END]]
SQL> spool /u01/app/oracle/vimal.log
Standby output_
SQL> set feedback off
SQL> set trimspool on
SQL> set line 500
SQL> set pagesize 50
SQL> set linesize 200
SQL> column name for a30
SQL> column display_value for a30
SQL> col value for a10
SQL> col PROTECTION_MODE for a15
SQL> col DATABASE_Role for a15
SQL> SELECT name, display_value FROM v$parameter WHERE name IN ('db_name','db_unique_name','log_archive_config','log_archive_dest_2','log_archive_dest_state_2','fal_client','fal_server','standby_file_management','standby_archive_dest','db_file_name_convert','log_file_name_convert','remote_login_passwordfile','local_listener','dg_broker_start','dg_broker_config_file1','dg_broker_config_file2','log_archive_max_processes') order by name;
NAME DISPLAY_VALUE
db_file_name_convert /u01/app/oracle/product/10.2.0
/db_1/oradata/CHENNAI/datafile
/, /home/oracle/oracle/product
/10.2.0/db_1/oradata/MUMBAI/da
tafile/
db_name chennai
db_unique_name mumbai
dg_broker_config_file1 /home/oracle/oracle/product/10
.2.0/db_1/dbs/dr1mumbai.dat
dg_broker_config_file2 /home/oracle/oracle/product/10
.2.0/db_1/dbs/dr2mumbai.dat
dg_broker_start FALSE
fal_client mumbai
fal_server chennai
local_listener
log_archive_config DG_CONFIG=(chennai,mumbai)
log_archive_dest_2 SERVICE=chennai LGWR ASYNC VAL
ID_FOR=(ONLINE_LOGFILES,PRIMAR
Y_ROLE) DB_UNIQUE_NAME=chennai
log_archive_dest_state_2 ENABLE
log_archive_max_processes 2
log_file_name_convert /u01/app/oracle/product/10.2.0
/db_1/oradata/CHENNAI/onlinelo
g/, /home/oracle/oracle/produc
t/10.2.0/db_1/oradata/MUMBAI/o
nlinelog/, /u01/app/oracle/pro
duct/10.2.0/db_1/flash_recover
y_area/CHENNAI/onlinelog/, /ho
me/oracle/oracle/product/10.2.
0/db_1/flash_recovery_area/MUM
BAI/onlinelog/
remote_login_passwordfile EXCLUSIVE
standby_archive_dest ?/dbs/arch
standby_file_management AUTO
SQL> col name for a10
SQL> col DATABASE_ROLE for a10
SQL> SELECT name,db_unique_name,protection_mode,DATABASE_ROLE,OPEN_MODE from v$database;
NAME DB_UNIQUE_NAME PROTECTION_MODE DATABASE_R OPEN_MODE
CHENNAI mumbai MAXIMUM PERFORMANCE PHYSICAL STANDBY MOUNTED
SQL> select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread#;
SQL> select process, status,thread#,sequence# from v$managed_standby;
PROCESS STATUS THREAD# SEQUENCE#
ARCH CONNECTED 0 0
ARCH CONNECTED 0 0
MRP0 WAIT_FOR_LOG 1 152
SQL> col name for a30
SQL> select * from v$dataguard_stats;
NAME VALUE UNIT TIME_COMPUTED
apply finish time day(2) to second(1) interval
apply lag day(2) to second(0) interval
estimated startup time 10 second
standby has been open N
transport lag day(2) to second(0) interval
SQL> select * from v$archive_gap;
SQL> col name format a60
SQL> select name
2 , floor(space_limit / 1024 / 1024) "Size MB"
3 , ceil(space_used / 1024 / 1024) "Used MB"
4 from v$recovery_file_dest
5 order by name;
NAME Size MB Used MB
/home/oracle/oracle/product/10.2.0/db_1/flash_recovery_area/ 2048 150
SQL> spool off
-Vimal. -
Hi Guys,
I am trying to set up a physical standby across 2 locations in different cities, and used the RMAN duplicate command to set up the standby database at the DR location.
Once RMAN completed, we executed the alter database recover managed standby database using current logfile command..
On standby database it shows:
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 0
Next log sequence to archive 0
Current log sequence 0
and on primary current sequence is 3395
Also on standby
select PROCESS,STATUS,THREAD#,SEQUENCE#,BLOCK#,DELAY_MINS from v$managed_standby where process like 'MRP%';
PROCESS STATUS THREAD# SEQUENCE# BLOCK# DELAY_MINS
MRP0 WAIT_FOR_LOG 1 3788 0 0
Not sure what is the issue, please suggest..
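When MRP0 sits in WAIT_FOR_LOG at a sequence far behind the primary, transport has usually never started (the primary's remote destination is in error) or there is an unresolved gap. A few hedged checks — these are standard views, but which one exposes the problem depends on your setup:

```sql
-- On the standby: any detected gap?
SELECT * FROM v$archive_gap;
-- On the standby: is anything being received at all (look for RFS processes)?
SELECT process, status, thread#, sequence# FROM v$managed_standby;
-- On the primary: what does the remote destination report?
SELECT dest_id, status, error FROM v$archive_dest WHERE dest_id = 2;
```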
Thanks and Regards!

Thanks for the reply, guys..
On Primary database:
SQL> SELECT name, display_value FROM v$parameter WHERE name IN ('db_name','db_unique_name','log_archive_config','log_archive_dest_2','log_archive_dest_state_2','fal_client','fal_server','standby_file_management','standby_archive_dest','db_file_name_convert','log_file_name_convert','remote_login_passwordfile','local_listener','dg_broker_start','dg_broker_config_file1','dg_broker_config_file2','log_archive_max_processes') order by name;
NAME DISPLAY_VALUE
db_file_name_convert
db_name orcl
db_unique_name orcl
dg_broker_config_file1 D:\SOFTWARES\ORACLE11GR2DB\PRO
DUCT\11.2.0\DBHOME_1\DATABASE\
DR1ORCL.DAT
dg_broker_config_file2 D:\SOFTWARES\ORACLE11GR2DB\PRO
DUCT\11.2.0\DBHOME_1\DATABASE\
DR2ORCL.DAT
dg_broker_start FALSE
fal_client
fal_server
local_listener LISTENER_ORCL
log_archive_config dg_config=(orcl,dr)
log_archive_dest_2 service=dr async valid_for=(on
line_logfile,primary_role) db_
unique_name=dr
log_archive_dest_state_2 enable
log_archive_max_processes 4
log_file_name_convert
remote_login_passwordfile EXCLUSIVE
standby_archive_dest %ORACLE_HOME%\RDBMS
standby_file_management MANUAL
SQL> col name for a10
SQL> col DATABASE_ROLE for a10
SQL> SELECT name,db_unique_name,protection_mode,DATABASE_ROLE,OPEN_MODE,switchover_status from v$database;
NAME DB_UNIQUE_NAME PROTECTION_MODE DATABASE_R OPEN_MODE SWITCHOVER_STATUS
ORCL orcl MAXIMUM PERFORMANCE PRIMARY READ WRITE RESOLVABLE GAP
SQL> select thread#,max(sequence#) from v$archived_log group by thread#;
THREAD# MAX(SEQUENCE#)
1 3798
SQL> SELECT ARCH.THREAD# "Thread", ARCH.SEQUENCE# "Last Sequence Received", APPL.SEQUENCE# "Last Sequence Applied", (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
2 FROM
3 (SELECT THREAD# ,SEQUENCE# FROM V$ARCHIVED_LOG WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
4 (SELECT THREAD# ,SEQUENCE# FROM V$LOG_HISTORY WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
5 WHERE ARCH.THREAD# = APPL.THREAD# ORDER BY 1;
Thread Last Sequence Received Last Sequence Applied Difference
1 3798 3798 0
SQL> col severity for a15
SQL> col message for a70
SQL> col timestamp for a20
SQL> select severity,error_code,to_char(timestamp,'DD-MON-YYYY HH24:MI:SS') "timestamp" , message from v$dataguard_status where dest_id=2;
SEVERITY ERROR_CODE timestamp MESSAGE
Error 16058 16-JUN-2012 21:30:15 PING[ARC1]: Heartbeat failed to connect to standby 'dr'. Error is 16058.
[... the same PING[ARC1] ORA-16058 heartbeat error repeats roughly once a minute from 16-JUN-2012 22:39:52 through 17-JUN-2012 01:35:06 ...]
SQL> select ds.dest_id id
2 , ad.status
3 , ds.database_mode db_mode
4 , ad.archiver type
5 , ds.recovery_mode
6 , ds.protection_mode
7 , ds.standby_logfile_count "SRLs"
8 , ds.standby_logfile_active active
9 , ds.archived_seq#
10 from v$archive_dest_status ds
11 , v$archive_dest ad
12 where ds.dest_id = ad.dest_id
13 and ad.status != 'INACTIVE'
14 order by
15 ds.dest_id;
ID STATUS DB_MODE TYPE RECOVERY_MODE PROTECTION_MODE SRLs ACTIVE ARCHIVED_SEQ#
1 VALID OPEN ARCH IDLE MAXIMUM PERFORMANCE 0 0 3798
2 ERROR UNKNOWN LGWR IDLE MAXIMUM PERFORMANCE 0 0 0
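ORA-16058 on dest_id 2 means the standby database instance is not mounted, so the primary's archiver processes cannot attach to it. A hedged recovery sequence — assuming the standby controlfile and tns alias 'dr' are otherwise correct:

```sql
-- On the standby:
STARTUP NOMOUNT;
ALTER DATABASE MOUNT STANDBY DATABASE;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
-- On the primary, bounce the destination so it retries immediately:
ALTER SYSTEM SET log_archive_dest_state_2 = 'DEFER';
ALTER SYSTEM SET log_archive_dest_state_2 = 'ENABLE';
ALTER SYSTEM SWITCH LOGFILE;
```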
SQL> column FILE_TYPE format a20
SQL> col name format a60
SQL> select name
2 , floor(space_limit / 1024 / 1024) "Size MB"
3 , ceil(space_used / 1024 / 1024) "Used MB"
4 from v$recovery_file_dest
5 order by name;
NAME Size MB Used MB
D:\Softwares\Oracle11gR2DB\flash_recovery_area 3912 282
SQL> spool off -
Backup controlfile to trace as 'c:\prod_ctl.txt'
Hi All,
11.2.0.1
I am examining the output of the trace file, because I want to simulate or test this process.
This file has two (2) sets of commands: one (1) is for intact online logs and the other is for damaged online logs.
Questions:
1. What do you mean by damaged online logs? Does this mean that I lost all these redo files:
LOGFILE
GROUP 1 'D:\APP\PROD\ORADATA\ORCL\REDO01.LOG' SIZE 50M BLOCKSIZE 512,
GROUP 2 'D:\APP\PROD\ORADATA\ORCL\REDO02.LOG' SIZE 50M BLOCKSIZE 512,
GROUP 3 'D:\APP\PROD\ORADATA\ORCL\REDO03.LOG' SIZE 50M BLOCKSIZE 512
2. Why are the two (2) sets almost the same commands, except for the backup controlfile and open resetlogs parts?
3. Do I need an RMAN backup to run these recovery statements?
Thanks,
pK
========
-- The following are current System-scope REDO Log Archival related
-- parameters and can be included in the database initialization file.
-- LOG_ARCHIVE_DEST=''
-- LOG_ARCHIVE_DUPLEX_DEST=''
-- LOG_ARCHIVE_FORMAT=ARC%S_%R.%T
-- DB_UNIQUE_NAME="orcl"
-- LOG_ARCHIVE_CONFIG='SEND, RECEIVE, NODG_CONFIG'
-- LOG_ARCHIVE_MAX_PROCESSES=4
-- STANDBY_FILE_MANAGEMENT=MANUAL
-- STANDBY_ARCHIVE_DEST=%ORACLE_HOME%\RDBMS
-- FAL_CLIENT=''
-- FAL_SERVER=''
-- LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'
-- LOG_ARCHIVE_DEST_1='MANDATORY NOREOPEN NODELAY'
-- LOG_ARCHIVE_DEST_1='ARCH NOAFFIRM EXPEDITE NOVERIFY SYNC'
-- LOG_ARCHIVE_DEST_1='NOREGISTER NOALTERNATE NODEPENDENCY'
-- LOG_ARCHIVE_DEST_1='NOMAX_FAILURE NOQUOTA_SIZE NOQUOTA_USED NODB_UNIQUE_NAME'
-- LOG_ARCHIVE_DEST_1='VALID_FOR=(PRIMARY_ROLE,ONLINE_LOGFILES)'
-- LOG_ARCHIVE_DEST_STATE_1=ENABLE
-- Below are two sets of SQL statements, each of which creates a new
-- control file and uses it to open the database. The first set opens
-- the database with the NORESETLOGS option and should be used only if
-- the current versions of all online logs are available. The second
-- set opens the database with the RESETLOGS option and should be used
-- if online logs are unavailable.
-- The appropriate set of statements can be copied from the trace into
-- a script file, edited as necessary, and executed when there is a
-- need to re-create the control file.
-- Set #1. NORESETLOGS case
-- The following commands will create a new control file and use it
-- to open the database.
-- Data used by Recovery Manager will be lost.
-- Additional logs may be required for media recovery of offline
-- Use this only if the current versions of all online logs are
-- available.
-- After mounting the created controlfile, the following SQL
-- statement will place the database in the appropriate
-- protection mode:
-- ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "ORCL" NORESETLOGS NOARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 8
MAXLOGHISTORY 292
LOGFILE
GROUP 1 'D:\APP\PROD\ORADATA\ORCL\REDO01.LOG' SIZE 50M BLOCKSIZE 512,
GROUP 2 'D:\APP\PROD\ORADATA\ORCL\REDO02.LOG' SIZE 50M BLOCKSIZE 512,
GROUP 3 'D:\APP\PROD\ORADATA\ORCL\REDO03.LOG' SIZE 50M BLOCKSIZE 512
-- STANDBY LOGFILE
DATAFILE
'D:\APP\PROD\ORADATA\ORCL\SYSTEM01.DBF',
'D:\APP\PROD\ORADATA\ORCL\SYSAUX01.DBF',
'D:\APP\PROD\ORADATA\ORCL\UNDOTBS01.DBF',
'D:\APP\PROD\ORADATA\ORCL\USERS01.DBF',
'D:\APP\PROD\ORADATA\ORCL\EXAMPLE01.DBF'
CHARACTER SET WE8MSWIN1252
-- Commands to re-create incarnation table
-- Below log names MUST be changed to existing filenames on
-- disk. Any one log file from each branch can be used to
-- re-create incarnation records.
-- ALTER DATABASE REGISTER LOGFILE 'D:\APP\PROD\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2013_12_26\O1_MF_1_1_%U_.ARC';
-- ALTER DATABASE REGISTER LOGFILE 'D:\APP\PROD\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2013_12_26\O1_MF_1_1_%U_.ARC';
-- Recovery is required if any of the datafiles are restored backups,
-- or if the last shutdown was not normal or immediate.
RECOVER DATABASE
-- Database can now be opened normally.
ALTER DATABASE OPEN;
-- Commands to add tempfiles to temporary tablespaces.
-- Online tempfiles have complete space information.
-- Other tempfiles may require adjustment.
ALTER TABLESPACE TEMP ADD TEMPFILE 'D:\APP\PROD\ORADATA\ORCL\TEMP01.DBF'
SIZE 20971520 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;
-- End of tempfile additions.
-- Set #2. RESETLOGS case
-- The following commands will create a new control file and use it
-- to open the database.
-- Data used by Recovery Manager will be lost.
-- The contents of online logs will be lost and all backups will
-- be invalidated. Use this only if online logs are damaged.
-- After mounting the created controlfile, the following SQL
-- statement will place the database in the appropriate
-- protection mode:
-- ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "ORCL" RESETLOGS NOARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 8
MAXLOGHISTORY 292
LOGFILE
GROUP 1 'D:\APP\PROD\ORADATA\ORCL\REDO01.LOG' SIZE 50M BLOCKSIZE 512,
GROUP 2 'D:\APP\PROD\ORADATA\ORCL\REDO02.LOG' SIZE 50M BLOCKSIZE 512,
GROUP 3 'D:\APP\PROD\ORADATA\ORCL\REDO03.LOG' SIZE 50M BLOCKSIZE 512
-- STANDBY LOGFILE
DATAFILE
'D:\APP\PROD\ORADATA\ORCL\SYSTEM01.DBF',
'D:\APP\PROD\ORADATA\ORCL\SYSAUX01.DBF',
'D:\APP\PROD\ORADATA\ORCL\UNDOTBS01.DBF',
'D:\APP\PROD\ORADATA\ORCL\USERS01.DBF',
'D:\APP\PROD\ORADATA\ORCL\EXAMPLE01.DBF'
CHARACTER SET WE8MSWIN1252
-- Commands to re-create incarnation table
-- Below log names MUST be changed to existing filenames on
-- disk. Any one log file from each branch can be used to
-- re-create incarnation records.
-- ALTER DATABASE REGISTER LOGFILE 'D:\APP\PROD\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2013_12_26\O1_MF_1_1_%U_.ARC';
-- ALTER DATABASE REGISTER LOGFILE 'D:\APP\PROD\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2013_12_26\O1_MF_1_1_%U_.ARC';
-- Recovery is required if any of the datafiles are restored backups,
-- or if the last shutdown was not normal or immediate.
RECOVER DATABASE USING BACKUP CONTROLFILE
-- Database can now be opened zeroing the online logs.
ALTER DATABASE OPEN RESETLOGS;
-- Commands to add tempfiles to temporary tablespaces.
-- Online tempfiles have complete space information.
-- Other tempfiles may require adjustment.
ALTER TABLESPACE TEMP ADD TEMPFILE 'D:\APP\PROD\ORADATA\ORCL\TEMP01.DBF'
SIZE 20971520 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;
-- End of tempfile additions.
===============
Hi,
As far as I understand, one is with:
CREATE CONTROLFILE REUSE DATABASE "ORCL" RESETLOGS NOARCHIVELOG
and the other is with:
CREATE CONTROLFILE REUSE DATABASE "ORCL" NORESETLOGS NOARCHIVELOG
The first (RESETLOGS) is useful when you are opening a database after incomplete recovery (during cloning, for example), or when changing the name of the database after restoring and recovering it as part of a clone.
The second (NORESETLOGS) is used when the database is consistent and you are simply re-creating the control file.
In general, you would use RESETLOGS when opening a cloned database after incomplete recovery.
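A minimal sketch of the two paths, assuming the generated script has been edited with real file names (the ellipsis comments stand for the LOGFILE/DATAFILE clauses taken from the trace):

```sql
-- NORESETLOGS: database is consistent and all online logs are intact.
CREATE CONTROLFILE REUSE DATABASE "ORCL" NORESETLOGS NOARCHIVELOG
  -- ... LOGFILE and DATAFILE clauses from the trace ...
;
RECOVER DATABASE;
ALTER DATABASE OPEN;

-- RESETLOGS: incomplete recovery (e.g. a clone); online logs are unusable.
CREATE CONTROLFILE REUSE DATABASE "ORCL" RESETLOGS NOARCHIVELOG
  -- ... LOGFILE and DATAFILE clauses from the trace ...
;
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
ALTER DATABASE OPEN RESETLOGS;
```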
Regards
Karan -
Few errors/doubts in Primary and Standby server - Need Help
Hi All,
I have a few doubts/errors below and need help with all of them.
I configured Data Guard successfully. The sync is now up to date using the parameters below (on 11g):
at primary:
log_archive_dest_1=
log_archive_dest_2='SERVICE=standby.123 arch'
standby_file_management=auto
at standby:
log_archive_dest_1=
standby_file_management=auto
I still see the messages below in the alert log. Can anybody explain each of these points?
Primary:
ORA-1652: unable to extend temp segment by 640 in tablespace NEWTEMP
I got this error when my archive destination filled up; after we freed enough space, archives are being generated again. I still see this message occasionally and need to resolve it.
Primary:
Checkpoint not complete (I see this message very often and want to get rid of it)
standby:
kcrrvslf: active RFS archival for log thread 1 sequence (sometimes i see this KCRRVSLF)
standby:
check that the CONTROL_FILE_RECORD_KEEP_TIME initialization parameter is defined to a value that is sufficiently large to maintain adequate log switch information to resolve archivelog gaps. (I also see this message in the standby alert log.)
Standby:
FAL[client]: Error fetching gap sequence, no FAL server specified (this message appears very often. How do I remove it? Do I need to add the parameters below?)
FAL_CLIENT
FAL_SERVER
Thanks in advance.
Pas Moh
[email protected]
Pas Moh wrote:
Hi All,
I am having below doubts/errors. I need help to solve all the below questions.
I configured Dataguard successfully. Now the sync is upto date using the below parameters (using 11g):
at primary:
log_archive_dest_1=
log_archive_dest_2='SERVICE=standby.123 arch'
standby_file_management=auto
at sandby:
log_archive_dest_1=
standby_file_management=auto
Still i face below messages in the alertliog. Can anybody clearly explain me all the points.
Primary:
ORA-1652: unable to extend temp segment by 640 in tablespace NEWTEMP
I get this error when my archvie destination got filled up, later we released the enough space, the archived are generating. Still i see this message once in between. Need to solve this.
This error has absolutely nothing to do with the handling of archive logs, the status of the archivelog destination, or primary/standby. Any relation you thought you saw was pure coincidence.
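As for the ORA-1652 itself, the usual remedy is to give the NEWTEMP temporary tablespace room to grow; a hedged sketch only, since the file names and sizes here are assumptions:

```sql
-- Let the existing tempfile autoextend, or add a second one.
ALTER DATABASE TEMPFILE '/u01/oradata/ORCL/newtemp01.dbf'
  AUTOEXTEND ON NEXT 64M MAXSIZE 8G;

ALTER TABLESPACE NEWTEMP ADD TEMPFILE '/u01/oradata/ORCL/newtemp02.dbf'
  SIZE 1G AUTOEXTEND ON NEXT 64M MAXSIZE 8G;
```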
>
>
Primary:
Checkpoint not complete ( i see this message very often, want to get rid of this)
Here is the very first hit I got when I googled "oracle checkpoint not complete". Tom says it better than I would have.
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:69012348056
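In short, LGWR is wrapping around to a redo log whose checkpoint has not yet completed; the usual remedy is larger or additional log groups. A sketch only, with assumed group numbers, paths and sizes:

```sql
-- Check current log sizes and status first.
SELECT group#, bytes/1024/1024 AS mb, status FROM v$log;

-- Add larger groups, then drop the small ones once they go INACTIVE.
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/ORCL/redo04.log') SIZE 500M;
ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/oradata/ORCL/redo05.log') SIZE 500M;
ALTER DATABASE DROP LOGFILE GROUP 1;
```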
standby:
kcrrvslf: active RFS archival for log thread 1 sequence (sometimes i see this KCRRVSLF)
A quick google of that one suggests it is not even an error, but rather simply an informative message.
>
standby:
check that the CONTROL_FILE_RECORD_KEEP_TIME initialization parameter is defined to a value that is sufficiently large enough to maintain afequate log switch information to resolve archivelog gaps. (also get this message in between the alertlog file in standby)
In that case I would check that the CONTROL_FILE_RECORD_KEEP_TIME initialization parameter is set to a value sufficiently large to maintain adequate log switch information to resolve archivelog gaps.
How big is that? Well, at least bigger than it is now. Beyond that, it has to be "large enough to maintain adequate log switch information to resolve archivelog gaps". Just experiment with it until you get the result you want.
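To check and adjust it, something like the following; the value 30 (days) is only an illustration, size it to cover your longest expected standby outage:

```sql
SHOW PARAMETER control_file_record_keep_time

-- Days of circular-reuse metadata (log history etc.) kept in the control file.
ALTER SYSTEM SET control_file_record_keep_time = 30 SCOPE=BOTH;
```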
>
Standby:
FAL[client]: Error fetching gap sequence, no FAL server specified (this is very often message. How to remove this, what needs to be added. Do i need to add the below parameters)
FAL_CLIENT
FAL_SERVER
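Yes: that message means the standby has nowhere to request missing sequences from. A sketch of setting both parameters on the standby, where prim_srv and stby_srv are assumed TNS aliases standing in for your own net service names:

```sql
-- FAL_SERVER: where this standby fetches missing archived logs from.
-- FAL_CLIENT: the service name by which the primary reaches this standby.
ALTER SYSTEM SET fal_server = 'prim_srv' SCOPE=BOTH;
ALTER SYSTEM SET fal_client = 'stby_srv' SCOPE=BOTH;
```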
Thanks in advance.
Pas Moh
[email protected] -
Creating standby DB after failover
Hi,
I have performed a failover to my standby DB; now I need to re-create the standby for the new production.
But there is some confusion: previously, in my production, db_name and db_unique_name were the same, say
test1, while the db_name and db_unique_name of the standby were test1 and test2, and I created the standby that way, using test1 and test2 in log_archive_config.
After the failover the scenario changed: the production now has two different values, db_name test1 and db_unique_name test2, and I need to create a standby from it. This confuses me: how do I create the standby? What should the db_unique_name of the new standby be?
Please help me...
regards,
user8983130 wrote:
thanks.
we use db_unique_name in log_archive_config, ok???
and in log_archive_dest_state_2 we use the service name.. should it be the same as the db_unique_name???
DB_NAME should be the same across the primary and physical standby databases.
For DB_UNIQUE_NAME, choose a different name for each database.
The service names you use, whether in FAL_CLIENT/FAL_SERVER or in the DEST_n parameters, have no relation to DB_NAME/DB_UNIQUE_NAME; a service name is just how you choose to address the database.
HTH. -
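Putting that together for the post-failover pair, a hedged sketch; test3 is an assumed DB_UNIQUE_NAME and service name for the new standby:

```sql
-- On the new primary (db_name=test1, db_unique_name=test2):
ALTER SYSTEM SET log_archive_config = 'DG_CONFIG=(test2,test3)' SCOPE=BOTH;
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=test3 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=test3'
  SCOPE=BOTH;
```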
All of a sudden, primary alert log filled
ORA-00270: error creating archive log
FAL[server, ARC2]: FAL archive failed, see trace file.
Errors in file /mv1/oracle/admin/PRD/diag/diag/rdbms/prd/PRD1/trace/PRD1_arc2_909440.trc:
ORA-16055: FAL request rejected
ARCH: FAL archive failed. Archiver continuing
ORACLE Instance PRD1 - Archival Error. Archiver continuing.
DB version is 11.1.0.7, and nothing has changed except that I just increased the db recovery area on the standby side.
Can anybody suggest ways to debug this?
Thanks
NAME DISPLAY_VALUE
db_file_name_convert
db_name PRD
db_unique_name PRD
dg_broker_config_file1 /u01/orahome_a/dbs/dr1PRD.dat
dg_broker_config_file2 /u01/orahome_a/dbs/dr2PRD.dat
dg_broker_start FALSE
fal_client
fal_server
local_listener listener_oracle
log_archive_config
log_archive_dest_2 service=PRD_standby.PROD
log_archive_dest_state_2 ENABLE
log_archive_max_processes 4
log_file_name_convert /oradata/dy, +PRD_ DAT
remote_login_passwordfile EXCLUSIVE
standby_archive_dest ?/dbs/arch
standby_file_management auto
SQL> col name for a10
SQL> col DATABASE_ROLE for a10
SQL> SELECT name,db_unique_name,protection_mode,DATABASE_ROLE,OPEN_MODE,switchover_status from v$database;
NAME DB_UNIQUE_NAME PROTECTION_MODE DATABASE_R OPEN_MODE SWITCHOVER_STATUS
PRD PRD MAXIMUM PERFORMANCE PRIMARY READ WRITE SESSIONS ACTIVE
SQL> select thread#,max(sequence#) from v$archived_log group by thread#;
THREAD# MAX(SEQUENCE#)
1 78035
2 77624
SQL> SELECT ARCH.THREAD# "Thread", ARCH.SEQUENCE# "Last Sequence Received", APPL.SEQUENCE# "Last Sequence Applied", (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
2 FROM
3 (SELECT THREAD# ,SEQUENCE# FROM V$ARCHIVED_LOG WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
4 (SELECT THREAD# ,SEQUENCE# FROM V$LOG_HISTORY WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
5 WHERE ARCH.THREAD# = APPL.THREAD# ORDER BY 1;
Thread Last Sequence Received Last Sequence Applied Difference
1 78035 78035 0
1 78035 78035 0
2 77624 77624 0
2 77624 77624 0
SQL> col severity for a15
SQL> col message for a70
SQL> col timestamp for a20
SQL> select severity,error_code,to_char(timestamp,'DD-MON-YYYY HH24:MI:SS') "timestamp" , message from v$dataguard_status where dest_id=2;
SEVERITY ERROR_CODE timestamp MESSAGE
Warning 270 06-JUL-2012 18:46:02 ARC3: Attempting destination LOG_ARCHIVE_DEST_2 network reconnect (270) )
Warning 270 06-JUL-2012 18:46:02 ARC3: Destination LOG_ARCHIVE_DEST_2 network reconnect abandoned
Error 270 06-JUL-2012 18:46:02 FAL[server, ARC3]: Error 270 creating remote archivelog file 'PRD_ standby.PROD'
Error 270 06-JUL-2012 18:46:05 FAL[server, ARC3]: Error 270 creating remote archivelog file 'PRD_ standby.PROD'
SQL> select ds.dest_id id
2 , ad.status
3 , ds.database_mode db_mode
4 , ad.archiver type
5 , ds.recovery_mode
6 , ds.protection_mode
7 , ds.standby_logfile_count "SRLs"
8 , ds.standby_logfile_active active
9 , ds.archived_seq#
10 from v$archive_dest_status ds
11 , v$archive_dest ad
12 where ds.dest_id = ad.dest_id
13 and ad.status != 'INACTIVE'
14 order by
15 ds.dest_id;
ID STATUS DB_MODE TYPE RECOVERY_MODE PROTECTION_MODE SRLs ACTIVE ARCHIVED_SEQ#
1 VALID OPEN ARCH IDLE MAXIMUM PERFORMANCE 0 0 78035
2 ERROR MOUNTED-STANDBY LGWR IDLE MAXIMUM PERFORMANCE 0 0 77578
3 VALID OPEN ARCH IDLE MAXIMUM PERFORMANCE 0 0 78035
From standby side:
NAME DISPLAY_VALUE
db_file_name_convert
db_name PRD
db_unique_name PRD
dg_broker_config_file1 /u01/orahome_a/dbs/dr1PRD.dat
dg_broker_config_file2 /u01/orahome_a/dbs/dr2PRD.dat
dg_broker_start FALSE
fal_client
fal_server
local_listener listener_oracle4
log_archive_config
log_archive_dest_2
log_archive_dest_state_2 enable
log_archive_max_processes 4
log_file_name_convert
remote_login_passwordfile EXCLUSIVE
standby_archive_dest ?/dbs/arch
standby_file_management auto
SQL> SQL> SQL> col name for a30
select * from v$dataguard_stats;
select * from v$archive_gap;
col name format a60
select name
, floor(space_limit / 1024 / 1024) "Size MB"
, ceil(space_used / 1024 / 1024) "Used MB"
from v$recovery_file_dest
order by name;
spool off
NAME DB_UNIQUE_NAME PROTECTION_MODE DATABASE_R OPEN_MODE
PRD PRD MAXIMUM PERFORMANCE PHYSICAL STANDBY MOUNTED
SQL>
THREAD# MAX(SEQUENCE#)
1 77989
2 77577
SQL>
PROCESS STATUS THREAD# SEQUENCE#
ARCH CONNECTED 0 0
ARCH CONNECTED 0 0
ARCH CONNECTED 0 0
ARCH CONNECTED 0 0
SQL> SQL>
NAME VALUE UNIT TIME_COMPUTED
apply finish time day(2) to second(3) interval
apply lag day(2) to second(0) interval
estimated startup time 12 second
standby has been open N
transport lag day(2) to second(0) interval
SQL> SQL> SQL> 2 3 4 5
NAME Size MB Used MB
/oracle_arch/_FRA 30720 99216 -
A handy query which lists all important DG related init parameters
Version : 11.2/10.2
Do you guys have a handy query which I could run at Primary and Standby sites which will lists all important
Data Guard related init parameters.
Something like below but a query that list important Dataguard related init.ora parameters
col name format a35
col display_value forma a20
set pages 25
SELECT name, display_value FROM v$parameter WHERE name IN ('db_name',
'db_block_size','undo_retention',
'shared_servers',
'memory_target','sessions',
'processes',
'session_cached_cursors',
'sga_target',
'pga_aggregate_target',
'compatible',
'open_cursors',
'nls_date_format',
'db_file_multiblock_read_count',
'cpu_count',
'cursor_sharing') ORDER BY name;
Yes, more parameters from Mseberg..
Adding one more important parameter, LOCAL_LISTENER, which plays a big role in Data Guard with RAC too..
sys@ORCL> SELECT name, display_value FROM v$parameter WHERE name IN ('db_name','db_unique_name','log_archive_config','log_archive_dest_2','log_archive_dest_state_2','fal_client','fal_server','standby_file_management','standby_archive_dest','db_file_name_convert','log_file_name_convert','remote_login_passwordfile','local_listener') order by name;
NAME DISPLAY_VALUE
db_file_name_convert
db_name orcl
db_unique_name orcl
fal_client
fal_server
local_listener
log_archive_config
log_archive_dest_2
log_archive_dest_state_2 enable
log_file_name_convert
remote_login_passwordfile EXCLUSIVE
standby_archive_dest %ORACLE_HOME%\RDBMS
standby_file_management MANUAL
13 rows selected.
sys@ORCL> -
Dear Friends,
We have created DR server of our ECC production server.
We are running ECC 6.0, HP-UX and oracle 10g.
Now we want to create DR for the same using offline backup of our primary database.
Please help me ?
Regards
Ganesh Datt Tiwari
Hi Rajesh,
Thanks for your valuable reply.
I have tried to find the file control.trc, but on my system there is a file "pdr_ora_22200.trc".
Please find the content of this file.
Unix process pid: 22200, image: oracle@jkeccdb (TNS V1-V3)
SERVICE NAME:(SYS$USERS) 2008-10-01 02:03:27.767
SESSION ID:(739.24722) 2008-10-01 02:03:27.767
2008-10-01 02:03:27.767
-- The following are current System-scope REDO Log Archival related
-- parameters and can be included in the database initialization file.
-- LOG_ARCHIVE_DEST=''
-- LOG_ARCHIVE_DUPLEX_DEST=''
-- LOG_ARCHIVE_FORMAT=%t_%s_%r.dbf
-- DB_UNIQUE_NAME="PDR"
-- LOG_ARCHIVE_CONFIG='SEND, RECEIVE, NODG_CONFIG'
-- LOG_ARCHIVE_MAX_PROCESSES=2
-- STANDBY_FILE_MANAGEMENT=MANUAL
-- STANDBY_ARCHIVE_DEST=?/dbs/arch
-- FAL_CLIENT=''
-- FAL_SERVER=''
-- LOG_ARCHIVE_DEST_1='LOCATION=/oracle/PDR/oraarch/PDRarch'
-- LOG_ARCHIVE_DEST_1='MANDATORY NOREOPEN NODELAY'
-- LOG_ARCHIVE_DEST_1='ARCH NOAFFIRM EXPEDITE NOVERIFY SYNC'
-- LOG_ARCHIVE_DEST_1='NOREGISTER NOALTERNATE NODEPENDENCY'
-- LOG_ARCHIVE_DEST_1='NOMAX_FAILURE NOQUOTA_SIZE NOQUOTA_USED NODB_UNIQUE_NAME'
-- LOG_ARCHIVE_DEST_1='VALID_FOR=(PRIMARY_ROLE,ONLINE_LOGFILES)'
-- LOG_ARCHIVE_DEST_STATE_1=ENABLE
-- Below are two sets of SQL statements, each of which creates a new
-- control file and uses it to open the database. The first set opens
-- the database with the NORESETLOGS option and should be used only if
-- the current versions of all online logs are available. The second
-- set opens the database with the RESETLOGS option and should be used
-- if online logs are unavailable.
-- The appropriate set of statements can be copied from the trace into
-- a script file, edited as necessary, and executed when there is a
-- need to re-create the control file.
-- Set #1. NORESETLOGS case
-- The following commands will create a new control file and use it
-- to open the database.
-- Data used by Recovery Manager will be lost.
-- Additional logs may be required for media recovery of offline
-- tablespaces. Use this only if the current versions of all online logs
-- are available.
-- After mounting the created controlfile, the following SQL
-- statement will place the database in the appropriate
-- protection mode:
-- ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "PDR" NORESETLOGS ARCHIVELOG
MAXLOGFILES 255
MAXLOGMEMBERS 3
MAXDATAFILES 254
MAXINSTANCES 50
MAXLOGHISTORY 23360
LOGFILE
GROUP 21 (
'/oracle/PDR/origlogA/log_g21m1.dbf',
'/oracle/PDR/mirrlogA/log_g21m2.dbf'
) SIZE 250M,
GROUP 22 (
'/oracle/PDR/origlogA/log_g22m1.dbf',
'/oracle/PDR/mirrlogA/log_g22m2.dbf'
) SIZE 250M,
GROUP 23 (
'/oracle/PDR/origlogB/log_g23m1.dbf',
'/oracle/PDR/mirrlogB/log_g23m2.dbf'
) SIZE 250M,
GROUP 24 (
'/oracle/PDR/origlogB/log_g24m1.dbf',
'/oracle/PDR/mirrlogB/log_g24m2.dbf'
) SIZE 250M
-- STANDBY LOGFILE
DATAFILE
'/oracle/PDR/sapdata1/system_1/system.data1',
'/oracle/PDR/sapdata3/undo_1/undo.data1',
'/oracle/PDR/sapdata1/sysaux_1/sysaux.data1',
'/oracle/PDR/sapdata1/pdr_1/pdr.data1',
'/oracle/PDR/sapdata1/pdr_2/pdr.data2',
'/oracle/PDR/sapdata1/pdr_3/pdr.data3',
'/oracle/PDR/sapdata1/pdr_4/pdr.data4',
'/oracle/PDR/sapdata1/pdr_5/pdr.data5',
'/oracle/PDR/sapdata2/pdr_6/pdr.data6',
'/oracle/PDR/sapdata2/pdr_7/pdr.data7',
'/oracle/PDR/sapdata2/pdr_8/pdr.data8',
'/oracle/PDR/sapdata2/pdr_9/pdr.data9',
'/oracle/PDR/sapdata2/pdr_10/pdr.data10',
'/oracle/PDR/sapdata3/pdr_11/pdr.data11',
'/oracle/PDR/sapdata3/pdr_12/pdr.data12',
'/oracle/PDR/sapdata3/pdr_13/pdr.data13',
'/oracle/PDR/sapdata3/pdr_14/pdr.data14',
'/oracle/PDR/sapdata3/pdr_15/pdr.data15',
'/oracle/PDR/sapdata4/pdr_16/pdr.data16',
'/oracle/PDR/sapdata4/pdr_17/pdr.data17',
'/oracle/PDR/sapdata4/pdr_18/pdr.data18',
'/oracle/PDR/sapdata4/pdr_19/pdr.data19',
'/oracle/PDR/sapdata4/pdr_20/pdr.data20',
'/oracle/PDR/sapdata1/pdr700_1/pdr700.data1',
'/oracle/PDR/sapdata1/pdr700_2/pdr700.data2',
'/oracle/PDR/sapdata1/pdr700_3/pdr700.data3',
'/oracle/PDR/sapdata1/pdr700_4/pdr700.data4',
'/oracle/PDR/sapdata2/pdr700_5/pdr700.data5',
'/oracle/PDR/sapdata2/pdr700_6/pdr700.data6',
'/oracle/PDR/sapdata2/pdr700_7/pdr700.data7',
'/oracle/PDR/sapdata2/pdr700_8/pdr700.data8',
'/oracle/PDR/sapdata3/pdr700_9/pdr700.data9',
'/oracle/PDR/sapdata3/pdr700_10/pdr700.data10',
'/oracle/PDR/sapdata3/pdr700_11/pdr700.data11',
'/oracle/PDR/sapdata3/pdr700_12/pdr700.data12',
'/oracle/PDR/sapdata4/pdr700_13/pdr700.data13',
'/oracle/PDR/sapdata4/pdr700_14/pdr700.data14',
'/oracle/PDR/sapdata4/pdr700_15/pdr700.data15',
'/oracle/PDR/sapdata4/pdr700_16/pdr700.data16',
'/oracle/PDR/sapdata1/pdrusr_1/pdrusr.data1',
'/oracle/PDR/sapdata4/pdr_21/pdr.data21',
'/oracle/PDR/sapdata4/pdr_22/pdr.data22',
'/oracle/PDR/sapdata4/pdr_23/pdr.data23',
'/oracle/PDR/sapdata4/pdr_24/pdr.data24',
'/oracle/PDR/sapdata4/pdr_25/pdr.data25',
'/oracle/PDR/sapdata4/pdr_26/pdr.data26',
'/oracle/PDR/sapdata1/pdr_27/pdr.data27',
'/oracle/PDR/sapdata4/pdr_28/pdr.data28',
'/oracle/PDR/sapdata3/pdr_29/pdr.data29',
'/oracle/PDR/sapdata2/pdr_30/pdr.data30',
'/oracle/PDR/sapdata5/pdr_31/pdr.data31',
'/oracle/PDR/sapdata5/pdr_32/pdr.data32',
'/oracle/PDR/sapdata5/pdr_33/pdr.data33',
'/oracle/PDR/sapdata5/pdr_34/pdr.data34',
'/oracle/PDR/sapdata5/pdr_35/pdr.data35',
'/oracle/PDR/sapdata6/pdr_36/pdr.data36',
'/oracle/PDR/sapdata6/pdr_37/pdr.data37',
'/oracle/PDR/sapdata6/pdr_38/pdr.data38',
'/oracle/PDR/sapdata6/pdr_39/pdr.data39',
'/oracle/PDR/sapdata6/pdr_40/pdr.data40',
'/oracle/PDR/sapdata5/pdr700_17/pdr700.data17',
'/oracle/PDR/sapdata5/pdr700_18/pdr700.data18',
'/oracle/PDR/sapdata6/pdr700_19/pdr700.data19',
'/oracle/PDR/sapdata6/pdr700_20/pdr700.data20',
'/oracle/PDR/sapdata6/pdr700_21/pdr700.data21',
'/oracle/PDR/sapdata5/pdr700_22/pdr700.data22',
'/oracle/PDR/sapdata6/pdr_41/pdr.data41',
'/oracle/PDR/sapdata6/pdr_42/pdr.data42',
'/oracle/PDR/sapdata6/pdr_43/pdr.data43',
'/oracle/PDR/sapdata5/pdr_44/pdr.data44',
'/oracle/PDR/sapdata5/pdr_45/pdr.data45'
CHARACTER SET UTF8
-- Commands to re-create incarnation table
-- Below log names MUST be changed to existing filenames on
-- disk. Any one log file from each branch can be used to
-- re-create incarnation records.
-- Recovery is required if any of the datafiles are restored backups,
-- or if the last shutdown was not normal or immediate.
RECOVER DATABASE
-- All logs need archiving and a log switch is needed.
ALTER SYSTEM ARCHIVE LOG ALL;
-- Database can now be opened normally.
ALTER DATABASE OPEN;
-- Commands to add tempfiles to temporary tablespaces.
-- Online tempfiles have complete space information.
-- Other tempfiles may require adjustment.
ALTER TABLESPACE PSAPTEMP ADD TEMPFILE '/oracle/PDR/sapdata2/temp_1/temp.data1'
SIZE 3380M REUSE AUTOEXTEND ON NEXT 20971520 MAXSIZE 10000M;
-- End of tempfile additions.
-- Set #2. RESETLOGS case
-- The following commands will create a new control file and use it
-- to open the database.
-- Data used by Recovery Manager will be lost.
-- The contents of online logs will be lost and all backups will
-- be invalidated. Use this only if online logs are damaged.
-- After mounting the created controlfile, the following SQL
-- statement will place the database in the appropriate
-- protection mode:
-- ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "PDR" RESETLOGS ARCHIVELOG
MAXLOGFILES 255
MAXLOGMEMBERS 3
MAXDATAFILES 254
MAXINSTANCES 50
MAXLOGHISTORY 23360
LOGFILE
GROUP 21 (
'/oracle/PDR/origlogA/log_g21m1.dbf',
'/oracle/PDR/mirrlogA/log_g21m2.dbf'
) SIZE 250M,
GROUP 22 (
'/oracle/PDR/origlogA/log_g22m1.dbf',
'/oracle/PDR/mirrlogA/log_g22m2.dbf'
) SIZE 250M,
GROUP 23 (
'/oracle/PDR/origlogB/log_g23m1.dbf',
'/oracle/PDR/mirrlogB/log_g23m2.dbf'
) SIZE 250M,
GROUP 24 (
'/oracle/PDR/origlogB/log_g24m1.dbf',
'/oracle/PDR/mirrlogB/log_g24m2.dbf'
) SIZE 250M
-- STANDBY LOGFILE
DATAFILE
'/oracle/PDR/sapdata1/system_1/system.data1',
'/oracle/PDR/sapdata3/undo_1/undo.data1',
'/oracle/PDR/sapdata1/sysaux_1/sysaux.data1',
'/oracle/PDR/sapdata1/pdr_1/pdr.data1',
'/oracle/PDR/sapdata1/pdr_2/pdr.data2',
'/oracle/PDR/sapdata1/pdr_3/pdr.data3',
'/oracle/PDR/sapdata1/pdr_4/pdr.data4',
'/oracle/PDR/sapdata1/pdr_5/pdr.data5',
'/oracle/PDR/sapdata2/pdr_6/pdr.data6',
'/oracle/PDR/sapdata2/pdr_7/pdr.data7',
'/oracle/PDR/sapdata2/pdr_8/pdr.data8',
'/oracle/PDR/sapdata2/pdr_9/pdr.data9',
'/oracle/PDR/sapdata2/pdr_10/pdr.data10',
'/oracle/PDR/sapdata3/pdr_11/pdr.data11',
'/oracle/PDR/sapdata3/pdr_12/pdr.data12',
'/oracle/PDR/sapdata3/pdr_13/pdr.data13',
'/oracle/PDR/sapdata3/pdr_14/pdr.data14',
'/oracle/PDR/sapdata3/pdr_15/pdr.data15',
'/oracle/PDR/sapdata4/pdr_16/pdr.data16',
'/oracle/PDR/sapdata4/pdr_17/pdr.data17',
'/oracle/PDR/sapdata4/pdr_18/pdr.data18',
'/oracle/PDR/sapdata4/pdr_19/pdr.data19',
'/oracle/PDR/sapdata4/pdr_20/pdr.data20',
'/oracle/PDR/sapdata1/pdr700_1/pdr700.data1',
'/oracle/PDR/sapdata1/pdr700_2/pdr700.data2',
'/oracle/PDR/sapdata1/pdr700_3/pdr700.data3',
'/oracle/PDR/sapdata1/pdr700_4/pdr700.data4',
'/oracle/PDR/sapdata2/pdr700_5/pdr700.data5',
'/oracle/PDR/sapdata2/pdr700_6/pdr700.data6',
'/oracle/PDR/sapdata2/pdr700_7/pdr700.data7',
'/oracle/PDR/sapdata2/pdr700_8/pdr700.data8',
'/oracle/PDR/sapdata3/pdr700_9/pdr700.data9',
'/oracle/PDR/sapdata3/pdr700_10/pdr700.data10',
'/oracle/PDR/sapdata3/pdr700_11/pdr700.data11',
'/oracle/PDR/sapdata3/pdr700_12/pdr700.data12',
'/oracle/PDR/sapdata4/pdr700_13/pdr700.data13',
'/oracle/PDR/sapdata4/pdr700_14/pdr700.data14',
'/oracle/PDR/sapdata4/pdr700_15/pdr700.data15',
'/oracle/PDR/sapdata4/pdr700_16/pdr700.data16',
'/oracle/PDR/sapdata1/pdrusr_1/pdrusr.data1',
'/oracle/PDR/sapdata4/pdr_21/pdr.data21',
'/oracle/PDR/sapdata4/pdr_22/pdr.data22',
'/oracle/PDR/sapdata4/pdr_23/pdr.data23',
'/oracle/PDR/sapdata4/pdr_24/pdr.data24',
'/oracle/PDR/sapdata4/pdr_25/pdr.data25',
'/oracle/PDR/sapdata4/pdr_26/pdr.data26',
'/oracle/PDR/sapdata1/pdr_27/pdr.data27',
'/oracle/PDR/sapdata4/pdr_28/pdr.data28',
'/oracle/PDR/sapdata3/pdr_29/pdr.data29',
'/oracle/PDR/sapdata2/pdr_30/pdr.data30',
'/oracle/PDR/sapdata5/pdr_31/pdr.data31',
'/oracle/PDR/sapdata5/pdr_32/pdr.data32',
'/oracle/PDR/sapdata5/pdr_33/pdr.data33',
'/oracle/PDR/sapdata5/pdr_34/pdr.data34',
'/oracle/PDR/sapdata5/pdr_35/pdr.data35',
'/oracle/PDR/sapdata6/pdr_36/pdr.data36',
'/oracle/PDR/sapdata6/pdr_37/pdr.data37',
'/oracle/PDR/sapdata6/pdr_38/pdr.data38',
'/oracle/PDR/sapdata6/pdr_39/pdr.data39',
'/oracle/PDR/sapdata6/pdr_40/pdr.data40',
'/oracle/PDR/sapdata5/pdr700_17/pdr700.data17',
'/oracle/PDR/sapdata5/pdr700_18/pdr700.data18',
'/oracle/PDR/sapdata6/pdr700_19/pdr700.data19',
'/oracle/PDR/sapdata6/pdr700_20/pdr700.data20',
'/oracle/PDR/sapdata6/pdr700_21/pdr700.data21',
'/oracle/PDR/sapdata5/pdr700_22/pdr700.data22',
'/oracle/PDR/sapdata6/pdr_41/pdr.data41',
'/oracle/PDR/sapdata6/pdr_42/pdr.data42',
'/oracle/PDR/sapdata6/pdr_43/pdr.data43',
'/oracle/PDR/sapdata5/pdr_44/pdr.data44',
'/oracle/PDR/sapdata5/pdr_45/pdr.data45'
CHARACTER SET UTF8
-- Commands to re-create incarnation table
-- Below log names MUST be changed to existing filenames on
-- disk. Any one log file from each branch can be used to
-- re-create incarnation records.
-- ALTER DATABASE REGISTER LOGFILE '/oracle/PDR/oraarch/PDRarch1_1_624465037.dbf
-- Recovery is required if any of the datafiles are restored backups,
-- or if the last shutdown was not normal or immediate.
RECOVER DATABASE USING BACKUP CONTROLFILE
-- Database can now be opened zeroing the online logs.
ALTER DATABASE OPEN RESETLOGS;
-- Commands to add tempfiles to temporary tablespaces.
-- Online tempfiles have complete space information.
-- Other tempfiles may require adjustment.
ALTER TABLESPACE PSAPTEMP ADD TEMPFILE '/oracle/PDR/sapdata2/temp_1/temp.data1'
SIZE 3380M REUSE AUTOEXTEND ON NEXT 20971520 MAXSIZE 10000M;
-- End of tempfile additions.