DataGuard in a Cluster
The standby and production databases are on different cluster nodes, but after a failover they can end up on the same node. I know that it is possible to run the production and the standby on the same server using the lock_name_space parameter. Is it possible to use DataGuard in such a cluster without major changes?
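As a hedged illustration of the lock_name_space arrangement mentioned in the question (all names and paths below are made up, not from this thread), the co-located standby instance's init.ora could look roughly like:

```
# init.ora fragment for a standby co-located with its primary (sketch)
db_name = PROD              # same db_name as the primary
instance_name = PRODSBY     # distinct instance name
lock_name_space = PRODSBY   # distinct lock namespace so both instances can run on one host
control_files = ('/u02/oradata/PRODSBY/control01.ctl')
```

The lock_name_space setting is what keeps the two instances with the same db_name from colliding on the same machine.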
Possible, sure.
A good idea? That depends on what you mean by a production database. Normally, you would do the migration on a dev database first, validate that your applications still work properly (and fix the inevitable issues), move to test, etc. You wouldn't normally be doing a migration directly to the production instance.
Justin
Distributed Database Consulting, Inc.
http://www.ddbcinc.com/askDDBC
Similar Messages
-
How to manually create a standby db with SAN for hardware cluster failover
Hi all,
The primary db (Oracle 9i R2, Sun Solaris) keeps its datafiles, redo logs, and control files in the SAN, while the pfile, listener, and tnsnames files are on its local hard disk. I need to create a standby db (not for Data Guard) for hardware cluster failover, where if the primary db fails, the hardware cluster fails over to the standby db (mounting the datafiles, redo logs, and control files in the SAN to a mount point automatically and starting the db services). But I don't know how to create the standby db. I would install the db software first; then should I copy the pfile (created from the primary db), listener, and tnsnames files to the standby db? What are the correct steps to do it? Any advice is greatly appreciated.
Thanks ackermsb for the response. I think I have confused setting up a standby db with creating an HA setup. What I am trying to achieve is an HA configuration with vendor clusterware and Oracle, e.g. Sun Cluster HA for Oracle.
The steps involved in preparing the primary and standby Oracle installations are:
Oracle application files – These include the Oracle binaries, configuration files, and parameter files. They need to be installed separately on the two servers' local disks.
Database-related files – These include the control files, redo logs, and data files, which are placed on a cluster file system.
What I don't know how to do is this: after installing the Oracle binaries on the standby server, should I create the listener, tnsnames, and pfile files, or copy them from the primary db?
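As a rough sketch of the copy-from-primary approach asked about above (the paths, the pfile name, and the standby hostname are placeholders, not from this thread), the locally stored configuration files could be staged like this:

```shell
#!/bin/sh
# Sketch: stage Oracle config files from the primary onto the standby node.
# ORACLE_HOME, STANDBY_HOST, and initPROD.ora are placeholders -- adjust for your site.
ORACLE_HOME=${ORACLE_HOME:-/u01/app/oracle/product/9.2.0}
STANDBY_HOST=${STANDBY_HOST:-standby1}

# Print one scp command per configuration file kept on local disk.
stage_commands() {
    for f in \
        "$ORACLE_HOME/dbs/initPROD.ora" \
        "$ORACLE_HOME/network/admin/listener.ora" \
        "$ORACLE_HOME/network/admin/tnsnames.ora"
    do
        echo "scp $f $STANDBY_HOST:$f"
    done
}

# Review the commands first; pipe to sh to actually copy.
stage_commands
```

After copying, the listener.ora and tnsnames.ora usually need their HOST entries changed to refer to the standby node (or to the cluster's logical hostname).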
Thanks in advance. -
SunCluster control of Oracle DataGuard databases
I've got two sites in a "Metro SAN" arrangement. I've set up a zone on each site using local storage for Oracle with DataGuard. Both zones must therefore be running at all times, so the zone does not fail over but the database does (i.e. the standby becomes primary). Well, that's my plan...
I've deployed SC3.2u2 (non-geographic). Is this able to change the DataGuard role under cluster control?
I've read that the Geographic edition can do this, but we don't have RAC. Can it run without RAC?
If DataGuard proves to be inappropriate, should I just mount the database on zpool-mirrored shared storage (consisting of one LUN from each site) to achieve my replication? Not very elegant, but it may have to do if I cannot use DataGuard.
thanks
james.patterson wrote:
I've got two sites in a "Metro SAN" arrangement. I've setup a zone on each site using local storage for Oracle with DataGuard. Both zones must therefore be running at all times, so the zone does not failover but the database does (ie. standby becomes primary). Well, that’s my plan…
I’ve deployed SC3.2u2 (non geographic). Is this able to change the DataGuard role under cluster control?
The HA-Oracle agent can accommodate Data Guard, but it doesn't change the roles. If the database is a standby, it doesn't attempt to recover it; it simply mounts it. Thus, if you want to have this hybrid arrangement, you need to construct some sort of mechanism to bring the primary down and make it a standby, and then start the standby recovery and bring it online as a primary.
I’ve read that the Geographic edition can do this – but we don’t have RAC. Can it run without RAC?
No. I wrote the module, and it was decided that Data Guard broker was the best interface to use to control the Data Guard set-ups. Unfortunately, Data Guard broker does not work with cold-failover databases because the broker files have embedded host names in them. Although we could work around the host-naming issue, Oracle wouldn't support it.
If DataGuard proves to be inappropriate, then should I just mount the database on zpool mirrored shared storage (consisting of one LUN from each site) to achieve my replication? Not very elegant, but it may have to do if I cannot use DataGuard.
If you have a Metro SAN, why use Data Guard in the first place? You seem to be confusing HA with DR. A simple mirrored LUN with one LUN from each site is far simpler, but the performance will depend on the separation. If it is roughly under 20 km, performance should be OK.
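The "some sort of mechanism" described above could start out as a small helper the cluster calls on the surviving site. This is an assumption-laden sketch (the SQL is the generic physical-standby failover sequence, not something from this thread), and with DRY_RUN=1 it only prints what it would run:

```shell
#!/bin/sh
# Sketch: promote a physical standby to primary, since the HA-Oracle agent
# only mounts a standby and never changes its role.  With DRY_RUN=1 (the
# default here) the SQL is printed instead of executed.
DRY_RUN=${DRY_RUN:-1}

# SQL to finish applying outstanding redo, switch roles, and open the db.
promote_sql() {
    cat <<'SQL'
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
ALTER DATABASE OPEN;
SQL
}

if [ "$DRY_RUN" = "1" ]; then
    promote_sql
else
    promote_sql | sqlplus -s "/ as sysdba"
fi
```

A real agent method would also have to demote the old primary (or fence it) before this runs, which is exactly the part Data Guard broker would otherwise coordinate.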
thanks
Tim
--- -
Oracle and volume- and disk array-based replication?
Dear mighty all,
here is a situation:
Location A - active Oracle database + volume replication software + geocluster software
Location B - standby location, i.e. standby database + volume replication software + geocluster
Oracle is 10g
The size of the database is 500 GB+
The daily updates are roughly 2-3%.
The comm channel between locations can have any reasonable bandwidth, however the jitter and latency cannot be guaranteed (i.e. no synchronous replication).
This is a matter of protecting against a disaster (such as a complete destruction of Location A).
The task is to make sure that Location B has up-to-date data (the RPO is 8 hours) and that the Oracle database can be started automatically without any manual intervention (i.e. the cluster administrator pushes the button and the rest happens without human intervention).
Here is what I am thinking of:
1) Volume replication software usually observes atomicity at the disk-block level, i.e. it is impossible for a data block being updated on the volume in Location A to be updated incompletely in Location B.
2) When the database in Location A completes a transaction it writes data to the disk thus initiating the series of disk block updates being sent to Location B.
3) From the database point of view, the transaction is completed, database files are updated.
4) However there is a slim chance of Location A being destroyed while:
* the volume replication software has started pushing through to Location B the updated disk blocks representing the completed transaction
* but it did not finish transmitting the whole lot of disk updates (boom! the asteroid fell on Location A)
* and volume replicator in Location B cannot roll back the disk updates since volume replicator does not know of the database transaction
5) The volume replicator's unit of atomicity is the disk block, while Oracle's is the transaction (i.e. there cannot be a half-applied transaction).
6) Thus, when we start Oracle in Location B, we will get an inconsistent database, since the last transaction was transferred to Location B incompletely.
7) Surely there is a way to fix this using various DBA tools, but that defeats the whole purpose of automatic startup.
So, my question is:
1) Is volume- or disk array-based replication not a good idea in this case at all?
2) Or is there a way to nonetheless ensure the consistency of the database (say, start it every time in recovery mode)?
3) Or should I dump the idea of replicating Oracle with volume replication and switch to DataGuard? (The cluster software can handle DataGuard).
thanks a million for your answers, ye mighty all!
Hi,
this is not a "typical" question that is easy to discuss in a forum.
Just some hints:
1. using volume replication won't help you against logical or physical corruption
2. An RPO of 8 hours? That sounds like a lot, given a change rate of 10-15 GB a day.
3. 100% automation is normally not a good idea in a database disaster scenario
(a false disaster detection can end in a real disaster!). I've already seen too many databases that were totally corrupted by a false switch. -
Advice about Migration Workbench and RAC
Hello,
I have a system with RAC and DataGuard installed (a cluster with 2 nodes, in active/passive mode).
I would like to know if it is possible to perform a migration from MySQL to Oracle on the 'production' database, that is, on the first node.
So do you think that it is possible ?
Thank you very much,
Best regards,
isa
Possible, sure.
A good idea? That depends on what you mean by a production database. Normally, you would do the migration on a dev database first, validate that your applications still work properly (and fix the inevitable issues), move to test, etc. You wouldn't normally be doing a migration directly to the production instance.
Justin
Distributed Database Consulting, Inc.
http://www.ddbcinc.com/askDDBC -
Win 2003 Cluster + Oracle Fail Safe + Dataguard (physical & Logical)
Hello,
It's my first post (sorry for my bad English)... I am building a high-availability solution for test purposes. For the moment I have set up the following and it runs OK, but I have a little problem with the logical database:
Configuration
ESX Server 2.0 with these machines:
Windows 2003 Cluster (Enterprise Edition R2, 2 nodes)
* NODE 1 - Oracle 10gR2 + Patch 9 + Oracle Fail Safe 3.3.4
* NODE 2 - Oracle 10gR2 + Patch 9 + Oracle Fail Safe 3.3.4
c:/ Windows software
e:/ Oracle software (pfile -> R:/spfile)
Virtual SAN
* Datafiles, redo logs, etc. are in the virtual SAN.
R:/ Datafiles & archive logs & dump files & spfile
S:/, T:/, U:/ -> Redo logs
V:/ Undo
Data Guard
* NODE 3 - Physical standby database
* NODE 4 - Logical standby database
The Oracle Fail Safe and Windows cluster run OK, including the switchovers...
The physical standby runs OK (redo apply, switchover, failover, all OK), but the logical standby receives the redo OK yet hits a problem when it goes to apply it.
The error is the following:
ORA-12801: error signaled in parallel query server P004
ORA-06550: line 1, column 536:
PLS-00103: Encountered the symbol "," when expecting one of the following:
( - + case mod new not null <an identifier>
<a double-quoted delimited-identifier> <a bind variable> avg count current exists max min prior sql stddev sum variance execute forall merge time timestamp interval date <a string literal with character set specification> <a number> <a single-quoted SQL string> pipe <an alternatively-quoted string literal with character set specification> <an alternativel.
update "SYS"."JOB$" set "LAST_DATE"=TO_DATE('11/09/07','DD/MM/RR'),
This SQL statement I found in dba_logstdby_events; it was paired with the error in the alert log and dba_logstdby_events.
I'm a bit lost with this error. I don't understand why the logical database can't start applying the redo received from the primary database.
The database has two tables with two columns each, one an integer and the other a varchar2(25). It has no unusual column types.
Thanks a lot for any help,
Roberto Marotta
I recreated the logical database OK, no problem, no errors.
The redo apply runs OK. I have done log file switches on the primary database and they were applied on the logical and physical standby databases. But...
When I created a tablespace on the primary database and did a log switch there, the changes transferred OK to the physical standby, but NOT to the logical one. The redo arrives in its path on the logical standby OK, but when the process tries to apply it, it reports the same error.
SQL> select sequence#, first_time, next_time, dict_begin, dict_end, applied from dba_logstdby_log order by 1;

SEQUENCE# FIRST_TI NEXT_TIM DIC DIC APPLIED
--------- -------- -------- --- --- -------
138 14/09/07 14/09/07 NO NO CURRENT
139 14/09/07 14/09/07 NO NO CURRENT

SQL> select event_time, status, event from dba_logstdby_events order by event_time, timestamp, commit_scn;
14/09/07
ORA-16222: automatic logical standby retry of the last action
14/09/07
ORA-16111: log mining and apply setting up
14/09/07
ORA-06550: line 1, column 536:
PLS-00103: Encountered the symbol "," when expecting one of the following:
( - + case mod new not null <an identifier>
<a double-quoted delimited-identifier> <a bind variable> avg
count current exists max min prior sql stddev sum variance
execute forall merge time timestamp interval date
<a string literal with character set specification>
<a number> <a single-quoted SQL string> pipe
<an alternatively-quoted string literal with character set specification>
<an alternativel
update "SYS"."JOB$" set "LAST_NAME" = TO_DATE('14/09/07','DD/MM/RR'),
The alert.log reports the same message as the dba_logstdby_events view.
Any idea?
I'm a bit frustrated. It's the third time I have recreated the logical database OK and then reproduced the same error by creating a tablespace on the primary database, and I have no idea why it happens. -
Sun Cluster with Oracle Dataguard
When I use the logical hostname in the DataGuard configuration and go to enable it, I get
ORA-16596: site is not a member of the Data Guard configuration
If I use the physical hostname, it works OK.
Has anyone got around this?
You may find further information here:
www.oracle-10g.de -
Cannot open the db in a DataGuard standby environment
I'm having trouble opening the db in a DataGuard standby environment.
If anyone knows this area, I'd appreciate the cause and a remedy.
The cause seems to be that the spfile I created is not being read correctly,
but I can't tell why it cannot be read.
Startup from a pfile works, but what are the prerequisites for starting from the spfile?
Is it simply an inconsistency somewhere, or something else?
[grid@osaka1 shell]$ asmcmd
ASMCMD> ls
DATA/
FRA/
ASMCMD> cd data
ASMCMD> ls
ASM/
WEST/
ASMCMD> ls -l
Type Redund Striped Time Sys Name
Y ASM/
N WEST/
ASMCMD>
ASMCMD> cd west
ASMCMD> ls
CONTROLFILE/
DATAFILE/
ONLINELOG/
PARAMETERFILE/
TEMPFILE/
spfilewest.ora
ASMCMD> ls -l
Type Redund Striped Time Sys Name
N CONTROLFILE/
N DATAFILE/
N ONLINELOG/
N PARAMETERFILE/
N TEMPFILE/
N spfilewest.ora => +DATA/WEST/PARAMETERFILE/spfile.257.824236121
ASMCMD> pwd
+data/west
ASMCMD>
ASMCMD> cd para*
ASMCMD> ls -l
Type Redund Striped Time Sys Name
PARAMETERFILE MIRROR COARSE AUG 23 18:00:00 Y spfile.257.824236121
ASMCMD>
ASMCMD> pwd
+data/west/PARAMETERFILE
ASMCMD> quit
[grid@osaka1 shell]$
[oracle@osaka1 dbs]$ more initHPYMUSIC.ora
SPFILE='+DATA/west/spfilewest.ora'
[oracle@osaka1 dbs]$
Thanks in advance.
ps.
Since ORA-12154 is a consistency problem, I expect it to go away once the configuration is aligned.
This was originally planned as RAC, but I started verifying it with RAC replaced by a standalone setup,
which is how it ended up in this state.
For all I know, the reason the db cannot open may even be ORA-12154...
■ On the primary
○ The open itself can be confirmed to succeed simply with the commands below.
However, because I changed db_name,
"ORA-12154: TNS: could not resolve the connect identifier specified"
keeps appearing.
ORA-12154 may be unrelated to the reason the db cannot be opened.
srvctl stop database -d east -f
srvctl start database -d east -o open
srvctl config database -d east
srvctl status database -d east
○ Reference output
set linesize 500 pages 0
col value for a90
col name for a50
select name, value
from v$parameter
where name in ('db_name','db_unique_name','log_archive_config', 'log_archive_dest_1','log_archive_dest_2',
'log_archive_dest_state_1','log_archive_dest_state_2', 'remote_login_passwordfile',
'log_archive_format','log_archive_max_processes','fal_server','db_file_name_convert',
'log_file_name_convert', 'standby_file_management');
SQL>
db_file_name_convert
log_file_name_convert
log_archive_dest_1
log_archive_dest_2 SERVICE=HPYMUSIC SYNC NOAFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=west
log_archive_dest_state_1 enable
log_archive_dest_state_2 enable
fal_server
log_archive_config
log_archive_format %t_%s_%r.dbf
log_archive_max_processes 4
standby_file_management AUTO
remote_login_passwordfile EXCLUSIVE
db_name HPYMUSIC
db_unique_name HPYMUSIC ← ▼ I intended to change only db_name, but db_unique_name changed as well
14 rows selected.
[oracle@tokyo1 shell]$ srvctl stop database -d east -f
[oracle@tokyo1 shell]$ /u01/app/11.2.0/grid/bin/crsctl status resource -t
NAME TARGET STATE SERVER STATE_DETAILS
Cluster Resources
ora.east.db
1 OFFLINE OFFLINE Instance Shutdown
[oracle@tokyo1 shell]$ srvctl start database -d east -o open
[oracle@tokyo1 shell]$ /u01/app/11.2.0/grid/bin/crsctl status resource -t
NAME TARGET STATE SERVER STATE_DETAILS
Local Resources
ora.DATA.dg
ONLINE ONLINE tokyo1
ora.FRA.dg
ONLINE ONLINE tokyo1
ora.LISTENER.lsnr
ONLINE ONLINE tokyo1
ora.asm
ONLINE ONLINE tokyo1 Started
Cluster Resources
ora.cssd
1 ONLINE ONLINE tokyo1
ora.diskmon
1 ONLINE ONLINE tokyo1
ora.east.db
1 ONLINE ONLINE tokyo1 Open ← ▼
[oracle@tokyo1 shell]$
[oracle@tokyo1 shell]$ srvctl config database -d east
Database unique name: east
Database name: east
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: grid
spfile: +DATA/east/spfileeast.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Disk Groups: DATA,FRA
Services:
[oracle@tokyo1 shell]$ srvctl status database -d east
The database is running.
Fri Aug 23 19:44:10 2013
Error 12154 received logging on to the standby
Errors in file /u01/app/oracle/diag/rdbms/hpymusic/HPYMUSIC/trace/HPYMUSIC_arc2_7579.trc:
ORA-12154: TNS: could not resolve the connect identifier specified
PING[ARC2]: Heartbeat failed to connect to standby 'HPYMUSIC'. Error is 12154.
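For reference, the ORA-12154 above just means that the connect identifier 'HPYMUSIC' used in log_archive_dest_2 cannot be resolved on the primary. A tnsnames.ora entry of roughly this shape (the host and port are assumptions, not taken from this thread) would be needed on every node that ships redo:

```
# tnsnames.ora on the primary (sketch; HOST/PORT are placeholders)
HPYMUSIC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = osaka1)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = HPYMUSIC)
    )
  )
```

Whether 'HPYMUSIC' is actually the right service name for the standby (DB_UNIQUE_NAME=west) is exactly the consistency question the poster raises below.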
[oracle@tokyo1 dbs]$ pwd
/u01/app/oracle/product/11.2.0/dbhome_1/dbs
[oracle@tokyo1 dbs]$
[oracle@tokyo1 dbs]$
[oracle@tokyo1 dbs]$ more 2013.08.23_east_pfile.txt
HPYMUSIC.__db_cache_size=301989888
HPYMUSIC.__java_pool_size=4194304
HPYMUSIC.__large_pool_size=8388608
HPYMUSIC.__pga_aggregate_target=339738624
HPYMUSIC.__sga_target=503316480
HPYMUSIC.__shared_io_pool_size=0
HPYMUSIC.__shared_pool_size=176160768
HPYMUSIC.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/east/adump'
*.audit_trail='db'
*.compatible='11.2.0.0.0'
*.control_files='+DATA/east/controlfile/current.270.823277705','+FRA/east/controlfile/current.
260.823277707'
*.db_block_checking='TRUE'
*.db_block_checksum='TRUE'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain=''
*.db_name='HPYMUSIC'
*.db_recovery_file_dest='+FRA'
*.db_recovery_file_dest_size=3038773248
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=HPYMUSICXDB)'
*.log_archive_format='%t_%s_%r.dbf'
*.memory_target=842006528
*.nls_language='JAPANESE'
*.nls_territory='JAPAN'
*.open_cursors=300
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.standby_file_management='AUTO'
*.undo_tablespace='UNDOTBS1'
Fri Aug 23 19:49:38 2013
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Picked latch-free SCN scheme 3
Using LOG_ARCHIVE_DEST_1 parameter default value as USE_DB_RECOVERY_FILE_DEST
ARCH: Warning; less destinations available than specified
by LOG_ARCHIVE_MIN_SUCCEED_DEST init.ora parameter
Autotune of undo retention is turned on.
IMODE=BR
ILAT =27
LICENSE_MAX_USERS = 0
SYS auditing is disabled
Starting up:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options.
Using parameter settings in server-side spfile /u01/app/oracle/product/11.2.0/dbhome_1/dbs/spfileHPYMUSIC.ora
System parameters with non-default values:
processes = 150
nls_language = "JAPANESE"
nls_territory = "JAPAN"
memory_target = 804M
control_files = "+DATA/east/controlfile/current.270.823277705"
control_files = "+FRA/east/controlfile/current.260.823277707"
db_block_checksum = "TRUE"
db_block_size = 8192
compatible = "11.2.0.0.0"
log_archive_dest_2 = "SERVICE=HPYMUSIC SYNC NOAFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=west"
log_archive_format = "%t_%s_%r.dbf"
db_create_file_dest = "+DATA"
db_recovery_file_dest = "+FRA"
db_recovery_file_dest_size= 2898M
standby_file_management = "AUTO"
undo_tablespace = "UNDOTBS1"
db_block_checking = "TRUE"
remote_login_passwordfile= "EXCLUSIVE"
db_domain = ""
dispatchers = "(PROTOCOL=TCP) (SERVICE=HPYMUSICXDB)"
audit_file_dest = "/u01/app/oracle/admin/east/adump"
audit_trail = "DB"
db_name = "HPYMUSIC"
open_cursors = 300
diagnostic_dest = "/u01/app/oracle"
Fri Aug 23 19:49:39 2013
PMON started with pid=2, OS id=8442
Fri Aug 23 19:49:39 2013
VKTM started with pid=3, OS id=8444 at elevated priority
VKTM running at (10)millisec precision with DBRM quantum (100)ms
Fri Aug 23 19:49:39 2013
GEN0 started with pid=4, OS id=8448
Fri Aug 23 19:49:39 2013
DIAG started with pid=5, OS id=8450
Fri Aug 23 19:49:39 2013
DBRM started with pid=6, OS id=8452
Fri Aug 23 19:49:39 2013
PSP0 started with pid=7, OS id=8454
Fri Aug 23 19:49:39 2013
DIA0 started with pid=8, OS id=8456
Fri Aug 23 19:49:39 2013
MMAN started with pid=9, OS id=8458
Fri Aug 23 19:49:39 2013
DBW0 started with pid=10, OS id=8460
Fri Aug 23 19:49:39 2013
LGWR started with pid=11, OS id=8462
Fri Aug 23 19:49:39 2013
CKPT started with pid=12, OS id=8464
Fri Aug 23 19:49:39 2013
SMON started with pid=13, OS id=8466
Fri Aug 23 19:49:39 2013
RECO started with pid=14, OS id=8468
Fri Aug 23 19:49:39 2013
RBAL started with pid=15, OS id=8470
Fri Aug 23 19:49:39 2013
ASMB started with pid=16, OS id=8472
Fri Aug 23 19:49:39 2013
MMON started with pid=17, OS id=8474
Fri Aug 23 19:49:39 2013
MMNL started with pid=18, OS id=8478
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
NOTE: initiating MARK startup
starting up 1 shared server(s) ...
Starting background process MARK
Fri Aug 23 19:49:39 2013
MARK started with pid=20, OS id=8482
NOTE: MARK has subscribed
ORACLE_BASE not set in environment. It is recommended
that ORACLE_BASE be set in the environment
Reusing ORACLE_BASE from an earlier startup = /u01/app/oracle
Fri Aug 23 19:49:39 2013
ALTER DATABASE MOUNT
NOTE: Loaded library: System
SUCCESS: diskgroup DATA was mounted
ERROR: failed to establish dependency between database HPYMUSIC and diskgroup resource ora.DATA.dg
SUCCESS: diskgroup FRA was mounted
ERROR: failed to establish dependency between database HPYMUSIC and diskgroup resource ora.FRA.dg
Fri Aug 23 19:49:46 2013
NSS2 started with pid=24, OS id=8572
Successful mount of redo thread 1, with mount id 2951868947
Database mounted in Exclusive Mode
Lost write protection disabled
Completed: ALTER DATABASE MOUNT
ALTER DATABASE OPEN
LGWR: STARTING ARCH PROCESSES
Fri Aug 23 19:49:47 2013
ARC0 started with pid=26, OS id=8574
ARC0: Archival started
LGWR: STARTING ARCH PROCESSES COMPLETE
ARC0: STARTING ARCH PROCESSES
Fri Aug 23 19:49:48 2013
ARC1 started with pid=27, OS id=8576
Fri Aug 23 19:49:48 2013
ARC2 started with pid=28, OS id=8578
ARC1: Archival started
ARC2: Archival started
ARC1: Becoming the 'no FAL' ARCH
ARC1: Becoming the 'no SRL' ARCH
ARC2: Becoming the heartbeat ARCH
Fri Aug 23 19:49:48 2013
ARC3 started with pid=29, OS id=8580
LGWR: Setting 'active' archival for destination LOG_ARCHIVE_DEST_2
ARC3: Archival started
ARC0: STARTING ARCH PROCESSES COMPLETE
Error 12154 received logging on to the standby
Fri Aug 23 19:49:51 2013
Errors in file /u01/app/oracle/diag/rdbms/hpymusic/HPYMUSIC/trace/HPYMUSIC_lgwr_8462.trc:
ORA-12154: TNS: could not resolve the connect identifier specified
Error 12154 for archive log file 2 to 'HPYMUSIC'
LGWR: Failed to archive log 2 thread 1 sequence 8 (12154)
Thread 1 advanced to log sequence 8 (thread open)
Thread 1 opened at log sequence 8
Current log# 2 seq# 8 mem# 0: +DATA/hpymusic/onlinelog/group_2.272.824213887
Current log# 2 seq# 8 mem# 1: +FRA/hpymusic/onlinelog/group_2.262.824213889
Successful open of redo thread 1
Fri Aug 23 19:49:51 2013
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
Fri Aug 23 19:49:51 2013
SMON: enabling cache recovery
Error 12154 received logging on to the standby
Errors in file /u01/app/oracle/diag/rdbms/hpymusic/HPYMUSIC/trace/HPYMUSIC_arc2_8578.trc:
ORA-12154: TNS: could not resolve the connect identifier specified
PING[ARC2]: Heartbeat failed to connect to standby 'HPYMUSIC'. Error is 12154.
Archived Log entry 7 added for thread 1 sequence 7 ID 0xaff1210d dest 1:
Error 12154 received logging on to the standby
Errors in file /u01/app/oracle/diag/rdbms/hpymusic/HPYMUSIC/trace/HPYMUSIC_arc3_8580.trc:
ORA-12154: TNS: could not resolve the connect identifier specified
FAL[server, ARC3]: Error 12154 creating remote archivelog file 'HPYMUSIC'
FAL[server, ARC3]: FAL archive failed, see trace file.
Errors in file /u01/app/oracle/diag/rdbms/hpymusic/HPYMUSIC/trace/HPYMUSIC_arc3_8580.trc:
ORA-16055: FAL request rejected
ARCH: FAL archive failed. Archiver continuing
ORACLE Instance HPYMUSIC - Archival Error. Archiver continuing.
Successfully onlined Undo Tablespace 2.
Verifying file header compatibility for 11g tablespace encryption..
Verifying 11g file header compatibility for tablespace encryption completed
SMON: enabling tx recovery
Database Characterset is AL32UTF8
No Resource Manager plan active
replication_dependency_tracking turned off (no async multimaster replication found)
Starting background process QMNC
Fri Aug 23 19:49:55 2013
QMNC started with pid=32, OS id=8590
Completed: ALTER DATABASE OPEN
Fri Aug 23 19:49:59 2013
Starting background process CJQ0
Fri Aug 23 19:49:59 2013
CJQ0 started with pid=33, OS id=8609
Fri Aug 23 19:49:59 2013
db_recovery_file_dest_size of 2898 MB is 6.38% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
[root@tokyo1 app]#
[root@tokyo1 app]# more /u01/app/oracle/diag/rdbms/hpymusic/HPYMUSIC/trace/HPYMUSIC_arc2_8578.trc
Trace file /u01/app/oracle/diag/rdbms/hpymusic/HPYMUSIC/trace/HPYMUSIC_arc2_8578.trc
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options
ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1
System name: Linux
Node name: tokyo1.oracle11g.jp
Release: 2.6.18-348.12.1.el5
Version: #1 SMP Wed Jul 10 05:28:41 EDT 2013
Machine: x86_64
Instance name: HPYMUSIC
Redo thread mounted by this instance: 1
Oracle process number: 28
Unix process pid: 8578, image: [email protected] (ARC2)
*** 2013-08-23 19:49:51.707
*** SESSION ID:(15.1) 2013-08-23 19:49:51.707
*** CLIENT ID:() 2013-08-23 19:49:51.707
*** SERVICE NAME:() 2013-08-23 19:49:51.707
*** MODULE NAME:() 2013-08-23 19:49:51.707
*** ACTION NAME:() 2013-08-23 19:49:51.707
Redo shipping client performing standby login
OCIServerAttach failed -1
.. Detailed OCI error val is 12154 and errmsg is 'ORA-12154: TNS: could not resolve the connect identifier specified
OCIServerAttach failed -1
.. Detailed OCI error val is 12154 and errmsg is 'ORA-12154: TNS: could not resolve the connect identifier specified
OCIServerAttach failed -1
.. Detailed OCI error val is 12154 and errmsg is 'ORA-12154: TNS: could not resolve the connect identifier specified
*** 2013-08-23 19:49:51.972 4132 krsh.c
Error 12154 received logging on to the standby
*** 2013-08-23 19:49:51.972 869 krsu.c
Error 12154 connecting to destination LOG_ARCHIVE_DEST_2 standby host 'HPYMUSIC'
Error 12154 attaching to destination LOG_ARCHIVE_DEST_2 standby host 'HPYMUSIC'
ORA-12154: TNS: could not resolve the connect identifier specified
*** 2013-08-23 19:49:51.973 4132 krsh.c
PING[ARC2]: Heartbeat failed to connect to standby 'HPYMUSIC'. Error is 12154.
*** 2013-08-23 19:49:51.973 2747 krsi.c
krsi_dst_fail: dest:2 err:12154 force:0 blast:1
*** 2013-08-23 19:50:49.816
Redo shipping client performing standby login
OCIServerAttach failed -1
.. Detailed OCI error val is 12154 and errmsg is 'ORA-12154: TNS: could not resolve the connect identifier specified
OCIServerAttach failed -1
.. Detailed OCI error val is 12154 and errmsg is 'ORA-12154: TNS: could not resolve the connect identifier specified
OCIServerAttach failed -1
.. Detailed OCI error val is 12154 and errmsg is 'ORA-12154: TNS: could not resolve the connect identifier specified
*** 2013-08-23 19:50:50.070 4132 krsh.c
Error 12154 received logging on to the standby
*** 2013-08-23 19:50:50.070 869 krsu.c
Error 12154 connecting to destination LOG_ARCHIVE_DEST_2 standby host 'HPYMUSIC'
Error 12154 attaching to destination LOG_ARCHIVE_DEST_2 standby host 'HPYMUSIC'
ORA-12154: TNS: could not resolve the connect identifier specified
*** 2013-08-23 19:50:50.070 4132 krsh.c
PING[ARC2]: Heartbeat failed to connect to standby 'HPYMUSIC'. Error is 12154.
*** 2013-08-23 19:50:50.070 2747 krsi.c
krsi_dst_fail: dest:2 err:12154 force:0 blast:1
*** 2013-08-23 19:51:51.147
Redo shipping client performing standby login
OCIServerAttach failed -1
.. Detailed OCI error val is 12154 and errmsg is 'ORA-12154: TNS: could not resolve the connect identifier specified
OCIServerAttach failed -1
.. Detailed OCI error val is 12154 and errmsg is 'ORA-12154: TNS: could not resolve the connect identifier specified
OCIServerAttach failed -1
.. Detailed OCI error val is 12154 and errmsg is 'ORA-12154: TNS: could not resolve the connect identifier specified
*** 2013-08-23 19:51:51.403 4132 krsh.c
Error 12154 received logging on to the standby
*** 2013-08-23 19:51:51.403 869 krsu.c
Error 12154 connecting to destination LOG_ARCHIVE_DEST_2 standby host 'HPYMUSIC'
Error 12154 attaching to destination LOG_ARCHIVE_DEST_2 standby host 'HPYMUSIC'
ORA-12154: TNS: could not resolve the connect identifier specified
*** 2013-08-23 19:51:51.403 4132 krsh.c
PING[ARC2]: Heartbeat failed to connect to standby 'HPYMUSIC'. Error is 12154.
*** 2013-08-23 19:51:51.403 2747 krsi.c
krsi_dst_fail: dest:2 err:12154 force:0 blast:1
[root@tokyo1 app]#
[grid@tokyo1 shell]$ ./grid_info_east-x.sh
+ export ORACLE_SID=+ASM
+ ORACLE_SID=+ASM
+ LOGDIR=/home/grid/log
+ PRIMARYDB=east_DGMGRL
+ STANDBYDB=
+ PASSWORD=dataguard
+ mkdir -p /home/grid/log
++ date +%y%m%d,%H%M%S
+ echo 'asm info,130823,195709'
+ sqlplus / as sysasm
SQL*Plus: Release 11.2.0.1.0 Production on Fri Aug 23 19:57:09 2013
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Automatic Storage Management option
Connected.
SQL> SQL>
SYSDATE
13-08-23
SQL> SQL> SQL>
NAME TYPE VALUE
asm_diskgroups string FRA
asm_diskstring string /dev/sd*1
asm_power_limit integer 1
asm_preferred_read_failure_groups string
audit_file_dest string /u01/app/11.2.0/grid/rdbms/audit
audit_sys_operations boolean FALSE
audit_syslog_level string
background_core_dump string partial
background_dump_dest string /u01/app/grid/diag/asm/+asm/+ASM/trace
cluster_database boolean FALSE
cluster_database_instances integer 1
cluster_interconnects string
core_dump_dest string /u01/app/grid/diag/asm/+asm/+ASM/cdump
db_cache_size big integer 0
db_ultra_safe string OFF
db_unique_name string +ASM
diagnostic_dest string /u01/app/grid
event string
file_mapping boolean FALSE
filesystemio_options string none
ifile file
instance_name string +ASM
instance_number integer 1
instance_type string asm
large_pool_size big integer 12M
ldap_directory_sysauth string no
listener_networks string
local_listener string
lock_name_space string
lock_sga boolean FALSE
max_dump_file_size string unlimited
memory_max_target big integer 272M
memory_target big integer 272M
nls_calendar string
nls_comp string BINARY
nls_currency string
nls_date_format string
nls_date_language string
nls_dual_currency string
nls_iso_currency string
nls_language string AMERICAN
nls_length_semantics string BYTE
nls_nchar_conv_excp string FALSE
nls_numeric_characters string
nls_sort string
nls_territory string AMERICA
nls_time_format string
nls_time_tz_format string
nls_timestamp_format string
nls_timestamp_tz_format string
os_authent_prefix string ops$
os_roles boolean FALSE
pga_aggregate_target big integer 0
processes integer 100
remote_listener string
remote_login_passwordfile string EXCLUSIVE
remote_os_authent boolean FALSE
remote_os_roles boolean FALSE
service_names string +ASM
sessions integer 172
sga_max_size big integer 272M
sga_target big integer 0
shadow_core_dump string partial
shared_pool_reserved_size big integer 6081740
shared_pool_size big integer 0
sort_area_size integer 65536
spfile string +DATA/asm/asmparameterfile/registry.253.823204697
sql_trace boolean FALSE
statistics_level string TYPICAL
timed_os_statistics integer 0
timed_statistics boolean TRUE
trace_enabled boolean TRUE
user_dump_dest string /u01/app/grid/diag/asm/+asm/+ASM/trace
workarea_size_policy string AUTO
++ date +%y%m%d,%H%M%S
+ echo 'asmcmd info,130823,195709'
+ asmcmd ls -l
State Type Rebal Name
MOUNTED NORMAL N DATA/
MOUNTED NORMAL N FRA/
+ asmcmd ls -l 'data/asm/*'
Type Redund Striped Time Sys Name
ASMPARAMETERFILE MIRROR COARSE AUG 11 19:00:00 Y REGISTRY.253.823204697
+ asmcmd ls -l 'data/east/*'
Type Redund Striped Time Sys Name
+data/east/CONTROLFILE/:
CONTROLFILE HIGH FINE AUG 12 15:00:00 Y Current.260.823276231
CONTROLFILE HIGH FINE AUG 23 19:00:00 Y Current.270.823277705
+data/east/DATAFILE/:
DATAFILE MIRROR COARSE AUG 12 15:00:00 Y SYSAUX.257.823276133
DATAFILE MIRROR COARSE AUG 23 19:00:00 Y SYSAUX.267.823277615
DATAFILE MIRROR COARSE AUG 12 15:00:00 Y SYSTEM.256.823276131
DATAFILE MIRROR COARSE AUG 23 19:00:00 Y SYSTEM.266.823277615
DATAFILE MIRROR COARSE AUG 12 15:00:00 Y UNDOTBS1.258.823276133
DATAFILE MIRROR COARSE AUG 23 19:00:00 Y UNDOTBS1.268.823277615
DATAFILE MIRROR COARSE AUG 12 15:00:00 Y USERS.259.823276133
DATAFILE MIRROR COARSE AUG 23 19:00:00 Y USERS.269.823277615
+data/east/ONLINELOG/:
ONLINELOG MIRROR COARSE AUG 12 15:00:00 Y group_1.261.823276235
ONLINELOG MIRROR COARSE AUG 12 15:00:00 Y group_2.262.823276241
ONLINELOG MIRROR COARSE AUG 12 15:00:00 Y group_3.263.823276247
+data/east/PARAMETERFILE/:
PARAMETERFILE MIRROR COARSE AUG 23 12:00:00 Y spfile.265.823277967
+data/east/TEMPFILE/:
TEMPFILE MIRROR COARSE AUG 12 15:00:00 Y TEMP.264.823276263
TEMPFILE MIRROR COARSE AUG 23 19:00:00 Y TEMP.274.823277733
N spfileeast.ora => +DATA/EAST/PARAMETERFILE/spfile.265.823277967
+ asmcmd ls -l 'fra/east/*'
Type Redund Striped Time Sys Name
+fra/east/ARCHIVELOG/:
Y 2013_08_12/
Y 2013_08_15/
Y 2013_08_19/
Y 2013_08_22/
Y 2013_08_23/
+fra/east/CONTROLFILE/:
CONTROLFILE HIGH FINE AUG 12 15:00:00 Y Current.256.823276231
CONTROLFILE HIGH FINE AUG 23 19:00:00 Y Current.260.823277707
+fra/east/ONLINELOG/:
ONLINELOG MIRROR COARSE AUG 12 15:00:00 Y group_1.257.823276237
ONLINELOG MIRROR COARSE AUG 23 19:00:00 Y group_10.272.823535727
ONLINELOG MIRROR COARSE AUG 23 19:00:00 Y group_11.273.823535737
ONLINELOG MIRROR COARSE AUG 23 19:00:00 Y group_12.274.823535745
ONLINELOG MIRROR COARSE AUG 23 19:00:00 Y group_13.275.823535757
ONLINELOG MIRROR COARSE AUG 23 19:00:00 Y group_14.276.823535763
ONLINELOG MIRROR COARSE AUG 23 19:00:00 Y group_15.277.823535771
ONLINELOG MIRROR COARSE AUG 12 15:00:00 Y group_2.258.823276245
ONLINELOG MIRROR COARSE AUG 12 15:00:00 Y group_3.259.823276251
ONLINELOG MIRROR COARSE AUG 23 19:00:00 Y group_7.269.823535685
ONLINELOG MIRROR COARSE AUG 23 19:00:00 Y group_8.270.823535695
ONLINELOG MIRROR COARSE AUG 23 19:00:00 Y group_9.271.823535703
+fra/east/STANDBYLOG/:
N standby_group_07.log => +FRA/EAST/ONLINELOG/group_7.269.823535685
N standby_group_08.log => +FRA/EAST/ONLINELOG/group_8.270.823535695
N standby_group_09.log => +FRA/EAST/ONLINELOG/group_9.271.823535703
N standby_group_10.log => +FRA/EAST/ONLINELOG/group_10.272.823535727
N standby_group_11.log => +FRA/EAST/ONLINELOG/group_11.273.823535737
N standby_group_12.log => +FRA/EAST/ONLINELOG/group_12.274.823535745
N standby_group_13.log => +FRA/EAST/ONLINELOG/group_13.275.823535757
N standby_group_14.log => +FRA/EAST/ONLINELOG/group_14.276.823535763
N standby_group_15.log => +FRA/EAST/ONLINELOG/group_15.277.823535771
+ asmcmd find +data 'group*'
+data/EAST/ONLINELOG/group_1.261.823276235
+data/EAST/ONLINELOG/group_2.262.823276241
+data/EAST/ONLINELOG/group_3.263.823276247
+data/HPYMUSIC/ONLINELOG/group_1.271.824213881
+data/HPYMUSIC/ONLINELOG/group_2.272.824213887
+data/HPYMUSIC/ONLINELOG/group_3.273.824213895
+ asmcmd find +data 'spf*'
+data/EAST/PARAMETERFILE/spfile.265.823277967
+data/EAST/spfileeast.ora
+ asmcmd ls -l data/east/CONTROLFILE
Type Redund Striped Time Sys Name
CONTROLFILE HIGH FINE AUG 12 15:00:00 Y Current.260.823276231
CONTROLFILE HIGH FINE AUG 23 19:00:00 Y Current.270.823277705
+ asmcmd find +fra 'group*'
+fra/EAST/ONLINELOG/group_1.257.823276237
+fra/EAST/ONLINELOG/group_10.272.823535727
+fra/EAST/ONLINELOG/group_11.273.823535737
+fra/EAST/ONLINELOG/group_12.274.823535745
+fra/EAST/ONLINELOG/group_13.275.823535757
+fra/EAST/ONLINELOG/group_14.276.823535763
+fra/EAST/ONLINELOG/group_15.277.823535771
+fra/EAST/ONLINELOG/group_2.258.823276245
+fra/EAST/ONLINELOG/group_3.259.823276251
+fra/EAST/ONLINELOG/group_7.269.823535685
+fra/EAST/ONLINELOG/group_8.270.823535695
+fra/EAST/ONLINELOG/group_9.271.823535703
+fra/HPYMUSIC/ONLINELOG/group_1.261.824213883
+fra/HPYMUSIC/ONLINELOG/group_2.262.824213889
+fra/HPYMUSIC/ONLINELOG/group_3.263.824213897
+ asmcmd find +fra 'spf*'
+ asmcmd ls -l fra/east/CONTROLFILE
Type Redund Striped Time Sys Name
CONTROLFILE HIGH FINE AUG 12 15:00:00 Y Current.256.823276231
CONTROLFILE HIGH FINE AUG 23 19:00:00 Y Current.260.823277707
++ date +%y%m%d,%H%M%S
+ echo END,130823,195712
[grid@tokyo1 shell]$
■ The standby side follows ■ ■ ■ ■ ■ ■ ■
export ORACLE_SID=HPYMUSIC
sqlplus / as sysdba
startup nomount pfile='/u01/app/oracle/product/11.2.0/dbhome_1/dbs/pfile_for_standby_HPYMUSIC.txt'
create spfile='+data/west/spfilewest.ora' from pfile='/u01/app/oracle/product/11.2.0/dbhome_1/dbs/pfile_for_standby_HPYMUSIC.txt';
srvctl stop database -d west -f
srvctl start database -d west -o open
srvctl start database -d west -o mount
srvctl start database -d west
startup mount pfile='/u01/app/oracle/product/11.2.0/dbhome_1/dbs/pfile_for_standby_HPYMUSIC.txt'
srvctl start database -d west -o open
srvctl config database -d west
srvctl status database -d west
alter database recover managed standby database disconnect from session;
select name, database_role, open_mode from gv$database;
srvctl modify database -d west -s open
○ Create the spfile
export ORACLE_SID=HPYMUSIC
sqlplus / as sysdba
startup nomount pfile='/u01/app/oracle/product/11.2.0/dbhome_1/dbs/pfile_for_standby_HPYMUSIC.txt'
create spfile='+data/west/spfilewest.ora' from pfile='/u01/app/oracle/product/11.2.0/dbhome_1/dbs/pfile_for_standby_HPYMUSIC.txt';
○ Shut it down
srvctl stop database -d west -f
○ Want it open, but it fails to start (it sometimes comes up in the Mounted (Closed) state).
srvctl start database -d west -o open
PRCR-1079 : Failed to start resource ora.west.db
CRS-2674: Start of 'ora.west.db' on 'osaka1' failed
○ Want it open, but it fails to start (it sometimes comes up in the Mounted (Closed) state).
srvctl start database -d west -o mount
PRCR-1079 : Failed to start resource ora.west.db
CRS-2674: Start of 'ora.west.db' on 'osaka1' failed
○ Want it open, but it fails to start (it sometimes comes up in the Mounted (Closed) state).
srvctl start database -d west
PRCR-1079 : Failed to start resource ora.west.db
CRS-2674: Start of 'ora.west.db' on 'osaka1' failed
○ It starts, but with errors ( alert_HPYMUSIC.log )
startup mount pfile='/u01/app/oracle/product/11.2.0/dbhome_1/dbs/pfile_for_standby_HPYMUSIC.txt'
[oracle@osaka1 dbs]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Fri Aug 23 19:05:35 2013
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup mount pfile='/u01/app/oracle/product/11.2.0/dbhome_1/dbs/pfile_for_standby_HPYMUSIC.txt'
ORACLE instance started.
Total System Global Area 839282688 bytes
Fixed Size 2217992 bytes
Variable Size 515901432 bytes
Database Buffers 314572800 bytes
Redo Buffers 6590464 bytes
Database mounted.
Error 12154 received logging on to the standby
FAL[client, ARC3]: Error 12154 connecting to HPYMUSIC for fetching gap sequence
Errors in file /u01/app/oracle/diag/rdbms/west/HPYMUSIC/trace/HPYMUSIC_arc3_25690.trc:
ORA-12154: TNS: could not resolve the connect identifier specified
Errors in file /u01/app/oracle/diag/rdbms/west/HPYMUSIC/trace/HPYMUSIC_arc3_25690.trc:
ORA-12154: TNS: could not resolve the connect identifier specified
○ It never opens; it only ever reaches Mounted (Closed).
srvctl start database -d west -o open
[oracle@osaka1 dbs]$ srvctl start database -d west -o open
[oracle@osaka1 dbs]$ /u01/app/11.2.0/grid/bin/crsctl status resource -t
NAME TARGET STATE SERVER STATE_DETAILS
Local Resources
ora.DATA.dg
ONLINE ONLINE osaka1
ora.FRA.dg
ONLINE ONLINE osaka1
ora.LISTENER.lsnr
ONLINE ONLINE osaka1
ora.asm
ONLINE ONLINE osaka1 Started
Cluster Resources
ora.cssd
1 ONLINE ONLINE osaka1
ora.diskmon
1 ONLINE ONLINE osaka1
ora.west.db
1 ONLINE INTERMEDIATE osaka1 Mounted (Closed)
○
srvctl config database -d west
srvctl status database -d west
[oracle@osaka1 dbs]$ srvctl config database -d west
Database unique name: west
Database name: HPYMUSIC
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: grid
spfile: +data/west/spfilewest.ora
Domain:
Start options: open
Stop options: immediate
Database role: physical_standby
Management policy: AUTOMATIC
Disk groups: DATA,FRA
Services:
[oracle@osaka1 dbs]$ srvctl status database -d west
Database is running.
○ The MRP process starts, but the database is not open Read Only.
alter database recover managed standby database disconnect from session;
select name, database_role, open_mode from gv$database;
[oracle@osaka1 dbs]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Fri Aug 23 19:33:08 2013
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options
Connected.
SQL>
SQL> alter database recover managed standby database disconnect from session;
Database altered.
SQL> select name, database_role, open_mode from gv$database;
NAME DATABASE_ROLE
OPEN_MODE
HPYMUSIC PHYSICAL STANDBY
MOUNTED
[root@osaka1 app]# ps -ef |egrep -i mrp
oracle 26269 1 0 19:33 ? 00:00:00 ora_mrp0_HPYMUSIC
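To get the standby out of Mounted (Closed) and into Read Only while redo apply continues, the usual 11.2 sequence is roughly the following. This is a sketch only - real-time query on a physical standby requires the separately licensed Active Data Guard option, and the open will only succeed once the ORA-12154 gap-fetch problem noted earlier has been resolved:

```
-- Stop managed recovery first
alter database recover managed standby database cancel;

-- Open read only (succeeds only when the datafiles are consistent)
alter database open read only;

-- Restart redo apply with real-time apply; the database stays open
alter database recover managed standby database
  using current logfile disconnect from session;
```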
○ It does not open even after the modify.
srvctl modify database -d west -s open
[oracle@osaka1 dbs]$ srvctl modify database -d west -s open
[oracle@osaka1 dbs]$ /u01/app/11.2.0/grid/bin/crsctl status resource -t
NAME TARGET STATE SERVER STATE_DETAILS
Local Resources
ora.DATA.dg
ONLINE ONLINE osaka1
ora.FRA.dg
ONLINE ONLINE osaka1
ora.LISTENER.lsnr
ONLINE ONLINE osaka1
ora.asm
ONLINE ONLINE osaka1 Started
Cluster Resources
ora.cssd
1 ONLINE ONLINE osaka1
ora.diskmon
1 ONLINE ONLINE osaka1
ora.west.db
1 ONLINE INTERMEDIATE osaka1 Mounted (Closed)
It feels like the standby would be able to open if a few archived log files were applied to it.
To get there, I think the error below should be resolved first:
・ORA-12154: TNS: could not resolve the connect identifier specified
Could you show us the tnsnames.ora from both nodes?
If this guess turns out to be wrong, let's reconsider. -
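A minimal tnsnames.ora sketch of what both nodes would need so the connect identifiers resolve. The host names, ports and service names below are assumptions inferred from the output in this thread (tokyo1/osaka1, db_unique_name east/west), not confirmed values:

```
EAST =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = tokyo1)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = east))
  )

WEST =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = osaka1)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = west))
  )
```

Whatever aliases fal_server and log_archive_dest_2 reference on each side must resolve through entries like these; if they do not, the ARC/FAL processes raise exactly the ORA-12154 seen in the alert log above.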
How can we achieve active/active cluster setup with Oracle
Hi Experts,
How can we achieve an active/active database setup with Oracle WITHOUT USING RAC? As far as I know it's impossible (unless I'm wrong)..
We are using Oracle 11.2.0.1 64bit on Windows 2008 server. We deployed Oracle FailSafe, but that's more of an active/passive solution based on a Windows cluster.
The other solution we were thinking about is to use DataGuard and replication: two servers, with the Oracle instance on one server generating logs, and the other server receiving the logs and applying them to the physical standby DB. Still, this is not a real active/active setup.
So, is it possible to run 2 servers in an active/active cluster and have the Oracle database in an active/active setup, or have the instance running on multiple nodes (at the same time)?
Thanks
Let me give you a brief explanation of what the situation is and you can be the judge..
My client has four databases, the smallest being 20GB and the biggest around 35GB (SGA 750MB to 1.4GB - tiny by DB standards; on a normal day you could probably run all four of them on a decent desktop). The DBs are used to keep track of people information. Throughout the year, the databases sit almost idle, and by idle I mean the odd update here and there, the odd report, etc. No heavy processing of any sort. For two days of the year (year end) we have all the operators consolidating records and whatnot, and they will be pounding away entering data and updating the tables - with hourly reports that go to 3rd parties. The client expects 99.99 uptime and availability during those 2 days.
Now, tell me, how can I justify using RAC and spending hundreds of thousands of dollars in licensing, plus whatever extra costs are introduced by the complexity of the environment, for the above scenario, knowing that I have no real use for RAC for 363 days of the year and MIGHT need it for 2 days? This is the dilemma we're facing.
Thanks
Edited by: rsar001 on Sep 3, 2010 9:42 AM -
ABAP+JAVA stack and oracle dataguard
Hi all,
I use Oracle Dataguard for my Data recovery System.
How can I manage the Java part of NW2004s? In case of failover or switchover, the ABAP part is OK but the J2EE part cannot start.
Thanks for your help,
Regards
Oracle 10.2.0.2 on Windows 2003 x64 R2
It's ok guys. I just replaced the j2ee/cluster directory of the standby system with the one from the source system.
The J2EE engine can now be started.
Unable to bring up ASM on 2nd node of a 2-node Cluster
Having a very weird problem on a 2-node cluster. I can only bring up one ASM instance at a time. If I bring up the second, it hangs. This is what the second (hung) instance puts in the alert log:
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Picked latch-free SCN scheme 3
Using LOG_ARCHIVE_DEST_1 parameter default value as /ORAUTL/oraasm/product/ASM/dbs/arch
Autotune of undo retention is turned off.
LICENSE_MAX_USERS = 0
SYS auditing is disabled
ksdpec: called for event 13740 prior to event group initialization
Starting up ORACLE RDBMS Version: 10.2.0.3.0.
System parameters with non-default values:
large_pool_size = 12582912
instance_type = asm
cluster_interconnects = 192.168.0.12
cluster_database = TRUE
instance_number = 2
remote_login_passwordfile= EXCLUSIVE
background_dump_dest = /ORAUTL/oraasm/admin/+ASM2/bdump
user_dump_dest = /ORAUTL/oraasm/admin/+ASM2/udump
core_dump_dest = /ORAUTL/oraasm/admin/+ASM2/cdump
pga_aggregate_target = 0
Cluster communication is configured to use the following interface(s) for this instance
192.168.0.12
Fri Nov 21 21:10:48 2008
cluster interconnect IPC version:Oracle UDP/IP (generic)
IPC Vendor 1 proto 2
PMON started with pid=2, OS id=5428
DIAG started with pid=3, OS id=5430
PSP0 started with pid=4, OS id=5432
LMON started with pid=5, OS id=5434
LMD0 started with pid=6, OS id=5436
LMS0 started with pid=7, OS id=5438
MMAN started with pid=8, OS id=5442
DBW0 started with pid=9, OS id=5444
LGWR started with pid=10, OS id=5446
CKPT started with pid=11, OS id=5448
SMON started with pid=12, OS id=5458
RBAL started with pid=13, OS id=5475
GMON started with pid=14, OS id=5487
Fri Nov 21 21:10:49 2008
lmon registered with NM - instance id 2 (internal mem no 1)
Fri Nov 21 21:10:49 2008
Reconfiguration started (old inc 0, new inc 2)
ASM instance
List of nodes:
0 1
Global Resource Directory frozen
Communication channels reestablished
After this it hangs. I've checked everything. CRS is fine.
I suspect its the kernel revision. This is a cluster of two v890's. Kernel rev is 127127-11. Anyone seen this issue ?
thanksResponses in-line:
Have you got any issues reported from the Lock Monitor (LMON)? (Those messages in the alert.log are summaries of the reconfiguration event.)
No issues that I have seen. I see trc files on both nodes for lmon, but neither contains errors.
Do you have any post issues on the date the issue began (something with "Reconfiguration started")?
This is a new build. It's going to be a DR environment (Dataguard Physical Standby), so we've never managed to get ASM up yet.
Do you have any other errors on the second node on the date the issue appears (ORA-27041 or other messages)?
No errors at all.
What is the result of a crs_stat -t ?
HA Resource Target State
ora.vzdfwsdbp01.LISTENER_VZDFWSDBP01.lsnr ONLINE ONLINE on vzdfwsdbp01
ora.vzdfwsdbp01.gsd ONLINE ONLINE on vzdfwsdbp01
ora.vzdfwsdbp01.ons ONLINE ONLINE on vzdfwsdbp01
ora.vzdfwsdbp01.vip ONLINE ONLINE on vzdfwsdbp01
ora.vzdfwsdbp02.LISTENER_VZDFWSDBP02.lsnr ONLINE ONLINE on vzdfwsdbp02
ora.vzdfwsdbp02.gsd ONLINE ONLINE on vzdfwsdbp02
ora.vzdfwsdbp02.ons ONLINE ONLINE on vzdfwsdbp02
ora.vzdfwsdbp02.vip ONLINE ONLINE on vzdfwsdbp02
ASM isn't registered with CRS/OCR yet. I did add it at one time, but it didn't seem to make any difference.
What is the release of your installation - 10.2.0.4? Otherwise check whether you can upgrade CRS, ASM and your RDBMS to that release.
CRS, ASM and Oracle will be 10.2.0.3.
Can't go to 10.2.0.4 yet as the primary site is at 10.2.0.3 on a live system.
Can you please tell us what is the OS / hardware in use?
Solaris 10, Sun v890
$ uname -a
SunOS dbp02 5.10 Generic_127127-11 sun4u sparc SUNW,Sun-Fire-V890
What is the result of that on the second node:
Even a startup nomount hangs on the second node.
sqlplus / as sysdba
startup nomount
desc v$asm_diskgroup;
select name, state from v$asm_diskgroup;
In the case that no diskgroup is mounted, do:
alter diskgroup 'your diskgroupname' mount;
What is the result of that?
thanks
-toby -
Hello Everyone,
I am facing an issue with a Dataguard setup. The description follows:
Purpose:
Set up Data Guard between the Production and DR (Physical Standby) databases using the Oracle Data Guard solution.
Problem Statement:
When network connectivity is interrupted between the Primary database and the Physical Standby database, the Primary database is unable to respond to application servers. This issue occurs while log shipment is in progress. However, if the Oracle Data Guard log shipment is stopped, then the production database/system works fine even if the connectivity between the Primary database and the Physical Standby database is interrupted.
Standby database is configured in high performance mode.
Environment:
Database Software Primary and Standby Server – Oracle10g Enterprise with Partition option, 64 bit, Version – 10.2.0.4
Primary Database server is configured with Two Sun M5000 nodes in OS cluster environment, Active and Passive Mode, Sun Cluster Suite 3.2 and OS Solaris 10
Standby Database Server is configured, Server – V890, OS Solaris 10
Multiple Java-based applications are connected with the Primary database using a JDBC type 4 driver to process requests.
Two independent IPMP are configured on Primary database server, one for application network and second for data guard network.
Application network are configured with dedicated switch and data guard network is connected with different switch.
A single listener is configured on the physical IP, and the application connects to the database through a virtual IP dynamically assigned through the cluster service.
SQL> SELECT PROTECTION_MODE, PROTECTION_LEVEL, DATABASE_ROLE FROM V$DATABASE;
PROTECTION_MODE PROTECTION_LEVEL DATABASE_ROLE
MAXIMUM PERFORMANCE MAXIMUM PERFORMANCE PRIMARY
SQL> SELECT PROTECTION_MODE, PROTECTION_LEVEL, DATABASE_ROLE FROM V$DATABASE;
PROTECTION_MODE PROTECTION_LEVEL DATABASE_ROLE
MAXIMUM PERFORMANCE MAXIMUM PERFORMANCE PHYSICAL STANDBY
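With both sides at MAXIMUM PERFORMANCE, the primary should not stall when the standby becomes unreachable, provided the remote destination is explicitly asynchronous and times out quickly. A hedged sketch of what to verify on the primary - the attribute values are illustrative, and 'stndby' is the TNS alias taken from this configuration:

```
-- On the primary: explicit ASYNC NOAFFIRM, and fail fast on network loss
alter system set log_archive_dest_2 =
  'SERVICE=stndby LGWR ASYNC NOAFFIRM NET_TIMEOUT=30
   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)' scope=both;

-- Confirm the destination status after a test log switch
alter system switch logfile;
select dest_id, status, error from v$archive_dest where dest_id = 2;
```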
And here is the alert log file snapshot along with the other necessary information.
Errors in file /oracle/admin/prtp/udump/prtp_rfs_3634.trc:
ORA-16009: remote archive log destination must be a STANDBY database
Sat Aug 28 00:01:49 2010
Errors in file /oracle/admin/prtp/bdump/prtp_arc0_29429.trc:
ORA-16009: remote archive log destination must be a STANDBY database
Sat Aug 28 00:01:49 2010
FAL[server, ARC0]: Error 16009 creating remote archivelog file 'prtp'
FAL[server, ARC0]: FAL archive failed, see trace file.
Sat Aug 28 00:01:49 2010
Errors in file /oracle/admin/prtp/bdump/prtp_arc0_29429.trc:
ORA-16055: FAL request rejected
ARCH: FAL archive failed. Archiver continuing
Sat Aug 28 00:01:49 2010
ORACLE Instance prtp - Archival Error. Archiver continuing.
Sat Aug 28 00:01:49 2010
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
RFS[2]: Assigned to RFS process 3636
RFS[2]: Not using real application clusters
Sat Aug 28 00:01:49 2010
Errors in file /oracle/admin/prtp/udump/prtp_rfs_3636.trc:
ORA-16009: remote archive log destination must be a STANDBY database
Sat Aug 28 01:29:41 2010
Thread 1 advanced to log sequence 24582 (LGWR switch)
Current log# 6 seq# 24582 mem# 0: /oradata1/prtp/redo-log/redo06_1.log
Current log# 6 seq# 24582 mem# 1: /oradata2/prtp/redo-log/redo06_2.log
LGWR: Standby redo logfile selected for thread 1 sequence 24583 for destination LOG_ARCHIVE_DEST_2
Sat Aug 28 01:29:42 2010
Thread 1 advanced to log sequence 24583 (LGWR switch)
Current log# 7 seq# 24583 mem# 0: /oradata1/prtp/redo-log/redo07_1.log
Current log# 7 seq# 24583 mem# 1: /oradata2/prtp/redo-log/redo07_2.log
Sat Aug 28 01:44:38 2010
LGWR: Standby redo logfile selected for thread 1 sequence 24584 for destination LOG_ARCHIVE_DEST_2
Sat Aug 28 01:44:38 2010
Thread 1 advanced to log sequence 24584 (LGWR switch)
Current log# 8 seq# 24584 mem# 0: /oradata1/prtp/redo-log/redo08_1.log
Current log# 8 seq# 24584 mem# 1: /oradata2/prtp/redo-log/redo08_2.log
Sat Aug 28 01:59:39 2010
LGWR: Standby redo logfile selected for thread 1 sequence 24585 for destination LOG_ARCHIVE_DEST_2
Sat Aug 28 01:59:39 2010
Thread 1 advanced to log sequence 24585 (LGWR switch)
Current log# 1 seq# 24585 mem# 0: /oradata1/prtp/redo-log/redo01_1.log
Current log# 1 seq# 24585 mem# 1: /oradata2/prtp/redo-log/redo01_2.log
Sat Aug 28 02:14:38 2010
LGWR: Standby redo logfile selected for thread 1 sequence 24586 for destination LOG_ARCHIVE_DEST_2
Sat Aug 28 02:14:38 2010
Thread 1 advanced to log sequence 24586 (LGWR switch)
Current log# 2 seq# 24586 mem# 0: /oradata1/prtp/redo-log/redo02_1.log
Current log# 2 seq# 24586 mem# 1: /oradata2/prtp/redo-log/redo02_2.log
Sat Aug 28 02:29:39 2010
LGWR: Standby redo logfile selected for thread 1 sequence 24587 for destination LOG_ARCHIVE_DEST_2
Sat Aug 28 02:29:39 2010
Thread 1 advanced to log sequence 24587 (LGWR switch)
Current log# 3 seq# 24587 mem# 0: /oradata1/prtp/redo-log/redo03_1.log
Current log# 3 seq# 24587 mem# 1: /oradata2/prtp/redo-log/redo03_2.log
Sat Aug 28 02:44:38 2010
LGWR: Standby redo logfile selected for thread 1 sequence 24588 for destination LOG_ARCHIVE_DEST_2
Errors in file /oracle/admin/prtp/udump/prtp_rfs_9611.trc:
ORA-16009: remote archive log destination must be a STANDBY database
Sat Aug 28 01:27:56 2010
Errors in file /oracle/admin/prtp/bdump/prtp_arc0_29429.trc:
ORA-16009: remote archive log destination must be a STANDBY database
Sat Aug 28 01:27:56 2010
FAL[server, ARC0]: Error 16009 creating remote archivelog file 'prtp'
FAL[server, ARC0]: FAL archive failed, see trace file.
Sat Aug 28 01:27:56 2010
Errors in file /oracle/admin/prtp/bdump/prtp_arc0_29429.trc:
ORA-16055: FAL request rejected
ARCH: FAL archive failed. Archiver continuing
Sat Aug 28 01:27:56 2010
ORACLE Instance prtp - Archival Error. Archiver continuing.
Sat Aug 28 01:27:56 2010
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
RFS[18]: Assigned to RFS process 9613
RFS[18]: Not using real application clusters
Sat Aug 28 01:27:56 2010
Errors in file /oracle/admin/prtp/udump/prtp_rfs_9613.trc:
ORA-16009: remote archive log destination must be a STANDBY database
Sat Aug 28 01:27:56 2010
Errors in file /oracle/admin/prtp/bdump/prtp_arc0_29429.trc:
ORA-16009: remote archive log destination must be a STANDBY database
Sat Aug 28 01:27:56 2010
FAL[server, ARC0]: Error 16009 creating remote archivelog file 'prtp'
FAL[server, ARC0]: FAL archive failed, see trace file.
Sat Aug 28 01:27:56 2010
Errors in file /oracle/admin/prtp/bdump/prtp_arc0_29429.trc:
ORA-16055: FAL request rejected
ARCH: FAL archive failed. Archiver continuing
Sat Aug 28 01:27:56 2010
ORACLE Instance prtp - Archival Error. Archiver continuing.
Sat Aug 28 01:29:39 2010
Thread 1 cannot allocate new log, sequence 24581
Private strand flush not complete
Current log# 4 seq# 24580 mem# 0: /oradata1/prtp/redo-log/redo04_1.log
Current log# 4 seq# 24580 mem# 1: /oradata2/prtp/redo-log/redo04_2.log
NAME TYPE VALUE
O7_DICTIONARY_ACCESSIBILITY boolean FALSE
active_instance_count integer
aq_tm_processes integer 1
archive_lag_target integer 900
asm_diskgroups string
asm_diskstring string
asm_power_limit integer 1
audit_file_dest string /oracle/ora10g/rdbms/audit
audit_sys_operations boolean FALSE
audit_syslog_level string
audit_trail string NONE
background_core_dump string partial
background_dump_dest string /oracle/admin/prtp/bdump
backup_tape_io_slaves boolean FALSE
bitmap_merge_area_size integer 1048576
blank_trimming boolean FALSE
buffer_pool_keep string
buffer_pool_recycle string
circuits integer
cluster_database boolean FALSE
cluster_database_instances integer 1
cluster_interconnects string
commit_point_strength integer 1
commit_write string
compatible string 10.2.0
control_file_record_keep_time integer 7
control_files string /oradata1/prtp/control/control
01.ctl, /oradata2/prtp/control
/control02.ctl, /oradata3/prtp
/control/control03.ctl
core_dump_dest string /oracle/admin/prtp/cdump
cpu_count integer 48
create_bitmap_area_size integer 8388608
create_stored_outlines string
cursor_sharing string FORCE
cursor_space_for_time boolean TRUE
db_16k_cache_size big integer 0
db_2k_cache_size big integer 0
db_32k_cache_size big integer 0
db_4k_cache_size big integer 0
db_8k_cache_size big integer 0
db_block_buffers integer 0
db_block_checking string FALSE
db_block_checksum string TRUE
db_block_size integer 8192
db_cache_advice string ON
db_cache_size big integer 6G
db_create_file_dest string
db_create_online_log_dest_1 string
db_create_online_log_dest_2 string
db_create_online_log_dest_3 string
db_create_online_log_dest_4 string
db_create_online_log_dest_5 string
db_domain string
db_file_multiblock_read_count integer 16
db_file_name_convert string
db_files integer 200
db_flashback_retention_target integer 0
db_keep_cache_size big integer 0
db_name string prtp
db_recovery_file_dest string
db_recovery_file_dest_size big integer 0
db_recycle_cache_size big integer 0
db_unique_name string prtp
db_writer_processes integer 6
dbwr_io_slaves integer 0
ddl_wait_for_locks boolean FALSE
dg_broker_config_file1 string /oracle/ora10g/dbs/dr1prtp.dat
dg_broker_config_file2 string /oracle/ora10g/dbs/dr2prtp.dat
dg_broker_start boolean FALSE
disk_asynch_io boolean TRUE
dispatchers string
distributed_lock_timeout integer 60
dml_locks integer 19380
drs_start boolean FALSE
event string 10511 trace name context forev
er, level 2
fal_client string prtp
fal_server string stndby
fast_start_io_target integer 0
fast_start_mttr_target integer 600
fast_start_parallel_rollback string LOW
file_mapping boolean FALSE
fileio_network_adapters string
filesystemio_options string asynch
fixed_date string
gc_files_to_locks string
gcs_server_processes integer 0
global_context_pool_size string
global_names boolean FALSE
hash_area_size integer 131072
hi_shared_memory_address integer 0
hs_autoregister boolean TRUE
ifile file
instance_groups string
instance_name string prtp
instance_number integer 0
instance_type string RDBMS
java_max_sessionspace_size integer 0
java_pool_size big integer 160M
java_soft_sessionspace_limit integer 0
job_queue_processes integer 10
large_pool_size big integer 560M
ldap_directory_access string NONE
license_max_sessions integer 0
license_max_users integer 0
license_sessions_warning integer 0
local_listener string
lock_name_space string
lock_sga boolean FALSE
log_archive_config string
log_archive_dest string
log_archive_dest_1 string location=/archive/archive-log/
MANDATORY
log_archive_dest_10 string
log_archive_dest_2 string service=stndby LGWR
log_archive_dest_3 string
log_archive_dest_4 string
log_archive_dest_5 string
log_archive_dest_6 string
log_archive_dest_7 string
log_archive_dest_8 string
log_archive_dest_9 string
log_archive_dest_state_1 string enable
log_archive_dest_state_10 string enable
log_archive_dest_state_2 string ENABLE
log_archive_dest_state_3 string enable
log_archive_dest_state_4 string enable
log_archive_dest_state_5 string enable
log_archive_dest_state_6 string enable
log_archive_dest_state_7 string enable
log_archive_dest_state_8 string enable
log_archive_dest_state_9 string enable
log_archive_duplex_dest string
log_archive_format string arc_%t_%s_%r.arc
log_archive_local_first boolean TRUE
log_archive_max_processes integer 2
log_archive_min_succeed_dest integer 1
log_archive_start boolean FALSE
log_archive_trace integer 0
log_buffer integer 20971520
log_checkpoint_interval integer 0
log_checkpoint_timeout integer 1800
log_checkpoints_to_alert boolean FALSE
log_file_name_convert string
logmnr_max_persistent_sessions integer 1
max_commit_propagation_delay integer 0
max_dispatchers integer
max_dump_file_size string UNLIMITED
max_enabled_roles integer 150
max_shared_servers integer
nls_calendar string
nls_comp string
nls_currency string
nls_date_format string
nls_date_language string
nls_dual_currency string
nls_iso_currency string
nls_language string AMERICAN
nls_length_semantics string BYTE
nls_nchar_conv_excp string FALSE
nls_numeric_characters string
nls_sort string
nls_territory string AMERICA
nls_time_format string
nls_time_tz_format string
nls_timestamp_format string
nls_timestamp_tz_format string
object_cache_max_size_percent integer 10
object_cache_optimal_size integer 102400
olap_page_pool_size big integer 0
open_cursors integer 4500
open_links integer 30
open_links_per_instance integer 30
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.4
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
os_authent_prefix string ops$
os_roles boolean FALSE
parallel_adaptive_multi_user boolean TRUE
parallel_automatic_tuning boolean FALSE
parallel_execution_message_size integer 2152
parallel_instance_group string
parallel_max_servers integer 960
parallel_min_percent integer 0
parallel_min_servers integer 0
parallel_server boolean FALSE
parallel_server_instances integer 1
parallel_threads_per_cpu integer 2
pga_aggregate_target big integer 3G
plsql_ccflags string
plsql_code_type string INTERPRETED
plsql_compiler_flags string INTERPRETED, NON_DEBUG
plsql_debug boolean FALSE
NAME TYPE VALUE
plsql_native_library_dir string
plsql_native_library_subdir_count integer 0
plsql_optimize_level integer 2
plsql_v2_compatibility boolean FALSE
plsql_warnings string DISABLE:ALL
pre_11g_enable_capture boolean FALSE
pre_page_sga boolean FALSE
processes integer 4000
query_rewrite_enabled string TRUE
query_rewrite_integrity string enforced
rdbms_server_dn string
read_only_open_delayed boolean FALSE
recovery_parallelism integer 0
recyclebin string OFF
remote_archive_enable string true
remote_dependencies_mode string TIMESTAMP
remote_listener string
remote_login_passwordfile string EXCLUSIVE
remote_os_authent boolean FALSE
remote_os_roles boolean FALSE
replication_dependency_tracking boolean TRUE
resource_limit boolean TRUE
resource_manager_plan string
resumable_timeout integer 0
rollback_segments string
serial_reuse string disable
service_names string prtp
session_cached_cursors integer 0
session_max_open_files integer 10
sessions integer 4405
sga_max_size big integer 20G
sga_target big integer 20G
shadow_core_dump string partial
shared_memory_address integer 0
shared_pool_reserved_size big integer 214748364
shared_pool_size big integer 4G
shared_server_sessions integer
shared_servers integer 0
skip_unusable_indexes boolean TRUE
smtp_out_server string smtp.banglalinkgsm.com
sort_area_retained_size integer 0
sort_area_size integer 65536
spfile string /oradata1/prtp/pfile/spfileprt
p.ora
sql92_security boolean FALSE
sql_trace boolean FALSE
sql_version string NATIVE
sqltune_category string DEFAULT
standby_archive_dest string ?/dbs/arch
standby_file_management string AUTO
star_transformation_enabled string FALSE
statistics_level string TYPICAL
streams_pool_size big integer 0
tape_asynch_io boolean TRUE
thread integer 0
timed_os_statistics integer 0
timed_statistics boolean TRUE
trace_enabled boolean TRUE
tracefile_identifier string
transactions integer 4845
transactions_per_rollback_segment integer 5
undo_management string AUTO
undo_retention integer 15000
undo_tablespace string UNDOTBS
use_indirect_data_buffers boolean FALSE
user_dump_dest string /oracle/admin/prtp/udump
utl_file_dir string
workarea_size_policy string AUTO -
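ORA-16009 typically means redo is being shipped to a database that is not mounted as a physical standby. In this dump, the primary's own alert log shows FAL trying to create a remote archive log at 'prtp', which is the primary's own service name, suggesting the standby's fal_client points back at the primary. A hedged sketch of the settings to check on the standby - the alias names 'prtp' and 'stndby' are assumed from this parameter dump:

```
-- On the STANDBY: fal_server must point at the primary, and fal_client
-- must be the alias by which the primary reaches THIS standby
alter system set fal_server = 'prtp'   scope=both;
alter system set fal_client = 'stndby' scope=both;

-- Then verify from each side that the alias connects where expected:
--   sqlplus sys@prtp as sysdba   -> DATABASE_ROLE should be PRIMARY
--   sqlplus sys@stndby as sysdba -> DATABASE_ROLE should be PHYSICAL STANDBY
select database_role from v$database;
```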
Red Hat Cluster Suite without the use of RAC
Hi,
I want to install Oracle 9i/10g database on two nodes in linux redhat AS 3. I want to use Red Hat Cluster Suite for clustering and failover the Oracle db without the use of RAC. Is it possible and would it be sufficient enough ? Does Red Hat Cluster Suite support with Oracle 10g or just 9i ? With Red HAt Cluster Suite, would it require to have one virtual IP ?
Thanks
If you want failover, all you need is a Data Guard solution; you can skip Cluster Suite.
If you want a distributed database then you need RAC. You can't implement a 9i/10g cluster per se by using an OS solution instead of an Oracle solution.
Node A - Primary
Node B - Secondary
Node A -> B (DataGuard redo replication)
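The A -> B redo shipping in this layout is typically configured with a handful of primary-side init parameters. A minimal sketch, assuming DB_UNIQUE_NAMEs prod and stdby and a TNS alias stdby (none of these names come from the thread):

```sql
-- Primary (Node A) init parameters -- all names and paths are illustrative
log_archive_config='DG_CONFIG=(prod,stdby)'
log_archive_dest_1='LOCATION=/oradata/arch VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=prod'
log_archive_dest_2='SERVICE=stdby ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stdby'
fal_server='stdby'                -- gap-fetch source; relevant after a role swap
standby_file_management='AUTO'    -- so datafile adds/drops propagate to the standby
```

After a switchover, the equivalent parameters on Node B point back at Node A, so replication can reverse direction.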
Process management scripts switch users from A to B and vice versa, pending recovery of the instances. Cluster Suite has no involvement beyond what may be a nice system management capability.
-
Encountering ORA-01152 when implementing DataGuard
Working on a 2-node cluster hosting 11.1.0.7 on a Linux RH4 platform, attempting to implement DataGuard on a single node with the same OS and DB version, following the document http://www.oracle.com/technology/deploy/availability/pdf/dataguard11g_rac_maa.pdf
The "duplicate target" command works successfully, but having recreated the spfile on the standby node, when I attempt to start up the standby database I get the following error.
SQL> startup
ORACLE instance started.
Total System Global Area 534462464 bytes
Fixed Size 2161400 bytes
Variable Size 314574088 bytes
Database Buffers 209715200 bytes
Redo Buffers 8011776 bytes
Database mounted.
ORA-10458: standby database requires recovery
ORA-01152: file 1 was not restored from a sufficiently old backup
ORA-01110: data file 1: '+DATA/abcdg/datafile/system.258.697912407'
I have re-tried several times without success - any ideas anybody?
I've skimmed the PDF and I think what has happened is that you may have slightly diverged from the sequence shown by issuing "startup" instead of "startup mount"; the difference is that "startup" tries to open the standby database read-only, which is when ORA-01152 is thrown.
I'm assuming here that the rman duplicate operation does not include a recovery phase, if that is true then the restored datafiles will need recovering to a consistent state before the database can be opened in read-only mode.
In short you probably just need to issue:
recover managed standby database disconnect;
and let managed recovery run the archive logs in. Once the database is consistent, subsequent "startup" commands should work without error.
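Spelled out as a SQL*Plus session, the suggested sequence looks roughly like this (a sketch; the v$managed_standby query is just one way to watch apply progress):

```sql
SQL> shutdown immediate
SQL> startup mount   -- mount only; plain "startup" attempts a read-only open and raises ORA-01152
SQL> alter database recover managed standby database disconnect from session;
-- watch the MRP apply archived logs until it catches up:
SQL> select process, status, sequence# from v$managed_standby;
-- once consistent, the standby can be opened read only:
SQL> alter database recover managed standby database cancel;
SQL> alter database open read only;
```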
If you don't actually want it open read-only then you can just leave it mounted with managed recovery running.
-
Hello guys;
I want to use a clustered Oracle server, with another server at a different site acting as a backup server that copies the database online from the cluster server.
I want to be able to make the backup server the primary server if something goes wrong with the cluster server.
What steps should I take? If there is any document to read, I'd be thankful.
Thanks in advance.
Hi,
What do you mean by cluster? If you want HA (High Availability) with an Active/Passive cluster, you can go with a 3rd-party cluster vendor, like HACMP from IBM, ServiceGuard from HP, and so on. If you want to build this with Oracle technologies, then you are talking about Oracle RAC One Node. And of course you would go with Oracle RAC (Real Application Clusters) if you want an Active/Active cluster. These are your options in terms of clustering and High Availability.
There is another point: if you want to create a DR (Disaster Recovery) copy of your primary database, then you should consider implementing Oracle Dataguard.
Oracle has rich documentation with blueprints and best practices; my sincere advice is to get familiar with it:
http://www.oracle.com/technetwork/database/features/availability/maa-090890.html
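As a concrete starting point for the Data Guard route, a physical standby is commonly instantiated with RMAN. A hedged sketch using 11g active-duplication syntax (the prod and stdby TNS aliases are assumptions, not from the thread):

```sql
-- RMAN, connected to both the primary (target) and the standby-to-be (auxiliary)
RMAN> connect target sys@prod
RMAN> connect auxiliary sys@stdby
RMAN> duplicate target database for standby
        from active database
        dorecover
        nofilenamecheck;
```

After the duplicate completes, managed recovery is started on the standby and Data Guard keeps the DR copy in sync.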
Regards,
Sve