Rollback a Data Guard parameter
I am trying to roll back my Data Guard setup parameters while the primary database is online, but I cannot set destination 2 back to NULL (blank):
SQL> alter system set log_archive_dest_2=null scope=both;
alter system set log_archive_dest_2=null scope=both
ERROR at line 1:
ORA-32017: failure in updating SPFILE
ORA-16179: incremental changes to "log_archive_dest_2" not allowed with SPFILE
OK, closing the thread:
SQL> alter system set log_archive_dest_2='' scope=both;
System altered.
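For anyone else hitting ORA-16179: incremental changes (such as setting the value to NULL) are not allowed for this parameter with an SPFILE, but assigning a complete value is, and the empty string counts as a complete value. A hedged sketch of the two usual options (deferring the destination instead of clearing it is often all that is needed):

```sql
-- Option 1: stop shipping to destination 2 but keep its definition
alter system set log_archive_dest_state_2=defer scope=both;

-- Option 2: clear the destination; '' is a complete value where NULL is not
alter system set log_archive_dest_2='' scope=both;
```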
Similar Messages
-
Dataguard Parameter Issue - Issue Resolved
I have used DBCA to create a new database and have taken the default values for the parameters, including pga_aggregate_target=94371840. I am now attempting to use GRID to create a physical standby database. I follow the instructions for Data Guard, but it fails during the last steps with the error "ORA-1078 failure in processing system parameters. LRM-00116 syntax error at 'pga_aggregate_ta' following '='."
1. What am I doing wrong?
2. Is this caused by user error, or have I encountered a bug?
3. Do I need to change my parameter to a specific value?
Ta
Message was edited by:
user458496

I am currently using 10gR2 (10.2.0.2) on my source and destination databases. My GRID version is 10.1.0.3. I think that many of my problems will be resolved once I patch up my GRID.
The error can only display the first 16 characters of the parameter name, so it does not show completely; however, the parameter has been entered correctly in my init.ora and spfile.ora.
I resolved the problem of not being able to start the database using the spfile by re-creating it using the pfile. However, the cloning problem still persists.
As mentioned before, I will apply the necessary patches to GRID then try again.
Thank you. -
Why a ROLLBACK SEGMENT benefits from MINEXTENTS of 20 or more
Product: ORACLE SERVER
Date written: 2003-06-19
Why a ROLLBACK SEGMENT benefits from MINEXTENTS of 20 or more
=============================================================
PURPOSE
This note describes how to lay out a rollback segment tablespace so that it meets the requirements of your database applications.
Creating, Optimizing, and Understanding Rollback Segments
- How rollback segments are structured and written
- The internal mechanism Oracle uses to assign rollback segments to transactions
- Rollback segment size and count
- A test for choosing rollback segment size and count
- Rollback segment extent size and count
- Why a MINEXTENTS of 20 or more is beneficial
- The OPTIMAL storage parameter and SHRINK
Explanation
How rollback segments are structured and written
A rollback segment consists of several contiguous blocks, grouped into extents. The extents are written in an ordered, circular fashion: when the current extent fills up, writing moves on to the next extent. A transaction writes its record at the current location in the rollback segment and then advances the current pointer by the size of the record. The position where records are currently being written is called the "Head"; the "Tail" is the starting position of the oldest active transaction record in the rollback segment.
The internal mechanism Oracle uses to assign rollback segments to transactions
When a new transaction requests a rollback segment, Oracle checks the number of active transactions using each rollback segment and assigns the one serving the fewest. Rollback segments must be large enough to handle the transaction load, and there must be enough of them that transactions can always get the rollback segment space they need.
1. A transaction can use only one rollback segment.
2. Several transactions can write to the same extent.
3. The Head never moves into an extent that is still in use by the Tail.
4. The extents of a rollback segment form a ring; the next extent is never skipped, and extents are never used out of order.
5. If the Head cannot move into the next extent, a new extent is allocated and added to the ring.
Given these rules, transaction duration is just as important a consideration as transaction size.
Rollback segment size and count
Whether a rollback segment is large enough is determined directly by transaction activity. Size the rollback segments based on the transactions that occur routinely; if the problem is a rare, unusually large transaction, handle it with a dedicated rollback segment.
While transactions run, the Head must not wrap around so quickly that it catches the Tail, and when long-running queries run against frequently changing data, the rollback segment must not wrap around, so that read consistency can be maintained.
The reason to keep an adequate number of rollback segments is to prevent contention between processes. Contention can be checked through the V$WAITSTAT, V$ROLLSTAT and V$ROLLNAME views; the query is as follows:
sqlplus system/manager
select rn.name, (rs.waits/rs.gets) rbs_header_wait_ratio
from v$rollstat rs, v$rollname rn
where rs.usn = rn.usn
order by 1;
If the rbs_header_wait_ratio returned by this query is greater than 0.01, add more rollback segments.
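The V$WAITSTAT view mentioned above can be checked with a query along these lines (a hedged sketch; the class names follow the standard view):

```sql
-- waits recorded against undo blocks and undo segment headers
select class, count
from v$waitstat
where class like '%undo%';
```

A nonzero, growing count for the undo header class points to the same rollback segment header contention as a high rbs_header_wait_ratio.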
A test for choosing rollback segment size and count
1. Create a rollback segment tablespace.
2. Decide how many rollback segments to create for the test.
3. Create the rollback segments with equally sized extents, choosing the extent size so that a fully grown segment has about 10 to 30 extents.
4. Use a MINEXTENTS of 2 for the test rollback segments.
5. Keep only the test rollback segments and the SYSTEM rollback segment online.
6. Run the transactions, loading the application if necessary.
7. Check for rollback segment contention.
8. Monitor how far each rollback segment grows.
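As a hedged illustration of steps 1 to 4, the test segments might be created like this (all names, paths and sizes are invented for the example):

```sql
create tablespace rbs_test
  datafile '/u01/oradata/rbs_test01.dbf' size 50m;

-- equally sized extents; minextents 2 for the test phase
create rollback segment r01
  tablespace rbs_test
  storage (initial 512k next 512k minextents 2 maxextents 121);

alter rollback segment r01 online;
```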
Rollback segment extent size and count
The test reveals the maximum size a rollback segment grows to; this figure is called the "minimum coverage size". If contention occurs, repeat the test with more rollback segments. If the extent count needs to fall below 10 or rise above 30, repeat the test with a larger or smaller extent size.
When sizing rollback segment extents, it is recommended to create every extent the same size, and to make the rollback tablespace size a multiple of the extent size. For optimal performance, the MINEXTENTS of a rollback segment should be 20 or more.
Why a MINEXTENTS of 20 or more is beneficial
Rollback segment space is allocated dynamically and, when it is no longer needed (provided the OPTIMAL parameter is set), fully committed extents are released (deallocated) down to the optimal size. The fewer extents a rollback segment has, the larger the chunks of space that get allocated and released, compared to a segment with many extents.
Consider the following example. Suppose a rollback segment of about 200M consists of two 100M extents. When this segment needs additional space, and given that all extents of a rollback segment should be the same size, another 100M extent has to be allocated. The segment has just grown by 50% of its previous size, which is likely far more space than was actually needed.
Now consider instead a 200M rollback segment made up of twenty 10M extents. When it needs additional space, a single extra 10M extent is enough. When a rollback segment consists of 20 or more extents, adding one more extent never increases the total segment size by more than 5%.
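The two layouts can be compared directly; the arithmetic is simple enough to show as a query against DUAL:

```sql
-- growth from adding one extent, as a percentage of the 200M segment
select round(100 * 100/200) pct_with_two_100m_extents,   -- 50
       round(100 * 10/200)  pct_with_twenty_10m_extents  -- 5
from dual;
```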
In other words, space allocation and release can proceed far more flexibly and easily. To summarize, keeping the extent count of a rollback segment at 20 or more makes space allocation and deallocation that much smoother, and many tests have shown that processing is considerably faster with 20 or more extents.
One thing is certain: allocating and releasing space is not a cheap operation, and performance visibly degrades while extents are being allocated and deallocated. Even if the cost of a single extent is not a big issue, rollback segments endlessly repeat the cycle of allocating and releasing space, so the conclusion is that smaller extents are far more efficient in terms of cost.
The OPTIMAL storage parameter and SHRINK
OPTIMAL is a rollback segment storage parameter used to keep the segment at the optimal number of extents when extents are deallocated. It is used as follows, and the optimal size must be specified inside the STORAGE clause:
alter rollback segment r01 storage (optimal 1m);
Once the segment grows beyond the optimal size, fully committed extents are released until only the optimal size remains. In other words, the option keeps the rollback segment at roughly the size specified by OPTIMAL: the segment may grow for a while, but when the next transaction takes that rollback segment, it is resized back to the optimal size. When the most recently used extent of the segment fills up and another extent is required, the segment size is compared with the optimal size; if the segment size is larger, tail extents not involved in any active transaction are deallocated.
Specifying an optimal size is necessary when one rollback segment takes up so much space that other rollback segments are left without room to add extents; in short, setting the OPTIMAL parameter is beneficial for space availability.
A shrink can also be issued manually, as follows; if no size is specified, the segment shrinks to the optimal size:
alter rollback segment [rbs_name] shrink to [size];
The segment sometimes does not shrink immediately after the SHRINK command: it does not shrink while transactions are present, and shrinks once the transactions end. OPTIMAL takes effect roughly 5 to 10 minutes after the sessions have exited.
What is a suitable OPTIMAL size?
=> Around 20 to 30 extents is suitable; the exact size varies with the nature of the batch jobs, and it does not matter at all if the sum of the OPTIMAL values exceeds the datafile size. Setting OPTIMAL equal to INITIAL/NEXT is not good, because a shrink would then occur every time an extent is added. It is recommended to compute the average size of the rollback segments and use that as the OPTIMAL size.
Use the following query at peak time to obtain the size of each rollback segment:
select initial_extent + next_extent * (extents-1) "Rollback_size", extents
from dba_segments
where segment_type ='ROLLBACK';
The average of these sizes (in bytes) can be used as the OPTIMAL size for the rollback segments.
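The averaging step can also be done directly in SQL (a hedged sketch using the same DBA_SEGMENTS columns as the query above):

```sql
select round(avg(initial_extent + next_extent * (extents-1))) avg_rbs_bytes
from dba_segments
where segment_type = 'ROLLBACK';
```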
Note that if segments shrink too often, or if OPTIMAL is set too small, the probability of ORA-01555 "snapshot too old" errors increases; it may therefore be better not to use OPTIMAL at all, or to set it to as large a value as possible.
The optimal size of a rollback segment can be checked in the OPTSIZE column of the dynamic view V$ROLLSTAT.
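For example, joining V$ROLLSTAT to V$ROLLNAME to see the OPTSIZE per segment name:

```sql
select rn.name, rs.optsize
from v$rollstat rs, v$rollname rn
where rs.usn = rn.usn;
```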
Example
none
Reference Documents
<Note:69464.1> -
ORACLE PARALLEL SERVER (OPS)
Product: ORACLE SERVER
Date written: 2004-08-13
ORACLE PARALLEL SERVER (OPS)
==============================
PURPOSE
This note describes the architecture of ORACLE PARALLEL SERVER (OPS).
SCOPE
In Standard Edition, the Real Application Clusters feature is supported from 10g (10.1.0) onward.
Explanation
1. Parallel Server Architecture
OPS is a multi-processing configuration in which many users concurrently access one database through separate nodes, using the DLM (PCM) provided for smooth resource sharing among loosely coupled systems, thereby maximizing system availability and overall performance.
(1) Loosely Coupled System
A shared-disk architecture for sharing resources such as data files and print queues between tightly coupled systems such as SMP machines; information transfer between the nodes uses a common high-speed bus.
(2) Distributed Lock Manager (DLM)
The software that coordinates and manages resource sharing in a loosely coupled system: when applications request access to the same resource at the same time, it keeps them synchronized and ensures no conflicts occur.
The DLM's main services are:
- Maintaining the current "ownership" of each resource
- Accepting resource requests from application processes
- Notifying a process when the resource it requested becomes available
- Granting exclusive access to a resource
(3) Parallel Cache Management (PCM)
A distributed lock allocated to manage one or more data blocks in a data file is called a PCM lock. For an instance to access a particular resource, it must become the "owner" of that resource's master copy, which in turn means becoming the owner of the distributed lock covering the resource. This ownership is held until another instance requests an update of the same data block, or of another data block covered by the same PCM lock. Before ownership passes from one instance to another, the changed data blocks are always written to disk, so cache coherency between the instances on each node is strictly guaranteed.
2. Characteristics of Parallel Server
- An Oracle instance can be started on each node in the network
- Each instance consists of an SGA plus background processes
- All instances share the control files and datafiles
- Each instance has its own redo logs
- Control files, datafiles and redo log files reside on one or more disks
- Many users can run transactions through each instance
- Row-level locking is preserved
3. Tuning Focus
In an OPS environment, where the resources of a single database are used concurrently through different nodes, lock management between instances is unavoidable if data consistency and continuity are to be preserved. To minimize overhead such as the transfer of resource ownership between instances mentioned above (the "pinging" phenomenon), effective application partitioning (distribution of the workload) is the most important practical factor; in other words, cross-access to the same resource through different nodes must be minimized.
What follows covers the tuning points at the database-structure level in an OPS environment: the GC (Global Constant) parameters related to PCM locks, the options to apply to storage, and other necessary items.
(1) Initial Parameters
In an OPS environment, the GC (Global Constant) parameters that define the PCM locks have a decisive influence on lock management and must be set to identical values on every node (gc_lck_procs excepted). On a typical UNIX system, the total number of PCM locks defined through the GC parameters can be set within the "Number of Resources" limit of the DLM configuration the system provides.
- gc_db_locks
Defines the total number of PCM locks (distributed locks); it must be larger than the sum of the locks defined in the gc_file_to_locks parameter.
If it is set too small, each PCM lock covers relatively many data blocks, so the likelihood of pinging (and false pinging) grows correspondingly, and the resulting overhead can degrade system performance significantly. Set it as large as possible.
- False Pinging
Because one PCM lock covers many data blocks, pinging can be triggered not by an instance's own block but by another block under the same PCM lock; this is called "false pinging".
The pinging count per database object can be checked through the V$PING view; objects with sum(xnc) > 5 deserve particular attention.
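A hedged sketch of that per-object check (assuming the standard V$PING columns NAME, KIND and XNC):

```sql
select name, kind, sum(xnc) total_pings
from v$ping
group by name, kind
having sum(xnc) > 5
order by 3 desc;
```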
- gc_file_to_locks
The total PCM locks defined by gc_db_locks are ultimately distributed across the datafiles; this parameter allocates the locks to each datafile according to the operator's analysis. That analysis should cover the nature of the objects in each datafile, the transaction types, the access frequency and other details, but an appropriate and efficient distribution of data blocks relative to the total PCM locks is essential.
After distributing the PCM locks per datafile with this parameter, the status can be checked in the following fixed tables.
Sample : gc_db_locks = 1000
         gc_file_to_locks = "1=500:5=200"
X$KCLFI ----> shows the defined buckets
Fileno Bucket
1      1
2      0
3      0
4      0
5      2
X$KCLFH ----> shows the locks allocated per bucket
Bucket Locks Grouping Start
0      300   1        0
1      500   1        300
2      200   1        800
The sum of the per-datafile PCM locks defined in gc_files_to_locks obviously cannot exceed gc_db_locks.
The following statement shows the number of data blocks allocated in each datafile.
select e.file_id id,f.file_name name,sum(e.blocks) allocated,
f.blocks "file size"
from dba_extents e,dba_data_files f
where e.file_id = f.file_id
group by e.file_id,f.file_name,f.blocks
order by e.file_id;
- gc_rollback_segments
Defines the total number of rollback segments created in the instance of each OPS node (as defined by rollback_segments in init.ora). Multiple instances can share rollback segments, but in an OPS environment the resulting contention overhead is enormous, so each instance must have its own rollback segments, and the same name cannot be assigned across instances.
select count(*) from dba_rollback_segs
where status='ONLINE';
Set the parameter to at least the value returned by the query above.
- gc_rollback_locks
Defines the number of distributed locks granted to the rollback segment blocks being changed concurrently within a single rollback segment.
Total# of RBS Locks = gc_rollback_locks * (gc_rollback_segments+1)
The "+1" above accounts for the SYSTEM rollback segment. Define a number of locks appropriate to the total number of rollback segment blocks.
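Worked through with illustrative values (gc_rollback_locks = 20, gc_rollback_segments = 10; both numbers are invented for the example):

```sql
-- worked example of the formula above
select 20 * (10 + 1) total_rbs_locks from dual;  -- 220
```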
The following statement shows the number of blocks allocated to each rollback segment.
select s.segment_name name,sum(r.blocks) blocks
from dba_segments s,dba_extents r
where s.segment_name = r.segment_name
and s.segment_type = 'ROLLBACK'
group by s.segment_name;
- gc_save_rollback_locks
A tablespace can be taken offline even if, at that moment, transactions are accessing objects in it. Undo generated after the tablespace goes offline is recorded and kept in a "deferred rollback segment" in the SYSTEM tablespace, preserving read consistency. This parameter defines the locks allocated to the deferred rollback segments created at that time.
In general, a value about the same as gc_rollback_locks is adequate.
- gc_segments
Defines the locks that cover all segment header blocks. If this value is too small, the likelihood of pinging again increases correspondingly, so set it to at least the number of segments defined in the database.
select count(*) from dba_segments
where segment_type in ('INDEX','TABLE','CLUSTER');
- gc_tablespaces
Defines the maximum number of tablespaces that can be switched from offline to online, or online to offline, at the same time in an OPS environment. To be safe, set it to the number of tablespaces defined in the database.
select count(*) from dba_tablespaces;
- gc_lck_procs
Sets the number of background lock processes; up to 10 (LCK0-LCK9) can be configured. One is configured by default, but increase the count as needed.
(2) Storage Options
- Free Lists
A free list is a list of available free blocks. An insert or update that needs newly available space must first search the common pool of blocks holding the free-space list and related information; if enough blocks cannot be secured there, Oracle allocates a new extent.
When many transactions hit a particular object concurrently, having multiple free lists reduces the contention for free space accordingly. Adjusting the number of free lists to the object's nature and access type can therefore bring a considerable benefit; for example, for objects with frequent inserts or size-increasing updates, raise it to around 3 - 5 depending on access frequency.
- freelist groups
Defines the number of freelist groups; in an OPS environment this is typically set to the number of instances. An object's extents can be allocated to a particular instance, which maintains the freelist group for that instance, so free lists can also be managed per instance.
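As a hedged illustration, a hot insert table might be created with raised values (the table name and all numbers are invented for the example):

```sql
create table orders_demo (
  id  number,
  txt varchar2(30)
)
storage (initial 1m next 1m
         freelists 4 freelist groups 2);
```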
(3) Others
- Initrans
The initial number of transaction entries required when data blocks are accessed concurrently; 23 bytes of space are pre-allocated per entry. The default is "1" for tables and "2" for indexes and clusters. For very frequently accessed objects, set it appropriately with concurrent transactions in mind.
4. Application Partition
This is the most important part of OPS application design. The basic partitioning principles are as follows:
. Separate read-intensive data from write-intensive data into different tablespaces.
. Wherever possible, run each application on a single node only; this amounts to partitioning the data accessed by the different applications.
. Allocate temporary tablespaces per node.
. Keep rollback segments independent per node.
5. Backup & Recovery
OPS sites generally run online 24 * 365, so the database has to be operated in archive log mode with hot backups, and the key question is how quickly and completely the database can be recovered when a failure occurs.
Everything general about backup and recovery is the same as in an exclusive-mode database environment.
(1) Backup
- Hot Backup Internals
With the database running normally in archive mode, online datafiles are backed up, tablespace by tablespace.
When alter tablespace - begin backup is issued, a checkpoint occurs on the datafiles making up that tablespace: dirty buffers in memory are written to those datafiles on disk, and at the same time the checkpoint SCN is updated in the header of every datafile in hot backup mode; this becomes an important point during recovery.
Also, until alter tablespace - end backup is issued, that is, while the hot backup is in progress, the datafiles produce "fuzzy" backup data, and even when a single record changes, the whole block is written to the redo log, so you can see extra archive files being generated. Consequently, if the administrator backs up all the datafiles but does not issue end backup, overall system performance is seriously affected, so take particular care.
Whether a hot backup is in progress can be checked with the following statement:
select * from v$backup; -> check the STATUS column
- Hot Backup Steps (Recommended)
① alter system archive log current
② alter tablespace tablespacename begin backup
③ backup the datafiles, control files, redo log files
④ alter tablespace tablespacename end backup
⑤ alter database backup controlfile to 'filespec'
⑥ alter database backup controlfile to trace noresetlogs (safety)
⑦ alter system archive log current
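The recommended sequence above, sketched as a single SQL*Plus script (the tablespace and file names are placeholders, not part of the original note):

```sql
alter system archive log current;
alter tablespace users begin backup;
-- copy the datafiles at OS level here (e.g. host cp ...)
alter tablespace users end backup;
alter database backup controlfile to '/backup/control.bkp';
alter database backup controlfile to trace noresetlogs;
alter system archive log current;
```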
(2) Recovery
- On Instance Failure
In an OPS environment, when one instance fails, the SMON of another instance detects it immediately and performs recovery for the failed instance automatically, using the redo log entries and rollback images the failed instance generated. On a multi-node failure, the next instance to open takes over this role. Recovery of all online datafiles the failed instance was accessing is carried out in parallel; if, as part of this process, verification of the datafiles fails and instance recovery cannot proceed, run the following SQL statement:
alter system check datafiles global;
- On Media Failure
When a media failure occurs, in whatever form, restore the backup copy and then perform complete or incomplete media recovery; this is no different from operating the database in exclusive mode.
The archived log files generated by each node, that is, by each thread, are of course all required, and the recovery work can be performed from any of the OPS nodes.
- Parallel Recovery
For instance or media failure, ORACLE 7.1 and later support parallel recovery at the instance level (init.ora) or command level (Recover--). Several recovery processes can read the redo log files simultaneously and apply the after-images to the datafiles. The Recovery_Parallelism parameter sets the number of concurrent recovery processes and cannot exceed the value of the Parallel_Max_Servers parameter.
(3) An Error That Can Occur in Operation
- ORA-1187
ORA-1187 : can not read from file name because it failed verification tests.
(Situation) A tablespace was created on one node and operated normally; accessing a particular object through another node then raised ORA-1187.
(Cause) The owner, group, or mode of the raw disk was changed on the other node after the tablespace had been created. (An admin fault.)
(Action) SQL> alter system check datafiles global;
Reference Documents
--------------------
hal lavender wrote:
Hi,
I am trying to achieve Load Balancing & Failover of Database requests to two of the nodes in 8i OPS.
Both the nodes are located in the same data center.
Here comes the config of one of the connection pools.
<JDBCConnectionPool CapacityIncrement="5" ConnLeakProfilingEnabled="true"
DriverName="oracle.jdbc.driver.OracleDriver" InactiveConnectionTimeoutSeconds="0"
InitialCapacity="10" MaxCapacity="25" Name="db1Connection598011" PasswordEncrypted="{3DES}ARaEjYZ58HfKOKk41unCdQ=="
Properties="user=ts2user" Targets="ngusCluster12,ngusCluster34" TestConnectionsOnCreate="false"
TestConnectionsOnRelease="false" TestConnectionsOnReserve="true" TestFrequencySeconds="0"
TestTableName="SQL SELECT 1 FROM DUAL" URL="jdbc:oracle:thin:@192.22.11.160:1421:dbinst01" />
<JDBCConnectionPool CapacityIncrement="5" ConnLeakProfilingEnabled="true"
DriverName="oracle.jdbc.driver.OracleDriver" InactiveConnectionTimeoutSeconds="0"
InitialCapacity="10" MaxCapacity="25" Name="db2Connection598011" PasswordEncrypted="{3DES}ARaEjYZ58HfKOKk41unCdQ=="
Properties="user=ts2user" Targets="ngusCluster12,ngusCluster34" TestConnectionsOnCreate="false"
TestConnectionsOnRelease="false" TestConnectionsOnReserve="true" TestFrequencySeconds="0"
TestTableName="SQL SELECT 1 FROM DUAL" URL="jdbc:oracle:thin:@192.22.11.161:1421:dbinst01" />
<JDBCMultiPool AlgorithmType="Load-Balancing" Name="pooledConnection598011"
PoolList="db1Connection598011,db2Connection598011" Targets="ngusCluster12,ngusCluster34" />
Please let me know , if you need further information
Hal

Hi Hal. All that seems fine, as it should be. Tell me how you enact a failure so that you'd expect one pool to still be good when the other is bad.
thanks,
Joe -
Archive logs not transferring to standby
Hi,
The Data Guard parameters are configured on the primary database to transfer archive logs, i.e. log_archive_dest_2='service=to_orcl'
and standby_file_management=auto, and the directory structure has also been created.
On the standby database side: standby_archive_dest='/u01/app/oracle/arch/orcl'
The listener is working and tnsnames is also working.
But archive logs are not being transferred from the primary to the standby.
Can you please let me know why this is happening?
thanks a lot

venky wrote:
log_archive_dest_state_2=enable and log_archive_dest_2='service=to_orcl'
These are the parameters that are set.

A couple of points must be looked into when archive logs are not being shipped to the standby:
1. It should be:
log_archive_dest_2='SERVICE=MARKTEST LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)'
2. Are you able to connect with sys/pwd@stby as sysdba? If yes, it means there is no connectivity issue; if not, that is the reason the logs are not being shipped to the standby site: there is an issue with the password file or with the service name. Just drop and create a new/fresh password file with the same password on both primary and standby.
3. Check whether your OS user is part of the ORA_DBA group.
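Two more hedged checks worth running on the primary while the problem is being reproduced (standard Data Guard views; dest_id 2 is assumed from the parameter name above):

```sql
-- state and last error of the standby destination
select dest_id, status, error
from v$archive_dest
where dest_id = 2;

-- recent Data Guard messages
select message
from v$dataguard_status
where severity in ('Error', 'Fatal');
```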
Regards
Girish Sharma -
Hi, I'm doing a "proof of concept" on an XA transaction spanning two servers.
To keep it as simple as possible, I have configured two servers and deployed
the same MDB on both. I'm also using only JMS resources to simplify the test.
The goal is to send a message to two different JMS servers (each deployed in
a Weblogic server) both coordinated by an XA UserTransaction started in the
client.
The deployed MDB of each server just listens to a topic and prints out the
received TextMessage.
The configuration is as follows:
* Server 1
Domain name: a
Server name: a00
config.xml (extract):
<JMSServer Name="a.a00.JMSServer" PagingStore="a.a00.JMSFileStore"
Store="a.a00.JMSFileStore" Targets="a00">
<JMSTopic JNDIName="xajms.test.Topic" Name="XA JMS Proof of Concept"
StoreEnabled="true"/>
</JMSServer>
<JMSFileStore Directory="config/a/a00.jmsfilestore"
Name="a.a00.JMSFileStore"/>
<JMSConnectionFactory JNDIName="a.a00.JMSConnectionFactory"
Name="a.a00.JMS Connection Factory" Targets="a00"
UserTransactionsEnabled="true" XAConnectionFactoryEnabled="true"/>
* Server 2
Domain name: b
Server name: b00
config.xml (extract):
<JMSServer Name="b.b00.JMSServer" PagingStore="b.b00.JMSFileStore"
Store="b.b00.JMSFileStore" Targets="b00">
<JMSTopic JNDIName="xajms.test.Topic" Name="XA JMS Proof of Concept"
StoreEnabled="true"/>
</JMSServer>
<JMSFileStore Directory="config/b/b00.jmsfilestore"
Name="b.b00.JMSFileStore"/>
<JMSConnectionFactory JNDIName="b.b00.JMSConnectionFactory"
Name="b.b00.JMS Connection Factory" Targets="b00"
UserTransactionsEnabled="true" XAConnectionFactoryEnabled="true"/>
and here is an extract of client code:
UserTransaction utx;

InitialContext ctx1, ctx2;

System.out.println("Retrieving initial context for server 1");
ctx1 = (InitialContext) getInitialContext(SERVER_1, PORT_1);

System.out.println("Retrieving initial context for server 2");
ctx2 = (InitialContext) getInitialContext(SERVER_2, PORT_2);

utx = (UserTransaction) ctx1.lookup("javax.transaction.UserTransaction");
utx.setTransactionTimeout(30);

System.out.println("Begining transaction");
utx.begin();

System.out.println("Sending message to server 1");
ref.send(ctx1, JMS_FACTORY_1, args[0]);

System.out.println("Sending message to server 2");
ref.send(ctx2, JMS_FACTORY_2, args[0]);

System.out.println("Ending transaction...");
if (args[1].equals("commit"))
    utx.commit();
else
    utx.rollback();
and the result of executing it from a third machine:
CASE I:
Retrieving initial context for server 1
Retrieving initial context for server 2
Begining transaction
Sending message to server 1
Sending message to server 2
Ending transaction...
javax.transaction.SystemException: SubCoordinator 'b00+127.0.0.1:7001+b+' not available
Start server side stack trace:
javax.transaction.SystemException: SubCoordinator 'b00+127.0.0.1:7001+b+' not available
        at weblogic.transaction.internal.TransactionImpl.abort(TransactionImpl.java:965)
        at weblogic.transaction.internal.ServerSCInfo.startPrepare(ServerSCInfo.java:200)
        at weblogic.transaction.internal.ServerTransactionImpl.globalPrepare(ServerTransactionImpl.java:1619)
        at weblogic.transaction.internal.ServerTransactionImpl.internalCommit(ServerTransactionImpl.java:217)
        at weblogic.transaction.internal.ServerTransactionImpl.commit(ServerTransactionImpl.java:189)
        at weblogic.transaction.internal.CoordinatorImpl.commit(CoordinatorImpl.java:68)
        at weblogic.transaction.internal.CoordinatorImpl_WLSkel.invoke(Unknown Source)
        at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:360)
        at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:329)
        at weblogic.rmi.internal.BasicExecuteRequest.execute(BasicExecuteRequest.java:22)
        at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:140)
        at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:121)
End server side stack trace
<<no stack trace available>>
--------------- nested within: ------------------
weblogic.transaction.RollbackException: SubCoordinator 'b00+127.0.0.1:7001+b+' not available
Start server side stack trace:
javax.transaction.SystemException: SubCoordinator 'b00+127.0.0.1:7001+b+' not available
        at weblogic.transaction.internal.TransactionImpl.abort(TransactionImpl.java:965)
        at weblogic.transaction.internal.ServerSCInfo.startPrepare(ServerSCInfo.java:200)
        at weblogic.transaction.internal.ServerTransactionImpl.globalPrepare(ServerTransactionImpl.java:1619)
        at weblogic.transaction.internal.ServerTransactionImpl.internalCommit(ServerTransactionImpl.java:217)
        at weblogic.transaction.internal.ServerTransactionImpl.commit(ServerTransactionImpl.java:189)
        at weblogic.transaction.internal.CoordinatorImpl.commit(CoordinatorImpl.java:68)
        at weblogic.transaction.internal.CoordinatorImpl_WLSkel.invoke(Unknown Source)
        at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:360)
        at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:329)
        at weblogic.rmi.internal.BasicExecuteRequest.execute(BasicExecuteRequest.java:22)
        at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:140)
        at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:121)
--------------- nested within: ------------------
weblogic.transaction.RollbackException: SubCoordinator 'b00+127.0.0.1:7001+b+' not available - with nested exception:
[javax.transaction.SystemException: SubCoordinator 'b00+127.0.0.1:7001+b+' not available]
        at weblogic.transaction.internal.TransactionImpl.throwRollbackException(TransactionImpl.java:1524)
        at weblogic.transaction.internal.ServerTransactionImpl.internalCommit(ServerTransactionImpl.java:265)
        at weblogic.transaction.internal.ServerTransactionImpl.commit(ServerTransactionImpl.java:189)
        at weblogic.transaction.internal.CoordinatorImpl.commit(CoordinatorImpl.java:68)
        at weblogic.transaction.internal.CoordinatorImpl_WLSkel.invoke(Unknown Source)
        at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:360)
        at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:329)
        at weblogic.rmi.internal.BasicExecuteRequest.execute(BasicExecuteRequest.java:22)
        at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:140)
        at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:121)
End server side stack trace
- with nested exception:
[javax.transaction.SystemException: SubCoordinator 'b00+127.0.0.1:7001+b+' not available
Start server side stack trace:
javax.transaction.SystemException: SubCoordinator 'b00+127.0.0.1:7001+b+' not available
        at weblogic.transaction.internal.TransactionImpl.abort(TransactionImpl.java:965)
        at weblogic.transaction.internal.ServerSCInfo.startPrepare(ServerSCInfo.java:200)
        at weblogic.transaction.internal.ServerTransactionImpl.globalPrepare(ServerTransactionImpl.java:1619)
        at weblogic.transaction.internal.ServerTransactionImpl.internalCommit(ServerTransactionImpl.java:217)
        at weblogic.transaction.internal.ServerTransactionImpl.commit(ServerTransactionImpl.java:189)
        at weblogic.transaction.internal.CoordinatorImpl.commit(CoordinatorImpl.java:68)
        at weblogic.transaction.internal.CoordinatorImpl_WLSkel.invoke(Unknown Source)
        at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:360)
        at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:329)
        at weblogic.rmi.internal.BasicExecuteRequest.execute(BasicExecuteRequest.java:22)
        at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:140)
        at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:121)
End server side stack trace
        at weblogic.rmi.internal.BasicOutboundRequest.sendReceive(BasicOutboundRequest.java:85)
        at weblogic.rmi.internal.BasicRemoteRef.invoke(BasicRemoteRef.java:136)
        at weblogic.rmi.internal.ProxyStub.invoke(ProxyStub.java:35)
        at $Proxy3.commit(Unknown Source)
        at weblogic.transaction.internal.TransactionImpl.commit(TransactionImpl.java:294)
        at weblogic.transaction.internal.TransactionManagerImpl.commit(TransactionManagerImpl.java:247)
        at weblogic.transaction.internal.TransactionManagerImpl.commit(TransactionManagerImpl.java:240)
        at xajms.client.Send2BothTopics.main(Send2BothTopics.java:110)
real 1m6.530s
user 0m1.380s
sys 0m0.050s
If I swap the message sending order, the output changes to:
CASE II:
Retrieving initial context for server 1
Retrieving initial context for server 2
Begining transaction
Sending message to server 2
Sending message to server 1
Ending transaction...
weblogic.transaction.internal.TimedOutException: Timed out tx=0:8c9a9deee02fe21e after 30 seconds
Start server side stack trace:
weblogic.transaction.internal.TimedOutException: Timed out tx=0:8c9a9deee02fe21e after 30 seconds
        at weblogic.transaction.internal.ServerTransactionImpl.wakeUp(ServerTransactionImpl.java:1228)
        at weblogic.transaction.internal.ServerTransactionManagerImpl.processTimedOutTransactions(ServerTransactionManagerImpl.java:488)
        at weblogic.transaction.internal.TransactionManagerImpl.wakeUp(TransactionManagerImpl.java:1629)
        at weblogic.transaction.internal.ServerTransactionManagerImpl.wakeUp(ServerTransactionManagerImpl.java:451)
        at weblogic.transaction.internal.TransactionManagerImpl$1.run(TransactionManagerImpl.java:1595)
        at java.lang.Thread.run(Thread.java:479)
End server side stack trace
<<no stack trace available>>
--------------- nested within: ------------------
weblogic.transaction.RollbackException: Timed out tx=0:8c9a9deee02fe21e after 30 seconds
Start server side stack trace:
weblogic.transaction.internal.TimedOutException: Timed out tx=0:8c9a9deee02fe21e after 30 seconds
        at weblogic.transaction.internal.ServerTransactionImpl.wakeUp(ServerTransactionImpl.java:1228)
        at weblogic.transaction.internal.ServerTransactionManagerImpl.processTimedOutTransactions(ServerTransactionManagerImpl.java:488)
        at weblogic.transaction.internal.TransactionManagerImpl.wakeUp(TransactionManagerImpl.java:1629)
        at weblogic.transaction.internal.ServerTransactionManagerImpl.wakeUp(ServerTransactionManagerImpl.java:451)
        at weblogic.transaction.internal.TransactionManagerImpl$1.run(TransactionManagerImpl.java:1595)
        at java.lang.Thread.run(Thread.java:479)
--------------- nested within: ------------------
weblogic.transaction.RollbackException: Timed out tx=0:8c9a9deee02fe21e after 30 seconds - with nested exception:
[weblogic.transaction.internal.TimedOutException: Timed out tx=0:8c9a9deee02fe21e after 30 seconds]
        at weblogic.transaction.internal.TransactionImpl.throwRollbackException(TransactionImpl.java:1524)
        at weblogic.transaction.internal.ServerTransactionImpl.internalCommit(ServerTransactionImpl.java:265)
        at weblogic.transaction.internal.ServerTransactionImpl.commit(ServerTransactionImpl.java:189)
        at weblogic.transaction.internal.CoordinatorImpl.commit(CoordinatorImpl.java:68)
        at weblogic.transaction.internal.CoordinatorImpl_WLSkel.invoke(Unknown Source)
        at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:360)
        at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:329)
        at weblogic.rmi.internal.BasicExecuteRequest.execute(BasicExecuteRequest.java:22)
        at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:140)
        at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:121)
End server side stack trace
- with nested exception:
[weblogic.transaction.internal.TimedOutException: Timed out tx=0:8c9a9deee02fe21e after 30 seconds
Start server side stack trace:
weblogic.transaction.internal.TimedOutException: Timed out tx=0:8c9a9deee02fe21e after 30 seconds
        at weblogic.transaction.internal.ServerTransactionImpl.wakeUp(ServerTransactionImpl.java:1228)
        at weblogic.transaction.internal.ServerTransactionManagerImpl.processTimedOutTransactions(ServerTransactionManagerImpl.java:488)
        at weblogic.transaction.internal.TransactionManagerImpl.wakeUp(TransactionManagerImpl.java:1629)
        at weblogic.transaction.internal.ServerTransactionManagerImpl.wakeUp(ServerTransactionManagerImpl.java:451)
        at weblogic.transaction.internal.TransactionManagerImpl$1.run(TransactionManagerImpl.java:1595)
        at java.lang.Thread.run(Thread.java:479)
End server side stack trace
        at weblogic.rmi.internal.BasicOutboundRequest.sendReceive(BasicOutboundRequest.java:85)
        at weblogic.rmi.internal.BasicRemoteRef.invoke(BasicRemoteRef.java:136)
        at weblogic.rmi.internal.ProxyStub.invoke(ProxyStub.java:35)
        at $Proxy3.commit(Unknown Source)
        at weblogic.transaction.internal.TransactionImpl.commit(TransactionImpl.java:294)
        at weblogic.transaction.internal.TransactionManagerImpl.commit(TransactionManagerImpl.java:247)
        at weblogic.transaction.internal.TransactionManagerImpl.commit(TransactionManagerImpl.java:240)
        at xajms.client.Send2BothTopics.main(Send2BothTopics.java:110)
real 1m32.231s
user 0m1.390s
sys 0m0.030s
Needless to say, I have "some" questions about it:
1. Why doesn't it work?
2. Why does the order of message sending bring two different results?
3. In CASE I:
What is the meaning of the exception "javax.transaction.SystemException:
SubCoordinator 'b00+127.0.0.1:7001+b+' not available"?
Why is it using the loopback (127.0.0.1) IP? Isn't that code being run
inside the server with the SERVER_2 (192.168.1.3) IP?
4. In CASE II:
How is it that the exception message is
"weblogic.transaction.internal.TimedOutException: Timed out
tx=0:8c9a9deee02fe21e after 30 seconds" when the program clearly took
more than a minute and a half to execute?
Most of the time is spent after printing the "Ending transaction..." line.
The first part of the program executes in a few seconds (about 2 or 3,
depending on the time to get the initial context against both servers).
I'm really puzzled by this. I don't know what to think about it. Is this
so weird that nobody has tried something like this?
I have done some other tests, for which the results are:
a) Commenting the lines related to UserTransaction (82, 91-94)
It works OK. Both servers receive the message.
b) Commenting lines 85 or 88 (related to sending messages to each server)
It works OK. Both servers receive the message. I presume there is no
"real" XA transaction, as there is only one resource involved.
c) In CASE I, launching the client with "rollback" as the second parameter, I
neither get any exception nor are messages delivered, which would be OK.
d) In CASE II, launching the client with "rollback" as the second parameter, I get
a "javax.transaction.SystemException: Timeout during rollback processing"
and messages are not delivered.
Any idea? I really need this to work in order to free our server for heavy
asynchronous loads and deploy new stuff on it.
PS.
I attach both servers' config.xml files and the full MDB and client code. If
you are going to test it you must add a new user and group to the WebLogic
default file realm:
New user: subscriber
New group: xajms
Of course you must add the new user to the new group in order to make it
work.
Here are the lines extracted from the server domain's "fileRealm.properties"
file reflecting the new user and group (defined in the "role" section of the
ejb-jar.xml and weblogic-ejb-jar.xml files):
group.xajms=subscriber
user.subscriber=0x6b9a704bebc709dd083edd61ed000236eb23987f
The client receives two parameters. The first one is a string to be sent as
the message body; the second one is either "commit" or "rollback", just to test
that the XA transaction does its work as expected.
Thanks in advance.
Regards.
Ignacio.
[xajmspoc.zip]
Do you connect to the 10g database inside onBusinessEvent using a Non-XA or XA connection? I guess it could be because of a Non-XA connection (or XA without Global Transaction support): that connection gets enlisted with the transaction manager, which then tries to commit it after onMessage.
For Global Transactions options, go to your weblogic console and check the following path.
(Datasource -> Advanced Options -> Global Transactions -> Honor Global Transactions should be true)
Let me know if you need any help
Rao Kotha. -
How can I find out if the rollback optimal parameter size has been set ?
What is the command to find out whether the rollback OPTIMAL parameter size has been set? And what is the command to find out what the rollback OPTIMAL parameter is set to?
You can find the OPTIMAL size in v$rollstat.optsize
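For example, a small sketch of that lookup, joining V$ROLLNAME to get the segment names:

```sql
-- OPTSIZE is NULL for segments where OPTIMAL has not been set
SELECT n.name, s.optsize, s.rssize, s.shrinks
FROM   v$rollname n, v$rollstat s
WHERE  n.usn = s.usn;
```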
-
Oracle 10g dataguard Connect-time failover Parameter 15 minutes
Hello,
we are running a system with Data Guard in Maximum Availability protection mode and one physical standby database. There are two connections between primary and standby. I configured the listener and tnsnames to perform a connect-time failover in case one connection is down. At first it seemed to work fine, but since then it has happened that the primary always hangs for exactly 15 minutes until the failover is performed (ALL redo logs need archiving).
I think this must be the default value of some parameter, but after reading docs for hours I can't find it.
Can anybody help me?
Best regards
Fritz
It's like this. The connection failover I'm talking about is only between the primary database and the standby database. I have two connections between them. That makes two addresses for the service (servicetostandby) specified in the log_archive_dest used by Data Guard. In my case the log_archive_dest parameter looks like this:
log_archive_dest_3 = 'SERVICE=servicetostandby OPTIONAL LGWR SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=standby REOPEN=10 NET_TIMEOUT=9'
The tnsnames.ora entry looks like this:
servicetostandby =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = address1)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = address2)(PORT = 1521))
(FAILOVER = true)
(LOAD_BALANCE = false)
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = standby)
)
)
Maybe because of the Maximum Availability protection mode, the primary database does not proceed while the standby database is unavailable. I'm not sure about that. But the fact is that if I power off the standby database, the primary database hangs for only 9 seconds. That's specified in the NET_TIMEOUT attribute of the log_archive_dest parameter. So I'm quite sure that something in the Oracle Net service causes the problem.
By "the primary hangs" I mean: after the connection in use (say address1) went down, the redo log files of the primary database get filled and then it stops for 15 minutes. The messages in the alert.log:
Thread 1 cannot allocate new log, sequence 1304
All online logs needed archiving
After 15 minutes the connection via address2 is established and everything works fine.
I hope I made myself clear this time and didn't confuse you totally.
Edited by: user633694 on Apr 14, 2009 8:13 AM -
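A hedged guess at the exact 15-minute figure: it is in the range of a typical OS-level TCP connection-establishment timeout, so each connect attempt to the dead address can block that long before Oracle Net moves on to the next ADDRESS in the list. From 10gR2 the per-attempt wait can be bounded in sqlnet.ora; the value below is illustrative, not a recommendation:

```
# sqlnet.ora on the primary host -- seconds per connect attempt
SQLNET.OUTBOUND_CONNECT_TIMEOUT = 10
```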
The OPTIMAL storage parameter in the rollback segment
Hi,
in metalink note Subject: ORA-01555 "Snapshot too old" in Very Large Databases (if using Rollback Segments)
Doc ID: 45895.1
I see:
Solution 1d:
Don't use the OPTIMAL storage parameter in the rollback segment.
But how do I avoid using the OPTIMAL storage parameter in the rollback segment?
Thank you.
If you are using undo_management=AUTO (in 9i or higher) then there is no "OPTIMAL" setting.
"OPTIMAL" applies when using Manual Undo Management with Rollback Segments created by the DBA.
If you are using Manual Undo Management, check your Rollback Segments. The OPTIMAL size would be visible in V$ROLLSTAT.
select a.segment_name a, b.xacts b, b.waits c, b.shrinks e, b.wraps f,
b.extends g, b.rssize/1024/1024 h, b.optsize/1024/1024 i,
b.hwmsize/1024/1024 j, b.aveactive/1024/1024 k , b.status l
-- from v$rollname a, v$rollstat b
from dba_rollback_segs a, v$rollstat b
where a.segment_id = b.usn(+)
and b.status in ('ONLINE', 'PENDING OFFLINE','FULL')
order by a.segment_name
/
To unset the OPTIMAL setting you can run
alter rollback segment SEGMENT_NAME storage (optimal NULL);
Note that if you unset OPTIMAL, your Rollback Segments will remain at very large sizes if and when they grow while running large transactions ("OPTIMAL" is the pre-9i method for Oracle to automatically shrink Rollback Segments). You can then manually SHRINK, or DROP and CREATE, Rollback Segments.
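The manual SHRINK or drop-and-recreate options mentioned above can be sketched like this (the segment name rbs01 and tablespace rbs are hypothetical):

```sql
-- shrink back towards OPTIMAL, or to an explicit size
ALTER ROLLBACK SEGMENT rbs01 SHRINK TO 10M;
-- or recreate the segment at the desired size with OPTIMAL unset
DROP ROLLBACK SEGMENT rbs01;
CREATE ROLLBACK SEGMENT rbs01 TABLESPACE rbs
  STORAGE (INITIAL 1M NEXT 1M MINEXTENTS 20 OPTIMAL NULL);
```

(A segment must be taken offline before it can be dropped.)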
Optimal Parameter in rollback segment
The OPTIMAL parameter assumes growth, followed by a shrink. If the rollback segment is sized properly, it should never grow.
Is the rollback segment block size not fixed? Then why the shrinking and sizing?
Pls help me.
Manish
The OPTIMAL parameter assumes growth, followed by a shrink. If the rollback segment is sized properly, it should never grow.
That's right: a properly sized rollback segment should never grow, or does so within a reasonable limit.
Is the rollback segment block size not fixed? Then why the shrinking and sizing?
This parameter doesn't refer to the size of the block, which is always fixed, but to the size of the rollback segment. An RBS will grow for as long as the transaction continues. Once it grows it doesn't automatically shrink, so you have to issue the 'Alter ... shrink ..' command manually; this resizes the RBS to its minimum size (a couple of extents), and if required the RBS will grow again. So that the RBS is not shrunk all the way to the minimum size and then back up to the average size, you declare the OPTIMAL value; a shrink then reduces the RBS directly to the average size.
By the way, why are you using RBS? Are you on 8i or an earlier Oracle version?
hello
i am setting up dataguard on 10g winxp.
I want the standby database to have the same name as the primary, but wouldn't that create a conflict in this parameter?
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston)'
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,chicago)'
thanks?
hello
I was reading the documentation; it says that this parameter specifies the
DB_UNIQUE_NAME of the primary and standby databases.
Is that the same as their service names?
thanks again -
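No, they are not service names: DG_CONFIG lists the DB_UNIQUE_NAME of each database, and those must differ even when DB_NAME is shared. A hedged sketch of the relevant init.ora entries, reusing the chicago/boston names from the question:

```
# primary init.ora
*.db_name='chicago'
*.db_unique_name='chicago'
*.log_archive_config='DG_CONFIG=(chicago,boston)'
# standby init.ora -- same db_name, different db_unique_name
*.db_name='chicago'
*.db_unique_name='boston'
*.log_archive_config='DG_CONFIG=(chicago,boston)'
```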
DataGuard DB_NAME parameter
On 10.2 database,
Is it possible for the DB_NAME parameter to be different in the Primary and Physical Standby databases?
I know the DB_UNIQUE_NAME parameter must be different, but I want to know whether DB_NAME can also be different.
Instances are distinguished by SID. Thus two instances, particularly if running from the same ORACLE_HOME, have to have different SIDs if on the same server. However, Oracle has an additional check on DB_NAME as well: two instances cannot have the same DB_NAME unless DB_UNIQUE_NAME is specified.
DB_NAME is always the same when you create a Physical Standby. You can't change the DB_NAME -- if you change the value in the init/spfile parameter file, a startup will error out because it checks the DB_NAME with that in the DataFile Headers. Since the DataFiles of a Physical Standby are a physical clone of the primary, the DB_NAME has to be the same. (the same applies for DBID -- the DBID of the Primary and the Standby are the same and that is why you have to be careful about using a common RMAN Repository for both databases -- there are MetaLink notes about how to do Standby Backups).
DB_UNIQUE_NAME doesn't get stored in the DataFile headers.
Hemant K Chitale -
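A quick way to see this on either database (a sketch; all three columns come from V$DATABASE in 10g):

```sql
-- NAME (DB_NAME) matches on primary and standby;
-- DB_UNIQUE_NAME differs between them
SELECT name, db_unique_name, database_role FROM v$database;
```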
Error in Creation of Dataguard for RAC
My pfile of RAC looks like:
RACDB2.__large_pool_size=4194304
RACDB1.__large_pool_size=4194304
RACDB2.__shared_pool_size=92274688
RACDB1.__shared_pool_size=92274688
RACDB2.__streams_pool_size=0
RACDB1.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/RACDB/adump'
*.background_dump_dest='/u01/app/oracle/admin/RACDB/bdump'
*.cluster_database_instances=2
*.cluster_database=true
*.compatible='10.2.0.1.0'
*.control_files='+DATA/racdb/controlfile/current.260.627905745','+FLASH/racdb/controlfile/current.256.627905753'
*.core_dump_dest='/u01/app/oracle/admin/RACDB/cdump'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_file_name_convert='+DATA/RACDB','+DATADG/RACDG'
*.db_name='RACDB'
*.db_recovery_file_dest='+FLASH'
*.db_recovery_file_dest_size=2147483648
*.dispatchers='(PROTOCOL=TCP) (SERVICE=RACDBXDB)'
*.fal_client='RACDB'
*.fal_server='RACDG'
RACDB1.instance_number=1
RACDB2.instance_number=2
*.job_queue_processes=10
*.log_archive_config='DG_CONFIG=(RACDB,RACDG)'
*.log_archive_dest_1='LOCATION=+FLASH/RACDB/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=RACDB'
*.log_archive_dest_2='SERVICE=RACDG VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=RACDG'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='DEFER'
*.log_archive_format='%t_%s_%r.arc'
*.log_file_name_convert='+DATA/RACDB','+DATADG/RACDG'
*.open_cursors=300
*.pga_aggregate_target=16777216
*.processes=150
*.remote_listener='LISTENERS_RACDB'
*.remote_login_passwordfile='exclusive'
*.service_names='RACDB'
*.sga_target=167772160
*.standby_file_management='AUTO'
RACDB2.thread=2
RACDB1.thread=1
*.undo_management='AUTO'
RACDB2.undo_tablespace='UNDOTBS2'
RACDB1.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/u01/app/oracle/admin/RACDB/udump'
My pfile of Dataguard Instance in nomount state looks like:
RACDG.__db_cache_size=58720256
RACDG.__java_pool_size=4194304
RACDG.__large_pool_size=4194304
RACDG.__shared_pool_size=96468992
RACDG.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/RACDG/adump'
*.background_dump_dest='/u01/app/oracle/admin/RACDG/bdump'
##*.cluster_database_instances=2
##*.cluster_database=true
*.compatible='10.2.0.1.0'
##*.control_files='+DATA/RACDG/controlfile/current.260.627905745','+FLASH/RACDG/controlfile/current.256.627905753'
*.core_dump_dest='/u01/app/oracle/admin/RACDG/cdump'
*.db_block_size=8192
*.db_create_file_dest='+DATADG'
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_file_name_convert='+DATADG/RACDG','+DATA/RACDB'
*.db_name='RACDB'
*.db_recovery_file_dest='+FLASHDG'
*.db_recovery_file_dest_size=2147483648
*.dispatchers='(PROTOCOL=TCP) (SERVICE=RACDGXDB)'
*.FAL_CLIENT='RACDG'
*.FAL_SERVER='RACDB'
*.job_queue_processes=10
*.LOG_ARCHIVE_CONFIG='DG_CONFIG=(RACDB,RACDG)'
*.log_archive_dest_1='LOCATION=+FLASHDG/RACDG/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=RACDG'
*.log_archive_dest_2='SERVICE=RACDB VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=RACDB'
*.LOG_ARCHIVE_DEST_STATE_1='ENABLE'
*.LOG_ARCHIVE_DEST_STATE_2='ENABLE'
*.log_archive_format='%t_%s_%r.arc'
*.log_file_name_convert='+DATADG/RACDG','+DATA/RACDB'
*.open_cursors=300
*.pga_aggregate_target=16777216
*.processes=150
##*.remote_listener='LISTENERS_RACDG'
*.remote_login_passwordfile='exclusive'
SERVICE_NAMES='RACDG'
sga_target=167772160
standby_file_management='auto'
undo_management='AUTO'
undo_tablespace='UNDOTBS1'
user_dump_dest='/u01/app/oracle/admin/RACDG/udump'
DB_UNIQUE_NAME=RACDG
and here is what I am doing on the standby location:
[oracle@dg01 ~]$ echo $ORACLE_SID
RACDG
[oracle@dg01 ~]$ rman
Recovery Manager: Release 10.2.0.1.0 - Production on Tue Jul 17 21:19:21 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
RMAN> connect auxiliary /
connected to auxiliary database: RACDG (not mounted)
RMAN> connect target sys/xxxxxxx@RACDB
connected to target database: RACDB (DBID=625522512)
RMAN> duplicate target database for standby;
Starting Duplicate Db at 2007-07-17 22:27:08
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: sid=156 devtype=DISK
contents of Memory Script:
restore clone standby controlfile;
sql clone 'alter database mount standby database';
executing Memory Script
Starting restore at 2007-07-17 22:27:10
using channel ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: starting datafile backupset restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /software/backup/ctl4.ctl
channel ORA_AUX_DISK_1: restored backup piece 1
piece handle=/software/backup/ctl4.ctl tag=TAG20070717T201921
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:23
output filename=+DATADG/racdg/controlfile/current.275.628208075
output filename=+FLASHDG/racdg/controlfile/backup.268.628208079
Finished restore at 2007-07-17 22:27:34
sql statement: alter database mount standby database
released channel: ORA_AUX_DISK_1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 07/17/2007 22:27:43
RMAN-05501: aborting duplication of target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/undotbs2.265.627906771 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/example.264.627905917 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/users.259.627905395 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/sysaux.257.627905385 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/undotbs1.258.627905395 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/system.256.627905375 conflicts with a file used by the target database
RMAN>
Any help to clear this error will be appreciated.
Message was edited by:
Bal
Hi
Thanks everybody for helping me on this issue...........
As suggested, I had taken the parameter log_file_name_convert and db_file_name_convert out of my RAC primary database but still I am getting the same error.
Any help will be appreciated.
SQL> show parameter convert
NAME TYPE VALUE
db_file_name_convert string
log_file_name_convert string
SQL>
oracle@dg01<3>:/u01/app/oracle> rman
Recovery Manager: Release 10.2.0.1.0 - Production on Wed Jul 18 17:07:49 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
RMAN> connect auxiliary /
connected to auxiliary database: RACDB (not mounted)
RMAN> connect target sys/xxx@RACDB
connected to target database: RACDB (DBID=625522512)
RMAN> duplicate target database for standby;
Starting Duplicate Db at 2007-07-18 17:10:53
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: sid=156 devtype=DISK
contents of Memory Script:
restore clone standby controlfile;
sql clone 'alter database mount standby database';
executing Memory Script
Starting restore at 2007-07-18 17:10:54
using channel ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: starting datafile backupset restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /software/backup/ctl5.ctr
channel ORA_AUX_DISK_1: restored backup piece 1
piece handle=/software/backup/ctl5.ctr tag=TAG20070718T170529
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:33
output filename=+DATADG/racdg/controlfile/current.275.628208075
output filename=+FLASHDG/racdg/controlfile/backup.268.628208079
Finished restore at 2007-07-18 17:11:31
sql statement: alter database mount standby database
released channel: ORA_AUX_DISK_1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 07/18/2007 17:11:43
RMAN-05501: aborting duplication of target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/undotbs2.265.627906771 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/example.264.627905917 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/users.259.627905395 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/sysaux.257.627905385 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/undotbs1.258.627905395 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/system.256.627905375 conflicts with a file used by the target database -
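One thing worth checking (a hedged reading of the pfiles above): the convert parameters are read as pairs of (primary pattern, standby pattern), but the standby pfile shown lists them the other way around, so the primary's +DATA/RACDB names are never converted and RMAN sees them as conflicts. A sketch of the standby-side entries with the patterns in the documented order:

```
*.db_file_name_convert='+DATA/RACDB','+DATADG/RACDG'
*.log_file_name_convert='+DATA/RACDB','+DATADG/RACDG'
```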
Difference between rollback and issue_rollback(null)
In Key-Next-Item I put the issue_rollback(null) command and the application works fine. But when I put just a normal rollback command in the same place, it shows a "do you want to save changes" message on my front-end application.
So can anyone tell me the difference between the normal ROLLBACK command and the issue_rollback(null) procedure?
Since the documentation warns against using Issue_Rollback outside of an On-Rollback trigger, it may not be doing anything.
The plain rollback statement is interpreted by Forms as a Clear_Form with no parameters (which means using the default parameter values, i.e. it is doing a Clear_Form(Ask_Commit, To_Savepoint)).
If you really want to undo any changes already made by the user when he hits the Tab or Enter key, you should issue Clear_Form with the proper parameters. And then you have the issue of re-querying the data.
Maybe you should explain what you really want to happen in the trigger. -
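As a hedged illustration of that advice, Clear_Form can be called with explicit mode parameters so the user is not prompted to save; which constants to pass depends on the behaviour you actually want:

```sql
-- Forms PL/SQL built-in: discard pending changes without validation
-- or the "do you want to save" prompt, rolling back the whole transaction
Clear_Form(No_Validate, Full_Rollback);
```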
Cannot open the db in a DataGuard standby environment
I'm stuck: the db cannot be opened in a DataGuard standby environment.
If anyone knows this area, I would appreciate hearing the cause and a workaround.
It looks like the cause is that the spfile I created is not being read correctly,
but I can't see why it cannot be read.
Startup reading the pfile works, but what are the prerequisites for a startup that
reads the spfile? Is it simply an inconsistency, or something else?
[grid@osaka1 shell]$ asmcmd
ASMCMD> ls
DATA/
FRA/
ASMCMD> cd data
ASMCMD> ls
ASM/
WEST/
ASMCMD> ls -l
Type Redund Striped Time Sys Name
Y ASM/
N WEST/
ASMCMD>
ASMCMD> cd west
ASMCMD> ls
CONTROLFILE/
DATAFILE/
ONLINELOG/
PARAMETERFILE/
TEMPFILE/
spfilewest.ora
ASMCMD> ls -l
Type Redund Striped Time Sys Name
N CONTROLFILE/
N DATAFILE/
N ONLINELOG/
N PARAMETERFILE/
N TEMPFILE/
N spfilewest.ora => +DATA/WEST/PARAMETERFILE/spfile.257.824236121
ASMCMD> pwd
+data/west
ASMCMD>
ASMCMD> cd para*
ASMCMD> ls -l
Type Redund Striped Time Sys Name
PARAMETERFILE MIRROR COARSE AUG 23 18:00:00 Y spfile.257.824236121
ASMCMD>
ASMCMD> pwd
+data/west/PARAMETERFILE
ASMCMD> quit
[grid@osaka1 shell]$
[oracle@osaka1 dbs]$ more initHPYMUSIC.ora
SPFILE='+DATA/west/spfilewest.ora'
[oracle@osaka1 dbs]$
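A hedged observation about the startup sequence here: at instance startup Oracle looks for spfile$ORACLE_SID.ora in $ORACLE_HOME/dbs before it falls back to init$ORACLE_SID.ora, and the alert log further down shows "Using parameter settings in server-side spfile ... spfileHPYMUSIC.ora". So if a stale local spfileHPYMUSIC.ora exists in dbs, the one-line initHPYMUSIC.ora pointing at ASM is never read. One sketch of a fix (the pfile path is hypothetical):

```sql
-- first move the stale local spfile aside at the OS level, e.g.
--   mv $ORACLE_HOME/dbs/spfileHPYMUSIC.ora $ORACLE_HOME/dbs/spfileHPYMUSIC.ora.bak
-- then recreate the spfile in ASM from a known-good pfile
CREATE SPFILE='+DATA/west/spfilewest.ora' FROM PFILE='/tmp/initwest.ora';
```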
Thanks in advance.
ps.
ORA-12154 is a consistency problem, so I expect it to go away once the configuration is aligned.
This was originally planned as RAC, but I started verification with it replaced by a standalone setup, and ended up in this state.
The reason the db cannot open might even turn out to be ORA-12154...
■ For the primary
○ Opening can be confirmed to work with nothing more than the commands below.
However, because db_name was changed,
"ORA-12154: TNS:could not resolve the connect identifier specified"
keeps appearing.
ORA-12154 may be unrelated to the reason the db cannot be opened.
srvctl stop database -d east -f
srvctl start database -d east -o open
srvctl config database -d east
srvctl status database -d east
○ Reference output
set linesize 500 pages 0
col value for a90
col name for a50
select name, value
from v$parameter
where name in ('db_name','db_unique_name','log_archive_config', 'log_archive_dest_1','log_archive_dest_2',
'log_archive_dest_state_1','log_archive_dest_state_2', 'remote_login_passwordfile',
'log_archive_format','log_archive_max_processes','fal_server','db_file_name_convert',
'log_file_name_convert', 'standby_file_management');
SQL>
db_file_name_convert
log_file_name_convert
log_archive_dest_1
log_archive_dest_2 SERVICE=HPYMUSIC SYNC NOAFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=west
log_archive_dest_state_1 enable
log_archive_dest_state_2 enable
fal_server
log_archive_config
log_archive_format %t_%s_%r.dbf
log_archive_max_processes 4
standby_file_management AUTO
remote_login_passwordfile EXCLUSIVE
db_name HPYMUSIC
db_unique_name HPYMUSIC ← ▼ I intended to change only db_name, but db_unique_name was changed as well
14 rows selected.
[oracle@tokyo1 shell]$ srvctl stop database -d east -f
[oracle@tokyo1 shell]$ /u01/app/11.2.0/grid/bin/crsctl status resource -t
NAME TARGET STATE SERVER STATE_DETAILS
Cluster Resources
ora.east.db
1 OFFLINE OFFLINE Instance Shutdown
[oracle@tokyo1 shell]$ srvctl start database -d east -o open
[oracle@tokyo1 shell]$ /u01/app/11.2.0/grid/bin/crsctl status resource -t
NAME TARGET STATE SERVER STATE_DETAILS
Local Resources
ora.DATA.dg
ONLINE ONLINE tokyo1
ora.FRA.dg
ONLINE ONLINE tokyo1
ora.LISTENER.lsnr
ONLINE ONLINE tokyo1
ora.asm
ONLINE ONLINE tokyo1 Started
Cluster Resources
ora.cssd
1 ONLINE ONLINE tokyo1
ora.diskmon
1 ONLINE ONLINE tokyo1
ora.east.db
1 ONLINE ONLINE tokyo1 Open ← ▼
[oracle@tokyo1 shell]$
[oracle@tokyo1 shell]$ srvctl config database -d east
Database unique name: east
Database name: east
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: grid
spfile: +DATA/east/spfileeast.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Disk Groups: DATA,FRA
Services:
[oracle@tokyo1 shell]$ srvctl status database -d east
Database is running.
Fri Aug 23 19:44:10 2013
Error 12154 received logging on to the standby
Errors in file /u01/app/oracle/diag/rdbms/hpymusic/HPYMUSIC/trace/HPYMUSIC_arc2_7579.trc:
ORA-12154: TNS:could not resolve the connect identifier specified
PING[ARC2]: Heartbeat failed to connect to standby 'HPYMUSIC'. Error is 12154.
[oracle@tokyo1 dbs]$ pwd
/u01/app/oracle/product/11.2.0/dbhome_1/dbs
[oracle@tokyo1 dbs]$
[oracle@tokyo1 dbs]$
[oracle@tokyo1 dbs]$ more 2013.08.23_east_pfile.txt
HPYMUSIC.__db_cache_size=301989888
HPYMUSIC.__java_pool_size=4194304
HPYMUSIC.__large_pool_size=8388608
HPYMUSIC.__pga_aggregate_target=339738624
HPYMUSIC.__sga_target=503316480
HPYMUSIC.__shared_io_pool_size=0
HPYMUSIC.__shared_pool_size=176160768
HPYMUSIC.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/east/adump'
*.audit_trail='db'
*.compatible='11.2.0.0.0'
*.control_files='+DATA/east/controlfile/current.270.823277705','+FRA/east/controlfile/current.
260.823277707'
*.db_block_checking='TRUE'
*.db_block_checksum='TRUE'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain=''
*.db_name='HPYMUSIC'
*.db_recovery_file_dest='+FRA'
*.db_recovery_file_dest_size=3038773248
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=HPYMUSICXDB)'
*.log_archive_format='%t_%s_%r.dbf'
*.memory_target=842006528
*.nls_language='JAPANESE'
*.nls_territory='JAPAN'
*.open_cursors=300
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.standby_file_management='AUTO'
*.undo_tablespace='UNDOTBS1'
Fri Aug 23 19:49:38 2013
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Picked latch-free SCN scheme 3
Using LOG_ARCHIVE_DEST_1 parameter default value as USE_DB_RECOVERY_FILE_DEST
ARCH: Warning; less destinations available than specified
by LOG_ARCHIVE_MIN_SUCCEED_DEST init.ora parameter
Autotune of undo retention is turned on.
IMODE=BR
ILAT =27
LICENSE_MAX_USERS = 0
SYS auditing is disabled
Starting up:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options.
Using parameter settings in server-side spfile /u01/app/oracle/product/11.2.0/dbhome_1/dbs/spfileHPYMUSIC.ora
System parameters with non-default values:
processes = 150
nls_language = "JAPANESE"
nls_territory = "JAPAN"
memory_target = 804M
control_files = "+DATA/east/controlfile/current.270.823277705"
control_files = "+FRA/east/controlfile/current.260.823277707"
db_block_checksum = "TRUE"
db_block_size = 8192
compatible = "11.2.0.0.0"
log_archive_dest_2 = "SERVICE=HPYMUSIC SYNC NOAFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=west"
log_archive_format = "%t_%s_%r.dbf"
db_create_file_dest = "+DATA"
db_recovery_file_dest = "+FRA"
db_recovery_file_dest_size= 2898M
standby_file_management = "AUTO"
undo_tablespace = "UNDOTBS1"
db_block_checking = "TRUE"
remote_login_passwordfile= "EXCLUSIVE"
db_domain = ""
dispatchers = "(PROTOCOL=TCP) (SERVICE=HPYMUSICXDB)"
audit_file_dest = "/u01/app/oracle/admin/east/adump"
audit_trail = "DB"
db_name = "HPYMUSIC"
open_cursors = 300
diagnostic_dest = "/u01/app/oracle"
Fri Aug 23 19:49:39 2013
PMON started with pid=2, OS id=8442
Fri Aug 23 19:49:39 2013
VKTM started with pid=3, OS id=8444 at elevated priority
VKTM running at (10)millisec precision with DBRM quantum (100)ms
Fri Aug 23 19:49:39 2013
GEN0 started with pid=4, OS id=8448
Fri Aug 23 19:49:39 2013
DIAG started with pid=5, OS id=8450
Fri Aug 23 19:49:39 2013
DBRM started with pid=6, OS id=8452
Fri Aug 23 19:49:39 2013
PSP0 started with pid=7, OS id=8454
Fri Aug 23 19:49:39 2013
DIA0 started with pid=8, OS id=8456
Fri Aug 23 19:49:39 2013
MMAN started with pid=9, OS id=8458
Fri Aug 23 19:49:39 2013
DBW0 started with pid=10, OS id=8460
Fri Aug 23 19:49:39 2013
LGWR started with pid=11, OS id=8462
Fri Aug 23 19:49:39 2013
CKPT started with pid=12, OS id=8464
Fri Aug 23 19:49:39 2013
SMON started with pid=13, OS id=8466
Fri Aug 23 19:49:39 2013
RECO started with pid=14, OS id=8468
Fri Aug 23 19:49:39 2013
RBAL started with pid=15, OS id=8470
Fri Aug 23 19:49:39 2013
ASMB started with pid=16, OS id=8472
Fri Aug 23 19:49:39 2013
MMON started with pid=17, OS id=8474
Fri Aug 23 19:49:39 2013
MMNL started with pid=18, OS id=8478
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
NOTE: initiating MARK startup
starting up 1 shared server(s) ...
Starting background process MARK
Fri Aug 23 19:49:39 2013
MARK started with pid=20, OS id=8482
NOTE: MARK has subscribed
ORACLE_BASE not set in environment. It is recommended
that ORACLE_BASE be set in the environment
Reusing ORACLE_BASE from an earlier startup = /u01/app/oracle
Fri Aug 23 19:49:39 2013
ALTER DATABASE MOUNT
NOTE: Loaded library: System
SUCCESS: diskgroup DATA was mounted
ERROR: failed to establish dependency between database HPYMUSIC and diskgroup resource ora.DATA.dg
SUCCESS: diskgroup FRA was mounted
ERROR: failed to establish dependency between database HPYMUSIC and diskgroup resource ora.FRA.dg
Fri Aug 23 19:49:46 2013
NSS2 started with pid=24, OS id=8572
Successful mount of redo thread 1, with mount id 2951868947
Database mounted in Exclusive Mode
Lost write protection disabled
Completed: ALTER DATABASE MOUNT
ALTER DATABASE OPEN
LGWR: STARTING ARCH PROCESSES
Fri Aug 23 19:49:47 2013
ARC0 started with pid=26, OS id=8574
ARC0: Archival started
LGWR: STARTING ARCH PROCESSES COMPLETE
ARC0: STARTING ARCH PROCESSES
Fri Aug 23 19:49:48 2013
ARC1 started with pid=27, OS id=8576
Fri Aug 23 19:49:48 2013
ARC2 started with pid=28, OS id=8578
ARC1: Archival started
ARC2: Archival started
ARC1: Becoming the 'no FAL' ARCH
ARC1: Becoming the 'no SRL' ARCH
ARC2: Becoming the heartbeat ARCH
Fri Aug 23 19:49:48 2013
ARC3 started with pid=29, OS id=8580
LGWR: Setting 'active' archival for destination LOG_ARCHIVE_DEST_2
ARC3: Archival started
ARC0: STARTING ARCH PROCESSES COMPLETE
Error 12154 received logging on to the standby
Fri Aug 23 19:49:51 2013
Errors in file /u01/app/oracle/diag/rdbms/hpymusic/HPYMUSIC/trace/HPYMUSIC_lgwr_8462.trc:
ORA-12154: TNS:could not resolve the connect identifier specified
Error 12154 for archive log file 2 to 'HPYMUSIC'
LGWR: Failed to archive log 2 thread 1 sequence 8 (12154)
Thread 1 advanced to log sequence 8 (thread open)
Thread 1 opened at log sequence 8
Current log# 2 seq# 8 mem# 0: +DATA/hpymusic/onlinelog/group_2.272.824213887
Current log# 2 seq# 8 mem# 1: +FRA/hpymusic/onlinelog/group_2.262.824213889
Successful open of redo thread 1
Fri Aug 23 19:49:51 2013
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
Fri Aug 23 19:49:51 2013
SMON: enabling cache recovery
Error 12154 received logging on to the standby
Errors in file /u01/app/oracle/diag/rdbms/hpymusic/HPYMUSIC/trace/HPYMUSIC_arc2_8578.trc:
ORA-12154: TNS:could not resolve the connect identifier specified
PING[ARC2]: Heartbeat failed to connect to standby 'HPYMUSIC'. Error is 12154.
Archived Log entry 7 added for thread 1 sequence 7 ID 0xaff1210d dest 1:
Error 12154 received logging on to the standby
Errors in file /u01/app/oracle/diag/rdbms/hpymusic/HPYMUSIC/trace/HPYMUSIC_arc3_8580.trc:
ORA-12154: TNS:could not resolve the connect identifier specified
FAL[server, ARC3]: Error 12154 creating remote archivelog file 'HPYMUSIC'
FAL[server, ARC3]: FAL archive failed, see trace file.
Errors in file /u01/app/oracle/diag/rdbms/hpymusic/HPYMUSIC/trace/HPYMUSIC_arc3_8580.trc:
ORA-16055: FAL request rejected
ARCH: FAL archive failed. Archiver continuing
ORACLE Instance HPYMUSIC - Archival Error. Archiver continuing.
Successfully onlined Undo Tablespace 2.
Verifying file header compatibility for 11g tablespace encryption..
Verifying 11g file header compatibility for tablespace encryption completed
SMON: enabling tx recovery
Database Characterset is AL32UTF8
No Resource Manager plan active
replication_dependency_tracking turned off (no async multimaster replication found)
Starting background process QMNC
Fri Aug 23 19:49:55 2013
QMNC started with pid=32, OS id=8590
Completed: ALTER DATABASE OPEN
Fri Aug 23 19:49:59 2013
Starting background process CJQ0
Fri Aug 23 19:49:59 2013
CJQ0 started with pid=33, OS id=8609
Fri Aug 23 19:49:59 2013
db_recovery_file_dest_size of 2898 MB is 6.38% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
[root@tokyo1 app]#
[root@tokyo1 app]# more /u01/app/oracle/diag/rdbms/hpymusic/HPYMUSIC/trace/HPYMUSIC_arc2_8578.trc
Trace file /u01/app/oracle/diag/rdbms/hpymusic/HPYMUSIC/trace/HPYMUSIC_arc2_8578.trc
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options
ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1
System name: Linux
Node name: tokyo1.oracle11g.jp
Release: 2.6.18-348.12.1.el5
Version: #1 SMP Wed Jul 10 05:28:41 EDT 2013
Machine: x86_64
Instance name: HPYMUSIC
Redo thread mounted by this instance: 1
Oracle process number: 28
Unix process pid: 8578, image: [email protected] (ARC2)
*** 2013-08-23 19:49:51.707
*** SESSION ID:(15.1) 2013-08-23 19:49:51.707
*** CLIENT ID:() 2013-08-23 19:49:51.707
*** SERVICE NAME:() 2013-08-23 19:49:51.707
*** MODULE NAME:() 2013-08-23 19:49:51.707
*** ACTION NAME:() 2013-08-23 19:49:51.707
Redo shipping client performing standby login
OCIServerAttach failed -1
.. Detailed OCI error val is 12154 and errmsg is 'ORA-12154: TNS:could not resolve the connect identifier specified
OCIServerAttach failed -1
.. Detailed OCI error val is 12154 and errmsg is 'ORA-12154: TNS:could not resolve the connect identifier specified
OCIServerAttach failed -1
.. Detailed OCI error val is 12154 and errmsg is 'ORA-12154: TNS:could not resolve the connect identifier specified
*** 2013-08-23 19:49:51.972 4132 krsh.c
Error 12154 received logging on to the standby
*** 2013-08-23 19:49:51.972 869 krsu.c
Error 12154 connecting to destination LOG_ARCHIVE_DEST_2 standby host 'HPYMUSIC'
Error 12154 attaching to destination LOG_ARCHIVE_DEST_2 standby host 'HPYMUSIC'
ORA-12154: TNS:could not resolve the connect identifier specified
*** 2013-08-23 19:49:51.973 4132 krsh.c
PING[ARC2]: Heartbeat failed to connect to standby 'HPYMUSIC'. Error is 12154.
*** 2013-08-23 19:49:51.973 2747 krsi.c
krsi_dst_fail: dest:2 err:12154 force:0 blast:1
[The identical ping cycle, with the same OCIServerAttach / ORA-12154 errors against destination LOG_ARCHIVE_DEST_2, repeats at 19:50:50 and 19:51:51.]
[root@tokyo1 app]#
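Every failure in the trace above is the same root cause: the connect identifier 'HPYMUSIC' named in log_archive_dest_2 on the primary cannot be resolved. As a quick sanity check, here is a minimal sketch (the alias name comes from the logs, but the tnsnames.ora content below is a hypothetical example, not taken from this thread) that flags any SERVICE= alias in a log_archive_dest_n value that has no matching tnsnames.ora entry:

```python
import re

def tns_aliases(tnsnames_text):
    """Collect alias names defined at the start of a line in tnsnames.ora."""
    return {
        m.group(1).upper()
        for m in re.finditer(r"(?m)^\s*([A-Za-z][\w.$#-]*)\s*=", tnsnames_text)
    }

def missing_dest_aliases(dest_params, tnsnames_text):
    """Return SERVICE= aliases from log_archive_dest_n values with no tnsnames entry."""
    defined = tns_aliases(tnsnames_text)
    missing = []
    for value in dest_params.values():
        m = re.search(r"SERVICE\s*=\s*(\S+)", value, re.IGNORECASE)
        if m and m.group(1).upper() not in defined:
            missing.append(m.group(1))
    return missing

# Hypothetical tnsnames.ora: only EAST is defined, HPYMUSIC is not.
tnsnames = """
EAST =
  (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=tokyo1)(PORT=1521))
   (CONNECT_DATA=(SERVICE_NAME=east)))
"""
dests = {"log_archive_dest_2": "SERVICE=HPYMUSIC ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)"}
print(missing_dest_aliases(dests, tnsnames))  # → ['HPYMUSIC'], hence ORA-12154
```

In practice you would feed it the real file (the tnsnames.ora under TNS_ADMIN or $ORACLE_HOME/network/admin on the primary) and the output of `show parameter log_archive_dest`; `tnsping HPYMUSIC` on the primary host is the equivalent one-liner check.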
[grid@tokyo1 shell]$ ./grid_info_east-x.sh
+ export ORACLE_SID=+ASM
+ ORACLE_SID=+ASM
+ LOGDIR=/home/grid/log
+ PRIMARYDB=east_DGMGRL
+ STANDBYDB=
+ PASSWORD=dataguard
+ mkdir -p /home/grid/log
++ date +%y%m%d,%H%M%S
+ echo 'asm info,130823,195709'
+ sqlplus / as sysasm
SQL*Plus: Release 11.2.0.1.0 Production on Fri Aug 23 19:57:09 2013
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Automatic Storage Management option
Connected.
SQL> SQL>
SYSDATE
13-08-23
SQL> SQL> SQL>
NAME TYPE VALUE
asm_diskgroups string FRA
asm_diskstring string /dev/sd*1
asm_power_limit integer 1
asm_preferred_read_failure_groups string
audit_file_dest string /u01/app/11.2.0/grid/rdbms/audit
audit_sys_operations boolean FALSE
audit_syslog_level string
background_core_dump string partial
background_dump_dest string /u01/app/grid/diag/asm/+asm/+ASM/trace
cluster_database boolean FALSE
cluster_database_instances integer 1
cluster_interconnects string
core_dump_dest string /u01/app/grid/diag/asm/+asm/+ASM/cdump
db_cache_size big integer 0
db_ultra_safe string OFF
db_unique_name string +ASM
diagnostic_dest string /u01/app/grid
event string
file_mapping boolean FALSE
filesystemio_options string none
ifile file
instance_name string +ASM
instance_number integer 1
instance_type string asm
large_pool_size big integer 12M
ldap_directory_sysauth string no
listener_networks string
local_listener string
lock_name_space string
lock_sga boolean FALSE
max_dump_file_size string unlimited
memory_max_target big integer 272M
memory_target big integer 272M
nls_calendar string
nls_comp string BINARY
nls_currency string
nls_date_format string
nls_date_language string
nls_dual_currency string
nls_iso_currency string
nls_language string AMERICAN
nls_length_semantics string BYTE
nls_nchar_conv_excp string FALSE
nls_numeric_characters string
nls_sort string
nls_territory string AMERICA
nls_time_format string
nls_time_tz_format string
nls_timestamp_format string
nls_timestamp_tz_format string
os_authent_prefix string ops$
os_roles boolean FALSE
pga_aggregate_target big integer 0
processes integer 100
remote_listener string
remote_login_passwordfile string EXCLUSIVE
remote_os_authent boolean FALSE
remote_os_roles boolean FALSE
service_names string +ASM
sessions integer 172
sga_max_size big integer 272M
sga_target big integer 0
shadow_core_dump string partial
shared_pool_reserved_size big integer 6081740
shared_pool_size big integer 0
sort_area_size integer 65536
spfile string +DATA/asm/asmparameterfile/registry.253.823204697
sql_trace boolean FALSE
statistics_level string TYPICAL
timed_os_statistics integer 0
timed_statistics boolean TRUE
trace_enabled boolean TRUE
user_dump_dest string /u01/app/grid/diag/asm/+asm/+ASM/trace
workarea_size_policy string AUTO
++ date +%y%m%d,%H%M%S
+ echo 'asmcmd info,130823,195709'
+ asmcmd ls -l
State Type Rebal Name
MOUNTED NORMAL N DATA/
MOUNTED NORMAL N FRA/
+ asmcmd ls -l 'data/asm/*'
Type Redund Striped Time Sys Name
ASMPARAMETERFILE MIRROR COARSE AUG 11 19:00:00 Y REGISTRY.253.823204697
+ asmcmd ls -l 'data/east/*'
Type Redund Striped Time Sys Name
+data/east/CONTROLFILE/:
CONTROLFILE HIGH FINE AUG 12 15:00:00 Y Current.260.823276231
CONTROLFILE HIGH FINE AUG 23 19:00:00 Y Current.270.823277705
+data/east/DATAFILE/:
DATAFILE MIRROR COARSE AUG 12 15:00:00 Y SYSAUX.257.823276133
DATAFILE MIRROR COARSE AUG 23 19:00:00 Y SYSAUX.267.823277615
DATAFILE MIRROR COARSE AUG 12 15:00:00 Y SYSTEM.256.823276131
DATAFILE MIRROR COARSE AUG 23 19:00:00 Y SYSTEM.266.823277615
DATAFILE MIRROR COARSE AUG 12 15:00:00 Y UNDOTBS1.258.823276133
DATAFILE MIRROR COARSE AUG 23 19:00:00 Y UNDOTBS1.268.823277615
DATAFILE MIRROR COARSE AUG 12 15:00:00 Y USERS.259.823276133
DATAFILE MIRROR COARSE AUG 23 19:00:00 Y USERS.269.823277615
+data/east/ONLINELOG/:
ONLINELOG MIRROR COARSE AUG 12 15:00:00 Y group_1.261.823276235
ONLINELOG MIRROR COARSE AUG 12 15:00:00 Y group_2.262.823276241
ONLINELOG MIRROR COARSE AUG 12 15:00:00 Y group_3.263.823276247
+data/east/PARAMETERFILE/:
PARAMETERFILE MIRROR COARSE AUG 23 12:00:00 Y spfile.265.823277967
+data/east/TEMPFILE/:
TEMPFILE MIRROR COARSE AUG 12 15:00:00 Y TEMP.264.823276263
TEMPFILE MIRROR COARSE AUG 23 19:00:00 Y TEMP.274.823277733
N spfileeast.ora => +DATA/EAST/PARAMETERFILE/spfile.265.823277967
+ asmcmd ls -l 'fra/east/*'
Type Redund Striped Time Sys Name
+fra/east/ARCHIVELOG/:
Y 2013_08_12/
Y 2013_08_15/
Y 2013_08_19/
Y 2013_08_22/
Y 2013_08_23/
+fra/east/CONTROLFILE/:
CONTROLFILE HIGH FINE AUG 12 15:00:00 Y Current.256.823276231
CONTROLFILE HIGH FINE AUG 23 19:00:00 Y Current.260.823277707
+fra/east/ONLINELOG/:
ONLINELOG MIRROR COARSE AUG 12 15:00:00 Y group_1.257.823276237
ONLINELOG MIRROR COARSE AUG 23 19:00:00 Y group_10.272.823535727
ONLINELOG MIRROR COARSE AUG 23 19:00:00 Y group_11.273.823535737
ONLINELOG MIRROR COARSE AUG 23 19:00:00 Y group_12.274.823535745
ONLINELOG MIRROR COARSE AUG 23 19:00:00 Y group_13.275.823535757
ONLINELOG MIRROR COARSE AUG 23 19:00:00 Y group_14.276.823535763
ONLINELOG MIRROR COARSE AUG 23 19:00:00 Y group_15.277.823535771
ONLINELOG MIRROR COARSE AUG 12 15:00:00 Y group_2.258.823276245
ONLINELOG MIRROR COARSE AUG 12 15:00:00 Y group_3.259.823276251
ONLINELOG MIRROR COARSE AUG 23 19:00:00 Y group_7.269.823535685
ONLINELOG MIRROR COARSE AUG 23 19:00:00 Y group_8.270.823535695
ONLINELOG MIRROR COARSE AUG 23 19:00:00 Y group_9.271.823535703
+fra/east/STANDBYLOG/:
N standby_group_07.log => +FRA/EAST/ONLINELOG/group_7.269.823535685
N standby_group_08.log => +FRA/EAST/ONLINELOG/group_8.270.823535695
N standby_group_09.log => +FRA/EAST/ONLINELOG/group_9.271.823535703
N standby_group_10.log => +FRA/EAST/ONLINELOG/group_10.272.823535727
N standby_group_11.log => +FRA/EAST/ONLINELOG/group_11.273.823535737
N standby_group_12.log => +FRA/EAST/ONLINELOG/group_12.274.823535745
N standby_group_13.log => +FRA/EAST/ONLINELOG/group_13.275.823535757
N standby_group_14.log => +FRA/EAST/ONLINELOG/group_14.276.823535763
N standby_group_15.log => +FRA/EAST/ONLINELOG/group_15.277.823535771
+ asmcmd find +data 'group*'
+data/EAST/ONLINELOG/group_1.261.823276235
+data/EAST/ONLINELOG/group_2.262.823276241
+data/EAST/ONLINELOG/group_3.263.823276247
+data/HPYMUSIC/ONLINELOG/group_1.271.824213881
+data/HPYMUSIC/ONLINELOG/group_2.272.824213887
+data/HPYMUSIC/ONLINELOG/group_3.273.824213895
+ asmcmd find +data 'spf*'
+data/EAST/PARAMETERFILE/spfile.265.823277967
+data/EAST/spfileeast.ora
+ asmcmd ls -l data/east/CONTROLFILE
Type Redund Striped Time Sys Name
CONTROLFILE HIGH FINE AUG 12 15:00:00 Y Current.260.823276231
CONTROLFILE HIGH FINE AUG 23 19:00:00 Y Current.270.823277705
+ asmcmd find +fra 'group*'
+fra/EAST/ONLINELOG/group_1.257.823276237
+fra/EAST/ONLINELOG/group_10.272.823535727
+fra/EAST/ONLINELOG/group_11.273.823535737
+fra/EAST/ONLINELOG/group_12.274.823535745
+fra/EAST/ONLINELOG/group_13.275.823535757
+fra/EAST/ONLINELOG/group_14.276.823535763
+fra/EAST/ONLINELOG/group_15.277.823535771
+fra/EAST/ONLINELOG/group_2.258.823276245
+fra/EAST/ONLINELOG/group_3.259.823276251
+fra/EAST/ONLINELOG/group_7.269.823535685
+fra/EAST/ONLINELOG/group_8.270.823535695
+fra/EAST/ONLINELOG/group_9.271.823535703
+fra/HPYMUSIC/ONLINELOG/group_1.261.824213883
+fra/HPYMUSIC/ONLINELOG/group_2.262.824213889
+fra/HPYMUSIC/ONLINELOG/group_3.263.824213897
+ asmcmd find +fra 'spf*'
+ asmcmd ls -l fra/east/CONTROLFILE
Type Redund Striped Time Sys Name
CONTROLFILE HIGH FINE AUG 12 15:00:00 Y Current.256.823276231
CONTROLFILE HIGH FINE AUG 23 19:00:00 Y Current.260.823277707
++ date +%y%m%d,%H%M%S
+ echo END,130823,195712
[grid@tokyo1 shell]$
■ Standby side follows ■ ■ ■ ■ ■ ■ ■
export ORACLE_SID=HPYMUSIC
sqlplus / as sysdba
startup nomount pfile='/u01/app/oracle/product/11.2.0/dbhome_1/dbs/pfile_for_standby_HPYMUSIC.txt'
create spfile='+data/west/spfilewest.ora' from pfile='/u01/app/oracle/product/11.2.0/dbhome_1/dbs/pfile_for_standby_HPYMUSIC.txt';
srvctl stop database -d west -f
srvctl start database -d west -o open
srvctl start database -d west -o mount
srvctl start database -d west
startup mount pfile='/u01/app/oracle/product/11.2.0/dbhome_1/dbs/pfile_for_standby_HPYMUSIC.txt'
srvctl start database -d west -o open
srvctl config database -d west
srvctl status database -d west
alter database recover managed standby database disconnect from session;
select name, database_role, open_mode from gv$database;
srvctl modify database -d west -s open
○ Create the spfile
export ORACLE_SID=HPYMUSIC
sqlplus / as sysdba
startup nomount pfile='/u01/app/oracle/product/11.2.0/dbhome_1/dbs/pfile_for_standby_HPYMUSIC.txt'
create spfile='+data/west/spfilewest.ora' from pfile='/u01/app/oracle/product/11.2.0/dbhome_1/dbs/pfile_for_standby_HPYMUSIC.txt';
○ Shut the database down
srvctl stop database -d west -f
○ Tried to open it, but it does not start (sometimes it comes up in Mounted (Closed) state).
srvctl start database -d west -o open
PRCR-1079 : Failed to start resource ora.west.db
CRS-2674: Start of 'ora.west.db' on 'osaka1' failed
○ Tried to open it, but it does not start (sometimes it comes up in Mounted (Closed) state).
srvctl start database -d west -o mount
PRCR-1079 : Failed to start resource ora.west.db
CRS-2674: Start of 'ora.west.db' on 'osaka1' failed
○ Tried to open it, but it does not start (sometimes it comes up in Mounted (Closed) state).
srvctl start database -d west
PRCR-1079 : Failed to start resource ora.west.db
CRS-2674: Start of 'ora.west.db' on 'osaka1' failed
○ The instance starts, but with errors (alert_HPYMUSIC.log)
startup mount pfile='/u01/app/oracle/product/11.2.0/dbhome_1/dbs/pfile_for_standby_HPYMUSIC.txt'
[oracle@osaka1 dbs]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Fri Aug 23 19:05:35 2013
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup mount pfile='/u01/app/oracle/product/11.2.0/dbhome_1/dbs/pfile_for_standby_HPYMUSIC.txt'
ORACLE instance started.
Total System Global Area 839282688 bytes
Fixed Size 2217992 bytes
Variable Size 515901432 bytes
Database Buffers 314572800 bytes
Redo Buffers 6590464 bytes
Database mounted.
Error 12154 received logging on to the standby
FAL[client, ARC3]: Error 12154 connecting to HPYMUSIC for fetching gap sequence
Errors in file /u01/app/oracle/diag/rdbms/west/HPYMUSIC/trace/HPYMUSIC_arc3_25690.trc:
ORA-12154: TNS:could not resolve the connect identifier specified
Errors in file /u01/app/oracle/diag/rdbms/west/HPYMUSIC/trace/HPYMUSIC_arc3_25690.trc:
ORA-12154: TNS:could not resolve the connect identifier specified
○ It never opens; it only ever reaches Mounted (Closed).
srvctl start database -d west -o open
[oracle@osaka1 dbs]$ srvctl start database -d west -o open
[oracle@osaka1 dbs]$ /u01/app/11.2.0/grid/bin/crsctl status resource -t
NAME TARGET STATE SERVER STATE_DETAILS
Local Resources
ora.DATA.dg
ONLINE ONLINE osaka1
ora.FRA.dg
ONLINE ONLINE osaka1
ora.LISTENER.lsnr
ONLINE ONLINE osaka1
ora.asm
ONLINE ONLINE osaka1 Started
Cluster Resources
ora.cssd
1 ONLINE ONLINE osaka1
ora.diskmon
1 ONLINE ONLINE osaka1
ora.west.db
1 ONLINE INTERMEDIATE osaka1 Mounted (Closed)
○
srvctl config database -d west
srvctl status database -d west
[oracle@osaka1 dbs]$ srvctl config database -d west
Database unique name: west
Database name: HPYMUSIC
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: grid
Spfile: +data/west/spfilewest.ora
Domain:
Start options: open
Stop options: immediate
Database role: physical_standby
Management policy: AUTOMATIC
Disk Groups: DATA,FRA
Services:
[oracle@osaka1 dbs]$ srvctl status database -d west
The database is running.
○ The MRP process starts, but the database is not Read Only.
alter database recover managed standby database disconnect from session;
select name, database_role, open_mode from gv$database;
[oracle@osaka1 dbs]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Fri Aug 23 19:33:08 2013
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options
Connected.
SQL>
SQL> alter database recover managed standby database disconnect from session;
Database altered.
SQL> select name, database_role, open_mode from gv$database;
NAME      DATABASE_ROLE     OPEN_MODE
HPYMUSIC  PHYSICAL STANDBY  MOUNTED
[root@osaka1 app]# ps -ef |egrep -i mrp
oracle 26269 1 0 19:33 ? 00:00:00 ora_mrp0_HPYMUSIC
○ Even after the modify, it still does not open.
srvctl modify database -d west -s open
[oracle@osaka1 dbs]$ srvctl modify database -d west -s open
[oracle@osaka1 dbs]$ /u01/app/11.2.0/grid/bin/crsctl status resource -t
NAME TARGET STATE SERVER STATE_DETAILS
Local Resources
ora.DATA.dg
ONLINE ONLINE osaka1
ora.FRA.dg
ONLINE ONLINE osaka1
ora.LISTENER.lsnr
ONLINE ONLINE osaka1
ora.asm
ONLINE ONLINE osaka1 Started
Cluster Resources
ora.cssd
1 ONLINE ONLINE osaka1
ora.diskmon
1 ONLINE ONLINE osaka1
ora.west.db
1 ONLINE INTERMEDIATE osaka1 Mounted (Closed)

It looks like the standby could be opened once a few archived log files are applied on the standby side.
To get there, I think the best approach is to fix the following error first:
・ORA-12154: TNS:could not resolve the connect identifier specified
Could you let us check the tnsnames.ora on both nodes?
If this guess turns out to be wrong, we can reconsider.
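For reference, a resolvable entry for the alias used by log_archive_dest_2 would look roughly like this (a hypothetical sketch: the host, port, and service name below are assumptions, not values confirmed in this thread):

```
HPYMUSIC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = osaka1.oracle11g.jp)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = HPYMUSIC)
    )
  )
```

The entry must be in the tnsnames.ora actually read by the primary instance (TNS_ADMIN, or $ORACLE_HOME/network/admin), and `tnsping HPYMUSIC` should succeed on the primary host before redo shipping is retried.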