Snapshot_id duplicated in sys.slog$
We have a master site with 20 snapshot sites. There are about 60 master tables with snapshot logs.
I have doubts regarding the snapshot_id.
In dba_registered_snapshots the snapshot_id is unique, but in sys.slog$ and dba_snapshot_logs there are pairs of records with the same snapshot_id. One of the records matches the corresponding record in dba_registered_snapshots, while the other has as its master table one that seems totally unrelated.
Does this make sense?
select *
from dba_snapshot_logs
where snapshot_id=4156
NAF46,MONEDAS,MLOG$_MONEDAS,,NO,YES,NO,2003/12/01 8:13:46 ,4156
NAF46,ARINTCL,MLOG$_ARINTCL,,NO,YES,NO,2003/12/05 2:33:34 ,4156
select *
from dba_registered_snapshots
where snapshot_id=4156
NAF46,SS_MONEDAS_OF,RICAPE01.WAKED.COM.CO,YES,NO,PRIMARY KEY,4156,ORACLE 8 SNAPSHOT,
The query_txt is:
SELECT "MONEDAS"."ID_MONEDA" "ID_MONEDA","MONEDAS"."DESC_MONEDA" "DESC_MONEDA","MONEDAS"."DESCRIPCION" "DESCRIPCION","MONEDAS"."SIGNO" "SIGNO","MONEDAS"."REDONDEO" "REDONDEO","MONEDAS"."NUMDECI" "NUMDECI","MONEDAS"."TSTAMP" "TSTAMP" FROM "MONEDAS"@WAKED.WAKED.COM.CO "MONEDAS"
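A quick way to see every master-site entry that shares that id is to query SYS.SLOG$ directly. A minimal sketch (4156 is the snapshot_id from this thread; run as SYS at the master site):

```sql
-- List all SLOG$ entries claiming snapshot id 4156; a registered
-- snapshot should normally map to exactly one master table, so two
-- rows here with different MASTER values is the anomaly in question.
select mowner, master, snapid, snaptime
from   sys.slog$
where  snapid = 4156;
```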
No, the error has nothing to do with Service Broker, or for that matter query notifications, which is the feature you are actually using. (Query notifications uses Service Broker, but Service Broker != query notifications.)
You would get the same error if you had a trigger on MyObjects that tried to update the same row for both deletions. A snapshot transaction gives you a consistent view of the database at a certain point in time. Consider this situation:
Snapshot transaction A, which started at time T, updates a row R at time T2. Snapshot transaction B starts at time T1 and updates the same row at time T3. Had they been regular non-snapshot transactions, transaction B would have been blocked already when it tried to read R, but snapshot transactions do not get blocked. But if B were permitted to update R, the update from transaction A would be lost. Assume that the update is an incremental one, for instance updating the cash balance for an account. You can see that this is not permissible.
In your case, the row R happens to be a row in an internal table for query notifications, but it is the application design that is the problem. There is no obvious reason to use snapshot isolation in your example since you are only deleting. And there is even less reason to have two transactions and connections for the task.
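The scenario Erland describes can be reproduced with two connections. A hedged T-SQL sketch (the database MyDb and table Accounts(id, balance) are made-up illustrations, not from the original question):

```sql
-- Prerequisite (run once): ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Session 1:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
UPDATE Accounts SET balance = balance + 10 WHERE id = 1;

-- Session 2 (meanwhile):
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
SELECT balance FROM Accounts WHERE id = 1;  -- sees the old value; not blocked

-- Session 1:
COMMIT;

-- Session 2:
UPDATE Accounts SET balance = balance + 20 WHERE id = 1;
-- Fails with error 3960 ("Snapshot isolation transaction aborted due to
-- update conflict") -- the same class of error as in the question.
```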
Erland Sommarskog, SQL Server MVP, [email protected]
Similar Messages
-
Snapshot Logs not being purged
Hi All,
I have a problem where my snapshot logs are not being purged because a materialized view was dropped at the remote site.
I have looked at the DBMS_SNAPSHOT.PURGE_SNAPSHOT_FROM_LOG procedure to remove the orphaned entries in the materialized view logs.
I want to execute this procedure based on the snapshot ID from SYS.SLOG$, using the snapshot_id parameter, since the target snapshot is not listed among the registered snapshots (DBA_REGISTERED_SNAPSHOTS).
Below are the results of the query used to determine which entries in SYS.SLOG$ at the master site are no longer being used.
SELECT NVL (r.NAME,'-') snapname
, snapid
, NVL (r.snapshot_site, 'not registered') snapsite
, snaptime
FROM SYS.slog$ s
, dba_registered_snapshots r
WHERE s.snapid = r.snapshot_id(+) AND r.snapshot_id IS NULL;
SNAPNAME SNAPID SNAPSITE SNAPTIME
- 435 not registered 27/09/2010 22:11
- 456 not registered 27/09/2010 22:11
Please let me know if there is any other method to purge the logs safely?
DBMS_MVIEW.PURGE_MVIEW_FROM_LOG is overloaded.
If you have the MVIEW_ID, you do not have to specify the (MVIEW_OWNER, MVIEW_NAME, MVIEW_SITE) parameters.
Check the PL/SQL Packages and Types documentation on DBMS_MVIEW.
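Based on that overload, a minimal sketch of the ID-only call (435 and 456 are the unregistered snapids reported by the SLOG$ query above; substitute your own):

```sql
-- Purge the orphaned MV log entries by materialized view ID only;
-- no owner/name/site parameters are needed for this overload.
exec DBMS_MVIEW.PURGE_MVIEW_FROM_LOG(435);
exec DBMS_MVIEW.PURGE_MVIEW_FROM_LOG(456);
```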
Hemant K Chitale
http://hemantoracledba.blogspot.com -
Why SNAPSHOT LOG data is not purged, and how to purge it forcibly (V7 ~ V8I)
Product : ORACLE SERVER
Date written : 2002-05-09
Why SNAPSHOT LOG data is not purged, and how to purge it forcibly
(V7 ~ V8I)
====================================================
PURPOSE
There are cases where the refresh of a snapshot works without any problem, yet the snapshot log on the master table is never purged and keeps growing.
When this happens, the growing snapshot log can cause space problems at the master site, and the speed at which the log contents are refreshed can also become an issue, so this note summarizes the cause and the corrective actions.
Explanation
1. How snapshot log data is purged
Snapshot log data is deleted at snapshot refresh time. After a refresh, for each row, the SNAPTIME in MLOG$_<table_name> is compared with the SNAPTIMEs recorded in SYS.SLOG$ for the snapshots of that master table; the row is deleted when every SYS.SLOG$.SNAPTIME is equal to or later than the MLOG$ SNAPTIME.
That is, a row is purged when the following condition is satisfied:
SYS.SLOG$.SNAPTIME >= MLOG$_<table_name>.SNAPTIME
The reason is that each snapshot log row has its SNAPTIME set only when the first of the snapshots attached to the master table refreshes it; the value does not change when other snapshots later pick up the same row.
If no snapshot has refreshed the row yet, the default is NULL in Oracle7 and January 1, 4000 in Oracle8.
SYS.SLOG$ contains an entry for every snapshot that fast-refreshes from the master table, and each snapshot's SNAPTIME there is updated every time that snapshot refreshes.
Therefore, once every snapshot has picked up a given snapshot log row, the SNAPTIMEs in SYS.SLOG$ are all equal to or later than that row's SNAPTIME in MLOG$_<table_name>.
If a particular snapshot in SYS.SLOG$ keeps an old SNAPTIME because it never refreshes, and that value stays older than the snapshot log's SNAPTIME, then as far as the master site is concerned a snapshot still exists that has not picked up the data, so the snapshot log rows can never be deleted.
This situation typically arises when one of the snapshot sites attached to the master table shuts its database down for a long period, or removes the database altogether, without dropping the snapshot first.
It also happens when someone creates a snapshot against the master table for a test and leaves it behind; in practice this problem occurs quite often.
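The purge condition above can be checked directly at the master site. A minimal sketch (DEPT is the example master table name used later in this note; SNAPID exists from Oracle8 on, in Oracle7 use the SNAPSHOT column instead):

```sql
-- List every snapshot registered against DEPT with the time it last
-- refreshed; the entry with the oldest SNAPTIME is the one holding
-- up the purge of MLOG$_DEPT.
select snapid, snaptime
from   sys.slog$
where  master = 'DEPT'
order  by snaptime;
```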
2. How to purge snapshot log data forcibly
When the snapshot log grows abnormally because of such an unused snapshot, the easiest fix is to find that snapshot and drop it, which solves the problem immediately.
(1) How to identify the snapshot
In many cases the snapshot log keeps growing because nobody even knows such an abandoned snapshot exists. In that case the snapshot information can be checked at the master site as follows.
- In Oracle7:
In Oracle7 the only view in which all fast-refresh snapshots attached to a master table could be checked was SYS.SLOG$. For example, if the master table is named DEPT, the query is:
SQL>connect sys/manager
SQL>select * from sys.slog$ where master = 'DEPT';
Here the column that identifies the snapshot is SNAPSHOT, displayed as a date.
To confirm this snapshot at the actual snapshot site, query the suspected site as follows:
SQL>select * from sys.snap$ where master = 'DEPT';
- In Oracle8:
Because checking snapshot information at the master site was cumbersome in Oracle7, Oracle8 added the concept of a snapshot id, and the DBA_REGISTERED_SNAPSHOTS view was created so that snapshots can easily be located through it.
SQL>select snapid from sys.slog$ where master = 'DEPT';
SQL>select * from dba_registered_snapshots
where snapshot_id=<the id found above>;
This also shows, among other things, the master site's database link name indicating on which site the snapshot exists.
(2) How to remove the snapshot information forcibly at the master site
If, even after the checks above, you are in an environment where the snapshot cannot be dropped at the snapshot site itself, you can instead remove that snapshot's information at the master site entirely, so that no snapshot with an old SNAPTIME remains in SYS.SLOG$.
SLOG$ is a dictionary table, so the information must not be deleted with a plain DELETE statement; use the following packages instead.
- In Oracle7:
SQL>exec DBMS_SNAPSHOT.PURGE_LOG('DEPT',2);
The second argument specifies how many of the snapshots with the oldest SNAPTIMEs on the master table to remove.
In this example, the entries of the two snapshots on the DEPT table that refreshed longest ago are removed from SYS.SLOG$ at the master site.
- In Oracle8:
DBMS_SNAPSHOT.PURGE_LOG, described above for Oracle7, can still be used.
In addition, the following package, which uses the snapshot id newly added in Oracle8, is available:
PROCEDURE PURGE_SNAPSHOT_FROM_LOG
Argument Name Type In/Out Default?
SNAPSHOT_ID BINARY_INTEGER IN
SNAPOWNER VARCHAR2 IN
SNAPNAME VARCHAR2 IN
SNAPSITE VARCHAR2 IN
For example, use it as follows:
SQL>exec DBMS_SNAPSHOT.PURGE_SNAPSHOT_FROM_LOG
(10, 'SCOTT', 'DEPT_SNAP', 'SNP8I.WORLD'); -
MV Logs not getting purged in a Logical Standby Database
We are trying to replicate a few tables in a logical standby database to another database. Both the source ( The Logical Standby) and the target database are in Oracle 11g R1.
The materialized views are refreshed using FAST REFRESH.
The Materialized View Logs created on the source ( the Logical Standby Database) are not getting purged when the MV in the target database is refreshed.
We checked the entries in the following Tables: SYS.SNAP$, SYS.SLOG$, SYS.MLOG$
When a materialized view is created on the target database, a record is not inserted into the SYS.SLOG$ table and it seems like that's why the MV Logs are not getting purged.
Why are we using a Logical Standby Database instead of the Primary? Because the load on the Primary Database is too high and the machine doesn't have enough resources to support MV-based replication. The CPU usage is 95% all the time. The application owner won't allow us to go against the Primary database.
Do we have to do anything different in terms of Configuration/Privileges etc. because we are using a Logical Standby Database as a source ?
Thanks in Advance.
We have an 11g RAC database on Solaris where there is a huge gap in archive log apply.
Thread Last Sequence Received Last Sequence Applied Difference
1 132581 129916 2665
2 108253 106229 2024
3 107452 104975 2477
The MRP0 process also seems not to be working. There is a lag of almost 7000+ archives on the standby compared with the primary database.
I suggest you go with incremental roll-forward backups to bring it in sync; use the link below for a step-by-step procedure.
http://www.oracle-ckpt.com/rman-incremental-backups-to-roll-forward-a-physical-standby-database-2/
Here are some questions.
1) Are those archives transported and just not applied?
2) On production, do you have the archives or backups of the archives?
3) What errors have you found in the alert log file?
Post:
SQL> select severity,message,error_code,timestamp from v$dataguard_status where dest_id=2;
4) What errors are in the primary database alert log file?
Also post
select ds.dest_id id
, ad.status
, ds.database_mode db_mode
, ad.archiver type
, ds.recovery_mode
, ds.protection_mode
, ds.standby_logfile_count "SRLs"
, ds.standby_logfile_active active
, ds.archived_seq#
from v$archive_dest_status ds
, v$archive_dest ad
where ds.dest_id = ad.dest_id
and ad.status != 'INACTIVE'
order by
ds.dest_id
/
Also check errors from the standby database. -
Sccm boot from thumb drive - failed to find task sequence.
The computer should be generating its own NetBIOS name when booting, but when booting from the SCCM 2012 thumb drive it reports FAILED TO FIND TASK SEQUENCE while trying to grab the image off the network drive.
When I read the SMSTS.log file, the name it gets is being pulled from computers that are already in AD.
ALERT! Run this query for the MAC and see if there are any results. It could be that the MAC is already assigned to a machine, and you either need to deploy the OS to a collection and add that machine to the collection, or it is possible the GUID is duplicated.
select SYS.netbios_name0, SD.itemkey, MAC.MAC_Addresses0, SD.SMS_Unique_Identifier0,SD.Hardware_ID0,SD.Name0,SD.Unknown0,SD.Obsolete0,SD.Active0,SD.Decommissioned0,SD.Creation_Date0,SD.SMBIOS_GUID0
from System_DISC SD FULL JOIN
v_r_system SYS ON SD.itemkey = SYS.resourceid FULL JOIN
System_MAC_Addres_ARR MAC ON SD.itemkey = MAC.ItemKey
--where SD.Unknown0 = 1
where MAC_Addresses0 LIKE '%:AD:D5'
--OR SD.SMBIOS_GUID0 = '' -
(SNAPSHOT) Handling ORA-12004/ORA-12034 (7.X-8.X)
Product : ORACLE SERVER
Date written : 2000-11-03
Handling ORA-12004 when fast refresh does not work (7.x - 8.x)
=========================================================
There are cases where, for the same master table, some snapshot sites are being used without any problem, while only an additionally created snapshot raises ORA-12004 (ORA-12034 in Oracle8), saying a fast refresh cannot be performed.
If there is no explanation why only the newly created snapshot fails under the same conditions and environment as the snapshots that work, the contents of this note may be the answer.
It is helpful to first verify through <Bul:11186> that there was no problem with the basic snapshot environment and creation.
1. When the problem occurs
~~~~~~~~~~~~~~~~~~~~~~
It mainly occurs when another snapshot has already been created on the failing snapshot's master table and is refreshing normally, and the master table holds so much data that creating or refreshing a new snapshot takes a long time.
2. Cause of the problem
~~~~~~~~~~~~~~
For a master table that has a snapshot log, once an additional snapshot has been created, the time at which the snapshot's creation started is compared with the time at which the existing snapshots last refreshed the log; if the new snapshot's creation start time is earlier, ORA-12004 occurs.
In other words, if changes made after the snapshot's creation started have already been deleted from the snapshot log, the newly created snapshot cannot apply those changes, so it must not perform a fast refresh, which depends solely on the snapshot log to bring over only the changed rows.
The tables this mechanism actually uses are the SYS user's SYS.MLOG$, SYS.SLOG$ and SYS.SNAP$ tables.
The time at which the snapshot's creation started is recorded in the SNAPTIME column of SYS.SNAP$; this time is compared with the OLDEST value in the master table's SYS.MLOG$ row, and if SNAPTIME is earlier than OLDEST, ORA-12004 occurs.
OLDEST in SYS.MLOG$ records the time of the last refresh that caused the data stored in the master table's snapshot log, MLOG$_<table_name>, to be purged. When several snapshots are attached to one master table, the log data is deleted only after all snapshots have refreshed, so OLDEST does not change when only some of them have refreshed. When MLOG$_<table_name> contains no data, the information is updated on refresh.
In Oracle8, for a primary-key-based snapshot, OLDEST_PK is checked instead.
When a fast-refresh snapshot is created, this comparison of OLDEST and SNAPTIME takes place. If SNAPTIME is later than OLDEST, an entry for the new snapshot is added to SYS.SLOG$ at the master site and refreshes proceed normally from then on; if SNAPTIME is earlier, nothing is recorded in SYS.SLOG$, so a later refresh cannot find an entry in SLOG$ and raises ORA-12004. That is, no error message appears while the snapshot is being created.
The reason the data in MLOG$_<table_name> can be deleted even though the creation of a fast-refresh snapshot has already started is that the log rows are purged once all the snapshots recorded in SYS.SLOG$ for the same master table have refreshed; a snapshot that has not finished being created is not yet recorded in SYS.SLOG$, so the rows are deleted as soon as only the existing snapshots have refreshed.
However, the snapshot creation statement preserves the point in time at which it first started fetching the master table's data, so for rows changed while the snapshot was being created it does not read the changed data but reads the before images from the rollback segments to build the snapshot. When creation takes a long time, data changed by other transactions during creation is supposed to be brought over later from MLOG$ via fast refresh; but if an already existing snapshot refreshes in the meantime and those log rows are deleted, the newly created snapshot can no longer keep the same data as the master table by refreshing from the snapshot log alone.
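The OLDEST/SNAPTIME comparison described above can be inspected directly; a sketch using the dictionary tables named in this note (DEPT is the example master table):

```sql
-- At the master site: the time the snapshot log for DEPT was last purged.
select oldest from sys.mlog$ where master = 'DEPT';

-- At the snapshot site: the creation start time recorded for the snapshot.
select snaptime from sys.snap$ where master = 'DEPT';

-- If SNAPTIME is earlier than OLDEST, the next fast refresh raises ORA-12004.
```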
For example, assume the following scenario.
master table: DEPT
existing snapshot: a snapshot named DEPT_SNAP at site A on the DEPT table
newly created snapshot: NEW_SNAP at site B
----------|-----------------|--------------|--------------|-------------
time 1:00 1:30 2:00 2:30
1:00 - creation of NEW_SNAP starts at site B
1:30 - master table DEPT is modified
2:00 - the existing DEPT_SNAP refreshes at site A; the refresh finishes in 10 seconds
2:30 - creation of NEW_SNAP finishes at site B
NEW_SNAP at site B starts being created at 1:00 and finishes at 2:30; even though someone modifies the master table at 1:30, NEW_SNAP is built from the data as it was before that change.
NEW_SNAP's SNAPTIME in SYS.SNAP$ becomes 1:00.
Because DEPT_SNAP refreshes the snapshot at 2:00 and deletes the rows of MLOG$_DEPT, the OLDEST value in the master table's SYS.MLOG$ becomes 2:00.
When a refresh of NEW_SNAP is attempted, the OLDEST value for DEPT in SYS.MLOG$ at the master site is 2:00, while NEW_SNAP's SNAPTIME in SYS.SNAP$ is earlier, 1:00, so no entry is added to SYS.SLOG$ at the master site and ORA-12004 is raised.
3. Solution
~~~~~~~~~~~~
Ultimately, the problem is that another snapshot on the same table refreshes while the new snapshot is being created.
Therefore, either prevent the other snapshots on the new snapshot's master table from refreshing, or create the new snapshot at a time when they do not refresh.
One way to block the refreshes is to temporarily mark the snapshot jobs broken, and set broken back to false after the snapshot has been created.
This example breaks job 10 and then releases it again:
os>sqlplus scott/tiger
SQL>select job, what from user_jobs;
SQL>exec dbms_job.broken(10,true);
SQL>exec dbms_job.broken(10,false);
Even if ORA-12004 occurs on refresh, performing a complete refresh adds an entry to SYS.SLOG$ at the master site, provided no other snapshot refreshes during the complete refresh, and fast refreshes work from then on.
However, if another existing snapshot does refresh while the complete refresh is running, the newly created snapshot's information has still not been recorded in SYS.SLOG$ at the master site, so the log data in MLOG$_<table_name> is deleted without taking the new snapshot into account. In that case no entry is added to SYS.SLOG$ even after the complete refresh finishes, and subsequent fast refreshes raise ORA-12004.
The current one I am implementing is a development system. The database is less than 100GB. 800MB of PSAPUNDO is sufficient for our development usage.
Follow up on Problem 3:
I created another undo tablespace PSAPUNDO2(undodata.dbf) with size of 5GB. I switched undo tablespace to PSAPUNDO2 and placed PSAPUNDO(undo.data1) offline. With PSAPUNDO2 online and PSAPUNDO offline, I started brspace -f dbcreate and encountered the error below at Step 2 Export User tablespace:
BR0301E SQL error -376 at location BrStattabCreate-3
ORA-00376: file 17 cannot be read at this time
ORA-01110: data file 17: '/oracle/DVT/sapdata1/undo_1/undo.data1'
ORA-06512: at "SYS.DBMS_STATS", line 5317
ORA-06512: at line 1
I aborted the process and verified that SAP is able to run with these settings. I started CheckDB in DB13 and it shows me these messages:
BR0301W SQL error -376 at location brc_dblog_open-5
ORA-00376: file 17 cannot be read at this time
ORA-01110: data file 17: '/oracle/DEV/sapdata1/undo_1/undo.data1'
BR0324W Insertion of database log header failed
I don't understand then. I have already switched the undo tablespace from PSAPUNDO to PSAPUNDO2. Why does the message above still appear? Once I put PSAPUNDO online, CheckDB completes successfully without warnings.
I did show parameter undo_tablespace and the result is PSAPUNDO2(5GB).
So what exactly is going on? Can anyone advise?
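Before dropping the old tablespace, one check worth making (a hedged sketch; PSAPUNDO is the tablespace from this thread):

```sql
-- Undo segments still belonging to the offline undo tablespace.
-- Any rows here mean the datafile is still referenced, which matches
-- the ORA-00376 behaviour described in the note quoted further down.
select segment_name, status
from   dba_rollback_segs
where  tablespace_name = 'PSAPUNDO';
```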
===============================================
I have managed to clear the message in DB13 after dropping the PSAPUNDO tablespace including contents and datafiles. This is mentioned in OSS Note 600141, page 8, as below:
Note: You cannot just set the old rollback-tablespace PSAPROLL to offline instead of deleting it properly. This results in ORA-00376 in connection with ORA-01110 error messages. PSAPROLL must remain ONLINE until it is deleted. (Oracle bug 3635653)
Message was edited by:
Annie Chan -
Creating a duplicated database from active database
I'm having a problem trying to duplicate a database from a target database (PROD) to the auxiliary one (AUX).
This is what I've configured:
listener.ora
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = PROD)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/db_1/)
      (SID_NAME = PROD)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = AUX)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/db_1/)
      (SID_NAME = AUX)
    )
  )
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = my_ip)(PORT = 1521))
    )
  )
ADR_BASE_LISTENER = /u01/app/oracle
tnsnames.ora
AUX =
  (DESCRIPTION =
    (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = my_ip)(PORT = 1521)))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = AUX)
    )
  )
PROD =
  (DESCRIPTION =
    (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = my_ip)(PORT = 1521)))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = PROD)
    )
  )
/u01/app/oracle/product/11.2.0/db_1/dbs/initAUX.ora
db_name='AUX'
cp /u01/app/oracle/product/11.2.0/db_1/dbs/orapwPROD /u01/app/oracle/product/11.2.0/db_1/dbs/orapwAUX
- Started PROD database (open and in archive log mode)
- Started AUX database (in nomount mode and using pfile='/u01/app/oracle/product/11.2.0/db_1/dbs/initAUX.ora')
But when I issue the following command, I get connected to the auxiliary database (not started):
rman target sys/oracle@PROD auxiliary sys/oracle@AUX
connected to target database: PROD (DBID=196032563)
connected to auxiliary database (not started)
And executing the duplicate command returns:
RMAN> duplicate target database to AUX from active database spfile parameter_value_convert '/DATA/PROD/','+DG_DATA_AUX' set db_create_file_dest='+DG_DATA_AUX';
Starting Duplicate Db at 2010-09-28:13:36:50
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 09/28/2010 13:36:50
RMAN-06403: could not obtain a fully authorized session
RMAN-04006: error from auxiliary database: ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux-x86_64 Error: 2: No such file or directory
lsnrctl was stopped and restarted several times. This is the output of the lsnrctl status command:
LSNRCTL for Linux: Version 11.2.0.1.0 - Production on 28-SEP-2010 13:45:14
Copyright (c) 1991, 2009, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=my_ip)(PORT=1521)))
STATUS of the LISTENER
Alias LISTENER
Version TNSLSNR for Linux: Version 11.2.0.1.0 - Production
Start Date 28-SEP-2010 12:44:50
Uptime 0 days 0 hr. 1 min. 22 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/oracle/product/11.2.0/db_1/network/admin/listener.ora
Listener Log File /u01/app/oracle/diag/tnslsnr/my_ip/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=my_ip)(PORT=1521)))
Services Summary...
Service "PROD" has 1 instance(s).
Instance "PROD", status READY, has 1 handler(s) for this service...
Service "PRODXDB" has 1 instance(s).
Instance "PROD", status READY, has 1 handler(s) for this service...
Service "AUX" has 2 instance(s).
Instance "AUX", status UNKNOWN, has 1 handler(s) for this service...
Instance "AUX", status BLOCKED, has 1 handler(s) for this service...
The command completed successfully
I have already created a duplicated database from a backup of the target one, and the configuration was much the same: the differences are that using active database duplication I needed to use orapw files (and they need to have the same password) and that I needed to connect to the auxiliary instance using an Oracle Net service name.. otherwise I got:
export ORACLE_SID=AUX
rman target sys/oracle@PROD auxiliary sys/oracle
connected to target database: PROD (DBID=196032563)
connected to auxiliary database: AUX (not mounted)
RMAN> duplicate target database to AUX from active database spfile parameter_value_convert '/DATA/PROD/','+DG_DATA_AUX' set db_create_file_dest='+DG_DATA_AUX';
Starting Duplicate Db at 2010-09-28:12:27:21
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 09/28/2010 12:27:21
RMAN-06217: not connected to auxiliary database with a net service name
Where am I wrong?
Thanks for your support. My instance was and still is in nomount mode. But I'm not able to connect to this instance as auxiliary in the right expected mode:
connected to auxiliary database: AUX (not mounted)
[oracle@my_ip oracle]$ env|grep SID
ORACLE_SID=AUX
[oracle@my_ip oracle]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Wed Sep 29 10:28:44 2010
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
idle> shutdown immediate
ORA-01507: database not mounted
ORACLE instance shut down.
idle> startup nomount pfile=/u01/app/oracle/product/11.2.0/db_1/dbs/initAUX.ora
ORACLE instance started.
Total System Global Area 217157632 bytes
Fixed Size 2211928 bytes
Variable Size 159387560 bytes
Database Buffers 50331648 bytes
Redo Buffers 5226496 bytes -
TIP 04: Duplicating a Database in 10g by Joel Pérez
Hi OTN Readers!
Every day I connect to the Internet, and one of the first things I do is open the OTN main page to look for any new article or news about Oracle technology. Then I open the main page of the OTN Forums and check what answers I can write to help people working with Oracle technology, and I decided to begin writing some threads to help DBAs and Developers learn the new features of 10g.
I hope you can take advantage of them; they will be published here in this forum. For any comment you can write to me directly at: [email protected]. Apart from your comments, you can suggest to me any topic to write an article like this.
Please do not reply to this thread; if you have any question related to this, I recommend you open a new post. Thanks!
The tip of this thread is: Duplicating a Database in 10g
Joel Pérez
http://otn.oracle.com/experts
Step 6: Editing the generated file
The file generated is going to be like this:
Dump file f:\ora9i\admin\copy1\udump\copy1_ora_912.trc
Thu May 20 16:27:37 2004
ORACLE V9.2.0.1.0 - Production vsnsta=0
vsnsql=12 vsnxtr=3
Windows 2000 Version 5.0 Service Pack 4, CPU type 586
Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production
Windows 2000 Version 5.0 Service Pack 4, CPU type 586
Instance name: copy1
Redo thread mounted by this instance: 1
Oracle process number: 10
Windows thread id: 912, image: ORACLE.EXE
*** SESSION ID:(9.38) 2004-05-20 16:27:37.000
*** 2004-05-20 16:27:37.000
# The following are current System-scope REDO Log Archival related
# parameters and can be included in the database initialization file.
# LOG_ARCHIVE_DEST=''
# LOG_ARCHIVE_DUPLEX_DEST=''
# LOG_ARCHIVE_FORMAT=ARC%S.%T
# REMOTE_ARCHIVE_ENABLE=TRUE
# LOG_ARCHIVE_MAX_PROCESSES=2
# STANDBY_FILE_MANAGEMENT=MANUAL
# STANDBY_ARCHIVE_DEST=%ORACLE_HOME%\RDBMS
# FAL_CLIENT=''
# FAL_SERVER=''
# LOG_ARCHIVE_DEST_1='LOCATION=f:\ora9i\RDBMS'
# LOG_ARCHIVE_DEST_1='MANDATORY NOREOPEN NODELAY'
# LOG_ARCHIVE_DEST_1='ARCH NOAFFIRM SYNC'
# LOG_ARCHIVE_DEST_1='NOREGISTER NOALTERNATE NODEPENDENCY'
# LOG_ARCHIVE_DEST_1='NOMAX_FAILURE NOQUOTA_SIZE NOQUOTA_USED'
# LOG_ARCHIVE_DEST_STATE_1=ENABLE
# Below are two sets of SQL statements, each of which creates a new
# control file and uses it to open the database. The first set opens
# the database with the NORESETLOGS option and should be used only if
# the current versions of all online logs are available. The second
# set opens the database with the RESETLOGS option and should be used
# if online logs are unavailable.
# The appropriate set of statements can be copied from the trace into
# a script file, edited as necessary, and executed when there is a
# need to re-create the control file.
# Set #1. NORESETLOGS case
# The following commands will create a new control file and use it
# to open the database.
# Data used by the recovery manager will be lost. Additional logs may
# be required for media recovery of offline data files. Use this
# only if the current version of all online logs are available.
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "COPY1" NORESETLOGS NOARCHIVELOG
-- SET STANDBY TO MAXIMIZE PERFORMANCE
MAXLOGFILES 50
MAXLOGMEMBERS 5
MAXDATAFILES 100
MAXINSTANCES 1
MAXLOGHISTORY 226
LOGFILE
GROUP 1 'C:\COPY1\COPY1\REDO01.LOG' SIZE 10M,
GROUP 2 'C:\COPY1\COPY1\REDO02.LOG' SIZE 10M,
GROUP 3 'C:\COPY1\COPY1\REDO03.LOG' SIZE 10M
-- STANDBY LOGFILE
DATAFILE
'C:\COPY1\COPY1\SYSTEM01.DBF',
'C:\COPY1\COPY1\UNDOTBS01.DBF',
'C:\COPY1\COPY1\CWMLITE01.DBF',
'C:\COPY1\COPY1\DRSYS01.DBF',
'C:\COPY1\COPY1\EXAMPLE01.DBF',
'C:\COPY1\COPY1\INDX01.DBF',
'C:\COPY1\COPY1\ODM01.DBF',
'C:\COPY1\COPY1\TOOLS01.DBF',
'C:\COPY1\COPY1\USERS01.DBF',
'C:\COPY1\COPY1\XDB01.DBF'
CHARACTER SET WE8ISO8859P1
# Recovery is required if any of the datafiles are restored backups,
# or if the last shutdown was not normal or immediate.
RECOVER DATABASE
# Database can now be opened normally.
ALTER DATABASE OPEN;
# Commands to add tempfiles to temporary tablespaces.
# Online tempfiles have complete space information.
# Other tempfiles may require adjustment.
ALTER TABLESPACE TEMP ADD TEMPFILE 'C:\COPY1\COPY1\TEMP01.DBF'
SIZE 41943040 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;
# End of tempfile additions.
# Set #2. RESETLOGS case
# The following commands will create a new control file and use it
# to open the database.
# The contents of online logs will be lost and all backups will
# be invalidated. Use this only if online logs are damaged.
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "COPY1" RESETLOGS NOARCHIVELOG
-- SET STANDBY TO MAXIMIZE PERFORMANCE
MAXLOGFILES 50
MAXLOGMEMBERS 5
MAXDATAFILES 100
MAXINSTANCES 1
MAXLOGHISTORY 226
LOGFILE
GROUP 1 'C:\COPY1\COPY1\REDO01.LOG' SIZE 10M,
GROUP 2 'C:\COPY1\COPY1\REDO02.LOG' SIZE 10M,
GROUP 3 'C:\COPY1\COPY1\REDO03.LOG' SIZE 10M
-- STANDBY LOGFILE
DATAFILE
'C:\COPY1\COPY1\SYSTEM01.DBF',
'C:\COPY1\COPY1\UNDOTBS01.DBF',
'C:\COPY1\COPY1\CWMLITE01.DBF',
'C:\COPY1\COPY1\DRSYS01.DBF',
'C:\COPY1\COPY1\EXAMPLE01.DBF',
'C:\COPY1\COPY1\INDX01.DBF',
'C:\COPY1\COPY1\ODM01.DBF',
'C:\COPY1\COPY1\TOOLS01.DBF',
'C:\COPY1\COPY1\USERS01.DBF',
'C:\COPY1\COPY1\XDB01.DBF'
CHARACTER SET WE8ISO8859P1
# Recovery is required if any of the datafiles are restored backups,
# or if the last shutdown was not normal or immediate.
RECOVER DATABASE USING BACKUP CONTROLFILE
# Database can now be opened zeroing the online logs.
ALTER DATABASE OPEN RESETLOGS;
# Commands to add tempfiles to temporary tablespaces.
# Online tempfiles have complete space information.
# Other tempfiles may require adjustment.
ALTER TABLESPACE TEMP ADD TEMPFILE 'C:\COPY1\COPY1\TEMP01.DBF'
SIZE 41943040 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;
# End of tempfile additions.
#
As you can see, you have different ways there to re-create the controlfile. In our case, we are going to re-create the controlfiles like this:
STARTUP NOMOUNT
CREATE CONTROLFILE SET DATABASE "COPY2" RESETLOGS NOARCHIVELOG
-- SET STANDBY TO MAXIMIZE PERFORMANCE
MAXLOGFILES 50
MAXLOGMEMBERS 5
MAXDATAFILES 100
MAXINSTANCES 1
MAXLOGHISTORY 226
LOGFILE
GROUP 1 'C:\COPY2\COPY2\REDO01.LOG' SIZE 10M,
GROUP 2 'C:\COPY2\COPY2\REDO02.LOG' SIZE 10M,
GROUP 3 'C:\COPY2\COPY2\REDO03.LOG' SIZE 10M
-- STANDBY LOGFILE
DATAFILE
'C:\COPY2\COPY2\SYSTEM01.DBF',
'C:\COPY2\COPY2\UNDOTBS01.DBF',
'C:\COPY2\COPY2\CWMLITE01.DBF',
'C:\COPY2\COPY2\DRSYS01.DBF',
'C:\COPY2\COPY2\EXAMPLE01.DBF',
'C:\COPY2\COPY2\INDX01.DBF',
'C:\COPY2\COPY2\ODM01.DBF',
'C:\COPY2\COPY2\TOOLS01.DBF',
'C:\COPY2\COPY2\USERS01.DBF',
'C:\COPY2\COPY2\XDB01.DBF'
CHARACTER SET WE8ISO8859P1
Note: two important points about the statement above: the word "SET" is used instead of "REUSE", and the controlfile must be re-created in RESETLOGS mode because the database must be opened in RESETLOGS mode.
If you use the word "REUSE" instead of "SET", opening the database will request recovery of the datafile of the SYSTEM tablespace.
So, apply this to re-create the controlfiles:
- Start the service in windows for the database COPY2
- Get connection through SQL*Plus as system
- Shut down the database with shutdown abort
- Start the database up in nomount stage
- apply the sentence to recreate the controlfile.
C:\>SET ORACLE_SID=COPY2
C:\>sqlplus /nolog
SQL*Plus: Release 9.2.0.1.0 - Production on Thu May 20 16:46:49 2004
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
SQL> conn sys as sysdba
Enter password:
Connected.
SQL>
SQL> shutdown abort
ORACLE instance shut down.
SQL>
SQL>
SQL> startup nomount
ORACLE instance started.
Total System Global Area 135338868 bytes
Fixed Size 453492 bytes
Variable Size 109051904 bytes
Database Buffers 25165824 bytes
Redo Buffers 667648 bytes
SQL>
SQL>
SQL> CREATE CONTROLFILE SET DATABASE "COPY2" RESETLOGS NOARCHIVELOG
2 -- SET STANDBY TO MAXIMIZE PERFORMANCE
3 MAXLOGFILES 50
4 MAXLOGMEMBERS 5
5 MAXDATAFILES 100
6 MAXINSTANCES 1
7 MAXLOGHISTORY 226
8 LOGFILE
9 GROUP 1 'F:\COPY2\COPY2\REDO01.LOG' SIZE 10M,
10 GROUP 2 'F:\COPY2\COPY2\REDO02.LOG' SIZE 10M,
11 GROUP 3 'F:\COPY2\COPY2\REDO03.LOG' SIZE 10M
12 -- STANDBY LOGFILE
13 DATAFILE
14 'F:\COPY2\COPY2\SYSTEM01.DBF',
15 'F:\COPY2\COPY2\UNDOTBS01.DBF',
16 'F:\COPY2\COPY2\CWMLITE01.DBF',
17 'F:\COPY2\COPY2\DRSYS01.DBF',
18 'F:\COPY2\COPY2\EXAMPLE01.DBF',
19 'F:\COPY2\COPY2\INDX01.DBF',
20 'F:\COPY2\COPY2\ODM01.DBF',
21 'F:\COPY2\COPY2\TOOLS01.DBF',
22 'F:\COPY2\COPY2\USERS01.DBF',
23 'F:\COPY2\COPY2\XDB01.DBF'
24 CHARACTER SET WE8ISO8859P1
25 ;
Control file created.
SQL>
Joel Pérez
http://otn.oracle.com/experts -
Find sys schema for imp database in 9i
Hi Experts,
Based on Oracle's recommendation, I will take a full-database exp/imp to another server (it is the only way, given the source DB's condition).
To avoid duplicated system objects, I ran a SQL query to get a schema list
==============================
SQL> select owner from dba_objects group by owner order by
OWNER
CRYSTAL
CTXSYS
MDSYS
NWEKRANESES
ODM
ODM_MTR
OWNER
OE
OLAPSYS
ORDPLUGINS
ORDSYS
OUTLN
PIDLPASCUA
PM
PUBLIC
QS
QS_ADM
QS_CBADM
OWNER
QS_CS
QS_ES
QS_OS
QS_WS
SH
SYS
SYSTEM
WKSYS
WMSYS
XDB
32 rows selected.
SQL>
I am not sure which schemas in the list above are Oracle system schemas for 9.0.06.0,
so that I only need to exp/imp the user data and program code.
By the way, does the SYS schema store any user code or objects?
Thanks for your help!
JIM
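One common approach is to drive the export/import from a fixed exclusion list of Oracle-supplied schemas. A sketch, using the same view as above (the exclusion list is an assumption based on a default 9i seed database plus the names in the output; it is not authoritative):

```sql
-- Application (non-Oracle-supplied) schemas would be what's left over:
select distinct owner
from   dba_objects
where  owner not in
       ('SYS','SYSTEM','PUBLIC','CTXSYS','MDSYS','ODM','ODM_MTR',
        'OLAPSYS','ORDPLUGINS','ORDSYS','OUTLN','WKSYS','WMSYS','XDB')
order  by owner;
```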
Edited by: user589812 on Apr 22, 2009 12:28 PM
Thanks for your help.
I got lots of message as
Column : DESC[RIBE] {[schema.]object[@connect_identifier]}
IMP-00019: row rejected due to ORACLE error 1
IMP-00003: ORACLE error 1 encountered
ORA-00001: unique constraint (SYSTEM.HELP_TOPIC_SEQ) violated
Column : DESCRIBE
Column : 9
Column :
IMP-00019: row rejected due to ORACLE error 1
IMP-00003: ORACLE error 1 encountered
ORA-00001: unique constraint (SYSTEM.HELP_TOPIC_SEQ) violated
Column : DISCONNECT
Column : 1
Column :
IMP-00019: row rejected due to ORACLE error 1
IMP-00003: ORACLE error 1 encountered
ORA-00001: unique constraint (SYSTEM.HELP_TOPIC_SEQ) violated
Column : DISCONNECT
Column : 2
Column : DISCONNECT
IMP-00019: row rejected due to ORACLE error 1
IMP-00003: ORACLE error 1 encountered
ORA-00001: unique constraint (SYSTEM.HELP_TOPIC_SEQ) violated
Column : DISCONNECT
I am reviewing your paper link; it is very useful.
As DBA_mike said, it seems there is a potential issue if a user creates objects under the SYSTEM account.
This is a very old DB, with no archiving and no documentation. We need to migrate it to 10g/11i.
Thanks
JIM -
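The SYSTEM.HELP_TOPIC_SEQ violations come from re-importing SYSTEM's seed data (the SQL*Plus HELP tables) into a database that already has it. A hedged sketch of one way around this: import only the application schemas with FROMUSER/TOUSER, so SYSTEM's seed rows are never touched (the schema names are examples picked from the list earlier in the thread; the file and log names are placeholders):

```shell
imp system/*** file=full.dmp log=imp_users.log \
    fromuser=(CRYSTAL,NWEKRANESES,PIDLPASCUA) \
    touser=(CRYSTAL,NWEKRANESES,PIDLPASCUA)
```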
Errors during duplicating database.
Hello,
I'm duplicating my database (using RMAN) and I get the following error:
contents of Memory Script:
shutdown clone;
startup clone nomount;
executing Memory Script
database dismounted
Oracle instance shut down
RMAN-00571(...)
RMAN-04006: error from auxiliary database: ORA-12514: TNS:listener does not currently know of service requested in connect descriptor.
It's 10.2 database on Linux. Auxiliary database is named 'aux'
Tnsnames.ora and listener.ora seem to be OK, since I could connect to this instance (while it was in nomount state, before duplicating in RMAN) using
sqlplus sys/***@aux as sysdba
After I got the error, I connected to aux using Bequeath and checked its status. It has been shut down since I got the error:
ORA-01034: ORACLE not available.
What can I do?
Thx. in advance
Aliq.
On 9i, Oracle registers itself with the listener when the database is opened.
This means in any other state you need to use SID= in your tnsnames.ora.
From your post it looks like this still applies to 10g.
I would try changing my tnsnames.ora.
You could of course, immediately after receiving the error, run lsnrctl services and check whether the service_name of the aux database is listed.
Sybrand Bakker
Senior Oracle DBA -
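Sybrand's suggested tnsnames.ora change might look like this (a sketch: the host, port, and SID value are assumptions to be adjusted to the actual environment):

```
AUX =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = myhost)(PORT = 1521))
    (CONNECT_DATA = (SID = aux))
  )
```

With SID= instead of SERVICE_NAME=, the listener can hand off the connection even before the instance registers its service, i.e. while it is still in nomount.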
Connected to Oracle Database 11g Enterprise Edition Release 11.1.0.7.0
Connected as SYS
SQL> exec admin.admin_dba_main.KILL_ORPH_2PHASE_COMMITS();
begin admin.admin_dba_main.KILL_ORPH_2PHASE_COMMITS(); end;
ORA-01031: insufficient privileges
ORA-06512: at "SYS.DBMS_TRANSACTION", line 88
ORA-06512: at "ADMIN.ADMIN_DBA_MAIN", line 52
I created this as the SYS user, but the owner is a schema called ADMIN that owns several of our DBA scripts. The ADMIN schema has the DBA role, but as far as I understand it, SYS.DBMS_TRANSACTION needs to be run as either SYS or a SYSDBA user.
What am I missing here? What permission do I have to grant to get this working?
Hello,
This post is duplicated:
ORA-01031: insufficient privileges for "SYS.DBMS_TRANSACTION"...FIX???
Best regards,
Jean-Valentin -
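For what it's worth, one possible direction (an assumption, not a confirmed fix for this exact case): privileges received through a role such as DBA are disabled inside definer's-rights PL/SQL, so the system privilege needed to force another session's transaction must be granted directly to the package owner:

```sql
-- Granted directly (not via a role), so it is visible inside
-- ADMIN's definer's-rights package:
GRANT FORCE ANY TRANSACTION TO admin;
```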
I am in the process of duplicating a RAC database, and it got stuck here:
[lodbr02].oracle:/opt/oracle/soft/elo_stby/11.2.0/db_1/network/admin > rman TARGET sys/oracle@elo_1 auxiliary sys/oracle@ELO_1STBY
Recovery Manager: Release 11.2.0.2.0 - Production on Mon Mar 19 15:07:09 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
connected to target database: ELO (DBID=3640040877)
connected to auxiliary database: ELO (not mounted)
RMAN> DUPLICATE TARGET DATABASE
2> FOR STANDBY
3> FROM ACTIVE DATABASE
4> DORECOVER
5> SPFILE
6> SET db_unique_name='ELO_STBY' COMMENT 'Is standby'
7> SET LOG_ARCHIVE_DEST_2='SERVICE=ELO ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ELO'
8> SET FAL_SERVER='ELO' COMMENT 'Is primary'
9> NOFILENAMECHECK;
Starting Duplicate Db at 19-MAR-12
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=782 instance=ELO_1 device type=DISK
contents of Memory Script:
backup as copy reuse
targetfile '/opt/oracle/soft/elo/11.2.0/db_1/dbs/orapwelo' auxiliary format
'/opt/oracle/soft/elo_stby/11.2.0/db_1/dbs/orapwELO' targetfile
'+DGDATELO/elo/spfileelo.ora' auxiliary format
'/opt/oracle/soft/elo_stby/11.2.0/db_1/dbs/spfileELO_1.ora' ;
sql clone "alter system set spfile= ''/opt/oracle/soft/elo_stby/11.2.0/db_1/dbs/spfileELO_1.ora''";
executing Memory Script
Starting backup at 19-MAR-12
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=4 instance=elo_1 device type=DISK
Finished backup at 19-MAR-12
sql statement: alter system set spfile= ''/opt/oracle/soft/elo_stby/11.2.0/db_1/dbs/spfileELO_1.ora''
contents of Memory Script:
sql clone "alter system set db_unique_name =
''ELO_STBY'' comment=
''Is standby'' scope=spfile";
sql clone "alter system set LOG_ARCHIVE_DEST_2 =
''SERVICE=ELO ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ELO'' comment=
'''' scope=spfile";
sql clone "alter system set FAL_SERVER =
''ELO'' comment=
''Is primary'' scope=spfile";
shutdown clone immediate;
startup clone nomount;
executing Memory Script
sql statement: alter system set db_unique_name = ''ELO_STBY'' comment= ''Is standby'' scope=spfile
sql statement: alter system set LOG_ARCHIVE_DEST_2 = ''SERVICE=ELO ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ELO'' comment= '''' scope=spfile
sql statement: alter system set FAL_SERVER = ''ELO'' comment= ''Is primary'' scope=spfile
Oracle instance shut down
connected to auxiliary database (not started)
Oracle instance started
Total System Global Area 4275781632 bytes
Fixed Size 2233336 bytes
Variable Size 2466253832 bytes
Database Buffers 1795162112 bytes
Redo Buffers 12132352 bytes
contents of Memory Script:
sql clone "alter system set control_files =
''+DGDATELO/elo_stby/controlfile/current.256.778345667'', ''+DGFRAELO/elo_stby/controlfile/current.256.778345667'' comment=
''Set by RMAN'' scope=spfile";
backup as copy current controlfile for standby auxiliary format '+DGDATELO/elo_stby/controlfile/current.257.778345667';
restore clone controlfile to '+DGFRAELO/elo_stby/controlfile/current.257.778345667' from
'+DGDATELO/elo_stby/controlfile/current.257.778345667';
sql clone "alter system set control_files =
''+DGDATELO/elo_stby/controlfile/current.257.778345667'', ''+DGFRAELO/elo_stby/controlfile/current.257.778345667'' comment=
''Set by RMAN'' scope=spfile";
shutdown clone immediate;
startup clone nomount;
executing Memory Script
sql statement: alter system set control_files = ''+DGDATELO/elo_stby/controlfile/current.256.778345667'', ''+DGFRAELO/elo_stby/controlfile/current.256.778345667'' comment= ''Set by RMAN'' scope=spfile
Starting backup at 19-MAR-12
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
copying standby control file
output file name=/opt/oracle/soft/elo/11.2.0/db_1/dbs/snapcf_elo_1.f tag=TAG20120319T150747 RECID=2 STAMP=778345668
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:25
Finished backup at 19-MAR-12
Starting restore at 19-MAR-12
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=976 instance=ELO_1 device type=DISK
channel ORA_AUX_DISK_1: copied control file copy
Finished restore at 19-MAR-12
sql statement: alter system set control_files = ''+DGDATELO/elo_stby/controlfile/current.257.778345667'', ''+DGFRAELO/elo_stby/controlfile/current.257.778345667'' comment= ''Set by RMAN'' scope=spfile
Oracle instance shut down
connected to auxiliary database (not started)
Oracle instance started
Total System Global Area 4275781632 bytes
Fixed Size 2233336 bytes
Variable Size 2466253832 bytes
Database Buffers 1795162112 bytes
Redo Buffers 12132352 bytes
contents of Memory Script:
sql clone 'alter database mount standby database';
executing Memory Script
sql statement: alter database mount standby database
contents of Memory Script:
set newname for clone tempfile 1 to new;
switch clone tempfile all;
set newname for clone datafile 1 to new;
set newname for clone datafile 2 to new;
set newname for clone datafile 3 to new;
set newname for clone datafile 4 to new;
backup as copy reuse
datafile 1 auxiliary format new
datafile 2 auxiliary format new
datafile 3 auxiliary format new
datafile 4 auxiliary format new
sql 'alter system archive log current';
executing Memory Script
executing command: SET NEWNAME
renamed tempfile 1 to +DGDATELO in control file
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
Starting backup at 19-MAR-12
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
input datafile file number=00003 name=+DGDATELO/elo/datafile/undotbs1.261.777745285
How could I check what it is doing?
SQL> select event,TOTAL_WAITS,TOTAL_TIMEOUTS,TOTAL_TIMEOUTS from v$session_event where sid=4;
EVENT TOTAL_WAITS TOTAL_TIMEOUTS TOTAL_TIMEOUTS
remote db operation 21 0 0
remote db file write 2317 0 0
Disk file operations I/O 32 0 0
RMAN backup & recovery I/O 2296 0 0
Network file transfer 26 0 0
KSV master wait 41 0 0
control file sequential read 203 0 0
control file single write 32 0 0
control file parallel write 30 0 0
db file sequential read 5 0 0
db file single write 1 0 0
SQL*Net message to client 74 0 0
SQL*Net message from client 74 0 0
events in waitclass Other 49 2 2
14 rows selected.
Anything wrong? -
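Besides v$session_event, the standard v$session_longops view usually shows how far an RMAN file copy has progressed. A sketch (the opname filter is an example; run it on the target side):

```sql
select sid, opname, sofar, totalwork,
       round(sofar / totalwork * 100, 1) pct_done
from   v$session_longops
where  opname like 'RMAN%'
and    totalwork > 0
and    sofar < totalwork;
```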
Duplicated events on iCal, and even more on iPod calendar
I am using the latest version of Mac OS X Lion and an iPod Touch 2G with the latest compatible iOS update (4.2.1)
I gave iCloud a try, but after syncing with my iPod, I wouldn't be able to create or edit events in my iCloud calendars (which were previously the "On My Mac" Calendars)
I decided to stop syncing iCal with iCloud so that I would be using "On My Mac" calendars that could be modified on my iPod.
I exported each calendar. Stopped syncing with iCloud (which deletes those calendars from my mac) and then imported them within new calendars that were named and organized as I previously had.
However, I started seeing duplicate or multiple event entries on my Mac and on the iPod Touch.
I tried to delete the duplicates from the Mac, and some of them delete, some don't.
It is weird because I have duplicates of the Address Book Birthday calendar too, which shouldn't have duplicated events that cannot be deleted.
Another thing that happens is that in iCal there are events that appear correctly, just one time. But checking on my iPod, I see the same event 7 times!
Please, somebody help and guide me. I am a heavy user of the calendar on both devices and need a fix soon.
Thanks!
I have the exact same problem.
Can't figure it out.
It is very annoying and time consuming deleting the duplicate items. -
Another instance of severe new problem in iCal - events duplicated infinite
This is like other posted problems. I just went Leopard - Snow Leopard. Have used iCal for years and consider myself experienced. On my first attempt to add a repeating event following the OS upgrade, the repeating event was duplicated hundreds perhaps thousands of times before I even completed the entry in the popup dialog. As you all know, that dialog does not allow cancel or delete. The event is duplicated multiple times in the same day. It seems to have no termination date. It cannot be selected in order to delete it. The color is even a pastel shade of the group color that it was supposed to have.
I have turned off iSync and hope that I can remove these ridiculous events before they spread to every Mac and iPod.
I cannot find a database to delete or I'd delete it and try to resync from me.com.
Help, Apple -- we need your attention on this one. It is definitely a **** up in iCal.
O.k. - I am not experienced, in spite of many years of using this turkey. First off, the sync/don't sync button in preferences - forget it. iCal syncs with MobileMe no matter what the setting. Bummer - except that once the hundreds of duplicate, repeat events are transferred to MobileMe, you can delete them from the web interface - unlike the iCal application, which forbids deleting unwanted events.
Second new discovery. You can put in a repeated event on the MobileMe website with no trouble but, bummer again, the new event on MobileMe does not sync back to the iCal on the iMac.
Final and most important discovery. I got the problem in the first place by blindly supplying the information asked for and, I kid you not, iCal 4 asks for the end date of a repeated event twice. If you answer the question twice, as I did, you get N * N events. So I hope I can avoid the problem in the future. But really, iCal needs methods to delete repeated events. Also as a former long-term Palm user, I sorely miss the option to restore a calendar from another machine. (the ability to sync or overwrite the local application). -
Error while duplicating a panel with control dynamically associated to a splitter
When you try to duplicate a panel where a control has been dynamically created and associated to a splitter, you receive error -153: Item is already attached to splitter control.
See the attached project, which shows this behaviour: you can move the splitter and have controls moved accordingly; you can also duplicate the panel with the splitter fully operative. If you press the "Duplicate Numeric" button, the existing numeric is duplicated and the new control is attached to the splitter. After this, trying to duplicate the panel generates error -153. If you duplicate the control but do not add it to the splitter, no error is received while duplicating the panel.
Tested in CVI2009SP1. There is no evidence of this error in Known Issues document for later versions.
Proud to use LW/CVI from 3.1 on.
My contributions to the Developer Zone Community
If I have helped you, why not giving me a kudos?
Attachments:
DplPnl.zip 7 KB
Hi Roberto,
Thanks for the nicely stripped down test program! Easy to reproduce bugs are a rare pleasure...
I did confirm the bug, which has been there since CVI 7.1. It is triggered by the presence, at the time that the panel is copied, of a splitter-attached control, which was itself duplicated from another control that had been loaded from a UIR (whether or not the original control is also attached to the splitter). Whew. That was long.
In simpler terms, it's not the fact that the control was dynamically attached to the splitter that is causing the problem. It's the fact that the control was duplicated from another control.
The problem happens because a duplicated control inherits the constant name of the original control. Normally, this is not a problem for runtime-only controls. However, the splitter was still assuming that constant names of its attached controls were all unique, and that's what caused the problem.
Bug id: 392640
Unfortunately, there isn't a great workaround, since you can't prevent the duplicated control from inheriting the constant name, nor can you change its constant name after the fact. I can think of two possibilities:
Don't duplicate the control. Create it using NewCtrl instead, and then apply the necessary configurations.
Temporarily detach the duplicate control from the splitter prior to copying the panel, then attach it back (in both the original panel and the duplicated panel) after the panel copy.
Luis