On DML operations: locks held on parent tables.
Hi All,
I am trying to delete records from a table, such as:
DELETE /*+ index(EVENT_JOURNAL row_ts_idx) */ FROM EVENT_JOURNAL WHERE trunc(ROW_TS) = TO_DATE('09/05/2012','DD/MM/YYYY');
The table has a function-based index on the ROW_TS column.
Plan
DELETE STATEMENT ALL_ROWS Cost: 138 Bytes: 502 K Cardinality: 8 K
3 DELETE BENCHMARKDEV.EVENT_JOURNAL
2 TABLE ACCESS BY INDEX ROWID TABLE BENCHMARKDEV.EVENT_JOURNAL Cost: 138 Bytes: 502 K Cardinality: 8 K
1 INDEX RANGE SCAN INDEX BENCHMARKDEV.ROW_TS_IDX Cost: 23 Cardinality: 8 K
The above delete operation takes a long time to delete just 273 rows.
I have checked and found that the above DML acquires locks on the table's parent tables, using the following query:
select OS_USER_NAME os_user,
PROCESS os_pid,
ORACLE_USERNAME oracle_user,
l.SID oracle_id,
decode(TYPE,
'MR', 'Media Recovery',
'RT', 'Redo Thread',
'UN', 'User Name',
'TX', 'Transaction',
'TM', 'DML',
'UL', 'PL/SQL User Lock',
'DX', 'Distributed Xaction',
'CF', 'Control File',
'IS', 'Instance State',
'FS', 'File Set',
'IR', 'Instance Recovery',
'ST', 'Disk Space Transaction',
'TS', 'Temp Segment',
'IV', 'Library Cache Invalidation',
'LS', 'Log Start or Switch',
'RW', 'Row Wait',
'SQ', 'Sequence Number',
'TE', 'Extend Table',
'TT', 'Temp Table', type) lock_type,
decode(LMODE,
0, 'None',
1, 'Null',
2, 'Row-S (SS)',
3, 'Row-X (SX)',
4, 'Share',
5, 'S/Row-X (SSX)',
6, 'Exclusive', lmode) lock_held,
decode(REQUEST,
0, 'None',
1, 'Null',
2, 'Row-S (SS)',
3, 'Row-X (SX)',
4, 'Share',
5, 'S/Row-X (SSX)',
6, 'Exclusive', request) lock_requested,
decode(BLOCK,
0, 'Not Blocking',
1, 'Blocking',
2, 'Global', block) status,
OWNER,
OBJECT_NAME
from v$locked_object lo,
dba_objects do,
v$lock l
where lo.OBJECT_ID = do.OBJECT_ID
AND l.SID = lo.SESSION_ID
oracle 6624 BENCHMARKDEV 216 DML Row-X (SX) None Not Blocking BENCHMARKDEV BUSINESS_UNIT
oracle 6624 BENCHMARKDEV 216 DML Row-X (SX) None Not Blocking BENCHMARKDEV BUSINESS_UNIT
oracle 6624 BENCHMARKDEV 216 DML Row-X (SX) None Not Blocking BENCHMARKDEV BUSINESS_UNIT
oracle 6624 BENCHMARKDEV 216 DML Row-X (SX) None Not Blocking BENCHMARKDEV BUSINESS_UNIT
oracle 6624 BENCHMARKDEV 216 DML Row-X (SX) None Not Blocking BENCHMARKDEV BUSINESS_UNIT
oracle 6624 BENCHMARKDEV 216 DML Row-X (SX) None Not Blocking BENCHMARKDEV BUSINESS_UNIT
oracle 6624 BENCHMARKDEV 216 DML Row-X (SX) None Not Blocking BENCHMARKDEV BUSINESS_UNIT
oracle 6624 BENCHMARKDEV 216 DML Row-X (SX) None Not Blocking BENCHMARKDEV BUSINESS_UNIT
oracle 6624 BENCHMARKDEV 216 DML Row-X (SX) None Not Blocking BENCHMARKDEV BUSINESS_UNIT
oracle 6624 BENCHMARKDEV 216 Transaction Exclusive None Not Blocking BENCHMARKDEV BUSINESS_UNIT
oracle 6624 BENCHMARKDEV 216 DML Row-X (SX) None Not Blocking BENCHMARKDEV BUSINESS_UNIT
I don't know on what basis the parent tables are getting locked.
Please help me resolve this.
Thanks & Regards
Sami
Edited by: Sami on May 16, 2012 3:14 PM
Edited by: Sami on May 16, 2012 3:15 PM
Hi Sami,
You don't even need a function-based index on ROW_TS. A normal B*Tree index on ROW_TS will do the trick and also reduce the time spent sorting the index.
DELETE FROM EVENT_JOURNAL WHERE ROW_TS >= TO_date('09/05/2012','DD/MM/YYYY') AND ROW_TS < TO_date('10/05/2012','DD/MM/YYYY');
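As a sketch of that suggestion (the index and table names are taken from the original post; dropping and recreating an index on a busy table should of course be scheduled carefully):

```sql
-- Replace the function-based index with a plain B*Tree index on ROW_TS,
-- which the range predicate above can use directly.
DROP INDEX row_ts_idx;
CREATE INDEX row_ts_idx ON event_journal (row_ts);
```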
{code}
Above delete operation takes much time to delete 273 rows
{code}
# Is EVENT_JOURNAL a parent table with foreign key references from other tables?
# How many records are there in EVENT_JOURNAL?
# Check the data dictionary view DBA_WAITERS to see the waiting/holding sessions.
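For that last check, a query along these lines against the standard DBA_WAITERS view (it requires DBA privileges) shows which session holds the lock each waiter is blocked on:

```sql
-- Each row pairs a blocked session with the session holding the lock it needs.
SELECT waiting_session,
       holding_session,
       lock_type,
       mode_held,
       mode_requested
  FROM dba_waiters;
```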
Similar Messages
-
"cannot perform a DML operation inside a query" error when using table function
Hello, please help me.
I created the following table function. When I use it via "select * from table(customerRequest_list);" I receive the error "cannot perform a DML operation inside a query".
Can you solve this problem?
CREATE OR REPLACE FUNCTION customerRequest_list(
p_sendingDate varchar2:=NULL,
p_requestNumber varchar2:=NULL,
p_branchCode varchar2:=NULL,
p_bankCode varchar2:=NULL,
p_numberOfchekbook varchar2:=NULL,
p_customerAccountNumber varchar2:=NULL,
p_customerName varchar2:=NULL,
p_checkbookCode varchar2:=NULL,
p_sendingBranchCode varchar2:=NULL,
p_branchRequestNumber varchar2:=NULL)
RETURN customerRequest_nt
PIPELINED
IS
ob customerRequest_object:=customerRequest_object(
NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL);
condition varchar2(2000 char):=' WHERE 1=1 ';
TYPE rectype IS RECORD(
requestNumber VARCHAR2(32 char),
branchRequestNumber VARCHAR2(32 char),
branchCode VARCHAR2(50 char),
bankCode VARCHAR2(50 char),
sendingDate VARCHAR2(32 char),
customerAccountNumber VARCHAR2(50 char),
customerName VARCHAR2(200 char),
checkbookCode VARCHAR2(50 char),
numberOfchekbook NUMBER(2),
sendingBranchCode VARCHAR2(50 char),
numberOfIssued NUMBER(2)
);
rec rectype;
dDate date;
sDate varchar2(25 char);
TYPE curtype IS REF CURSOR; --RETURN customerRequest%rowtype;
cur curtype;
my_branchRequestNumber VARCHAR2(32 char);
my_branchCode VARCHAR2(50 char);
my_bankCode VARCHAR2(50 char);
my_sendingDate date;
my_customerAccountNumber VARCHAR2(50 char);
my_checkbookCode VARCHAR2(50 char);
my_sendingBranchCode VARCHAR2(50 char);
BEGIN
IF NOT (regexp_like(p_sendingDate,'^[[:digit:]]{4}/[[:digit:]]{2}/[[:digit:]]{2}$')
OR regexp_like(p_sendingDate,'^[[:digit:]]{4}/[[:digit:]]{2}/[[:digit:]]{2}[[:space:]]{1}[[:digit:]]{2}:[[:digit:]]{2}:[[:digit:]]{2}$')) THEN
RAISE_APPLICATION_ERROR(-20000,cbdpkg.get_e_m(-1,5));
ELSIF (p_sendingDate IS NOT NULL) THEN
dDate:=TO_DATE(p_sendingDate,'YYYY/MM/DD hh24:mi:ss','nls_calendar=persian');
dDate:=trunc(dDate);
sDate:=TO_CHAR(dDate,'YYYY/MM/DD hh24:mi:ss');
condition:=condition|| ' AND ' || 'sendingDate='||'TO_DATE('''||sDate||''',''YYYY/MM/DD hh24:mi:ss'''||')';
END IF;
IF (p_requestNumber IS NOT NULL) AND (cbdpkg.isspace(p_requestNumber)=0) THEN
condition:=condition|| ' AND ' || ' requestNumber='||p_requestNumber;
END IF;
IF (p_bankCode IS NOT NULL) AND (cbdpkg.isspace(p_bankCode)=0) THEN
condition:=condition|| ' AND ' || ' bankCode='''||p_bankCode||'''';
END IF;
IF (p_branchCode IS NOT NULL) AND (cbdpkg.isspace(p_branchCode)=0) THEN
condition:=condition|| ' AND ' || ' branchCode='''||p_branchCode||'''';
END IF;
IF (p_numberOfchekbook IS NOT NULL) AND (cbdpkg.isspace(p_numberOfchekbook)=0) THEN
condition:=condition|| ' AND ' || ' numberOfchekbook='''||p_numberOfchekbook||'''';
END IF;
IF (p_customerAccountNumber IS NOT NULL) AND (cbdpkg.isspace(p_customerAccountNumber)=0) THEN
condition:=condition|| ' AND ' || ' customerAccountNumber='''||p_customerAccountNumber||'''';
END IF;
IF (p_customerName IS NOT NULL) AND (cbdpkg.isspace(p_customerName)=0) THEN
condition:=condition|| ' AND ' || ' customerName like '''||'%'||p_customerName||'%'||'''';
END IF;
IF (p_checkbookCode IS NOT NULL) AND (cbdpkg.isspace(p_checkbookCode)=0) THEN
condition:=condition|| ' AND ' || ' checkbookCode='''||p_checkbookCode||'''';
END IF;
IF (p_sendingBranchCode IS NOT NULL) AND (cbdpkg.isspace(p_sendingBranchCode)=0) THEN
condition:=condition|| ' AND ' || ' sendingBranchCode='''||p_sendingBranchCode||'''';
END IF;
IF (p_branchRequestNumber IS NOT NULL) AND (cbdpkg.isspace(p_branchRequestNumber)=0) THEN
condition:=condition|| ' AND ' || ' branchRequestNumber='''||p_branchRequestNumber||'''';
END IF;
dbms_output.put_line(condition);
OPEN cur FOR 'SELECT branchRequestNumber,
branchCode,
bankCode,
sendingDate,
customerAccountNumber ,
checkbookCode ,
sendingBranchCode
FROM customerRequest '|| condition ;
LOOP
FETCH cur INTO my_branchRequestNumber,
my_branchCode,
my_bankCode,
my_sendingDate,
my_customerAccountNumber ,
my_checkbookCode ,
my_sendingBranchCode;
EXIT WHEN (cur%NOTFOUND) OR (cur%NOTFOUND IS NULL);
BEGIN
SELECT requestNumber,
branchRequestNumber,
branchCode,
bankCode,
TO_CHAR(sendingDate,'yyyy/mm/dd','nls_calendar=persian'),
customerAccountNumber ,
customerName,
checkbookCode ,
numberOfchekbook ,
sendingBranchCode ,
numberOfIssued INTO rec FROM customerRequest FOR UPDATE NOWAIT;
--problem point is this
EXCEPTION
when no_data_found then
null;
END ;
ob.requestNumber:=rec.requestNumber ;
ob.branchRequestNumber:=rec.branchRequestNumber ;
ob.branchCode:=rec.branchCode ;
ob.bankCode:=rec.bankCode ;
ob.sendingDate :=rec.sendingDate;
ob.customerAccountNumber:=rec.customerAccountNumber ;
ob.customerName :=rec.customerName;
ob.checkbookCode :=rec.checkbookCode;
ob.numberOfchekbook:=rec.numberOfchekbook ;
ob.sendingBranchCode:=rec.sendingBranchCode ;
ob.numberOfIssued:=rec.numberOfIssued ;
PIPE ROW(ob);
IF (cur%ROWCOUNT>500) THEN
CLOSE cur;
RAISE_APPLICATION_ERROR(-20000,cbdpkg.get_e_m(-1,4));
EXIT;
END IF;
END LOOP;
CLOSE cur;
RETURN;
END;
Now what exactly would be the point of putting a SELECT FOR UPDATE in an autonomous transaction?
I think OP should start by considering why he has a function with an undesirable side effect in the first place. -
Query performance on same table with many DML operations
Hi all,
I have a table with 100 rows of data. Since then, I have inserted, deleted, and modified data many times.
The select statement after these DML operations takes much more time than before (there is not much difference in the data).
If I recreate the same table with the same data and run the same select statement, it takes less time.
My question is: is there any command, like compress or re-indexing or something like that, to improve the performance without creating the table again?
Thanks in advance,
Pal
Try searching "rebuilding indexes" on http://asktom.oracle.com. You will get lots of hits and many lively discussions. Certainly Tom's opinion is that rebuilds are very rarely required.
As far as I know, Oracle has always re-used deleted rows in indexes as long as the new row belongs in that place in the index. The only situation I am aware of where deleted rows do not get re-used is where you have a monotonically increasing key (e.g. one generated by a sequence), and most, but not all, of the older rows are deleted over time.
For example, suppose you had a table like this, where seq_no is populated by a sequence and indexed:
seq_no NUMBER
processed_flag VARCHAR2(1)
trans_date DATE
and then did deletes like:
DELETE FROM t
WHERE processed_flag = 'Y' and
trans_date <= ADD_MONTHS(sysdate, -24);
that deleted 99% of the rows in the time period that were processed, leaving only a few. Then the index leaf blocks would be very sparsely populated (i.e. lots of deleted rows in them), but since the current seq_no values are much larger than the old ones remaining, the space could not be re-used. Any leaf block that had all of its rows deleted would be reused in another part of the index.
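For that sparse-index scenario, a coalesce is the usual light-weight remedy; a hedged sketch with a hypothetical index name:

```sql
-- COALESCE merges adjacent, sparsely populated leaf blocks in place,
-- without the extra space a full rebuild needs.
ALTER INDEX t_seq_no_idx COALESCE;

-- The heavier alternative, rebuilding the index online:
ALTER INDEX t_seq_no_idx REBUILD ONLINE;
```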
HTH
John -
How to know which DML operation is taking place on a table within a procedure
Hii all,
My DB Version
SQL> select *
2 from v$version;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for Solaris: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
How do I find what DML operation is taking place on a particular table within a procedure?
For suppose I've the below procedure
create table r_dummy (
name varchar2(4000),
emp_id number
);
Create or replace procedure r_dummy_proc (
p_name in varchar2,
p_emp_id in number
)
is
Begin
Update r_dummy
set name = p_name
where emp_id = p_emp_id;
if sql%rowcount > 1 then
dbms_output.put_line('Successfully updated employee name');
end if;
End;
Here from the code I can identify that an update operation is taking place on table 'R_DUMMY'.
But how can I find that without actually viewing the code? I have hundreds of procedures in my DB and would like to find which DML takes place on which table and in which procedure.
Please help with some suggestions.
And here is the solution:
with t as (
select distinct name,type,text,line
from user_source s
where regexp_like(s.text,'cp_ca_dtls','i')
),
x as (
select name,line,text
from (
select name,
case when (regexp_instr(text,'(update)|(insert)|(delete)',1,1,1,'i') >0) and regexp_instr(ld,'CP_CA_DTLS',1,1,1,'i') >0
then line
else null
end as line,
text
from (
select name,text,line,lead(text) over(partition by name order by line) ld
from user_source
where name in (
select distinct name
from user_source
where upper(text) like '%CP_CA_DTLS%'
)
order by 1 nulls last
)
)
where line is not null
)
select name,line,text
from t t1
where regexp_instr(text,'(update)|(insert)|(delete)',1,1,1,'i') >0
and exists (
select 1
from t t2
where t1.name = t2.name
and t1.type = t2.type
and t1.line = t2.line
)
union
select name,line,text
from x -
DML operations improves perfomance on a Partioned Table?
Hi
We have a simple (non-partitioned) table and we do normal DML operations on it. If we convert that table into 5 partitions, does DML performance on it improve by 5 times? To be very specific, will READs and WRITEs on the table improve, and if yes, to what extent? *(considering table sizes in TB)*. DB is 11g R2.
Regards
Edited by: 905133 on Dec 29, 2011 10:08 PM
Edited by: 905133 on Dec 29, 2011 10:33 PM
CKLP,
I populated a table in my test environment called test with 10 columns and 7 million records.
its structure is
col1..col5 are of data type number and are locally indexed
col6..col 9 are of data type number (not indexed)
col 10 is of data type Date
table is partitioned on col10 by Range. (7 partition for a week (1 partition/day), 1million records in a partition)
When I inserted 7 million records (i.e. one million records/partition), the average insertion time per partition was 7.5 min. (I inserted the records individually into each partition.)
Then I created another table, test2, with the same number of indexed columns and the same amount of records, but increased the number of partitions: 4 partitions/day, 28 partitions for the week.
When I inserted 7 million records (one million records per 4 partitions), the average insertion time was 7.1 min. (I inserted the records individually into these 4 partitions at a time.)
The INSERT INTO test VALUES(...) format was used, and I now know parallelism can't be used with this format.
So the point to discuss here is "why did I not achieve a better insertion time when I divided a daily partition into 4 partitions per day", or "how can I improve insertion time in such scenarios".
Oracle DB 11g R2, OS Linux 5, Sun Server x4200 with shared storage. -
ORA-01591: lock held by in-doubt distributed transaction 2.53.300807
SQL> select count(*) from TBCD_CCODMSG;
select count(*) from TBCD_CCODMSG
ERROR at line 1:
ORA-01591: lock held by in-doubt distributed transaction 2.53.300807
SQL> select * from TBCD_CCODMSG where rownum =10;
select * from TBCD_CCODMSG where rownum =10
ERROR at line 1:
ORA-01591: lock held by in-doubt distributed transaction 2.53.300807
SQL> alter session set events '1591 trace name errorstack level 10';
Session altered.
SQL> select * from TBCD_CCODMSG where rownum =10;
select * from TBCD_CCODMSG where rownum =10
ERROR at line 1:
ORA-01591: lock held by in-doubt distributed transaction 2.53.300807
SQL> select object_name, object_type from user_objects where object_name = 'TBCD_CCODMSG';
OBJECT_NAME
OBJECT_TYPE
TBCD_CCODMSG
TABLE
SQL> select * from DBA_2PC_PENDING;
no rows selected
SQL> select * from DBA_2PC_NEIGHBORS where local_tran_id = '2.53.300807';
LOCAL_TRAN_ID IN_
DATABASE
DBUSER_OWNER I DBID SESS#
BRANCH
2.53.300807 in
EFT.WORLD
CD1 N 15f1353d 1
0B000B00011902000128
Normally the Oracle background process RECO recovers this problem automatically.
Sometimes, however, it fails to clean up properly, and the DBA must check the procedure below and handle it manually.
Please refer to the following...
No. 12163
DISTRIBUTED TRANSACTION TROUBLESHOOTING (how to resolve ORA-1591)
==========================================================
Unlike a local transaction, which uses no other database and is rolled back automatically on abnormal termination, when a distributed transaction fails in the middle of the two-phase commit, some of the participating databases may end up rolled back or committed while others remain holding a distributed lock indefinitely.
Such pending transactions are normally cleaned up automatically by Oracle's background RECO process, but situations can arise where automatic cleanup does not happen.
When cleanup has not happened and a distributed lock remains, later queries or changes against the related tables can raise ORA-1591, so when a distributed transaction fails the DB admin may need to step in and clean up the pending transaction.
The actions to take when a distributed transaction fails, or when ORA-1591 occurs afterwards, are organized into the nine steps below.
*** For the concept and detailed procedure of the two-phase commit of a distributed transaction, see <korean bulletin:12185>.
[Note 1] To keep the document easy to follow, the nodes in the distributed environment are called V817LOC and V817REM, and the transaction is assumed to have been executed on the V817LOC node.
[Note 2] The dbms_transaction package mentioned below is normally created by the catproc.sql script. If it does not exist, run the dbmsutil.sql and prvtutil.plb scripts in the $ORACLE_HOME/rdbms/admin directory as the sys user (typically via connect internal in svrmgrl).
This package must always be run at the very start of a transaction: run it from a freshly connected session, or, if DML preceded it, issue a commit or rollback first and then run the package.
Of the steps below, STEP 1 through 3 are not essential for solving the problem, so if the problem must be fixed urgently, start from step 4.
STEP 1: Check the alert.log file.
The alert.log in the bdump directory always records the relevant error messages when a distributed transaction fails. For example, it looks like the following; you can check whether the transaction was rolled back/committed or is in-doubt, plus information such as the transaction id.
Tue Dec 12 16:23:25 2000
ORA-02054: transaction 1.8.238 in-doubt
ORA-02063: preceding line from V817REM
Tue Dec 12 16:23:25 2000
DISTRIB TRAN V817LOC.WORLD.89f6eafb.1.8.238
is local tran 1.8.238 (hex=01.08.ee)
insert pending prepared tran, scn=194671 (hex=0.0002f86f)
STEP 2: Check the network environment.
Verify that the listener is up and that all database links are working.
STEP 3: Check that the RECO process is running.
To check at the OS level whether the RECO process is up:
os> ps -ef | grep reco
RECO is a background process started automatically at database startup; it disappears if distributed recovery is disabled. Distributed recovery can be enabled/disabled as follows:
SQL>alter system enable distributed recovery;
SQL>alter system disable distributed recovery;
Except for STEP 9, the actions below are essentially the same work that the RECO process performs automatically. If RECO could not clean up automatically for whatever reason, you must clean up manually as described in this document.
STEP 4: Query DBA_2PC_PENDING.
sqlplus system/manager
SQL>select local_tran_id, global_tran_id, state, mixed, host, commit#
from dba_2pc_pending;
A result like the following is returned.
LOCAL_TRAN_ID|GLOBAL_TRAN_ID |STATE |MIX|HOST |COMMIT#
-------------|----------------------|--------|---|----------|--------
1.8.238 |V817LOC.WORLD.89f6eafb|prepared|no |SUP_SERVER|194671
|.1.8.238 | | |\eykim |
If this query returns several rows, compare the local transaction id reported with the ORA-1591 or distributed-failure error against the returned LOCAL_TRAN_ID values and pick the matching row. If the LOCAL_TRAN_ID value equals the trailing digits of GLOBAL_TRAN_ID, this node is the global coordinator.
STEP 5: Query the DBA_2PC_NEIGHBORS view.
sqlplus system/manager
SQL>select local_id, in_out, database, dbuser_owner, interface
from dba_2pc_neighbors;
LOCAL_TRAN_ID|IN_OUT|DATABASE |DBUSER_OWNER |INT
-------------|------|-------------------------|---------------|---
1.8.238 |in | |SCOTT |N
1.8.238 |out |V817REM.WORLD |SCOTT |C
The rows returned here describe the databases involved in the distributed transaction. A DATABASE column shown as null means the local database currently being queried; a row with IN_OUT of OUT is a referenced node, and its DATABASE column value is the database link name pointing at that database.
Using that database link name you can query the remote DB's DBA_2PC_PENDING, as below, to check the state of the related nodes.
SQL>select local_tran_id, global_tran_id, state, mixed, host, commit#
from dba_2pc_pending@v817rem;
You can tell whether the rows returned from each node's DBA_2PC_PENDING belong to the same distributed transaction by comparing the GLOBAL_TRAN_ID values.
STEP 6: Identify the commit point site.
For the commit point site, see <korean bulletin:12185>.
In this example, because COMMIT_POINT_STRENGTH was not specified, the default commit point site is V817REM, not the global coordinator. In general the commit point site shows up in the global coordinator's DBA_2PC_NEIGHBORS with the IN_OUT field set to OUT and the INTERFACE part set to C.
The commit point site matters because its local portion of the transaction never passes through the prepared state and so never becomes in-doubt; therefore no query or DML there ever fails because of a distributed lock.
For this reason it is advisable to designate the central node holding the most important data as the commit point site.
STEP 7: Check the MIXED column of DBA_2PC_PENDING.
- If MIXED is NO: perform STEP 8
- If MIXED is YES: perform STEP 9
The RECO process decides whether the MIXED column of DBA_2PC_PENDING is set to YES or NO. The typical case where MIXED becomes YES is when the distributed transaction fails after the commit point site has already committed, and a prepared transaction on a non-commit-point site is then rollback forced, breaking the consistency of the distributed transaction.
(In the STATE column, the commit point site then shows COMMITTED and the non-commit-point site shows FORCED ROLLBACK.)
[Note] If the distributed transaction fails before the commit point site has committed, so the commit point site rolls back, and a prepared transaction on a non-commit-point site is then commit forced, consistency is logically broken in just the same way, but in this case the MIXED column stays NO. The reason appears to be that the commit point site, having rolled back, leaves no entry in the DBA_2PC_PENDING view, so RECO has no way to explicitly recognize the mixed state.
STEP 8: Check the value of the STATE column in DBA_2PC_PENDING.
CASE 8-1: STATE is COMMITTED
If STATE is COMMITTED, the transaction committed successfully on this local database (V817LOC), so no work at all is needed on this node. The entry will be removed automatically by the RECO process; if RECO failed to remove the row for some reason, the db admin can delete it directly as follows. The value in parentheses is the local_tran_id.
sqlplus sys/manager (be sure to connect as sys)
SQL>exec dbms_transaction.purge_lost_db_entry('1.8.238');
When V817LOC's STATE is COMMITTED like this, the commit point site V817REM has already committed. V817REM will therefore show STATE as COMMITTED, or the information may already have been removed after the commit so that nothing appears in DBA_2PC_PENDING. For V817REM, if necessary, just clean up the contents of DBA_2PC_PENDING in the same way as for V817LOC above.
If some other node besides V817REM participated in the distributed transaction and, with V817LOC COMMITTED, that node's STATE shows PREPARED, resolve that node by referring to CASE 8-2 below.
CASE 8-2: STATE is PREPARED
When STATE is PREPARED, the blocks containing the data changed on this node (V817LOC) are under a distributed lock; in this situation every read/write against those blocks raises ORA-1591, so this is the most important part of the troubleshooting.
First, referring to STEP 4 and STEP 5, query the DBA_2PC_PENDING view of every related node. If the other node (V817REM) has no information in its DBA_2PC_PENDING, then V817REM is the commit point site and the data has already been rolled back. In that case, rollback force the prepared transaction on V817LOC as follows.
That is, on V817LOC:
SQL>rollback force '1.8.238';
If the V817REM node does have the entry and its state is COMMITTED, then V817LOC must also commit, like this:
SQL>commit force '1.8.238';
An SCN can be specified after the local_tran_id; specify the largest SCN among the nodes involved. The SCN value can be found in the COMMIT# field of DBA_2PC_PENDING. The reason for doing this is that if one of the distributed databases later needs incomplete recovery, the other databases can also use incomplete recovery to keep everything consistent.
SQL>commit force '1.8.238', '194671'
CASE 8-3: STATE is COLLECTING
A STATE of COLLECTING means the transaction terminated abnormally at a stage before the distributed lock was taken; since the distributed lock had not yet been acquired at this stage, the changed data has already been rolled back. In this case just delete the entry from DBA_2PC_PENDING.
sqlplus sys/manager (be sure to connect as sys)
SQL>exec dbms_transaction.purge_lost_db_entry('1.8.238');
CASE 8-4: STATE is FORCED ROLLBACK/FORCED COMMIT
If RECO or the db admin has already attempted a rollback force or commit force command and STATE has changed to FORCED ROLLBACK or FORCED COMMIT, there is no additional work to perform; RECO will remove the entry automatically. However, instead of waiting for RECO to do the work, you can delete it directly as follows.
sqlplus sys/manager (be sure to connect as sys)
SQL>exec dbms_transaction.purge_lost_db_entry('1.8.238');
STEP 9: Identify the inconsistency and clean up DBA_2PC_PENDING.
The cases in which the MIXED column of DBA_2PC_PENDING becomes YES were already explained in STEP 7. When a wrong action has produced an inconsistency between the distributed databases as described in STEP 7, it is impossible to restore consistency with a simple operation.
If the consistency of the distributed transaction matters above all else, you can perform incomplete recovery of the databases on all related nodes back to the state before the problem transaction ran. Incomplete recovery for consistency between distributed databases is not covered here. One thing worth mentioning: using the largest SCN of all related nodes as the commit SCN when committing the distributed transaction, as explained earlier, exists precisely for this kind of recovery. By having some databases jump their SCN to the larger value, instead of incrementing it by one, so that they hold the same SCN as the other databases, a later incomplete recovery of the related nodes to that same SCN leaves them all either before the distributed transaction or all after it, preserving consistency.
To accept the inconsistency while MIXED is YES and clean up the DBA_2PC_PENDING view, run the following.
sqlplus sys/manager (be sure to run as the sys user)
SQL>exec dbms_transaction.purge_mixed('1.8.238'); -
Sql server partition parent table and reference not partition child table
Hi,
I have two tables in SQL Server 2008 R2, Parent and Child Table.
The Parent table has a datetime column and is partitioned monthly; the Child table simply refers to the Parent table through a foreign key relation.
Is there any problem with the non-partitioned child table referring to a partitioned parent table?
Thanks,
Areef
The tables will need to be offline for the operation. "Offline" here means that you wrap the entire operation in a transaction. Ideally, this transaction would:
1) Drop the foreign key.
2) Use ALTER TABLE SWITCH to drop the old data.
3) Use ALTER PARTITION FUNCTION to drop the old empty partition.
4) Use ALTER PARTITION FUNCTION to add a new empty partition.
5) Reapply the foreign keys WITH CHECK.
All but the last operation are metadata-only operations (provided that you do them right). To perform the last operation, SQL Server must scan the child table and verify that all keys are present in the parent table. This can take some time for larger tables.
During the transaction, SQL Server holds Sch-M locks on the tables, which means they are entirely inaccessible, even for queries running with NOLOCK.
You can avoid this scan by applying the foreign key constraint WITH NOCHECK, but this can have an impact on query plans, as SQL Server will not consider the constraint trusted.
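The five numbered steps above can be sketched in T-SQL roughly as follows; all object, constraint, and boundary names here are hypothetical, and the partition scheme's NEXT USED filegroup is assumed to have been set beforehand:

```sql
BEGIN TRANSACTION;

-- 1) Drop the foreign key so the parent partition can be switched out.
ALTER TABLE dbo.Child DROP CONSTRAINT FK_Child_Parent;

-- 2) Switch the oldest partition into a staging table (metadata-only).
ALTER TABLE dbo.Parent SWITCH PARTITION 1 TO dbo.Parent_Staging;

-- 3) Drop the now-empty boundary and 4) add one for the new period.
ALTER PARTITION FUNCTION pf_Monthly() MERGE RANGE ('20080101');
ALTER PARTITION FUNCTION pf_Monthly() SPLIT RANGE ('20090201');

-- 5) Reapply the foreign key; WITH CHECK triggers the scan of Child.
ALTER TABLE dbo.Child WITH CHECK
    ADD CONSTRAINT FK_Child_Parent FOREIGN KEY (ParentId)
    REFERENCES dbo.Parent (ParentId);

COMMIT TRANSACTION;
```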
An alternative which should not be entirely dismissed is to use partitioned views instead. With partitioned views, the foreign keys are not an issue, because each partition is a pair of tables with its own local foreign key.
As for the second question: it appears to be completely pointless to partition the parent but not the child table. Or does the child table only have rows for a smaller set of the rows in the parent?
Erland Sommarskog, SQL Server MVP, [email protected] -
ORA-01591: lock held by in-doubt distributed transaction
I am using the Oracle WCF adapter to connect to Oracle for inserts.
I am getting ORA-01591: lock held by in-doubt distributed transaction.
Any ideas as to what the resolution could be? I have followed the steps mentioned below, but that's not a permanent solution.
Please advice.
The resolution described below is not acceptable and never should have been used. Although it does avoid the error, it's not okay to turn off AmbientTransaction when performing inserts and/or updates. A different solution needs to be found.
http://msdn.microsoft.com/en-US/library/dd788352(v=BTS.10).aspx
"Not performing operations in a transactional context is advisable only for operations that do not make changes to the database. For operations that update data in the database, we recommend setting the binding property to true otherwise you might either experience message loss or duplicate messages depending on whether you are performing inbound or outbound operations."
********************************************
This can be resolved by adjusting the configuration settings on the Oracle adapter, accessible via the Send Port properties. The properties and the values that should be used are shown below:
** Binding tab:
incrPoolSize: 1
maxPoolSize: 10
useAmbientTransaction: False
** Messages tab:
Isolation Level: ReadCommitted
Also, you'll need to get a DBA to roll back the hanging "in-doubt" transactions, which will be viewable via the sql below. Otherwise, if you try processing the same data again, you'll still get the same error.
SELECT LOCAL_TRAN_ID, GLOBAL_TRAN_ID, STATE, MIXED, HOST, COMMIT# FROM DBA_2PC_PENDING;
The transactions can be rolled back with sql, using this syntax:
ROLLBACK FORCE '<LOCAL_TRAN_ID>';
Thank you and have a great day! Vivek Kulkarni MCAD.net
Hi Vivek,
This error is encountered by many DBAs; it blocks the distributed transaction process and does not let the query go through, because the two-phase commit mechanism got an error somewhere.
The DBA should query the pending_trans$ and related tables, and attempt to repair the network connection(s) to the coordinator and commit point.
Here are some codes to help you through the process:
This one brings in-doubt transactions:
select * from DBA_2PC_PENDING where state='prepared'
This one prepares the rollback script for the transactions:
select 'rollback force '''||local_tran_id||''';' from DBA_2PC_PENDING where state='prepared'
All this is well described in the links below:
ORA-01591: lock held by in-doubt distributed transaction
ORA-01591: lock held by in-doubt distributed transaction string tips
On the BizTalk side you need to set ambient transaction to false, as Oracle does not proceed with a DTC transaction with BizTalk.
Thanks
Abhishek -
DML Operations in Stored Function
Hi,
I have used an UPDATE statement in a function, which is giving an error (ORA-14551: cannot perform a DML operation inside a query).
Can I use DML operations in a stored function?
(I need help on locking master/transaction tables, i.e. if one user locks a master, another user should not get modify access to the master/transactions.)
Thanks
Ramesh Ganji
Someone who obviously didn't read my previous post in this thread. You should pay attention; programming is all about the details.
ORA-14551: cannot perform a DML operation inside a query
Then it obviously does more than just "returns a Table Type object". Why are you doing DML in a function?
PLS-00653: aggregate/table functions are not allowed in PL/SQL scope
We can only call pipelined functions from SQL queries. So you'll have to ditch the DML or make it an autonomous transaction. Be very careful if you adopt the latter approach.
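A minimal sketch of the autonomous-transaction route, with hypothetical table and function names; the pragma lets the function run its DML in a separate transaction, which is precisely why it needs care (the caller will not see or control that commit):

```sql
CREATE OR REPLACE FUNCTION log_and_count RETURN NUMBER IS
  PRAGMA AUTONOMOUS_TRANSACTION;  -- DML here commits independently of the caller
  v_cnt NUMBER;
BEGIN
  -- Legal even when the function is called from a query, because the
  -- insert happens in the autonomous transaction, not the query's own.
  INSERT INTO audit_log (logged_at) VALUES (SYSDATE);
  COMMIT;  -- an autonomous transaction must end with COMMIT or ROLLBACK
  SELECT COUNT(*) INTO v_cnt FROM audit_log;
  RETURN v_cnt;
END;
/
```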
Cheers, APC -
ORA-01591: lock held by in-doubt distributed transaction 14.4.44
Hi,
I am using WLI 8.1 SP2 on Windows 2000, Oracle 9.2. I am getting this error...
<Apr 23, 2004 10:43:43 AM EDT> <Error> <WLW> <000000> <error
java.io.IOException: [BEA][Oracle JDBC Driver][Oracle]ORA-01591: lock held by
in
-doubt distributed transaction 14.4.44
at weblogic.jdbc.base.BaseBlobOutputStream.write(Unknown Source)
at weblogic.jdbc.base.BaseBlobOutputStream.write(Unknown Source)
at com.bea.wlw.runtime.core.bean.BMPContainerBean$OracleTableAccess.doSt
oreByInsert(BMPContainerBean.java:904)
at com.bea.wlw.runtime.core.bean.BMPContainerBean.doInsert(BMPContainerB
ean.java:1785)
at com.bea.wlw.runtime.core.bean.BMPContainerBean.ejbStore(BMPContainerB
ean.java:1742)
at com.bea.wli.bpm.runtime.ProcessContainerBean.ejbStore(ProcessContaine
rBean.java:79)
at com.bea.wlwgen.PersistentContainer_nga2bb_Impl.ejbStore(PersistentCon
tainer_nga2bb_Impl.java:149)
at weblogic.ejb20.manager.ExclusiveEntityManager.beforeCompletion(Exclus
iveEntityManager.java:556)
at weblogic.ejb20.internal.TxManager$TxListener.beforeCompletion(TxManag
er.java:745)
at weblogic.transaction.internal.ServerSCInfo.callBeforeCompletions(Serv
erSCInfo.java:1010)
at weblogic.transaction.internal.ServerSCInfo.startPrePrepareAndChain(Se
rverSCInfo.java:115)
at weblogic.transaction.internal.ServerTransactionImpl.localPrePrepareAn
dChain(ServerTransactionImpl.java:1142)
at weblogic.transaction.internal.ServerTransactionImpl.globalPrePrepare(
ServerTransactionImpl.java:1868)
at weblogic.transaction.internal.ServerTransactionImpl.internalCommit(Se
rverTransactionImpl.java:250)
at weblogic.transaction.internal.ServerTransactionImpl.commit(ServerTran
sactionImpl.java:221)
at weblogic.ejb20.internal.MDListener.execute(MDListener.java:412)
at weblogic.ejb20.internal.MDListener.transactionalOnMessage(MDListener.
java:316)
at weblogic.ejb20.internal.MDListener.onMessage(MDListener.java:281)
at weblogic.jms.client.JMSSession.onMessage(JMSSession.java:2596)
at weblogic.jms.client.JMSSession.execute(JMSSession.java:2516)
at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:197)
I have tried dropping all wli tables & deleting tlog files (deleting everything
from cgServer directory). Still get this error.
Any suggestions??
TIA
-Amit
Getting this problem with WLI. I have just truncated the tables and it's fine now.
Thanks,
Amit
"Sheetal Jain" <[email protected]> wrote:
>
Amit,
If this problem is happening with the WLI database, then I would suggest you talk to BEA support. If it is happening in the application you have built, look for deadlock conditions between inter-related transactions running at the same time: transaction A locks something, and in the meantime transaction B locks something else that A needs in order to move forward, while B in turn needs the resource locked by A.
Hope this helps
"Nagraj Rao" <[email protected]> wrote:
Hello Amit!
"Lock held by in-doubt transaction" is a two-phase-commit issue. A query or DML statement that requires locks on the database is probably blocked by a lock held by a resource of an "in-doubt distributed transaction".
A database administrator can manually commit or roll back an "in-doubt distributed transaction", so I suggest you talk to the DBA.
More at: http://www-rohan.sdsu.edu/doc/oracle/server803/A54653_01/ds_ch3.htm
BTW here's what Oracle says :
ORA-01591: lock held by in-doubt distributed transaction string
Cause: An attempt was made to access a resource that is locked by a dead two-phase commit transaction that is in prepared state.
Action: The database administrator should query the PENDING_TRANS$ and related tables, and attempt to repair network connection(s) to the coordinator and commit point. If timely repair is not possible, the database administrator should contact the database administrator at the commit point if known, or the end user, for the correct outcome, or use the heuristic default if given to issue a heuristic COMMIT or ABORT command to finalize the local portion of the distributed transaction.
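The DBA-side resolution described above could be sketched as follows (the transaction ID matches the one in the ORA-01591 message; outcomes and cleanup are illustrative, not taken from this thread):

```sql
-- Identify the stuck in-doubt transaction (run as a DBA).
SELECT local_tran_id, state, mixed, advice
FROM   dba_2pc_pending;

-- Force the local portion to a heuristic outcome; use ROLLBACK FORCE
-- instead if that is the correct outcome for your system.
COMMIT FORCE '14.4.44';

-- Optionally purge the pending-transaction entry afterwards.
EXECUTE DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('14.4.44');
```

The decision to commit or roll back should come from the coordinator of the distributed transaction, not be guessed locally.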
"Sheetal Jain" <[email protected]> wrote:
Amit,
It could be a bug. Open a ticket with BEA and see if they have a patch.
"Amit Bhutra" <[email protected]> wrote:
Hi,
I am using WLI 8.1 SP2 on Windows 2000, Oracle 9.2. I am getting this error...
<Apr 23, 2004 10:43:43 AM EDT> <Error> <WLW> <000000> <error
java.io.IOException: [BEA][Oracle JDBC Driver][Oracle]ORA-01591: lock held by in-doubt distributed transaction 14.4.44
I have tried dropping all wli tables & deleting tlog files (deleting everything from cgServer directory). Still get this error.
Any suggestions??
TIA
-Amit -
Date Field Displaying and DML Operations
Hi all,
I have an issue with displaying and updating date columns that I'm hoping someone can assist me with.
I'm using APEX 3.0.1.
I have a Form page with a number of fields sourced from one database table that are being populated by an Automatic Row Fetch On Load - After Header.
The item P6_MONTHFOR is stored as a DATE datatype in the table and displayed on the form using the Date Picker (use Item Format Mask), with a format mask of 'MON-RR'. I want to ensure that the last day of the month is saved back to the database table, so I have been trying various calculation techniques to achieve this, but am experiencing a variety of SQL errors!
I have tried using LAST_DAY(:P6_MONTHFOR) in the Post Calculation Computation, or as a separate Computation After Submit.
I have also tried having P6_MONTHFOR as a hidden column and using display Items and then trying Item calculations to then update the value of P6_MONTHFOR column prior to DML operations but to no avail.
The only DML operations allowed on these rows are DELETE and UPDATE and I'm using an Automatic Row Processing (DML) On Submit - After Computations and Validations process to control these operations.
Any help or suggestions greatly appreciated :-)
Kind Regards,
Gary.

The function LAST_DAY is a date function and expects a DATE as input. Since this is all web, item values arrive as strings (VARCHAR2). In order to use a date function, you first have to convert the value to a DATE with TO_DATE() and the format mask (DD-MON-RR).
In my opinion dates are still tricky; it would be great if ApEx had a DV() function next to the V() and NV() functions. There is one in ApExLib (by Patrick Wolf).
Simon -
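The conversion Simon describes can be sketched like this, using the item's own 'MON-RR' mask (the literal value is illustrative):

```sql
-- Convert the submitted string to a DATE, then take the last day of its month.
SELECT LAST_DAY(TO_DATE('MAY-12', 'MON-RR')) AS month_end
FROM   dual;

-- In the APEX Post Calculation Computation this would become:
--   LAST_DAY(TO_DATE(:P6_MONTHFOR, 'MON-RR'))
```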
DML operations on multiple views
Hi all.
I can't understand updating data through views created by joining multiple tables. Which columns can I update, and why am I getting
ORA-01779: cannot modify a column which maps to a non key-preserved table?
Can anybody show me an explanation with examples??
Thanks...

Modifying a Join View
A modifiable join view is a view that contains more than one table in the top
level FROM clause of the SELECT statement, and that does not contain any of
the following:
- DISTINCT operator
- aggregate functions: AVG, COUNT, GLB, MAX, MIN, STDDEV, SUM, or VARIANCE
- set operations: UNION, UNION ALL, INTERSECT, MINUS
- GROUP BY or HAVING clauses
- START WITH or CONNECT BY clauses
- ROWNUM pseudocolumn
With some restrictions, you can modify views that involve joins. If a view is
a join on other nested views, then the other nested views must be mergeable
into the top level view.
The examples in following sections use the EMP and DEPT tables. These examples
work only if you explicitly define the primary and foreign keys in these
tables, or define unique indexes. Following are the appropriately constrained
table definitions for EMP and DEPT:
CREATE TABLE dept (
deptno NUMBER(4) PRIMARY KEY,
dname VARCHAR2(14),
loc VARCHAR2(13)
);
CREATE TABLE emp (
empno NUMBER(4) PRIMARY KEY,
ename VARCHAR2(10),
job VARCHAR2(9),
mgr NUMBER(4),
hiredate DATE,
sal NUMBER(7,2),
comm NUMBER(7,2),
deptno NUMBER(2),
FOREIGN KEY (deptno) REFERENCES dept(deptno)
);
You could also omit the primary and foreign key constraints listed above, and
create a UNIQUE INDEX on DEPT (DEPTNO) to make the following examples work.
CREATE OR REPLACE VIEW emp_dept AS
SELECT empno, ename, sal, e.deptno, dname, loc
FROM EMP e, DEPT d
WHERE e.deptno = d.deptno;
Key-Preserved Tables
The concept of a key-preserved table is fundamental to understanding the
restrictions on modifying join views. A table is key preserved if every key of
the table can also be a key of the result of the join. So, a key-preserved
table has its keys preserved through a join.
Note: It is not necessary that the key or keys of a table be selected for it
to be key preserved. It is sufficient that if the key or keys were selected,
then they would also be key(s) of the result of the join.
Attention: The key-preserving property of a table does not depend on the
actual data in the table. It is, rather, a property of its schema and not of
the data in the table. For example, if in the EMP table there was at most one
employee in each department, then DEPT.DEPTNO would be unique in the result of
a join of EMP and DEPT, but DEPT would still not be a key-preserved table.
If you SELECT all rows from EMP_DEPT view, the results are:
SELECT * FROM EMP_DEPT;
EMPNO ENAME SAL DEPTNO DNAME LOC
7369 SMITH 800 20 RESEARCH DALLAS
7499 ALLEN 1600 30 SALES CHICAGO
7521 WARD 1250 30 SALES CHICAGO
7566 JONES 2975 20 RESEARCH DALLAS
7654 MARTIN 1250 30 SALES CHICAGO
7698 BLAKE 2850 30 SALES CHICAGO
7782 CLARK 2695 10 ACCOUNTING NEW YORK
7788 SCOTT 3000 20 RESEARCH DALLAS
7839 KING 5500 10 ACCOUNTING NEW YORK
7844 TURNER 1500 30 SALES CHICAGO
7876 ADAMS 1100 20 RESEARCH DALLAS
7900 JAMES 950 30 SALES CHICAGO
7902 FORD 3000 20 RESEARCH DALLAS
7934 MILLER 1430 10 ACCOUNTING NEW YORK
14 rows selected.
In this view, EMP is a key-preserved table, because EMPNO is a key of the EMP
table, and also a key of the result of the join. DEPT is not a key-preserved
table, because although DEPTNO is a key of the DEPT table, it is not a key of
the join.
DML Statements and Join Views
=============================
!!!!!! IMPORTANT !!!!!! IMPORTANT !!!!!! IMPORTANT !!!!!! IMPORTANT !!!!!!
Any UPDATE, INSERT, or DELETE statement performed on a join view can modify
only *** one *** underlying base table.
!!!!!! IMPORTANT !!!!!! IMPORTANT !!!!!! IMPORTANT !!!!!! IMPORTANT !!!!!!
UPDATE Statements:
The following example shows an UPDATE statement that successfully modifies the
EMP_DEPT view:
UPDATE emp_dept
SET sal = sal * 1.10
WHERE deptno = 10;
The following UPDATE statement would be disallowed on the EMP_DEPT view:
UPDATE emp_dept
SET loc = 'BOSTON'
WHERE ename = 'SMITH';
This statement fails with an ORA-01779 error (cannot modify a column which
maps to a non key-preserved table), because it attempts to modify the
underlying DEPT table, and the DEPT table is not key preserved in the EMP_DEPT
view.
In general, all modifiable columns of a join view must map to columns of a
key-preserved table. If the view is defined using the WITH CHECK OPTION
clause, then all join columns and all columns of repeated tables are not
modifiable.
So, for example, if the EMP_DEPT view were defined using WITH CHECK OPTION,
the following UPDATE statement would fail:
UPDATE emp_dept
SET deptno = 10
WHERE ename = 'SMITH';
The statement fails because it is trying to update a join column.
DELETE Statements:
You can delete from a join view provided there is one and only one
key-preserved table in the join.
The following DELETE statement works on the EMP_DEPT view:
DELETE FROM emp_dept
WHERE ename = 'SMITH';
This DELETE statement on the EMP_DEPT view is legal because it can be
translated to a DELETE operation on the base EMP table, and because the EMP
table is the only key-preserved table in the join.
In the following view, a DELETE operation cannot be performed on the view
because both E1 and E2 are key-preserved tables:
CREATE VIEW emp_emp AS
SELECT e1.ename, e2.empno, e1.deptno
FROM emp e1, emp e2
WHERE e1.empno = e2.empno;
If a view is defined using the WITH CHECK OPTION clause and the key-preserved
table is repeated, then rows cannot be deleted from such a view:
CREATE VIEW emp_mgr AS
SELECT e1.ename, e2.ename mname
FROM emp e1, emp e2
WHERE e1.mgr = e2.empno
WITH CHECK OPTION;
No deletion can be performed on this view because the view involves a
self-join of the table that is key preserved.
INSERT Statements:
The following INSERT statement on the EMP_DEPT view succeeds:
INSERT INTO emp_dept (ename, empno, deptno)
VALUES ('KURODA', 9010, 40);
This statement works because only one key-preserved base table is being
modified (EMP), and 40 is a valid DEPTNO in the DEPT table (thus satisfying
the FOREIGN KEY integrity constraint on the EMP table).
An INSERT statement like the following would fail for the same reason that
such an UPDATE on the base EMP table would fail: the FOREIGN KEY integrity
constraint on the EMP table is violated.
INSERT INTO emp_dept (ename, empno, deptno)
VALUES ('KURODA', 9010, 77);
The following INSERT statement would fail with an ORA-01776 error (cannot
modify more than one base table through a view):
INSERT INTO emp_dept (empno, ename, loc)
VALUES (9010, 'KURODA', 'BOSTON');
An INSERT cannot, implicitly or explicitly, refer to columns of a
non-key-preserved table. If the join view is defined using the WITH CHECK
OPTION clause, then you cannot perform an INSERT to it.
Using the UPDATABLE_COLUMNS Views
The following views can assist you when modifying join views:
View Name Description
USER_UPDATABLE_COLUMNS Shows all columns in all tables and views in the
user's schema that are modifiable.
DBA_UPDATABLE_COLUMNS Shows all columns in all tables and views in the
DBA schema that are modifiable.
ALL_UPDATABLE_COLUMNS Shows all columns in all tables and views that are
modifiable. -
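A quick way to check which columns of the EMP_DEPT view from the examples above accept DML:

```sql
-- Lists each view column with its UPDATABLE/INSERTABLE/DELETABLE flags.
SELECT column_name, updatable, insertable, deletable
FROM   user_updatable_columns
WHERE  table_name = 'EMP_DEPT';
```

Columns that map to the key-preserved EMP table should show YES; the DEPT columns (DNAME, LOC) should show NO. -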
Oracle BPM Issue - By lock held by indoubt transaction
Hi ,
I am getting the below issue while processing the work items(Applications) in the Oracle BPM. I am using Oracle BPM 10g R3 (10.3.1.0.0 Build# 100812) in Linux Environment.
An unexpected error occured while trying to execute an automatic task,pending automatic tasks will continue to be executed. Details:\n"An error occured while accessing the database. Detail: SQL statement: 'SELECT DUETIME, ID, PROCESSID,INSTDID, THREADID, ANCESTORTHREADID, TSTAMP, TYPE, ACTIVITYNAME,ORIGINPROCESSDN, REAL THREADID, NETYPE, PRIORITY, LATER, DATA FROM PTODOITEMS WHERE DUETIME=2011-12-22 11:23:53.0 AND PROCESSID=31 AND ID=94370151 FOR UPDATE' Caused by :[BEA][Oracle JDBC Driver][Oracle]ORA-01591: lock held by in doubt distributed transaction 9.30.1176766 fuego.transaction.DatabaseException; An error occured while accessing the database. Detail:SQL statment: 'SELECT DUETIME, ID, PROCESSID,INSTDID, THREADID, ANCESTORTHREADID, TSTAMP, TYPE, ACTIVITYNAME,ORIGINPROCESSDN, REAL THREADID, NETYPE, PRIORITY, LATER, DATA FROM PTODOITEMS WHERE DUETIME=2011-12-22 11:23:53.0 AND PROCESSID=31 AND ID=94370151 FOR UPDATE' at
If I force COMMIT the Transaction 9.30.1176766 in the dba_2pc_pending, pending_trans$, pending_sessions$ tables, I am able to move forward.
Please help me to resolve this issue.
Thanks In Advance.
Bhaskara -
ORA-14551: cannot perform a DML operation inside a query
I have a Java method which is deployed as an Oracle function.
This Java method parses a huge XML file and populates the data into a set of database tables.
I have to call this Oracle function in a unix shell script using sqlplus.
Value returned by this function will be used by the shell script to decide
what to do next.
I am calling the Oracle Java function as follows in the shell script:
echo "SELECT XML_TABLES.RUN_XML_LOADER('$P1','$P2','$P3','$P4') FROM DUAL;\n" | sqlplus $DB_USER > $LOG
This gives the error "ORA-14551: cannot perform a DML operation inside a query".
If I have to add an AUTONOMOUS_TRANSACTION pragma to this Java function, where do I add it, considering that the definition of the function is in a Java class?
Can we do it in the call spec?
create or replace package XML_TABLES is
function RUN_XML_LOADER(xmlFile IN VARCHAR2,
xmlType IN VARCHAR2,
outputDir IN VARCHAR2,
logFileDir IN VARCHAR2) RETURN VARCHAR2 AS
LANGUAGE JAVA NAME 'XmlLoader.run
(java.lang.String, java.lang.String, java.lang.String, java.lang.String)
return java.lang.String';
end XML_TABLES;
If not, is there any other way to achieve this?
Thanks in advance.
Sunitha.

> If I have to add a AUTONOMOUS_TRANSACTION pragma to this Java function
You'd have to write a PL/SQL function that calls the Java stored procedure. But I would caution you about using that pragma: it introduces tremendous complexity into the processing.
As I see it you only need a function to return the result code so why not use a procedure with an OUT parameter?
Cheers, APC
Of course Yoann's suggestion of using an anonymous block would work too.
Message was edited by:
APC -
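The PL/SQL wrapper APC mentions could be sketched like this (the wrapper name is hypothetical, and the autonomous-transaction caveat above applies):

```sql
-- Hypothetical wrapper around the Java call spec. Because the function body
-- runs as an autonomous transaction, the DML it performs no longer conflicts
-- with being invoked from a SELECT, avoiding ORA-14551.
CREATE OR REPLACE FUNCTION run_xml_loader_at (
  p_xml_file IN VARCHAR2,
  p_xml_type IN VARCHAR2,
  p_out_dir  IN VARCHAR2,
  p_log_dir  IN VARCHAR2
) RETURN VARCHAR2
AS
  PRAGMA AUTONOMOUS_TRANSACTION;
  v_result VARCHAR2(4000);
BEGIN
  v_result := xml_tables.run_xml_loader(p_xml_file, p_xml_type, p_out_dir, p_log_dir);
  COMMIT;  -- an autonomous transaction must end with COMMIT or ROLLBACK
  RETURN v_result;
END;
/
```

The anonymous-block alternative avoids the pragma entirely: declare a bind variable in SQL*Plus (`VARIABLE rc VARCHAR2(4000)`) and call the function with `EXEC :rc := xml_tables.run_xml_loader(...)`, then have the shell script read the variable. -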
Change lock type on large table
I have a table which is about 2 GB in size. The lock scheme on this table is set to allpages. I think this causes performance issues when running SELECT queries: many SH locks are applied. I tried to change the lock scheme to datarows, which the documentation indicates takes no locks on SELECT.
But when I tried it with DBArtisan, it took a long time and never finished. Then I stopped the app and connected again. I found the lock scheme did change to datarows, but a new table with a name like mytab_3309ac22 was created.
Then I tried to change the lock scheme on another table and got the following message:
You can not run Alter Table Lock in this database because the 'select into/bulkcopy' option is off.
Looks like a dboption changed. So I want to know whether the data is safe when changing the lock scheme, and how to ensure it is done properly?

Hi Kent,
The problem is that allpages has a different physical structure than datarows. Changing from one format to the other requires rewriting the whole table.
There are a few ways to achieve this goal.
a) Running the ALTER TABLE directly:
alter table x lock datarows
This command makes the change internally. It is the fastest way, but it requires the 'select into' option to be set on the database, and that might be problematic because it breaks the transaction log sequence.
If you don't have the option set on the database, you are not able to run the command.
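The direct route could look like this on ASE (the database and table names are illustrative; a full dump afterwards restores log integrity, since enabling 'select into' breaks the log sequence):

```sql
-- Enable the option, switch the lock scheme, then take a full database dump.
use master
go
sp_dboption mydb, 'select into/bulkcopy', true
go
use mydb
go
alter table mytab lock datarows
go
dump database mydb to '/backups/mydb_after_lockchange.dmp'
go
```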
b) Doing it manually:
1. Create a second table with the datarows scheme.
2. Copy the rows to the new table.
3. Drop the old table and rename the new one.
If you have a very busy system with little room for downtime, this may be the only solution. You can create triggers to manually log all the changes on the old table if the copy would take too long.
At the end you would have to add all the foreign keys to the new table, and of course recreate the indexes.
I don't know exactly what DBArtisan is doing, but since you don't have the 'select into' option set, it is probably running the second approach. The new table named mytab_3309ac22 might be the leftover of an operation broken in the middle. How many rows does this table have? And what is its schema?
If you have plenty of time, I would recommend running the ALTER TABLE command together with the 'select into' option. Anyway, you won't get any progress bar during this operation, so it is very hard to predict how long it will take.
HTH,
Adam
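The manual route Adam outlines could be sketched as follows (table and column names are illustrative; indexes, foreign keys and any triggers must be recreated afterwards, as noted above):

```sql
-- 1. Create the replacement table with the target lock scheme.
create table mytab_new (id int not null, payload varchar(255) null)
lock datarows
go
-- 2. Copy the rows across.
insert into mytab_new select id, payload from mytab
go
-- 3. Swap the tables.
drop table mytab
go
sp_rename mytab_new, mytab
go
```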