Blocking Lock on Unique Index in Oracle
Hi
I would like to know under which circumstances a blocking lock on a unique index can happen in Oracle.
This is happening intermittently in one of our systems, and because of this blocking lock the whole system becomes extremely slow
and we need to kill the session that is holding the blocking lock.
How can this kind of lock be avoided?
For the time being, I plan to create a scheduled job that will clear this blocking lock every 5 minutes.
Thanks
Hi,
I would suggest that when you see the problem, you have a look at v$lock (which you are doing now), v$locked_object and v$session. From v$lock and v$locked_object you can find the session that has generated the lock, and from v$session you can get the exact SQL (sql_id) running in that session.
Get the text of that SQL from v$sql.
The objective is to find out which SQL is generating the lock. It can be an update after which there is no commit, or something similar.
Once you find the SQL(s) causing the lock, you can put a real solution in place.
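The lookup described above can be sketched in a single query. This is only a sketch; from 10g onward v$session exposes the blocker directly via the blocking_session column, which saves decoding v$lock by hand:

```sql
-- Each waiter, its blocker, and the SQL the blocker is currently running.
SELECT w.sid              AS waiter_sid,
       w.blocking_session AS blocker_sid,
       w.event            AS waiter_event,
       b.sql_id,
       q.sql_text
FROM   v$session w
       JOIN v$session b ON b.sid = w.blocking_session
       LEFT JOIN v$sql q ON q.sql_id = b.sql_id
                        AND q.child_number = 0   -- one child is enough for the text
WHERE  w.blocking_session IS NOT NULL;
```

If the blocker is idle (typical for an uncommitted update), sql_id may be NULL; in that case look at b.prev_sql_id for the last statement it executed.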
Regards
Similar Messages
-
Hi Friends,
I am confused about primary keys.
What is the purpose of this key again? I know it is used for unique constraints.
Suppose I have a table with two (2) columns, each of which has a unique index.
Then they can both be candidates for the primary key, right?
So why do I need a primary key when I already have two uniquely indexed columns?
Thanks a lot

A UNIQUE index creates a constraint such that all values in the index must be distinct. An error occurs if you try to add a new row with a key value that matches an existing row. This constraint does not apply to NULL values, except in the BDB storage engine; for the other engines, a UNIQUE index allows multiple NULL values in columns that can contain NULL.
The differences between the two are:
1. The column(s) that make up the Primary Key of a table cannot be NULL, since by definition the Primary Key uniquely identifies each record in the table. The column(s) that make up a unique index can be nullable. A point worth mentioning here is that different RDBMSs treat this differently: while SQL Server and DB2 do not allow more than one NULL value in a unique index column, Oracle allows multiple NULL values. That is one of the things to look out for when designing, developing or porting applications across RDBMSs.
2. There can be only one Primary Key defined on the table where as you can have many unique indexes defined on the table (if needed).
3. Also, in the case of SQL Server, if you go with the default options then a Primary Key is created as a clustered index while the unique index (constraint) is created as a non-clustered index. This is just the default behavior though and can be changed at creation time, if needed.
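For example, Oracle's treatment of NULLs in a single-column unique index can be seen with a quick test (table and index names here are made up for illustration):

```sql
CREATE TABLE t_uniq_demo (val NUMBER);
CREATE UNIQUE INDEX ux_uniq_demo ON t_uniq_demo (val);

INSERT INTO t_uniq_demo VALUES (1);     -- ok
INSERT INTO t_uniq_demo VALUES (NULL);  -- ok: all-NULL keys are not stored in a B-tree index
INSERT INTO t_uniq_demo VALUES (NULL);  -- ok in Oracle: a second NULL does not collide
INSERT INTO t_uniq_demo VALUES (1);     -- fails with ORA-00001, naming UX_UNIQ_DEMO
```

In SQL Server or DB2 the third insert would fail, which is exactly the portability point made above.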
So, if the unique index is defined on NOT NULL column(s), then it is essentially the same as the Primary Key and can be treated as an alternate key, meaning it can also serve the purpose of uniquely identifying a record in the table. -
Using OleDbDataAdapter Update with InsertCommands and getting blocking locks on Oracle table
The following code snippet shows the use of OleDbDataAdapter with an InsertCommand. This code is producing many inserts on the Oracle table and is now suffering from contention... all on the same table. How does the OleDbDataAdapter produce
inserts from a dataset? What characteristics do these inserts inherit in terms of batch behavior, or do they naturally contend for the same resource?
oc.Open();
for (int i = 0; i < xImageId.Count; i++)
// Create the oracle adapter using a SQL which will not return any actual rows just the structure
OleDbDataAdapter da =
new OleDbDataAdapter("SELECT BUSINESS_UNIT, INVOICE, ASSIGNMENT_ID, END_DT, RI_TIMECARD_ID, IMAGE_ID, FILENAME, BARCODE_LABEL_ID, " +
"DIRECT_INVOICING, EXCLUDE_FLG, DTTM_CREATED, DTTM_MODIFIED, IMAGE_DATA, PROCESS_INSTANCE FROM sysadm.PS_RI_INV_PDF_MERG WHERE 1 = 2", oc);
// Create a data set
DataSet ds = new DataSet("documents");
da.Fill(ds, "documents");
// Loop through invoices and write to oracle
string[] sInvoices = invoiceNumber.Split(',');
foreach (string sInvoice in sInvoices)
{
// Create a data set row
DataRow dr = ds.Tables["documents"].NewRow();
// ... map the data
// Populate the dataset
ds.Tables["documents"].Rows.Add(dr);
}
// Create the insert command
string insertCommandText =
"INSERT /*+ append */ INTO PS_table " +
"(SEQ_NBR, BUSINESS_UNIT, INVOICE, ASSIGNMENT_ID, END_DT, RI_TIMECARD_ID, IMAGE_ID, FILENAME, BARCODE_LABEL_ID, DIRECT_INVOICING, " +
"EXCLUDE_FLG, DTTM_CREATED, DTTM_MODIFIED, IMAGE_DATA, PROCESS_INSTANCE) " +
"VALUES (INV.nextval, :BUSINESS_UNIT, :INVOICE, :ASSIGNMENT_ID, :END_DT, :RI_TIMECARD_ID, :IMAGE_ID, :FILENAME, " +
":BARCODE_LABEL_ID, :DIRECT_INVOICING, :EXCLUDE_FLG, :DTTM_CREATED, :DTTM_MODIFIED, :IMAGE_DATA, :PROCESS_INSTANCE)";
// Add the insert command to the data adapter
da.InsertCommand = new OleDbCommand(insertCommandText);
da.InsertCommand.Connection = oc;
// Add the params to the insert
da.InsertCommand.Parameters.Add(":BUSINESS_UNIT", OleDbType.VarChar, 5, "BUSINESS_UNIT");
da.InsertCommand.Parameters.Add(":INVOICE", OleDbType.VarChar, 22, "INVOICE");
da.InsertCommand.Parameters.Add(":ASSIGNMENT_ID", OleDbType.VarChar, 15, "ASSIGNMENT_ID");
da.InsertCommand.Parameters.Add(":END_DT", OleDbType.Date, 0, "END_DT");
da.InsertCommand.Parameters.Add(":RI_TIMECARD_ID", OleDbType.VarChar, 10, "RI_TIMECARD_ID");
da.InsertCommand.Parameters.Add(":IMAGE_ID", OleDbType.VarChar, 8, "IMAGE_ID");
da.InsertCommand.Parameters.Add(":FILENAME", OleDbType.VarChar, 80, "FILENAME");
da.InsertCommand.Parameters.Add(":BARCODE_LABEL_ID", OleDbType.VarChar, 18, "BARCODE_LABEL_ID");
da.InsertCommand.Parameters.Add(":DIRECT_INVOICING", OleDbType.VarChar, 1, "DIRECT_INVOICING");
da.InsertCommand.Parameters.Add(":EXCLUDE_FLG", OleDbType.VarChar, 1, "EXCLUDE_FLG");
da.InsertCommand.Parameters.Add(":DTTM_CREATED", OleDbType.Date, 0, "DTTM_CREATED");
da.InsertCommand.Parameters.Add(":DTTM_MODIFIED", OleDbType.Date, 0, "DTTM_MODIFIED");
da.InsertCommand.Parameters.Add(":IMAGE_DATA", OleDbType.Binary, System.Convert.ToInt32(filedata.Length), "IMAGE_DATA");
da.InsertCommand.Parameters.Add(":PROCESS_INSTANCE", OleDbType.VarChar, 10, "PROCESS_INSTANCE");
// Update the table
da.Update(ds, "documents");

Here is what Oracle is showing as blocking locks, and the SQL that has been identified with each of the SIDs. Not sure why there is contention. There are no triggers or joined tables in this piece of code.
Here is the SQL all of the SIDs below are running:
INSERT INTO sysadm.PS_RI_INV_PDF_MERG (SEQ_NBR, BUSINESS_UNIT, INVOICE, ASSIGNMENT_ID, END_DT, RI_TIMECARD_ID, IMAGE_ID, FILENAME, BARCODE_LABEL_ID, DIRECT_INVOICING, EXCLUDE_FLG, DTTM_CREATED, DTTM_MODIFIED, IMAGE_DATA, PROCESS_INSTANCE) VALUES (SYSADM.INV_PDF_MERG.nextval,
:BUSINESS_UNIT, :INVOICE, :ASSIGNMENT_ID, :END_DT, :RI_TIMECARD_ID, :IMAGE_ID, :FILENAME, :BARCODE_LABEL_ID, :DIRECT_INVOICING, :EXCLUDE_FLG, :DTTM_CREATED, :DTTM_MODIFIED, :IMAGE_DATA, :PROCESS_INSTANCE)
SID 1452 (BTSUSER,BIZTPRDI,BTSNTSvc64.exe) in instance FSLX1 is blocking SID 1150 (BTSUSER,BIZTPRDI,BTSNTSvc64.exe) in instance FSLX1
SID 1452 (BTSUSER,BIZTPRDI,BTSNTSvc64.exe) in instance FSLX1 is blocking SID 1452 (BTSUSER,BIZTPRDI,BTSNTSvc64.exe) in instance FSLX1
SID 1452 (BTSUSER,BIZTPRDI,BTSNTSvc64.exe) in instance FSLX1 is blocking SID 1156 (BTSUSER,biztprdi,BTSNTSvc64.exe) in instance FSLX3
SID 1452 (BTSUSER,BIZTPRDI,BTSNTSvc64.exe) in instance FSLX1 is blocking SID 6 (BTSUSER,BIZTPRDI,BTSNTSvc64.exe) in instance FSLX2
SID 1452 (BTSUSER,BIZTPRDI,BTSNTSvc64.exe) in instance FSLX1 is blocking SID 1726 (BTSUSER,BIZTPRDI,BTSNTSvc64.exe) in instance FSLX2
SID 1452 (BTSUSER,BIZTPRDI,BTSNTSvc64.exe) in instance FSLX1 is blocking SID 2016 (BTSUSER,biztprdi,BTSNTSvc64.exe) in instance FSLX2 -
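A blocking chain like the one listed above, spanning several RAC instances (FSLX1, FSLX2, FSLX3), can be pulled from the gv$ views. This is a sketch; the blocking_session and blocking_instance columns are available from 10g onward:

```sql
-- Each waiter with its blocker, across all RAC instances.
SELECT w.inst_id           AS waiter_inst,
       w.sid               AS waiter_sid,
       w.blocking_instance AS blocker_inst,
       w.blocking_session  AS blocker_sid,
       w.event,
       w.sql_id
FROM   gv$session w
WHERE  w.blocking_session IS NOT NULL
ORDER  BY blocker_inst, blocker_sid;
```

Grouping the output by (blocker_inst, blocker_sid) quickly shows a single session, such as SID 1452 here, fanning out to many waiters.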
Dead Lock occured while Sync index in oracle text
Hi All,
We are facing a deadlock issue while syncing the Oracle Text index. The index is built on a locally partitioned table, and the sync index job has the parameters below:
parallel - 4
memory - 20M
the error message is,
System error: Plsql job execution is failed with
error code -20000 and error message ORA-20000: Oracle Text error: DRG-50610:
internal error: drvdml.ParallelDML DRG-50857: oracle error in
drvdml.ParallelDML ORA-12801: error signaled in parallel query server P003,
instance xxxx.enterprisenet.org:xxxx (1) ORA-20000: Oracle Text error:
DRG-50857: oracle error in drepdump_dollarp_insert ORA-00060: deadlock detected
while waiting for resource ORA-06512: at "CTXSYS.DRUE", line 160
ORA-06512: at "CTXSYS.DRVPARX",
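A workaround commonly tried for deadlocks raised inside a parallel sync (note the ORA-12801 from PX slave P003 above) is to sync serially. This is only a sketch; the index name and partition name below are made up, and for a local index each partition is synced with its own call:

```sql
BEGIN
  -- parallel_degree => 1 avoids the parallel DML path that hit the deadlock;
  -- part_name is required when the index is locally partitioned.
  ctx_ddl.sync_index(idx_name        => 'IDX_DOC_TEXT',
                     memory          => '20M',
                     part_name       => 'P_2013_Q1',
                     parallel_degree => 1);
END;
/
```

Looping over the partitions from user_ind_partitions lets the whole index be synced this way, at the cost of losing the parallelism.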
Thanks in advance.

How many occurrences of XYZ are there per XML document?
If there are more than one, then obviously you cannot create such an index on it.
In this case, you'll need an XMLIndex, unstructured or structured, depending on the type of queries you want to run.
If there's only one occurrence, could you post a sample document and your db version?
Thanks. -
Database locking (state versus stateless) and indexes on oracle database
Does anyone have a link to a document talking about database locking (state versus stateless) and talking about indexes in oracle database?
No version information and no information as to what you mean by "locking" so no help is possible.
You could mean LOCK TABLE in version 7.3.4 or SELECT FOR UPDATE in 11.1.0.7 or something else entirely. -
Access path difference between Primary Key and Unique Index
Hi All,
Does the Oracle optimizer treat a Primary Key and a unique index differently in any specific way?
Oracle Version
SQL> select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE 11.2.0.3.0 Production
TNS for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
Sample test data for the normal index:
SQL> create table t_test_tab(col1 number, col2 number, col3 varchar2(12));
Table created.
SQL> create sequence seq_t_test_tab start with 1 increment by 1 ;
Sequence created.
SQL> insert into t_test_tab select seq_t_test_tab.nextval, round(dbms_random.value(1,999)) , 'B'||round(dbms_random.value(1,50))||'A' from dual connect by level < 100000;
99999 rows created.
SQL> commit;
Commit complete.
SQL> exec dbms_stats.gather_table_stats('USER_OWNER','T_TEST_TAB',cascade => true);
PL/SQL procedure successfully completed.
SQL> select col1 from t_test_tab;
99999 rows selected.
Execution Plan
Plan hash value: 1565504962
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 99999 | 488K| 74 (3)| 00:00:01 |
| 1 | TABLE ACCESS FULL| T_TEST_TAB | 99999 | 488K| 74 (3)| 00:00:01 |
Statistics
1 recursive calls
0 db block gets
6915 consistent gets
259 physical reads
0 redo size
1829388 bytes sent via SQL*Net to client
73850 bytes received via SQL*Net from client
6668 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
99999 rows processed
SQL> create index idx_t_test_tab on t_test_tab(col1);
Index created.
SQL> exec dbms_stats.gather_table_stats('USER_OWNER','T_TEST_TAB',cascade => true);
PL/SQL procedure successfully completed.
SQL> select col1 from t_test_tab;
99999 rows selected.
Execution Plan
Plan hash value: 1565504962
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 99999 | 488K| 74 (3)| 00:00:01 |
| 1 | TABLE ACCESS FULL| T_TEST_TAB | 99999 | 488K| 74 (3)| 00:00:01 |
Statistics
1 recursive calls
0 db block gets
6915 consistent gets
0 physical reads
0 redo size
1829388 bytes sent via SQL*Net to client
73850 bytes received via SQL*Net from client
6668 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
99999 rows processed
Sample test data when using a Primary Key:
SQL> create table t_test_tab1(col1 number, col2 number, col3 varchar2(12));
Table created.
SQL> create sequence seq_t_test_tab1 start with 1 increment by 1 ;
Sequence created.
SQL> insert into t_test_tab1 select seq_t_test_tab1.nextval, round(dbms_random.value(1,999)) , 'B'||round(dbms_random.value(1,50))||'A' from dual connect by level < 100000;
99999 rows created.
SQL> commit;
Commit complete.
SQL> exec dbms_stats.gather_table_stats('USER_OWNER','T_TEST_TAB1',cascade => true);
PL/SQL procedure successfully completed.
SQL> select col1 from t_test_tab1;
99999 rows selected.
Execution Plan
Plan hash value: 1727568366
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 99999 | 488K| 74 (3)| 00:00:01 |
| 1 | TABLE ACCESS FULL| T_TEST_TAB1 | 99999 | 488K| 74 (3)| 00:00:01 |
Statistics
1 recursive calls
0 db block gets
6915 consistent gets
0 physical reads
0 redo size
1829388 bytes sent via SQL*Net to client
73850 bytes received via SQL*Net from client
6668 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
99999 rows processed
SQL> alter table t_test_tab1 add constraint pk_t_test_tab1 primary key (col1);
Table altered.
SQL> exec dbms_stats.gather_table_stats('USER_OWNER','T_TEST_TAB1',cascade => true);
PL/SQL procedure successfully completed.
SQL> select col1 from t_test_tab1;
99999 rows selected.
Execution Plan
Plan hash value: 2995826579
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 99999 | 488K| 59 (2)| 00:00:01 |
| 1 | INDEX FAST FULL SCAN| PK_T_TEST_TAB1 | 99999 | 488K| 59 (2)| 00:00:01 |
Statistics
1 recursive calls
0 db block gets
6867 consistent gets
0 physical reads
0 redo size
1829388 bytes sent via SQL*Net to client
73850 bytes received via SQL*Net from client
6668 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
99999 rows processed
As you can see here, even though statistics were gathered:
* In the 1st table, T_TEST_TAB, the optimizer is still using FULL table access after creation of the index.
* In the 2nd table, T_TEST_TAB1, the PRIMARY KEY index is being used as expected.
Any comments?
Regards,
BPat

Thanks.
Yes, I ignored the NOT NULL part. An index on a nullable column cannot replace the full table scan for SELECT col1 FROM t_test_tab, because rows where col1 IS NULL are not present in a single-column B-tree index; declaring the column NOT NULL makes the index a valid access path. I did a test and now it is working as expected:
SQL> create table t_test_tab(col1 number not null, col2 number, col3 varchar2(12));
Table created.
SQL> create sequence seq_t_test_tab start with 1 increment by 1 ;
Sequence created.
SQL> insert into t_test_tab select seq_t_test_tab.nextval, round(dbms_random.value(1,999)) , 'B'||round(dbms_random.value(1,50))||'A' from dual connect by level < 100000;
99999 rows created.
SQL> commit;
Commit complete.
SQL> exec dbms_stats.gather_table_stats('GREP_OWNER','T_TEST_TAB',cascade => true);
PL/SQL procedure successfully completed.
SQL> set autotrace traceonly
SQL> select col1 from t_test_tab;
99999 rows selected.
Execution Plan
Plan hash value: 1565504962
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 99999 | 488K| 74 (3)| 00:00:01 |
| 1 | TABLE ACCESS FULL| T_TEST_TAB | 99999 | 488K| 74 (3)| 00:00:01 |
Statistics
1 recursive calls
0 db block gets
6912 consistent gets
0 physical reads
0 redo size
1829388 bytes sent via SQL*Net to client
73850 bytes received via SQL*Net from client
6668 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
99999 rows processed
SQL> create index idx_t_test_tab on t_test_tab(col1);
Index created.
SQL> exec dbms_stats.gather_table_stats('GREP_OWNER','T_TEST_TAB',cascade => true);
PL/SQL procedure successfully completed.
SQL> select col1 from t_test_tab;
99999 rows selected.
Execution Plan
Plan hash value: 4115006285
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 99999 | 488K| 63 (2)| 00:00:01 |
| 1 | INDEX FAST FULL SCAN| IDX_T_TEST_TAB | 99999 | 488K| 63 (2)| 00:00:01 |
Statistics
1 recursive calls
0 db block gets
6881 consistent gets
0 physical reads
0 redo size
1829388 bytes sent via SQL*Net to client
73850 bytes received via SQL*Net from client
6668 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
99999 rows processed
SQL> -
Blocking Locks - What Was Likely Going On?
I had a blocking lock yesterday that showed up in OEM under Cluster Database --> Cluster Database Locks. The blocking lock was a row exclusive (RX) table lock that was blocking 175 other sessions that were listed under the blocking lock as having requested row share (RS) locks. It was an hour before I found out about this problem.
Two questions:
1.] The "Oracle Database Concepts 10gR2" book, Table 13-13, states that a row exclusive (RX) table lock can be obtained as a result of INSERT, UPDATE or DELETE DML and that in RX mode share lock modes are not permitted (which is why I had 175 blocked sessions). Does this mean that a user must have been doing a long running (1 hour plus) INSERT, UPDATE or DELETE or is there another more likely cause that I'm not aware of?
2.] The only ways I know of to request a row share lock (of which 175 were blocked due to the RX lock) is by using:
LOCK TABLE <table name> IN SHARE MODE;
LOCK TABLE <table name> IN SHARE ROW EXCLUSIVE MODE;
LOCK TABLE <table name> IN EXCLUSIVE MODE;
I can't imagine a user issuing any of these commands, so is there another more likely reason that 175 row share (RS) locks were being requested (and blocked)?
Thanks for any insight you can offer. I ended up killing the session that held the RX lock and that resolved the problem, but I'd like to better understand what was happening.

1.] The "Oracle Database Concepts 10gR2" book, Table 13-13, states that a row exclusive (RX) table lock can be obtained as a result of INSERT, UPDATE or DELETE DML and that in RX mode share lock modes are not permitted

That table shows that RS (mode 2) is permitted while an RX lock is held.
Did you mean to say that sessions were waiting on a S mode (4) lock?
This could indicate that an update/delete was attempted on a parent table while the dependent child table was lacking an index on its foreign-key column (which may answer your q.2). -
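The unindexed foreign-key situation mentioned just above can be checked with a dictionary query along these lines. This is only a sketch: it matches columns one-for-one by position, so multi-column foreign keys need more careful handling:

```sql
-- Foreign-key constraints whose columns have no matching index on the child table.
SELECT c.table_name, c.constraint_name, cc.column_name
FROM   user_constraints c
       JOIN user_cons_columns cc
         ON cc.constraint_name = c.constraint_name
WHERE  c.constraint_type = 'R'
AND    NOT EXISTS (
         SELECT 1
         FROM   user_ind_columns ic
         WHERE  ic.table_name      = c.table_name
         AND    ic.column_name     = cc.column_name
         AND    ic.column_position = cc.position
       );
```

Any row returned is a candidate for the classic "DML on parent blocks everyone on child" locking pattern.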
Constantly inserting into large table with unique index... Guidance?
Hello all;
So here is my world. We have central to our data monitoring system an oracle database running Oracle Standard One (please don't laugh... I understand it is comical) licensing.
This DB is about 1.7 TB of small record data.
One table in particular (the raw incoming data, 350gb, 8 billion rows, just in the table) is fed millions of rows each day in real time by two to three main "data collectors" or what have you. Data must be available in this table "as fast as possible" once it is received.
This table has 6 columns (one varchar usually empty, a few numerics including a source id, a timestamp and a create time).
The data is collected in chronological order (increasing timestamp) 90% of the time (though sometimes the timestamp may be very old and then catch up to current). The other 10% of the time the data can be out of order according to the timestamp.
This table has two indexes, unique (sourceid, timestamp), and a non unique (create time). (FYI, this used to be an IOT until we had to add the second index on create time, at which point a secondary index on create time slowed the IOT to a crawl)
About 80% of this data is removed after it ages beyond 3 months; 20% is retained as "special" long term data (customer pays for longer raw source retention). The data is removed using delete statements. This table is never (99.99% of the time) updated. The indexes are not rebuilt... ever... as a rebuild is about a 20+ hour process, and without online rebuilds since we are standard one, this is just not possible.
Now, here is what we are observing about the inserts into this table:
- Inserts are much slower based on a "wider" cardinality of the "sourceid" of the data being inserted. What I mean is that 10,000 inserts for 10,000 sourceids (regardless of timestamp) are MUCH, MUCH slower than 10,000 inserts for a single sourceid. This makes sense to me, as I understand that Oracle must inspect more branches of the index for uniqueness, and more distinct physical blocks will be used to store the new index data. There are about 2 million unique sourceids across our system.
- Over time, Oracle is requesting more and more RAM to satisfy these inserts in a timely manner. My understanding here is that Oracle is attempting to hold the leaf blocks of these indexes in the buffer cache permanently. Our system does have a 99% cache hit rate. However, we are seeing Oracle requiring roughly 10GB of extra RAM per quarter to six months; we're at about 50GB of RAM just for Oracle already.
- If I emulate our production load on a brand new, empty table / indexes, performance is easily 10x to 20x faster than what I see when I do the same tests with the large production copies of data.
We have the following assumption: partitioning this table based on a good logical grouping of sourceid, and then timestamp, will help reduce the work required by Oracle to verify uniqueness of data, reduce the amount of data that must be cached by Oracle, and allow us to handle our "older than 3 months" purge at a partition level, greatly reducing table and index fragmentation.
Based on our hardware, it's going to be about a million dollar hit to upgrade to Enterprise (with partitioning), plus a couple hundred thousand a year in support. Currently I think we pay a whopping 5 grand a year in support, if that, in total Oracle costs. This is going to be a huge pill for our company to swallow.
What I am looking for guidance/help on: should we really expect partitioning to make a difference here? I want to get back that 10x performance difference we see between a fresh empty system and our current production system. I also want to limit Oracle's 10GB-per-quarter growing need for more buffer cache (the cardinality of sourceid does NOT grow by that much per quarter... maybe thousands per quarter, out of 2 million).
Also, please, I'd appreciate it if there were no mocking comments about using Standard One up to this point :) I know it is risky and insane and maybe more than a bit silly, but we make do with what we have. And all the credit in the world to Oracle that their "entry" level system has been able to handle everything we've thrown at it so far! :)
Alright all, thank you very much for listening, and I look forward to hearing the opinions of the experts.

Hello,
Here is a link to a blog article that will give you the right questions and answers which apply to your case:
http://jonathanlewis.wordpress.com/?s=delete+90%25
Since you are deleting 80% of your data (old data) based on a timestamp, don't think at all about using the direct path insert /*+ append */ as suggested by one of the contributors to this thread. Direct path load will not re-use any free space made by the delete. You have two indexes:
(a) unique index (sourceid, timestamp)
(b) index(create time)
Your delete logic (based on arrival time) will smash your indexes, since you are always deleting from the left-hand side of the index; it means you will end up with what we call a right-hand index. In other words, the scattering of the index keys per leaf block is probably catastrophic (there is an Oracle internal function named sys_op_lbid that will allow you to verify this index information). There is a fair chance that your two indexes will benefit from a coalesce, as already suggested:
ALTER INDEX indexname COALESCE;
This coalesce should be investigated to be done on a regular basis (maybe after each 80% delete). You seem to have several sourceids for one timestamp. If that is the case, you should think about compressing this index:
create index indexname (sourceid, timestamp) compress;
or
alter index indexname rebuild compress;
You will do it only once. Your index will have a smaller size and may be more efficient than it is now. Index compression adds extra CPU work during an insert, but it might help improve the overall insert process.
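The regular coalesce suggested above could be automated with DBMS_SCHEDULER rather than run by hand after each purge. A sketch; the job name, index names and schedule below are made up to fit the scenario:

```sql
BEGIN
  dbms_scheduler.create_job(
    job_name        => 'COALESCE_RAW_IDX_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN
                          EXECUTE IMMEDIATE ''ALTER INDEX raw_data_uk COALESCE'';
                          EXECUTE IMMEDIATE ''ALTER INDEX raw_data_crt_idx COALESCE'';
                        END;',
    repeat_interval => 'FREQ=MONTHLY;BYMONTHDAY=1',  -- run just after the monthly purge
    enabled         => TRUE);
END;
/
```

Unlike a rebuild, COALESCE is an online operation even on Standard Edition, which matters here since online rebuilds are not available.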
Best Regards
Mohamed Houri -
Deadlock on a unique index?
I am trying to figure out what exactly is going on during this deadlock situation and I need some help. From the info
in the graph I figured that session 75 is waiting on a row in a unique index. What I am trying to figure out is:
are the two sessions trying to insert the same key value, so that the second session has to wait to see whether an ORA-00001 should be raised, and a deadlock occurs?
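That is the classic pattern: a session inserting a key that an uncommitted transaction has already inserted waits on that transaction's TX lock in share (S) mode, and two sessions doing this to each other deadlock, matching the S waits in the graph below. A minimal two-session sketch (table, index and values are made up):

```sql
-- Assume: CREATE TABLE t (x NUMBER); CREATE UNIQUE INDEX ux_t ON t (x);

-- Session A:
INSERT INTO t VALUES (1);   -- holds its TX lock in X mode, not yet committed

-- Session B:
INSERT INTO t VALUES (2);
INSERT INTO t VALUES (1);   -- waits on session A's TX in S mode (duplicate-key check)

-- Session A:
INSERT INTO t VALUES (2);   -- waits on session B's TX in S mode: ORA-00060 deadlock
```

Here a MERGE ... WHEN NOT MATCHED THEN INSERT racing against a DELETE on the same key range can produce the same TX-S cross-wait.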
Session 75: obj - rowid = 0001B54E - AAAbVOAASAABQ0SAAA
(dictionary objn - 111950, file - 18, block - 331026, slot - 0)
OWNER OBJECT_NAME OBJECT_TYPE
MULTI PK_SEGMENTMEMBER_1 INDEX
CREATE UNIQUE INDEX PK_SEGMENTMEMBER_1 ON SEGMENTMEMBER
(SEGMENTID, ENDUSERID)
--------Dumping Sorted Master Trigger List --------
Trigger Owner : MULTI
Trigger Name : RULE_CACHE
--------Dumping Trigger Sublists --------
trigger sublist 0 :
trigger sublist 1 :
Trigger Owner : MULTI
Trigger Name : RULE_CACHE
trigger sublist 2 :
trigger sublist 3 :
trigger sublist 4 :
Deadlock graph:
---------Blocker(s)-------- ---------Waiter(s)---------
Resource Name process session holds waits process session holds waits
TX-000e0013-0015c871 20 11 X 33 75 S
TX-000b0000-0017d542 33 75 X 20 11 S
session 11: DID 0001-0014-0094ADA4 session 75: DID 0001-0021-003A6568
session 75: DID 0001-0021-003A6568 session 11: DID 0001-0014-0094ADA4
Rows waited on:
Session 11: no row
Session 75: obj - rowid = 0001B54E - AAAbVOAASAABQ0SAAA
(dictionary objn - 111950, file - 18, block - 331026, slot - 0)
----- Information for the OTHER waiting sessions -----
Session 75:
sid: 75 ser: 15873 audsid: 26563927 user: 95/MULTI flags: 0x41
pid: 33 O/S info: user: oracle, term: UNKNOWN, ospid: 13909
image: [email protected]
client details:
O/S info: user: jboss, term: unknown, ospid: 1234
machine: MULTI2.8020solutions.net program: JDBC Thin Client
application name: JDBC Thin Client, hash value=2546894660
current SQL:
MERGE INTO SEGMENTMEMBER A USING (SELECT id enduserid,
decode(MIN(nvl(ruleid, 0)), 0, NULL, min(ruleid)) ruleid,
segmentid
FROM temp_ids
GROUP BY id, segmentid) B
ON (A.ENDUSERID = B.ENDUSERID AND A.SEGMENTID = B.SEGMENTID)
WHEN NOT MATCHED THEN
INSERT ( A.ENDUSERID,A.RULEID, A.SEGMENTID)
VALUES ( B.ENDUSERID,B.RULEID, B.SEGMENTID)
----- End of information for the OTHER waiting sessions -----
Information for THIS session:
----- Current SQL Statement for this session (sql_id=9utn9atfhzsdz) -----
DELETE FROM RULE WHERE ID IN (SELECT ID FROM UTL_DELETE WHERE TABLENAME = 'RULE')
----- PL/SQL Stack -----
----- PL/SQL Call Stack -----
object line object
handle number name
0xeaaabbd0 20675 package body MULTI.MULTI_DELETE
0xeaaabbd0 20910 package body MULTI.MULTI_DELETE
0xccfa7ed0 1 anonymous block
===================================================
PROCESS STATE
Process global information:
process: 0x11b4d9e20, call: 0x11b892c78, xact: 0x117c891b8, curses: 0x11b610458, usrses: 0x11b610458
SO: 0x11b4d9e20, type: 2, owner: (nil), flag: INIT/-/-/0x00 if: 0x3 c: 0x3
proc=0x11b4d9e20, name=process, file=ksu.h LINE:11459, pg=0
(process) Oracle pid:20, ser:25, calls cur/top: 0x11b892c78/0x11b895a58
flags : (0x0) -
flags2: (0x0), flags3: (0x0)
intr error: 0, call error: 0, sess error: 0, txn error 0
intr queue: empty
ksudlp FALSE at location: 0
(post info) last post received: 0 0 150
last post received-location: kcb2.h LINE:3844 ID:kcbzww
last process to post me: 11b4e7160 41 0
last post sent: 0 0 26
last post sent-location: ksa2.h LINE:282 ID:ksasnd
last process posted by me: 11b4d1c20 1 6
(latch info) wait_event=0 bits=0
Process Group: DEFAULT, pseudo proc: 0x11b56ada0
O/S info: user: oracle, term: UNKNOWN, ospid: 15407
OSD pid info: Unix process pid: 15407, image: [email protected]
Dump of memory from 0x000000011B4B5110 to 0x000000011B4B5318

Thanks damorgan for helping me out. This deadlock has appeared a couple of times already.
Here is the result of the view
OBJECT_NAME LOCK_TYPE MODE_HELD MODE_REQUESTED BLOCKING_OTHERS
TAB$ XR Null None Not Blocking
PROXY_ROLE_DATA$ RS Row-S (SS) None Not Blocking
ORA$BASE AE Share None Not Blocking
ORA$BASE AE Share None Not Blocking
ORA$BASE AE Share None Not Blocking
C_OBJ# CT Exclusive None Not Blocking
C_OBJ# Media Recovery Share None Not Blocking
I_OBJ# Media Recovery Share None Not Blocking
TAB$ Media Recovery Share None Not Blocking
CLU$ Media Recovery Share None Not Blocking
C_TS# Media Recovery Share None Not Blocking
I_TS# Media Recovery Share None Not Blocking
C_FILE#_BLOCK# Media Recovery Share None Not Blocking
I_FILE#_BLOCK# Media Recovery Share None Not Blocking
C_USER# Media Recovery Share None Not Blocking
I_USER# Media Recovery Share None Not Blocking
FET$ Media Recovery Share None Not Blocking
UET$ Media Recovery Share None Not Blocking
SEG$ Media Recovery Share None Not Blocking
UNDO$ Media Recovery Share None Not Blocking
TS$ Media Recovery Share None Not Blocking
I_SQL$TEXT_HANDLE Media Recovery Share None Not Blocking
ORA$BASE AE Share None Not Blocking
ORA$BASE AE Share None Not Blocking
I_OBJ# Temp Segment Row-X (SX) None Not Blocking
ORA$BASE AE Share None Not Blocking
ORA$BASE AE Share None Not Blocking
ORA$BASE AE Share None Not Blocking
ORA$BASE AE Share None Not Blocking
ORA$BASE AE Share None Not Blocking
ORA$BASE AE Share None Not Blocking
ORA$BASE AE Share None Not Blocking
ORA$BASE AE Share None Not Blocking
ORA$BASE AE Share None Not Blocking
ORA$BASE AE Share None Not Blocking
ORA$BASE AE Share None Not Blocking
ORA$BASE AE Share None Not Blocking
ORA$BASE AE Share None Not Blocking
ORA$BASE AE Share None Not Blocking
ORA$BASE AE Share None Not Blocking
ORA$BASE AE Share None Not Blocking
ORA$BASE AE Share None Not Blocking
ICOL$ Media Recovery Share None Not Blocking
COL$ Media Recovery Share None Not Blocking
ORA$BASE AE Share None Not Blocking
ORA$BASE AE Share None Not Blocking
USER$ Media Recovery Share None Not Blocking
SQLOBJ$ Media Recovery Share None Not Blocking
OBJ$ Media Recovery Share None Not Blocking
IND$ Media Recovery Share None Not Blocking
ORA$BASE AE Share None Not Blocking
ORA$BASE AE Share None Not Blocking
FILE$ Media Recovery Share None Not Blocking -
SQL Timeouts and Blocking Locks
Just wanted to check in and see if anyone here has feedback on application settings, ColdFusion settings, JBOSS settings or other settings that could help to limit or remove SQL Timeouts and blocking locks on SID's.
We're using MS SQL 2000 with JBOSS and IIS5.
We've been seeing the following error in our logs that starts blocking locks in SQL:
java.sql.SQLException: [newScale] [SQLServer JDBC Driver] [SQLServer] Lock request time out period exceeded.
Once this happens, we're hosed until we remove the blocking SID in SQL. These are the connections to the application.
Any feedback would be great. Thanks!

Hi
This is your exact solution:
Select a.username, a.sid, a.serial#, b.id1, c.sql_text
From v$session a, v$lock b, v$sqltext c
Where b.id1 in( Select distinct e.id1
from v$session d , v$lock e
where d.lockwait = e.kaddr ) and
a.sid = b.sid and
c.hash_value = a.sql_hash_value and
b.request =0;
Thanks
Sarju
Oracle DBA
Originally posted by I'm clueless:
Can someone give me the SQL statement to
show if there are any blocking database locks and, if so, which user is locking the database?
Thanks in Advance
Difference between Unique key and Unique index
Hi All,
I am confused about the difference between a unique index and a unique key on a table.
While we create a unique index on a table, its created as a unique index.
On the other hand, if we create a unique key/constraint on the table, Oracle also creates an index entry for that. So I can find the same name object in all_constraints as well as in all_indexes.
My question is: if an index is automatically created when a unique key/constraint is created, why do we need to create a unique key and end up with two objects, when we could create only one object, i.e. a unique index?
Thanks
Deepak

This is only my understanding and is not according to any documentation; it is as follows.
The unique key (constraint) needs an unique index for achieving constraint of itself.
Developers and users can make any constraint (unique-key, primary-key, foreign-key, not-null ...) to enable,disable and be deferable. Unique key is able to be enabled, disabled, deferable.
An index, on the other hand, is originally there for performance; a unique index by itself does not carry the concept of a constraint. An index (non-unique or unique) can be rebuilt, made unusable, and so on, but I think an index cannot be made "deferrable" on its own. -
Difference between unique constraint and unique index
1. What is the difference between unique constraint and unique index when unique constraint is always indexed ? Which one is better in this case for better performance ?
2. Is Composite index of 3 columns x,y,z better
or having independent/ seperate indexes on 3 columns x,y,z is better for better performance ?
3. It has been very confusing for me to decide which columns to index. I have indexed most foreign key columns; is that a good idea? We do a lot of SELECTs and DML on most of our tables. Is there any query I can run to find out whether the indexes are really being used and whether they improve performance? I have analyzed and computed my indexes using ANALYZE INDEX index_name VALIDATE STRUCTURE and COMPUTE STATISTICS.
1. A unique index is part of a unique constraint. Of course you can create a standalone unique index, but there is little point in skipping the logical/business view when the same effort achieves both.
You create the unique constraint and Oracle creates the unique index for you. You may specify index characteristics in the unique constraint.
2. It depends. You cannot use a composite index if the search condition does not cover the whole index key or a leading part of it. With a composite index on (x, y, z), you cannot use the index for a query on y = 2 alone.
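A quick sketch of that leading-column rule (the table and index names are invented for illustration):

```sql
CREATE INDEX t_xyz_idx ON t (x, y, z);

-- These can use the index: the predicates cover a leading prefix of (x, y, z)
SELECT * FROM t WHERE x = 1;
SELECT * FROM t WHERE x = 1 AND y = 2;

-- This cannot use it as a normal range scan: there is no predicate on the
-- leading column x (later Oracle versions may fall back to an index skip scan)
SELECT * FROM t WHERE y = 2;
```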
3. As the old saying in the database arena goes, an index may be good or bad for a table depending on the size of the table, the number of columns in it, etc. It is very environment-dependent; in fact, it is partly a matter of database normalization. Statistics are what Oracle uses to determine the execution plan.
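On the "are my indexes really being used?" part of the question, one option worth knowing (available from 9i onwards) is index usage monitoring; the index name below is just a placeholder:

```sql
-- Turn monitoring on for a specific index
ALTER INDEX my_fk_idx MONITORING USAGE;

-- ...run the normal workload for a while, then check in the owning schema:
SELECT index_name, used, monitoring
  FROM v$object_usage
 WHERE index_name = 'MY_FK_IDX';

-- Turn it off again when done
ALTER INDEX my_fk_idx NOMONITORING USAGE;
```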
Steve
-
Difference Between Unique Index vs Unique Constraint
Can you tell me
what is the difference between a unique index and a unique constraint?
and
the difference between a unique index and a bitmap index?
Edited by: Nilesh Hole, Pune, India on Aug 22, 2009 10:33 AM
Nilesh Hole, Pune, India wrote:
Can you tell me
what is the difference between a unique index and a unique constraint?
http://www.jlcomp.demon.co.uk/faq/uk_idx_con.html
and
the difference between a unique index and a bitmap index?
The documentation is your friend:
http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/schema.htm#CNCPT1157
http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/schema.htm#sthref1008
Regards,
Rob. -
Unique Index vs. Unique Constraint
Hi All,
I'm studying for the Oracle SQL Expert Certification. At one point in the book, while talking about indexes, the author says that a unique index is not the same as a unique constraint. However, he doesn't explain why they're two different things.
Could anyone clarify the difference between the two, please?
Thanks a lot,
Valerio
A constraint has a different meaning from an index. It gives the optimizer more information and allows you to have foreign keys referencing the column, whereas a unique index alone doesn't.
eg:
SQL> create table t1 (col1 number, col2 varchar2(20), constraint t1_uq unique (col1));
Table created.
SQL> create table t2 (col1 number, col2 varchar2(20));
Table created.
SQL> create unique index t2_idx on t2 (col1);
Index created.
SQL> create table t3 (col1 number, col2 number, col3 varchar2(20), constraint t3_fk
2 foreign key (col2) references t1 (col1));
Table created.
SQL> create table t4 (col1 number, col2 number, col3 varchar2(20), constraint t4_fk
2 foreign key (col2) references t2 (col1));
foreign key (col2) references t2 (col1))
ERROR at line 2:
ORA-02270: no matching unique or primary key for this column-list
It's like saying: "What's the difference between a car seat and an armchair? They both allow you to sit down!" -
Hi
This question relates to monitoring blocking locks on a 9.2.0.5 two-node RAC.
Originally I had been monitoring blocking locks every 5 minutes using the following query:
"select * from dba_blockers"
I have recently implemented monitoring via Grid Control, which runs an out-of-the-box metric every 5 minutes; the SQL behind it is as follows:
"SELECT blocking_sid, num_blocked
FROM ( SELECT blocking_sid, SUM(num_blocked) num_blocked
FROM ( SELECT l.id1, l.id2,
MAX(DECODE(l.block, 1, i.instance_name||'-'||l.sid,
2, i.instance_name||'-'||l.sid, 0 )) blocking_sid,
SUM(DECODE(l.request, 0, 0, 1 )) num_blocked
FROM gv$lock l, gv$instance i
WHERE ( l.block!= 0 OR l.request > 0 ) AND
l.inst_id = i.inst_id
GROUP BY l.id1, l.id2)
GROUP BY blocking_sid
ORDER BY num_blocked DESC)
WHERE num_blocked != 0 "
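As a hedged way of cross-checking the two monitors: DBA_WAITERS, which (like DBA_BLOCKERS) is created by catblock.sql over the same underlying lock information, shows each waiting session paired with its blocker:

```sql
-- Waiter/blocker pairs, with the lock type and modes involved
SELECT waiting_session, holding_session, lock_type, mode_held, mode_requested
  FROM dba_waiters;
```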
Now... at one point today the alert using "select * from dba_blockers" fired, whereas the out-of-the-box metric from Grid Control did not fire. The alert duration was around 5-10 minutes.
At first I simply assumed that this could have been a brief lock: with the two 5-minute intervals out of sync, the lock had appeared and cleared before the Grid Control interval ran.
Now I'm a little more curious.
Is there any significant difference in what these two different SQL statements will alert on? I was under the impression that DBA_BLOCKERS simply queries a number of joined views, and that Oracle chose V$LOCK for their out-of-the-box metric because it is more efficient.
Any comments welcome.
Thanks
Just to prove that the SQL is correct, I have constructed a demo for you...
SQL> create table t (a char(1));
Table created.
SQL> insert into t values ('z');
1 row created.
SQL> commit;
in session 1 ---->
select * from t where a='z' for update;
==================================================================
in session 2 ---->
update t set a='x' where a='z';
(session simply hangs)
==================================================================
in session 3 ------>
SQL> select * from dba_blockers;
HOLDING_SESSION
---------------
             48
SQL>
SQL> SELECT blocking_sid, num_blocked
FROM ( SELECT blocking_sid, SUM(num_blocked) num_blocked
FROM ( SELECT l.id1, l.id2, MAX(DECODE(l.block, 1, i.instance_name||'-'||l.sid,
2, i.instance_name||'-'||l.sid, 0 )) blocking_sid,
SUM(DECODE(l.request, 0, 0, 1 )) num_blocked
FROM gv$lock l, gv$instance i
WHERE ( l.block!= 0 OR l.request > 0 ) AND
l.inst_id = i.inst_id
GROUP BY l.id1, l.id2)
GROUP BY blocking_sid
ORDER BY num_blocked DESC)
WHERE num_blocked != 0;
BLOCKING_SID NUM_BLOCKED
------------ -----------
RAC1-48                1
So back to the original question:
I am using both these queries from different monitors on my prod system, both running on 5-minute intervals. "select * from dba_blockers" fired, whereas the query above against gv$lock did not fire.
Originally I assumed that the blocking lock may have simply lasted around 30 seconds, and because the 5-minute monitor intervals of the two metrics were not in sync, "select * from dba_blockers" may have picked up the lock while the gv$lock query ran 2 minutes later, by which time the lock had disappeared.
Can anyone suggest any reasons other than this why one monitor (select * from dba_blockers) picked up the lock and the other (gv$lock) didn't?
Thanks