Large tables without clustered indexes -- a bad idea, or ok?
We have several large tables that don't have clustered indexes, but do have primary keys and assorted other indexes. We're seeing issues that look like they are related to statistics even though there is a nightly update statistics job.
So my question is, does the existence of a clustered index impact other indexes or the update statistics process? Are our tables ok the way they are?
Adding clustered indexes would be a big project for non-development-related reasons so I would need to have good reasons to advocate for this.
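As a starting point for the investigation, a quick way to enumerate the heaps (tables with no clustered index) is to query the catalog views; a minimal sketch, assuming SQL Server 2005 or later:

```sql
-- List user tables stored as heaps: a heap is represented in sys.indexes
-- by a row with index_id = 0
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
JOIN sys.indexes AS i ON i.object_id = t.object_id
WHERE i.index_id = 0
ORDER BY s.name, t.name;
```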
RSingh,
"By default" is a confusing phrase here. If you create the key through the wizard, I agree. But with T-SQL there is no single "default"; the result depends on what already exists on the table.
I agree that I was assuming here, as I don't do many things with the wizard. :)
Try the below:
-- Here the primary key is nonclustered by default,
-- because the table already has a clustered index
create table Test_Table (Col1 int, Col2 int not null)
Create clustered index IX_Test_Table on Test_Table (Col1)
Alter table Test_Table
Add Constraint PK_TestTable
Primary Key (Col2);
Drop table Test_Table
-- Here the primary key is clustered by default
create table Test_Table (Col1 int, Col2 int not null Primary key)
Create index IX_Test_Table on Test_Table (Col1)
Drop table Test_Table
Neither is "by default" a confusing phrase, nor does this happen only in the wizard. Let me clarify. I said "when a primary key constraint is created, it creates a unique clustered index by default". Run the DDL below and check whether a clustered index is created or not, i.e.:
CREATE TABLE Persons
(
    P_Id int NOT NULL,
    LastName varchar(255) NOT NULL,
    FirstName varchar(255),
    Address varchar(255),
    City varchar(255),
    CONSTRAINT pk_PersonID PRIMARY KEY (P_Id, LastName)
)
Regards, RSingh
That does not really address my point, as I am NOT disputing that it will create a clustered index; in fact, my code shows the same thing. My point is only that "by default" is confusing.
Let me explain what I mean by "by default"...
Create index IX_IndexName on TableName (ColName)
The above will create a nonclustered index BY DEFAULT. In every situation, the above creates only a nonclustered index, irrespective of the other indexes present on the table.
Hope the above is clear. Maybe it is just my way of looking at "BY DEFAULT".
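One way to take "by default" out of the discussion entirely is to state the index type explicitly on the constraint itself; a minimal sketch (the table and constraint names here are made up for illustration):

```sql
-- No ambiguity: the index type is named explicitly on the constraint
create table Demo_Table (Col1 int not null, Col2 int not null)

-- Primary key explicitly backed by a NONCLUSTERED index
Alter table Demo_Table
Add Constraint PK_DemoTable
Primary Key NONCLUSTERED (Col2);

-- The clustered index is then free for a different column
Create clustered index IX_Demo_Table on Demo_Table (Col1)

Drop table Demo_Table
```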
Similar Messages
-
Hi,
I am getting following warnings in db02:
Tables without unique index
STATS_RFC
STATS_RFC_OLD.
In SE16 the status shows "Table STATS_RFC_OLD \ STATS_RFC is not active in the Dictionary". In SE11 the tables do not exist.
Kindly suggest.
Regards,
Rahul.
Hi,
desc SAPR3P.STATS_RFC
Name Null? Type
STATID VARCHAR2(30)
TYPE CHAR(1)
VERSION NUMBER
FLAGS NUMBER
C1 VARCHAR2(30)
C2 VARCHAR2(30)
C3 VARCHAR2(30)
C4 VARCHAR2(30)
C5 VARCHAR2(30)
N1 NUMBER
N2 NUMBER
N3 NUMBER
N4 NUMBER
N5 NUMBER
N6 NUMBER
N7 NUMBER
N8 NUMBER
N9 NUMBER
N10 NUMBER
N11 NUMBER
N12 NUMBER
D1 DATE
R1 RAW(32)
R2 RAW(32)
CH1 VARCHAR2(1000)
No fields in STATS_RFC_OLD.
Regards,
Rahul. -
Constantly inserting into large table with unique index... Guidance?
Hello all;
So here is my world. We have, central to our data monitoring system, an Oracle database running Standard Edition One licensing (please don't laugh... I understand it is comical).
This DB is about 1.7 TB of small record data.
One table in particular (the raw incoming data, 350gb, 8 billion rows, just in the table) is fed millions of rows each day in real time by two to three main "data collectors" or what have you. Data must be available in this table "as fast as possible" once it is received.
This table has 6 columns (one varchar usually empty, a few numerics including a source id, a timestamp and a create time).
The data is collected in chronological order (increasing timestamp) 90% of the time (though sometimes the timestamp may be very old and then catch up to current). The other 10% of the time the data can be out of order according to the timestamp.
This table has two indexes, unique (sourceid, timestamp), and a non unique (create time). (FYI, this used to be an IOT until we had to add the second index on create time, at which point a secondary index on create time slowed the IOT to a crawl)
About 80% of this data is removed after it ages beyond 3 months; 20% is retained as "special" long term data (customer pays for longer raw source retention). The data is removed using delete statements. This table is never (99.99% of the time) updated. The indexes are not rebuilt... ever... as a rebuild is about a 20+ hour process, and without online rebuilds since we are standard one, this is just not possible.
Now, what we are observing about inserts into this table:
- Inserts are much slower when a batch spans a "wider" cardinality of sourceid. What I mean is that 10,000 inserts across 10,000 distinct sourceid values (regardless of timestamp) are MUCH, MUCH slower than 10,000 inserts for a single sourceid. This makes sense to me: as I understand it, Oracle must inspect more branches of the index for uniqueness, and more distinct physical blocks will be used to store the new index entries. There are about 2 million unique sourceid values across our system.
- Over time, Oracle requests more and more RAM to satisfy these inserts in a timely manner. My understanding is that Oracle is attempting to hold the leaf blocks of these indexes perpetually in the buffer cache. Our system does have a 99% cache hit rate. However, we are seeing Oracle require roughly 10 GB of extra RAM every quarter to six months; we are already at about 50 GB of RAM just for Oracle.
- If I emulate our production load on a brand new, empty table / indexes, performance is easily 10x to 20x faster than what I see when I do the same tests with the large production copies of data.
We have the following assumption: Partitioning this table based on good logical grouping of sourceid, and then timestamp, will help reduce the work required by oracle to verify uniqueness of data, reducing the amount of data that must be cached by oracle, and allow us to handle our "older than 3 month" at a partition level, greatly reducing table and index fragmentation.
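As a rough sketch of that assumption (the table, column, and partition names below are invented for illustration, and interval partitioning requires 11g or later), range partitioning on the timestamp with LOCAL indexes would turn the 3-month purge into a partition drop:

```sql
-- Hypothetical layout: one partition per month of the timestamp
CREATE TABLE raw_data (
    sourceid    NUMBER       NOT NULL,
    ts          TIMESTAMP    NOT NULL,
    create_time TIMESTAMP    NOT NULL,
    n1          NUMBER,
    n2          NUMBER,
    note        VARCHAR2(100)
)
PARTITION BY RANGE (ts)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(PARTITION p_start VALUES LESS THAN (TIMESTAMP '2011-01-01 00:00:00'));

-- LOCAL aligns each index partition with its table partition; a unique
-- LOCAL index must include the partitioning column, which (sourceid, ts) does
CREATE UNIQUE INDEX ux_raw_data ON raw_data (sourceid, ts) LOCAL;
CREATE INDEX ix_raw_data_ct ON raw_data (create_time) LOCAL;

-- Aging out old data becomes a metadata operation instead of a mass delete:
-- ALTER TABLE raw_data DROP PARTITION p_start;
```

Note that the 20% of "special" long-term rows complicates a pure DROP PARTITION; those rows would have to be copied out to a retention table first, which is part of what has to be weighed against the licence cost.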
Based on our hardware, it's going to be about a million-dollar hit to upgrade to Enterprise (with partitioning), plus a couple hundred thousand a year in support. Currently I think we pay a whopping 5 grand a year in support, if that, in total Oracle costs. This is going to be a huge pill for our company to swallow.
What I am looking for guidance/help on: should we really expect partitioning to make a difference here? I want to get back that 10x performance difference we see between a fresh, empty system and our current production system. I also want to limit Oracle's 10 GB/quarter growing need for more buffer cache (the cardinality of sourceid does NOT grow by that much per quarter... maybe thousands per quarter, out of 2 million).
Also, I'd appreciate it if there were no mocking comments about using Standard One up to this point :) I know it is risky and insane and maybe more than a bit silly, but we make do with what we have. And all the credit in the world to Oracle that their "entry" level product has been able to handle everything we've thrown at it so far! :)
Alright all, thank you very much for listening, and I look forward to hearing the opinions of the experts.
Hello,
Here is a link to a blog article that will give you the right questions and answers which apply to your case:
http://jonathanlewis.wordpress.com/?s=delete+90%25
Since you are deleting 80% of your data (the old data) based on a timestamp, do not consider the direct path insert /*+ append */ suggested by one of the contributors to this thread. A direct path load will not re-use any of the free space created by the deletes. You have two indexes:
(a) unique index (sourceid, timestamp)
(b) index(create time)
Your delete logic (based on arrival time) will smash your indexes, because you are always deleting from the left-hand side of the index; you end up with what we call a right-hand index. In other words, the scattering of index keys per leaf block is almost certainly catastrophic (there is an Oracle internal function named sys_op_lbid that will allow you to verify this index information). There is a fair chance that your two indexes will benefit from a coalesce, as already suggested:
ALTER INDEX indexname COALESCE;
This coalesce should be considered on a regular basis (perhaps after each 80% delete). You seem to have several sourceid values for one timestamp. If that is the case, you should think about compressing this index:
create index indexname on tablename (sourceid, timestamp) compress;
or
alter index indexname rebuild compress;
You will do this only once. Your index will have a smaller size and may be more efficient than it currently is. The index compression adds some extra CPU work during an insert, but it might help improve the overall insert process.
Best Regards
Mohamed Houri -
How to select an item in sap.ui.table.Table without using index?
Hi there,
I want to select an item of a sap.ui.table.Table by finding the right item and selecting it.
Take the example at this SDK page:
SAPUI5 SDK - Demo Kit
You have first name and last name. I want to select "Mo Lester". While there is a function called setSelectedIndex, there is no function for setSelectedItem.
I have to search the items of the table for a required entry and select it (like clicking on it).
Is there a way to do this (programmatically)?
I'm using aggregation binding with a JSON model.
Regards
Tobias
Hi Tobias,
What you could do is first find the JSON object (i.e., the table row) in your table model, and use its index in the model to set the selected index:
var matches = $.grep(array, function(item) {
    return item.firstName === "John" && item.lastName === "Doe";
});
if (matches.length) {
    var index = yourTable.getModel().getData().indexOf(matches[0]); // first match
    yourTable.setSelectedIndex(index);
} -
Sort a table with no index or key defined?
Greetz!
I've got a script that builds a table, table A, by doing a select into. Later, a select is done on that table and an order by added. However the sort order is not persisting. Could this be due to there being no indexes or primary keys defined
on table A? Can a sort be done on a table without an index of any kind?
Thanks!
Love them all...regardless. - Buddha
To add to Erland's comment:
You cannot "look" directly at the contents of a table - you must use a query to generate a resultset. And a resultset, just like a table, has no defined order unless it was generated using an order by clause. Any order you might observe
is simply an artifact of your data, the load on the db engine, the currently cached data, and many other factors. If you see a consistent order, you might be tempted to assume such an order will always exist. Don't be tempted. Many others
have been so tempted and have discovered the incorrectness of this assumption at a later date - often at an inconvenient time. -
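The point above can be demonstrated with a small T-SQL sketch (table and data invented for illustration): the ORDER BY used while populating a table does not "persist"; only an ORDER BY on the reading query defines the order.

```sql
-- Populate a table "in order"; the order is NOT a property of the table
SELECT v.name, v.qty
INTO #TableA
FROM (VALUES ('b', 2), ('c', 3), ('a', 1)) AS v(name, qty);

-- Without ORDER BY, any observed order is an accident of the engine
SELECT name, qty FROM #TableA;

-- With ORDER BY, the order of the resultset is guaranteed
SELECT name, qty FROM #TableA ORDER BY name;

DROP TABLE #TableA;
```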
ORA-00604 ORA-00904 When query partitioned table with partitioned indexes
We get ORA-00604 and ORA-00904 when querying a partitioned table with partitioned indexes in the data warehouse environment.
The query runs fine when querying the partitioned table without partitioned indexes.
Here is the query.
SELECT al2.vdc_name, al7.model_series_name, COUNT (DISTINCT (al1.vin)),
       al27.accessory_code
FROM vlc.veh_vdc_accessorization_fact al1,
     vlc.vdc_dim al2,
     vlc.model_attribute_dim al7,
     vlc.ppo_list_dim al18,
     vlc.ppo_list_indiv_type_dim al23,
     vlc.accy_type_dim al27
WHERE (     al2.vdc_id = al1.vdc_location_id
        AND al7.model_attribute_id = al1.model_attribute_id
        AND al18.mydppolist_id = al1.ppo_list_id
        AND al23.mydppolist_id = al18.mydppolist_id
        AND al23.mydaccytyp_id = al27.mydaccytyp_id
        AND (     al7.model_series_name IN ('SCION TC', 'SCION XA', 'SCION XB')
              AND al2.vdc_name IN
                      ('PORT OF BALTIMORE',
                       'PORT OF JACKSONVILLE - LEXUS',
                       'PORT OF LONG BEACH',
                       'PORT OF NEWARK',
                       'PORT OF PORTLAND')
              AND al27.accessory_code IN ('42', '43', '44', '45')
            )
      )
GROUP BY al2.vdc_name, al7.model_series_name, al27.accessory_code
I would recommend that you post this at the following OTN forum:
Database - General
General Database Discussions
and perhaps at:
Oracle Warehouse Builder
Warehouse Builder
The Oracle OLAP forum typically does not cover general data warehousing topics. -
Creating a Primary Key in SSMS without Also Creating a Clustered Index
In the course of development (SQL Server Express 2012) I'm creating various tables which should not have a clustered index on the primary key. However, it seems as if SSMS is obsessed with always creating one. I cannot seem to find a way, through the simple right-click-on-column "Set Primary Key" method, to stop the creation of an associated clustered index.
I can work around it fairly easily by having SSMS generate the necessary script and then changing CLUSTERED to NONCLUSTERED, but I'm curious if there is a way to do it purely through the UI that I'm overlooking...
To be honest, though, until I get much closer to production and have loaded a lot more data, I won't know for sure one way or the other; this was more of an awareness question on my part and something that I could not find anyone ever asking on the web.
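For reference, the scripted workaround mentioned above comes down to something like this (table and column names are placeholders):

```sql
-- What SSMS scripts out, with CLUSTERED hand-edited to NONCLUSTERED
CREATE TABLE dbo.MyTable (
    Id   int          NOT NULL,
    Name nvarchar(50) NULL
);

ALTER TABLE dbo.MyTable
    ADD CONSTRAINT PK_MyTable PRIMARY KEY NONCLUSTERED (Id);
```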
To add to what David said, the performance impact in the end depends largely on the size of the table and the amount of memory available to SQL Server. All things being equal, performance will be roughly the same with both sequential and random keys as long as the data are memory resident. But a random key greatly reduces the likelihood that needed data will be in memory.
The worst-case scenario for single-row selects is an even distribution of random key values, a very large table, and little SQL Server memory. Nearly every select query will require a physical I/O in this case. Since each spinning-media disk spindle is capable of only 150-200 IOPS, your throughput will be constrained accordingly.
A big advantage of sequential key values for single-row selects is improved temporal reference locality. In practice, the most recently inserted/accessed rows are most likely to be subsequently queried. A single-row query will not only
read the requested row, but the adjacent rows on the same page as well, thus avoiding costly I/O for a select of those rows.
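Besides generating sequential GUIDs in C# as in the linked article, SQL Server can also produce them server-side with the built-in NEWSEQUENTIALID(), which is only allowed as a column default; a sketch (table and constraint names are invented):

```sql
-- NEWSEQUENTIALID() may only appear in a DEFAULT constraint on a
-- uniqueidentifier column; it yields roughly ascending GUIDs
CREATE TABLE dbo.Events (
    EventId uniqueidentifier NOT NULL
        CONSTRAINT DF_Events_EventId DEFAULT NEWSEQUENTIALID(),
    Payload nvarchar(200) NULL,
    CONSTRAINT PK_Events PRIMARY KEY CLUSTERED (EventId)
);
```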
See http://www.dbdelta.com/improving-uniqueidentifier-performance/ for an example of how to generate sequential GUIDs from C# code.
Dan Guzman, SQL Server MVP, http://www.dbdelta.com -
Pagination query help needed for large table - force a different index
I'm using a slight modification of the pagination query from over at Ask Tom's: [http://www.oracle.com/technology/oramag/oracle/07-jan/o17asktom.html]
Mine looks like this when fetching the first 100 rows of all members with last name Smith, ordered by join date:
SELECT members.*
FROM members,
     ( SELECT RID, rownum rnum
       FROM ( SELECT rowid AS RID
              FROM members
              WHERE last_name = 'Smith'
              ORDER BY joindate )
       WHERE rownum <= 100 )
WHERE rnum >= 1
AND RID = members.rowid
The difference between this and the one at Ask Tom's is that my innermost query just returns the ROWID. Then in the outermost query we join the ROWIDs returned to the members table, after we have pruned the ROWIDs down to only the chunk of 100 we want. This makes it MUCH faster (verifiably) on our large tables, as it is able to use the index on the innermost query (well... read on).
The problem I have is this:
SELECT rowid AS RID
FROM members
WHERE last_name = 'Smith'
ORDER BY joindate
This will use the index on the predicate column (last_name) instead of the unique index I have defined on the joindate column (joindate, sequence) - verifiable with EXPLAIN PLAN. It is much slower this way on a large table. So I can hint it using either of the following methods:
SELECT /*+ index(members, joindate_idx) */ rowid AS RID
FROM members
WHERE last_name = 'Smith'
ORDER BY joindate

SELECT /*+ first_rows(100) */ rowid AS RID
FROM members
WHERE last_name = 'Smith'
ORDER BY joindate
Either way, it now uses the index on the ORDER BY column (joindate_idx), so it is much faster since it does not have to do a sort (remember, VERY large table, millions of records). So that seems good. But now, in my outermost query, I join the ROWID back to the meaningful columns of data from the members table, as commented below:
SELECT members.*          -- Select all data from members table
FROM members,             -- members table added to FROM clause
     ( SELECT RID, rownum rnum
       FROM ( SELECT /*+ index(members, joindate_idx) */ rowid AS RID  -- Hint is ignored now that I am joining in the outer query
              FROM members
              WHERE last_name = 'Smith'
              ORDER BY joindate )
       WHERE rownum <= 100 )
WHERE rnum >= 1
AND RID = members.rowid   -- Join to the members table on the rowid pulled from the inner queries
Once I do this join, it goes back to using the predicate index (last_name) and has to perform the sort once it finds all matching values (which can be a lot in this table; there is high cardinality on some columns).
So my question is, in the full query above, is there any way I can get it to use the ORDER BY column for indexing to prevent it from having to do a sort? The join is what causes it to revert back to using the predicate index, even with hints. Remove the join and just return the ROWIDs for those 100 records and it flies, even on 10 million records.
It'd be great if there was some generic hint that could accomplish this, such that if we change the table/columns/indexes, we don't need to change the hint (the FIRST_ROWS hint is a good example of this, while the INDEX hint is the opposite), but any help would be appreciated. I can provide explain plans for any of the above if needed.
Thanks!
Lakmal Rajapakse wrote:
OK here is an example to illustrate the advantage:
SQL> set autot traceonly
SQL> select * from (
2 select a.*, rownum x from
3 (
4 select a.* from aoswf.events a
5 order by EVENT_DATETIME
6 ) a
7 where rownum <= 1200
8 )
9 where x >= 1100
10 /
101 rows selected.
Execution Plan
Plan hash value: 3711662397
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1200 | 521K| 192 (0)| 00:00:03 |
|* 1 | VIEW | | 1200 | 521K| 192 (0)| 00:00:03 |
|* 2 | COUNT STOPKEY | | | | | |
| 3 | VIEW | | 1200 | 506K| 192 (0)| 00:00:03 |
| 4 | TABLE ACCESS BY INDEX ROWID| EVENTS | 253M| 34G| 192 (0)| 00:00:03 |
| 5 | INDEX FULL SCAN | EVEN_IDX02 | 1200 | | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("X">=1100)
2 - filter(ROWNUM<=1200)
Statistics
0 recursive calls
0 db block gets
443 consistent gets
0 physical reads
0 redo size
25203 bytes sent via SQL*Net to client
281 bytes received via SQL*Net from client
8 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
101 rows processed
SQL>
SQL>
SQL> select * from aoswf.events a, (
2 select rid, rownum x from
3 (
4 select rowid rid from aoswf.events a
5 order by EVENT_DATETIME
6 ) a
7 where rownum <= 1200
8 ) b
9 where x >= 1100
10 and a.rowid = rid
11 /
101 rows selected.
Execution Plan
Plan hash value: 2308864810
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1200 | 201K| 261K (1)| 00:52:21 |
| 1 | NESTED LOOPS | | 1200 | 201K| 261K (1)| 00:52:21 |
|* 2 | VIEW | | 1200 | 30000 | 260K (1)| 00:52:06 |
|* 3 | COUNT STOPKEY | | | | | |
| 4 | VIEW | | 253M| 2895M| 260K (1)| 00:52:06 |
| 5 | INDEX FULL SCAN | EVEN_IDX02 | 253M| 4826M| 260K (1)| 00:52:06 |
| 6 | TABLE ACCESS BY USER ROWID| EVENTS | 1 | 147 | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter("X">=1100)
3 - filter(ROWNUM<=1200)
Statistics
8 recursive calls
0 db block gets
117 consistent gets
0 physical reads
0 redo size
27539 bytes sent via SQL*Net to client
281 bytes received via SQL*Net from client
8 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
101 rows processed
Lakmal (and OP),
Not sure what advantage you are trying to show here. But considering that we are talking about a pagination query here, where the order of records is important, your two queries will not always generate output in the same order. Here is the test case:
SQL> select * from v$version ;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE 10.2.0.1.0 Production
TNS for Linux: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.1
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
SQL> show parameter pga
NAME TYPE VALUE
pga_aggregate_target big integer 103M
SQL> create table t nologging as select * from all_objects where 1 = 2 ;
Table created.
SQL> create index t_idx on t(last_ddl_time) nologging ;
Index created.
SQL> insert /*+ APPEND */ into t (owner, object_name, object_id, created, last_ddl_time) select owner, object_name, object_id, created, sysdate - dbms_random.value(1, 100) from all_objects order by dbms_random.random;
40617 rows created.
SQL> commit ;
Commit complete.
SQL> exec dbms_stats.gather_table_stats(user, 'T', cascade=>true);
PL/SQL procedure successfully completed.
SQL> select object_id, object_name, created from t, (select rid, rownum rn from (select rowid rid from t order by created desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
OBJECT_ID OBJECT_NAME CREATED
47686 ALL$OLAP2_JOIN_KEY_COLUMN_USES 28-JUL-2009 08:08:39
47672 ALL$OLAP2_CUBE_DIM_USES 28-JUL-2009 08:08:39
47681 ALL$OLAP2_CUBE_MEASURE_MAPS 28-JUL-2009 08:08:39
47682 ALL$OLAP2_FACT_LEVEL_USES 28-JUL-2009 08:08:39
47685 ALL$OLAP2_AGGREGATION_USES 28-JUL-2009 08:08:39
47692 ALL$OLAP2_CATALOGS 28-JUL-2009 08:08:39
47665 ALL$OLAPMR_FACTTBLKEYMAPS 28-JUL-2009 08:08:39
47688 ALL$OLAP2_DIM_LEVEL_ATTR_MAPS 28-JUL-2009 08:08:39
47689 ALL$OLAP2_DIM_LEVELS_KEYMAPS 28-JUL-2009 08:08:39
47669 ALL$OLAP9I2_HIER_DIMENSIONS 28-JUL-2009 08:08:39
47666 ALL$OLAP9I1_HIER_DIMENSIONS 28-JUL-2009 08:08:39
11 rows selected.
SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
OBJECT_ID OBJECT_NAME LAST_DDL_TIME
11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
37534 com/sun/mail/smtp/SMTPMessage 06-FEB-2010 03:46:14
36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
16695 /2940a364_RepIdDelegator_1_3 06-FEB-2010 03:38:17
36539 sun/io/ByteToCharMacHebrew 06-FEB-2010 03:28:57
14044 /d29b81e1_OldHeaders 06-FEB-2010 03:12:12
12920 /25f8f3a5_BasicSplitPaneUI 06-FEB-2010 03:11:06
42266 SI_GETCLRHSTGRFTR 06-FEB-2010 03:40:20
15752 /2f494dce_JDWPThreadReference 06-FEB-2010 03:09:31
11 rows selected.
SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
OBJECT_ID OBJECT_NAME LAST_DDL_TIME
37534 com/sun/mail/smtp/SMTPMessage 06-FEB-2010 03:46:14
13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
42266 SI_GETCLRHSTGRFTR 06-FEB-2010 03:40:20
16695 /2940a364_RepIdDelegator_1_3 06-FEB-2010 03:38:17
36539 sun/io/ByteToCharMacHebrew 06-FEB-2010 03:28:57
26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
14044 /d29b81e1_OldHeaders 06-FEB-2010 03:12:12
36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
12920 /25f8f3a5_BasicSplitPaneUI 06-FEB-2010 03:11:06
15752 /2f494dce_JDWPThreadReference 06-FEB-2010 03:09:31
11 rows selected.
SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc ;
OBJECT_ID OBJECT_NAME LAST_DDL_TIME
37534 com/sun/mail/smtp/SMTPMessage 06-FEB-2010 03:46:14
13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
42266 SI_GETCLRHSTGRFTR 06-FEB-2010 03:40:20
16695 /2940a364_RepIdDelegator_1_3 06-FEB-2010 03:38:17
36539 sun/io/ByteToCharMacHebrew 06-FEB-2010 03:28:57
26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
14044 /d29b81e1_OldHeaders 06-FEB-2010 03:12:12
36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
12920 /25f8f3a5_BasicSplitPaneUI 06-FEB-2010 03:11:06
15752 /2f494dce_JDWPThreadReference 06-FEB-2010 03:09:31
11 rows selected.
SQL> set autotrace traceonly
SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc
2 ;
11 rows selected.
Execution Plan
Plan hash value: 44968669
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1200 | 91200 | 180 (2)| 00:00:03 |
| 1 | SORT ORDER BY | | 1200 | 91200 | 180 (2)| 00:00:03 |
|* 2 | HASH JOIN | | 1200 | 91200 | 179 (2)| 00:00:03 |
|* 3 | VIEW | | 1200 | 30000 | 98 (0)| 00:00:02 |
|* 4 | COUNT STOPKEY | | | | | |
| 5 | VIEW | | 40617 | 475K| 98 (0)| 00:00:02 |
| 6 | INDEX FULL SCAN DESCENDING| T_IDX | 40617 | 793K| 98 (0)| 00:00:02 |
| 7 | TABLE ACCESS FULL | T | 40617 | 2022K| 80 (2)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("T".ROWID="T1"."RID")
3 - filter("RN">=1190)
4 - filter(ROWNUM<=1200)
Statistics
1 recursive calls
0 db block gets
348 consistent gets
0 physical reads
0 redo size
1063 bytes sent via SQL*Net to client
385 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
11 rows processed
SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
11 rows selected.
Execution Plan
Plan hash value: 882605040
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1200 | 62400 | 80 (2)| 00:00:01 |
|* 1 | VIEW | | 1200 | 62400 | 80 (2)| 00:00:01 |
|* 2 | COUNT STOPKEY | | | | | |
| 3 | VIEW | | 40617 | 1546K| 80 (2)| 00:00:01 |
|* 4 | SORT ORDER BY STOPKEY| | 40617 | 2062K| 80 (2)| 00:00:01 |
| 5 | TABLE ACCESS FULL | T | 40617 | 2062K| 80 (2)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("RN">=1190)
2 - filter(ROWNUM<=1200)
4 - filter(ROWNUM<=1200)
Statistics
0 recursive calls
0 db block gets
343 consistent gets
0 physical reads
0 redo size
1063 bytes sent via SQL*Net to client
385 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
11 rows processed
SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
11 rows selected.
Execution Plan
Plan hash value: 168880862
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1200 | 91200 | 179 (2)| 00:00:03 |
|* 1 | HASH JOIN | | 1200 | 91200 | 179 (2)| 00:00:03 |
|* 2 | VIEW | | 1200 | 30000 | 98 (0)| 00:00:02 |
|* 3 | COUNT STOPKEY | | | | | |
| 4 | VIEW | | 40617 | 475K| 98 (0)| 00:00:02 |
| 5 | INDEX FULL SCAN DESCENDING| T_IDX | 40617 | 793K| 98 (0)| 00:00:02 |
| 6 | TABLE ACCESS FULL | T | 40617 | 2022K| 80 (2)| 00:00:01 |
Predicate Information (identified by operation id):
1 - access("T".ROWID="T1"."RID")
2 - filter("RN">=1190)
3 - filter(ROWNUM<=1200)
Statistics
0 recursive calls
0 db block gets
349 consistent gets
0 physical reads
0 redo size
1063 bytes sent via SQL*Net to client
385 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
11 rows processed
SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 order by last_ddl_time desc ;
11 rows selected.
Execution Plan
Plan hash value: 882605040
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1200 | 62400 | 80 (2)| 00:00:01 |
|* 1 | VIEW | | 1200 | 62400 | 80 (2)| 00:00:01 |
|* 2 | COUNT STOPKEY | | | | | |
| 3 | VIEW | | 40617 | 1546K| 80 (2)| 00:00:01 |
|* 4 | SORT ORDER BY STOPKEY| | 40617 | 2062K| 80 (2)| 00:00:01 |
| 5 | TABLE ACCESS FULL | T | 40617 | 2062K| 80 (2)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("RN">=1190)
2 - filter(ROWNUM<=1200)
4 - filter(ROWNUM<=1200)
Statistics
175 recursive calls
0 db block gets
388 consistent gets
0 physical reads
0 redo size
1063 bytes sent via SQL*Net to client
385 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
4 sorts (memory)
0 sorts (disk)
11 rows processed
SQL> set autotrace off
SQL> spool off
As you will see, the join query here must have an ORDER BY clause at the end to ensure that records are correctly sorted. You cannot rely on the optimizer choosing the NESTED LOOPS join method and, as the example above shows, when the optimizer chooses a HASH JOIN, Oracle is free to return rows in no particular order.
The query that does not involve a join always returns rows in the desired order. Adding an ORDER BY adds a step to the plan of the join query but does not affect the other query. -
Hello,
Below I provide complete code to reproduce the behavior I am observing. You can run it in tempdb or any other database; it does not matter which. The test query provided at the top of the script is pretty silly, but I have observed the same performance degradation with about a dozen queries of varying complexity, so this is just the simplest one I am using as an example here. Note that I also included approximate run times in the script comments (obviously based on what I observed on my machine). Here are the steps, with numbers corresponding to the numbers in the script:
1. Run script from #1 to #7. This will create the two test tables, populate them with records (40 mln. and 10 mln.) and build regular clustered indexes.
2. Run test query (at the top of the script). Here are the execution statistics:
Table 'Main'. Scan count 5, logical reads 151435, physical reads 0, read-ahead reads 4, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Txns'. Scan count 5, logical reads 74155, physical reads 0, read-ahead reads 7, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 5514 ms,
elapsed time = 1389 ms.
3. Run script from #8 to #9. This will replace regular clustered indexes with columnstore clustered indexes.
4. Run test query (at the top of the script). Here are the execution statistics:
Table 'Txns'. Scan count 4, logical reads 44563, physical reads 0, read-ahead reads 37186, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Main'. Scan count 4, logical reads 54850, physical reads 2, read-ahead reads 96862, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 828 ms,
elapsed time = 392 ms.
As you can see the query is clearly faster. Yay for columnstore indexes!.. But let's continue.
5. Run the script from #10 to #12 (note that this might take some time to execute). This will move about 80% of the data in both tables to a different partition. You should be able to see that the data has been moved when running step #11.
6. Run test query (at the top of the script). Here are the execution statistics:
Table 'Txns'. Scan count 4, logical reads 44563, physical reads 0, read-ahead reads 37186, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Main'. Scan count 4, logical reads 54817, physical reads 2, read-ahead reads 96862, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 8172 ms,
elapsed time = 3119 ms.
And now look: the I/O stats are the same as before, but the performance is the slowest of all three runs!
I am not going to paste the execution plans or the detailed properties of each operator here. They show up as expected: columnstore index scan, parallel/partitioned = true, and both the estimated and actual number of rows are lower than during the second run (when all of the data resided on the same partition).
So the question is: why is it slower?
Thank you for any help!
Here is the code to reproduce this:
--==> Test Query - begin --<===
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
SET STATISTICS IO ON
SET STATISTICS TIME ON
SELECT COUNT(1)
FROM Txns AS z WITH(NOLOCK)
LEFT JOIN Main AS mmm WITH(NOLOCK) ON mmm.ColBatchID = 70 AND z.TxnID = mmm.TxnID AND mmm.RecordStatus = 1
WHERE z.RecordStatus = 1
--==> Test Query - end --<===
--===========================================================
--1. Clean-up
IF OBJECT_ID('Txns') IS NOT NULL DROP TABLE Txns
IF OBJECT_ID('Main') IS NOT NULL DROP TABLE Main
IF EXISTS (SELECT 1 FROM sys.partition_schemes WHERE name = 'PS_Scheme') DROP PARTITION SCHEME PS_Scheme
IF EXISTS (SELECT 1 FROM sys.partition_functions WHERE name = 'PF_Func') DROP PARTITION FUNCTION PF_Func
--2. Create partition function
CREATE PARTITION FUNCTION PF_Func(tinyint) AS RANGE LEFT FOR VALUES (1, 2, 3)
--3. Partition scheme
CREATE PARTITION SCHEME PS_Scheme AS PARTITION PF_Func ALL TO ([PRIMARY])
--4. Create Main table
CREATE TABLE dbo.Main(
SetID int NOT NULL,
SubSetID int NOT NULL,
TxnID int NOT NULL,
ColBatchID int NOT NULL,
ColMadeId int NOT NULL,
RecordStatus tinyint NOT NULL DEFAULT ((1))
) ON PS_Scheme(RecordStatus)
--5. Create Txns table
CREATE TABLE dbo.Txns(
TxnID int IDENTITY(1,1) NOT NULL,
GroupID int NULL,
SiteID int NULL,
Period datetime NULL,
Amount money NULL,
CreateDate datetime NULL,
Descr varchar(50) NULL,
RecordStatus tinyint NOT NULL DEFAULT ((1))
) ON PS_Scheme(RecordStatus)
--6. Populate data (credit to Jeff Moden: http://www.sqlservercentral.com/articles/Data+Generation/87901/)
-- 40 mln. rows - approx. 4 min
--6.1 Populate Main table
DECLARE @NumberOfRows INT = 40000000
INSERT INTO Main (
SetID,
SubSetID,
TxnID,
ColBatchID,
ColMadeID,
RecordStatus)
SELECT TOP (@NumberOfRows)
SetID = ABS(CHECKSUM(NEWID())) % 500 + 1, -- ABS(CHECKSUM(NEWID())) % @Range + @StartValue,
SubSetID = ABS(CHECKSUM(NEWID())) % 3 + 1,
TxnID = ABS(CHECKSUM(NEWID())) % 1000000 + 1,
ColBatchId = ABS(CHECKSUM(NEWID())) % 100 + 1,
ColMadeID = ABS(CHECKSUM(NEWID())) % 500000 + 1,
RecordStatus = 1
FROM sys.all_columns ac1
CROSS JOIN sys.all_columns ac2
--6.2 Populate Txns table
-- 10 mln. rows - approx. 1 min
SET @NumberOfRows = 10000000
INSERT INTO Txns (
GroupID,
SiteID,
Period,
Amount,
CreateDate,
Descr,
RecordStatus)
SELECT TOP (@NumberOfRows)
GroupID = ABS(CHECKSUM(NEWID())) % 5 + 1, -- ABS(CHECKSUM(NEWID())) % @Range + @StartValue,
SiteID = ABS(CHECKSUM(NEWID())) % 56 + 1,
Period = DATEADD(dd,ABS(CHECKSUM(NEWID())) % 365, '05-04-2012'), -- DATEADD(dd,ABS(CHECKSUM(NEWID())) % @Days, @StartDate)
Amount = CAST(RAND(CHECKSUM(NEWID())) * 250000 + 1 AS MONEY),
CreateDate = DATEADD(dd,ABS(CHECKSUM(NEWID())) % 365, '05-04-2012'),
Descr = REPLICATE(CHAR(65 + ABS(CHECKSUM(NEWID())) % 26), ABS(CHECKSUM(NEWID())) % 20),
RecordStatus = 1
FROM sys.all_columns ac1
CROSS JOIN sys.all_columns ac2
--7. Add PK's
-- 1 min
ALTER TABLE Txns ADD CONSTRAINT PK_Txns PRIMARY KEY CLUSTERED (RecordStatus ASC, TxnID ASC) ON PS_Scheme(RecordStatus)
CREATE CLUSTERED INDEX CDX_Main ON Main(RecordStatus ASC, SetID ASC, SubSetId ASC, TxnID ASC) ON PS_Scheme(RecordStatus)
--==> Run test Query --<===
--===========================================================
-- Replace regular indexes with clustered columnstore indexes
--===========================================================
--8. Drop existing indexes
ALTER TABLE Txns DROP CONSTRAINT PK_Txns
DROP INDEX Main.CDX_Main
--9. Create clustered columnstore indexes (on partition scheme!)
-- 1 min
CREATE CLUSTERED COLUMNSTORE INDEX PK_Txns ON Txns ON PS_Scheme(RecordStatus)
CREATE CLUSTERED COLUMNSTORE INDEX CDX_Main ON Main ON PS_Scheme(RecordStatus)
--==> Run test Query --<===
--===========================================================
-- Move about 80% of the data into a different partition
--===========================================================
--10. Update "RecordStatus", so that data is moved to a different partition
-- 14 min (32002557 row(s) affected)
UPDATE Main
SET RecordStatus = 2
WHERE TxnID < 800000 -- range of values is from 1 to 1 mln.
-- 4.5 min (7999999 row(s) affected)
UPDATE Txns
SET RecordStatus = 2
WHERE TxnID < 8000000 -- range of values is from 1 to 10 mln.
--11. Check data distribution
SELECT
OBJECT_NAME(SI.object_id) AS PartitionedTable
, DS.name AS PartitionScheme
, SI.name AS IdxName
, SI.index_id
, SP.partition_number
, SP.rows
FROM sys.indexes AS SI WITH (NOLOCK)
JOIN sys.data_spaces AS DS WITH (NOLOCK)
ON DS.data_space_id = SI.data_space_id
JOIN sys.partitions AS SP WITH (NOLOCK)
ON SP.object_id = SI.object_id
AND SP.index_id = SI.index_id
WHERE DS.type = 'PS'
AND OBJECT_NAME(SI.object_id) IN ('Main', 'Txns')
ORDER BY 1, 2, 3, 4, 5;
PartitionedTable PartitionScheme IdxName index_id partition_number rows
Main PS_Scheme CDX_Main 1 1 7997443
Main PS_Scheme CDX_Main 1 2 32002557
Main PS_Scheme CDX_Main 1 3 0
Main PS_Scheme CDX_Main 1 4 0
Txns PS_Scheme PK_Txns 1 1 2000001
Txns PS_Scheme PK_Txns 1 2 7999999
Txns PS_Scheme PK_Txns 1 3 0
Txns PS_Scheme PK_Txns 1 4 0
--12. Update statistics
EXEC sys.sp_updatestats
--==> Run test Query --<===
Hello Michael,
I just simulated the situation and got the same results as in your description. However, I did one more test - I rebuilt the two columnstore indexes after the update (and test run). I got the following details:
Table 'Txns'. Scan count 8, logical reads 12922, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Main'. Scan count 8, logical reads 57042, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 251 ms, elapsed time = 128 ms.
As an explanation of the behavior: because an UPDATE against a clustered columnstore index is executed as a DELETE plus an INSERT, you ended up with the original row groups having almost all of their rows marked as deleted, plus almost the same number of new row groups holding the data coming from the update. I suppose scanning the deleted bitmap, or something related to that "fragmentation", caused the additional slowness at your end.
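One way to see this for yourself (a sketch; sys.column_store_row_groups is available from SQL Server 2014) is to check the deleted_rows count per row group before and after the rebuild:

```sql
-- Inspect row group health for the two columnstore indexes:
-- a high deleted_rows / total_rows ratio is the "fragmentation"
-- left behind by the UPDATE (executed as DELETE + INSERT).
SELECT OBJECT_NAME(object_id) AS table_name,
       partition_number,
       row_group_id,
       state_description,
       total_rows,
       deleted_rows
FROM sys.column_store_row_groups
WHERE object_id IN (OBJECT_ID('dbo.Main'), OBJECT_ID('dbo.Txns'))
ORDER BY table_name, partition_number, row_group_id;

-- Rebuilding recreates the row groups without the deleted rows
ALTER INDEX CDX_Main ON dbo.Main REBUILD;
ALTER INDEX PK_Txns ON dbo.Txns REBUILD;
```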
Ivan Donev MCITP SQL Server 2008 DBA, DB Developer, BI Developer -
Select count from large fact tables with bitmap indexes on them
Hi..
I have several large fact tables with bitmap indexes on them. When I do a select count from these tables, I get a different result than when I do a select count, column one from the table, group by column one. I don't have any null values in these columns. Is there a patch or a one-off that can rectify this?
Thx
You may have corruption in the index if the queries ...
Select /*+ full(t) */ count(*) from my_table t
... and ...
Select /*+ index_combine(t my_index) */ count(*) from my_table t;
... give different results.
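If the counts do differ, the unusable-then-rebuild route might be sketched like this (assuming a hypothetical index name my_index):

```sql
-- Take the suspect bitmap index offline, then rebuild it
-- from the table data rather than from the corrupt structure
ALTER INDEX my_index UNUSABLE;
ALTER INDEX my_index REBUILD;
```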
Look at MetaLink for patches, and in the meantime drop and recreate the indexes, or make them unusable and then rebuild them. -
SQL Server 2014 Bug - with Clustered Index On Temp Table with Identity Column
Hi,
Here's a stored procedure. (It could be shorter, but this will show the problem.)
CREATE PROCEDURE dbo.SGTEST_TBD_20141030 AS
IF OBJECT_ID('TempDB..#Temp') IS NOT NULL
DROP TABLE #Temp
CREATE TABLE #Temp
(
Col1 INT NULL
,Col2 INT IDENTITY NOT NULL
,Col3 INT NOT NULL
)
CREATE CLUSTERED INDEX ix_cl ON #Temp(Col2)
SET IDENTITY_INSERT #Temp ON;
RAISERROR('Preparing to INSERT INTO #Temp...',0,1) WITH NOWAIT;
INSERT INTO #Temp (Col1, Col2, Col3)
SELECT 1,2,3
RAISERROR('Insert Success!!!',0,1) WITH NOWAIT;
SET IDENTITY_INSERT #Temp OFF;
In SQL Server 2014, if I execute this (EXEC dbo.SGTEST_TBD_20141030) it works. If I execute it a second time, it fails hard with:
Msg 0, Level 11, State 0, Line 0
A severe error occurred on the current command. The results, if any, should be discarded.
Msg 0, Level 20, State 0, Line 0
A severe error occurred on the current command. The results, if any, should be discarded.
In SQL Server 2012, I can execute it over and over, with no problem. I've discovered two work-a-rounds:
1) Add "WITH RECOMPILE" to the SP CREATE/ALTER statement, or
2) Declare the cluster index in the TABLE CREATE statement, e.g. ",UNIQUE CLUSTERED (COL2)"
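A sketch of that second work-a-round, with the clustered index declared inline in the table definition (hypothetical, mirroring the #Temp definition above):

```sql
CREATE TABLE #Temp
(
     Col1 INT NULL
    ,Col2 INT IDENTITY NOT NULL
    ,Col3 INT NOT NULL
    ,UNIQUE CLUSTERED (Col2)
)
```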
This second option only works, though, if the column is unique. I've opted for "WITH RECOMPILE" for now, but I thought I should share.
Hi,
I did not get any error Message:
CREATE TABLE #Temp
(
Col1 INT NULL
,Col2 INT IDENTITY NOT NULL
,Col3 INT NOT NULL
)
CREATE CLUSTERED INDEX ix_cl ON #Temp(Col2)
SET IDENTITY_INSERT #Temp ON;
RAISERROR('Preparing to INSERT INTO #Temp...',0,1) WITH NOWAIT;
INSERT INTO #Temp (Col1, Col2, Col3)
SELECT 1,2,3
RAISERROR('Insert Success!!!',0,1) WITH NOWAIT;
SET IDENTITY_INSERT #Temp OFF;
SELECT * FROM #Temp
OUTPUT:
Col1 Col2 Col3
1    2    3
1    2    3
1    2    3
Select @@version
--Microsoft SQL Server 2012 (SP1) - 11.0.3000.0 (X64)
Oct 19 2012 13:38:57
Copyright (c) Microsoft Corporation
Enterprise Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1)
Thanks Shiven:) If Answer is Helpful, Please Vote -
Compression on table and on clustered index
Hi all,
if I have a table with a clustered index on it, is rebuilding the table with compression the same as rebuilding the clustered index with compression?
Is this
ALTER TABLE dbo.TABLE_001 REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = ROW, ONLINE=ON)
Equivalent to this?
ALTER INDEX PK_TABLE_001 ON dbo.TABLE_001 REBUILD WITH (DATA_COMPRESSION=ROW, ONLINE=ON)
where PK_TABLE_001 is the clustered index of the table TABLE_001?
Andrea,
A clustered index is the table itself, organized according to the clustering key. So applying compression to the clustered index compresses the table internally: because the clustered index contains all columns of the table, the complete table is compressed. The two REBUILD statements above are therefore equivalent for this table.
Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
My Technet Wiki Article
MVP -
Move large tables and indexes into own tablespace
I currently manage a 100Gb 10.2.0.4 SE database on Windows.
There is one data tablespace and one indexes.
I have one table xxxHistory that is periodically cleared out, however, six months of data must be retained.
The table is currently 17Gb and has 95 million rows, the corresponding nine or so indexes take another 47Gb.
I am having a small problem with I/O waits on this table so, I want to move this table and the indexes to their own tablespaces, which I will create (xxxHistory_D and xxxHistory_I).
I know of two methods: exp/imp (difficult due to foreign keys) and the preferred method of:
alter table tbl move tablespace tblsp and
alter index ind rebuild tablespace tblsp.
I have no problems with the syntax etc, having used this method many times.
My question is, does anyone have a better idea of how to approach this to minimise downtime?
The system cannot be used if this table is not available.
I am also going to migrate to 11.2x when available but can't find anything in the new features to help.
Note, this is SE, so partitioning is not an option and once I have sorted this table out, I will unchain the rows of any other and reorganise the space.
Disk space is not an issue.
Thanks,
BigPhil wrote:
Note, this is SE, so partitioning is not an option and once I have sorted this table out, I will unchain the rows of any other and reorganise the space.
Disk space is not an issue.
Strategically this sounds as if you really do need partitioning; since you're on SE, you could consider the v7 "partition view" concept.
Create one table per month, and index each table separately.
Add a constraint to each table of the form: "movement date between to_date('...') and to_date('...')"
Create a union all view of the tables.
Getting rid of a single month means redefining the view.
In theory any queries you run should benefit from predicate pushdown, and thus eliminate redundant partitions, and should also handle pushing joins down into the union all view. But that's something you would have to test carefully.
You could even create the tables as index organized tables - which may be the solution to your I/O wait problems - if your queries are about stock movement then all the movements for a given stock will be thinly scattered across the table, leading to one block I/O per row required. IOTs would give you an overhead on inserts, but eliminate waits on queries.
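A minimal sketch of the partition-view idea (hypothetical table and column names; one table per month, a check constraint per table, and a UNION ALL view over them):

```sql
-- One table per month, cloned empty from the original
CREATE TABLE xxxhistory_2009_01 AS SELECT * FROM xxxhistory WHERE 1 = 0;
ALTER TABLE xxxhistory_2009_01 ADD CONSTRAINT ck_hist_2009_01
  CHECK (movement_date >= DATE '2009-01-01' AND movement_date < DATE '2009-02-01');

CREATE TABLE xxxhistory_2009_02 AS SELECT * FROM xxxhistory WHERE 1 = 0;
ALTER TABLE xxxhistory_2009_02 ADD CONSTRAINT ck_hist_2009_02
  CHECK (movement_date >= DATE '2009-02-01' AND movement_date < DATE '2009-03-01');

-- Getting rid of a month = dropping its table and redefining this view
CREATE OR REPLACE VIEW xxxhistory_v AS
SELECT * FROM xxxhistory_2009_01
UNION ALL
SELECT * FROM xxxhistory_2009_02;
```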
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
"Science is more than a body of knowledge; it is a way of thinking"
Carl Sagan -
Using large block sizes for index and table spaces
" You are not using large blocksizes for your index tablespaces. Oracle research proves that indexes will build flatter tree structures in larger blocksizes.
Is this a generic statement that I can use for all tables or indexes? I also have batch and online activity. My primary target is batch and it should not impact online. Not sure if both have common tables.
How to find the current block size used for tables and index? is there a v$parameter query?
What is an optimal block size value for batch?
How do I know when a flatter tree structure has been achieved using the above changes? Is there a query to determine this?
What about tables? What is the success criterion for tables? Can we use the same flat-tree-structure criterion? Is there a query for this?
user3390467 wrote:
" You are not using large blocksizes for your index tablespaces. Oracle research proves that indexes will build flatter tree structures in larger blocksizes.
Is this a generic statement that I can use for all tables or indexes?
This is a generic statement used by some consultants. Unfortunately, it is riddled with exceptions and other considerations.
One consultant in particular seems to have anecdotal evidence that using different block sizes for indexes (big) and data (small) can yield almost miraculous improvements. However, that cannot be backed up due to NDA. Many of the rest of us cannot duplicate the improvements, and indeed some find situations where it results in a degradation (especially with high insert/update rates from separated transactions).
I also have batch and online activity. My primary target is batch and it should not impact online. Not sure if both have common tables.
How to find the current block size used for tables and index? is there a v$parameter query?
What is an optimal block size value for batch?
How do I know when flatter tree str has been achieved using above changes? Is there a query to determine this?
What about tables, what is the success criterion for tables. can we use the same flat tree str criterion? Is there a query for this?
I'd strongly recommend that you
1) stop using generic tools to analyze specific problems
2) define your problem in detail (what are you really trying to accomplish? It seems like performance tuning, but you never really state that)
3) define the OS and DB version - in detail. Give rev levels and patch levels.
If you are having a serious performance issue, I strongly recommend you look at some performance tuning specialists like "http://www.method-r.com/", "http://www.miracleas.dk/", "http://www.hotsos.com/", "http://www.pythian.com/", or even Oracle's Performance Tuning consultants. Definitely worth the price of admission. -
Deadlock when updating different rows on a single table with one clustered index
Deadlock when updating different rows on a single table with one clustered index. Can anyone explain why?
<event name="xml_deadlock_report" package="sqlserver" timestamp="2014-07-30T06:12:17.839Z">
<data name="xml_report">
<value>
<deadlock>
<victim-list>
<victimProcess id="process1209f498" />
</victim-list>
<process-list>
<process id="process1209f498" taskpriority="0" logused="1260" waitresource="KEY: 8:72057654588604416 (8ceb12026762)" waittime="1396" ownerId="1145783115" transactionname="implicit_transaction"
lasttranstarted="2014-07-30T02:12:16.430" XDES="0x3a2daa538" lockMode="X" schedulerid="46" kpid="7868" status="suspended" spid="262" sbid="0" ecid="0" priority="0"
trancount="2" lastbatchstarted="2014-07-30T02:12:16.440" lastbatchcompleted="2014-07-30T02:12:16.437" lastattention="1900-01-01T00:00:00.437" clientapp="Internet Information Services" hostname="CHTWEB-CH2-11P"
hostpid="12776" loginname="chatuser" isolationlevel="read uncommitted (1)" xactid="1145783115" currentdb="8" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128058">
<inputbuf>
UPDATE analyst_monitor SET cam_status = N'4', cam_event_data = N'sales1', cam_event_time = current_timestamp , cam_modified_time = current_timestamp , cam_room = '' WHERE cam_analyst_name=N'ABCD' AND cam_window= 2 </inputbuf>
</process>
<process id="process9cba188" taskpriority="0" logused="2084" waitresource="KEY: 8:72057654588604416 (2280b457674a)" waittime="1397" ownerId="1145783104" transactionname="implicit_transaction"
lasttranstarted="2014-07-30T02:12:16.427" XDES="0x909616d28" lockMode="X" schedulerid="23" kpid="8704" status="suspended" spid="155" sbid="0" ecid="0" priority="0"
trancount="2" lastbatchstarted="2014-07-30T02:12:16.440" lastbatchcompleted="2014-07-30T02:12:16.437" lastattention="1900-01-01T00:00:00.437" clientapp="Internet Information Services" hostname="CHTWEB-CH2-11P"
hostpid="12776" loginname="chatuser" isolationlevel="read uncommitted (1)" xactid="1145783104" currentdb="8" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128058">
<inputbuf>
UPDATE analyst_monitor SET cam_status = N'4', cam_event_data = N'sales2', cam_event_time = current_timestamp , cam_modified_time = current_timestamp , cam_room = '' WHERE cam_analyst_name=N'12345' AND cam_window= 1 </inputbuf>
</process>
</process-list>
<resource-list>
<keylock hobtid="72057654588604416" dbid="8" objectname="CHAT.dbo.analyst_monitor" indexname="IX_Clust_scam_an_name_window" id="lock4befe1100" mode="X" associatedObjectId="72057654588604416">
<owner-list>
<owner id="process9cba188" mode="X" />
</owner-list>
<waiter-list>
<waiter id="process1209f498" mode="X" requestType="wait" />
</waiter-list>
</keylock>
<keylock hobtid="72057654588604416" dbid="8" objectname="CHAT.dbo.analyst_monitor" indexname="IX_Clust_scam_an_name_window" id="lock18ee1ab00" mode="X" associatedObjectId="72057654588604416">
<owner-list>
<owner id="process1209f498" mode="X" />
</owner-list>
<waiter-list>
<waiter id="process9cba188" mode="X" requestType="wait" />
</waiter-list>
</keylock>
</resource-list>
</deadlock>
</value>
</data>
</event>
To be honest, I don't think the transaction is necessary, but the developers put it there anyway. The select statement puts the result cam_status into a variable, and then, depending on its value, it decides whether to execute the second update statement or not. I still can't upload the screenshot, because it says it needs to verify my account first. No clue at all. But it is very simple, just like:
Clustered Index Update
[analyst_monitor].[IX_Clust_scam_an_name_window]
cost: 100%
By the way, for some reason, I can't find the object based on the associatedObjectId listed in the XML
<keylock hobtid="72057654588604416" dbid="8" objectname="CHAT.dbo.analyst_monitor"
indexname="IX_Clust_scam_an_name_window" id="lock4befe1100" mode="X" associatedObjectId="72057654588604416">
For example:
SELECT * FROM sys.partitions WHERE hobt_id = 72057654588604416
This returns nothing. Not sure why.
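For reference, a sketch of a lookup that resolves the hobt_id from the deadlock report back to the table and index:

```sql
SELECT OBJECT_SCHEMA_NAME(p.object_id) AS schema_name,
       OBJECT_NAME(p.object_id)        AS table_name,
       i.name                          AS index_name
FROM sys.partitions AS p
JOIN sys.indexes AS i
  ON i.object_id = p.object_id
 AND i.index_id  = p.index_id
WHERE p.hobt_id = 72057654588604416;
```

If this still returns nothing, the allocation unit may no longer exist: rebuilding an index assigns a new hobt_id, so a rebuild since the deadlock would explain the empty result.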