SIDs for Full table scan wait events in db
Guys,
10.2.0.5/ 2 node RAC / RHEL-3
Can anyone provide me a SQL statement to find all SIDs doing full table scans in a db?
Thanks!
Hari
You ought to be able to query v$sql_plan for plans that contain a full scan. You can join this back to v$session to find the sessions currently executing a full table scan. Note that if the full scan was in a previous statement executed by the session, the join will not show that session, because the statement is no longer its current statement.
select p.sql_id, s.sid, p.object_name, p.operation, p.options
from v$sql_plan p, v$session s
where p.options like '%FULL%'
and s.sql_id = p.sql_id;
You will probably want to filter out internal operations (operation = 'FIXED TABLE').
HTH -- Mark D Powell --
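Since the original poster is on a two-node RAC, the same idea can be extended to the gv$ views and filtered for internal fixed-table operations. A sketch (the column list and filters are illustrative, not a tuned query):

```sql
-- Sketch: sessions on any RAC instance whose *current* statement
-- contains a full table scan; fixed-table scans are excluded because
-- they appear under a different operation ('FIXED TABLE').
select s.inst_id, s.sid, s.serial#, p.sql_id,
       p.object_owner, p.object_name
from   gv$sql_plan p,
       gv$session  s
where  p.operation = 'TABLE ACCESS'
and    p.options   like '%FULL%'
and    s.inst_id   = p.inst_id
and    s.sql_id    = p.sql_id
and    p.object_owner <> 'SYS';
```

As noted above, this still only catches sessions whose current statement does the scan.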
Similar Messages
-
"db file scattered read" too high and Query going for full table scan-Why ?
Hi,
I have a big table of around 200 MB with an index on it.
In my query I am using a WHERE clause which has to use the index. I am neither using any NOT NULL condition nor using any function on the indexed fields.
Still my query is not using the index; it is going for a full table scan.
Also the statspack report is showing the
"db file scattered read" too high.
Can anybody help and suggest why this is happening?
Also tell me the possible solution for it.
Thanks
Arun Tayal
"db file scattered read" waits are physical multiblock reads. This wait occurs when the session is reading data blocks from disk and writing them into memory.
Take the execution plan of the query and see what is wrong and why the index is not being used.
However, FTS is not always bad. By the way, what are your db_block_size and db_file_multiblock_read_count values?
If those values are set too high, the optimizer favours FTS, on the assumption that multiblock reads are always faster than single-block reads (index scans).
Don't just note that Oracle is not using the index; find out why it is not using it. Use the INDEX hint to force the optimizer to use the index, take the execution plan with and without the hint, and compare the cardinality, cost and, of course, logical reads.
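The comparison described here can be done with EXPLAIN PLAN. A minimal sketch; the table, column and index names are placeholders:

```sql
-- Plan without the hint
explain plan for
  select * from my_table where my_col = :b1;
select * from table(dbms_xplan.display);

-- Plan with the hint forcing the index (t is the table alias)
explain plan for
  select /*+ index(t my_col_idx) */ * from my_table t where my_col = :b1;
select * from table(dbms_xplan.display);
```

Comparing the cost, cardinality and (after execution) the logical reads of the two plans shows whether the optimizer's choice was actually reasonable.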
Jaffar
Message was edited by:
The Human Fly -
Bitmap index column goes for full table scan
Hi all,
Database : 10g R2
OS : Windows xp
my select query is :
SELECT tran_id, city_id, valid_records
FROM transaction_details
WHERE type_id=101;
And the Explain Plan is :
Plan
SELECT STATEMENT ALL_ROWS Cost: 29 Bytes: 8,876 Cardinality: 634
1 TABLE ACCESS FULL TABLE TRANSACTION_DETAILS Cost: 29 Bytes: 8,876 Cardinality: 634
total number of rows in the table = 1800 ;
distinct value of type_ids are 101,102,103
so i created a bit map index on it.
CREATE BITMAP INDEX btmp_typeid ON transaction_details
(type_id)
LOGGING
NOPARALLEL;
After creating the index, the explain plan shows the same. Why does it go for a full table scan?
Kindly share your ideas on this.
Edited by: 887268 on Apr 3, 2013 11:01 PM
Edited by: 887268 on Apr 3, 2013 11:02 PM
I am sorry for being ignorant; can you please cite any scenario of locking due to bitmap indexes? A link would be useful as well.
>
See my full reply in this thread
Bitmap index for FKs on Fact tables
>
ETL is affected because DML operations (INSERT/UPDATE/DELETE) on tables with bitmap indexes can have serious performance issues due to the serialization involved. Updating a single bitmap-indexed column value (e.g. from 'M' to 'F' for gender) requires both bitmap index entries to be locked until the update is complete. A bitmap index stores ROWID ranges (min rowid - max rowid) that can span many, many records. The entire range of rowids is locked in order to change just one value.
To change from 'M', the 'M' rowid range for that one row is locked and the ROWID must be removed from the range by clearing the bit. To change to 'F', the 'F' rowid range needs to be found and locked, and the bit set that corresponds to that rowid. No other rows with rowids in the range can be changed, since this is a serial operation. If the range includes 1000 rows and they all need to be changed, it takes 1000 serial operations. -
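Coming back to the original question, it is worth checking the arithmetic before blaming the index: the predicate type_id=101 returns 634 of 1800 rows, roughly a third of the table, and at that selectivity a full scan of such a small table is usually the cheaper plan. A sketch of how one might verify this, assuming the table and index from the post:

```sql
-- Refresh statistics so the optimizer sees current cardinalities
exec dbms_stats.gather_table_stats(user, 'TRANSACTION_DETAILS', cascade => true);

-- Check the selectivity of the predicate: 3 distinct values over 1800 rows
select type_id, count(*) from transaction_details group by type_id;

-- Re-check the plan; with roughly a third of the rows returned,
-- a full table scan is the expected (and cheaper) choice
explain plan for
  select tran_id, city_id, valid_records
  from transaction_details
  where type_id = 101;
select * from table(dbms_xplan.display);
```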
Different Cost values for full table scans
I have a very simple query that I run in two environments (Prod (20 CPU) and Dev (12 CPU)). Both environemtns are HPUX, oracle 9i.
The query looks like this:
SELECT prd70.jde_item_n
FROM gdw.vjda_gdwprd68_bom_cmpnt prd68
,gdw.vjda_gdwprd70_gallo_item prd70
WHERE prd70.jde_item_n = prd68.parnt_jde_item_n
AND prd68.last_eff_t+nvl(to_number(prd70.auto_hld_dy_n),0)>= trunc(sysdate)
GROUP BY prd70.jde_item_n
When I look at the explain plans, there is a significant difference in cost and I can't figure out why they would be different. Both queries do full table scans, both instances have about the same number of rows, statistics on both are fresh.
Production Plan:
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=18398 Card=14657 B
ytes=249169)
1 0 SORT (GROUP BY) (Cost=18398 Card=14657 Bytes=249169)
2 1 HASH JOIN (Cost=18304 Card=14657 Bytes=249169)
3 2 TABLE ACCESS (FULL) OF 'GDWPRD70_GALLO_ITEM' (Cost=949
4 Card=194733 Bytes=1168398)
4 2 TABLE ACCESS (FULL) OF 'GDWPRD68_BOM_CMPNT' (Cost=5887
Card=293149 Bytes=3224639)
Development plan:
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3566 Card=14754 B
ytes=259214)
1 0 HASH GROUP BY (GROUP BY) (Cost=3566 Card=14754 Bytes=259214)
2 1 HASH JOIN (Cost=3558 Card=14754 Bytes=259214)
3 2 TABLE ACCESS (FULL) OF 'GDWPRD70_GALLO_ITEM' (Cost=1914
4 Card=193655 Bytes=1323598)
4 2 TABLE ACCESS (FULL) OF 'GDWPRD68_BOM_CMPNT' (Cost=1076
Card=295075 Bytes=3169542)
There seems to be no reason for the costs to be so different, but I'm hoping that someone will be able to lead me in the right direction.
Thanks,
Jdelao
This link may help:
http://jaffardba.blogspot.com/2007/07/change-behavior-of-group-by-clause-in.html
But looking at the explain plans, one of them uses a SORT (GROUP BY) (the higher-cost query) and the other uses a HASH GROUP BY (the lower-cost query). From my searches on the `Net, HASH GROUP BY is a more efficient algorithm than SORT (GROUP BY), which leads me to believe that this is one of the reasons the cost values are so different. I can't find which version HASH GROUP BY came in, but quick searches indicate 10g for some reason.
Is your optimizer_features_enable parameter set to the same value in both environments? In general you could compare the relevant parameters to see if there is a difference.
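One way to compare the optimizer-related settings is to run the same query against v$parameter on both instances and diff the output. A sketch; the parameter list is a starting point, not exhaustive:

```sql
-- Run on both Prod and Dev and compare the results side by side
select name, value
from   v$parameter
where  name in ('optimizer_features_enable',
                'optimizer_mode',
                'db_file_multiblock_read_count',
                'optimizer_index_cost_adj',
                'optimizer_index_caching')
order by name;
```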
Hope this helps! -
Avoid full table scan help...
HI ,
I have a SQL statement with some filters, and all the filter columns have indexes. The table size is huge. Even though an index exists, the explain plan shows it going for a full table scan and not recognizing the index. I used the index hint /*+ INDEX (SYM.SYM_DEPL,SYM.SYDB_DE_N18) */, but the explain plan still shows a full table scan, and the query takes more time.
Please help me resolve the issue so that it uses the index rather than a full table scan.
user13301356 wrote:
HI ,
I have a SQL statement with some filters, and all the filter columns have indexes. The table size is huge. Even though an index exists, the explain plan shows it going for a full table scan and not recognizing the index. I used the index hint /*+ INDEX (SYM.SYM_DEPL,SYM.SYDB_DE_N18) */, but the explain plan still shows a full table scan, and the query takes more time.
Please help me resolve the issue so that it uses the index rather than a full table scan.
What is your database version? Are all the columns in the table indexed? Copy and paste the query that you are executing. -
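One thing worth checking is the hint syntax itself: an INDEX hint must reference the table by its query alias (or by the unqualified table name if there is no alias), not by SCHEMA.TABLE, or the hint is silently ignored. A sketch, with a placeholder alias and filter column:

```sql
-- Assuming the query aliases SYM.SYM_DEPL as d, the hint should name
-- the alias and separate alias and index with a space, not a comma.
-- some_indexed_col is a placeholder for the real filter column.
select /*+ index(d SYDB_DE_N18) */ d.*
from   sym.sym_depl d
where  d.some_indexed_col = :b1;
```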
URGENT HELP Required: Solution to avoid Full table scan for a PL/SQL query
Hi Everyone,
When I checked the EXPLAIN PLAN for the below SQL query, I saw that full table scans are going on for both tables, TABLE_A and TABLE_B:
UPDATE TABLE_A a
SET a.current_commit_date =
(SELECT MAX (b.loading_date)
FROM TABLE_B b
WHERE a.sales_order_id = b.sales_order_id
AND a.sales_order_line_id = b.sales_order_line_id
AND b.confirmed_qty > 0
AND b.data_flag IS NULL
OR b.schedule_line_delivery_date >= '23 NOV 2008')
Though TABLE_A is a small table, having nearly 1 lakh (100,000) records, TABLE_B is a huge table, having nearly 2.5 crore (25 million) records.
I created an Index on the TABLE_B having all its fields used in the WHERE clause. But, still the explain plan is showing FULL TABLE SCAN only.
When I run the query, it is taking long long time to execute (more than 1 day) and each time I have to kill the session.
Please please help me in optimizing this.
Thanks,
Sudhindra
Check the instruction again; you're leaving out information we need in order to help you, like optimizer information.
- Post your exact database version, that is: the result of select * from v$version;
- Don't use TOAD's execution plan, but use
SQL> explain plan for <your_query>;
SQL> select * from table(dbms_xplan.display);
(You can execute that in TOAD as well.)
Don't forget you need to use the {noformat}{noformat} tag in order to post formatted code/output/execution plans etc.
It's also explained in the instruction.
When was the last time statistics were gathered for table_a and table_b?
You can find out by issuing the following query:
select table_name
, last_analyzed
, num_rows
from user_tables
where table_name in ('TABLE_A', 'TABLE_B');
Can you also post the results of these counts:
select count(*)
from table_b
where confirmed_qty > 0;
select count(*)
from table_b
where data_flag is null;
select count(*)
from table_b
where schedule_line_delivery_date >= /* assuming you're using a date, and not a string*/ to_date('23 NOV 2008', 'dd mon yyyy'); -
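It is also worth confirming the intent of the OR in the original UPDATE: as written, AND binds tighter than OR, so the date predicate stands alone and the correlated subquery may effectively scan most of TABLE_B. If the date condition was meant as an alternative to the confirmed_qty/data_flag pair, parentheses are needed. A hedged sketch of that reading (the intent is an assumption):

```sql
UPDATE table_a a
SET a.current_commit_date =
    (SELECT MAX(b.loading_date)
     FROM   table_b b
     WHERE  a.sales_order_id      = b.sales_order_id
     AND    a.sales_order_line_id = b.sales_order_line_id
     -- parentheses make the OR apply only to these two alternatives
     AND   (   (b.confirmed_qty > 0 AND b.data_flag IS NULL)
            OR b.schedule_line_delivery_date >=
                 to_date('23 NOV 2008', 'dd mon yyyy')));
```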
How to find the count of tables going for FTS (full table scan) in Oracle 10g
HI
how to find the count of tables going for FTS (full table scan) in Oracle 10g
regards
Hi,
Why do you want to 'find' those tables?
Do you want to 'avoid FTS' on those tables?
You provide little information here. (Perhaps you just migrated from 9i and are having problems with certain queries now?)
FTS is sometimes the fastest way to retrieve data, and sometimes an index scan is.
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:9422487749968
There's no 'FTS view' available; if you want to know what happens on your DB you need, as Anand already said, to trace the sessions that 'worry you'. -
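That said, if the goal is only a rough count of full-table-scan operations in plans currently cached in the shared pool, something along these lines can be queried. A sketch, not a complete audit; it sees only cached plans:

```sql
-- Objects with full-scan operations in cached plans, with a plan count
select object_owner, object_name, count(*) as fts_plans
from   v$sql_plan
where  operation = 'TABLE ACCESS'
and    options   like '%FULL%'
and    object_owner not in ('SYS', 'SYSTEM')
group  by object_owner, object_name
order  by fts_plans desc;
```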
Entity Framework Generated SQL for paging or using Linq skip take causes full table scans.
The SQL generated creates queries that cause a full table scan for pagination. Is there any way to fix this?
I am using
ODP.NET ODTwithODAC1120320_32bit
ASP.NET 4.5
EF 5
Oracle 11gR2
This table has 2 million records. The further into the records you page the longer it takes.
LINQ
var cnt = (from errorLog in ctx.ERRORLOGANDSERVICELOG_VIEW
select errorLog).Count();
var query = (from errorLog in ctx.ERRORLOGANDSERVICELOG_VIEW
orderby errorLog.ERR_LOG_ID
select errorLog).Skip(cnt-10).Take(10).ToList();
Here is the query & execution plans.
SELECT *
FROM (SELECT "Extent1"."ERR_LOG_ID" AS "ERR_LOG_ID",
"Extent1"."SRV_LOG_ID" AS "SRV_LOG_ID",
"Extent1"."TS" AS "TS",
"Extent1"."MSG" AS "MSG",
"Extent1"."STACK_TRACE" AS "STACK_TRACE",
"Extent1"."MTD_NM" AS "MTD_NM",
"Extent1"."PRM" AS "PRM",
"Extent1"."INSN_ID" AS "INSN_ID",
"Extent1"."TS_1" AS "TS_1",
"Extent1"."LOG_ETRY" AS "LOG_ETRY"
FROM (SELECT "Extent1"."ERR_LOG_ID" AS "ERR_LOG_ID",
"Extent1"."SRV_LOG_ID" AS "SRV_LOG_ID",
"Extent1"."TS" AS "TS",
"Extent1"."MSG" AS "MSG",
"Extent1"."STACK_TRACE" AS "STACK_TRACE",
"Extent1"."MTD_NM" AS "MTD_NM",
"Extent1"."PRM" AS "PRM",
"Extent1"."INSN_ID" AS "INSN_ID",
"Extent1"."TS_1" AS "TS_1",
"Extent1"."LOG_ETRY" AS "LOG_ETRY",
row_number() OVER (ORDER BY "Extent1"."ERR_LOG_ID" ASC) AS "row_number"
FROM (SELECT "ERRORLOGANDSERVICELOG_VIEW"."ERR_LOG_ID" AS "ERR_LOG_ID",
"ERRORLOGANDSERVICELOG_VIEW"."SRV_LOG_ID" AS "SRV_LOG_ID",
"ERRORLOGANDSERVICELOG_VIEW"."TS" AS "TS",
"ERRORLOGANDSERVICELOG_VIEW"."MSG" AS "MSG",
"ERRORLOGANDSERVICELOG_VIEW"."STACK_TRACE" AS "STACK_TRACE",
"ERRORLOGANDSERVICELOG_VIEW"."MTD_NM" AS "MTD_NM",
"ERRORLOGANDSERVICELOG_VIEW"."PRM" AS "PRM",
"ERRORLOGANDSERVICELOG_VIEW"."INSN_ID" AS "INSN_ID",
"ERRORLOGANDSERVICELOG_VIEW"."TS_1" AS "TS_1",
"ERRORLOGANDSERVICELOG_VIEW"."LOG_ETRY" AS "LOG_ETRY"
FROM "IDS_CORE"."ERRORLOGANDSERVICELOG_VIEW" "ERRORLOGANDSERVICELOG_VIEW") "Extent1") "Extent1"
WHERE ("Extent1"."row_number" > 1933849)
ORDER BY "Extent1"."ERR_LOG_ID" ASC)
WHERE (ROWNUM <= (10))
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 10 | 31750 | | 821K (1)| 02:44:15 |
|* 1 | COUNT STOPKEY | | | | | | |
| 2 | VIEW | | 1561K| 4728M| | 821K (1)| 02:44:15 |
|* 3 | VIEW | | 1561K| 4748M| | 821K (1)| 02:44:15 |
| 4 | WINDOW SORT | | 1561K| 3154M| 4066M| 821K (1)| 02:44:15 |
|* 5 | HASH JOIN OUTER | | 1561K| 3154M| | 130K (1)| 00:26:09 |
| 6 | TABLE ACCESS FULL| IDS_SERVICES_LOG | 1047 | 52350 | | 5 (0)| 00:00:01 |
| 7 | TABLE ACCESS FULL| IDS_SERVICES_ERROR_LOG | 1561K| 3080M| | 130K (1)| 00:26:08 |
Predicate Information (identified by operation id):
1 - filter(ROWNUM<=10)
3 - filter("Extent1"."row_number">1933849)
5 - access("T1"."SRV_LOG_ID"(+)="T2"."SRV_LOG_ID")
I did try a sample from Stack Overflow that would apply it to all string types, but I didn't see any difference in the query results. Please note, I am having the problem without any ORDER BY or WHERE statements; of course the Skip/Take generates them. Please advise how I would implement the EntityFunctions.AsNonUnicode method with this LINQ query.
LINQ
var cnt = (from errorLog in ctx.ERRORLOGANDSERVICELOG_VIEW
select errorLog).Count();
var query = (from errorLog in ctx.ERRORLOGANDSERVICELOG_VIEW
orderby errorLog.ERR_LOG_ID
select errorLog).Skip(cnt-10).Take(10).ToList();
This is what I inserted into my model to hopefully fix it. From: c# - EF Code First - Globally set varchar mapping over nvarchar - Stack Overflow
/// <summary>
/// Change the "default" of all string properties for a given entity to varchar instead of nvarchar.
/// </summary>
/// <param name="modelBuilder"></param>
/// <param name="entityType"></param>
protected void SetAllStringPropertiesAsNonUnicode(
DbModelBuilder modelBuilder,
Type entityType)
var stringProperties = entityType.GetProperties().Where(
c => c.PropertyType == typeof(string)
&& c.PropertyType.IsPublic
&& c. -
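As a workaround outside EF, deep pagination like this is usually rewritten as keyset ("seek") pagination, which can use an index on ERR_LOG_ID instead of numbering 1.9 million rows with ROW_NUMBER. A sketch against the view from the post; it assumes ERR_LOG_ID is indexed in the base table and that the caller remembers the last key of the previous page:

```sql
-- Fetch the next 10 rows after the last ERR_LOG_ID already shown;
-- :last_seen_id is the highest ERR_LOG_ID from the previous page.
SELECT *
FROM   (SELECT v.*
        FROM   ids_core.errorlogandservicelog_view v
        WHERE  v.err_log_id > :last_seen_id
        ORDER  BY v.err_log_id)
WHERE  ROWNUM <= 10;
```

Whether this helps depends on the view being mergeable, so the predicate can reach the base table's index.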
Hi ,
I have posted the following :
Full Table Scans for small tables... in Oracle10g v.2
and the first post of Mr. Chris Antognini was that :
"I'm sorry to say that the documentation is wrong! In fact when a full table scan is executed, and the blocks are not cached, at least 2 I/O are performed. The first one to get the header block (where the extent map is stored) and the second to access the first and, for a very small table, only extent."
Is it really wrong....????
Thanks...
Sim
Fredrik,
I do not say in any way that the documentation in this point is wrong.....
In my first post , i have inserted a link to a thread made in another forum:
Full Table Scans for small tables... in Oracle10g v.2
Christian Antognini has written that the documentation is wrong....
I'm sorry to say that the documentation is wrong!
In fact when a full table scan is executed, and the
blocks are not cached, at least 2 I/O are performed. The
first one to get the header block (where the extent map
is stored) and the second to access the first and, for a
very small table, only extent.
I'm just wondering if he is right......!!!!!!!
Thanks..
Sim -
Full Table Scans for small tables... in Oracle10g v.2
Hi,
The query optimizer may select a full table scan instead of an index scan when the table is small (it consists of fewer blocks than the value of db_file_multiblock_read_count).
So, I tried to see it using the dept table......
SQL> select blocks , extents , bytes from USER_segments where segment_name='DEPT';
BLOCKS EXTENTS BYTES
8 1 65536
SQL> SHOW PARAMETER DB_FILE_MULTIBLOCK_READ_COUNT;
NAME TYPE VALUE
db_file_multiblock_read_count integer 16
SQL> explain plan for SELECT * FROM DEPT
2 WHERE DEPTNO=10
3 /
Explained
SQL>
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 2852011669
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Tim
| 0 | SELECT STATEMENT | | 1 | 23 | 1 (0)| 00:
| 1 | TABLE ACCESS BY INDEX ROWID| DEPT | 1 | 23 | 1 (0)| 00:
|* 2 | INDEX UNIQUE SCAN | PK_DEPT | 1 | | 0 (0)| 00:
Predicate Information (identified by operation id):
2 - access("DEPTNO"=10)
14 rows selected
So, according to the above remarks, what may be the reason for not performing a full table scan...????
Thanks...
Sim
No, I have not generalized.... In the Oracle extract I have posted there is the word "might".....
I just want to find an example in which, selecting from a small table, the query optimizer does perform a full scan instead of a type of index scan...
Sorry for that... I don't mean to be rude :)
See following...
create table index_test(id int, name varchar2(10));
create index index_test_idx on index_test(id);
insert into index_test values(1, 'name');
commit;
-- No statistics
select * from index_test where id = 1;
SELECT STATEMENT ALL_ROWS-Cost : 3
TABLE ACCESS FULL MAXGAUGE.INDEX_TEST(1) ("ID"=1)
-- If first rows mode?
alter session set optimizer_mode = first_rows;
select * from index_test where id = 1;
SELECT STATEMENT FIRST_ROWS-Cost : 802
TABLE ACCESS BY INDEX ROWID MAXGAUGE.INDEX_TEST(1)
INDEX RANGE SCAN MAXGAUGE.INDEX_TEST_IDX (ID) ("ID"=1)
-- If statistics is gathered
exec dbms_stats.gather_table_stats(user, 'INDEX_TEST', cascade=>true);
alter session set optimizer_mode = first_rows;
select * from index_test where id = 1;
SELECT STATEMENT ALL_ROWS-Cost : 2
TABLE ACCESS BY INDEX ROWID MAXGAUGE.INDEX_TEST(1)
INDEX RANGE SCAN MAXGAUGE.INDEX_TEST_IDX (ID) ("ID"=1)
alter session set optimizer_mode = all_rows;
select * from index_test where id = 1;
SELECT STATEMENT ALL_ROWS-Cost : 2
TABLE ACCESS BY INDEX ROWID MAXGAUGE.INDEX_TEST(1)
INDEX RANGE SCAN MAXGAUGE.INDEX_TEST_IDX (ID) ("ID"=1)
See some dramatic changes from the difference of parameters and statistics?
Jonathan Lewis has written a great book on cost mechanism of Oracle optimizer.
It will tell almost everything about your questions...
http://www.amazon.com/Cost-Based-Oracle-Fundamentals-Jonathan-Lewis/dp/1590596366/ref=sr_1_1?ie=UTF8&s=books&qid=1195209336&sr=1-1 -
Selecting a column goes for a full table scan
Hi,
Oracle version 10.2
I have the below query:
SELECT c.companyid,
c.contactid,
c.contactlname
FROM contact c,company ic
WHERE UPPER((c.contactlname)) LIKE CASE
WHEN 'test' IS NULL THEN
UPPER((c.contactlname))
ELSE DECODE(1,
1,
'%' || 'TEST' || '%',
2,
'TEST' || '%',
3,
'%' || 'TEST',
4,
'TEST')
END
AND c.activeflag = DECODE('Y', 'N', 'Y', c.activeflag)
AND ic.companyid=c.companyid
This is using the index on table company.
Explain plan:
SELECT STATEMENT, GOAL = ALL_ROWS 68 3523 176150
NESTED LOOPS 68 3523 176150
TABLE ACCESS FULL CONTACT 65 3523 155012
INDEX UNIQUE SCAN COMPANY_PK 0 1 6
Now if I include two more columns from the company table in the select statement, the plan changes and it goes for a full table scan...
Query:
SELECT ic.companyname,
ic.companystatustypeid,
c.companyid,
c.contactid,
c.contactlname
FROM contact c,company ic
WHERE UPPER((c.contactlname)) LIKE CASE
WHEN 'test' IS NULL THEN
UPPER((c.contactlname))
ELSE DECODE(1,
1,
'%' || 'TEST' || '%',
2,
'TEST' || '%',
3,
'%' || 'TEST',
4,
'TEST')
END
AND c.activeflag = DECODE('Y', 'N', 'Y', c.activeflag)
AND ic.companyid=c.companyid
Explain Plan:
SELECT STATEMENT, GOAL = ALL_ROWS 2126 4121 403858
HASH JOIN 2126 4121 403858
TABLE ACCESS FULL CONTACT 108 4121 185445
PARTITION LIST ALL 1959 1031340 54661020
TABLE ACCESS FULL COMPANY 1959 1031340 54661020
Any ideas why?
Don't think so. I tried removing the filters also:
i tried removing the filters also:
Query:
SELECT -- ic.companyname,
-- ic.companystatustypeid,
c.companyid,
c.contactid,
c.contactlname,
c.contactfname,
c.contactpositionid,
c.contactroledesc,
c.updateddate
FROM contact c,company ic
WHERE ic.companyid=c.companyid
Plan:
SELECT STATEMENT, GOAL = ALL_ROWS 109 73346 3520608
NESTED LOOPS 109 73346 3520608
TABLE ACCESS FULL CONTACT 51 73346 3080532
INDEX UNIQUE SCAN COMPANY_PK 0 1 6
Query:
SELECT ic.companyname,
ic.companystatustypeid,
c.companyid,
c.contactid,
c.contactlname,
c.contactfname,
c.contactpositionid,
c.contactroledesc,
c.updateddate
FROM contact c,company ic
WHERE ic.companyid=c.companyid
Plan:
SELECT STATEMENT, GOAL = ALL_ROWS 2462 73346 6894524
HASH JOIN 2462 73346 6894524
TABLE ACCESS FULL CONTACT 51 73346 3080532
PARTITION LIST ALL 1348 973674 50631048
TABLE ACCESS FULL COMPANY 1348 973674 50631048
Do the columns selected have an impact on the plan?? I thought the plan was derived on the basis of the join conditions...
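On the closing question: yes, the select list can change the plan. A likely reading of the plans shown is that in the first query the only COMPANY column needed (companyid) is available from the COMPANY_PK index, so a nested-loops probe of the index alone suffices; adding companyname and companystatustypeid forces a visit to the table for every row, and the optimizer decides a hash join with a full scan is cheaper. One hedged way to test this theory is a covering index; the index name is hypothetical and this is worth benchmarking, not a guaranteed win:

```sql
-- Hypothetical covering index: carries the two extra columns so the
-- table visit can again be avoided for this query's select list
create index company_cov_ix
  on company (companyid, companyname, companystatustypeid);
```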
Taking more time in INDEX RANGE SCAN compare to the full table scan
Hi all ,
Below are the version og my database.
SQL> select * from v$version;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for HPUX: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
I have gathered the table statistics, and the plan changed for the SQL statement.
SELECT P1.COMPANY, P1.PAYGROUP, P1.PAY_END_DT, P1.PAYCHECK_OPTION,
P1.OFF_CYCLE, P1.PAGE_NUM, P1.LINE_NUM, P1.SEPCHK FROM PS_PAY_CHECK P1
WHERE P1.FORM_ID = :1 AND P1.PAYCHECK_NBR = :2 AND
P1.CHECK_DT = :3 AND P1.PAYCHECK_OPTION <> 'R'
Plan before the gather stats.
Plan hash value: 3872726522
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | | | *14306* (100)| |
| 1 | *TABLE ACCESS FULL| PS_PAY_CHECK* | 1 | 51 | 14306 (4)| 00:02:52 |
Plan after the gather stats:
Operation Object Name Rows Bytes Cost
SELECT STATEMENT Optimizer Mode=CHOOSE
1 4
*TABLE ACCESS BY INDEX ROWID SYSADM.PS_PAY_CHECK* 1 51 *4*
*INDEX RANGE SCAN SYSADM.PS0PAY_CHECK* 1 3
After gathering stats the plan looks good, but when I execute the query it takes 5 hours; before the gather stats it finished within 2 hours. I do not want to restore my old statistics. Below are the data for the tables; when observing the run I see a lot of db file scattered reads.
NAME TYPE VALUE
_optimizer_cost_based_transformation string OFF
filesystemio_options string asynch
object_cache_optimal_size integer 102400
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.4
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string choose
optimizer_secure_view_merging boolean TRUE
plsql_optimize_level integer 2
SQL> select count(*) from sysadm.ps_pay_check;
select num_rows,blocks from dba_tables where table_name ='PS_PAY_CHECK';
COUNT(*)
1270052
SQL> SQL> SQL>
NUM_ROWS BLOCKS
1270047 63166
Event Waits Time (s) (ms) Time Wait Class
db file sequential read 1,584,677 6,375 4 36.6 User I/O
db file scattered read 2,366,398 5,689 2 32.7 User I/O
Please let me know why it is taking more time in INDEX RANGE SCAN compared to the full table scan?
suresh.ratnaji wrote:
NAME TYPE VALUE
_optimizer_cost_based_transformation string OFF
filesystemio_options string asynch
object_cache_optimal_size integer 102400
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.4
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string choose
optimizer_secure_view_merging boolean TRUE
plsql_optimize_level integer 2
Please let me know why it is taking more time in INDEX RANGE SCAN compared to the full table scan?
Suresh,
Any particular reason why you have a non-default value for the hidden parameter _optimizer_cost_based_transformation?
On my 10.2.0.1 database, its default value is "linear". What happens when you reset the value of the hidden parameter to default? -
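For completeness, if the pre-gather plan really was better, 10.2 retains superseded statistics for a retention window, and they can be restored rather than re-created by hand. A sketch using the owner and table from the post; the timestamp is a placeholder:

```sql
-- See what historical statistics are available for the table
select stats_update_time
from   dba_tab_stats_history
where  owner = 'SYSADM' and table_name = 'PS_PAY_CHECK';

-- Restore the statistics as they were before the gather
-- (replace the placeholder timestamp with one from the query above)
exec dbms_stats.restore_table_stats('SYSADM', 'PS_PAY_CHECK', -
       as_of_timestamp => to_timestamp('2011-01-01 00:00:00', 'yyyy-mm-dd hh24:mi:ss'));
```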
Tables in subquery resulting in full table scans
Hi,
This is related to P1 bug 13009447. The customer recently upgraded to 10g and has reported this type of problem for the second time.
Problem Description:
All the tables in the sub-query are resulting in full table scans, and hence the query executes for hours.
Here is the query
SELECT /*+ PARALLEL*/
act.assignment_action_id
, act.assignment_id
, act.tax_unit_id
, as1.person_id
, as1.effective_start_date
, as1.primary_flag
FROM pay_payroll_actions pa1
, pay_population_ranges pop
, per_periods_of_service pos
, per_all_assignments_f as1
, pay_assignment_actions act
, pay_payroll_actions pa2
, pay_action_classifications pcl
, per_all_assignments_f as2
WHERE pa1.payroll_action_id = :b2
AND pa2.payroll_id = pa1.payroll_id
AND pa2.effective_date
BETWEEN pa1.start_date
AND pa1.effective_date
AND act.payroll_action_id = pa2.payroll_action_id
AND act.action_status IN ('C', 'S')
AND pcl.classification_name = :b3
AND pa2.consolidation_set_id = pa1.consolidation_set_id
AND pa2.action_type = pcl.action_type
AND nvl (pa2.future_process_mode, 'Y') = 'Y'
AND as1.assignment_id = act.assignment_id
AND pa1.effective_date
BETWEEN as1.effective_start_date
AND as1.effective_end_date
AND as2.assignment_id = act.assignment_id
AND pa2.effective_date
BETWEEN as2.effective_start_date
AND as2.effective_end_date
AND as2.payroll_id = as1.payroll_id
AND pos.period_of_service_id = as1.period_of_service_id
AND pop.payroll_action_id = :b2
AND pop.chunk_number = :b1
AND pos.person_id = pop.person_id
AND (
as1.payroll_id = pa1.payroll_id
OR pa1.payroll_id IS NULL
AND NOT EXISTS
SELECT /*+ PARALLEL*/ NULL
FROM pay_assignment_actions ac2
, pay_payroll_actions pa3
, pay_action_interlocks int
WHERE int.locked_action_id = act.assignment_action_id
AND ac2.assignment_action_id = int.locking_action_id
AND pa3.payroll_action_id = ac2.payroll_action_id
AND pa3.action_type IN ('P', 'U')
AND NOT EXISTS
SELECT /*+ PARALLEL*/
NULL
FROM per_all_assignments_f as3
, pay_assignment_actions ac3
WHERE :b4 = 'N'
AND ac3.payroll_action_id = pa2.payroll_action_id
AND ac3.action_status NOT IN ('C', 'S')
AND as3.assignment_id = ac3.assignment_id
AND pa2.effective_date
BETWEEN as3.effective_start_date
AND as3.effective_end_date
AND as3.person_id = as2.person_id
ORDER BY as1.person_id
, as1.primary_flag DESC
, as1.effective_start_date
, act.assignment_id
FOR UPDATE OF as1.assignment_id
, pos.period_of_service_id
Here is the execution plan for this query. We tried adding hints in the sub-query to force index use, but it is still doing full table scans.
We suspect some db parameter is causing this issue.
In the plan:
- Full table scans on tables in the first sub-query
PAY_PAYROLL_ACTIONS, PAY_ASSIGNMENT_ACTIONS, PAY_ACTION_INTERLOCKS
- Full table scans on tables in Second sub-query
PER_ALL_ASSIGNMENTS_F PAY_ASSIGNMENT_ACTIONS
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 29 398.80 2192.99 238706 4991924 2383 0
Fetch 1136 378.38 1921.39 0 4820511 0 1108
total 1166 777.19 4114.38 238706 9812435 2383 1108
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 41 (APPS) (recursive depth: 1)
Rows Execution Plan
0 SELECT STATEMENT MODE: ALL_ROWS
0 FOR UPDATE
0 PX COORDINATOR
0 PX SEND (QC (ORDER)) OF ':TQ10009' [:Q1009]
0 SORT (ORDER BY) [:Q1009]
0 PX RECEIVE [:Q1009]
0 PX SEND (RANGE) OF ':TQ10008' [:Q1008]
0 HASH JOIN (ANTI BUFFERED) [:Q1008]
0 PX RECEIVE [:Q1008]
0 PX SEND (HASH) OF ':TQ10006' [:Q1006]
0 BUFFER (SORT) [:Q1006]
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PER_ALL_ASSIGNMENTS_F' (TABLE) [:Q1006]
0 NESTED LOOPS [:Q1006]
0 NESTED LOOPS [:Q1006]
0 NESTED LOOPS [:Q1006]
0 HASH JOIN (ANTI) [:Q1006]
0 BUFFER (SORT) [:Q1006]
0 PX RECEIVE [:Q1006]
0 PX SEND (HASH) OF ':TQ10002'
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PAY_ASSIGNMENT_ACTIONS' (TABLE)
0 NESTED LOOPS
0 NESTED LOOPS
0 NESTED LOOPS
0 NESTED LOOPS
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PAY_PAYROLL_ACTIONS' (TABLE)
0 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'PAY_PAYROLL_ACTIONS_PK' (INDEX (UNIQUE)
0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PAY_POPULATION_RANGES_N4' (INDEX)
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PER_PERIODS_OF_SERVICE' (TABLE)
0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PER_PERIODS_OF_SERVICE_N3' (INDEX)
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PER_ALL_ASSIGNMENTS_F' (TABLE)
0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PER_ASSIGNMENTS_N4' (INDEX)
0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PAY_ASSIGNMENT_ACTIONS_N51' (INDEX)
0 PX RECEIVE [:Q1006]
0 PX SEND (HASH) OF ':TQ10005' [:Q1005]
0 VIEW OF 'VW_SQ_1' (VIEW) [:Q1005]
0 HASH JOIN [:Q1005]
0 BUFFER (SORT) [:Q1005]
0 PX RECEIVE [:Q1005]
0 PX SEND (BROADCAST) OF ':TQ10000'
0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PAY_PAYROLL_ACTIONS' (TABLE)
0 HASH JOIN [:Q1005]
0 PX RECEIVE [:Q1005]
0 PX SEND (HASH) OF ':TQ10004' [:Q1004]
0 PX BLOCK (ITERATOR) [:Q1004]
0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PAY_ASSIGNMENT_ACTIONS' (TABLE) [:Q1004]
0 BUFFER (SORT) [:Q1005]
0 PX RECEIVE [:Q1005]
0 PX SEND (HASH) OF ':TQ10001'
0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PAY_ACTION_INTERLOCKS' (TABLE)
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PAY_PAYROLL_ACTIONS' (TABLE) [:Q1006]
0 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'PAY_PAYROLL_ACTIONS_PK' (INDEX (UNIQUE)) [:Q1006]
0 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'PAY_ACTION_CLASSIFICATIONS_PK' (INDEX (UNIQUE))[:Q1006]
0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PER_ASSIGNMENTS_F_PK' (INDEX (UNIQUE)) [:Q1006]
0 PX RECEIVE [:Q1008]
0 PX SEND (HASH) OF ':TQ10007' [:Q1007]
0 VIEW OF 'VW_SQ_2' (VIEW) [:Q1007]
0 FILTER [:Q1007]
0 HASH JOIN [:Q1007]
0 BUFFER (SORT) [:Q1007]
0 PX RECEIVE [:Q1007]
0 PX SEND (BROADCAST) OF ':TQ10003'
0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PER_ALL_ASSIGNMENTS_F' (TABLE)
0 PX BLOCK (ITERATOR) [:Q1007]
0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PAY_ASSIGNMENT_ACTIONS' (TABLE) [:Q1007]
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
enq: KO - fast object checkpoint 32 0.02 0.12
os thread startup 8 0.02 0.19
PX Deq: Join ACK 198 0.00 0.04
PX Deq Credit: send blkd 167116 1.95 1103.72
PX Deq Credit: need buffer 327389 1.95 266.30
PX Deq: Parse Reply 148 0.01 0.03
PX Deq: Execute Reply 11531 1.95 1901.50
PX qref latch 23060 0.00 0.60
db file sequential read 108199 0.17 22.11
db file scattered read 9272 0.19 51.74
PX Deq: Table Q qref 78 0.00 0.03
PX Deq: Signal ACK 1165 0.10 10.84
enq: PS - contention 73 0.00 0.00
reliable message 27 0.00 0.00
latch free 218 0.00 0.01
latch: session allocation 11 0.00 0.00
Thanks in advance
Suresh PV
Hi,
welcome,
How is the query performing if you delete all the PARALLEL hints? Most of the waits are related to parallel execution.
Herald ten Dam
http://htendam.wordpress.com
PS. Use the {code} tag for showing your code and explain plans; it looks nicer. -
Strange full table scan behavior
Hi all you sharp-eyed oracle gurus out there..
I need some help/tips on an update I'm running which is taking a very long time. The Oracle RDBMS is 11.2.0.1 with the advanced compression option.
I'm currently updating all rows in a table from value 1 to 0. (update mistaf set b_code='0';)
The column in question is a CHAR(1) column and is not indexed. The table is a fairly large heap table with 55 million rows to be updated, and its size is approx 11 GB. The table is compressed with the COMPRESS FOR OLTP option.
What is strange to me is that I can clearly see that a full table scan is running, but I cannot see any db file scattered read waits, as I would expect; instead I'm only seeing db file sequential reads. I suppose this might be the reason for its long execution time (dbconsole estimates 20 hours to complete, looking at SQL monitoring).
Any views on why Oracle would do db file sequential reads on a FTS? And do you agree that this might be the reason why it takes so long to complete?
More info: I first started the update and left work, and the next morning I saw that the update still wasn't finished, at which point I realised that I had a bitmap index on the column to be updated. I dropped the index and started the update once again. It seemed to execute very fast at the beginning before rapidly declining in performance.
THanks in advance for any help!tried to tkprof the trace file but no sql came up..
however the raw trace file looks like this
*** 2010-07-15 17:32:39.829
WAIT #1: nam='db file sequential read' ela= 7516 file#=11 block#=1185541 blocks=1 obj#=221897 tim=1279207959829762
WAIT #1: nam='db file sequential read' ela= 7519 file#=3 block#=567053 blocks=1 obj#=0 tim=1279207959837428
WAIT #1: nam='db file sequential read' ela= 55317 file#=3 block#=186728 blocks=1 obj#=0 tim=1279207959892903
WAIT #1: nam='db file sequential read' ela= 23363 file#=11 block#=3528062 blocks=1 obj#=221897 tim=1279207959916438
WAIT #1: nam='db file sequential read' ela= 4796 file#=3 block#=92969 blocks=1 obj#=0 tim=1279207959921314
WAIT #1: nam='db file sequential read' ela= 1426 file#=11 block#=1079147 blocks=1 obj#=221897 tim=1279207959922846
WAIT #1: nam='db file sequential read' ela= 4510 file#=11 block#=4180577 blocks=1 obj#=221897 tim=1279207959927479
WAIT #1: nam='db file sequential read' ela= 12 file#=11 block#=478 blocks=1 obj#=221897 tim=1279207959927715
WAIT #1: nam='db file sequential read' ela= 11 file#=3 block#=566015 blocks=1 obj#=0 tim=1279207959927768
WAIT #1: nam='db file sequential read' ela= 17343 file#=11 block#=1142438 blocks=1 obj#=221897 tim=1279207960025312
WAIT #1: nam='db file sequential read' ela= 11 file#=11 block#=202520 blocks=1 obj#=221897 tim=1279207960025548
WAIT #1: nam='db file sequential read' ela= 15 file#=3 block#=612704 blocks=1 obj#=0 tim=1279207960025592
WAIT #1: nam='db file sequential read' ela= 17604 file#=11 block#=1198573 blocks=1 obj#=221897 tim=1279207960043303
WAIT #1: nam='buffer busy waits' ela= 4 file#=11 block#=1473771 class#=1 obj#=221897 tim=1279207960059044
WAIT #1: nam='buffer busy waits' ela= 21 file#=11 block#=4173048 class#=1 obj#=221897 tim=1279207960066512
WAIT #1: nam='buffer busy waits' ela= 3 file#=509 block#=392139 class#=1 obj#=221897 tim=1279207960070049
WAIT #1: nam='buffer busy waits' ela= 20 file#=11 block#=1134301 class#=1 obj#=221897 tim=1279207960075224
WAIT #1: nam='db file sequential read' ela= 19164 file#=11 block#=3502287 blocks=1 obj#=221897 tim=1279207960120163
WAIT #1: nam='buffer busy waits' ela= 70 file#=3 block#=156 class#=45 obj#=0 tim=1279207960126680
WAIT #1: nam='db file sequential read' ela= 43587 file#=11 block#=3503000 blocks=1 obj#=221897 tim=1279207960189443
WAIT #1: nam='db file sequential read' ela= 14214 file#=11 block#=4135977 blocks=1 obj#=221897 tim=1279207960203841
WAIT #1: nam='latch: undo global data' ela= 28 address=11239411512 number=237 tries=0 obj#=221897 tim=1279207960226196
WAIT #1: nam='buffer busy waits' ela= 376 file#=11 block#=1343104 class#=1 obj#=221897 tim=1279207960228124
WAIT #1: nam='buffer busy waits' ela= 4 file#=11 block#=1450745 class#=1 obj#=221897 tim=1279207960236628
WAIT #1: nam='buffer busy waits' ela= 14 file#=11 block#=1456732 class#=1 obj#=221897 tim=1279207960237393
WAIT #1: nam='buffer busy waits' ela= 3 file#=11 block#=1469341 class#=1 obj#=221897 tim=1279207960239415
WAIT #1: nam='buffer busy waits' ela= 16 file#=11 block#=3498660 class#=1 obj#=221897 tim=1279207960241348
WAIT #1: nam='buffer busy waits' ela= 10 file#=11 block#=1478782 class#=1 obj#=221897 tim=1279207960242208
WAIT #1: nam='buffer busy waits' ela= 11 file#=11 block#=3529073 class#=1 obj#=221897 tim=1279207960242774
WAIT #1: nam='buffer busy waits' ela= 10 file#=11 block#=3506834 class#=1 obj#=221897 tim=1279207960243188
WAIT #1: nam='buffer busy waits' ela= 2 file#=11 block#=3550683 class#=1 obj#=221897 tim=1279207960243589
WAIT #1: nam='buffer busy waits' ela= 3 file#=11 block#=4082313 class#=1 obj#=221897 tim=1279207960244816
WAIT #1: nam='buffer busy waits' ela= 3 file#=11 block#=4090328 class#=1 obj#=221897 tim=1279207960245086
WAIT #1: nam='buffer busy waits' ela= 2 file#=11 block#=3555804 class#=1 obj#=221897 tim=1279207960245350
WAIT #1: nam='buffer busy waits' ela= 2 file#=11 block#=3483832 class#=1 obj#=221897 tim=1279207960245549
WAIT #1: nam='buffer busy waits' ela= 3 file#=11 block#=4115411 class#=1 obj#=221897 tim=1279207960246323
WAIT #1: nam='buffer busy waits' ela= 2 file#=11 block#=4100593 class#=1 obj#=221897 tim=1279207960246791
WAIT #1: nam='buffer busy waits' ela= 3 file#=11 block#=4135120 class#=1 obj#=221897 tim=1279207960247407
WAIT #1: nam='buffer busy waits' ela= 2 file#=11 block#=4119599 class#=1 obj#=221897 tim=1279207960247832
WAIT #1: nam='buffer busy waits' ela= 3 file#=11 block#=4174925 class#=1 obj#=221897 tim=1279207960249045
WAIT #1: nam='buffer busy waits' ela= 3 file#=11 block#=4185250 class#=1 obj#=221897 tim=1279207960249699
WAIT #1: nam='buffer busy waits' ela= 2 file#=11 block#=4188816 class#=1 obj#=221897 tim=1279207960250138
WAIT #1: nam='buffer busy waits' ela= 3 file#=11 block#=4189312 class#=1 obj#=221897 tim=1279207960250363
WAIT #1: nam='buffer busy waits' ela= 3 file#=11 block#=4190380 class#=1 obj#=221897 tim=1279207960250618
WAIT #1: nam='buffer busy waits' ela= 3 file#=11 block#=4190996 class#=1 obj#=221897 tim=1279207960251339
WAIT #1: nam='buffer busy waits' ela= 9 file#=11 block#=4176416 class#=1 obj#=221897 tim=1279207960251490
WAIT #1: nam='buffer busy waits' ela= 11 file#=509 block#=436859 class#=1 obj#=221897 tim=1279207960253748
WAIT #1: nam='buffer busy waits' ela= 2 file#=509 block#=426961 class#=1 obj#=221897 tim=1279207960253993
WAIT #1: nam='db file sequential read' ela= 18802 file#=509 block#=413732 blocks=1 obj#=221897 tim=1279207960273210
WAIT #1: nam='db file sequential read' ela= 13 file#=3 block#=615387 blocks=1 obj#=0 tim=1279207960273322
WAIT #1: nam='db file sequential read' ela= 3948 file#=3 block#=569033 blocks=1 obj#=0 tim=1279207960277522
WAIT #1: nam='db file sequential read' ela= 14 file#=11 block#=4191700 blocks=1 obj#=221897 tim=1279207960333755
WAIT #1: nam='db file sequential read' ela= 3745 file#=11 block#=1197543 blocks=1 obj#=221897 tim=1279207960358279
WAIT #1: nam='db file sequential read' ela= 4541 file#=11 block#=472946 blocks=1 obj#=221897 tim=1279207960363005
WAIT #1: nam='db file sequential read' ela= 7775 file#=3 block#=229860 blocks=1 obj#=0 tim=1279207960370848
WAIT #1: nam='db file sequential read' ela= 22319 file#=11 block#=1150525 blocks=1 obj#=221897 tim=1279207960393342
WAIT #1: nam='db file sequential read' ela= 17058 file#=11 block#=3542375 blocks=1 obj#=221897 tim=1279207960410577
WAIT #1: nam='db file sequential read' ela= 16042 file#=509 block#=437647 blocks=1 obj#=221897 tim=1279207960427928
WAIT #1: nam='db file sequential read' ela= 6412 file#=3 block#=542118 blocks=1 obj#=0 tim=1279207960434440
WAIT #1: nam='buffer busy waits' ela= 660 file#=3 block#=88 class#=23 obj#=0 tim=1279207960457208
WAIT #1: nam='db file sequential read' ela= 13 file#=11 block#=4140513 blocks=1 obj#=221897 tim=1279207960467438
WAIT #1: nam='db file sequential read' ela= 5451 file#=11 block#=3516234 blocks=1 obj#=221897 tim=1279207960472965
WAIT #1: nam='db file sequential read' ela= 5121 file#=11 block#=3514597 blocks=1 obj#=221897 tim=1279207960478231
WAIT #1: nam='db file sequential read' ela= 3982 file#=3 block#=1039898 blocks=1 obj#=0 tim=1279207960482281
WAIT #1: nam='db file sequential read' ela= 5391 file#=509 block#=433941 blocks=1 obj#=221897 tim=1279207960487775
WAIT #1: nam='db file sequential read' ela= 9707 file#=11 block#=3551543 blocks=1 obj#=221897 tim=1279207960529848
WAIT #1: nam='buffer busy waits' ela= 4 file#=11 block#=4090328 class#=1 obj#=221897 tim=1279207960610165
WAIT #1: nam='buffer busy waits' ela= 2 file#=11 block#=4115879 class#=1 obj#=221897 tim=1279207960611710
WAIT #1: nam='buffer busy waits' ela= 2 file#=11 block#=4100364 class#=1 obj#=221897 tim=1279207960612167
WAIT #1: nam='buffer busy waits' ela= 3 file#=11 block#=4133339 class#=1 obj#=221897 tim=1279207960612648
WAIT #1: nam='db file sequential read' ela= 7254 file#=509 block#=405005 blocks=1 obj#=221897 tim=1279207960631133
WAIT #1: nam='db file sequential read' ela= 25608 file#=11 block#=1181075 blocks=1 obj#=221897 tim=1279207960693920
etc -
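The file#/block# pairs in a raw trace like the one above can be mapped back to the segment they belong to via dba_extents (a sketch; the file# and block# values are taken from the first wait line above as an example, and this query can be slow on databases with many extents):

```sql
-- Map a file#/block# pair from the trace back to its segment.
select owner, segment_name, segment_type
from   dba_extents
where  file_id = 11
and    1185541 between block_id and block_id + blocks - 1;
```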
Do partition scans take longer than a full table scan on an unpartitioned table?
Hello there,
I have a range-partitioned table PART_TABLE which has 10 Million records and 10 partitions having 1 million records each. Partition is done based on a Column named ID which is a sequence from 1 to 10 million.
I created another table P2_BKP (doing a select * from part_table) which has the same dataset as that of PART_TABLE except that this table is not partitioned.
Now, I run the same query on both tables to retrieve a range of data. Precisely, I am trying to read only the data present in 5 partitions of the partitioned table, which theoretically requires fewer reads than when done on the unpartitioned table.
Yet, the query seems to take more time on the partitioned table than when run on the unpartitioned table. Any specific reason why this is the case?
Below is the query I am trying to run on both the tables and their corresponding Explain Plans.
QUERY A
=========
select * from P2_BKP where id<5000000;
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 6573K| 720M| 12152 (2)| 00:02:26 |
|* 1 | TABLE ACCESS FULL| P2_BKP | 6573K| 720M| 12152 (2)| 00:02:26 |
QUERY B
========
select * from part_table where id<5000000;
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 3983K| 436M| 22181 (73)| 00:04:27 | | |
| 1 | PARTITION RANGE ITERATOR| | 3983K| 436M| 22181 (73)| 00:04:27 | 1 | 5 |
|* 2 | TABLE ACCESS FULL | PART_TABLE | 3983K| 436M| 22181 (73)| 00:04:27 | 1 | 5 |

At the risk of bringing unnecessary confusion into the discussion: I think there is a situation in 11g in which a full table scan on a non-partitioned table can be faster than the FTS on a corresponding partitioned table: if the size of the non-partitioned table reaches a certain threshold (I think it's: blocks > _small_table_threshold * 5), the runtime engine may decide to use a serial direct path read to access the data. If the single partitions don't pass the threshold, the engine will use the conventional path.
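The _small_table_threshold mentioned here is a hidden parameter; it can be inspected with a query like the following (a common sketch against the undocumented x$ fixed tables, which requires a SYSDBA connection):

```sql
-- Show the value of the hidden parameter _small_table_threshold.
select a.ksppinm  name,
       b.ksppstvl value
from   x$ksppi  a,
       x$ksppcv b
where  a.indx = b.indx
and    a.ksppinm = '_small_table_threshold';
```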
Here is a small example for my assertion:
-- I create a simple partitioned table
-- and a corresponding non-partitioned table
-- with 1M rows
drop table tab_part;
create table tab_part (
  col_part number
, padding varchar2(100)
)
partition by list (col_part)
(
  partition P00 values (0)
, partition P01 values (1)
, partition P02 values (2)
, partition P03 values (3)
, partition P04 values (4)
, partition P05 values (5)
, partition P06 values (6)
, partition P07 values (7)
, partition P08 values (8)
, partition P09 values (9)
);
insert into tab_part
select mod(rownum, 10)
, lpad('*', 100, '*')
from dual
connect by level <= 1000000;
exec dbms_stats.gather_table_stats(user, 'tab_part')
drop table tab_nopart;
create table tab_nopart
as
select *
from tab_part;
exec dbms_stats.gather_table_stats(user, 'tab_nopart')
-- my _small_table_threshold is 1777 and the partitions
-- have a size of ca. 1600 blocks while the non-partitioned table
-- contains 15360 blocks
-- I have to flush the buffer cache since
-- the direct path access is only used
-- if there are few blocks already in the cache
alter system flush buffer_cache;
-- the execution plans are not really exciting
| Id | Operation | Name | Rows | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 1 | 8089 (0)| 00:00:41 | | |
| 1 | SORT AGGREGATE | | 1 | | | | |
| 2 | PARTITION LIST ALL| | 1000K| 8089 (0)| 00:00:41 | 1 | 10 |
| 3 | TABLE ACCESS FULL| TAB_PART | 1000K| 8089 (0)| 00:00:41 | 1 | 10 |
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 7659 (0)| 00:00:39 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| TAB_NOPART | 1000K| 7659 (0)| 00:00:39 |
But on my PC the FTS on the non-partitioned table is faster than the FTS on the partitions (1 sec vs. 3 sec), and v$sesstat shows the reason for this difference:
-- non partitioned table
NAME DIFF
table scan rows gotten 1000000
file io wait time 15313
session logical reads 15156
physical reads 15153
consistent gets direct 15152
physical reads direct 15152
DB time 95
-- partitioned table
NAME DIFF
file io wait time 2746493
table scan rows gotten 1000000
session logical reads 15558
physical reads 15518
physical reads cache prefetch 15202
DB time 295
(maybe my choice of counters is questionable)
So it's possible to get slower access for an FTS on a partitioned table under special conditions.
Regards
Martin
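The NAME/DIFF figures above can be reproduced by snapshotting v$sesstat before and after the test query and subtracting the values; a sketch (here :sid is the SID of the test session, an assumption to be filled in):

```sql
-- Snapshot selected session statistics for one session;
-- run before and after the test query and diff the results.
select n.name, s.value
from   v$sesstat  s,
       v$statname n
where  s.statistic# = n.statistic#
and    s.sid        = :sid
and    n.name in ('table scan rows gotten',
                  'session logical reads',
                  'physical reads',
                  'physical reads direct',
                  'DB time');
```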