Nested Tables and Full Table Scans
Hello,
I am hoping someone can help me, as I am truly scratching my head.
I was recently introduced to nested tables because we have a poorly running query in production. What I have discovered is that when a table is created with a column that is a nested table, a unique index is automatically created on that column.
When I do an explain plan on this table A, it states that a full scan is being done on table A and on the nested table B. I can add an index to the offending columns to remove the full scan on table A, but the explain plan still shows a full scan being done on the nested table B. Bear in mind that the column with the nested table has a cardinality of 27.
What can I do? As I stated, there is an index on this nested table column, but clearly it is being ignored. The query bombed out after 4 hours, and when I ran a query to see what the record count was, it was only 2046.
Any suggestions would be greatly appreciated.
Edited by: user11887286 on Sep 10, 2009 1:05 PM
Hi and welcome to the forum.
Since your question is in fact a tuning request, you need to provide us with some more information.
See:
[How to post a SQL statement tuning request|http://forums.oracle.com/forums/thread.jspa?threadID=863295&tstart=0]
and also
[When your query takes too long|http://forums.oracle.com/forums/thread.jspa?threadID=501834]
In short:
- database version
- the actual queries you're executing
- the execution plans (explain plans)
- trace/tkprof output if available, or ask your DBA for it
- ideally, a small but concise testcase (create table + insert statements)
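For a nested-table question like the one above, a testcase along these lines would be ideal (a sketch only - the type, table and column names are invented for illustration):

```sql
-- Hypothetical nested-table testcase (all names invented).
CREATE TYPE phone_list_t AS TABLE OF VARCHAR2(20);
/
CREATE TABLE customers (
  cust_id NUMBER PRIMARY KEY,
  name    VARCHAR2(100),
  phones  phone_list_t
)
NESTED TABLE phones STORE AS customers_phones_nt;

INSERT INTO customers VALUES (1, 'Smith', phone_list_t('555-0100', '555-0101'));
COMMIT;

-- ...followed by the slow query and its plan:
EXPLAIN PLAN FOR
SELECT c.cust_id, p.COLUMN_VALUE
FROM customers c, TABLE(c.phones) p;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```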
Similar Messages
-
Annoying pop up ads now claim to be from Firefox and full system scan didn't help me.
Pop ups have been appearing in the lower left and lower right corners of my screen for many months. I installed Adblock Plus a few weeks ago and they seemed to go away. Now they have come back. The last ad I clicked on opened a new tab, blank, and the spinner was going for a few seconds; I sensed trouble and closed the tab.
While researching on the Mozilla site I got another lower right ad (purporting to be from Firefox). Hovered the mouse over it and it appeared to show a Mozilla URL in the lower left status bar, but I wasn't about to click on it. The ad looks phoney. I have a screen capture and would love to send it to you. I've got tonight's screen capture (with mouse hover) saved as JPG and PNG files. Tell me how to upload and I will.
Related or not related to the pop up issue (I'm not sure) is this: again while researching on the Mozilla site tonight, I Control-clicked on a link to read one of your malware topics and instead the new tab opened to a site called local.com. For a long time (a year) I have had the problem of being taken off to unwanted web sites when using Control-clicks. Angry.
In all these months, Windows Security Essentials is always on. I've run several full scans. Never find things. Do my best to keep all software and virus info up to date. Damn pop ups just keep happening. I can't tell you if these things happen with Windows Explorer cuz I don't use it much.
Would like to scream - if I knew it would help. Tell me what to do next. Thanks.
Install, update, and run these programs in this order. They are listed in order of efficacy. (Not all programs detect the same Malware, so you may need to run them all to solve your problem.) These programs are all free for personal use, but some have limited functionality in the "free mode" - but those are features you really don't need to find and remove the problem that you have.
Note: If your Malware infection is bad enough and you are mis-directed to URLs other than what is posted, you may have to use a different PC to download these programs and use a USB stick to transfer them to the afflicted PC.
Malwarebytes' Anti-Malware - [http://www.malwarebytes.org/mbam.php]
SuperAntispyware - [http://www.superantispyware.com/]
AdAware - [http://www.lavasoftusa.com/software/adaware/]
Spybot Search & Destroy - [http://www.safer-networking.org/en/index.html]
Windows Defender: Home Page - [http://windows.microsoft.com/en-US/windows7/products/features/windows-defender]
Also, if you have a search engine re-direct problem, see this:
http://deletemalware.blogspot.com/2010/02/remove-google-redirect-virus.html
If these don't find it or can't clear it, post in one of these forums for specialized malware removal help:
[http://www.spywarewarrior.com/index.php]
[http://forum.aumha.org/]
[http://www.spywareinfoforum.com/]
[http://bleepingcomputer.com] -
What does a "full surface scan" of a hard drive do?
I've been advised to do a "hard format and full surface scan" of new internal hard drives I'm adding to my new mac pro to hopefully avoid encountering any problems down the line. (mostly HDV video editing purposes) If there are any problems I would just exchange the drives now rather than wait for the problem to pop up unexpectedly. I'm not sure if disk utility does a true surface scan. So, is there any included apple application that does a true "full surface scan"? If not, what other inexpensive software would do it? I've heard of disk warrior and similar software but they are $100 which I'd like to avoid spending on a preemptive surface scan. Thanks.
You can look for a free utility. But you won't find a better disk catalogue maintenance program.
You could buy TechTool Pro that does a good media surface scan but then fails or doesn't actually do anything to map a sector out.
Windows will let you use the vendor's own tool, which does an excellent job.
SMART Utility sounds like it does a good job.
Intech SpeedTools has a suite of tools, but I found it didn't do a good job once I noticed the side effects and behavior of weak and bad blocks. The ZoneBench/QuickBench set is only $29 and you can do a lot with those; it helps to create multiple partitions to force a write test to every block.
Bottom line as I see it: no free lunches.
I used to believe that a zero-all pass would catch them, but the errors have to be really bad to be picked up. And a 7-way write erase takes 7X longer and really strains things. So I stick to the WD Diagnostic Utility running in Windows, and I swear by the results and the job it does. Excellent.
In theory, enterprise drives with 1.4 million MTBF hours have longer burn-in and therefore should be safer.
You could torture a drive for a few days and load it with files and then erase with zero and/or 7-way. -
Slow queries and full table scans in spite of context index
I have defined a USER_DATASTORE, which uses a PL/SQL procedure to compile data from several tables. The master table has 1.3 million rows, and one of the fields being joined is a CLOB field.
The resulting token table has 65,000 rows, which seems about right.
If I query the token table for a word, such as "ORACLE" in the token_text field, I see that the token_count is 139. This query returns instantly.
The query against the master table is very slow, taking about 15 minutes to return the 139 rows.
Example query:
select hnd from master_table where contains(myindex,'ORACLE',1) > 0;
I've run a sql_trace on this query, and it shows full table scans on both the master table and the DR$MYINDEX$I table. Why is it doing this, and how can I fix it? After looking at the tuning FAQ, I can see that this is doing a functional lookup instead of an indexed lookup. But why, when the rows are not constrained by any structural query, and how can I get it to do an indexed lookup instead?
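A quick way to see whether CONTAINS is being evaluated functionally or through the domain index is to compare plans with and without an index hint (a sketch - it assumes myindex is the indexed column and MYINDEX the index name, as the posted query and the DR$MYINDEX$I table suggest):

```sql
-- Hinted version: look for DOMAIN INDEX in the plan instead of
-- a filter on the CONTAINS() function after a full scan.
EXPLAIN PLAN FOR
SELECT /*+ INDEX(master_table myindex) */ hnd
FROM master_table
WHERE CONTAINS(myindex, 'ORACLE', 1) > 0;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```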
Thanks in advance,
Annie -
Slow query due to large table and full table scan
Hi,
We have a large Oracle database, v 10g. Two of the tables in the database have over one million rows.
We have a few queries which take a lot of time to execute. Not always, though; it seems that when load is high the queries tend
to take much longer. Average time may be 1 or 2 seconds, but maxtime can be up to 2 minutes.
We have now used Oracle Grid to help us examine the queries. We have found that some of the queries require two or three full table scans.
Two of the full table scans are of the two large tables mentioned above.
This is an example query:
SELECT table1.column, table2.column, table3.column
FROM table1
JOIN table2 on table1.table2Id = table2.id
LEFT JOIN table3 on table2.table3id = table3.id
WHERE table1.id IN(
SELECT id
FROM (
(SELECT a.*, rownum rnum FROM(
SELECT table1.id
FROM table1,
table2,
table3
WHERE
table1.table2id = table2.id
AND
table2.table3id IS NULL OR table2.table3id = :table3IdParameter
) a
WHERE rownum <= :end))
WHERE rnum >= :start)
Table1 and table2 are the large tables in this example. This query starts two full table scans on those tables.
Can we avoid this? We have, what we think are, the correct indexes.
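One thing worth double-checking in the query above (an observation from the posted SQL, not a guaranteed fix): AND binds more tightly than OR, so the inner WHERE clause is parsed as (join AND IS NULL) OR (= :table3IdParameter). If the join is meant to apply in both branches, it needs parentheses:

```sql
-- As posted, the condition is parsed as:
--   (table1.table2id = table2.id AND table2.table3id IS NULL)
--   OR table2.table3id = :table3IdParameter
-- If the join must always apply, parenthesize the OR:
WHERE table1.table2id = table2.id
  AND (table2.table3id IS NULL OR table2.table3id = :table3IdParameter)
```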
/best regards, Håkan
Hi Håkan - welcome to the forum.
We have a large Oracle database, v 10g. Two of the tables in the database have over one million rows.
We have a few queries which take a lot of time to execute. Not always, though; it seems that when load is high the queries tend
to take much longer. Average time may be 1 or 2 seconds, but maxtime can be up to 2 minutes.
We have now used Oracle Grid to help us examine the queries. We have found that some of the queries require two or three full table scans.
Two of the full table scans are of the two large tables mentioned above.
This is an example query:
Firstly, please read the forum FAQ - top right of page.
Please format your SQL using [code /code] tags, in order to help us to help you.
Please post table structures - relevant fields only (i.e. joined, FK, PK fields) - in the following form (note the use of code tags), so we can just run the table create script:
CREATE TABLE table1 (
Field1 Type1,
Field2 Type2,
FieldN TypeN
);
Then give us some table data - not 100's of records - just enough, in the form:
INSERT INTO Table1 VALUES(Field1, Field2, ..., FieldN);
Please also post the EXPLAIN PLAN - again with code tags.
HTH,
Paul...
-
Update doing full table scan and taking long time
Hi All,
I am running an update statement which is doing a full table scan.
UPDATE Database.TABLE AS T
SET COMMENTS = CAST(CAST(COALESCE(T.COMMENTS,0) AS INTEGER) + 1 AS
CHARACTER)
WHERE T.TRACKINGPOINT = 'NDEL'
AND T.REFERENCENUMBER =
SUBSTRING(Root.XML.EE_EAI_MESSAGE.ReferenceNumber || '
' FROM 1 FOR 32);
Any advice.
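For what it's worth, a full table scan here is expected unless the WHERE columns are indexed. A composite index along these lines might help (a sketch only - the table and column names are taken from the posted statement, and whether it actually helps depends on the selectivity of TRACKINGPOINT and REFERENCENUMBER):

```sql
-- Hypothetical composite index covering both WHERE predicates.
CREATE INDEX idx_trackpt_refnum
ON Database.TABLE (TRACKINGPOINT, REFERENCENUMBER);
```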
Regards,
Umair
Mustafa,
No Developer is writing it in his program.
Regards,
Umair -
Full table scan and how to avoid it
Hello,
I have two tables, one with 425,000 records, and the other with 5,200,000 records in it. The smaller table has an index on its unique primary key, and the bigger table has an index on the foreign key of the smaller table.
When joining these two tables, I keep getting full table scans on both of these tables, and I would like to understand the philosophy behind it as well as ways to avoid this.
Thanks
Are you manipulating the join columns in any fashion? Such as applying a function to them, like in:
to_char(column_a) = to_char(column_b)
Because any manipulation like that will obviate your index (assuming you don't have function-based indexes).
Really though, without your tables, indexes and query, we're left with voodoo, which is cool, but not really that effective.
*note to any and all practicing witch doctors, i really do think voodoo is cool and effective, please don't persecute me for my speakings.
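If the function really is needed on the join column, a function-based index is the usual way to keep index access (a sketch - the table and column names are invented for illustration):

```sql
-- Hypothetical function-based index, so predicates written as
-- to_char(column_a) = ... can still use an index.
CREATE INDEX idx_ta_col_a_fbi ON table_a (TO_CHAR(column_a));
```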
Message was edited by:
Tubby -
Question about Full Table Scans and Tablespaces
Good evening (or morning),
I'm reading the Oracle Concepts (I'm new to Oracle) and it seems that, based on the way that Oracle allocates and manages storage the following premise would be true:
Premise: A table that is often accessed using a full table scan (for whatever reasons) would best reside in its own dedicated tablespace.
The main reason I came to this conclusion is that when doing a full table scan, Oracle does multiblock I/O, likely reading one extent at a time. If the Tablespace's datafile(s) only contain data for a single table then a serial read will not have to skip over segments that contain data for other tables (as would be the case if the tablespace is shared with other tables). The performance improvement is probably small but, it would seem that there is one nonetheless.
I'd like to have the thoughts of experienced DBAs regarding the above premise.
Thank you for your contribution,
John.
Good morning :) Aman,
>
A little correction! A segment(be it a table,index, cluster, temporary) , would stay always in its own tablespace. Segments can't span tablespaces!
>
Fortunately, I understood that from the beginning :)
You mentioned fragmentation, I understand that too. As rows get deleted small holes start existing in the segment and those holes are not easily reusable because of their limited size.
What I am referring to is different though.
Let's consider a tablespace that is the home of 2 or more tables, the tablespace in turn is represented by one or more OS datafiles, in that case the situation will be as shown in the following diagram (not a very good diagram but... best I can do here ;) ):
Tablespace TablespaceWithManyTables
(segment 1 contents)
TableA Extent 1
TableA Block 1
TableA Block 2
Fragmentation may happen in these blocks or
even across blocks because Oracle allows rows
to span blocks
TableA Block n
End of TableA Extent 1
more extents here all for TableA
(end of segment 1 contents)
(segment 2 contents)
TableZ Extent 5
blocks here
End of TableZ Extent 5
more extents here, all for tableZ
(end of segment 2 contents)
and so on
(more segments belonging to various tables)
end of Tablespace TablespaceWithManyTables
On the other hand, if the tablespace hosts only one table, the layout will be:
Tablespace TablespaceExclusiveForTableA
(segment 1 contents)
TableA Extent 1
TableA Block 1
TableA Block 2
Fragmentation may happen in these blocks or
even across blocks because Oracle allows rows
to span blocks
TableA Block n
End of TableA Extent 1
another extent for TableA
(end of segment 1 contents)
(segment 2 contents)
TableA Extent 5
blocks here
End of TableA Extent 5
more extents for TableA
(end of segment 2 contents)
and so on
(more segments belonging to TableA)
end of Tablespace TablespaceExclusiveForTableA
The fragmentation you mentioned takes place in both cases. In the first case, regardless of fragmentation, some segments don't belong to the table that is being serially scanned, therefore they have to be skipped over at the OS level. In the second case, since all the extents belong to the same table, they can be read serially at the OS level. I realize that in that case the segments may not be read in the "right" sequence, but they don't have to be, because they can be served to the client app in sequence.
It is because of this that, I thought that if a particular table is mostly read serially, there might be a performance benefit (and also less work for Oracle) to dedicate a tablespace to it.
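For what it's worth, the interleaving of segments described above can be observed directly with a query like this (a sketch against the standard DBA_EXTENTS dictionary view; the tablespace name is a placeholder):

```sql
-- List extents in physical order within one tablespace: adjacent
-- extents owned by different segments are the "skipping over"
-- discussed above.
SELECT segment_name, extent_id, file_id, block_id, blocks
FROM dba_extents
WHERE tablespace_name = 'TABLESPACEWITHMANYTABLES'  -- placeholder
ORDER BY file_id, block_id;
```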
I can't wait to see what you think of this :)
John. -
Hello,
In a full table scan I understand that the memory block used for a newly read table block is placed at the end of the LRU.
When the second table block is read, is the same memory block replaced?
What I am asking basically is whether for a full table scan only one block in the data buffer is ever used, with the same single block being recycled for the entire content of the table.
Kind regards,
Peter Strauss
Hi Fidel,
> In oracle 10g it changes a little the behavior. It is
> recommended not to set MULTIBLOCK_READ_COUNT. You
> calculate system statistics and Oracle "decides" the
> <i>best </i>value.
Take care... oracle 10gR2 uses the system statistic values to calculate the costs (= execution plan), including the i/o statistics, but for the multiblock i/o itself it uses the parameter DB_FILE_MULTIBLOCK_READ_COUNT.
So the result is: for calculating the plan it uses the system statistics, and for the work itself it uses DB_FILE_MULTIBLOCK_READ_COUNT (if set).
http://jonathanlewis.wordpress.com/2007/05/20/system-stats-strategy/
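Both sides of that distinction can be checked directly (a sketch using standard dictionary views; querying aux_stats$ requires suitable privileges):

```sql
-- What the optimizer costs with: the gathered system statistics.
SELECT pname, pval1 FROM sys.aux_stats$ WHERE sname = 'SYSSTATS_MAIN';

-- What multiblock reads actually use at run time (if explicitly set):
SELECT value FROM v$parameter
WHERE name = 'db_file_multiblock_read_count';
```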
For the LRU thing ... oracle has a nice explanation:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/memory.htm
Regards
Stefan -
Hi All,
I am using Discoverer Version 10.1.2.1.
I have a few reports in which the tables used in the queries (in custom folders) go for full table scans in spite of all tuning efforts.
I came to know that full table scans also come from the way EUL is built and maintained.
Can anyone please throw some light on how does EUL and full table scan relate.
Also one more thing here, which is better to use, Database View on which we can base our folder or writing the complete complex query in the folder.
Any help in this case will be appreciated.
Regards,
Ankur
Hi,
Can anyone please throw some light on how does EUL and full table scan relate
The database cost-based optimiser processes the SQL that has been generated by Discoverer and creates an execution plan for the SQL. Now the execution plan will contain full table scans if the CBO calculates that FTS will give the best results. The CBO mainly uses the statistics held against the tables and the conditions in the SQL to calculate whether FTS would be better than using an index. The table join conditions are usually defined in the EUL but other conditions are usually in the workbook.
So there are many factors which control whether the database uses an FTS and only a few of them are affected by how the EUL is built.
Database View on which we can base our folder or writing the complete complex query in the folder
In general, it is always better to create a database view if that option is available to you. You can control and monitor the SQL in a database view much more easily than a query in a custom folder.
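A minimal sketch of that approach (the view name and query are invented; the point is that the SQL lives in the database, where it can be tuned and monitored, and the Discoverer folder is then based on the view):

```sql
-- Hypothetical view to base a Discoverer folder on, rather than
-- pasting the complex query into a custom folder.
CREATE OR REPLACE VIEW report_base_v AS
SELECT t1.id, t1.description, t2.status
FROM table1 t1
JOIN table2 t2 ON t2.t1_id = t1.id;
```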
Rod West -
"db file scattered read" too high and Query going for full table scan-Why ?
Hi,
I had a big table of around 200 MB with an index on it.
In my query, the where clause should be able to use the
index. I am neither using any NOT NULL condition
nor using any function on the indexed fields.
Still my query is not using the index.
It is going for a full table scan.
Also the statspack report is showing the
"db file scattered read" too high.
Can anybody help and suggest why this is happening?
Also tell me the possible solution for it.
Thanks
Arun Tayal
"db file scattered read" waits are physical, multiblock reads. This wait occurs when the session is reading data blocks from disk into memory.
Take the execution plan of the query and see what is wrong and why the index is not being used.
However, FTS are not always bad. By the way, what is your db_block_size and db_file_multiblock_read_count values?
If those values are set too high, the optimizer will always favour FTS, thinking that reading multiple blocks is always faster than single-block reads (index scans).
Don't just observe that Oracle is not using the index - find out why it is not using it. Use the INDEX hint to force the optimizer to use the index. Take the execution plan with/without the index and compare the cardinality, cost and, of course, logical reads.
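The comparison suggested above can be sketched like this (table, column and index names are placeholders for the poster's 200 MB table):

```sql
-- Hypothetical names; run once with and once without the hint,
-- then compare cardinality, cost and logical reads.
EXPLAIN PLAN FOR
SELECT /*+ INDEX(t big_table_idx) */ *
FROM big_table t
WHERE t.indexed_col = :val;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```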
Jaffar
Message was edited by:
The Human Fly -
Preventing Discoverer using Full Table Scans with Decode in a View
Hi Forum,
Hope you can help; it involves a performance issue when creating a Report / Query in Discoverer.
I have a Discoverer Report that currently takes less than 5 seconds to run. After I add a condition to bring back Batch Status = Posted, we cancelled the query after it reached 20 minutes, as this is way too long. If I remove the condition, the query time goes back to less than 5 seconds. Changing the condition to Batch Status = Unposted returns the query in seconds.
I've been doing some digging and have found the database view that is linked to the Journal Batches folder in Discoverer. See the end of this post.
I think the problem is with the column using DECODE. When querying the column in TOAD the value P is returned, but in Discoverer the condition is done on the value Posted. I'm not too sure how DECODE works, but I think this could be causing some sort of issue with full table scans.
Any idea how do we get around this?
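One common workaround (a sketch, not a verified fix: it assumes the decoded expression maps back to GL_JE_BATCHES.STATUS as in the view at the end of the post, and that an index on STATUS exists or can be created) is to condition on the raw status code rather than on the decoded label:

```sql
-- Filter on the raw code, which a plain index on STATUS can serve:
WHERE JOURNAL_BATCH1.STATUS = 'P'   -- 'P' decodes to 'Posted'
-- rather than on the DECODEd expression:
-- WHERE DECODE(JOURNAL_BATCH1.STATUS, ..., 'P', 'Posted', ...) = 'Posted'
```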
SELECT
JOURNAL_BATCH1.JE_BATCH_ID,
JOURNAL_BATCH1.NAME,
JOURNAL_BATCH1.SET_OF_BOOKS_ID,
GL_SET_OF_BOOKS.NAME,
DECODE( JOURNAL_BATCH1.STATUS,
'+', 'Unable to validate or create CTA',
'+*', 'Was unable to validate or create CTA',
'-','Invalid or inactive rounding differences account in journal entry',
'-*', 'Modified invalid or inactive rounding differences account in journal entry',
'<', 'Showing sequence assignment failure',
'<*', 'Was showing sequence assignment failure',
'>', 'Showing cutoff rule violation',
'>*', 'Was showing cutoff rule violation',
'A', 'Journal batch failed funds reservation',
'A*', 'Journal batch previously failed funds reservation',
'AU', 'Showing batch with unopened period',
'B', 'Showing batch control total violation',
'B*', 'Was showing batch control total violation',
'BF', 'Showing batch with frozen or inactive budget',
'BU', 'Showing batch with unopened budget year',
'C', 'Showing unopened reporting period',
'C*', 'Was showing unopened reporting period',
'D', 'Selected for posting to an unopened period',
'D*', 'Was selected for posting to an unopened period',
'E', 'Showing no journal entries for this batch',
'E*', 'Was showing no journal entries for this batch',
'EU', 'Showing batch with unopened encumbrance year',
'F', 'Showing unopened reporting encumbrance year',
'F*', 'Was showing unopened reporting encumbrance year',
'G', 'Showing journal entry with invalid or inactive suspense account',
'G*', 'Was showing journal entry with invalid or inactive suspense account',
'H', 'Showing encumbrance journal entry with invalid or inactive reserve account',
'H*', 'Was showing encumbrance journal entry with invalid or inactive reserve account',
'I', 'In the process of being posted',
'J', 'Showing journal control total violation',
'J*', 'Was showing journal control total violation',
'K', 'Showing unbalanced intercompany journal entry',
'K*', 'Was showing unbalanced intercompany journal entry',
'L', 'Showing unbalanced journal entry by account category',
'L*', 'Was showing unbalanced journal entry by account category',
'M', 'Showing multiple problems preventing posting of batch',
'M*', 'Was showing multiple problems preventing posting of batch',
'N', 'Journal produced error during intercompany balance processing',
'N*', 'Journal produced error during intercompany balance processing',
'O', 'Unable to convert amounts into reporting currency',
'O*', 'Was unable to convert amounts into reporting currency',
'P', 'Posted',
'Q', 'Showing untaxed journal entry',
'Q*', 'Was showing untaxed journal entry',
'R', 'Showing unbalanced encumbrance entry without reserve account',
'R*', 'Was showing unbalanced encumbrance entry without reserve account',
'S', 'Already selected for posting',
'T', 'Showing invalid period and conversion information for this batch',
'T*', 'Was showing invalid period and conversion information for this batch',
'U', 'Unposted',
'V', 'Journal batch is unapproved',
'V*', 'Journal batch was unapproved',
'W', 'Showing an encumbrance journal entry with no encumbrance type',
'W*', 'Was showing an encumbrance journal entry with no encumbrance type',
'X', 'Showing an unbalanced journal entry but suspense not allowed',
'X*', 'Was showing an unbalanced journal entry but suspense not allowed',
'Z', 'Showing invalid journal entry lines or no journal entry lines',
'Z*', 'Was showing invalid journal entry lines or no journal entry lines', NULL ),
DECODE( JOURNAL_BATCH1.ACTUAL_FLAG, 'A', 'Actual', 'B', 'Budget', 'E', 'Encumbrance', NULL ),
JOURNAL_BATCH1.DEFAULT_PERIOD_NAME,
JOURNAL_BATCH1.POSTED_DATE,
JOURNAL_BATCH1.DATE_CREATED,
JOURNAL_BATCH1.DESCRIPTION,
DECODE( JOURNAL_BATCH1.AVERAGE_JOURNAL_FLAG, 'N', 'Standard', 'Y', 'Average', NULL ),
DECODE( JOURNAL_BATCH1.BUDGETARY_CONTROL_STATUS, 'F', 'Failed', 'I', 'In Process', 'N', 'N/A', 'P', 'Passed', 'R', 'Required', NULL ),
DECODE( JOURNAL_BATCH1.APPROVAL_STATUS_CODE, 'A', 'Approved', 'I', 'In Process', 'J', 'Rejected', 'R', 'Required', 'V','Validation Failed','Z', 'N/A',NULL ),
JOURNAL_BATCH1.CONTROL_TOTAL,
JOURNAL_BATCH1.RUNNING_TOTAL_DR,
JOURNAL_BATCH1.RUNNING_TOTAL_CR,
JOURNAL_BATCH1.RUNNING_TOTAL_ACCOUNTED_DR,
JOURNAL_BATCH1.RUNNING_TOTAL_ACCOUNTED_CR,
JOURNAL_BATCH1.PARENT_JE_BATCH_ID,
JOURNAL_BATCH2.NAME
FROM
GL_JE_BATCHES JOURNAL_BATCH1,
GL_JE_BATCHES JOURNAL_BATCH2,
GL_SETS_OF_BOOKS
GL_SET_OF_BOOKS
WHERE
JOURNAL_BATCH1.PARENT_JE_BATCH_ID = JOURNAL_BATCH2.JE_BATCH_ID (+) AND
JOURNAL_BATCH1.SET_OF_BOOKS_ID = GL_SET_OF_BOOKS.SET_OF_BOOKS_ID AND
GL_SECURITY_PKG.VALIDATE_ACCESS( JOURNAL_BATCH1.SET_OF_BOOKS_ID ) = 'TRUE' WITH READ ONLY
Thanks,
Lance
Discoverer created its own SQL.
Please see below the SQL Inspector Plan:
Before Condition
SELECT STATEMENT
SORT GROUP BY
VIEW SYS
SORT GROUP BY
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
AND-EQUAL
INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N2
INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N1
TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
INDEX RANGE SCAN GL.GL_JE_LINES_N1
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
TABLE ACCESS BY INDEX ROWID GL.GL_JE_BATCHES
INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
INDEX RANGE SCAN GL.GL_PERIODS_U1
After Condition
SELECT STATEMENT
SORT GROUP BY
VIEW SYS
SORT GROUP BY
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS
TABLE ACCESS FULL GL.GL_JE_BATCHES
INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
INDEX RANGE SCAN GL.GL_JE_HEADERS_N1
INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
INDEX RANGE SCAN GL.GL_JE_LINES_U1
INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
INDEX UNIQUE SCAN GL.GL_CODE_COMBINATIONS_U1
TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
INDEX RANGE SCAN GL.GL_PERIODS_U1
TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
_________________________________ -
How can I make the optimiser skip this full table scan?
Hi,
I am trying to tune the query below. I have checked all the possibilities for avoiding the full table scan on vhd_calldesk_archive, but I am unable to find the predicate in the where clause that is making the optimiser choose the full table scan on the vhd_calldesk_archive table, which is a very large one. How can I make the optimiser skip this full table scan?
Please see the SQL script and explain plan below:
SELECT a.call_id, a.entry_date,
NVL (INITCAP (b.full_name), caller_name) AS caller_name,
c.description AS org_desc, a.env_id, i.env_desc, a.appl_id,
d.appl_desc, a.module_id, e.module_desc, a.call_type_id,
f.call_type_desc, a.priority, a.upduserid,
INITCAP (g.full_name) AS lastupdated_username, a.call_desc, h.mode_desc,
a.received_time,a.assignment_team, a.status,
ROUND (lcc.pkg_com.fn_datediff ('MI',
a.entry_date,
a.status_date
)) AS elapsed_time,
ROUND (lcc.pkg_com.fn_datediff ('MI',
a.entry_date,
a.status_date
)) AS resolved_min,
CASE
WHEN a.orgid in (1,100,200) THEN a.orgid
ELSE j.regionorgid
END AS region
,(SELECT coalesce(MAX(upddate),a.upddate) FROM lcc.vhd_callstatus stat WHERE stat.call_id = a.call_id
) as stat_upddate
,(SELECT team_desc from lcc.vhd_teams t where t.team_id = a.assignment_team) as team_desc
,a.eta_date
,coalesce(a.caller_contact, b.telephone) AS caller_contact
,coalesce(a.caller_email, b.email) as email
,a.affected_users
,a.outage_time
,a.QA_DONE
,a.LAST_ACTION_TEAM
,a.LAST_ACTION_USER
,INITCAP (k.full_name) AS last_action_username
,a.last_action_date
,l.team_desc as last_action_teamdesc
,a.refid
,INITCAP (lu.full_name) AS logged_name
,a.pmreview
,a.status as main_status
FROM lcc.vhd_calldesk_archive a
LEFT OUTER JOIN lcc.lcc_userinfo_details b ON b.user_name = a.caller_id
INNER JOIN lcc.com_organization c ON c.code = a.orgid
INNER JOIN lcc.vhd_applications d ON d.appl_id = a.appl_id
INNER JOIN lcc.vhd_modules e ON e.appl_id = a.appl_id AND e.module_id = a.module_id
INNER JOIN lcc.vhd_calltypes f ON f.call_type_id = a.call_type_id
INNER JOIN lcc.com_rptorganization j ON j.orgid = a.orgid AND j.tree = 'HLPDK'
LEFT OUTER JOIN lcc.lcc_userinfo_details g ON g.user_name = a.upduserid
LEFT OUTER JOIN lcc.vhd_callmode h ON h.mode_id = a.mode_id
LEFT OUTER JOIN lcc.vhd_environment i ON i.appl_id = a.appl_id AND i.env_id = a.env_id
LEFT OUTER JOIN lcc.lcc_userinfo_details k ON k.user_name = a.last_action_user
LEFT OUTER JOIN lcc.vhd_teams l ON l.team_id = a.last_action_user
LEFT OUTER JOIN (select CALL_ID,upduserid FROM lcc.VHD_CALLDESK_HISTORY P where upddate
in ( select min(upddate) from lcc.VHD_CALLDESK_HISTORY Q WHERE Q.CALL_ID = P.CALL_ID
group by call_id)) ku
ON ku.call_id = a.call_id
LEFT OUTER JOIN lcc.lcc_userinfo_details lu ON NVL(ku.upduserid,A.upduserid) = lu.user_name;
| Id | Operation | Name | Rows | Bytes | Cost |
| 0 | SELECT STATEMENT | | 2104 | 3667K| 37696 |
| 1 | UNION-ALL | | | | |
| 2 | NESTED LOOPS OUTER | | 2103 | 3665K| 37683 |
| 3 | VIEW | | 2103 | 3616K| 35580 |
| 4 | NESTED LOOPS OUTER | | 2103 | 823K| 35580 |
| 5 | NESTED LOOPS OUTER | | 2103 | 774K| 33477 |
| 6 | NESTED LOOPS OUTER | | 2103 | 685K| 31374 |
| 7 | NESTED LOOPS | | 2103 | 636K| 29271 |
| 8 | NESTED LOOPS | | 2103 | 603K| 27168 |
| 9 | NESTED LOOPS OUTER | | 2103 | 558K| 25065 |
| 10 | NESTED LOOPS OUTER | | 2103 | 515K| 22962 |
| 11 | NESTED LOOPS | | 2103 | 472K| 20859 |
| 12 | NESTED LOOPS | | 2103 | 429K| 18756 |
| 13 | NESTED LOOPS OUTER | | 4826 | 890K| 13930 |
| 14 | NESTED LOOPS OUTER | | 4826 | 848K| 9104 |
| 15 | NESTED LOOPS | | 4826 | 754K| 4278 |
|* 16 | TABLE ACCESS FULL | COM_RPTORGANIZATION | 75 | 1050 | 3 |
| 17 | TABLE ACCESS BY INDEX ROWID | VHD_CALLDESK | 64 | 9344 | 57 |
|* 18 | INDEX RANGE SCAN | VHD_CALLDSK_ORGID | 2476 | | 7 |
| 19 | VIEW PUSHED PREDICATE | | 1 | 20 | 1 |
|* 20 | FILTER | | | | |
| 21 | TABLE ACCESS BY INDEX ROWID | VHD_CALLDESK_HISTORY | 1 | 20 | 2 |
|* 22 | INDEX RANGE SCAN | VHD_CALLDSK_HIST_CALLID_IDX | 1 | | 1 |
|* 23 | FILTER | | | | |
| 24 | SORT GROUP BY NOSORT | | 1 | 12 | 2 |
| 25 | TABLE ACCESS BY INDEX ROWID | VHD_CALLDESK_HISTORY | 1 | 12 | 2 |
|* 26 | INDEX RANGE SCAN | VHD_CALLDSK_HIST_CALLID_IDX | 1 | | 1 |
| 27 | TABLE ACCESS BY INDEX ROWID | VHD_CALLMODE | 1 | 9 | 1 |
|* 28 | INDEX UNIQUE SCAN | VHD_CALLMOD_MODID_PK | 1 | | |
| 29 | TABLE ACCESS BY INDEX ROWID | VHD_APPLICATIONS | 1 | 20 | 1 |
|* 30 | INDEX UNIQUE SCAN | VHD_APPL_APPLID_PK | 1 | | |
| 31 | TABLE ACCESS BY INDEX ROWID | VHD_CALLTYPES | 1 | 21 | 1 |
|* 32 | INDEX UNIQUE SCAN | VHD_CALLTYP_ID_PK | 1 | | |
| 33 | TABLE ACCESS BY INDEX ROWID | VHD_TEAMS | 1 | 21 | 1 |
|* 34 | INDEX UNIQUE SCAN | VHD_TEAMID_PK | 1 | | |
| 35 | TABLE ACCESS BY INDEX ROWID | VHD_ENVIRONMENT | 1 | 21 | 1 |
|* 36 | INDEX UNIQUE SCAN | VHD_ENV_APLENVID_PK | 1 | | |
| 37 | TABLE ACCESS BY INDEX ROWID | VHD_MODULES | 1 | 22 | 1 |
|* 38 | INDEX UNIQUE SCAN | VHD_MOD_APLMOD_ID_PK | 1 | | |
| 39 | TABLE ACCESS BY INDEX ROWID | COM_ORGANIZATION | 1 | 16 | 1 |
|* 40 | INDEX UNIQUE SCAN | COM_ORG_PK | 1 | | |
| 41 | TABLE ACCESS BY INDEX ROWID | LCC_USERINFO_DETAILS | 1 | 24 |
|* 42 | INDEX UNIQUE SCAN | LCCUSERINFOIND | 1 | | |
| 43 | TABLE ACCESS BY INDEX ROWID | LCC_USERINFO_DETAILS | 1 | 43 |
|* 44 | INDEX UNIQUE SCAN | LCCUSERINFOIND | 1 | | |
| 45 | TABLE ACCESS BY INDEX ROWID | LCC_USERINFO_DETAILS | 1 | 24 | 1
|* 46 | INDEX UNIQUE SCAN | LCCUSERINFOIND | 1 | | |
| 47 | TABLE ACCESS BY INDEX ROWID | LCC_USERINFO_DETAILS | 1 | 24 | 1
|* 48 | INDEX UNIQUE SCAN | LCCUSERINFOIND | 1 | | |
| 49 | NESTED LOOPS OUTER | | 1 | 1785 | 13 |
| 50 | VIEW | | 1 | 1761 | 12 |
| 51 | NESTED LOOPS OUTER | | 1 | 1656 | 12 |
| 52 | NESTED LOOPS OUTER | | 1 | 1632 | 11 |
| 53 | NESTED LOOPS OUTER | | 1 | 1608 | 10 |
| 54 | NESTED LOOPS | | 1 | 1565 | 9 |
| 55 | NESTED LOOPS | | 1 | 1549 | 9 |
| 56 | NESTED LOOPS | | 1 | 1535 | 9 |
| 57 | NESTED LOOPS OUTER | | 1 | 1513 | 8 |
| 58 | NESTED LOOPS OUTER | | 1 | 1492 | 7 |
| 59 | NESTED LOOPS | | 1 | 1471 | 6 |
| 60 | NESTED LOOPS | | 1 | 1450 | 5 |
| 61 | NESTED LOOPS OUTER | | 1 | 1430 | 4 |
| 62 | NESTED LOOPS OUTER | | 1 | 1421 | 3 |
| 63 | TABLE ACCESS FULL | VHD_CALLDESK_ARCHIVE | 1 | 1401 | 2 |
| 64 | VIEW PUSHED PREDICATE | | 1 | 20 | 1 |
|* 65 | FILTER | | | | |
| 66 | TABLE ACCESS BY INDEX ROWID | VHD_CALLDESK_HISTORY | 1 | 20 | 2 |
|* 67 | INDEX RANGE SCAN | VHD_CALLDSK_HIST_CALLID_IDX | 1 | | 1 |
|* 68 | FILTER | | | | |
| 69 | SORT GROUP BY NOSORT | | 1 | 12 | 2 |
| 70 | TABLE ACCESS BY INDEX ROWID| VHD_CALLDESK_HISTORY | 1 | 12 | 2 |
|* 71 | INDEX RANGE SCAN | VHD_CALLDSK_HIST_CALLID_IDX | 1 | | 1 |
| 72 | TABLE ACCESS BY INDEX ROWID | VHD_CALLMODE | 1 | 9 | 1 |
|* 73 | INDEX UNIQUE SCAN | VHD_CALLMOD_MODID_PK | 1 | | |
| 74 | TABLE ACCESS BY INDEX ROWID | VHD_APPLICATIONS | 1 | 20 | 1 |
|* 75 | INDEX UNIQUE SCAN | VHD_APPL_APPLID_PK | 1 | | |
| 76 | TABLE ACCESS BY INDEX ROWID | VHD_CALLTYPES | 1 | 21 | 1 |
|* 77 | INDEX UNIQUE SCAN | VHD_CALLTYP_ID_PK | 1 | | |
| 78 | TABLE ACCESS BY INDEX ROWID | VHD_TEAMS | 1 | 21 | 1 |
|* 79 | INDEX UNIQUE SCAN | VHD_TEAMID_PK | 1 | | |
| 80 | TABLE ACCESS BY INDEX ROWID | VHD_ENVIRONMENT | 1 | 21 | 1 |
|* 81 | INDEX UNIQUE SCAN | VHD_ENV_APLENVID_PK | 1 | | |
| 82 | TABLE ACCESS BY INDEX ROWID | VHD_MODULES | 1 | 22 | 1 |
|* 83 | INDEX UNIQUE SCAN | VHD_MOD_APLMOD_ID_PK | 1 | | |
| 84 | TABLE ACCESS BY INDEX ROWID | COM_RPTORGANIZATION | 1 | 14 | |
|* 85 | INDEX UNIQUE SCAN | COM_RPTORG_PK | 1 | | |
| 86 | TABLE ACCESS BY INDEX ROWID | COM_ORGANIZATION | 1 | 16 | |
|* 87 | INDEX UNIQUE SCAN | COM_ORG_PK | 1 | | |
| 88 | TABLE ACCESS BY INDEX ROWID | LCC_USERINFO_DETAILS | 1 | 43 |
|* 89 | INDEX UNIQUE SCAN | LCCUSERINFOIND | 1 | | |
| 90 | TABLE ACCESS BY INDEX ROWID | LCC_USERINFO_DETAILS | 1 | 24 |
|* 91 | INDEX UNIQUE SCAN | LCCUSERINFOIND | 1 | | |
| 92 | TABLE ACCESS BY INDEX ROWID | LCC_USERINFO_DETAILS | 1 | 24 | 1
|* 93 | INDEX UNIQUE SCAN | LCCUSERINFOIND | 1 | | |
| 94 | TABLE ACCESS BY INDEX ROWID | LCC_USERINFO_DETAILS | 1 | 24 | 1
|* 95 | INDEX UNIQUE SCAN | LCCUSERINFOIND | 1 | | |
Predicate Information (identified by operation id):
16 - filter("J"."TREE"='HLPDK')
18 - access("J"."ORGID"="A"."ORGID")
20 - filter( EXISTS (SELECT /*+ */ 0 FROM "LCC"."VHD_CALLDESK_HISTORY" "Q" WHERE "Q"."CALL_ID"=:B1 GROUP BY "Q"."CALL_ID" HAVING MIN("Q"."UPDDATE")=:B2))
22 - access("SYS_ALIAS_2"."CALL_ID"="A"."CALL_ID")
23 - filter(MIN("Q"."UPDDATE")=:B1)
26 - access("Q"."CALL_ID"=:B1)
28 - access("H"."MODE_ID"(+)="A"."MODE_ID")
30 - access("D"."APPL_ID"="A"."APPL_ID")
32 - access("F"."CALL_TYPE_ID"="A"."CALL_TYPE_ID")
34 - access("L"."TEAM_ID"(+)="A"."LAST_ACTION_TEAM")
36 - access("I"."APPL_ID"(+)="A"."APPL_ID" AND "I"."ENV_ID"(+)="A"."ENV_ID")
38 - access("E"."APPL_ID"="A"."APPL_ID" AND "E"."MODULE_ID"="A"."MODULE_ID")
40 - access("C"."CODE"="A"."ORGID")
42 - access("K"."USER_NAME"(+)="A"."LAST_ACTION_USER")
44 - access("B"."USER_NAME"(+)="A"."CALLER_ID")
46 - access("G"."USER_NAME"(+)="A"."UPDUSERID")
48 - access("LU"."USER_NAME"(+)=NVL("SYS_ALIAS_4"."UPDUSERID_162","SYS_ALIAS_4"."UPDUSERID_25"))
65 - filter( EXISTS (SELECT /*+ */ 0 FROM "LCC"."VHD_CALLDESK_HISTORY" "Q" WHERE "Q"."CALL_ID"=:B1 GROUP BY "Q"."CALL_ID" HAVING MIN("Q"."UPDDATE")=:B2))
67 - access("SYS_ALIAS_2"."CALL_ID"="SYS_ALIAS_1"."CALL_ID")
68 - filter(MIN("Q"."UPDDATE")=:B1)
71 - access("Q"."CALL_ID"=:B1)
73 - access("H"."MODE_ID"(+)="SYS_ALIAS_1"."MODE_ID")
75 - access("D"."APPL_ID"="SYS_ALIAS_1"."APPL_ID")
77 - access("F"."CALL_TYPE_ID"="SYS_ALIAS_1"."CALL_TYPE_ID")
79 - access("L"."TEAM_ID"(+)=TO_NUMBER("SYS_ALIAS_1"."LAST_ACTION_USER"))
81 - access("I"."APPL_ID"(+)="SYS_ALIAS_1"."APPL_ID" AND "I"."ENV_ID"(+)="SYS_ALIAS_1"."ENV_ID")
83 - access("E"."APPL_ID"="SYS_ALIAS_1"."APPL_ID" AND "E"."MODULE_ID"="SYS_ALIAS_1"."MODULE_ID")
85 - access("SYS_ALIAS_1"."ORGID"="J"."ORGID" AND "J"."TREE"='HLPDK')
87 - access("C"."CODE"="SYS_ALIAS_1"."ORGID")
89 - access("B"."USER_NAME"(+)="SYS_ALIAS_1"."CALLER_ID")
91 - access("SYS_ALIAS_1"."UPDUSERID"="G"."USER_NAME"(+))
93 - access("K"."USER_NAME"(+)="SYS_ALIAS_1"."LAST_ACTION_USER")
95 - access("LU"."USER_NAME"(+)=NVL("SYS_ALIAS_3"."UPDUSERID_162","SYS_ALIAS_3"."UPDUSERID_25"))
Note: cpu costing is off
I've tried to look through your SQL and changed it a bit. Of course not tested :-)
Your problem isn't the archive table! I tried to remove the two selects from the SELECT clause. Furthermore, you have a lot of nested loops in your explain plan, which can be a performance killer. Try getting rid of them, perhaps with /*+ USE_HASH(?,?) */.
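A sketch of how such a hint could look (not tested against your schema; the aliases match the rewrite below and the select list is abbreviated):

```sql
-- Sketch only: hash-join the archive table to the aggregated status
-- rows instead of nested-looping into them. Verify the join columns
-- and alias names against your real query before using this.
SELECT /*+ USE_HASH(a stat) */
       a.call_id, a.entry_date, stat.max_upddate
FROM   lcc.vhd_calldesk_archive a
     , (SELECT call_id, MAX(upddate) AS max_upddate
        FROM lcc.vhd_callstatus
        GROUP BY call_id) stat
WHERE  a.call_id = stat.call_id;
```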
SELECT a.call_id, a.entry_date,
NVL (INITCAP (b.full_name), caller_name) AS caller_name, c.description AS org_desc, a.env_id, i.env_desc, a.appl_id,
d.appl_desc, a.module_id, e.module_desc, a.call_type_id, f.call_type_desc, a.priority, a.upduserid,
INITCAP (g.full_name) AS lastupdated_username, a.call_desc, h.mode_desc, a.received_time, a.assignment_team, a.status,
ROUND (lcc.pkg_com.fn_datediff ('MI', a.entry_date, a.status_date)) AS elapsed_time,
ROUND (lcc.pkg_com.fn_datediff ('MI', a.entry_date, a.status_date)) AS resolved_min,
CASE
WHEN a.orgid IN (1, 100, 200)
THEN a.orgid
ELSE j.regionorgid
END AS region,
COALESCE (stat.upddate, a.upddate) AS stat_upddate,
t.team_desc, a.eta_date,
COALESCE (a.caller_contact, b.telephone) AS caller_contact,
COALESCE (a.caller_email, b.email) AS email, a.affected_users,
a.outage_time, a.qa_done, a.last_action_team, a.last_action_user,
INITCAP (k.full_name) AS last_action_username, a.last_action_date,
l.team_desc AS last_action_teamdesc, a.refid,
INITCAP (lu.full_name) AS logged_name, a.pmreview,
a.status AS main_status
FROM lcc.vhd_calldesk_archive a, lcc.lcc_userinfo_details b, lcc.com_organization c,
lcc.vhd_applications d, lcc.vhd_modules e, lcc.vhd_calltypes f, lcc.com_rptorganization j,
lcc.lcc_userinfo_details g, lcc.vhd_callmode h, lcc.vhd_environment i, lcc.lcc_userinfo_details k,
lcc.vhd_teams l,
(SELECT call_id, upduserid
FROM lcc.vhd_calldesk_history p
WHERE upddate IN (SELECT MIN (upddate)
FROM lcc.vhd_calldesk_history q
WHERE q.call_id = p.call_id
GROUP BY call_id)) ku,
lcc.lcc_userinfo_details lu,
(SELECT call_id, MAX (upddate)
FROM lcc.vhd_callstatus
GROUP BY call_id) stat,
lcc.vhd_teams t
WHERE a.caller_id = b.user_name(+)
AND a.orgid = c.code
AND a.appl_id = d.appl_id
AND a.appl_id = e.appl_id
AND a.module_id = e.module_id
AND a.call_type_id = f.call_type_id
AND a.orgid = j.orgid
AND j.tree = 'HLPDK'
AND a.upduserid = g.user_name(+)
AND a.mode_id = h.mode_id(+)
AND a.appl_id = i.appl_id(+)
AND a.env_id = i.env_id(+)
AND a.last_action_user = k.user_name(+)
AND a.last_action_user = l.team_id(+)
AND a.call_id = ku.call_id(+)
AND NVL (ku.upduserid, a.upduserid) = lu.user_name(+)
AND a.call_id = stat.call_id
AND a.assignment_team = t.team_id;
-
Tables in subquery resulting in full table scans
Hi,
This is related to P1 bug 13009447. The customer recently upgraded to 10g and has now reported this type of problem for the second time.
Problem Description:
All the tables in the sub-queries are being full-scanned, and hence the query executes for hours.
Here is the query
SELECT /*+ PARALLEL*/
act.assignment_action_id
, act.assignment_id
, act.tax_unit_id
, as1.person_id
, as1.effective_start_date
, as1.primary_flag
FROM pay_payroll_actions pa1
, pay_population_ranges pop
, per_periods_of_service pos
, per_all_assignments_f as1
, pay_assignment_actions act
, pay_payroll_actions pa2
, pay_action_classifications pcl
, per_all_assignments_f as2
WHERE pa1.payroll_action_id = :b2
AND pa2.payroll_id = pa1.payroll_id
AND pa2.effective_date
BETWEEN pa1.start_date
AND pa1.effective_date
AND act.payroll_action_id = pa2.payroll_action_id
AND act.action_status IN ('C', 'S')
AND pcl.classification_name = :b3
AND pa2.consolidation_set_id = pa1.consolidation_set_id
AND pa2.action_type = pcl.action_type
AND nvl (pa2.future_process_mode, 'Y') = 'Y'
AND as1.assignment_id = act.assignment_id
AND pa1.effective_date
BETWEEN as1.effective_start_date
AND as1.effective_end_date
AND as2.assignment_id = act.assignment_id
AND pa2.effective_date
BETWEEN as2.effective_start_date
AND as2.effective_end_date
AND as2.payroll_id = as1.payroll_id
AND pos.period_of_service_id = as1.period_of_service_id
AND pop.payroll_action_id = :b2
AND pop.chunk_number = :b1
AND pos.person_id = pop.person_id
AND (   as1.payroll_id = pa1.payroll_id
     OR pa1.payroll_id IS NULL)
AND NOT EXISTS
    (SELECT /*+ PARALLEL*/ NULL
     FROM pay_assignment_actions ac2
     , pay_payroll_actions pa3
     , pay_action_interlocks int
     WHERE int.locked_action_id = act.assignment_action_id
     AND ac2.assignment_action_id = int.locking_action_id
     AND pa3.payroll_action_id = ac2.payroll_action_id
     AND pa3.action_type IN ('P', 'U'))
AND NOT EXISTS
    (SELECT /*+ PARALLEL*/ NULL
     FROM per_all_assignments_f as3
     , pay_assignment_actions ac3
     WHERE :b4 = 'N'
     AND ac3.payroll_action_id = pa2.payroll_action_id
     AND ac3.action_status NOT IN ('C', 'S')
     AND as3.assignment_id = ac3.assignment_id
     AND pa2.effective_date
     BETWEEN as3.effective_start_date
     AND as3.effective_end_date
     AND as3.person_id = as2.person_id)
ORDER BY as1.person_id
, as1.primary_flag DESC
, as1.effective_start_date
, act.assignment_id
FOR UPDATE OF as1.assignment_id
, pos.period_of_service_id
Here is the execution plan for this query. We tried adding hints in the sub-queries to force the indexes to be picked up, but it is still doing full table scans.
We suspect some DB parameter is causing this issue.
In the trace we see:
- Full table scans on tables in the first sub-query
PAY_PAYROLL_ACTIONS, PAY_ASSIGNMENT_ACTIONS, PAY_ACTION_INTERLOCKS
- Full table scans on tables in Second sub-query
PER_ALL_ASSIGNMENTS_F PAY_ASSIGNMENT_ACTIONS
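If the hints are being lost when the optimizer transforms the NOT EXISTS sub-queries into the VW_SQ_1/VW_SQ_2 views (an assumption, not verified here), one thing worth trying is naming the query block and using a global hint so the hint survives the transformation. The index name below is a placeholder, not a real object:

```sql
-- Sketch: QB_NAME lets a global INDEX hint target this sub-query even
-- after it is unnested into a view. Substitute a real index on
-- pay_action_interlocks for the placeholder name ilock_locked_idx.
AND NOT EXISTS
    (SELECT /*+ QB_NAME(ilock) INDEX(@ilock int ilock_locked_idx) */ NULL
     FROM pay_assignment_actions ac2
     , pay_payroll_actions pa3
     , pay_action_interlocks int
     WHERE int.locked_action_id = act.assignment_action_id
     AND ac2.assignment_action_id = int.locking_action_id
     AND pa3.payroll_action_id = ac2.payroll_action_id
     AND pa3.action_type IN ('P', 'U'))
```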
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 29 398.80 2192.99 238706 4991924 2383 0
Fetch 1136 378.38 1921.39 0 4820511 0 1108
total 1166 777.19 4114.38 238706 9812435 2383 1108
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 41 (APPS) (recursive depth: 1)
Rows Execution Plan
0 SELECT STATEMENT MODE: ALL_ROWS
0 FOR UPDATE
0 PX COORDINATOR
0 PX SEND (QC (ORDER)) OF ':TQ10009' [:Q1009]
0 SORT (ORDER BY) [:Q1009]
0 PX RECEIVE [:Q1009]
0 PX SEND (RANGE) OF ':TQ10008' [:Q1008]
0 HASH JOIN (ANTI BUFFERED) [:Q1008]
0 PX RECEIVE [:Q1008]
0 PX SEND (HASH) OF ':TQ10006' [:Q1006]
0 BUFFER (SORT) [:Q1006]
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PER_ALL_ASSIGNMENTS_F' (TABLE) [:Q1006]
0 NESTED LOOPS [:Q1006]
0 NESTED LOOPS [:Q1006]
0 NESTED LOOPS [:Q1006]
0 HASH JOIN (ANTI) [:Q1006]
0 BUFFER (SORT) [:Q1006]
0 PX RECEIVE [:Q1006]
0 PX SEND (HASH) OF ':TQ10002'
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PAY_ASSIGNMENT_ACTIONS' (TABLE)
0 NESTED LOOPS
0 NESTED LOOPS
0 NESTED LOOPS
0 NESTED LOOPS
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PAY_PAYROLL_ACTIONS' (TABLE)
0 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'PAY_PAYROLL_ACTIONS_PK' (INDEX (UNIQUE)
0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PAY_POPULATION_RANGES_N4' (INDEX)
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PER_PERIODS_OF_SERVICE' (TABLE)
0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PER_PERIODS_OF_SERVICE_N3' (INDEX)
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PER_ALL_ASSIGNMENTS_F' (TABLE)
0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PER_ASSIGNMENTS_N4' (INDEX)
0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PAY_ASSIGNMENT_ACTIONS_N51' (INDEX)
0 PX RECEIVE [:Q1006]
0 PX SEND (HASH) OF ':TQ10005' [:Q1005]
0 VIEW OF 'VW_SQ_1' (VIEW) [:Q1005]
0 HASH JOIN [:Q1005]
0 BUFFER (SORT) [:Q1005]
0 PX RECEIVE [:Q1005]
0 PX SEND (BROADCAST) OF ':TQ10000'
0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PAY_PAYROLL_ACTIONS' (TABLE)
0 HASH JOIN [:Q1005]
0 PX RECEIVE [:Q1005]
0 PX SEND (HASH) OF ':TQ10004' [:Q1004]
0 PX BLOCK (ITERATOR) [:Q1004]
0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PAY_ASSIGNMENT_ACTIONS' (TABLE) [:Q1004]
0 BUFFER (SORT) [:Q1005]
0 PX RECEIVE [:Q1005]
0 PX SEND (HASH) OF ':TQ10001'
0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PAY_ACTION_INTERLOCKS' (TABLE)
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PAY_PAYROLL_ACTIONS' (TABLE) [:Q1006]
0 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'PAY_PAYROLL_ACTIONS_PK' (INDEX (UNIQUE)) [:Q1006]
0 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'PAY_ACTION_CLASSIFICATIONS_PK' (INDEX (UNIQUE))[:Q1006]
0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PER_ASSIGNMENTS_F_PK' (INDEX (UNIQUE)) [:Q1006]
0 PX RECEIVE [:Q1008]
0 PX SEND (HASH) OF ':TQ10007' [:Q1007]
0 VIEW OF 'VW_SQ_2' (VIEW) [:Q1007]
0 FILTER [:Q1007]
0 HASH JOIN [:Q1007]
0 BUFFER (SORT) [:Q1007]
0 PX RECEIVE [:Q1007]
0 PX SEND (BROADCAST) OF ':TQ10003'
0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PER_ALL_ASSIGNMENTS_F' (TABLE)
0 PX BLOCK (ITERATOR) [:Q1007]
0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PAY_ASSIGNMENT_ACTIONS' (TABLE) [:Q1007]
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
enq: KO - fast object checkpoint 32 0.02 0.12
os thread startup 8 0.02 0.19
PX Deq: Join ACK 198 0.00 0.04
PX Deq Credit: send blkd 167116 1.95 1103.72
PX Deq Credit: need buffer 327389 1.95 266.30
PX Deq: Parse Reply 148 0.01 0.03
PX Deq: Execute Reply 11531 1.95 1901.50
PX qref latch 23060 0.00 0.60
db file sequential read 108199 0.17 22.11
db file scattered read 9272 0.19 51.74
PX Deq: Table Q qref 78 0.00 0.03
PX Deq: Signal ACK 1165 0.10 10.84
enq: PS - contention 73 0.00 0.00
reliable message 27 0.00 0.00
latch free 218 0.00 0.01
latch: session allocation 11 0.00 0.00
Thanks in advance
Suresh PV
Hi,
welcome,
How is the query performing if you delete all the PARALLEL hints? Most of the waits in your trace are related to parallel execution (the PX Deq events).
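A quick way to test that without editing every hint (session scope only):

```sql
-- Disable parallel query for this session, re-run the statement,
-- then compare the tkprof output against the parallel run.
ALTER SESSION DISABLE PARALLEL QUERY;
-- ... run the statement here ...
ALTER SESSION ENABLE PARALLEL QUERY;
```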
Herald ten Dam
http://htendam.wordpress.com
PS. Use "{code}" for showing your code and explain plans, it looks nicer
-
VARRAY bind parameter in IN clause causes Full Table Scan
Hi
My problem is that Oracle elects to perform a full table scan when I want it to use an index.
The situation is this: I have a single table SQL query with an IN clause such as:
SELECT EMPNO, ENAME, JOB FROM EMP WHERE ENAME IN (...)
Since this is running in an application, I want to allow the user to provide a list of ENAMEs to search. Because an IN list can't be bound as a single parameter, I've been using the Tom Kyte workaround, which relies on binding an array-valued variable and casting it to a table of records that the database can handle in an IN clause:
SELECT *
FROM EMP
WHERE ENAME IN (
SELECT *
FROM TABLE(CAST( ? AS TABLE_OF_VARCHAR)))
This resulted in very slow performance due to a full table scan. To test, I ran the SQL in SQL*Plus and provided the IN clause values in the query itself. The explain plan showed it using my index... ok, good. But once I changed the IN clause to the 'select * from table...' syntax, Oracle went into full-scan mode. I added an index hint, but it didn't change the plan. Has anyone had success using this technique without a full scan?
Thanks
John
p.s.
Please let me know if you think this should be posted on a different forum. Even though my context is a Java app developed with JDev, this seemed like a SQL question.
Justin and 3360 - that was great advice and certainly nothing I would have come up with. However, as posted, the performance still wasn't good... but it gave me a term to Google on. I found this Ask Tom page http://asktom.oracle.com/pls/ask/f?p=4950:8:::::F4950_P8_DISPLAYID:3779680732446#15740265481549, where he included a seemingly magical 'where rownum >= 0' which, when applied with your suggestions, turned my query from minutes into seconds.
My plans are as follows:
1 - Query with standard IN clause
SQL> explain plan for
2 select accession_number, protein_name, sequence_id from protein_dim
3 where accession_number in ('33460', '33458', '33451');
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost |
| 0 | SELECT STATEMENT | | 7 | 336 | 4 |
| 1 | INLIST ITERATOR | | | | |
| 2 | TABLE ACCESS BY INDEX ROWID| PROTEIN_DIM | 7 | 336 | 4 |
| 3 | INDEX RANGE SCAN | IDX_PROTEIN_ACCNUM | 7 | | 3 |
Note: cpu costing is off, 'PLAN_TABLE' is old version
11 rows selected.
2 - Standard IN Clause with Index hint
SQL> explain plan for
2 select /*+ INDEX(protein_dim IDX_PROTEIN_ACCNUM) */
3 accession_number, protein_name, sequence_id, taxon_id, organism_name, data_source
4 from pdssuser.protein_dim
5 where accession_number in
6 ('33460', '33458', '33451');
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost |
| 0 | SELECT STATEMENT | | 7 | 588 | 4 |
| 1 | INLIST ITERATOR | | | | |
| 2 | TABLE ACCESS BY INDEX ROWID| PROTEIN_DIM | 7 | 588 | 4 |
| 3 | INDEX RANGE SCAN | IDX_PROTEIN_ACCNUM | 7 | | 3 |
Note: cpu costing is off, 'PLAN_TABLE' is old version
11 rows selected.
3 - Using custom TABLE_OF_VARCHAR type
CREATE TYPE TABLE_OF_VARCHAR AS
TABLE OF VARCHAR2(50);
SQL> explain plan for
2 select
3 accession_number, protein_name, sequence_id, taxon_id, organism_name, data_source
4 from pdssuser.protein_dim
5 where accession_number in
6 (select * from table(cast(TABLE_OF_VARCHAR('33460', '33458', '33451') as TABLE_OF_VARCHAR)) t);
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost |
| 0 | SELECT STATEMENT | | 2 | 168 | 57M|
| 1 | NESTED LOOPS SEMI | | 2 | 168 | 57M|
| 2 | TABLE ACCESS FULL | PROTEIN_DIM | 5235K| 419M| 13729 |
| 3 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | | | |
Note: cpu costing is off, 'PLAN_TABLE' is old version
11 rows selected.
4 - Using custom TABLE_OF_VARCHAR type w/ Index hint
SQL> explain plan for
2 select /*+ INDEX(protein_dim IDX_PROTEIN_ACCNUM) */
3 accession_number, protein_name, sequence_id, taxon_id, organism_name, data_source
4 from pdssuser.protein_dim
5 where accession_number in
6 (select * from table(cast(TABLE_OF_VARCHAR('33460', '33458', '33451') as TABLE_OF_VARCHAR)) t);
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost |
| 0 | SELECT STATEMENT | | 2 | 168 | 57M|
| 1 | NESTED LOOPS SEMI | | 2 | 168 | 57M|
| 2 | TABLE ACCESS BY INDEX ROWID | PROTEIN_DIM | 5235K| 419M| 252K|
| 3 | INDEX FULL SCAN | IDX_PROTEIN_ACCNUM | 5235K| | 17255 |
| 4 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | | | |
PLAN_TABLE_OUTPUT
Note: cpu costing is off, 'PLAN_TABLE' is old version
12 rows selected.
5 - Using custom TABLE_OF_VARCHAR type w/ cardinality hint
SQL> explain plan for
2 select
3 accession_number, protein_name, sequence_id, taxon_id, organism_name, data_source from protein_dim
4 where accession_number in (select /*+ cardinality( t 10 ) */
5 * from TABLE(CAST (TABLE_OF_VARCHAR('33460', '33458', '33451') AS TABLE_OF_VARCHAR)) t);
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost |
| 0 | SELECT STATEMENT | | 2 | 168 | 57M|
| 1 | NESTED LOOPS SEMI | | 2 | 168 | 57M|
| 2 | TABLE ACCESS FULL | PROTEIN_DIM | 5235K| 419M| 13729 |
| 3 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | | | |
Note: cpu costing is off, 'PLAN_TABLE' is old version
11 rows selected.
6 - Using custom TABLE_OF_VARCHAR type w/ cardinality hint
and rownum >= 0 constraint
SQL> explain plan for
2 select
3 accession_number, protein_name, sequence_id, taxon_id, organism_name, data_source from protein_dim
4 where accession_number in (select /*+ cardinality( t 10 ) */
5 * from TABLE(CAST (TABLE_OF_VARCHAR('33460', '33458', '33451') AS TABLE_OF_VARCHAR)) t
6 where rownum >= 0);
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost |
| 0 | SELECT STATEMENT | | 25 | 2775 | 43 |
| 1 | TABLE ACCESS BY INDEX ROWID | PROTEIN_DIM | 2 | 168 | 3 |
| 2 | NESTED LOOPS | | 25 | 2775 | 43 |
| 3 | VIEW | VW_NSO_1 | 10 | 270 | 11 |
| 4 | SORT UNIQUE | | 10 | | |
| 5 | COUNT | | | | |
| 6 | FILTER | | | | |
PLAN_TABLE_OUTPUT
| 7 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | | | |
| 8 | INDEX RANGE SCAN | IDX_PROTEIN_ACCNUM | 2 | | 2 |
Note: cpu costing is off, 'PLAN_TABLE' is old version
16 rows selected.
I don't understand why performance improved so dramatically, but I'm happy that it did.
Thanks a ton!
John
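For anyone landing here later, the working combination above reduced to a template. One common explanation (not verified here) is that the rownum predicate stops the optimizer from merging the collection sub-query, and the cardinality hint then gives it a realistic row estimate, so the index path wins. `:bind_list` stands for the bound collection; TABLE_OF_VARCHAR is the type created earlier in the thread, and the cardinality figure should roughly match the real list size:

```sql
-- Template of plan 6 above: cardinality hint plus rownum >= 0.
SELECT accession_number, protein_name, sequence_id
FROM   protein_dim
WHERE  accession_number IN
       (SELECT /*+ cardinality(t 10) */ *
        FROM TABLE(CAST(:bind_list AS TABLE_OF_VARCHAR)) t
        WHERE rownum >= 0);
```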