Avoid full table scan help...

Hi,
I have a SQL statement with some filters, and all of the filter columns are indexed. The table is huge and the index exists, but in the explain plan the query goes for a full table scan; it is not using the index. I used the index hint /*+ INDEX (SYM.SYM_DEPL,SYM.SYDB_DE_N18) */ but the explain plan still shows a full table scan and the query takes a long time.
Please help me resolve this so that the query uses the index rather than a full table scan.

user13301356 wrote:
Hi,
I have a SQL statement with some filters, and all of the filter columns are indexed. The table is huge and the index exists, but in the explain plan the query goes for a full table scan; it is not using the index. I used the index hint /*+ INDEX (SYM.SYM_DEPL,SYM.SYDB_DE_N18) */ but the explain plan still shows a full table scan and the query takes a long time.
Please help me resolve this so that the query uses the index rather than a full table scan.

What is the database version? Are all columns in the table indexed? Copy and paste the query that you are executing.
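A general note (not from the thread) on why an INDEX hint is often ignored: the hint must refer to the table by the alias used in the FROM clause (or the bare table name if there is no alias), and neither the table nor the index may be schema-qualified inside the hint. A hedged sketch, assuming SYM.SYM_DEPL is aliased as d and SYDB_DE_N18 is an index on that table:
select /*+ index(d sydb_de_n18) */ ...
  from sym.sym_depl d
 where ...;
Even with the hint spelled this way, the optimizer can only use the index if it is actually usable for the predicates (for example, no function wrapped around the leading indexed column and no implicit datatype conversion), so posting the full query and plan as requested above is still the right next step.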

Similar Messages

  • URGENT HELP Required: Solution to avoid Full table scan for a PL/SQL query

    Hi Everyone,
When I checked the EXPLAIN PLAN for the below SQL query, I saw that full table scans are occurring on both tables, TABLE_A and TABLE_B:
    UPDATE TABLE_A a
    SET a.current_commit_date =
    (SELECT MAX (b.loading_date)
    FROM TABLE_B b
    WHERE a.sales_order_id = b.sales_order_id
    AND a.sales_order_line_id = b.sales_order_line_id
    AND b.confirmed_qty > 0
    AND b.data_flag IS NULL
    OR b.schedule_line_delivery_date >= '23 NOV 2008')
Though TABLE_A is a small table with nearly 100,000 (1 lakh) records, TABLE_B is a huge table with nearly 25 million (2.5 crore) records.
I created an index on TABLE_B covering all of its fields used in the WHERE clause, but the explain plan is still showing a FULL TABLE SCAN.
When I run the query, it takes a very long time to execute (more than 1 day) and each time I have to kill the session.
Please help me optimize this.
    Thanks,
    Sudhindra

    Check the instruction again, you're leaving out information we need in order to help you, like optimizer information.
    - Post your exact database version, that is: the result of select * from v$version;
    - Don't use TOAD's execution plan, but use
    SQL> explain plan for <your_query>;
SQL> select * from table(dbms_xplan.display);
(You can execute that in TOAD as well.)
    Don't forget you need to use the {noformat}{noformat} tag in order to post formatted code/output/execution plans etc.
    It's also explained in the instruction.
    When was the last time statistics were gathered for table_a and table_b?
You can find out by issuing the following query:
select table_name
     , last_analyzed
     , num_rows
from user_tables
where table_name in ('TABLE_A', 'TABLE_B');
Can you also post the results of these counts:
select count(*)
from table_b
where confirmed_qty > 0;
select count(*)
from table_b
where data_flag is null;
select count(*)
from table_b
where schedule_line_delivery_date >= /* assuming you're using a date, and not a string */ to_date('23 NOV 2008', 'dd mon yyyy');
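If the statistics turn out to be missing or stale, a minimal sketch of refreshing them (table names come from the thread; the owner and the other parameters are assumptions):
begin
  dbms_stats.gather_table_stats(
    ownname          => user,                          -- assumption: tables are in the current schema
    tabname          => 'TABLE_B',
    cascade          => true,                          -- also gather statistics on the indexes
    estimate_percent => dbms_stats.auto_sample_size);
end;
/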

  • Select statement in a function does Full Table Scan

    All,
I have been coding a stored procedure that writes 38K rows in less than a minute. If I add another column which requires a call to a package and 4 functions within that package, it runs for about 4 hours. I have confirmed that, due to problems in one of the functions, the code does full table scans. The package and all of its functions were written by other contractors who are long gone.
Please note that case_number_in (VARCHAR2) and effective_date_in (DATE) are parameters sent to the problem function, and I have verified through TOAD's debugger that their values are correct.
The table named ps2_benefit_register has over 40 million rows, but case_number is an indexed column on that table.
The table named ps1_case_fs has more than 20 million rows and also has an index on case_number.
Select #1 – causes a full table scan; the procedure runs and writes the 38K rows in a couple of hours.
    {case}
    SELECT max(a2.application_date)
    INTO l_app_date
    FROM dwfssd.ps2_benefit_register a1, dwfssd.ps2_case_fs a2
    WHERE a2.case_number = case_number_in and
    a1.case_number = a2.case_number and
    a2.application_date <= effective_date_in and
    a1.DOCUMENT_TYPE = 'F';
    {case}
Select #2 – hard-coding the values makes the code write the same 38K rows in a few minutes.
    {case}
    SELECT max(a2.application_date)
    INTO l_app_date
    FROM dwfssd.ps2_benefit_register a1, dwfssd.ps2_case_fs a2
    WHERE a2.case_number = 'A006438' and
    a1.case_number = a2.case_number and
    a2.application_date <= '01-Apr-2009' and
    a1.DOCUMENT_TYPE = 'F';
    {case}
Why does using the values passed in the parameters in the first select statement cause a full table scan?
    Thank you for your help,
    Seyed
    Edited by: user11117178 on Jul 30, 2009 6:22 AM
    Edited by: user11117178 on Jul 30, 2009 6:23 AM
    Edited by: user11117178 on Jul 30, 2009 6:24 AM
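A general diagnostic note (not from the thread): because the slow case involves bind parameters, the plan actually used at run time can differ from a fresh EXPLAIN PLAN with literals. On 10g it can be pulled from the cursor cache with dbms_xplan.display_cursor, run in the same session immediately after executing the statement; passing NULLs means "the last statement executed by this session":
select * from table(dbms_xplan.display_cursor(null, null, 'TYPICAL'));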

    Hello Dan,
Thank you for your input. The function is not deterministic, therefore I am providing you with the explain plan. By version number, if you are referring to the database version, we are running 10g.
    PLAN_TABLE_OUTPUT
    Plan hash value: 2132048964
    | Id  | Operation                     | Name                    | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | SELECT STATEMENT              |                         |   324K|    33M|  3138   (5)| 00:00:38 |       |       |
    |*  1 |  HASH JOIN                    |                         |   324K|    33M|  3138   (5)| 00:00:38 |       |       |
    |   2 |   BITMAP CONVERSION TO ROWIDS |                         |     3 |     9 |     1   (0)| 00:00:01 |       |       |
    |*  3 |    BITMAP INDEX FAST FULL SCAN| IDX_PS2_ACTION_TYPES    |       |       |            |          |       |       |
    |   4 |   PARTITION RANGE ITERATOR    |                         |   866K|    87M|  3121   (4)| 00:00:38 |   154 |   158 |
    |   5 |    TABLE ACCESS FULL          | PS2_FS_TRANSACTION_FACT |   866K|    87M|  3121   (4)| 00:00:38 |   154 |   158 |
    Predicate Information (identified by operation id):
       1 - access("AL1"."ACTION_TYPE_ID"="AL2"."ACTION_TYPE_ID")
       3 - filter("AL2"."ACTION_TYPE"='1' OR "AL2"."ACTION_TYPE"='2' OR "AL2"."ACTION_TYPE"='S')
    Thank you very much,
Seyed

  • Preventing Discoverer using Full Table Scans with Decode in a View

    Hi Forum,
Hope you can help; it involves a performance issue when creating a report/query in Discoverer.
I have a Discoverer report that currently takes less than 5 seconds to run. After I add a condition Batch Status = 'Posted', we cancelled the query after it had been running for 20 minutes, as this is way too long. If I remove the condition the query time goes back to less than 5 seconds. Changing the condition to Batch Status = 'Unposted' returns the query in seconds.
    I’ve been doing some digging and have found the database view that is linked to the Journal Batches folder in Discoverer. See at end of post.
I think the problem is with the column using DECODE. When querying the column in TOAD, the value 'P' is returned, but in Discoverer the condition is applied to the value 'Posted'. I'm not too sure how DECODE works, but I think this could be causing some sort of issue with full table scans.
Any idea how we can get around this?
    SELECT
    JOURNAL_BATCH1.JE_BATCH_ID,
    JOURNAL_BATCH1.NAME,
    JOURNAL_BATCH1.SET_OF_BOOKS_ID,
    GL_SET_OF_BOOKS.NAME,
    DECODE( JOURNAL_BATCH1.STATUS,
    '+', 'Unable to validate or create CTA',
    '+*', 'Was unable to validate or create CTA',
    '-','Invalid or inactive rounding differences account in journal entry',
    '-*', 'Modified invalid or inactive rounding differences account in journal entry',
    '<', 'Showing sequence assignment failure',
    '<*', 'Was showing sequence assignment failure',
    '>', 'Showing cutoff rule violation',
    '>*', 'Was showing cutoff rule violation',
    'A', 'Journal batch failed funds reservation',
    'A*', 'Journal batch previously failed funds reservation',
    'AU', 'Showing batch with unopened period',
    'B', 'Showing batch control total violation',
    'B*', 'Was showing batch control total violation',
    'BF', 'Showing batch with frozen or inactive budget',
    'BU', 'Showing batch with unopened budget year',
    'C', 'Showing unopened reporting period',
    'C*', 'Was showing unopened reporting period',
    'D', 'Selected for posting to an unopened period',
    'D*', 'Was selected for posting to an unopened period',
    'E', 'Showing no journal entries for this batch',
    'E*', 'Was showing no journal entries for this batch',
    'EU', 'Showing batch with unopened encumbrance year',
    'F', 'Showing unopened reporting encumbrance year',
    'F*', 'Was showing unopened reporting encumbrance year',
    'G', 'Showing journal entry with invalid or inactive suspense account',
    'G*', 'Was showing journal entry with invalid or inactive suspense account',
    'H', 'Showing encumbrance journal entry with invalid or inactive reserve account',
    'H*', 'Was showing encumbrance journal entry with invalid or inactive reserve account',
    'I', 'In the process of being posted',
    'J', 'Showing journal control total violation',
    'J*', 'Was showing journal control total violation',
    'K', 'Showing unbalanced intercompany journal entry',
    'K*', 'Was showing unbalanced intercompany journal entry',
    'L', 'Showing unbalanced journal entry by account category',
    'L*', 'Was showing unbalanced journal entry by account category',
    'M', 'Showing multiple problems preventing posting of batch',
    'M*', 'Was showing multiple problems preventing posting of batch',
    'N', 'Journal produced error during intercompany balance processing',
    'N*', 'Journal produced error during intercompany balance processing',
    'O', 'Unable to convert amounts into reporting currency',
    'O*', 'Was unable to convert amounts into reporting currency',
    'P', 'Posted',
    'Q', 'Showing untaxed journal entry',
    'Q*', 'Was showing untaxed journal entry',
    'R', 'Showing unbalanced encumbrance entry without reserve account',
    'R*', 'Was showing unbalanced encumbrance entry without reserve account',
    'S', 'Already selected for posting',
    'T', 'Showing invalid period and conversion information for this batch',
    'T*', 'Was showing invalid period and conversion information for this batch',
    'U', 'Unposted',
    'V', 'Journal batch is unapproved',
    'V*', 'Journal batch was unapproved',
    'W', 'Showing an encumbrance journal entry with no encumbrance type',
    'W*', 'Was showing an encumbrance journal entry with no encumbrance type',
    'X', 'Showing an unbalanced journal entry but suspense not allowed',
    'X*', 'Was showing an unbalanced journal entry but suspense not allowed',
    'Z', 'Showing invalid journal entry lines or no journal entry lines',
    'Z*', 'Was showing invalid journal entry lines or no journal entry lines', NULL ),
    DECODE( JOURNAL_BATCH1.ACTUAL_FLAG, 'A', 'Actual', 'B', 'Budget', 'E', 'Encumbrance', NULL ),
    JOURNAL_BATCH1.DEFAULT_PERIOD_NAME,
    JOURNAL_BATCH1.POSTED_DATE,
    JOURNAL_BATCH1.DATE_CREATED,
    JOURNAL_BATCH1.DESCRIPTION,
    DECODE( JOURNAL_BATCH1.AVERAGE_JOURNAL_FLAG, 'N', 'Standard', 'Y', 'Average', NULL ),
    DECODE( JOURNAL_BATCH1.BUDGETARY_CONTROL_STATUS, 'F', 'Failed', 'I', 'In Process', 'N', 'N/A', 'P', 'Passed', 'R', 'Required', NULL ),
    DECODE( JOURNAL_BATCH1.APPROVAL_STATUS_CODE, 'A', 'Approved', 'I', 'In Process', 'J', 'Rejected', 'R', 'Required', 'V','Validation Failed','Z', 'N/A',NULL ),
    JOURNAL_BATCH1.CONTROL_TOTAL,
    JOURNAL_BATCH1.RUNNING_TOTAL_DR,
    JOURNAL_BATCH1.RUNNING_TOTAL_CR,
    JOURNAL_BATCH1.RUNNING_TOTAL_ACCOUNTED_DR,
    JOURNAL_BATCH1.RUNNING_TOTAL_ACCOUNTED_CR,
    JOURNAL_BATCH1.PARENT_JE_BATCH_ID,
    JOURNAL_BATCH2.NAME
    FROM
    GL_JE_BATCHES JOURNAL_BATCH1,
    GL_JE_BATCHES JOURNAL_BATCH2,
GL_SETS_OF_BOOKS GL_SET_OF_BOOKS
    WHERE
    JOURNAL_BATCH1.PARENT_JE_BATCH_ID = JOURNAL_BATCH2.JE_BATCH_ID (+) AND
    JOURNAL_BATCH1.SET_OF_BOOKS_ID = GL_SET_OF_BOOKS.SET_OF_BOOKS_ID AND
    GL_SECURITY_PKG.VALIDATE_ACCESS( JOURNAL_BATCH1.SET_OF_BOOKS_ID ) = 'TRUE' WITH READ ONLY
    Thanks,
    Lance

Discoverer created its own SQL.
    Please see below the SQL Inspector Plan:
    Before Condition
    SELECT STATEMENT
    SORT GROUP BY
    VIEW SYS
    SORT GROUP BY
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
    AND-EQUAL
    INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N2
    INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
    INDEX RANGE SCAN GL.GL_JE_LINES_N1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
    INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
    INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_BATCHES
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
    INDEX RANGE SCAN GL.GL_PERIODS_U1
    After Condition
    SELECT STATEMENT
    SORT GROUP BY
    VIEW SYS
    SORT GROUP BY
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    TABLE ACCESS FULL GL.GL_JE_BATCHES
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
    INDEX RANGE SCAN GL.GL_JE_HEADERS_N1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
    INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
    INDEX RANGE SCAN GL.GL_JE_LINES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
    INDEX UNIQUE SCAN GL.GL_CODE_COMBINATIONS_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
    INDEX RANGE SCAN GL.GL_PERIODS_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
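A hedged workaround sketch (general advice, not from the thread): the condition Batch Status = 'Posted' is applied to the DECODE result, so an ordinary index on the underlying STATUS column cannot be used for that predicate. If the report can instead filter on the raw code, the optimizer is free to use an index on STATUS (assuming one exists); alternatively, a function-based index on the exact DECODE expression from the view is an option. For illustration only:
-- filter on the raw code instead of the decoded label
... WHERE JOURNAL_BATCH1.STATUS = 'P'
-- or create a function-based index that matches the view's DECODE expression exactly
-- (the expression is truncated here for brevity)
CREATE INDEX GL_JE_BATCHES_STATUS_FBI ON GL_JE_BATCHES
  (DECODE(STATUS, '+', 'Unable to validate or create CTA', 'P', 'Posted', 'U', 'Unposted', NULL));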

  • Simple Query in Oracle Linked Table in MS Access causes full table scan.

    I am running a very simple query in MS ACCESS to a linked Oracle table as follows:
    Select *
    From EXPRESS_SERVICE_EVENTS --(the linked table name refers to EXPRESS.SERVICE_EVENTS)
    Where performed > MyDate()
    or
    Select *
    From EXPRESS_SERVICE_EVENTS --(the linked table name refers to EXPRESS.SERVICE_EVENTS)
    Where performed > [Forms]![MyForm]![Date1]
    We have over 50 machines and this query runs fine on over half of these, using an Oracle Index on the "performed" field. Running exactly the same thing on the other machines causes a full table scan, therefore ignoring the Index (all machines access the same Access DB).
    Strangely, if we write the query as follows:
    Select *
    From EXPRESS_SERVICE_EVENTS
    Where performed > #09/04/2009 08:00#
    it works fast everywhere!
Any help on this 'phenomenon' would be appreciated.
Things we've done:
Checked regional settings, ODBC driver settings, and MS Access settings (as in Tools->Options); we have the latest XP and Office service packs, and have re-linked all Access tables on both the slow and fast machines independently.

First of all, thanks gdarling for your reply. This solved our problem.
    Just a small note to those who may be using this thread.
Although this might not be the reason, my PC had Oracle 9iR2 installed with Administrative Tools, whereas the user machines had the same thing installed as a Runtime installation. For some reason my PC did not have 'bind date' etc. as an option in the ODBC workarounds, but the user machines did have this workaround option. Strangely, although I did not have the option, my (ODBC) query was running as expected, but the user queries were not.
When we set the workaround checkbox accordingly, the queries then ran as expected (fast).
    Once again,
    Thanks

  • Finding the Text of SQL Query Causing Full Table Scan

    Hi,
    does anyone have a sql script, that shows the complete sql text of queries that have caused a full table scan?
    Please also let me know as to how soon this script needs to be run, in the sense does it work only while the query is running or would it work once it completes (if so is there a valid duration, such as until next restart, etc.)
    Your help is appreciated.
    Thx,
    Mayuran

    You might try something like this:
    select sql_text,
           object_name
    from   v$sql s,
           v$sql_plan p
    where  s.address = p.address and
           s.hash_value = p.hash_value and
           s.child_number = p.child_number and
           p.operation = 'TABLE ACCESS' and
           p.options = 'FULL' and
           p.object_owner in ('SCOTT')
;
Please note that this query is just a snapshot of the SQL statements currently in the cache.
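For statements that have already aged out of the shared pool, a similar check can be made against the AWR history views instead (a sketch; assumes the Diagnostics Pack is licensed and the AWR retention window covers the period of interest):
select t.sql_id,
       dbms_lob.substr(t.sql_text, 2000) sql_text,
       p.object_name
from   dba_hist_sql_plan p,
       dba_hist_sqltext t
where  p.sql_id = t.sql_id and
       p.operation = 'TABLE ACCESS' and
       p.options = 'FULL' and
       p.object_owner in ('SCOTT');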

  • Trunc causing Full Table Scans

    I have a situtaion here where my query is as follows.
    SQL> select count(1) from HBSM_SM_ACCOUNT_INFO where OPTIONAL_PARM5='MH' and CUST_STATUS in ('UP','UUP') and trunc(FIRST_ACTVN_DATE) = trunc(sysdate);
    COUNT(1)
    6
    PLAN_TABLE_OUTPUT
    Plan hash value: 3951750498
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
    | 0 | SELECT STATEMENT | | 1 | 10 | 13904 (1)| 00:02:47 | | |
    | 1 | SORT AGGREGATE | | 1 | 10 | | | | |
    | 2 | PARTITION LIST SINGLE| | 1 | 10 | 13904 (1)| 00:02:47 | 12 | 12 |
    |* 3 | TABLE ACCESS FULL | HBSM_SM_ACCOUNT_INFO | 1 | 10 | 13904 (1)| 00:02:47 | 12 | 12 |
    Predicate Information (identified by operation id):
    3 - filter(("CUST_STATUS"='UP' OR "CUST_STATUS"='UUP') AND
    TO_DATE(INTERNAL_FUNCTION("FIRST_ACTVN_DATE"))=TO_DATE(TO_CHAR(SYSDATE@!)))
    16 rows selected.
If I remove the trunc clause from the query, the performance definitely improves, but the results are wrong.
    SQL> select count(1) from HBSM_SM_ACCOUNT_INFO where OPTIONAL_PARM5='MH' and CUST_STATUS in ('UP','UUP') and FIRST_ACTVN_DATE = trunc(sysdate);
    COUNT(1)
    0
    PLAN_TABLE_OUTPUT
    Plan hash value: 454529511
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
    | 0 | SELECT STATEMENT | | 1 | 40 | 47 (0)| 00:00:01 | | |
    |* 1 | TABLE ACCESS BY GLOBAL INDEX ROWID| HBSM_SM_ACCOUNT_INFO | 1 | 40 | 47 (0)| 00:00:01 | 12 | 12 |
    |* 2 | INDEX RANGE SCAN | IND_FIRST_ACTVN_DATE | 51 | | 4 (0)| 00:00:01 | | |
Can someone please help me get the right data while also preventing these full table scans?

Unless you are using a functional index, applying any function to an indexed column prevents the use of the index.
The way round it in your case is to realise that
select count(1) from HBSM_SM_ACCOUNT_INFO where OPTIONAL_PARM5='MH' and CUST_STATUS in ('UP','UUP') and trunc(FIRST_ACTVN_DATE) = trunc(sysdate)
is really asking that FIRST_ACTVN_DATE should be sometime today. You could therefore rewrite it as
select count(1) from HBSM_SM_ACCOUNT_INFO where OPTIONAL_PARM5='MH' and CUST_STATUS in ('UP','UUP')
and FIRST_ACTVN_DATE >= trunc(sysdate)
and FIRST_ACTVN_DATE < trunc(sysdate) + 1
Note, this still might not use the index, depending on how many rows are within today's date versus how many are outside today's date.
Also, when posting, remember to put your code between tags and to post create table scripts and sample data inserts.
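As an alternative (a sketch offered as general advice, not something from the thread), a function-based index on the TRUNC expression would let the original predicate use index access directly; whether that is worthwhile depends on how selective trunc(FIRST_ACTVN_DATE) = trunc(sysdate) really is and on the extra index maintenance cost:
create index ind_first_actvn_date_trunc on hbsm_sm_account_info (trunc(first_actvn_date));
-- gather statistics afterwards so the optimizer can cost the new index properly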

  • Slow query due to large table and full table scan

    Hi,
    We have a large Oracle database, v 10g. Two of the tables in the database have over one million rows.
We have a few queries which take a lot of time to execute. Not always though; it seems that when load is high the queries tend to take much longer. Average time may be 1 or 2 seconds, but max time can be up to 2 minutes.
    We have now used Oracle Grid to help us examine the queries. We have found that some of the queries require two or three full table scans.
    Two of the full table scans are of the two large tables mentioned above.
    This is an example query:
    SELECT table1.column, table2.column, table3.column
    FROM table1
    JOIN table2 on table1.table2Id = table2.id
    LEFT JOIN table3 on table2.table3id = table3.id
    WHERE table1.id IN(
    SELECT id
    FROM (
    (SELECT a.*, rownum rnum FROM(
    SELECT table1.id
    FROM table1,
    table2,
    table3
    WHERE
    table1.table2id = table2.id
    AND
    table2.table3id IS NULL OR table2.table3id = :table3IdParameter
    ) a
    WHERE rownum <= :end))
    WHERE rnum >= :start
    Table1 and table2 are the large tables in this example. This query starts two full table scans on those tables.
    Can we avoid this? We have, what we think are, the correct indexes.
    /best regards, Håkan

    >
    Hi Håkan - welcome to the forum.
    We have a large Oracle database, v 10g. Two of the tables in the database have over one million rows.
We have a few queries which take a lot of time to execute. Not always though; it seems that when load is high the queries tend to take much longer. Average time may be 1 or 2 seconds, but max time can be up to 2 minutes.
    We have now used Oracle Grid to help us examine the queries. We have found that some of the queries require two or three full table scans.
    Two of the full table scans are of the two large tables mentioned above.
This is an example query:
Firstly, please read the forum FAQ - top right of page.
    Please format your SQL using tags [code /code].
    In order to help us to help you.
Please post the table structures - relevant fields only (i.e. joined, FK, PK fields) - in the form below (note the use of code tags), so we can just run the table create script.
CREATE TABLE table1 (
  Field1  Type1,
  Field2  Type2,
  FieldN  TypeN
);
Then give us some table data - not 100's of records - just enough, in the form
INSERT INTO Table1 VALUES(Field1, Field2, ..., FieldN);
Please also post the EXPLAIN PLAN - again within tags.
    HTH,
    Paul...
    /best regards, Håkan

  • Different Cost values for full table scans

I have a very simple query that I run in two environments (Prod (20 CPU) and Dev (12 CPU)). Both environments are HP-UX, Oracle 9i.
    The query looks like this:
    SELECT prd70.jde_item_n
    FROM gdw.vjda_gdwprd68_bom_cmpnt prd68
    ,gdw.vjda_gdwprd70_gallo_item prd70
    WHERE prd70.jde_item_n = prd68.parnt_jde_item_n
    AND prd68.last_eff_t+nvl(to_number(prd70.auto_hld_dy_n),0)>= trunc(sysdate)
    GROUP BY prd70.jde_item_n
    When I look at the explain plans, there is a significant difference in cost and I can't figure out why they would be different. Both queries do full table scans, both instances have about the same number of rows, statistics on both are fresh.
Production Plan:
0   SELECT STATEMENT Optimizer=ALL_ROWS (Cost=18398 Card=14657 Bytes=249169)
1 0   SORT (GROUP BY) (Cost=18398 Card=14657 Bytes=249169)
2 1     HASH JOIN (Cost=18304 Card=14657 Bytes=249169)
3 2       TABLE ACCESS (FULL) OF 'GDWPRD70_GALLO_ITEM' (Cost=9494 Card=194733 Bytes=1168398)
4 2       TABLE ACCESS (FULL) OF 'GDWPRD68_BOM_CMPNT' (Cost=5887 Card=293149 Bytes=3224639)
Development plan:
0   SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3566 Card=14754 Bytes=259214)
1 0   HASH GROUP BY (GROUP BY) (Cost=3566 Card=14754 Bytes=259214)
2 1     HASH JOIN (Cost=3558 Card=14754 Bytes=259214)
3 2       TABLE ACCESS (FULL) OF 'GDWPRD70_GALLO_ITEM' (Cost=19144 Card=193655 Bytes=1323598)
4 2       TABLE ACCESS (FULL) OF 'GDWPRD68_BOM_CMPNT' (Cost=1076 Card=295075 Bytes=3169542)
    There seems to be no reason for the costs to be so different, but I'm hoping that someone will be able to lead me in the right direction.
    Thanks,
    Jdelao

    This link may help:
    http://jaffardba.blogspot.com/2007/07/change-behavior-of-group-by-clause-in.html
    But looking at the explain plans one of them uses a SORT (GROUP BY) (higher cost query) and the other uses a HASH GROUP BY (GROUP BY) (lower cost query). From my searches on the `Net the HASH GROUP BY is a more efficient algorithm than the SORT (GROUP BY) which would lead me to believe that this is one of the reasons why the cost values are so different. I can't find which version HASH GROUP BY came in but quick searches indicate 10g for some reason.
Is your optimizer_features_enable parameter set to the same value in both environments? In general you could compare the relevant optimizer parameters to see if there is a difference.
    Hope this helps!
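A quick way to compare the relevant settings between the two instances (a sketch; run it in each environment and compare the output):
select name, value, isdefault
from   v$parameter
where  name like 'optimizer%'
   or  name in ('db_file_multiblock_read_count', 'hash_area_size', 'pga_aggregate_target')
order  by name;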

  • Strange full table scan behavior

    Hi all you sharp-eyed oracle gurus out there..
I need some help/tips on an update I'm running which is taking a very long time. The Oracle RDBMS is 11.2.0.1 with the Advanced Compression option.
I'm currently updating all rows in a table from value 1 to 0 (update mistaf set b_code='0';).
The column in question is a CHAR(1) column and is not indexed. The table is a fairly large heap table with 55 million rows to be updated and its size is approx 11GB. The table is compressed with the COMPRESS FOR OLTP option.
What is strange to me is that I can clearly see that a full table scan is running, but I cannot see any db file scattered read waits, as I would expect; instead I'm only seeing db file sequential reads. I suppose this might be the reason for the long execution time (dbconsole estimates 20 hours to complete, looking at SQL Monitoring).
Any views on why Oracle would do db file sequential reads on a FTS? And do you agree that this might be the reason why it takes so long to complete?
More info: I first started the update and left work, and the next morning I saw that the update still wasn't finished, at which point I realised I had a bitmap index on the column being updated. I dropped the index and started the update once again. It seemed to execute very fast at the beginning before rapidly declining in performance.
Thanks in advance for any help!

I tried to tkprof the trace file but no SQL came up; however, the raw trace file looks like this:
    *** 2010-07-15 17:32:39.829
    WAIT #1: nam='db file sequential read' ela= 7516 file#=11 block#=1185541 blocks=1 obj#=221897 tim=1279207959829762
    WAIT #1: nam='db file sequential read' ela= 7519 file#=3 block#=567053 blocks=1 obj#=0 tim=1279207959837428
    WAIT #1: nam='db file sequential read' ela= 55317 file#=3 block#=186728 blocks=1 obj#=0 tim=1279207959892903
    WAIT #1: nam='db file sequential read' ela= 23363 file#=11 block#=3528062 blocks=1 obj#=221897 tim=1279207959916438
    WAIT #1: nam='db file sequential read' ela= 4796 file#=3 block#=92969 blocks=1 obj#=0 tim=1279207959921314
    WAIT #1: nam='db file sequential read' ela= 1426 file#=11 block#=1079147 blocks=1 obj#=221897 tim=1279207959922846
    WAIT #1: nam='db file sequential read' ela= 4510 file#=11 block#=4180577 blocks=1 obj#=221897 tim=1279207959927479
    WAIT #1: nam='db file sequential read' ela= 12 file#=11 block#=478 blocks=1 obj#=221897 tim=1279207959927715
    WAIT #1: nam='db file sequential read' ela= 11 file#=3 block#=566015 blocks=1 obj#=0 tim=1279207959927768
    WAIT #1: nam='db file sequential read' ela= 17343 file#=11 block#=1142438 blocks=1 obj#=221897 tim=1279207960025312
    WAIT #1: nam='db file sequential read' ela= 11 file#=11 block#=202520 blocks=1 obj#=221897 tim=1279207960025548
    WAIT #1: nam='db file sequential read' ela= 15 file#=3 block#=612704 blocks=1 obj#=0 tim=1279207960025592
    WAIT #1: nam='db file sequential read' ela= 17604 file#=11 block#=1198573 blocks=1 obj#=221897 tim=1279207960043303
    WAIT #1: nam='buffer busy waits' ela= 4 file#=11 block#=1473771 class#=1 obj#=221897 tim=1279207960059044
    WAIT #1: nam='buffer busy waits' ela= 21 file#=11 block#=4173048 class#=1 obj#=221897 tim=1279207960066512
    WAIT #1: nam='buffer busy waits' ela= 3 file#=509 block#=392139 class#=1 obj#=221897 tim=1279207960070049
    WAIT #1: nam='buffer busy waits' ela= 20 file#=11 block#=1134301 class#=1 obj#=221897 tim=1279207960075224
    WAIT #1: nam='db file sequential read' ela= 19164 file#=11 block#=3502287 blocks=1 obj#=221897 tim=1279207960120163
    WAIT #1: nam='buffer busy waits' ela= 70 file#=3 block#=156 class#=45 obj#=0 tim=1279207960126680
    WAIT #1: nam='db file sequential read' ela= 43587 file#=11 block#=3503000 blocks=1 obj#=221897 tim=1279207960189443
    WAIT #1: nam='db file sequential read' ela= 14214 file#=11 block#=4135977 blocks=1 obj#=221897 tim=1279207960203841
    WAIT #1: nam='latch: undo global data' ela= 28 address=11239411512 number=237 tries=0 obj#=221897 tim=1279207960226196
    WAIT #1: nam='buffer busy waits' ela= 376 file#=11 block#=1343104 class#=1 obj#=221897 tim=1279207960228124
    WAIT #1: nam='buffer busy waits' ela= 4 file#=11 block#=1450745 class#=1 obj#=221897 tim=1279207960236628
    WAIT #1: nam='buffer busy waits' ela= 14 file#=11 block#=1456732 class#=1 obj#=221897 tim=1279207960237393
    WAIT #1: nam='buffer busy waits' ela= 3 file#=11 block#=1469341 class#=1 obj#=221897 tim=1279207960239415
    WAIT #1: nam='buffer busy waits' ela= 16 file#=11 block#=3498660 class#=1 obj#=221897 tim=1279207960241348
    WAIT #1: nam='buffer busy waits' ela= 10 file#=11 block#=1478782 class#=1 obj#=221897 tim=1279207960242208
    WAIT #1: nam='buffer busy waits' ela= 11 file#=11 block#=3529073 class#=1 obj#=221897 tim=1279207960242774
    WAIT #1: nam='buffer busy waits' ela= 10 file#=11 block#=3506834 class#=1 obj#=221897 tim=1279207960243188
    WAIT #1: nam='buffer busy waits' ela= 2 file#=11 block#=3550683 class#=1 obj#=221897 tim=1279207960243589
    WAIT #1: nam='buffer busy waits' ela= 3 file#=11 block#=4082313 class#=1 obj#=221897 tim=1279207960244816
    WAIT #1: nam='buffer busy waits' ela= 3 file#=11 block#=4090328 class#=1 obj#=221897 tim=1279207960245086
    WAIT #1: nam='buffer busy waits' ela= 2 file#=11 block#=3555804 class#=1 obj#=221897 tim=1279207960245350
    WAIT #1: nam='buffer busy waits' ela= 2 file#=11 block#=3483832 class#=1 obj#=221897 tim=1279207960245549
    WAIT #1: nam='buffer busy waits' ela= 3 file#=11 block#=4115411 class#=1 obj#=221897 tim=1279207960246323
    WAIT #1: nam='buffer busy waits' ela= 2 file#=11 block#=4100593 class#=1 obj#=221897 tim=1279207960246791
    WAIT #1: nam='buffer busy waits' ela= 3 file#=11 block#=4135120 class#=1 obj#=221897 tim=1279207960247407
    WAIT #1: nam='buffer busy waits' ela= 2 file#=11 block#=4119599 class#=1 obj#=221897 tim=1279207960247832
    WAIT #1: nam='buffer busy waits' ela= 3 file#=11 block#=4174925 class#=1 obj#=221897 tim=1279207960249045
    WAIT #1: nam='buffer busy waits' ela= 3 file#=11 block#=4185250 class#=1 obj#=221897 tim=1279207960249699
    WAIT #1: nam='buffer busy waits' ela= 2 file#=11 block#=4188816 class#=1 obj#=221897 tim=1279207960250138
    WAIT #1: nam='buffer busy waits' ela= 3 file#=11 block#=4189312 class#=1 obj#=221897 tim=1279207960250363
    WAIT #1: nam='buffer busy waits' ela= 3 file#=11 block#=4190380 class#=1 obj#=221897 tim=1279207960250618
    WAIT #1: nam='buffer busy waits' ela= 3 file#=11 block#=4190996 class#=1 obj#=221897 tim=1279207960251339
    WAIT #1: nam='buffer busy waits' ela= 9 file#=11 block#=4176416 class#=1 obj#=221897 tim=1279207960251490
    WAIT #1: nam='buffer busy waits' ela= 11 file#=509 block#=436859 class#=1 obj#=221897 tim=1279207960253748
    WAIT #1: nam='buffer busy waits' ela= 2 file#=509 block#=426961 class#=1 obj#=221897 tim=1279207960253993
    WAIT #1: nam='db file sequential read' ela= 18802 file#=509 block#=413732 blocks=1 obj#=221897 tim=1279207960273210
    WAIT #1: nam='db file sequential read' ela= 13 file#=3 block#=615387 blocks=1 obj#=0 tim=1279207960273322
    WAIT #1: nam='db file sequential read' ela= 3948 file#=3 block#=569033 blocks=1 obj#=0 tim=1279207960277522
    WAIT #1: nam='db file sequential read' ela= 14 file#=11 block#=4191700 blocks=1 obj#=221897 tim=1279207960333755
    WAIT #1: nam='db file sequential read' ela= 3745 file#=11 block#=1197543 blocks=1 obj#=221897 tim=1279207960358279
    WAIT #1: nam='db file sequential read' ela= 4541 file#=11 block#=472946 blocks=1 obj#=221897 tim=1279207960363005
    WAIT #1: nam='db file sequential read' ela= 7775 file#=3 block#=229860 blocks=1 obj#=0 tim=1279207960370848
    WAIT #1: nam='db file sequential read' ela= 22319 file#=11 block#=1150525 blocks=1 obj#=221897 tim=1279207960393342
    WAIT #1: nam='db file sequential read' ela= 17058 file#=11 block#=3542375 blocks=1 obj#=221897 tim=1279207960410577
    WAIT #1: nam='db file sequential read' ela= 16042 file#=509 block#=437647 blocks=1 obj#=221897 tim=1279207960427928
    WAIT #1: nam='db file sequential read' ela= 6412 file#=3 block#=542118 blocks=1 obj#=0 tim=1279207960434440
    WAIT #1: nam='buffer busy waits' ela= 660 file#=3 block#=88 class#=23 obj#=0 tim=1279207960457208
    WAIT #1: nam='db file sequential read' ela= 13 file#=11 block#=4140513 blocks=1 obj#=221897 tim=1279207960467438
    WAIT #1: nam='db file sequential read' ela= 5451 file#=11 block#=3516234 blocks=1 obj#=221897 tim=1279207960472965
    WAIT #1: nam='db file sequential read' ela= 5121 file#=11 block#=3514597 blocks=1 obj#=221897 tim=1279207960478231
    WAIT #1: nam='db file sequential read' ela= 3982 file#=3 block#=1039898 blocks=1 obj#=0 tim=1279207960482281
    WAIT #1: nam='db file sequential read' ela= 5391 file#=509 block#=433941 blocks=1 obj#=221897 tim=1279207960487775
    WAIT #1: nam='db file sequential read' ela= 9707 file#=11 block#=3551543 blocks=1 obj#=221897 tim=1279207960529848
    WAIT #1: nam='buffer busy waits' ela= 4 file#=11 block#=4090328 class#=1 obj#=221897 tim=1279207960610165
    WAIT #1: nam='buffer busy waits' ela= 2 file#=11 block#=4115879 class#=1 obj#=221897 tim=1279207960611710
    WAIT #1: nam='buffer busy waits' ela= 2 file#=11 block#=4100364 class#=1 obj#=221897 tim=1279207960612167
    WAIT #1: nam='buffer busy waits' ela= 3 file#=11 block#=4133339 class#=1 obj#=221897 tim=1279207960612648
    WAIT #1: nam='db file sequential read' ela= 7254 file#=509 block#=405005 blocks=1 obj#=221897 tim=1279207960631133
    WAIT #1: nam='db file sequential read' ela= 25608 file#=11 block#=1181075 blocks=1 obj#=221897 tim=1279207960693920
    etc
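A general diagnostic sketch (not from the thread): the cumulative wait profile of the updating session can be summarised from v$session_event, which makes it easier to see how much of the elapsed time really is single-block reads versus undo or buffer contention:
select event, total_waits, time_waited
from   v$session_event
where  sid = 1234          -- replace with the SID of the updating session
order  by time_waited desc;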

  • FASTER THROUGH PUT ON FULL TABLE SCAN

Product: ORACLE SERVER
Date written: 1995-04-10
    Subject: Faster through put on Full table scans
db_file_multiblock_read_count only affects the performance of full table scans.
Oracle has a maximum I/O size of 64 KBytes, hence db_block_size * db_file_multiblock_read_count must be less than or equal to 64 KBytes.
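For example (a worked illustration of the limit above): with an 8 KByte db_block_size the largest useful db_file_multiblock_read_count is 8, since 8 KB * 8 = 64 KB, whereas a 2 KByte block size would allow a value of up to 32.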
    If your query is really doing an index range scan then the performance
    of full scans is irrelevant. In order to improve the performance of this
    type of query it is important to reduce the number of blocks that
    the 'interesting' part of the index is contained within.
    Obviously the db_blocksize has the most impact here.
    Historically Informix has not been able to modify their database block size,
    and has had a fixed 2KB block.
    On most Unix platforms Oracle can use up to 8KBytes.
    (Some eg: Sequent allow 16KB).
This means that for the same size of B-Tree index, Oracle with an 8KB block size can read its contents in 1/4 of the time that Informix with a 2KB block could.
    You should also consider whether the PCTFREE value used for your index is
    appropriate. If it is too large then you will be wasting space
    in each index block. (It's too large IF you are not going to get any
    entry size extension OR you are not going to get any new rows for existing
    index values. NB: this is usually only a real consideration for large indexes - 10,000 entries is small.)
    db_file_simultaneous_writes has no direct relevance to index re-balancing.
(PS: In the U.K. we benchmarked against Informix, Sybase, Unify and HP/Allbase for the database server application that HP uses internally to monitor and control its tape drive manufacturing lines. They chose Oracle because we outperformed Informix; Sybase was too slow AND too unreliable; Unify was short on functionality and SLOW; and HP/Allbase couldn't match the availability requirements and wasn't as functional.
Informix had problems demonstrating the ability to do hot backups without severely affecting the system throughput.
HP benchmarked all DB vendors on both 9000/800 and 9000/700 machines with different disks (ie: HP-IB and SCSI). Oracle came out ahead in all configurations.
NNB: It's always worth throwing in a simulated system failure whilst the benchmark is in progress. Informix has a history of not coping gracefully; that is, they usually need some manual intervention to perform the database recovery.)
I have a prospective client who is running a stripped-down, souped-up version of Informix with no catalytic converter. One of their queries boils down to an index range scan on 10000 records. How can I achieve better throughput on a single-drive, single-CPU machine (HP/UX) without using raw devices?
I had heard that rebuilding the database with a block size greater than the OS block size would yield better performance. Also, I tried changing db_file_multiblock_read_count to 32 without much improvement. Adjusting db_writers to two did not help either.
Also, will adjusting db_file_simultaneous_writes help with the maintenance of an index during rebalancing operations?

2) If CBO, how are the stats collected? - daily (tables with less than millions of rows) and weekly (all tables)
There's no need to collect stats so frequently unless it's absolutely necessary, like when you have massive updates on tables daily or weekly.
    It will help if you can post your sample explain plan and query.

  • Cursors - More number of repeated full table scans

Hi,
If I execute the below-mentioned query in SQL*Plus:
    select distinct cod_acct_no, a.cod_cc_brn, a.cod_prod
    from account_master a, product_master b
    where a.prod_no = b.prod_no
    and b.cod_sc_pkg = var_pi_cod_sc_pkg
    and cod_acct_stat not in ( 1,5)
    UNION
    select distinct a.cod_acct_no, a.cod_cc_brn, a.cod_prod
    from account_master a, charge_account sc
    where sc.cod_acct_no = a.cod_acct_no
    and sc.cod_sc_pkg = var_pi_cod_sc_pkg
    and cod_acct_stat not in ( 1,5)
I am getting results within a minute. Account_master is the big table in this join, and only one full table scan happens for this table.
If the same query is used in a PL/SQL cursor, it takes a very long time, and after analysis it was found that multiple full table scans happen for the account_master table. Why is this happening? Any help please.
    Regards
    Sridhar

    Hi,
    First thing, you don't need to issue a distinct for each Select section in a Union: Union automatically does that, by sorting and returning distinct values of both result sets combined.
    But most importantly, the performance is poor because you're running a loop, not because the query itself is any worse than what it was in SQL*Plus.
    When you run a cursor for loop, the PL/SQL engine has to switch context to the SQL engine repeatedly to fetch the next row, one at a time, and even though you may
    not notice a performance degradation when you have a small resultset, as it scales up the problem becomes more visible.
    What is it you're performing inside the loop?
    Bottom line is you should avoid the cursor for loop solution and do it in a single statement or at least using bulk collect/forall operations, that should increase your performance dramatically.
If you can post a little more about your problem, instead of just the solution you initially thought of, it would help us help you. Basically, please provide your Oracle version, table structures and sample data for them,
    along with an expected output or data manipulation.
    Regards,
    Sitja.
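A minimal sketch of the bulk collect / forall approach suggested above, using only the first branch of the query for brevity (table and column names come from the thread; the datatype of the package variable and the per-row action are assumptions for illustration only):
declare
  type t_acct_tab is table of account_master.cod_acct_no%type;
  l_accts      t_acct_tab;
  l_cod_sc_pkg number := 101;                 -- assumption: stands in for var_pi_cod_sc_pkg
begin
  select a.cod_acct_no
    bulk collect into l_accts
    from account_master a, product_master b
   where a.prod_no = b.prod_no
     and b.cod_sc_pkg = l_cod_sc_pkg
     and a.cod_acct_stat not in (1, 5);

  if l_accts.count > 0 then
    forall i in 1 .. l_accts.count
      update account_master                   -- hypothetical per-row action for illustration
         set cod_cc_brn = cod_cc_brn          -- placeholder assignment; replace with the real logic
       where cod_acct_no = l_accts(i);
  end if;
end;
/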

  • Query to identify full table scans in progress

    Does anybody have a query that would help me identify:
    1) Full table scans in progress.
    2) Long running queries in progress.
    Thanks,
    Thomas
    null

Does anybody have a query that would help me identify:
1) Full table scans in progress. - Not sure.
2) Long running queries in progress. - Don't have a query readily available, but you can write one based on the following:
Try querying the view V$SESSION_LONGOPS. You will need to join this to V$SQL using SQL_ADDRESS to identify all the SQLs running for more than 'x' minutes.
Current System Time - V$SESSION_LONGOPS.Start_Time should give you the duration.
    Shailender Mehta
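A sketch along those lines (standard columns of the two views; the 10-minute threshold is just an example):
select sl.sid, sl.serial#, sl.opname, sl.target,
       sl.sofar, sl.totalwork, sl.elapsed_seconds,
       s.sql_text
from   v$session_longops sl,
       v$sql s
where  sl.sql_address    = s.address and
       sl.sql_hash_value = s.hash_value and
       sl.sofar < sl.totalwork and           -- still in progress
       sl.elapsed_seconds > 10 * 60          -- running for more than 'x' = 10 minutes
order  by sl.elapsed_seconds desc;
Rows whose OPNAME is 'Table Scan' correspond to full table scans that are still in progress, which also covers the first question.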

  • Full table scans

    Dear all,
While doing stress testing I found that there were a lot of full table scans, due to which there was a performance drop. How can I avoid full table scans? Please suggest some ways, as I am at a client's site.
    Waiting for your help.
    Regards and thanks in advance
    SL

    Hi SL,
How can I avoid full table scans? - Full table scans are not always bad! It depends foremost on your optimizer goal (all_rows vs. first_rows), plus your multiblock read count, table size, percentage of rows requested, and many other factors.
    Here are my notes:
    http://www.dba-oracle.com/art_orafaq_oracle_sql_tune_table_scan.htm
    To avoid full table scans, start by running plan9i.sql and then drill-in and see if you have missing indexes:
    http://www.dba-oracle.com/t_plan9i_sql_full_table_scans.htm
You can also run the 10g SQL Tuning Advisor to find missing indexes, and also don't forget to consider function-based indexes, a great way to eliminate unnecessary large-table full-table scans:
    http://www.dba-oracle.com/oracle_tips_index_scan_fbi_sql.htm
    Hope this helps. . .
    Donald K. Burleson
    Oracle Press author
