Full Table Scans for small tables... in Oracle10g v.2

Hi,
The query optimizer may choose a full table scan instead of an index scan when the table is small, i.e. when the number of blocks it occupies is less than the value of db_file_multiblock_read_count.
So I tried to verify this using the DEPT table:
SQL> select blocks , extents , bytes from USER_segments where segment_name='DEPT';
    BLOCKS    EXTENTS      BYTES
         8          1      65536
SQL> SHOW PARAMETER DB_FILE_MULTIBLOCK_READ_COUNT;
NAME                                 TYPE        VALUE
db_file_multiblock_read_count        integer     16
SQL> explain plan for SELECT * FROM DEPT
  2    WHERE DEPTNO=10
  3  /
Explained
SQL>
SQL>  select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 2852011669
| Id  | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Tim
|   0 | SELECT STATEMENT            |         |     1 |    23 |     1   (0)| 00:
|   1 |  TABLE ACCESS BY INDEX ROWID| DEPT    |     1 |    23 |     1   (0)| 00:
|*  2 |   INDEX UNIQUE SCAN         | PK_DEPT |     1 |       |     0   (0)| 00:
Predicate Information (identified by operation id):
   2 - access("DEPTNO"=10)
14 rows selected.
So, according to the remarks above, what could be the reason that a full table scan is not performed?
Thanks...
Sim

No, I have not generalized... In the extract from the Oracle documentation that I posted, the word used is "might".
I just want to find an example in which, when selecting from a small table, the query optimizer does perform a full scan instead of some type of index scan... Sorry for that, I don't mean to be rude :)
See following...
create table index_test(id int, name varchar2(10));
create index index_test_idx on index_test(id);
insert into index_test values(1, 'name');
commit;
-- No statistics
select * from index_test where id = 1;
SELECT STATEMENT ALL_ROWS-Cost : 3
  TABLE ACCESS FULL MAXGAUGE.INDEX_TEST(1) ("ID"=1)
-- What about FIRST_ROWS mode?
alter session set optimizer_mode = first_rows;
select * from index_test where id = 1;
SELECT STATEMENT FIRST_ROWS-Cost : 802
  TABLE ACCESS BY INDEX ROWID MAXGAUGE.INDEX_TEST(1)
   INDEX RANGE SCAN MAXGAUGE.INDEX_TEST_IDX (ID) ("ID"=1)
-- After statistics are gathered
exec dbms_stats.gather_table_stats(user, 'INDEX_TEST', cascade=>true);
alter session set optimizer_mode = first_rows;
select * from index_test where id = 1;
SELECT STATEMENT ALL_ROWS-Cost : 2
  TABLE ACCESS BY INDEX ROWID MAXGAUGE.INDEX_TEST(1)
   INDEX RANGE SCAN MAXGAUGE.INDEX_TEST_IDX (ID) ("ID"=1)
alter session set optimizer_mode = all_rows;
select * from index_test where id = 1;
SELECT STATEMENT ALL_ROWS-Cost : 2
  TABLE ACCESS BY INDEX ROWID MAXGAUGE.INDEX_TEST(1)
   INDEX RANGE SCAN MAXGAUGE.INDEX_TEST_IDX (ID) ("ID"=1)
Do you see the dramatic changes caused by the differences in parameters and statistics?
Jonathan Lewis has written a great book on the cost mechanism of the Oracle optimizer.
It will answer almost all of your questions...
http://www.amazon.com/Cost-Based-Oracle-Fundamentals-Jonathan-Lewis/dp/1590596366/ref=sr_1_1?ie=UTF8&s=books&qid=1195209336&sr=1-1
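
A quick follow-up check worth running here (a sketch; it only uses the standard dictionary view): whether DEPT actually has statistics, since the example above shows that the presence of statistics and the optimizer mode are what drive the choice between the index and the full scan.

select table_name, num_rows, blocks, last_analyzed
  from user_tables
 where table_name = 'DEPT';

If LAST_ANALYZED is NULL, the optimizer is working from defaults or dynamic sampling rather than from real statistics, and the plan can change as soon as statistics are gathered.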

Similar Messages

  • Is it really another error about full table scans for small tables....?????

    Hi ,
    I have posted the following :
    Full Table Scans for small tables... in Oracle10g v.2
    and the first post of Mr. Chris Antognini was that :
    "I'm sorry to say that the documentation is wrong! In fact when a full table scan is executed, and the blocks are not cached, at least 2 I/O are performed. The first one to get the header block (where the extent map is stored) and the second to access the first and, for a very small table, only extent."
    Is it really wrong....????
    Thanks...
    Sim

    Fredrik,
    I do not say in any way that the documentation is wrong on this point...
    In my first post, I inserted a link to a thread made in another forum:
    Full Table Scans for small tables... in Oracle10g v.2
    Christian Antognini has written that the documentation is wrong....
    I'm sorry to say that the documentation is wrong!
    In fact when a full table scan is executed, and the
    blocks are not cached, at least 2 I/O are performed. The
    first one to get the header block (where the extent map
    is stored) and the second to access the first and, for a
    very small table, only extent.
    I'm just wondering whether he is right!
    Thanks..
    Sim

  • URGENT HELP Required: Solution to avoid Full table scan for a PL/SQL query

    Hi Everyone,
    When I checked the EXPLAIN PLAN for the SQL query below, I saw that full table scans are occurring on both tables, TABLE_A and TABLE_B.
    UPDATE TABLE_A a
    SET a.current_commit_date =
    (SELECT MAX (b.loading_date)
    FROM TABLE_B b
    WHERE a.sales_order_id = b.sales_order_id
    AND a.sales_order_line_id = b.sales_order_line_id
    AND b.confirmed_qty > 0
    AND b.data_flag IS NULL
    OR b.schedule_line_delivery_date >= '23 NOV 2008')
    Though TABLE_A is a small table with nearly 100,000 (1 lakh) records, TABLE_B is a huge table with nearly 25 million (2.5 crore) records.
    I created an index on TABLE_B containing all of its fields used in the WHERE clause, but the explain plan still shows a FULL TABLE SCAN.
    When I run the query, it takes a very long time to execute (more than 1 day) and each time I have to kill the session.
    Please help me optimize this.
    Thanks,
    Sudhindra

    Check the instructions again; you're leaving out information we need in order to help you, like optimizer information.
    - Post your exact database version, that is: the result of select * from v$version;
    - Don't use TOAD's execution plan, but use
    SQL> explain plan for <your_query>;
    SQL> select * from table(dbms_xplan.display);
    (You can execute that in TOAD as well.)
    Don't forget you need to use the {noformat}{noformat} tag in order to post formatted code/output/execution plans etc.
    It's also explained in the instruction.
    When was the last time statistics were gathered for table_a and table_b?
    You can find out by issuing the following query:
    select table_name
    , last_analyzed
    , num_rows
    from user_tables
    where table_name in ('TABLE_A', 'TABLE_B');
    Can you also post the results of these counts?
    select count(*)
    from table_b
    where confirmed_qty > 0;
    select count(*)
    from table_b
    where data_flag is null;
    select count(*)
    from table_b
    where schedule_line_delivery_date >= /* assuming you're using a date, and not a string*/ to_date('23 NOV 2008', 'dd mon yyyy');
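    If the statistics turn out to be stale or missing, refreshing them is usually the first step; a minimal sketch (default sampling; adjust estimate_percent/degree for a 25-million-row table):
    exec dbms_stats.gather_table_stats(user, 'TABLE_A', cascade => true);
    exec dbms_stats.gather_table_stats(user, 'TABLE_B', cascade => true);
    It may also be worth double-checking whether the OR on schedule_line_delivery_date is meant to be grouped with the other TABLE_B predicates; as written, without parentheses, it applies to the whole WHERE clause of the subquery.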

  • How to check whether a small table gets a full table scan if we use an index column in the where clause.

    How can I check whether a small table gets a full table scan when I use an indexed column in the WHERE clause?
    Is there an example or a link where I can test whether a small table still gets a full table scan when an index is used in the WHERE clause?

    Use EXPLAIN PLAN on your statement, or SET AUTOTRACE TRACEONLY in your SQL*Plus session, followed by the SQL you are testing.
    For example
    SQL> set autotrace traceonly
    SQL> select *
      2  from XXX
      3  where id='fga';
    no rows selected
    Execution Plan
       0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=13 Card=1 Bytes=165)
       1    0   PARTITION RANGE (ALL) (Cost=13 Card=1 Bytes=165)
       2    1     TABLE ACCESS (FULL) OF 'XXX' (TABLE) (Cost=13 Card=1 Bytes=165)
    Statistics
              1  recursive calls
              0  db block gets
           1561  consistent gets
            540  physical reads
              0  redo size
           1864  bytes sent via SQL*Net to client
            333  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              0  rows processed

  • How to find the count of tables going for FTS (full table scan) in Oracle 10g

    Hi,
    How do I find the count of tables going for FTS (full table scan) in Oracle 10g?
    Regards

    Hi,
    Why do you want to 'find' those tables?
    Do you want to 'avoid FTS' on those tables?
    You provide little information here. (Perhaps you just migrated from 9i and are having problems with certain queries now?)
    FTS is sometimes the fastest way to retrieve data, and sometimes an index scan is.
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:9422487749968
    There's no 'FTS view' available; if you want to know what happens on your DB you need, as Anand already said, to trace the sessions that 'worry you'.
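    If a rough count of the cached statements that perform full table scans (and of the tables they hit) is all that is needed, a starting point is a sketch against V$SQL_PLAN; it only sees what is still in the library cache:
    select object_owner, object_name, count(distinct sql_id) as stmt_count
      from v$sql_plan
     where operation = 'TABLE ACCESS'
       and options = 'FULL'
     group by object_owner, object_name
     order by stmt_count desc;
    Statements that have aged out of the cache will not show up, so for a complete picture you still need tracing or Statspack/AWR, as suggested above.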

  • Different Cost values for full table scans

    I have a very simple query that I run in two environments (Prod (20 CPU) and Dev (12 CPU)). Both environments are HP-UX, Oracle 9i.
    The query looks like this:
    SELECT prd70.jde_item_n
    FROM gdw.vjda_gdwprd68_bom_cmpnt prd68
    ,gdw.vjda_gdwprd70_gallo_item prd70
    WHERE prd70.jde_item_n = prd68.parnt_jde_item_n
    AND prd68.last_eff_t+nvl(to_number(prd70.auto_hld_dy_n),0)>= trunc(sysdate)
    GROUP BY prd70.jde_item_n
    When I look at the explain plans, there is a significant difference in cost and I can't figure out why they would be different. Both queries do full table scans, both instances have about the same number of rows, statistics on both are fresh.
    Production Plan:
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=18398 Card=14657 Bytes=249169)
    1 0 SORT (GROUP BY) (Cost=18398 Card=14657 Bytes=249169)
    2 1 HASH JOIN (Cost=18304 Card=14657 Bytes=249169)
    3 2 TABLE ACCESS (FULL) OF 'GDWPRD70_GALLO_ITEM' (Cost=9494 Card=194733 Bytes=1168398)
    4 2 TABLE ACCESS (FULL) OF 'GDWPRD68_BOM_CMPNT' (Cost=5887 Card=293149 Bytes=3224639)
    Development plan:
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3566 Card=14754 Bytes=259214)
    1 0 HASH GROUP BY (GROUP BY) (Cost=3566 Card=14754 Bytes=259214)
    2 1 HASH JOIN (Cost=3558 Card=14754 Bytes=259214)
    3 2 TABLE ACCESS (FULL) OF 'GDWPRD70_GALLO_ITEM' (Cost=19144 Card=193655 Bytes=1323598)
    4 2 TABLE ACCESS (FULL) OF 'GDWPRD68_BOM_CMPNT' (Cost=1076 Card=295075 Bytes=3169542)
    There seems to be no reason for the costs to be so different, but I'm hoping that someone will be able to lead me in the right direction.
    Thanks,
    Jdelao

    This link may help:
    http://jaffardba.blogspot.com/2007/07/change-behavior-of-group-by-clause-in.html
    But looking at the explain plans, one of them uses a SORT (GROUP BY) (the higher-cost query) and the other uses a HASH GROUP BY (the lower-cost query). From my searches on the Net, HASH GROUP BY is a more efficient algorithm than SORT (GROUP BY), which would lead me to believe that this is one of the reasons why the cost values are so different. I can't find in which version HASH GROUP BY was introduced, but quick searches indicate 10g.
    Is your optimizer features parameter set to the same value in both environments? In general, you could compare the relevant parameters to see if there is a difference.
    Hope this helps!
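    One practical way to act on that last suggestion (a sketch; run it on both Prod and Dev and compare the output) is to list the parameters that have been changed from their defaults:
    select name, value
      from v$parameter
     where isdefault = 'FALSE'
     order by name;
    Differences in optimizer_mode, optimizer_features_enable, db_file_multiblock_read_count or the sort/hash workarea settings would explain both the SORT (GROUP BY) vs. HASH GROUP BY choice and the cost gap.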

  • SIDs for Full table scan wait events in db

    Guys,
    10.2.0.5/ 2 node RAC / RHEL-3
    Can anyone provide me a SQL query to find all SIDs doing full table scans in a DB?
    Thanks!
    Hari

    You ought to be able to query v$sql_plan for the SQL plans that contain a full scan, which gives you all the recent plans with a full table scan in them. You can join this back to v$session to find the sessions currently executing a full table scan, but if the SQL in question was a previous statement executed by the session, the join will not show that session, because the statement is no longer its current statement.
    select p.sql_id, s.sid, p.object_name, p.operation, p.options
    from v$sql_plan p, v$session s
    where p.options like '%FULL%'
    and s.sql_id = p.sql_id
    You will probably want to filter out internal operations (operation = "FIXED TABLE")
    HTH -- Mark D Powell --
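    Putting those suggestions together, a version of the query that excludes the internal fixed-table operations might look like this (a sketch; on a 2-node RAC you would normally use GV$SQL_PLAN and GV$SESSION and join on INST_ID as well):
    select p.sql_id, s.sid, p.object_owner, p.object_name
      from v$sql_plan p, v$session s
     where p.operation = 'TABLE ACCESS'
       and p.options = 'FULL'
       and s.sql_id = p.sql_id;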

  • Entity Framework Generated SQL for paging or using Linq skip take causes full table scans.

    The SQL generated creates queries that cause a full table scan for pagination. Is there any way to fix this?
    I am using
    ODP.NET ODTwithODAC1120320_32bit
    ASP.NET 4.5
    EF 5
    Oracle 11gR2
    This table has 2 million records. The further into the records you page, the longer it takes.
    LINQ
    var cnt = (from errorLog in ctx.ERRORLOGANDSERVICELOG_VIEW
                        select errorLog).Count();
                    var query = (from errorLog in ctx.ERRORLOGANDSERVICELOG_VIEW
                                 orderby errorLog.ERR_LOG_ID
                                 select errorLog).Skip(cnt-10).Take(10).ToList();
    Here is the query & execution plans.
    SELECT *
    FROM   (SELECT "Extent1"."ERR_LOG_ID"  AS "ERR_LOG_ID",
                   "Extent1"."SRV_LOG_ID"  AS "SRV_LOG_ID",
                   "Extent1"."TS"          AS "TS",
                   "Extent1"."MSG"         AS "MSG",
                   "Extent1"."STACK_TRACE" AS "STACK_TRACE",
                   "Extent1"."MTD_NM"      AS "MTD_NM",
                   "Extent1"."PRM"         AS "PRM",
                   "Extent1"."INSN_ID"     AS "INSN_ID",
                   "Extent1"."TS_1"        AS "TS_1",
                   "Extent1"."LOG_ETRY"    AS "LOG_ETRY"
            FROM   (SELECT "Extent1"."ERR_LOG_ID"                                  AS "ERR_LOG_ID",
                           "Extent1"."SRV_LOG_ID"                                  AS "SRV_LOG_ID",
                           "Extent1"."TS"                                          AS "TS",
                           "Extent1"."MSG"                                         AS "MSG",
                           "Extent1"."STACK_TRACE"                                 AS "STACK_TRACE",
                           "Extent1"."MTD_NM"                                      AS "MTD_NM",
                           "Extent1"."PRM"                                         AS "PRM",
                           "Extent1"."INSN_ID"                                     AS "INSN_ID",
                           "Extent1"."TS_1"                                        AS "TS_1",
                           "Extent1"."LOG_ETRY"                                    AS "LOG_ETRY",
                           row_number() OVER (ORDER BY "Extent1"."ERR_LOG_ID" ASC) AS "row_number"
                    FROM   (SELECT "ERRORLOGANDSERVICELOG_VIEW"."ERR_LOG_ID"  AS "ERR_LOG_ID",
                                   "ERRORLOGANDSERVICELOG_VIEW"."SRV_LOG_ID"  AS "SRV_LOG_ID",
                                   "ERRORLOGANDSERVICELOG_VIEW"."TS"          AS "TS",
                                   "ERRORLOGANDSERVICELOG_VIEW"."MSG"         AS "MSG",
                                   "ERRORLOGANDSERVICELOG_VIEW"."STACK_TRACE" AS "STACK_TRACE",
                                   "ERRORLOGANDSERVICELOG_VIEW"."MTD_NM"      AS "MTD_NM",
                                   "ERRORLOGANDSERVICELOG_VIEW"."PRM"         AS "PRM",
                                   "ERRORLOGANDSERVICELOG_VIEW"."INSN_ID"     AS "INSN_ID",
                                   "ERRORLOGANDSERVICELOG_VIEW"."TS_1"        AS "TS_1",
                                   "ERRORLOGANDSERVICELOG_VIEW"."LOG_ETRY"    AS "LOG_ETRY"
                            FROM   "IDS_CORE"."ERRORLOGANDSERVICELOG_VIEW" "ERRORLOGANDSERVICELOG_VIEW") "Extent1") "Extent1"
            WHERE  ("Extent1"."row_number" > 1933849)
            ORDER  BY "Extent1"."ERR_LOG_ID" ASC)
    WHERE  (ROWNUM <= (10))
    | Id  | Operation              | Name                   | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT       |                        |    10 | 31750 |       |   821K  (1)| 02:44:15 |
    |*  1 |  COUNT STOPKEY         |                        |       |       |       |            |          |
    |   2 |   VIEW                 |                        |  1561K|  4728M|       |   821K  (1)| 02:44:15 |
    |*  3 |    VIEW                |                        |  1561K|  4748M|       |   821K  (1)| 02:44:15 |
    |   4 |     WINDOW SORT        |                        |  1561K|  3154M|  4066M|   821K  (1)| 02:44:15 |
    |*  5 |      HASH JOIN OUTER   |                        |  1561K|  3154M|       |   130K  (1)| 00:26:09 |
    |   6 |       TABLE ACCESS FULL| IDS_SERVICES_LOG       |  1047 | 52350 |       |     5   (0)| 00:00:01 |
    |   7 |       TABLE ACCESS FULL| IDS_SERVICES_ERROR_LOG |  1561K|  3080M|       |   130K  (1)| 00:26:08 |
    Predicate Information (identified by operation id):
       1 - filter(ROWNUM<=10)
       3 - filter("Extent1"."row_number">1933849)
       5 - access("T1"."SRV_LOG_ID"(+)="T2"."SRV_LOG_ID")
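    One way to stop the database from materialising and sorting the whole view just to skip 1.9 million rows is keyset (seek) pagination: remember the last ERR_LOG_ID already shown and ask for the next 10 rows after it. A hand-written sketch of the SQL shape (the bind name is made up, EF would need a raw query or stored procedure to generate this, and it assumes the ERR_LOG_ID predicate can be pushed into the view and served by an index):
    SELECT *
      FROM (SELECT e.*
              FROM IDS_CORE.ERRORLOGANDSERVICELOG_VIEW e
             WHERE e.ERR_LOG_ID > :last_seen_err_log_id
             ORDER BY e.ERR_LOG_ID)
     WHERE ROWNUM <= 10;
    Unlike the ROW_NUMBER/ROWNUM offset pattern above, the cost of this form does not grow with how deep into the data you page.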

    I did try a sample from Stack Overflow that would apply it to all string types, but I didn't see any difference in the query results. Please note, I am having the problem without any ORDER BY or WHERE statements of my own; of course the Skip/Take generates them. Please advise how I would implement the EntityFunctions.AsNonUnicode method with this LINQ query.
    LINQ
    var cnt = (from errorLog in ctx.ERRORLOGANDSERVICELOG_VIEW
                        select errorLog).Count();
                    var query = (from errorLog in ctx.ERRORLOGANDSERVICELOG_VIEW
                                 orderby errorLog.ERR_LOG_ID
                                 select errorLog).Skip(cnt-10).Take(10).ToList();
    This is what I inserted into my model to hopefully fix it. From: c# - EF Code First - Globally set varchar mapping over nvarchar - Stack Overflow
    /// <summary>
    /// Change the "default" of all string properties for a given entity to varchar instead of nvarchar.
    /// </summary>
    /// <param name="modelBuilder"></param>
    /// <param name="entityType"></param>
    protected void SetAllStringPropertiesAsNonUnicode(
       DbModelBuilder modelBuilder,
       Type entityType)
       var stringProperties = entityType.GetProperties().Where(
      c => c.PropertyType == typeof(string)
       && c.PropertyType.IsPublic
       && c.

  • Bitmap index column goes for full table scan

    Hi all,
    Database : 10g R2
    OS : Windows xp
    my select query is :
    SELECT tran_id, city_id, valid_records
    FROM transaction_details
    WHERE type_id=101;
    And the Explain Plan is :
    Plan
    SELECT STATEMENT ALL_ROWS  Cost: 29 Bytes: 8,876 Cardinality: 634
    1 TABLE ACCESS FULL TABLE TRANSACTION_DETAILS  Cost: 29 Bytes: 8,876 Cardinality: 634
    Total number of rows in the table = 1800; the distinct values of TYPE_ID are 101, 102 and 103,
    so I created a bitmap index on it.
    CREATE BITMAP INDEX btmp_typeid ON transaction_details
    (type_id)
    LOGGING
    NOPARALLEL;
    After creating the index, the explain plan shows the same thing. Why does it still go for a full table scan?
    Kindly share your ideas on this.

    >
    I am sorry for being ignorant, can you please cite any scenario of locking due to bitmap indices? A link can be useful as well.
    >
    See my full reply in this thread
    Bitmap index for FKs on Fact tables
    >
    ETL is affected because DML operations (INSERT/UPDATE/DELETE) on tables with bitmapped indexes can have serious performance issues due to the serialization involved. Updating a single bitmapped column value (e.g. from 'M' to 'F' for gender) requires both bitmapped index blocks to be locked until the update is complete. A bitmap index stores ROWID ranges (min rowid - max rowid) that can span many, many records. The entire 'range' of rowids is locked in order to change just one value.
    To change from 'M', the 'M' rowid range for that one row is locked and the ROWID must be removed from the range by clearing the bit. To change to 'F', the 'F' rowid range needs to be found, locked, and the bit set that corresponds to that rowid. No other rows with rowids in the range can be changed since this is a serial operation. If the range includes 1000 rows and they all need to be changed, it takes 1000 serial operations.
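    Coming back to the original question in this thread: with 1,800 rows, 3 distinct TYPE_ID values and an estimated 634 rows returned, the predicate hits roughly a third of the table, and the whole table probably fits in a handful of blocks, so the full scan at cost 29 is simply the cheaper plan. A quick check (a sketch using the table name from the post):
    select table_name, num_rows, blocks, last_analyzed
      from user_tables
     where table_name = 'TRANSACTION_DETAILS';
    If BLOCKS is small, no index, bitmap or b-tree, will beat the full scan at that selectivity.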

  • Selecting a column goes for a full table scan

    Hi,
    Oracle version 10.2
    I have the below query:
    SELECT   c.companyid,
             c.contactid,
             c.contactlname
        FROM contact c,company ic
       WHERE UPPER((c.contactlname)) LIKE CASE
               WHEN 'test' IS NULL THEN
                UPPER((c.contactlname))
               ELSE DECODE(1,
                       1,
                       '%' || 'TEST' || '%',
                       2,
                       'TEST' || '%',
                       3,
                       '%' || 'TEST',
                       4,
                       'TEST')
             END
         AND c.activeflag = DECODE('Y', 'N', 'Y', c.activeflag)
         AND ic.companyid = c.companyid
    This is using the index on table COMPANY.
    Explain plan:
    SELECT STATEMENT, GOAL = ALL_ROWS               68     3523     176150     
    NESTED LOOPS               68     3523     176150     
      TABLE ACCESS FULL          CONTACT     65     3523     155012     
      INDEX UNIQUE SCAN          COMPANY_PK     0     1     6
    Now if I include two more columns from the COMPANY table in the select statement, the plan changes and it goes for a full table scan...
    Query:
    SELECT   ic.companyname,
             ic.companystatustypeid,
             c.companyid,
             c.contactid,
             c.contactlname
        FROM contact c,company ic
       WHERE UPPER((c.contactlname)) LIKE CASE
               WHEN 'test' IS NULL THEN
                UPPER((c.contactlname))
               ELSE DECODE(1,
                       1,
                       '%' || 'TEST' || '%',
                       2,
                       'TEST' || '%',
                       3,
                       '%' || 'TEST',
                       4,
                       'TEST')
             END
         AND c.activeflag = DECODE('Y', 'N', 'Y', c.activeflag)
         AND ic.companyid = c.companyid
    Explain Plan:
    SELECT STATEMENT, GOAL = ALL_ROWS               2126     4121     403858     
    HASH JOIN               2126     4121     403858     
      TABLE ACCESS FULL          CONTACT     108     4121     185445     
      PARTITION LIST ALL               1959     1031340     54661020     
       TABLE ACCESS FULL          COMPANY     1959     1031340     54661020
    Any ideas why?

    I don't think so.
    I tried removing the filters as well:
    Query:
    SELECT  -- ic.companyname,
           --  ic.companystatustypeid,
    c.companyid,
             c.contactid,
             c.contactlname,
             c.contactfname,
             c.contactpositionid,
             c.contactroledesc,
             c.updateddate
        FROM contact c,company ic
       WHERE ic.companyid = c.companyid
    Plan:
    SELECT STATEMENT, GOAL = ALL_ROWS               109     73346     3520608     
    NESTED LOOPS               109     73346     3520608     
      TABLE ACCESS FULL          CONTACT     51     73346     3080532     
      INDEX UNIQUE SCAN          COMPANY_PK     0     1     6
    Query:
    SELECT   ic.companyname,
             ic.companystatustypeid,
    c.companyid,
             c.contactid,
             c.contactlname,
             c.contactfname,
             c.contactpositionid,
             c.contactroledesc,
             c.updateddate
        FROM contact c,company ic
       WHERE ic.companyid=c.companyid
         Plan:
    SELECT STATEMENT, GOAL = ALL_ROWS               2462     73346     6894524     
    HASH JOIN               2462     73346     6894524     
      TABLE ACCESS FULL          CONTACT     51     73346     3080532     
      PARTITION LIST ALL               1348     973674     50631048     
       TABLE ACCESS FULL          COMPANY     1348     973674     50631048
    Do the columns selected have an impact on the plan? I thought the plan was derived on the basis of the join conditions...
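    Most likely they do, indirectly: in the first query the only COMPANY column referenced is COMPANYID, so the join can be satisfied from the COMPANY_PK index alone (INDEX UNIQUE SCAN, no table access); once COMPANYNAME and COMPANYSTATUSTYPEID are selected, the COMPANY table has to be visited for every joined row and the hash join with a full scan becomes the cheaper estimate. If the nested-loops plan really is faster for this data, one option to test is a covering index (a sketch; the index name and column list are assumptions based on the posted query):
    create index company_cover_idx
        on company (companyid, companyname, companystatustypeid);
    With all three columns in the index, the optimizer can again avoid the table access on COMPANY.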

  • "db file scattered read" too high and Query going for full table scan-Why ?

    Hi,
    I have a big table of around 200 MB and an index on it.
    In my query I am using a WHERE clause which should use the index. I am neither using any NOT NULL condition nor any function on the indexed fields.
    Still, my query is not using the index; it is going for a full table scan.
    Also, the Statspack report is showing "db file scattered read" as too high.
    Can anybody help and suggest why this is happening?
    Also, please tell me the possible solution for it.
    Thanks
    Arun Tayal

    "db file scattered read" are physical reads/multi block reads. This wait occurs when the session reading data blocks from disk and writing into the memory.
    Take the execution plan of the query and see what is wrong and why the index is not being used.
    However, FTS are not always bad. By the way, what is your db_block_size and db_file_multiblock_read_count values?
    If those values are set to high, Optimizer always favour FTS thinking that reading multiblock is always faster than single reads (index scans).
    Dont see oracle not using index, just find out why oracle is not using index. Use the INDEX hint to force optimizer to use index. Take the execution with/witout index and compare the cardinality,cost and of course, logical reads.
    Jaffar
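    For completeness, the INDEX hint mentioned above looks like this (a sketch; the table alias and index name are placeholders, not objects from the original post):
    select /*+ index(t big_table_idx) */ *
      from big_table t
     where indexed_col = :value;
    Comparing the autotrace statistics (consistent gets, physical reads) of the hinted and unhinted runs shows whether the optimizer's preference for the full scan was actually justified.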

  • Simple Query in Oracle Linked Table in MS Access causes full table scan.

    I am running a very simple query in MS ACCESS to a linked Oracle table as follows:
    Select *
    From EXPRESS_SERVICE_EVENTS --(the linked table name refers to EXPRESS.SERVICE_EVENTS)
    Where performed > MyDate()
    or
    Select *
    From EXPRESS_SERVICE_EVENTS --(the linked table name refers to EXPRESS.SERVICE_EVENTS)
    Where performed > [Forms]![MyForm]![Date1]
    We have over 50 machines and this query runs fine on over half of these, using an Oracle Index on the "performed" field. Running exactly the same thing on the other machines causes a full table scan, therefore ignoring the Index (all machines access the same Access DB).
    Strangely, if we write the query as follows:
    Select *
    From EXPRESS_SERVICE_EVENTS
    Where performed > #09/04/2009 08:00#
    it works fast everywhere!
    Any help on this 'phenomenon' would be appreciated.
    Things we've done:
    Checked regional settings, ODBC driver settings, MS Access settings (as in Tools->Options); we have the latest XP and Office service packs, and have re-linked all Access tables on both the slow and fast machines independently.

    Primarily, thanks gdarling for your reply. This solved our problem.
    Just a small note to those who may be using this thread.
    Although this might not be the reason, my PC had Oracle 9iR2 installed with Administrative Tools, whereas the user machines had the same thing installed but as a Runtime Installation. For some reason, my PC did not have 'bind date' etc. as an option in the workarounds, but the user machines did have this workaround option. Strangely, although I did not have the option, my (ODBC) query was running as expected, but the user queries were not.
    When we set the workaround checkbox accordingly, the queries then ran as expected (fast).
    Once again,
    Thanks

  • Query is doing full table scan

    Hi All,
    The query below is doing a full table scan. Many threads from the application trigger this query, each doing a full table scan. Can you please tell me how to improve the performance of this query?
    Env is 11.2.0.3 RAC (4 node). Unique index on VZ_ID, LOGGED_IN. The table row count is 2,501,103.
    Query is :-
    select ccagentsta0_.LOGGED_IN as LOGGED1_404_, ccagentsta0_.VZ_ID as VZ2_404_, ccagentsta0_.ACTIVE as ACTIVE404_, ccagentsta0_.AGENT_STATE as AGENT4_404_,
    ccagentsta0_.APPLICATION_CODE as APPLICAT5_404_, ccagentsta0_.CREATED_ON as CREATED6_404_, ccagentsta0_.CURRENT_ORDER as CURRENT7_404_,
    ccagentsta0_.CURRENT_TASK as CURRENT8_404_, ccagentsta0_.HELM_ID as HELM9_404_, ccagentsta0_.LAST_UPDATED as LAST10_404_, ccagentsta0_.LOCATION as LOCATION404_,
    ccagentsta0_.LOGGED_OUT as LOGGED12_404_, ccagentsta0_.SUPERVISOR_VZID as SUPERVISOR13_404_, ccagentsta0_.VENDOR_NAME as VENDOR14_404_
    from AGENT_STATE ccagentsta0_ where ccagentsta0_.VZ_ID='v790531'  and ccagentsta0_.ACTIVE='Y';
    Table Scan                                                       AGENT_STATE                                                2.366666667
    Table Scan                                                       AGENT_STATE                                                0.3666666667
    Table Scan                                                       AGENT_STATE                                                1.633333333
    Table Scan                                                       AGENT_STATE                                                       0.75
    Table Scan                                                       AGENT_STATE                                                1.866666667
    Table Scan                                                       AGENT_STATE                                                2.533333333
    Table Scan                                                       AGENT_STATE                                                0.5333333333
    Table Scan                                                       AGENT_STATE                                                       1.95
    Table Scan                                                       AGENT_STATE                                                        0.8
    Table Scan                                                       AGENT_STATE                                                0.2833333333
    Table Scan                                                       AGENT_STATE                                                1.983333333
    Table Scan                                                       AGENT_STATE                                                        2.5
    Table Scan                                                       AGENT_STATE                                                1.866666667
    Table Scan                                                       AGENT_STATE                                                1.883333333
    Table Scan                                                       AGENT_STATE                                                        0.9
    Table Scan                                                       AGENT_STATE                                                2.366666667
    But the explain plan shows the query using the index:
    Explain plan output:-
    PLAN_TABLE_OUTPUT
    Plan hash value: 1946142815
    | Id  | Operation                   | Name            | Rows  | Bytes | Cost (%C
    PU)| Time     |
    PLAN_TABLE_OUTPUT
    |   0 | SELECT STATEMENT            |                 |     1 |   106 |   244
    (0)| 00:00:03 |
    |*  1 |  TABLE ACCESS BY INDEX ROWID| AGENT_STATE     |     1 |   106 |   244
    (0)| 00:00:03 |
    |*  2 |   INDEX RANGE SCAN          | AGENT_STATE_IDX |   229 |       |     4
    (0)| 00:00:01 |
    PLAN_TABLE_OUTPUT
    Predicate Information (identified by operation id):
       1 - filter("CCAGENTSTA0_"."ACTIVE"='Y')
       2 - access("CCAGENTSTA0_"."VZ_ID"='v790531')
    The value (VZ_ID) I have given is a dummy value picked from the table. I don't see the actual values since the query comes in with bind variables. Please let me know your suggestions on this.
    Thanks,
    Mani

    Hi,
    But I am not getting what the issue is... it's a simple select query and there is an index on the leading column (VZ_ID, the PK). The explain plan says it's using the index and it selects only a fraction of the rows from the table. Then why is it doing an FTS? Why does the optimizer treat it like a query doing an FTS?
    The rule-based optimizer would have  picked the plan with the index. The cost-based optimizer, however, is picking the plan with the lowest cost. Apparently, the lowest cost plan is the one with the full table scan. And the optimizer isn't necessarily wrong about this.
    Reading data from a table via index probes is only efficient when selecting a relatively small percentage of rows. For larger percentages, a full table scan is generally better.
    Consider a simple example: a query that selects from a table with biographies for all people on the planet. Suppose you are interested in all people from a certain country.
    select * from all_people where country='Vatican'
    would return only 800 rows (as Vatican is an extremely small country with a population of just 800 people). For this case, obviously, using an index would be very efficient.
    Now if we run this query:
    select * from all_people where country = 'India'
    we'd be getting over a billion rows. For this case, a full table scan would be several thousand times faster.
    Now consider the third case:
    select * from all_people where country = :b1
    What plan should the optimizer choose? The value of :b1 bind variable is generally not known during the parse time, it will be passed by the user when the query is already parsed, during run-time.
    In this case, one of two scenarios takes place: either the optimizer relies on some built-in default selectivities (basically, it takes a wild guess), or the optimizer postpones taking the final decision until the
    first time the query is run, 'peeks' the value of the bind, and optimizes the query for this case.
    It means that if, the first time the query is parsed, it was called with :b1 = 'India', a plan with a full table scan will be generated and cached for subsequent use. And until the cursor is aged out of the library cache
    or invalidated for some reason, this will be the plan for this query.
    If the first time it was called with :b1='Vatican', then an index-based plan will be picked.
    Either way, bind peeking only gives good results if the subsequent usage of the query is of the same kind as the first usage, i.e. in the first case it will be efficient if the query is always run for countries with big populations.
    And in the second case, if it's always run for countries with small populations.
    This mechanism is called 'bind peeking' and it's one of the most common causes of performance problems. In 11g there are more sophisticated mechanisms, such as cardinality feedback, but they don't always work as expected.
    This mechanism is the most likely explanation for your issue. However, without proper diagnostic information we cannot be 100% sure.
    Best regards,
      Nikolay
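    To confirm whether bind peeking is really what is happening, you can pull the cached plan together with the peeked bind values (a sketch; &sql_id stands for the SQL_ID of the cursor, e.g. taken from V$SQL):
    select * from table(
      dbms_xplan.display_cursor('&sql_id', null, 'TYPICAL +PEEKED_BINDS'));
    If the peeked VZ_ID was one of the very frequent values, the full-scan plan was a reasonable choice at parse time and is then reused for everything else.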

  • Do partition scans take longer than a full table scan on an unpartitioned table?

    Hello there,
    I have a range-partitioned table PART_TABLE which has 10 Million records and 10 partitions having 1 million records each. Partition is done based on a Column named ID which is a sequence from 1 to 10 million.
    I created another table P2_BKP (doing a select * from part_table) which has the same dataset as that of PART_TABLE except that this table is not partitioned.
    Precisely, I am trying to read only the data present in 5 partitions of the partitioned table, which theoretically requires fewer reads than on the unpartitioned table.
    Yet the query seems to take more time on the partitioned table than on the unpartitioned table. Any specific reason why this is the case?
    Below is the query I am trying to run on both the tables and their corresponding Explain Plans.
    QUERY A
    =========
    select * from P2_BKP where id<5000000;
    | Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |                                                                                                                                                                                                                                
    |   0 | SELECT STATEMENT  |        |  6573K|   720M| 12152   (2)| 00:02:26 |                                                                                                                                                                                                                                
    |*  1 |  TABLE ACCESS FULL| P2_BKP |  6573K|   720M| 12152   (2)| 00:02:26 |                                                                                                                                                                                                                                
    QUERY B
    ========
    select * from part_table where id<5000000;
    | Id  | Operation                | Name       | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |                                                                                                                                                                                                     
    |   0 | SELECT STATEMENT         |            |  3983K|   436M| 22181  (73)| 00:04:27 |       |       |                                                                                                                                                                                                     
    |   1 |  PARTITION RANGE ITERATOR|            |  3983K|   436M| 22181  (73)| 00:04:27 |     1 |     5 |                                                                                                                                                                                                     
    |*  2 |   TABLE ACCESS FULL      | PART_TABLE |  3983K|   436M| 22181  (73)| 00:04:27 |     1 |     5 |                                                                                                                                                                                                     

    at the risk of bringing unnecessary confusion into the discussion: I think there is a situation in 11g in which a Full Table Scan on a non partitioned table can be faster than the FTS on a corresponding partitioned table: if the size of the non partitioned table reaches a certain threshold (I think it's: blocks > _small_table_threshold * 5) the runtime engine may decide to use a serial direct path read to access the data. If the single partitions don't pass the threshold the engine will use the conventional path.
    Here is a small example for my assertion:
    -- I create a simple partitioned table
    -- and a corresponding non-partitioned table
    -- with 1M rows
    drop table tab_part;
    create table tab_part (
        col_part number
      , padding varchar2(100)
    )
    partition by list (col_part) (
        partition P00 values (0)
      , partition P01 values (1)
      , partition P02 values (2)
      , partition P03 values (3)
      , partition P04 values (4)
      , partition P05 values (5)
      , partition P06 values (6)
      , partition P07 values (7)
      , partition P08 values (8)
      , partition P09 values (9)
    );
    insert into tab_part
    select mod(rownum, 10)
         , lpad('*', 100, '*')
      from dual
    connect by level <= 1000000;
    exec dbms_stats.gather_table_stats(user, 'tab_part')
    drop table tab_nopart;
    create table tab_nopart
    as
    select *
      from tab_part;
    exec dbms_stats.gather_table_stats(user, 'tab_nopart')
    -- my _small_table_threshold is 1777 and the partitions
    -- have a size of ca. 1600 blocks while the non-partitioned table
    -- contains 15360 blocks
    -- I have to flush the buffer cache since
    -- the direct path access is only used
    -- if there are few blocks already in the cache
    alter system flush buffer_cache;
    -- the execution plans are not really exciting
    | Id  | Operation           | Name     | Rows  | Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | SELECT STATEMENT    |          |     1 |  8089   (0)| 00:00:41 |       |       |
    |   1 |  SORT AGGREGATE     |          |     1 |            |          |       |       |
    |   2 |   PARTITION LIST ALL|          |  1000K|  8089   (0)| 00:00:41 |     1 |    10 |
    |   3 |    TABLE ACCESS FULL| TAB_PART |  1000K|  8089   (0)| 00:00:41 |     1 |    10 |
    | Id  | Operation          | Name       | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |            |     1 |  7659   (0)| 00:00:39 |
    |   1 |  SORT AGGREGATE    |            |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| TAB_NOPART |  1000K|  7659   (0)| 00:00:39 |
    But on my PC the FTS on the non-partitioned table is faster than the FTS on the partitions (1sec to 3 sec.) and v$sesstat shows the reason for this difference:
    -- non partitioned table
    NAME                                               DIFF
    table scan rows gotten                          1000000
    file io wait time                                 15313
    session logical reads                             15156
    physical reads                                    15153
    consistent gets direct                            15152
    physical reads direct                             15152
    DB time                                              95
    -- partitioned table
    NAME                                               DIFF
    file io wait time                               2746493
    table scan rows gotten                          1000000
    session logical reads                             15558
    physical reads                                    15518
    physical reads cache prefetch                     15202
    DB time                                             295
    (maybe my choice of counters is questionable)
    So it's possible to get slower access for an FTS on a partitioned table under special conditions.
    Regards
    Martin

  • FASTER THROUGHPUT ON FULL TABLE SCAN

    Product: ORACLE SERVER
    Date written: 1995-04-10
    Subject: Faster throughput on full table scans
    db_file_multiblock_read_count only affects the performance of full table scans.
    Oracle has a maximum I/O size of 64 KB, hence db_block_size *
    db_file_multiblock_read_count must be less than or equal to 64 KB.
    If your query is really doing an index range scan then the performance
    of full scans is irrelevant. In order to improve the performance of this
    type of query it is important to reduce the number of blocks that
    the 'interesting' part of the index is contained within.
    Obviously the db_blocksize has the most impact here.
    Historically Informix has not been able to modify their database block size,
    and has had a fixed 2KB block.
    On most Unix platforms Oracle can use up to 8KBytes.
    (Some eg: Sequent allow 16KB).
    This means that for the same size of B-Tree index Oracle with
    an 8KB blocksize can read its contents in 1/4 of the time that
    Informix with a 2KB block could do.
    You should also consider whether the PCTFREE value used for your index is
    appropriate. If it is too large then you will be wasting space
    in each index block. (It's too large IF you are not going to get any
    entry size extension OR you are not going to get any new rows for existing
    index values. NB: this is usually only a real consideration for large indexes - 10,000 entries is small.)
    db_file_simultaneous_writes has no direct relevance to index re-balancing.
    (PS: In the U.K. we benchmarked against Informix, Sybase, Unify and
    HP/Allbase for the database server application that HP uses internally to
    monitor and control its tape drive manufacturing lines. They chose Oracle because:
    - We outperformed Informix.
    - Sybase was too slow AND too unreliable.
    - Unify was short on functionality and SLOW.
    - HP/Allbase couldn't match the availability requirements and wasn't as functional.
    Informix had problems demonstrating the ability to do hot backups without
    severely affecting the system throughput.
    HP benchmarked all DB vendors on both 9000/800 and 9000/700 machines with
    different disks (ie: HP-IB and SCSI). Oracle came out ahead in all
    configurations.
    NNB: It's always worth throwing in a simulated system failure whilst the
    benchmark is in progress. Informix has a history of not coping gracefully.
    That is they usually need some manual intervention to perform the database
    recovery.)
    I have a prospective client who is running a stripped-down, souped-up version of
    Informix with no catalytic converter. One of their queries boils down to an
    index range scan on 10,000 records. How can I achieve better throughput
    on a single-drive, single-CPU machine (HP-UX) without using raw devices?
    I had heard rebuilding the database with a block size factor greater than
    the OS block size would yield better performance. Also I tried changing
    the db_file_multiblock_read_count to 32 without much improvement.
    Adjusting the db_writers to two did not help either.     
    Also, will adjusting db_file_simultaneous_writes help with the maintenance of an index during rebalancing operations?
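    As a concrete illustration of the 64 KB I/O ceiling described above (a sketch; the numbers assume an 8 KB block size): with db_block_size = 8192, any db_file_multiblock_read_count above 8 buys nothing, because 8 * 8 KB already reaches the maximum I/O size, which is why raising it to 32 showed no improvement.
    show parameter db_block_size
    show parameter db_file_multiblock_read_count
    alter session set db_file_multiblock_read_count = 8;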

    2) If CBO, how are the stats collected?
    Daily (tables with less than millions of rows) and weekly (all tables).
    There's no need to collect stats so frequently unless it's absolutely necessary, e.g. you have massive updates on tables daily or weekly.
    It will help if you can post your sample explain plan and query.

Maybe you are looking for

  • Purchased in Itunes but it'snot showing up on iPhone

    Purchased song on iTunes. Placed it on the iPhone. I see it in my iPhone library and my iTunes library when it is connected to the computer but when I disconnect the iPhone from my PC I do not see the song on my iPhone. Any suggestions on why this is

  • Does iPod Nano work with Camera Connector?

    Is the iPod Nano able to work with the Camera Connector? How about the Belkin Camera Connector or Belkin Media Reader?

  • Can't sync Palm Centro and Outlook 2007/Windows 7 - please help!

    Hello, I've never posted on a forum, but I'm at my wits end...!  I've been trying to get my Palm Centro to sync Contacts/Calendar/Memos to Outlook 2007 on my new laptop for a week now with no success.  It used to sync just fine to my old computer wit

  • Where Should a Linked Document Reside?

    I have a multi-chapter document that I can't link to its home web site because it is in multiple zipped files. I don't want my end users to have to unzip, etc. I also don't want to store the files, zipped or unzipped, in my project (a policies and pr

  • Online PDF Form

    Hi Experts, I want to deploy the sample application TutWD_OnlineInteractiveForm_Init.zip and get the online pdf form.I have installed adobe reader,adobe lifecycle designer in the system for which I am deploying the application.Can you please tell whi