Slow queries and full table scans in spite of CONTEXT index

I have defined a USER_DATASTORE, which uses a PL/SQL procedure to compile data from several tables. The master table has 1.3 million rows, and one of the fields being joined is a CLOB field.
The resulting token table has 65,000 rows, which seems about right.
If I query the token table for a word, such as "ORACLE" in the token_text field, I see that the token_count is 139. This query returns instantly.
The query against the master table is very slow, taking about 15 minutes to return the 139 rows.
Example query:
select hnd from master_table where contains(myindex,'ORACLE',1) > 0;
I've run a sql_trace on this query, and it shows full table scans on both the master table and the DR$MYINDEX$I table. Why is it doing this, and how can I fix it?

After looking at the tuning FAQ, I can see that this is doing a functional lookup instead of an indexed lookup. But why, when the rows are not constrained by any structural predicate? And how can I get it to do an indexed lookup instead?
Thanks in advance,
Annie

Similar Messages

  • How to check whether a small table does a full table scan when an index column is used in the where clause

    How can I check whether a small table is read with a full table scan even when an index column is used in the where clause?
    Is there an example or link where I can test this?

    Use explain plan on your statement or set autotrace traceonly in your SQL*Plus session followed by the SQL you are testing.
    For example
    SQL> set autotrace traceonly
    SQL> select *
      2  from XXX
      3  where id='fga';
    no rows selected
    Execution Plan
       0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=13 Card=1 Bytes=165)
       1    0   PARTITION RANGE (ALL) (Cost=13 Card=1 Bytes=165)
       2    1     TABLE ACCESS (FULL) OF 'XXX' (TABLE) (Cost=13 Card=1 Bytes=165)
    Statistics
              1  recursive calls
              0  db block gets
           1561  consistent gets
            540  physical reads
              0  redo size
           1864  bytes sent via SQL*Net to client
            333  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              0  rows processed
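    Another way to get the plan without executing the statement is EXPLAIN PLAN plus DBMS_XPLAN (a sketch, reusing the hypothetical table XXX from above):

    ```sql
    -- Populate the plan table for the statement, then display it.
    EXPLAIN PLAN FOR
      SELECT * FROM XXX WHERE id = 'fga';

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    ```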

  • Slow query due to large table and full table scan

    Hi,
    We have a large Oracle database, v 10g. Two of the tables in the database have over one million rows.
    We have a few queries which take a long time to execute. Not always, though; it seems that when load is high the queries tend
    to take much longer. Average time may be 1 or 2 seconds, but the maximum can be up to 2 minutes.
    We have now used Oracle Grid to help us examine the queries. We have found that some of the queries require two or three full table scans.
    Two of the full table scans are of the two large tables mentioned above.
    This is an example query:
    SELECT table1.column, table2.column, table3.column
    FROM table1
    JOIN table2 on table1.table2Id = table2.id
    LEFT JOIN table3 on table2.table3id = table3.id
    WHERE table1.id IN(
    SELECT id
    FROM (
    (SELECT a.*, rownum rnum FROM(
    SELECT table1.id
    FROM table1,
    table2,
    table3
    WHERE
    table1.table2id = table2.id
    AND
    table2.table3id IS NULL OR table2.table3id = :table3IdParameter
    ) a
    WHERE rownum <= :end))
    WHERE rnum >= :start
    Table1 and table2 are the large tables in this example. This query starts two full table scans on those tables.
    Can we avoid this? We have, what we think are, the correct indexes.
    /best regards, Håkan

    >
    Hi Håkan - welcome to the forum.
    We have a large Oracle database, v 10g. Two of the tables in the database have over one million rows.
    We have a few queries which take a long time to execute. Not always, though; it seems that when load is high the queries tend
    to take much longer. Average time may be 1 or 2 seconds, but the maximum can be up to 2 minutes.
    We have now used Oracle Grid to help us examine the queries. We have found that some of the queries require two or three full table scans.
    Two of the full table scans are of the two large tables mentioned above.
    This is an example query:
    Firstly, please read the forum FAQ - top right of page.
    Please format your SQL using tags [code /code].
    In order to help us to help you.
    Please post table structures - relevant (i.e. joined, FK, PK fields only) in the following form - note the use of code tags - so we can just run the table create script.
    CREATE TABLE table1 (
      Field1  Type1,
      Field2  Type2,
      FieldN  TypeN
    );
    Then give us some table data - not 100's of records - just enough, in the form
    INSERT INTO Table1 VALUES (Field1, Field2, ..., FieldN);
    Please post the EXPLAIN PLAN - again with tags.
    HTH,
    Paul...
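    For reference, the usual shape of ROWNUM pagination looks like this (a sketch only; the names follow the posted query, and the ORDER BY is an assumption - ROWNUM pagination is only deterministic when there is one):

    ```sql
    SELECT id
    FROM (SELECT a.*, ROWNUM rnum
          FROM (SELECT t1.id
                FROM table1 t1
                JOIN table2 t2 ON t1.table2id = t2.id
                WHERE t2.table3id IS NULL
                   OR t2.table3id = :table3IdParameter
                ORDER BY t1.id   -- deterministic order before ROWNUM is assigned
               ) a
          WHERE ROWNUM <= :end)
    WHERE rnum >= :start;
    ```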

  • Nested Tables and Full Table Scans

    Hello,
    I am hoping someone can help me as I am truly scratching my head.
    I have recently been introduced to nested tables due to the fact that we have a poor-running query in production. What I have discovered is that when a table is created with a column that is a nested table, a unique index is automatically created on that column.
    When I do an explain plan on this table A it states that a full scan is being done on Table A and on the nested table B. I can add an index to the offending columns to remove the full scan on Table A, but the explain plan still identifies that a full scan is being done on the nested table B. Bear in mind that the column with the nested table has a cardinality of 27.
    What can I do? As I stated, there is an index on this nested table column but clearly it is being ignored. The query bombed out after 4 hours, and when I ran a query to see what the record count was, it was only 2046.
    Any suggestions would be greatly appreciated.
    Edited by: user11887286 on Sep 10, 2009 1:05 PM

    Hi and welcome to the forum.
    Since your question is in fact a tuning request, you need to provide us some more insights.
    See:
    [How to post a SQL statement tuning request|http://forums.oracle.com/forums/thread.jspa?threadID=863295&tstart=0]
    and also
    [When your query takes too long|http://forums.oracle.com/forums/thread.jspa?threadID=501834]
    In short:
    - database version
    - the actual queries you're executing
    - the execution plans (explain plans)
    - trace/tkprof output if available, or ask your DBA for it
    - ideally, a small but concise testcase (create table + insert statements)
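    A minimal testcase of the form being requested might look like this (all names hypothetical):

    ```sql
    CREATE TABLE t (id NUMBER PRIMARY KEY, val VARCHAR2(30));

    INSERT INTO t VALUES (1, 'one');
    INSERT INTO t VALUES (2, 'two');

    -- Show the plan the optimizer chooses for the problem query.
    EXPLAIN PLAN FOR SELECT val FROM t WHERE id = 1;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    ```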

  • To_timestamp -vs- to_date and full table scans

    Is it expected behavior for the to_timstamp function to negate the use of an index where the to_date does not?
    Example:
    The query:
    SELECT hugea.intkey, hugea.stringkey, hugea.intnum, hugea.stringnum,
    hugea.floatnum, hugea.longnum, hugea.doublenum, hugea.bytenum,
    hugea.datevalue, hugea.timevalue, hugea.timestampvalue,
    hugea.booleanvalue, hugea.charvalue, hugea.shortvalue,
    hugea.bigintegervalue, hugea.bigdecimalvalue, hugea.objectvalue
    FROM hugea
    WHERE hugea.timestampvalue = {ts '2000-01-01 03:21:37.0'}
    Submitted thru a java application generates the following trace IN ORACLE
    =====================
    PARSING IN CURSOR #4 len=452 dep=0 uid=65 oct=3 lid=65 tim=18446744071203520496 hv=3183903894 ad='6582cbe4'
    SELECT hugea.intkey, hugea.stringkey, hugea.intnum, hugea.stringnum,
    hugea.floatnum, hugea.longnum, hugea.doublenum, hugea.bytenum,
    hugea.datevalue, hugea.timevalue, hugea.timestampvalue,
    hugea.booleanvalue, hugea.charvalue, hugea.shortvalue,
    hugea.bigintegervalue, hugea.bigdecimalvalue, hugea.objectvalue
    FROM hugea
    WHERE hugea.timestampvalue = to_timestamp('2000-01-01 03:21:37.0','YYYY-MM-DD HH24:MI:SS.FF')
    END OF STMT
    PARSE #4:c=359375,e=355026,p=0,cr=63,cu=3,mis=1,r=0,dep=0,og=4,tim=18446744071203520489
    EXEC #4:c=0,e=1253028,p=0,cr=3,cu=0,mis=0,r=0,dep=0,og=4,tim=18446744071204773686
    FETCH #4:c=0,e=2055128,p=0,cr=0,cu=0,mis=0,r=1,dep=0,og=4,tim=18446744071206831146
    STAT #4 id=1 cnt=0 pid=0 pos=1 obj=30620 op='TABLE ACCESS FULL HUGEA '
    *** 2004-07-30 16:06:28.000
    =====================
    The query:
    SELECT hugea.intkey, hugea.stringkey, hugea.intnum, hugea.stringnum,
    hugea.floatnum, hugea.longnum, hugea.doublenum, hugea.bytenum,
    hugea.datevalue, hugea.timevalue, hugea.timestampvalue,
    hugea.booleanvalue, hugea.charvalue, hugea.shortvalue,
    hugea.bigintegervalue, hugea.bigdecimalvalue, hugea.objectvalue
    FROM hugea
    WHERE hugea.timestampvalue = {ts '2000-01-01 03:21:37'}
    Submitted thru a java application generates the following trace IN ORACLE
    PARSING IN CURSOR #4 len=442 dep=0 uid=65 oct=3 lid=65 tim=18446744071260555440 hv=2315259180 ad='65343e30'
    SELECT hugea.intkey, hugea.stringkey, hugea.intnum, hugea.stringnum,
    hugea.floatnum, hugea.longnum, hugea.doublenum, hugea.bytenum,
    hugea.datevalue, hugea.timevalue, hugea.timestampvalue,
    hugea.booleanvalue, hugea.charvalue, hugea.shortvalue,
    hugea.bigintegervalue, hugea.bigdecimalvalue, hugea.objectvalue
    FROM hugea
    WHERE hugea.timestampvalue = to_date('2000-01-01 03:21:37','YYYY-MM-DD HH24:MI:SS')
    END OF STMT
    PARSE #4:c=0,e=1058,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=4,tim=18446744071260555431
    EXEC #4:c=0,e=41,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=18446744071260555686
    FETCH #4:c=0,e=58,p=0,cr=4,cu=0,mis=0,r=1,dep=0,og=4,tim=18446744071260557318
    STAT #4 id=1 cnt=1 pid=0 pos=1 obj=30620 op='TABLE ACCESS BY INDEX ROWID HUGEA '
    STAT #4 id=2 cnt=1 pid=1 pos=1 obj=31428 op='INDEX RANGE SCAN TIMESTAMPVALUE_IX '
    The problem arose through JAVA/JDBC in this case, but the underlying problem (to_date -vs- to_timestamp not acting the same insofar as INDEX utilization) is reproducible within SQL*Plus.
    I also tried a function-based index and forced use with hints but had no success.
    I'm hoping there is a patch of some kind and that this is simply due to the immaturity of the timestamp datatype.
    please advise.

    Your test is then not relevant. If your table is too small, your statistics too old, or your range too big, then Oracle will not use an index.
    SQL> insert into test_timestamp select timestamp '2000-01-01 00:00:00.00' + numtodsinterval(dbms_random.value,'DAY') from all_objects;
    SQL> insert into test_timestamp select timestamp '2000-01-01 00:00:00.00' + numtodsinterval(dbms_random.value,'DAY') from all_objects;
    SQL> select count(*) from test_timestamp;
      COUNT(*)
         74706
    SQL> analyze table test_timestamp compute statistics;
    Table analyzed.
    SQL> create index test_timestamp_ix on test_timestamp(ts) compute statistics;
    Index created.
    SQL> set autot trace exp
    SQL> select * from test_timestamp where ts between to_timestamp('2000-01-01','YYYY-MM-DD')  and to_timestamp('2000-01-01 00:00:05.00','YYYY-MM-DD HH24:MI:SS.FF');
    Execution Plan
       0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=2 Card=187 Bytes=2057)
       1    0   FILTER
       2    1     INDEX (RANGE SCAN) OF 'TEST_TIMESTAMP_IX' (INDEX) (Cost=2 Card=187 Bytes=2057)
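    When the column is a DATE, a TIMESTAMP predicate makes Oracle convert the column (not the literal), which disables a normal index. A hedged workaround, assuming the hugea table from the original post, is to cast the literal down to DATE instead:

    ```sql
    -- The stored column stays untouched, so TIMESTAMPVALUE_IX remains usable.
    SELECT *
    FROM   hugea
    WHERE  hugea.timestampvalue =
           CAST(TO_TIMESTAMP('2000-01-01 03:21:37.0','YYYY-MM-DD HH24:MI:SS.FF') AS DATE);
    ```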

  • Full table scans and EUL

    Hi All,
    I am using Discoverer Version 10.1.2.1.
    I have few reports in which tables used in queries (used in Custom folders) go into full table scans inspite of all efforts in tuning.
    I came to know that full table scans also come from the way EUL is built and maintained.
    Can anyone please throw some light on how does EUL and full table scan relate.
    Also one more thing here, which is better to use, Database View on which we can base our folder or writing the complete complex query in the folder.
    Any help in this case will be appreciated.
    Regards,
    Ankur

    Hi,
    Can anyone please throw some light on how does EUL and full table scan relate
    The database cost-based optimizer (CBO) processes the SQL that has been generated by Discoverer and creates an execution plan for it. The execution plan will contain full table scans if the CBO calculates that a FTS will give the best results. The CBO mainly uses the statistics held against the tables and the conditions in the SQL to calculate whether a FTS would be better than using an index. The table join conditions are usually defined in the EUL, but other conditions are usually in the workbook.
    So there are many factors which control whether the database uses an FTS and only a few of them are affected by how the EUL is built.
    Database View on which we can base our folder or writing the complete complex query in the folder
    In general, it is always better to create a database view if that option is available to you. You can control and monitor the SQL in a database view much more easily than a query in a custom folder.
    Rod West

  • Different Cost values for full table scans

    I have a very simple query that I run in two environments (Prod (20 CPU) and Dev (12 CPU)). Both environments are HP-UX, Oracle 9i.
    The query looks like this:
    SELECT prd70.jde_item_n
    FROM gdw.vjda_gdwprd68_bom_cmpnt prd68
    ,gdw.vjda_gdwprd70_gallo_item prd70
    WHERE prd70.jde_item_n = prd68.parnt_jde_item_n
    AND prd68.last_eff_t+nvl(to_number(prd70.auto_hld_dy_n),0)>= trunc(sysdate)
    GROUP BY prd70.jde_item_n
    When I look at the explain plans, there is a significant difference in cost and I can't figure out why they would be different. Both queries do full table scans, both instances have about the same number of rows, statistics on both are fresh.
    Production Plan:
    0   SELECT STATEMENT Optimizer=ALL_ROWS (Cost=18398 Card=14657 Bytes=249169)
    1 0   SORT (GROUP BY) (Cost=18398 Card=14657 Bytes=249169)
    2 1     HASH JOIN (Cost=18304 Card=14657 Bytes=249169)
    3 2       TABLE ACCESS (FULL) OF 'GDWPRD70_GALLO_ITEM' (Cost=9494 Card=194733 Bytes=1168398)
    4 2       TABLE ACCESS (FULL) OF 'GDWPRD68_BOM_CMPNT' (Cost=5887 Card=293149 Bytes=3224639)
    Development plan:
    0   SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3566 Card=14754 Bytes=259214)
    1 0   HASH GROUP BY (GROUP BY) (Cost=3566 Card=14754 Bytes=259214)
    2 1     HASH JOIN (Cost=3558 Card=14754 Bytes=259214)
    3 2       TABLE ACCESS (FULL) OF 'GDWPRD70_GALLO_ITEM' (Cost=19144 Card=193655 Bytes=1323598)
    4 2       TABLE ACCESS (FULL) OF 'GDWPRD68_BOM_CMPNT' (Cost=1076 Card=295075 Bytes=3169542)
    There seems to be no reason for the costs to be so different, but I'm hoping that someone will be able to lead me in the right direction.
    Thanks,
    Jdelao

    This link may help:
    http://jaffardba.blogspot.com/2007/07/change-behavior-of-group-by-clause-in.html
    But looking at the explain plans, one of them uses a SORT (GROUP BY) (the higher-cost query) and the other uses a HASH GROUP BY (the lower-cost query). From my searches on the 'Net, HASH GROUP BY is a more efficient algorithm than SORT (GROUP BY), which would lead me to believe that this is one of the reasons why the cost values are so different. I can't find which version HASH GROUP BY was introduced in, but quick searches indicate 10g.
    Is your optimizer_features_enable parameter set to the same value in both environments? In general you could compare the relevant optimizer parameters to see if there is a difference.
    Hope this helps!
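    A quick way to compare optimizer-related settings across the two instances (run on each and diff the output; the extra parameters listed are just suggestions):

    ```sql
    SELECT name, value
    FROM   v$parameter
    WHERE  name LIKE 'optimizer%'
       OR  name IN ('db_file_multiblock_read_count', 'hash_area_size', 'sort_area_size')
    ORDER  BY name;
    ```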

  • Avod full table scan help...

    Hi,
    I have a SQL statement with some filters, and all of the filter columns have indexes. The table is huge. Even though an index exists, the explain plan shows a full table scan; it is not recognizing the index. I used the index hint /*+ INDEX (SYM.SYM_DEPL,SYM.SYDB_DE_N18) */ but the plan still shows a full table scan, and the query takes a long time.
    Please help to resolve the issue; it should use the index rather than a full table scan.

    user13301356 wrote:
    Hi,
    I have a SQL statement with some filters, and all of the filter columns have indexes. The table is huge. Even though an index exists, the explain plan shows a full table scan; it is not recognizing the index. I used the index hint /*+ INDEX (SYM.SYM_DEPL,SYM.SYDB_DE_N18) */ but the plan still shows a full table scan, and the query takes a long time. Please help to resolve the issue; it should use the index rather than a full table scan.
    What is your database version? Are all columns in the table indexed? Copy and paste the query that you are executing.

  • Prompt on DATE forces FULL TABLE SCAN

    When using a prompt on a datetime field, OBIEE sends SQL to the database with a TIMESTAMP literal.
    Due to the timestamp, the Oracle database does a full table scan. The field ATIST is a date with time on the physical database.
    By default ATIST was configured as TIMESTAMP in the RPD physical layer. The SQL request is sent to an Oracle 10g database.
    That is the query sent to the database:
    -------------------- Sending query to database named PlantControl1 (id: <<10167>>):
    select distinct T1471.ATIST as c1,
    T1471.GUTMENGEMELD2 as c2
    from
    AGRUECK T1471 /* Fact_ARBEITSGANGMELDUNGEN */
    where ( T1471.ATIST = TIMESTAMP '2005-04-01 13:48:05' )
    order by c1, c2
    The result takes more than half a minute to appear.
    Because OBIEE is using "TIMESTAMP", the database performs a full table scan instead of using the index.
    By using TO_DATE instead of timestamp the result appears after a second.
    select distinct T1471.ATIST, T1471.GUTMENGEMELD2 as c2
    from
    AGRUECK T1471 /* Fact_ARBEITSGANGMELDUNGEN */
    where ( T1471.ATIST = to_date('2005.04.01 13:48:05', 'yyyy.mm.dd hh24:mi:ss') );
    Is there any way resolving the issue?
    PS: When the field ATIST is configured as DATE at the physical layer, the SQL performs well as it uses "to_date" instead of "timestamp". But this truncates the time portion of the date. When it is configured as DATETIME, OBIEE uses TIMESTAMP again.
    What I need is a working date + time field.
    Has anybody encountered a similar problem?

    To be honest I haven't come across many scenarios where the Time has been important. Most of our reporting stops at Day level.
    What is the real world business question being asked here that requires DayTime?
    Incidentally if you change your datatype on the base table you will see it works fine.
    CREATE TABLE daytime( daytime TIMESTAMP );
    CREATE UNIQUE INDEX dt ON daytime (daytime);
    SQL> set autotrace traceonly
    SQL> SELECT * FROM daytime
      2  WHERE daytime = TIMESTAMP '2007-04-01 13:00:45';
    no rows selected
    Execution Plan
    Plan hash value: 3985400340
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |      |     1 |    13 |     1   (0)| 00:00:01 |
    |*  1 |  INDEX UNIQUE SCAN| DT   |     1 |    13 |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - access("DAYTIME"=TIMESTAMP' 2007-04-01 13:00:45.000000000')
    Statistics
              1  recursive calls
              0  db block gets
              1  consistent gets
              0  physical reads
              0  redo size
            242  bytes sent via SQL*Net to client
            362  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              0  rows processed
    SQL>
    However, if it's a DATE it would appear to do some internal function call, which I guess is the source of the problem ...
    | Id  | Operation         | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |         |     1 |     9 |     2   (0)| 00:00:01 |
    |*  1 |  TABLE ACCESS FULL| DAYTIME |     1 |     9 |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter(INTERNAL_FUNCTION("DAYTIME")=TIMESTAMP' 2007-04-01 13:00:45.000000000')
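    If changing the OBIEE side is not possible, one possible (version-dependent, untested here) workaround is a function-based index that matches the conversion, so the INTERNAL_FUNCTION predicate can still be served by an index. A sketch against the AGRUECK table from the post:

    ```sql
    -- Index the DATE column as a TIMESTAMP expression; whether the optimizer
    -- matches it to the implicit conversion depends on the database version.
    CREATE INDEX agrueck_atist_ts_ix ON agrueck (CAST(atist AS TIMESTAMP));
    ```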

  • Update doing full table scan and taking long time

    Hi All,
    I am running an update statement which is doing a full table scan.
    UPDATE Database.TABLE AS T
    SET COMMENTS = CAST(CAST(COALESCE(T.COMMENTS, 0) AS INTEGER) + 1 AS CHARACTER)
    WHERE T.TRACKINGPOINT = 'NDEL'
    AND T.REFERENCENUMBER = SUBSTRING(Root.XML.EE_EAI_MESSAGE.ReferenceNumber || ' ' FROM 1 FOR 32);
    Any advice.
    Regards,
    Umair

    Mustafa,
    No, a developer is writing it in his program.
    Regards,
    Umair

  • Full table scan and how to avoid it

    Hello,
    I have two tables, one with 425,000 records, and the other with 5,200,000 records in it. The smaller table has an index on its unique primary key, and the bigger table has an index on the foreign key of the smaller table.
    When joining these two tables, I keep getting full table scans on both of these tables, and I would like to understand the philosophy behind it as well as ways to avoid this.
    Thanks

    Are you manipulating the join columns in any fashion? Such as applying a function to them, as in
    to_char(column_a) = to_char(column_b)
    Because any manipulation like that will obviate your index (assuming you don't have function-based indexes).
    Really though, without your tables, indexes and query, we're left with voodoo, which is cool, but not really that effective.
    *note to any and all practicing witch doctors, i really do think voodoo is cool and effective, please don't persecute me for my speakings.
    Message was edited by:
    Tubby
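    If the manipulation is genuinely needed, a function-based index keeps the predicate indexable. A sketch with hypothetical names:

    ```sql
    -- Index the exact expression used in the join/filter.
    CREATE INDEX big_t_colb_fbi ON big_t (TO_CHAR(column_b));

    -- A predicate written the same way can now use big_t_colb_fbi:
    SELECT * FROM big_t WHERE TO_CHAR(column_b) = :val;
    ```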

  • Question about Full Table Scans and Tablespaces

    Good evening (or morning),
    I'm reading the Oracle Concepts (I'm new to Oracle) and it seems that, based on the way that Oracle allocates and manages storage the following premise would be true:
    Premise: A table that is often accessed using a full table scan (for whatever reasons) would best reside in its own dedicated tablespace.
    The main reason I came to this conclusion is that when doing a full table scan, Oracle does multiblock I/O, likely reading one extent at a time. If the Tablespace's datafile(s) only contain data for a single table then a serial read will not have to skip over segments that contain data for other tables (as would be the case if the tablespace is shared with other tables). The performance improvement is probably small but, it would seem that there is one nonetheless.
    I'd like to have the thoughts of experienced DBAs regarding the above premise.
    Thank you for your contribution,
    John.

    Good morning :) Aman,
    >
    A little correction! A segment (be it a table, index, cluster, or temporary segment) always stays within a single tablespace. Segments can't span tablespaces!
    >
    Fortunately, I understood that from the beginning :)
    You mentioned fragmentation, I understand that too. As rows get deleted small holes start existing in the segment and those holes are not easily reusable because of their limited size.
    What I am referring to is different though.
    Let's consider a tablespace that is the home of 2 or more tables, the tablespace in turn is represented by one or more OS datafiles, in that case the situation will be as shown in the following diagram (not a very good diagram but... best I can do here ;) ):
    Tablespace TablespaceWithManyTables
      (segment 1 contents)
        TableA Extent 1
          TableA Block 1
          TableA Block 2
          Fragmentation may happen in these blocks or
          even across blocks because Oracle allows rows
          to span blocks
          TableA Block n
        End of TableA Extent 1
        more extents here all for TableA
      (end of segment 1 contents)
      (segment 2 contents)
        TableZ Extent 5
          blocks here
        End of TableZ Extent 5
        more extents here, all for tableZ
      (end of segment 2 contents)
        and so on
      (more segments belonging to various tables)
    end of Tablespace TablespaceWithManyTables
    On the other hand, if the tablespace hosts only one table, the layout will be:
    Tablespace TablespaceExclusiveForTableA
      (segment 1 contents)
        TableA Extent 1
          TableA Block 1
          TableA Block 2
          Fragmentation may happen in these blocks or
          even across blocks because Oracle allows rows
          to span blocks
          TableA Block n
        End of TableA Extent 1
        another extent for TableA
      (end of segment 1 contents)
      (segment 2 contents)
        TableA Extent 5
          blocks here
        End of TableA Extent 5
        more extents for TableA
      (end of segment 2 contents)
      and so on
      (more segments belonging to TableA)
    end of Tablespace TablespaceExclusiveForTableA
    The fragmentation you mentioned takes place in both cases. In the first case, regardless of fragmentation, some segments don't belong to the table that is being serially scanned, therefore they have to be skipped over at the OS level. In the second case, since all the extents belong to the same table, they can be read serially at the OS level. I realize that in that case the segments may not be read in the "right" sequence, but they don't have to be, because they can be served to the client app in sequence.
    It is because of this that, I thought that if a particular table is mostly read serially, there might be a performance benefit (and also less work for Oracle) to dedicate a tablespace to it.
    I can't wait to see what you think of this :)
    John.

  • Full Table Scans and LRU

    Hello,
    In a full table scan I understand that the memory block used for a newly read table block is placed at the end of the LRU.
    When the second table block is read, is the same memory block replaced?
    What I am asking basically is whether for a full table scan only one block in the data buffer is ever used, with the same single block being recycled for the entire content of the table.
    Kind regards,
    Peter Strauss

    Hi Fidel,
    > In oracle 10g it changes a little the behavior. It is
    > recommended not to set MULTIBLOCK_READ_COUNT. You
    > calculate system statistics and Oracle "decides" the
    > <i>best </i>value.
    Take care... oracle 10gR2 uses the system statistic values to calculate the costs (= execution plan) including the i/o statistics but for the multiple i/o it uses the parameter DB_FILE_MULTIBLOCK_READ_COUNT.
    So the result is: For calculating it uses the system statistic and for the work itself it uses DB_FILE_MULTIBLOCK_READ_COUNT (if set).
    http://jonathanlewis.wordpress.com/2007/05/20/system-stats-strategy/
    For the LRU thing ... oracle has a nice explanation:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/memory.htm
    Regards
    Stefan
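    Gathering the system statistics Stefan refers to is done with DBMS_STATS; a sketch (the 30-minute window is an example value, to be run during representative load):

    ```sql
    EXEC DBMS_STATS.GATHER_SYSTEM_STATS(gathering_mode => 'INTERVAL', interval => 30);

    -- Inspect what was collected
    SELECT pname, pval1 FROM sys.aux_stats$ WHERE sname = 'SYSSTATS_MAIN';
    ```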

  • "db file scattered read" too high and Query going for full table scan-Why ?

    Hi,
    I have a big table of around 200 MB with an index on it.
    In my query the where clause should be able to use the index. I am neither using any NOT NULL condition nor applying any function to the indexed fields.
    Still my query is not using the index; it is going for a full table scan.
    Also the statspack report is showing the
    "db file scattered read" too high.
    Can any body help and suggest me why this is happenning.
    Also tell me the possible solution for it.
    Thanks
    Arun Tayal

    "db file scattered read" are physical reads/multi block reads. This wait occurs when the session reading data blocks from disk and writing into the memory.
    Take the execution plan of the query and see what is wrong and why the index is not being used.
    However, FTS is not always bad. By the way, what are your db_block_size and db_file_multiblock_read_count values?
    If those values are set too high, the optimizer will always favour FTS, thinking that reading multiple blocks is faster than single-block reads (index scans).
    Don't just note that Oracle is not using the index; find out why it is not using it. Use the INDEX hint to force the optimizer to use the index, then take the execution plan with/without the index and compare the cardinality, cost and, of course, logical reads.
    Jaffar
    Message was edited by:
    The Human Fly
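    To check the two parameters Jaffar mentions, and to compare the forced-index plan, something like this (table, index and column names are placeholders):

    ```sql
    SHOW PARAMETER db_block_size
    SHOW PARAMETER db_file_multiblock_read_count

    EXPLAIN PLAN FOR
      SELECT /*+ INDEX(t my_index) */ * FROM my_table t WHERE my_col = :v;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    ```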

  • Simple Query in Oracle Linked Table in MS Access causes full table scan.

    I am running a very simple query in MS ACCESS to a linked Oracle table as follows:
    Select *
    From EXPRESS_SERVICE_EVENTS --(the linked table name refers to EXPRESS.SERVICE_EVENTS)
    Where performed > MyDate()
    or
    Select *
    From EXPRESS_SERVICE_EVENTS --(the linked table name refers to EXPRESS.SERVICE_EVENTS)
    Where performed > [Forms]![MyForm]![Date1]
    We have over 50 machines and this query runs fine on over half of these, using an Oracle Index on the "performed" field. Running exactly the same thing on the other machines causes a full table scan, therefore ignoring the Index (all machines access the same Access DB).
    Strangely, if we write the query as follows:
    Select *
    From EXPRESS_SERVICE_EVENTS
    Where performed > #09/04/2009 08:00#
    it works fast everywhere!
    Any help on this 'phenomenon' would be appreciated.
    Things we've done:
    Checked regional settings, ODBC driver settings, MS Access settings (as in Tools->Options); we have the latest XP and Office service packs, and re-linked all Access tables on both the slow and fast machines independently.

    Primarily, thanks gdarling for your reply. This solved our problem.
    Just a small note to those who may be using this thread.
    Although this might not be the reason, my PC had Oracle 9iR2 installed with Administrative Tools, where user machines had the same thing installed but as a Runtime installation. For some reason, my PC did not have 'bind date' etc. as an option in the ODBC workarounds, but user machines did have this workaround option. Strangely, although I did not have the option, my (ODBC) query was running as expected, but user queries were not.
    When we set the workaround checkbox accordingly, the queries then run as expected (fast).
    Once again,
    Thanks
