Tune Full Table Scan

Hi all,
I have a SQL statement that uses a full table scan, since many rows will be selected. How can I further tune this SQL? Can I speed up the full table scan?
Regards
David

One of the first things you can do to optimize a query is to analyze its execution plan. Example:
SQL>
SQL> explain plan for
  2  select sysdate from dual;
Explained.
SQL> set linesize 400
SQL> desc dbms_xplan
FUNCTION DISPLAY RETURNS DBMS_XPLAN_TYPE_TABLE
Argument Name                  Type                    In/Out Default?
TABLE_NAME                     VARCHAR2                IN     DEFAULT
STATEMENT_ID                   VARCHAR2                IN     DEFAULT
FORMAT                         VARCHAR2                IN     DEFAULT
SQL>
SQL>
SQL> select * from table( dbms_xplan.display );
PLAN_TABLE_OUTPUT
| Id  | Operation            |  Name       | Rows  | Bytes | Cost  |
|   0 | SELECT STATEMENT     |             |       |       |       |
|   1 |  TABLE ACCESS FULL   | DUAL        |       |       |       |
Note: rule based optimization
9 rows selected.
SQL>
Joel Pérez
http://otn.oracle.com/experts
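If the optimizer's choice of a full scan is correct (because a large fraction of the rows really is needed), the scan itself can sometimes be made faster rather than avoided. A minimal sketch, assuming a hypothetical table BIG_TABLE and that your edition and server allow parallel query; the degree of 4 is only an example:

SQL> -- how many blocks one multiblock read may fetch (relevant only to full scans)
SQL> show parameter db_file_multiblock_read_count
SQL> -- let several parallel slaves divide the scan between them
SQL> select /*+ FULL(t) PARALLEL(t 4) */ count(*)
  2  from   big_table t;

Whether this helps depends on spare CPU and I/O bandwidth; on a small single-disk system the slaves mostly queue behind one another.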

Similar Messages

  • Why is a full index scan faster than a full table scan?

    Hi friends,
    In the WHERE clause of a query, if we reference a column that has an index on it, then Oracle uses the index to search for data rather than a TABLE ACCESS FULL operation.
    Why is index searching faster?

    Sometimes it is faster to use an index and sometimes it is faster to use a full table scan. If your statistics are up to date, Oracle is far more likely to get it right. If the query can be satisfied entirely from the index, then an index scan will almost always be faster, as there are fewer blocks to read in the index than there would be if the table itself were scanned. However, if the query must extract data from the table because that data is not in the index, then the index scan will be faster only if a small percentage of the rows are to be returned. Consider the case of an index where 40% of the rows are returned. Assume the index values are distributed evenly among the data blocks, and that 10 rows fit in each data block, so 4 of the 10 rows in a block match the condition. Then the average data block will be fetched about 4 times, since most of the time adjacent index entries will not be in the same block, so the number of single-datablock fetches will be about 4 times the number of datablocks. Compare this to a full table scan that does multiblock reads: far fewer reads are required to read the entire table. Though it depends on the number of rows per block, a general rule is that any query returning more than about 10% of a table is faster NOT using an index.
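    To see this crossover on your own data you can cost both access paths for the same predicate and compare the plans. A minimal sketch, assuming a hypothetical table ORDERS with an index ORDERS_STATUS_IDX on STATUS; the hints only force the access path for the comparison:
    explain plan for
      select /*+ INDEX(o orders_status_idx) */ * from orders o where status = 'SHIPPED';
    select * from table(dbms_xplan.display);
    explain plan for
      select /*+ FULL(o) */ * from orders o where status = 'SHIPPED';
    select * from table(dbms_xplan.display);
    If 'SHIPPED' matches a large fraction of the rows, the full-scan plan will normally report the lower cost, which is exactly the trade-off described above.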

  • FASTER THROUGHPUT ON FULL TABLE SCAN

    Product : ORACLE SERVER
    Date written : 1995-04-10
    Subject: Faster throughput on full table scans
    db_file_multiblock_read_count only affects the performance of full table scans.
    Oracle has a maximum I/O size of 64 KBytes, hence db_block_size *
    db_file_multiblock_read_count must be less than or equal to 64 KBytes.
    If your query is really doing an index range scan then the performance
    of full scans is irrelevant. In order to improve the performance of this
    type of query it is important to reduce the number of blocks that
    the 'interesting' part of the index is contained within.
    Obviously the db_block_size has the most impact here.
    Historically Informix has not been able to modify their database block size,
    and has had a fixed 2KB block.
    On most Unix platforms Oracle can use up to 8KBytes.
    (Some eg: Sequent allow 16KB).
    This means that for the same size of B-tree index, Oracle with
    an 8KB block size can read its contents in a quarter of the time that
    Informix with a 2KB block could.
    You should also consider whether the PCTFREE value used for your index is
    appropriate. If it is too large then you will be wasting space
    in each index block. (It's too large IF you are not going to get any
    entry size extension OR you are not going to get any new rows for existing
    index values. NB: this is usually only a real consideration for large indexes - 10,000 entries is small.)
    db_file_simultaneous_writes has no direct relevance to index re-balancing.
    (PS: In the U.K. we benchmarked against Informix, Sybase, Unify and
    HP/Allbase for the database server application that HP uses internally to
    monitor and control its tape drive manufacturing lines. They chose
    Oracle because: we outperformed Informix;
    Sybase was too slow AND too unreliable;
    Unify was short on functionality and SLOW;
    HP/Allbase couldn't match the availability requirements and wasn't as functional.
    Informix had problems demonstrating the ability to do hot backups without
    severely affecting the system throughput.
    HP benchmarked all DB vendors on both 9000/800 and 9000/700 machines with
    different disks (ie: HP-IB and SCSI). Oracle came out ahead in all
    configurations.
    NNB: It's always worth throwing in a simulated system failure whilst the
    benchmark is in progress. Informix has a history of not coping gracefully.
    That is they usually need some manual intervention to perform the database
    recovery.)
    I have a prospective client who is running a stripped-down, souped-up version of
    Informix with no catalytic converter. One of their queries boils down to an
    Index Range Scan on 10000 records. How can I achieve better throughput
    on a single drive single CPU machine (HP/UX) without using raw devices.
    I had heard rebuilding the database with a block size factor greater than
    the OS block size would yield better performance. Also I tried changing
    the db_file_multiblock_read_count to 32 without much improvement.
    Adjusting the db_writers to two did not help either.     
    Also, will adjusting db_file_simultaneous_writes help with
    the maintenance of an index during rebalancing operations?
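    For reference, the multiblock read size discussed above can be checked (and, in current releases, changed at session level); a minimal sketch, the value 16 being purely illustrative:
    show parameter db_file_multiblock_read_count
    select value from v$parameter where name = 'db_block_size';
    -- one multiblock read is roughly db_block_size * db_file_multiblock_read_count bytes
    alter session set db_file_multiblock_read_count = 16;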

    2) If CBO, how are the stats collected?
    Daily (tables with less than millions of rows) and weekly (all tables).
    There's no need to collect stats so frequently unless it's absolutely necessary, e.g. you have massive updates on the tables daily or weekly.
    It will help if you can post your sample explain plan and query.
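    As a reference point for the stats question, a single table is normally refreshed with DBMS_STATS; a minimal sketch with a hypothetical owner and table name:
    begin
      dbms_stats.gather_table_stats(
        ownname          => 'APP_OWNER',   -- hypothetical schema
        tabname          => 'BIG_TABLE',   -- hypothetical table
        estimate_percent => dbms_stats.auto_sample_size,
        cascade          => true);         -- gather index stats as well
    end;
    /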

  • What is meant by index range scan and fast full scan

    What is meant by the following execution plan operations?
    1)Table access by index rowid
    2) Index range scan
    3) Index fast full scan
    4) global index by rowid
    ..etc
    Where can I get this information? In what situations does the CBO take these paths? Can you please give me a link where I can find all of these? I read about them a long time ago but am not able to recollect.
    Thanks
    Anand

    Oracle® Database Performance Tuning Guide
    10g Release 2 (10.2)
    Part Number B14211-01
    13.5 Understanding Access Paths for the Query Optimizer
    http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14211/optimops.htm#sthref1281
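    A quick way to see these operations against your own data is to explain a few variations of a query on an indexed table; a minimal sketch, assuming a hypothetical table EMP with an index EMP_DEPT_IDX on DEPTNO:
    explain plan for select * from emp where deptno = 10;
    select * from table(dbms_xplan.display);   -- typically INDEX RANGE SCAN plus TABLE ACCESS BY INDEX ROWID
    explain plan for select count(deptno) from emp;
    select * from table(dbms_xplan.display);   -- typically INDEX FAST FULL SCAN, since only indexed data is needed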

  • Select statement in a function does Full Table Scan

    All,
    I have been coding a stored procedure that writes 38K rows in less than a minute. If I add another column which requires a call to a package and 4 functions within that package, it runs for about 4 hours. I have confirmed that, due to problems in one of the functions, the code does full table scans. The package and all of its functions were written by other contractors who are long gone.
    Please note that case_number_in (VARCHAR2) and effective_date_in (DATE) are parameters sent to the problem function and I have verified through TOAD’s debugger that their values are correct.
    The table named ps2_benefit_register has over 40 million rows, but case_number is indexed on that table.
    The table named ps1_case_fs has more than 20 million rows and also has an index on case_number.
    Select #1 – causes a full table scan; the procedure runs and writes the 38K rows in a couple of hours.
    {case}
    SELECT max(a2.application_date)
    INTO l_app_date
    FROM dwfssd.ps2_benefit_register a1, dwfssd.ps2_case_fs a2
    WHERE a2.case_number = case_number_in and
    a1.case_number = a2.case_number and
    a2.application_date <= effective_date_in and
    a1.DOCUMENT_TYPE = 'F';
    {case}
    Select #2 – hard-coding the values makes the code write the same 38K rows in a few minutes.
    {case}
    SELECT max(a2.application_date)
    INTO l_app_date
    FROM dwfssd.ps2_benefit_register a1, dwfssd.ps2_case_fs a2
    WHERE a2.case_number = 'A006438' and
    a1.case_number = a2.case_number and
    a2.application_date <= '01-Apr-2009' and
    a1.DOCUMENT_TYPE = 'F';
    {case}
    Why does using the values from the passed parameters in the first select statement cause a full table scan?
    Thank you for your help,
    Seyed

    Hello Dan,
    Thank you for your input. The function is not deterministic; therefore, I am providing you with the explain plan. By version number, if you are referring to the database version, we are running 10g.
    PLAN_TABLE_OUTPUT
    Plan hash value: 2132048964
    | Id  | Operation                     | Name                    | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | SELECT STATEMENT              |                         |   324K|    33M|  3138   (5)| 00:00:38 |       |       |
    |*  1 |  HASH JOIN                    |                         |   324K|    33M|  3138   (5)| 00:00:38 |       |       |
    |   2 |   BITMAP CONVERSION TO ROWIDS |                         |     3 |     9 |     1   (0)| 00:00:01 |       |       |
    |*  3 |    BITMAP INDEX FAST FULL SCAN| IDX_PS2_ACTION_TYPES    |       |       |            |          |       |       |
    |   4 |   PARTITION RANGE ITERATOR    |                         |   866K|    87M|  3121   (4)| 00:00:38 |   154 |   158 |
    |   5 |    TABLE ACCESS FULL          | PS2_FS_TRANSACTION_FACT |   866K|    87M|  3121   (4)| 00:00:38 |   154 |   158 |
    Predicate Information (identified by operation id):
       1 - access("AL1"."ACTION_TYPE_ID"="AL2"."ACTION_TYPE_ID")
       3 - filter("AL2"."ACTION_TYPE"='1' OR "AL2"."ACTION_TYPE"='2' OR "AL2"."ACTION_TYPE"='S')
    Thank you very much,
    Seyed
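    (A hedged suggestion for narrowing this down: compare the plan actually used at run time for the bind-variable version against the literal version, rather than relying on EXPLAIN PLAN alone; an implicit datatype conversion on the indexed column would show up in the Predicate Information section. The SQL_ID below is only a placeholder.)
    -- locate the statement after running the slow version once
    select sql_id, child_number, plan_hash_value
    from   v$sql
    where  sql_text like 'SELECT MAX(A2.APPLICATION_DATE)%';
    -- show the plan that was really executed, including access/filter predicates
    select * from table(dbms_xplan.display_cursor('&sql_id', 0, 'TYPICAL'));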
                                                             

  • Serial table scan with direct path read compared to db file scattered read

    Hi,
    The environment
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit
    8K block size
    db_file_multiblock_read_count is 128
    show sga
    Total System Global Area 1.6702E+10 bytes
    Fixed Size                  2219952 bytes
    Variable Size            7918846032 bytes
    Database Buffers         8724152320 bytes
    Redo Buffers               57090048 bytes
    16GB of SGA with 8GB of db buffer cache.
    -- database is built on Solid State Disks
    -- SQL trace and wait events
    DBMS_MONITOR.SESSION_TRACE_ENABLE ( waits=>true )
    -- The underlying table is called tdash. It has 1.7 Million rows based on data in all_objects. NO index
    TABLE_NAME                             Rows Table Size/MB      Used/MB    Free/MB
    TDASH                             1,729,204        15,242       15,186         56
    TABLE_NAME                     Allocated blocks Empty blocks Average space/KB Free list blocks
    TDASH                                 1,943,823        7,153              805                0
    Objectives
    To show that, when serial scans are performed, a database built on Solid State Disks (SSD) gains far less over magnetic disks (HDD) than it does for random reads with index scans.
    Approach
    We want to read the first 100 rows of the tdash table randomly into the buffer, taking account of the wait events and wait times generated. The idea is that on SSD the wait times will be better than on HDD, but not by that much given the serial nature of table scans.
    The code used
    ALTER SESSION SET TRACEFILE_IDENTIFIER = 'test_with_tdash_ssdtester_noindex';
    DECLARE
            type array is table of tdash%ROWTYPE index by binary_integer;
            l_data array;
            l_rec tdash%rowtype;
    BEGIN
            SELECT
                    a.*
                    ,RPAD('*',4000,'*') AS PADDING1
                    ,RPAD('*',4000,'*') AS PADDING2
            BULK COLLECT INTO
            l_data
            FROM ALL_OBJECTS a;
            DBMS_MONITOR.SESSION_TRACE_ENABLE ( waits=>true );
            FOR rs IN 1 .. 100
            LOOP
                    BEGIN
                            SELECT * INTO l_rec FROM tdash WHERE object_id = l_data(rs).object_id;
                    EXCEPTION
                      WHEN NO_DATA_FOUND THEN NULL;
                    END;
            END LOOP;
    END;
    /
    The server is rebooted prior to any tests.
    When run with the defaults, the optimizer (although some attribute this to the execution engine) chooses direct path read into the PGA in preference to db file scattered read.
    With this choice it takes 6,520 seconds to complete the query. The results are shown below
    SQL ID: 78kxqdhk1ubvq
    Plan Hash: 1148949653
    SELECT *
    FROM
    TDASH WHERE OBJECT_ID = :B1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.01       0.00          2         47          0           0
    Execute    100      0.00       0.00          1         51          0           0
    Fetch      100     10.88    6519.89  194142802  194831012          0         100
    total      201     10.90    6519.90  194142805  194831110          0         100
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 96  (SSDTESTER)   (recursive depth: 1)
    Rows     Row Source Operation
          1  TABLE ACCESS FULL TDASH (cr=1948310 pr=1941430 pw=0 time=0 us cost=526908 size=8091 card=1)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
          1   TABLE ACCESS   MODE: ANALYZED (FULL) OF 'TDASH' (TABLE)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      Disk file operations I/O                        3        0.00          0.00
      db file sequential read                         2        0.00          0.00
      direct path read                          1517504        0.05       6199.93
      asynch descriptor resize                      196        0.00          0.00
    DECLARE
            type array is table of tdash%ROWTYPE index by binary_integer;
            l_data array;
            l_rec tdash%rowtype;
    BEGIN
            SELECT
                    a.*
                    ,RPAD('*',4000,'*') AS PADDING1
                    ,RPAD('*',4000,'*') AS PADDING2
            BULK COLLECT INTO
            l_data
            FROM ALL_OBJECTS a;
            DBMS_MONITOR.SESSION_TRACE_ENABLE ( waits=>true );
            FOR rs IN 1 .. 100
            LOOP
                    BEGIN
                            SELECT * INTO l_rec FROM tdash WHERE object_id = l_data(rs).object_id;
                    EXCEPTION
                      WHEN NO_DATA_FOUND THEN NULL;
                    END;
            END LOOP;
    END;
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        0      0.00       0.00          0          0          0           0
    Execute      1      3.84       4.03        320      48666          0           1
    Fetch        0      0.00       0.00          0          0          0           0
    total        1      3.84       4.03        320      48666          0           1
    Misses in library cache during parse: 0
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 96  (SSDTESTER)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       1        0.00          0.00
      SQL*Net message from client                     1        0.00          0.00
    SQL ID: 9babjv8yq8ru3
    Plan Hash: 0
    BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           1
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.00       0.00          0          0          0           1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 96  (SSDTESTER)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       1        0.00          0.00
      SQL*Net message from client                     1        0.00          0.00
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      2      3.84       4.03        320      48666          0           2
    Fetch        0      0.00       0.00          0          0          0           0
    total        3      3.84       4.03        320      48666          0           2
    Misses in library cache during parse: 0
    Misses in library cache during execute: 1
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       2        0.00          0.00
      SQL*Net message from client                     2        0.00          0.00
      log file sync                                   1        0.00          0.00
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        9      0.01       0.00          2         47          0           0
    Execute    129      0.01       0.00          1         52          2           1
    Fetch      140     10.88    6519.89  194142805  194831110          0         130
    total      278     10.91    6519.91  194142808  194831209          2         131
    Misses in library cache during parse: 9
    Misses in library cache during execute: 8
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                         5        0.00          0.00
      Disk file operations I/O                        3        0.00          0.00
      direct path read                          1517504        0.05       6199.93
      asynch descriptor resize                      196        0.00          0.00
      102  user  SQL statements in session.
       29  internal SQL statements in session.
      131  SQL statements in session.
        1  statement EXPLAINed in this session.
    Trace file: mydb_ora_16394_test_with_tdash_ssdtester_noindex.trc
    Trace file compatibility: 11.1.0.7
    Sort options: default
           1  session in tracefile.
         102  user  SQL statements in trace file.
          29  internal SQL statements in trace file.
         131  SQL statements in trace file.
          11  unique SQL statements in trace file.
           1  SQL statements EXPLAINed using schema:
               ssdtester.plan_table
                 Schema was specified.
                 Table was created.
                 Table was dropped.
    1531657  lines in trace file.
        6520  elapsed seconds in trace file.
    I then force the query not to use direct path read by invoking
    ALTER SESSION SET EVENTS '10949 trace name context forever, level 1';  -- No direct path read
    In this case the optimizer uses db file scattered read predominantly and the query takes 4,299 seconds to finish, which is around 34% faster than using direct path read (the default).
    The report is shown below
    SQL ID: 78kxqdhk1ubvq
    Plan Hash: 1148949653
    SELECT *
    FROM
    TDASH WHERE OBJECT_ID = :B1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          2         47          0           0
    Execute    100      0.00       0.00          2         51          0           0
    Fetch      100    143.44    4298.87  110348670  194490912          0         100
    total      201    143.45    4298.88  110348674  194491010          0         100
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 96  (SSDTESTER)   (recursive depth: 1)
    Rows     Row Source Operation
          1  TABLE ACCESS FULL TDASH (cr=1944909 pr=1941430 pw=0 time=0 us cost=526908 size=8091 card=1)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
          1   TABLE ACCESS   MODE: ANALYZED (FULL) OF 'TDASH' (TABLE)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      Disk file operations I/O                        3        0.00          0.00
      db file sequential read                    129759        0.01         17.50
      db file scattered read                    1218651        0.05       3770.02
      latch: object queue header operation            2        0.00          0.00
    DECLARE
            type array is table of tdash%ROWTYPE index by binary_integer;
            l_data array;
            l_rec tdash%rowtype;
    BEGIN
            SELECT
                    a.*
                    ,RPAD('*',4000,'*') AS PADDING1
                    ,RPAD('*',4000,'*') AS PADDING2
            BULK COLLECT INTO
            l_data
            FROM ALL_OBJECTS a;
            DBMS_MONITOR.SESSION_TRACE_ENABLE ( waits=>true );
            FOR rs IN 1 .. 100
            LOOP
                    BEGIN
                            SELECT * INTO l_rec FROM tdash WHERE object_id = l_data(rs).object_id;
                    EXCEPTION
                      WHEN NO_DATA_FOUND THEN NULL;
                    END;
            END LOOP;
    END;
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        0      0.00       0.00          0          0          0           0
    Execute      1      3.92       4.07        319      48625          0           1
    Fetch        0      0.00       0.00          0          0          0           0
    total        1      3.92       4.07        319      48625          0           1
    Misses in library cache during parse: 0
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 96  (SSDTESTER)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       1        0.00          0.00
      SQL*Net message from client                     1        0.00          0.00
    SQL ID: 9babjv8yq8ru3
    Plan Hash: 0
    BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           1
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.00       0.00          0          0          0           1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 96  (SSDTESTER)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       1        0.00          0.00
      SQL*Net message from client                     1        0.00          0.00
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      2      3.92       4.07        319      48625          0           2
    Fetch        0      0.00       0.00          0          0          0           0
    total        3      3.92       4.07        319      48625          0           2
    Misses in library cache during parse: 0
    Misses in library cache during execute: 1
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       2        0.00          0.00
      SQL*Net message from client                     2        0.00          0.00
      log file sync                                   1        0.00          0.00
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        9      0.01       0.00          2         47          0           0
    Execute    129      0.00       0.00          2         52          2           1
    Fetch      140    143.44    4298.87  110348674  194491010          0         130
    total      278    143.46    4298.88  110348678  194491109          2         131
    Misses in library cache during parse: 9
    Misses in library cache during execute: 8
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                    129763        0.01         17.50
      Disk file operations I/O                        3        0.00          0.00
      db file scattered read                    1218651        0.05       3770.02
      latch: object queue header operation            2        0.00          0.00
      102  user  SQL statements in session.
       29  internal SQL statements in session.
      131  SQL statements in session.
        1  statement EXPLAINed in this session.
    Trace file: mydb_ora_26796_test_with_tdash_ssdtester_noindex_NDPR.trc
    Trace file compatibility: 11.1.0.7
    Sort options: default
           1  session in tracefile.
         102  user  SQL statements in trace file.
          29  internal SQL statements in trace file.
         131  SQL statements in trace file.
          11  unique SQL statements in trace file.
           1  SQL statements EXPLAINed using schema:
               ssdtester.plan_table
                 Schema was specified.
                 Table was created.
                 Table was dropped.
    1357958  lines in trace file.
        4299  elapsed seconds in trace file.
    I note that there are 1,517,504 waits on direct path read with a total time of nearly 6,200 seconds. In comparison, with no direct path read there are 1,218,651 db file scattered read waits with a total wait time of 3,770 seconds. My understanding is that direct path read can use single- or multi-block reads into the PGA, whereas db file scattered reads do multi-block reads into multiple discontiguous SGA buffers. So is it possible, given the higher number of direct path waits, that the optimizer cannot do multi-block reads (contiguous buffers within the PGA) and hence has to revert to single-block reads, which results in more calls and more waits?
    I'd appreciate any advice, and apologies for being long-winded.
    Thanks,
    Mich

    Hi Charles,
    I am doing your tests for t1 table using my server.
    Just to clarify my environment is:
    I did the whole of this test on my server. My server has an i7-980 hex-core processor with 24GB of RAM and 1TB of SATA II HDD for test/scratch, backup and archive. The operating system is RHEL 5.2 64-bit installed on a 120GB OCZ Vertex 3 Series SATA III 2.5-inch Solid State Drive.
    Oracle version installed was 11g Enterprise Edition Release 11.2.0.1.0 -64bit. The binaries were created on HDD. Oracle itself was configured with 16GB of SGA, of which 7.5GB was allocated to Variable Size and 8GB to Database Buffers.
    For Oracle tablespaces including SYS, SYSTEM, SYSAUX, TEMPORARY, UNDO and redo logs, I used file systems on a 240GB OCZ Vertex 3 Series SATA III 2.5-inch Solid State Drive. With 4K Random Read at 53,500 IOPS and 4K Random Write at 56,000 IOPS (manufacturer’s figures), this drive is probably one of the fastest commodity SSDs using NAND flash memory with Multi-Level Cell (MLC). Now my T1 table, created as per your script, has the following rows and blocks (8K block size):
    SELECT
      NUM_ROWS,
      BLOCKS
    FROM
      USER_TABLES
    WHERE
      TABLE_NAME='T1';
      NUM_ROWS     BLOCKS
      12000000     178952
    which is pretty identical to yours.
    Then I run the query as below:
    set timing on
    ALTER SESSION SET TRACEFILE_IDENTIFIER = 'test_bed_T1';
    ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';
    SELECT
            COUNT(*)
    FROM
            T1
    WHERE
            RN=1;
    which gives
      COUNT(*)
         60000
    Elapsed: 00:00:05.29
    tkprof output shows
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        2      0.02       5.28     178292     178299          0           1
    total        4      0.02       5.28     178292     178299          0           1
    Compared to yours:
    Fetch        2      0.60       4.10     178493     178498          0           1
    It appears to me that my CPU utilisation is better by an order of magnitude, but my elapsed time is worse!
    Now, the way I see it, elapsed time = CPU time + wait time. Further down I have:
    Rows     Row Source Operation
          1  SORT AGGREGATE (cr=178299 pr=178292 pw=0 time=0 us)
      60000   TABLE ACCESS FULL T1 (cr=178299 pr=178292 pw=0 time=42216 us cost=48697 size=240000 card=60000)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
          1   SORT (AGGREGATE)
      60000    TABLE ACCESS   MODE: ANALYZED (FULL) OF 'T1' (TABLE)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       3        0.00          0.00
      SQL*Net message from client                     3        0.00          0.00
      Disk file operations I/O                        3        0.00          0.00
      direct path read                             1405        0.00          4.68
    Your direct path reads are
      direct path read                             1404        0.01          3.40
    which indicates to me that you have faster disks than mine, whereas it sounds like my CPU is faster than yours.
    With db file scattered read I get
    Elapsed: 00:00:06.95
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        2      1.22       6.93     178293     178315          0           1
    total        4      1.22       6.94     178293     178315          0           1
    Rows     Row Source Operation
          1  SORT AGGREGATE (cr=178315 pr=178293 pw=0 time=0 us)
      60000   TABLE ACCESS FULL T1 (cr=178315 pr=178293 pw=0 time=41832 us cost=48697 size=240000 card=60000)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
          1   SORT (AGGREGATE)
      60000    TABLE ACCESS   MODE: ANALYZED (FULL) OF 'T1' (TABLE)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       2        0.00          0.00
      Disk file operations I/O                        3        0.00          0.00
      db file sequential read                         1        0.00          0.00
      db file scattered read                       1414        0.00          5.36
      SQL*Net message from client                     2        0.00          0.00
    compared to your
      db file scattered read                       1415        0.00          4.16
    On the face of it, with this test mine shows a 21% improvement with direct path read compared to db file scattered read. So now I can go back and revisit my original test results:
    First default with direct path read
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.01       0.00          2         47          0           0
    Execute    100      0.00       0.00          1         51          0           0
    Fetch      100     10.88    6519.89  194142802  194831012          0         100
    total      201     10.90    6519.90  194142805  194831110          0         100
    CPU ~ 11 sec, elapsed ~ 6520 sec
    wait stats
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      direct path read                          1517504        0.05       6199.93
    roughly 0.004 sec for each I/O
    Now with db file scattered read I get:
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          2         47          0           0
    Execute    100      0.00       0.00          2         51          0           0
    Fetch      100    143.44    4298.87  110348670  194490912          0         100
    total      201    143.45    4298.88  110348674  194491010          0         100
    CPU ~ 143 sec, elapsed ~ 4299 sec
    and waits:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                    129759        0.01         17.50
      db file scattered read                    1218651        0.05       3770.02
    roughly 17.5/129759 = .00013 sec for single-block I/O and 3770.02/1218651 = .0030 for multi-block I/O
    Now my theory is that the improvement comes from the large buffer cache (8320MB) inducing it to do some read-aheads (async pre-fetch). Read-aheads are like quasi logical I/Os and they will be cheaper than physical I/O. When there is a large buffer cache and read-aheads can be done, is using the buffer cache then a better choice than the PGA?
    Regards,
    Mich
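    (For anyone re-running this comparison, the two switches used above gathered in one place; the underscore parameter is undocumented and intended only for testing:)
    -- disable serial direct path reads for the session (the event used above)
    alter session set events '10949 trace name context forever, level 1';
    -- alternative, undocumented and subject to change between releases:
    -- alter session set "_serial_direct_read" = never;
    -- alter session set "_serial_direct_read" = auto;   -- revert to the default behaviour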

  • Simple Query in Oracle Linked Table in MS Access causes full table scan.

    I am running a very simple query in MS ACCESS to a linked Oracle table as follows:
    Select *
    From EXPRESS_SERVICE_EVENTS --(the linked table name refers to EXPRESS.SERVICE_EVENTS)
    Where performed > MyDate()
    or
    Select *
    From EXPRESS_SERVICE_EVENTS --(the linked table name refers to EXPRESS.SERVICE_EVENTS)
    Where performed > [Forms]![MyForm]![Date1]
    We have over 50 machines and this query runs fine on over half of these, using an Oracle Index on the "performed" field. Running exactly the same thing on the other machines causes a full table scan, therefore ignoring the Index (all machines access the same Access DB).
    Strangely, if we write the query as follows:
    Select *
    From EXPRESS_SERVICE_EVENTS
    Where performed > #09/04/2009 08:00#
    it works fast everywhere!
    Any help on this 'phenomenon' would be appreciated.
    Things we've done:
    Checked regional settings, ODBC driver settings, MS Access settings (as in Tools->Options); we have the latest XP and Office service packs, and have re-linked all Access tables on both the slow and fast machines independently.

    Primarily, thanks gdarling for your reply. This solved our problem.
    Just a small note to those who may be using this thread.
    Although this might not be the reason, my PC had Oracle 9iR2 installed with Administrative Tools, whereas user machines had the same thing installed but using the Runtime installation. For some reason, my PC did not have 'bind date' etc. as an option in the workarounds, but user machines did have this workaround option. Strangely, although I did not have the option, my (ODBC) query was running as expected, but user queries were not.
    When we set the workaround checkbox accordingly, the queries then ran as expected (fast).
    Once again,
    Thanks

  • Tables in subquery resulting in full table scans

    Hi,
    This is related to a P1 bug, 13009447. The customer recently upgraded to 10g. The customer has reported this type of problem for the second time.
    Problem Description:
    All the tables in the sub-queries are being accessed with full table scans, and hence the query executes for hours.
    Here is the query
    SELECT /*+ PARALLEL*/
    act.assignment_action_id
    , act.assignment_id
    , act.tax_unit_id
    , as1.person_id
    , as1.effective_start_date
    , as1.primary_flag
    FROM pay_payroll_actions pa1
    , pay_population_ranges pop
    , per_periods_of_service pos
    , per_all_assignments_f as1
    , pay_assignment_actions act
    , pay_payroll_actions pa2
    , pay_action_classifications pcl
    , per_all_assignments_f as2
    WHERE pa1.payroll_action_id = :b2
    AND pa2.payroll_id = pa1.payroll_id
    AND pa2.effective_date
    BETWEEN pa1.start_date
    AND pa1.effective_date
    AND act.payroll_action_id = pa2.payroll_action_id
    AND act.action_status IN ('C', 'S')
    AND pcl.classification_name = :b3
    AND pa2.consolidation_set_id = pa1.consolidation_set_id
    AND pa2.action_type = pcl.action_type
    AND nvl (pa2.future_process_mode, 'Y') = 'Y'
    AND as1.assignment_id = act.assignment_id
    AND pa1.effective_date
    BETWEEN as1.effective_start_date
    AND as1.effective_end_date
    AND as2.assignment_id = act.assignment_id
    AND pa2.effective_date
    BETWEEN as2.effective_start_date
    AND as2.effective_end_date
    AND as2.payroll_id = as1.payroll_id
    AND pos.period_of_service_id = as1.period_of_service_id
    AND pop.payroll_action_id = :b2
    AND pop.chunk_number = :b1
    AND pos.person_id = pop.person_id
    AND (
    as1.payroll_id = pa1.payroll_id
    OR pa1.payroll_id IS NULL
    AND NOT EXISTS
    SELECT /*+ PARALLEL*/ NULL
    FROM pay_assignment_actions ac2
    , pay_payroll_actions pa3
    , pay_action_interlocks int
    WHERE int.locked_action_id = act.assignment_action_id
    AND ac2.assignment_action_id = int.locking_action_id
    AND pa3.payroll_action_id = ac2.payroll_action_id
    AND pa3.action_type IN ('P', 'U')
    AND NOT EXISTS
    SELECT /*+ PARALLEL*/
    NULL
    FROM per_all_assignments_f as3
    , pay_assignment_actions ac3
    WHERE :b4 = 'N'
    AND ac3.payroll_action_id = pa2.payroll_action_id
    AND ac3.action_status NOT IN ('C', 'S')
    AND as3.assignment_id = ac3.assignment_id
    AND pa2.effective_date
    BETWEEN as3.effective_start_date
    AND as3.effective_end_date
    AND as3.person_id = as2.person_id
    ORDER BY as1.person_id
    , as1.primary_flag DESC
    , as1.effective_start_date
    , act.assignment_id
    FOR UPDATE OF as1.assignment_id
    , pos.period_of_service_id
    Here is the execution plan for this query. We tried adding hints in the sub-queries to force index usage, but it is still doing full table scans.
    We suspect some DB parameter is causing this issue.
    In the plan:
    - Full table scans on tables in the first sub-query
    PAY_PAYROLL_ACTIONS, PAY_ASSIGNMENT_ACTIONS, PAY_ACTION_INTERLOCKS
    - Full table scans on tables in Second sub-query
    PER_ALL_ASSIGNMENTS_F PAY_ASSIGNMENT_ACTIONS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute     29    398.80    2192.99     238706    4991924       2383           0
    Fetch     1136    378.38    1921.39          0    4820511          0        1108
    total     1166    777.19    4114.38     238706    9812435       2383        1108
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 41 (APPS) (recursive depth: 1)
    Rows Execution Plan
    0 SELECT STATEMENT MODE: ALL_ROWS
    0 FOR UPDATE
    0 PX COORDINATOR
    0 PX SEND (QC (ORDER)) OF ':TQ10009' [:Q1009]
    0 SORT (ORDER BY) [:Q1009]
    0 PX RECEIVE [:Q1009]
    0 PX SEND (RANGE) OF ':TQ10008' [:Q1008]
    0 HASH JOIN (ANTI BUFFERED) [:Q1008]
    0 PX RECEIVE [:Q1008]
    0 PX SEND (HASH) OF ':TQ10006' [:Q1006]
    0 BUFFER (SORT) [:Q1006]
    0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PER_ALL_ASSIGNMENTS_F' (TABLE) [:Q1006]
    0 NESTED LOOPS [:Q1006]
    0 NESTED LOOPS [:Q1006]
    0 NESTED LOOPS [:Q1006]
    0 HASH JOIN (ANTI) [:Q1006]
    0 BUFFER (SORT) [:Q1006]
    0 PX RECEIVE [:Q1006]
    0 PX SEND (HASH) OF ':TQ10002'
    0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PAY_ASSIGNMENT_ACTIONS' (TABLE)
    0 NESTED LOOPS
    0 NESTED LOOPS
    0 NESTED LOOPS
    0 NESTED LOOPS
    0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PAY_PAYROLL_ACTIONS' (TABLE)
    0 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'PAY_PAYROLL_ACTIONS_PK' (INDEX (UNIQUE)
    0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PAY_POPULATION_RANGES_N4' (INDEX)
    0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PER_PERIODS_OF_SERVICE' (TABLE)
    0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PER_PERIODS_OF_SERVICE_N3' (INDEX)
    0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PER_ALL_ASSIGNMENTS_F' (TABLE)
    0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PER_ASSIGNMENTS_N4' (INDEX)
    0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PAY_ASSIGNMENT_ACTIONS_N51' (INDEX)
    0 PX RECEIVE [:Q1006]
    0 PX SEND (HASH) OF ':TQ10005' [:Q1005]
    0 VIEW OF 'VW_SQ_1' (VIEW) [:Q1005]
    0 HASH JOIN [:Q1005]
    0 BUFFER (SORT) [:Q1005]
    0 PX RECEIVE [:Q1005]
    0 PX SEND (BROADCAST) OF ':TQ10000'
    0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PAY_PAYROLL_ACTIONS' (TABLE)
    0 HASH JOIN [:Q1005]
    0 PX RECEIVE [:Q1005]
    0 PX SEND (HASH) OF ':TQ10004' [:Q1004]
    0 PX BLOCK (ITERATOR) [:Q1004]
    0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PAY_ASSIGNMENT_ACTIONS' (TABLE) [:Q1004]
    0 BUFFER (SORT) [:Q1005]
    0 PX RECEIVE [:Q1005]
    0 PX SEND (HASH) OF ':TQ10001'
    0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PAY_ACTION_INTERLOCKS' (TABLE)
    0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PAY_PAYROLL_ACTIONS' (TABLE) [:Q1006]
    0 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'PAY_PAYROLL_ACTIONS_PK' (INDEX (UNIQUE)) [:Q1006]
    0 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'PAY_ACTION_CLASSIFICATIONS_PK' (INDEX (UNIQUE))[:Q1006]
    0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PER_ASSIGNMENTS_F_PK' (INDEX (UNIQUE)) [:Q1006]
    0 PX RECEIVE [:Q1008]
    0 PX SEND (HASH) OF ':TQ10007' [:Q1007]
    0 VIEW OF 'VW_SQ_2' (VIEW) [:Q1007]
    0 FILTER [:Q1007]
    0 HASH JOIN [:Q1007]
    0 BUFFER (SORT) [:Q1007]
    0 PX RECEIVE [:Q1007]
    0 PX SEND (BROADCAST) OF ':TQ10003'
    0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PER_ALL_ASSIGNMENTS_F' (TABLE)
    0 PX BLOCK (ITERATOR) [:Q1007]
    0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PAY_ASSIGNMENT_ACTIONS' (TABLE) [:Q1007]
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    enq: KO - fast object checkpoint 32 0.02 0.12
    os thread startup 8 0.02 0.19
    PX Deq: Join ACK 198 0.00 0.04
    PX Deq Credit: send blkd 167116 1.95 1103.72
    PX Deq Credit: need buffer 327389 1.95 266.30
    PX Deq: Parse Reply 148 0.01 0.03
    PX Deq: Execute Reply 11531 1.95 1901.50
    PX qref latch 23060 0.00 0.60
    db file sequential read 108199 0.17 22.11
    db file scattered read 9272 0.19 51.74
    PX Deq: Table Q qref 78 0.00 0.03
    PX Deq: Signal ACK 1165 0.10 10.84
    enq: PS - contention 73 0.00 0.00
    reliable message 27 0.00 0.00
    latch free 218 0.00 0.01
    latch: session allocation 11 0.00 0.00
    Thanks in advance
    Suresh PV

    Hi,
    Welcome. How does the query perform if you delete all the PARALLEL hints? Most of the waits are related to parallel execution.
    Herald ten Dam
    http://htendam.wordpress.com
    PS. Use "{code}" for showing your code and explain plans, it looks nicer
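    (One way to test this without editing the statement is to switch parallel query off for the session and re-run it; whether a statement-level PARALLEL hint still overrides this can vary, so check the resulting plan:)
    alter session disable parallel query;
    -- re-run the statement, compare the plan and waits, then restore:
    alter session enable parallel query;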

  • SQL Performance causing full table scan

    I have this SQL:
    SELECT DISTINCT
    UPPER (RTRIM (LTRIM (SS.PRESCDEAID))) PRESCRIBER,
    UPPER (RTRIM (LTRIM (SS.NPIPRESCR))) NPI_NUMBER
    FROM
    PBM_SXC_STAGING SS,
    PBM_PHYSICIANS P
    WHERE
    P.PHYSICIAN_ID = SS.PRESCDEAID
    AND P.NPI_NUMBER <> SS.NPIPRESCR
    AND SS.NPIPRESCR <> SS.PRESCDEAID
    Uses this plan:
    SELECT STATEMENT ALL_ROWSCost: 13,843 Bytes: 3,636,232 Cardinality: 106,948                
         4 SORT UNIQUE Cost: 13,843 Bytes: 3,636,232 Cardinality: 106,948           
              3 HASH JOIN Cost: 12,866 Bytes: 3,636,232 Cardinality: 106,948      
                   1 TABLE ACCESS FULL TABLE PBM.PBM_PHYSICIANS Cost: 4,156 Bytes: 17,639,063 Cardinality: 1,356,851
                   2 INDEX FAST FULL SCAN INDEX PBM.SXCSTG_IDX1 Cost: 3,859 Bytes: 43,302,882 Cardinality: 2,062,042
    If I comment out "AND P.NPI_NUMBER <> SS.NPIPRESCR", I get this plan, which uses the PK index (PBM.PBM_PHYSICIAN_PK) that is on P.PHYSICIAN_ID. I do have an index on P.NPI_NUMBER.
    SELECT STATEMENT ALL_ROWSCost: 27,230 Bytes: 64,514,496 Cardinality: 2,016,078                
         4 SORT UNIQUE Cost: 27,230 Bytes: 64,514,496 Cardinality: 2,016,078           
              3 HASH JOIN Cost: 9,617 Bytes: 64,514,496 Cardinality: 2,016,078      
                   1 INDEX FAST FULL SCAN INDEX (UNIQUE) PBM.PBM_PHYSICIAN_PK Cost: 1,035 Bytes: 14,925,361 Cardinality: 1,356,851
                   2 INDEX FAST FULL SCAN INDEX PBM.SXCSTG_IDX1 Cost: 3,859 Bytes: 43,302,882 Cardinality: 2,062,042

    Sorry for the delay, I was out of the office.
    PLAN_TABLE_OUTPUT
    SQL_ID  4j270u8fbhwpu, child number 0
    SELECT /*+ gather_plan_statistics */          DISTINCT          upper
    (rtrim (ltrim (ss.prescdeaid))) prescriber         ,upper (rtrim (ltrim
    (ss.npiprescr))) npi_number FROM pbm_sxc_staging ss     ,pbm_physicians
    p WHERE p.physician_id = ss.prescdeaid   AND p.npi_number !=
    ss.npiprescr   AND ss.npiprescr != ss.prescdeaid
    Plan hash value: 2275909877
    PLAN_TABLE_OUTPUT
    | Id  | Operation              | Name           | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
    |   1 |  HASH UNIQUE           |                |      1 |    125K|     68 |00:00:01.54 |   24466 |  14552 |  1001K|  1001K| 1296K (0)|
    |*  2 |   HASH JOIN            |                |      1 |    125K|   6941 |00:00:01.14 |   24466 |  14552 |    47M|  6159K|   68M (0)|
    |   3 |    TABLE ACCESS FULL   | PBM_PHYSICIANS |      1 |   1341K|   1341K|00:00:00.01 |   14556 |  14552 |       |       |          |
    |*  4 |    INDEX FAST FULL SCAN| SXCSTG_IDX1    |      1 |   1872K|   1887K|00:00:00.01 |    9910 |   0 |          |       |          |
    Predicate Information (identified by operation id):
    PLAN_TABLE_OUTPUT
       2 - access("P"."PHYSICIAN_ID"="SS"."PRESCDEAID")
           filter("P"."NPI_NUMBER"<>"SS"."NPIPRESCR")
       4 - filter("SS"."NPIPRESCR"<>"SS"."PRESCDEAID")
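    (An editorial note, offered as a sketch only: the filter P.NPI_NUMBER <> SS.NPIPRESCR is what forces the optimizer to visit the PBM_PHYSICIANS rows, because NPI_NUMBER is not in the PK index. A composite index carrying both columns, name hypothetical, could let the whole query stay inside indexes, much like the second plan shown above:)
    create index pbm_phys_id_npi_idx
        on pbm_physicians (physician_id, npi_number);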

  • Why does a query not go by index but by FULL TABLE SCAN?

    I have two tables:
    table1 has 1400 rows and more than 30 columns; one of them is named 'site_code', and an index was created on this column;
    table2 has more than 150 rows and 20 columns; its primary key is also 'site_code'.
    The two tables were analysed with dbms_stats.gather_table_stats()...
    When I run the explain for the 2 SQLs below:
    select * from table1 where site_code='XXXXXXXXXX';
    select * from table2 where site_code='XXXXXXXXXX';
    the Oracle explain report certainly shows an 'Index scan'.
    But the problem arises when I try to explain the SQL
    select *
    from table1, table2
    where table1.site_code = table2.site_code
    the explain report shows:
    select .....
    FULL Table1 Scan
    FULL Table2 Scan
    why......

    Nikolay Ivankin wrote:
    BluShadow wrote:
    Nikolay Ivankin wrote:
    Try to use a hint, but I really doubt it will be faster.
    No, using hints should only be advised when investigating an issue, not recommended for production code, as it assumes that, as a developer, you know better than the Oracle optimizer how the data is distributed in the data files, how the data is going to grow and change over time, and how best to access that data for performance etc.
    Yes, you are absolutely right. But aren't we performing such an investigation? ;-)
    The way you wrote it made it sound that a hint would be the solution, not just something for investigation.
    select * from .. always performs a full scan, so limit your query.
    No, select * will not always perform a full scan, that's just selecting all the columns. A select without a where clause, or with a where clause that has low selectivity, will result in full table scans.
    But this is what I meant.
    But not what you said.

  • Query is doing full table scan

    Hi All,
    The query below is doing a full table scan. Many threads from the application trigger this query, each doing a full table scan. Can you please tell me how to improve the performance of this query?
    Env is 11.2.0.3 RAC (4 node). Unique index on VZ_ID, LOGGED_IN. The table row count is 2,501,103.
    Query is :-
    select ccagentsta0_.LOGGED_IN as LOGGED1_404_, ccagentsta0_.VZ_ID as VZ2_404_, ccagentsta0_.ACTIVE as ACTIVE404_, ccagentsta0_.AGENT_STATE as AGENT4_404_,
    ccagentsta0_.APPLICATION_CODE as APPLICAT5_404_, ccagentsta0_.CREATED_ON as CREATED6_404_, ccagentsta0_.CURRENT_ORDER as CURRENT7_404_,
    ccagentsta0_.CURRENT_TASK as CURRENT8_404_, ccagentsta0_.HELM_ID as HELM9_404_, ccagentsta0_.LAST_UPDATED as LAST10_404_, ccagentsta0_.LOCATION as LOCATION404_,
    ccagentsta0_.LOGGED_OUT as LOGGED12_404_, ccagentsta0_.SUPERVISOR_VZID as SUPERVISOR13_404_, ccagentsta0_.VENDOR_NAME as VENDOR14_404_
    from AGENT_STATE ccagentsta0_ where ccagentsta0_.VZ_ID='v790531'  and ccagentsta0_.ACTIVE='Y';
    Table Scan                                                       AGENT_STATE                                                2.366666667
    Table Scan                                                       AGENT_STATE                                                0.3666666667
    Table Scan                                                       AGENT_STATE                                                1.633333333
    Table Scan                                                       AGENT_STATE                                                       0.75
    Table Scan                                                       AGENT_STATE                                                1.866666667
    Table Scan                                                       AGENT_STATE                                                2.533333333
    Table Scan                                                       AGENT_STATE                                                0.5333333333
    Table Scan                                                       AGENT_STATE                                                       1.95
    Table Scan                                                       AGENT_STATE                                                        0.8
    Table Scan                                                       AGENT_STATE                                                0.2833333333
    Table Scan                                                       AGENT_STATE                                                1.983333333
    Table Scan                                                       AGENT_STATE                                                        2.5
    Table Scan                                                       AGENT_STATE                                                1.866666667
    Table Scan                                                       AGENT_STATE                                                1.883333333
    Table Scan                                                       AGENT_STATE                                                        0.9
    Table Scan                                                       AGENT_STATE                                                2.366666667
    But the explain plan shows the query using the index.
    Explain plan output:-
    PLAN_TABLE_OUTPUT
    Plan hash value: 1946142815
    | Id  | Operation                   | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |                 |     1 |   106 |   244   (0)| 00:00:03 |
    |*  1 |  TABLE ACCESS BY INDEX ROWID| AGENT_STATE     |     1 |   106 |   244   (0)| 00:00:03 |
    |*  2 |   INDEX RANGE SCAN          | AGENT_STATE_IDX |   229 |       |     4   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("CCAGENTSTA0_"."ACTIVE"='Y')
       2 - access("CCAGENTSTA0_"."VZ_ID"='v790531')
    The values (VZ_ID) I have given are dummy values picked from the table. I don't get the actual values since the query comes in with bind variables. Please let me know your suggestions on this.
    Thanks,
    Mani

    Hi,
    But I am not getting what the issue is. It's a simple select query and there is an index on one of the leading columns (VZ_ID --- the PK). The explain plan says it is using the index and it only selects a fraction of the rows from the table. Then why is it doing an FTS? Why does the optimizer treat it like a query that should do an FTS?
    The rule-based optimizer would have  picked the plan with the index. The cost-based optimizer, however, is picking the plan with the lowest cost. Apparently, the lowest cost plan is the one with the full table scan. And the optimizer isn't necessarily wrong about this.
    Reading data from a table via index probes is only efficient when selecting a relatively small percentage of rows. For larger percentages, a full table scan is generally better.
    Consider a simple example: a query that selects from a table with biographies for all people on the planet. Suppose you are interested in all people from a certain country.
    select * from all_people where country='Vatican'
    would return only 800 rows (as Vatican is an extremely small country with a population of just 800 people). For this case, obviously, using an index would be very efficient.
    Now if we run this query:
    select * from all_people where country = 'India',
    we'd be getting over a billion rows. For this case, a full table scan would be several thousand times faster.
    Now consider the third case:
    select * from all_people where country = :b1
    What plan should the optimizer choose? The value of :b1 bind variable is generally not known during the parse time, it will be passed by the user when the query is already parsed, during run-time.
    In this case, one of two scenarios takes place: either the optimizer relies on some built-in default selectivities (basically, it takes a wild guess), or the optimizer postpones taking the final decision until the
    first time the query is run, 'peeks' the value of the bind, and optimizes the query for this case.
    It means that if, the first time the query is parsed, it was called with :b1 = 'India', a plan with a full table scan will be generated and cached for subsequent use. And until the cursor is aged out of the library cache
    or invalidated for some reason, this will be the plan for this query.
    If the first time it was called with :b1='Vatican', then an index-based plan will be picked.
    Either way, bind peeking only gives good results if the subsequent usage of the query is of the same kind as the first usage. That is, in the first case the plan will be efficient if the query is always run for countries with big populations,
    and in the second case, if it is always run for countries with small populations.
    This mechanism is called 'bind peeking' and it's one of the most common causes of performance problems. In 11g, there are more sophisticated mechanisms, such as cardinality feedback, but they don't always work as expected.
    This mechanism is the most likely explanation for your issue. However, without proper diagnostic information we cannot be 100% sure.
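    If you want to confirm that bind peeking is involved, the peeked bind values are stored with the cached cursor and can be displayed with DBMS_XPLAN; a sketch, where the sql_id is a placeholder you would look up in V$SQL first:
    select sql_id, child_number, plan_hash_value
      from v$sql
     where sql_text like 'select ccagentsta0_.LOGGED_IN%';
    select *
      from table( dbms_xplan.display_cursor('&sql_id', null, 'TYPICAL +PEEKED_BINDS') );
    The "Peeked Binds" section of the output shows the VZ_ID value the optimizer used when it built the cached plan, which you can compare against the values the application typically passes.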
    Best regards,
      Nikolay

  • Star Query Full Table Scan

    Hi, Folks:
    I have a complex SQL statement that runs very slowly.
    Following is the statement:
    SELECT
    T3.POSITION_ID,
    T12.PR_POSTN_ID,
    T12.PR_TERR_ID,
    T12.PR_REP_MANL_FLG,
    T9.CREATED,
    T10.PR_EMP_ID,
    T9.MODIFICATION_NUM,
    T12.DEDUP_TOKEN,
    T12.LOCATION_LEVEL,
    T12.PR_PRTNR_OU_ID,
    T12.PR_OU_TYPE_ID,
    T12.PAR_DUNS_NUM,
    T3.ACCNT_NAME,
    T11.ATTRIB_16,
    T6.PAR_ROW_ID,
    T3.INVSTR_FLG,
    T6.ROW_ID,
    T12.DUNS_NUM,
    T12.BU_ID,
    T10.ROW_ID,
    T2.LAST_NAME,
    T3.SRV_PROVDR_FLG,
    T12.X_PR_MERCH_NBR_ID,
    T3.ROW_STATUS,
    T12.NAME,
    T11.PAR_ROW_ID,
    T6.LAST_UPD_BY,
    T6.MODIFICATION_NUM,
    T3.PRIORITY_FLG,
    T10.NAME,
    T3.ASGN_SYS_FLG,
    T9.PROFIT,
    T12.PR_BL_ADDR_ID,
    T12.PR_REP_ASGN_TYPE,
    T9.LAST_UPD_BY,
    T3.FACILITY_FLG,
    T12.LAST_UPD_BY,
    T12.PR_SHIP_ADDR_ID,
    T11.MODIFICATION_NUM,
    T11.LAST_UPD_BY,
    T5.LOGIN,
    T3.ASGN_MANL_FLG
    FROM
    S_ADDR_ORG T1,
    S_CONTACT T2,
    S_ACCNT_POSTN T3,
    S_ORG_INT T4,
    S_EMPLOYEE T5,
    S_ORG_EXT_FNX T6,
    S_ORG_SYN T7,
    S_INDUST T8,
    S_ORG_EXT_T T9,
    S_POSTN T10,
    S_ORG_EXT_X T11,
    S_ORG_EXT T12
    WHERE
    T12.BU_ID = T4.ROW_ID (+) AND
    T12.PR_CON_ID = T2.ROW_ID (+) AND
    T12.ROW_ID = T7.OU_ID AND
    T12.ROW_ID = T11.PAR_ROW_ID (+) AND
    T12.ROW_ID = T6.PAR_ROW_ID (+) AND
    T12.ROW_ID = T9.PAR_ROW_ID (+) AND
    T12.PR_INDUST_ID = T8.ROW_ID (+) AND
    T12.PR_ADDR_ID = T1.ROW_ID (+) AND
    T12.PR_POSTN_ID = T10.ROW_ID AND
    T12.PR_POSTN_ID = T3.POSITION_ID AND
    T12.ROW_ID = T3.OU_EXT_ID AND
    T10.PR_EMP_ID = T5.ROW_ID (+) AND
    (T12.X_BMO_CUST_FLG = 'Y') AND
    (T7.NAME IS NULL );
    ***** SQL Statement Execute Time: 31.703 seconds *****
    I did an explain plan and found that the table S_ORG_EXT (T12)
    gets a full table scan.
    But the table S_ORG_EXT does have lots of indexes
    built on the columns used in the where clause.
    Our database uses the rule-based optimizer, and it should use
    an index instead of a full table scan.
    Then I looked at this SQL and realized it is a star query.
    One more thing: the table S_ORG_SYN (T7) defines
    the column NAME as NOT NULL, so if the query were processed it
    should return no rows.
    But I still don't know why Oracle uses a
    full table scan and ignores the fact that S_ORG_SYN.NAME should be
    NOT NULL.
    If I want to avoid the full table scan, how can I do it without
    switching to the cost-based optimizer mode?
    Thanks,
    Ke

    Michael:
    A nice explanation. In my experience, in versions up to 8.1.7, the RBO seems to be faster than the CBO for the large majority of queries. In our payroll application (version 8.0.5), removing statistics cut the time for the calculation run from 6.5 hours to under 2.
    The CBO seems to be significantly faster in 9i. We only have one application currently running in a 9.0.1 database. In this app, a large stored procedure took about 2 minutes to run when there were no statistics, and about 10 seconds after we analyzed the tables.
    As more of our vendors migrate to 9 (we just got the last vendor migrated off 7.3 to 8.0.6 a couple of months ago), I may become a bigger fan of the CBO.
    John,
    I remember having a discussion with you about the CBO in a thread once and am aware of your opinion of the CBO. My opinion has been: test which works for you, RBO or CBO - in our case we verified that the CBO worked better for us. Anyway, I was searching Metalink and it looks like you'll be forced to become a "bigger fan" of the CBO after 9i Release 2. This is from part of Doc ID 189702.1 on Metalink:
    The rule-based optimizer (RBO) will no longer be a valid optimization choice when Oracle9i is de-supported. The release after Oracle9i
    (referred to in this article as Oracle10i) will only support the cost-based optimizer (CBO). Hence Oracle9i Release 2 is the last release to
    contain the RBO. Partners and customers should certify their applications with the CBO before that time.
    ...but of course Oracle has been warning people of the demise of the RBO for some time.
    Al
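    For reference, the "with and without statistics" comparisons described above boil down to commands like these (a sketch; pay_calc is a placeholder table name):
    -- remove the optimizer statistics for one table (with OPTIMIZER_MODE=CHOOSE and no
    -- statistics anywhere in the query, the statement falls back to rule-based optimization)
    analyze table pay_calc delete statistics;
    -- gather them again so the CBO has something to work with
    analyze table pay_calc compute statistics;
    On later releases, dbms_stats.gather_table_stats and dbms_stats.delete_table_stats are the preferred interface for the same job.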

  • Problem of full table scan on a partitioned table

    hi all
    There is a table called "si_sync_operation" that has 171040 rows.
    I partitioned that table into a new table called "si_sync_operation_par" with 7 partitions.
    I issued the following statements
    SELECT * FROM si_sync_operation_par.
    SELECT * FROM si_sync_operation.
    The explain plan shows that the cost of the first statement is 1626 and that of the second statement is 1810.
    The "cost" of the full table scan on the partitioned table is lower than that of the non-partitioned table. That's fine.
    But the "Bytes" of the full table scan on the partitioned table is 5761288680 and that of the non-partitioned table is 263743680.
    Why does the full table scan on the partitioned table access more bytes than on the non-partitioned table?
    And how could a statement that accesses more bytes result in a lower cost?
    Thank you very much

    As Hemant mentioned, the bytes reported are an approximate number of bytes. As far as cost is concerned, according to Tom it's just a number and we should not compare queries by their cost (search asktom.oracle.com for more information).
    SQL> drop table non_part purge;
    Table dropped.
    SQL> drop table part purge;
    Table dropped.
    SQL>
    SQL> CREATE TABLE non_part
      2        (id  NUMBER(5),
      3         dt    DATE);
    Table created.
    SQL>
    SQL> CREATE TABLE part
      2        (id  NUMBER(5),
      3         dt    DATE)
      4         PARTITION BY RANGE(dt)
      5         (
      6         PARTITION part1_jan2008 VALUES LESS THAN(TO_DATE('01/02/2008','DD/MM/YYYY')),
      7         PARTITION part2_feb2008 VALUES LESS THAN(TO_DATE('01/03/2008','DD/MM/YYYY')),
      8         PARTITION part3_mar2008 VALUES LESS THAN(TO_DATE('01/04/2008','DD/MM/YYYY')),
      9         PARTITION part4_apr2008 VALUES LESS THAN(TO_DATE('01/05/2008','DD/MM/YYYY')),
    10         PARTITION part5_may2008 VALUES LESS THAN(TO_DATE('01/06/2008','DD/MM/YYYY'))
    11       );
    Table created.
    SQL>
    SQL>
    SQL> insert into non_part select rownum, trunc(sysdate) - rownum from dual connect by level <= 140;
    140 rows created.
    Execution Plan
    Plan hash value: 1731520519
    | Id  | Operation                     | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | INSERT STATEMENT              |      |     1 |     2   (0)| 00:00:01 |
    |   1 |  COUNT                        |      |       |            |          |
    |*  2 |   CONNECT BY WITHOUT FILTERING|      |       |            |          |
    |   3 |    FAST DUAL                  |      |     1 |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - filter(LEVEL<=140)
    SQL>
    SQL> insert into part select * from non_part;
    140 rows created.
    Execution Plan
    Plan hash value: 1654070669
    | Id  | Operation         | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | INSERT STATEMENT  |          |   140 |  3080 |     3   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS FULL| NON_PART |   140 |  3080 |     3   (0)| 00:00:01 |
    Note
       - dynamic sampling used for this statement
    SQL>
    SQL> commit;
    Commit complete.
    SQL>
    SQL> set line 10000
    SQL> set autotrace traceonly exp
    SQL> select * from non_part;
    Execution Plan
    Plan hash value: 1654070669
    | Id  | Operation         | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |          |   140 |  3080 |     3   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS FULL| NON_PART |   140 |  3080 |     3   (0)| 00:00:01 |
    Note
       - dynamic sampling used for this statement
    SQL> select * from part;
    Execution Plan
    Plan hash value: 3392317243
    | Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | SELECT STATEMENT    |      |   140 |  3080 |     9   (0)| 00:00:01 |       |       |
    |   1 |  PARTITION RANGE ALL|      |   140 |  3080 |     9   (0)| 00:00:01 |     1 |     5 |
    |   2 |   TABLE ACCESS FULL | PART |   140 |  3080 |     9   (0)| 00:00:01 |     1 |     5 |
    Note
       - dynamic sampling used for this statement
    SQL>
    SQL> exec dbms_stats.gather_table_stats(user, 'non_part');
    PL/SQL procedure successfully completed.
    SQL> exec dbms_stats.gather_table_stats(user, 'part');
    PL/SQL procedure successfully completed.
    SQL>
    SQL>
    SQL> select * from non_part;
    Execution Plan
    Plan hash value: 1654070669
    | Id  | Operation         | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |          |   140 |  1540 |     3   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS FULL| NON_PART |   140 |  1540 |     3   (0)| 00:00:01 |
    SQL> select * from part;
    Execution Plan
    Plan hash value: 3392317243
    | Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | SELECT STATEMENT    |      |   140 |  1540 |     9   (0)| 00:00:01 |       |       |
    |   1 |  PARTITION RANGE ALL|      |   140 |  1540 |     9   (0)| 00:00:01 |     1 |     5 |
    |   2 |   TABLE ACCESS FULL | PART |   140 |  1540 |     9   (0)| 00:00:01 |     1 |     5 |
    SQL>
    After analyzing the tables, notice that the Bytes column has changed value.

  • Strange full table scan behavior

    Hi all you sharp-eyed oracle gurus out there..
    I need some help/tips on an update I'm running which is taking a very long time. The Oracle RDBMS is 11.2.0.1 with the Advanced Compression option.
    I'm currently updating all rows in a table from value 1 to 0. (update mistaf set b_code='0';)
    The column in question is a CHAR(1) column and the column is not indexed. The table is a fairly large heap table with 55 million rows to be updated and its size is approx 11GB. The table is compressed with the COMPRESS FOR OLTP option.
    What is strange to me is that I can clearly see that a full table scan is running, but I cannot see any db file scattered read waits, as I would expect; instead I'm only seeing db file sequential reads. I suppose this might be the reason for the long execution time (dbconsole estimates 20 hours to complete, looking at SQL monitoring).
    Any views on why Oracle would do db file sequential reads on an FTS? And do you agree that this might be the reason why it takes so long to complete?
    More info: I first started the update and left work, and the next morning I saw that the update still wasn't finished, at which point I realised that I had a bitmap index on the column to be updated. I dropped the index and started the update once again. It seemed to execute very fast at the beginning before rapidly declining in performance.
    Thanks in advance for any help!
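    One way to confirm which waits actually dominate while the update runs is to look at the session's cumulative wait events; a sketch, where the SID of the updating session is a placeholder:
    select event, total_waits, round(time_waited_micro / 1e6, 1) as seconds_waited
      from v$session_event
     where sid = 123   -- placeholder: SID of the session running the update
     order by time_waited_micro desc;
    Single-block reads during an update that scans the whole table can come from undo, index maintenance, or migrated/chained rows rather than from the scan itself, so the breakdown helps narrow that down.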

    I tried to tkprof the trace file but no SQL came up..
    However, the raw trace file looks like this:
    *** 2010-07-15 17:32:39.829
    WAIT #1: nam='db file sequential read' ela= 7516 file#=11 block#=1185541 blocks=1 obj#=221897 tim=1279207959829762
    WAIT #1: nam='db file sequential read' ela= 7519 file#=3 block#=567053 blocks=1 obj#=0 tim=1279207959837428
    WAIT #1: nam='db file sequential read' ela= 55317 file#=3 block#=186728 blocks=1 obj#=0 tim=1279207959892903
    WAIT #1: nam='db file sequential read' ela= 23363 file#=11 block#=3528062 blocks=1 obj#=221897 tim=1279207959916438
    WAIT #1: nam='db file sequential read' ela= 4796 file#=3 block#=92969 blocks=1 obj#=0 tim=1279207959921314
    WAIT #1: nam='db file sequential read' ela= 1426 file#=11 block#=1079147 blocks=1 obj#=221897 tim=1279207959922846
    WAIT #1: nam='db file sequential read' ela= 4510 file#=11 block#=4180577 blocks=1 obj#=221897 tim=1279207959927479
    WAIT #1: nam='db file sequential read' ela= 12 file#=11 block#=478 blocks=1 obj#=221897 tim=1279207959927715
    WAIT #1: nam='db file sequential read' ela= 11 file#=3 block#=566015 blocks=1 obj#=0 tim=1279207959927768
    WAIT #1: nam='db file sequential read' ela= 17343 file#=11 block#=1142438 blocks=1 obj#=221897 tim=1279207960025312
    WAIT #1: nam='db file sequential read' ela= 11 file#=11 block#=202520 blocks=1 obj#=221897 tim=1279207960025548
    WAIT #1: nam='db file sequential read' ela= 15 file#=3 block#=612704 blocks=1 obj#=0 tim=1279207960025592
    WAIT #1: nam='db file sequential read' ela= 17604 file#=11 block#=1198573 blocks=1 obj#=221897 tim=1279207960043303
    WAIT #1: nam='buffer busy waits' ela= 4 file#=11 block#=1473771 class#=1 obj#=221897 tim=1279207960059044
    WAIT #1: nam='buffer busy waits' ela= 21 file#=11 block#=4173048 class#=1 obj#=221897 tim=1279207960066512
    WAIT #1: nam='buffer busy waits' ela= 3 file#=509 block#=392139 class#=1 obj#=221897 tim=1279207960070049
    WAIT #1: nam='buffer busy waits' ela= 20 file#=11 block#=1134301 class#=1 obj#=221897 tim=1279207960075224
    WAIT #1: nam='db file sequential read' ela= 19164 file#=11 block#=3502287 blocks=1 obj#=221897 tim=1279207960120163
    WAIT #1: nam='buffer busy waits' ela= 70 file#=3 block#=156 class#=45 obj#=0 tim=1279207960126680
    WAIT #1: nam='db file sequential read' ela= 43587 file#=11 block#=3503000 blocks=1 obj#=221897 tim=1279207960189443
    WAIT #1: nam='db file sequential read' ela= 14214 file#=11 block#=4135977 blocks=1 obj#=221897 tim=1279207960203841
    WAIT #1: nam='latch: undo global data' ela= 28 address=11239411512 number=237 tries=0 obj#=221897 tim=1279207960226196
    WAIT #1: nam='buffer busy waits' ela= 376 file#=11 block#=1343104 class#=1 obj#=221897 tim=1279207960228124
    WAIT #1: nam='buffer busy waits' ela= 4 file#=11 block#=1450745 class#=1 obj#=221897 tim=1279207960236628
    WAIT #1: nam='buffer busy waits' ela= 14 file#=11 block#=1456732 class#=1 obj#=221897 tim=1279207960237393
    WAIT #1: nam='buffer busy waits' ela= 3 file#=11 block#=1469341 class#=1 obj#=221897 tim=1279207960239415
    WAIT #1: nam='buffer busy waits' ela= 16 file#=11 block#=3498660 class#=1 obj#=221897 tim=1279207960241348
    WAIT #1: nam='buffer busy waits' ela= 10 file#=11 block#=1478782 class#=1 obj#=221897 tim=1279207960242208
    WAIT #1: nam='buffer busy waits' ela= 11 file#=11 block#=3529073 class#=1 obj#=221897 tim=1279207960242774
    WAIT #1: nam='buffer busy waits' ela= 10 file#=11 block#=3506834 class#=1 obj#=221897 tim=1279207960243188
    WAIT #1: nam='buffer busy waits' ela= 2 file#=11 block#=3550683 class#=1 obj#=221897 tim=1279207960243589
    WAIT #1: nam='buffer busy waits' ela= 3 file#=11 block#=4082313 class#=1 obj#=221897 tim=1279207960244816
    WAIT #1: nam='buffer busy waits' ela= 3 file#=11 block#=4090328 class#=1 obj#=221897 tim=1279207960245086
    WAIT #1: nam='buffer busy waits' ela= 2 file#=11 block#=3555804 class#=1 obj#=221897 tim=1279207960245350
    WAIT #1: nam='buffer busy waits' ela= 2 file#=11 block#=3483832 class#=1 obj#=221897 tim=1279207960245549
    WAIT #1: nam='buffer busy waits' ela= 3 file#=11 block#=4115411 class#=1 obj#=221897 tim=1279207960246323
    WAIT #1: nam='buffer busy waits' ela= 2 file#=11 block#=4100593 class#=1 obj#=221897 tim=1279207960246791
    WAIT #1: nam='buffer busy waits' ela= 3 file#=11 block#=4135120 class#=1 obj#=221897 tim=1279207960247407
    WAIT #1: nam='buffer busy waits' ela= 2 file#=11 block#=4119599 class#=1 obj#=221897 tim=1279207960247832
    WAIT #1: nam='buffer busy waits' ela= 3 file#=11 block#=4174925 class#=1 obj#=221897 tim=1279207960249045
    WAIT #1: nam='buffer busy waits' ela= 3 file#=11 block#=4185250 class#=1 obj#=221897 tim=1279207960249699
    WAIT #1: nam='buffer busy waits' ela= 2 file#=11 block#=4188816 class#=1 obj#=221897 tim=1279207960250138
    WAIT #1: nam='buffer busy waits' ela= 3 file#=11 block#=4189312 class#=1 obj#=221897 tim=1279207960250363
    WAIT #1: nam='buffer busy waits' ela= 3 file#=11 block#=4190380 class#=1 obj#=221897 tim=1279207960250618
    WAIT #1: nam='buffer busy waits' ela= 3 file#=11 block#=4190996 class#=1 obj#=221897 tim=1279207960251339
    WAIT #1: nam='buffer busy waits' ela= 9 file#=11 block#=4176416 class#=1 obj#=221897 tim=1279207960251490
    WAIT #1: nam='buffer busy waits' ela= 11 file#=509 block#=436859 class#=1 obj#=221897 tim=1279207960253748
    WAIT #1: nam='buffer busy waits' ela= 2 file#=509 block#=426961 class#=1 obj#=221897 tim=1279207960253993
    WAIT #1: nam='db file sequential read' ela= 18802 file#=509 block#=413732 blocks=1 obj#=221897 tim=1279207960273210
    WAIT #1: nam='db file sequential read' ela= 13 file#=3 block#=615387 blocks=1 obj#=0 tim=1279207960273322
    WAIT #1: nam='db file sequential read' ela= 3948 file#=3 block#=569033 blocks=1 obj#=0 tim=1279207960277522
    WAIT #1: nam='db file sequential read' ela= 14 file#=11 block#=4191700 blocks=1 obj#=221897 tim=1279207960333755
    WAIT #1: nam='db file sequential read' ela= 3745 file#=11 block#=1197543 blocks=1 obj#=221897 tim=1279207960358279
    WAIT #1: nam='db file sequential read' ela= 4541 file#=11 block#=472946 blocks=1 obj#=221897 tim=1279207960363005
    WAIT #1: nam='db file sequential read' ela= 7775 file#=3 block#=229860 blocks=1 obj#=0 tim=1279207960370848
    WAIT #1: nam='db file sequential read' ela= 22319 file#=11 block#=1150525 blocks=1 obj#=221897 tim=1279207960393342
    WAIT #1: nam='db file sequential read' ela= 17058 file#=11 block#=3542375 blocks=1 obj#=221897 tim=1279207960410577
    WAIT #1: nam='db file sequential read' ela= 16042 file#=509 block#=437647 blocks=1 obj#=221897 tim=1279207960427928
    WAIT #1: nam='db file sequential read' ela= 6412 file#=3 block#=542118 blocks=1 obj#=0 tim=1279207960434440
    WAIT #1: nam='buffer busy waits' ela= 660 file#=3 block#=88 class#=23 obj#=0 tim=1279207960457208
    WAIT #1: nam='db file sequential read' ela= 13 file#=11 block#=4140513 blocks=1 obj#=221897 tim=1279207960467438
    WAIT #1: nam='db file sequential read' ela= 5451 file#=11 block#=3516234 blocks=1 obj#=221897 tim=1279207960472965
    WAIT #1: nam='db file sequential read' ela= 5121 file#=11 block#=3514597 blocks=1 obj#=221897 tim=1279207960478231
    WAIT #1: nam='db file sequential read' ela= 3982 file#=3 block#=1039898 blocks=1 obj#=0 tim=1279207960482281
    WAIT #1: nam='db file sequential read' ela= 5391 file#=509 block#=433941 blocks=1 obj#=221897 tim=1279207960487775
    WAIT #1: nam='db file sequential read' ela= 9707 file#=11 block#=3551543 blocks=1 obj#=221897 tim=1279207960529848
    WAIT #1: nam='buffer busy waits' ela= 4 file#=11 block#=4090328 class#=1 obj#=221897 tim=1279207960610165
    WAIT #1: nam='buffer busy waits' ela= 2 file#=11 block#=4115879 class#=1 obj#=221897 tim=1279207960611710
    WAIT #1: nam='buffer busy waits' ela= 2 file#=11 block#=4100364 class#=1 obj#=221897 tim=1279207960612167
    WAIT #1: nam='buffer busy waits' ela= 3 file#=11 block#=4133339 class#=1 obj#=221897 tim=1279207960612648
    WAIT #1: nam='db file sequential read' ela= 7254 file#=509 block#=405005 blocks=1 obj#=221897 tim=1279207960631133
    WAIT #1: nam='db file sequential read' ela= 25608 file#=11 block#=1181075 blocks=1 obj#=221897 tim=1279207960693920
    etc
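    For reference, a typical tkprof invocation looks like this (file names are placeholders):
    tkprof orcl_ora_12345.trc update_trace_report.txt sys=no sort=exeela
    If no SQL text shows up in the report, one common reason is that tracing was switched on after the statement had already been parsed, so the trace file lacks the 'PARSING IN CURSOR' section that carries the statement text.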

  • Do partition scans take longer than a full table scan on an unpartitioned table?

    Hello there,
    I have a range-partitioned table PART_TABLE which has 10 Million records and 10 partitions having 1 million records each. Partition is done based on a Column named ID which is a sequence from 1 to 10 million.
    I created another table P2_BKP (doing a select * from part_table) which has the same dataset as that of PART_TABLE except that this table is not partitioned.
    Now, I run the same query on both tables to retrieve a range of data. Precisely, I am trying to read only the data present in 5 partitions of the partitioned table, which theoretically requires fewer reads than on the unpartitioned table.
    Yet, the query seems to take more time on the partitioned table than when run on the unpartitioned table. Any specific reason why this is the case?
    Below is the query I am trying to run on both the tables and their corresponding Explain Plans.
    QUERY A
    =========
    select * from P2_BKP where id<5000000;
    | Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |                                                                                                                                                                                                                                
    |   0 | SELECT STATEMENT  |        |  6573K|   720M| 12152   (2)| 00:02:26 |                                                                                                                                                                                                                                
    |*  1 |  TABLE ACCESS FULL| P2_BKP |  6573K|   720M| 12152   (2)| 00:02:26 |                                                                                                                                                                                                                                
    QUERY B
    ========
    select * from part_table where id<5000000;
    | Id  | Operation                | Name       | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |                                                                                                                                                                                                     
    |   0 | SELECT STATEMENT         |            |  3983K|   436M| 22181  (73)| 00:04:27 |       |       |                                                                                                                                                                                                     
    |   1 |  PARTITION RANGE ITERATOR|            |  3983K|   436M| 22181  (73)| 00:04:27 |     1 |     5 |                                                                                                                                                                                                     
    |*  2 |   TABLE ACCESS FULL      | PART_TABLE |  3983K|   436M| 22181  (73)| 00:04:27 |     1 |     5 |                                                                                                                                                                                                     

    At the risk of bringing unnecessary confusion into the discussion: I think there is a situation in 11g in which a full table scan on a non-partitioned table can be faster than the FTS on a corresponding partitioned table: if the size of the non-partitioned table reaches a certain threshold (I think it's: blocks > _small_table_threshold * 5), the runtime engine may decide to use a serial direct path read to access the data. If the individual partitions don't exceed the threshold, the engine will use the conventional path.
    Here is a small example for my assertion:
    -- I create a simple partitioned table
    -- and a corresponding non-partitioned table
    -- with 1M rows
    drop table tab_part;
    create table tab_part (
        col_part number
      , padding varchar2(100)
    )
    partition by list (col_part)
    (
        partition P00 values (0)
      , partition P01 values (1)
      , partition P02 values (2)
      , partition P03 values (3)
      , partition P04 values (4)
      , partition P05 values (5)
      , partition P06 values (6)
      , partition P07 values (7)
      , partition P08 values (8)
      , partition P09 values (9)
    );
    insert into tab_part
    select mod(rownum, 10)
         , lpad('*', 100, '*')
      from dual
    connect by level <= 1000000;
    exec dbms_stats.gather_table_stats(user, 'tab_part')
    drop table tab_nopart;
    create table tab_nopart
    as
    select *
      from tab_part;
    exec dbms_stats.gather_table_stats(user, 'tab_nopart')
    -- my _small_table_threshold is 1777 and the partitions
    -- have a size of ca. 1600 blocks while the non-partitioned table
    -- contains 15360 blocks
    -- I have to flush the buffer cache since
    -- the direct path access is only used
    -- if there are few blocks already in the cache
    alter system flush buffer_cache;
    -- the execution plans are not really exciting
    | Id  | Operation           | Name     | Rows  | Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | SELECT STATEMENT    |          |     1 |  8089   (0)| 00:00:41 |       |       |
    |   1 |  SORT AGGREGATE     |          |     1 |            |          |       |       |
    |   2 |   PARTITION LIST ALL|          |  1000K|  8089   (0)| 00:00:41 |     1 |    10 |
    |   3 |    TABLE ACCESS FULL| TAB_PART |  1000K|  8089   (0)| 00:00:41 |     1 |    10 |
    | Id  | Operation          | Name       | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |            |     1 |  7659   (0)| 00:00:39 |
    |   1 |  SORT AGGREGATE    |            |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| TAB_NOPART |  1000K|  7659   (0)| 00:00:39 |
    But on my PC the FTS on the non-partitioned table is faster than the FTS on the partitions (1sec to 3 sec.) and v$sesstat shows the reason for this difference:
    -- non partitioned table
    NAME                                               DIFF
    table scan rows gotten                          1000000
    file io wait time                                 15313
    session logical reads                             15156
    physical reads                                    15153
    consistent gets direct                            15152
    physical reads direct                             15152
    DB time                                              95
    -- partitioned table
    NAME                                               DIFF
    file io wait time                               2746493
    table scan rows gotten                          1000000
    session logical reads                             15558
    physical reads                                    15518
    physical reads cache prefetch                     15202
    DB time                                             295
    (maybe my choice of counters is questionable)
    So it's possible to get slower access for an FTS on a partitioned table under special conditions.
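    The v$sesstat deltas shown above can be captured by snapshotting the counters before and after each run with a query along these lines (a sketch; the SID of the test session is a placeholder):
    select sn.name, ss.value
      from v$sesstat ss
      join v$statname sn on sn.statistic# = ss.statistic#
     where ss.sid = 123   -- placeholder: SID of the test session
       and sn.name in ('table scan rows gotten', 'session logical reads',
                       'physical reads', 'physical reads direct', 'DB time');
    Running it once before and once after each full scan and diffing the values gives the kind of numbers listed above.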
    Regards
    Martin
