SQL performance: extra predicate causing a full table scan

I have this SQL:
SELECT DISTINCT
UPPER (RTRIM (LTRIM (SS.PRESCDEAID))) PRESCRIBER,
UPPER (RTRIM (LTRIM (SS.NPIPRESCR))) NPI_NUMBER
FROM
PBM_SXC_STAGING SS,
PBM_PHYSICIANS P
WHERE
P.PHYSICIAN_ID = SS.PRESCDEAID
AND P.NPI_NUMBER <> SS.NPIPRESCR
AND SS.NPIPRESCR <> SS.PRESCDEAID
It uses this plan:
SELECT STATEMENT ALL_ROWS  Cost: 13,843 Bytes: 3,636,232 Cardinality: 106,948
     4 SORT UNIQUE Cost: 13,843 Bytes: 3,636,232 Cardinality: 106,948           
          3 HASH JOIN Cost: 12,866 Bytes: 3,636,232 Cardinality: 106,948      
               1 TABLE ACCESS FULL TABLE PBM.PBM_PHYSICIANS Cost: 4,156 Bytes: 17,639,063 Cardinality: 1,356,851
               2 INDEX FAST FULL SCAN INDEX PBM.SXCSTG_IDX1 Cost: 3,859 Bytes: 43,302,882 Cardinality: 2,062,042
If I comment out "AND P.NPI_NUMBER <> SS.NPIPRESCR", I get the plan below, which uses the PK index (PBM.PBM_PHYSICIAN_PK) on P.PHYSICIAN_ID. I do have an index on P.NPI_NUMBER.
SELECT STATEMENT ALL_ROWS  Cost: 27,230 Bytes: 64,514,496 Cardinality: 2,016,078
     4 SORT UNIQUE Cost: 27,230 Bytes: 64,514,496 Cardinality: 2,016,078           
          3 HASH JOIN Cost: 9,617 Bytes: 64,514,496 Cardinality: 2,016,078      
               1 INDEX FAST FULL SCAN INDEX (UNIQUE) PBM.PBM_PHYSICIAN_PK Cost: 1,035 Bytes: 14,925,361 Cardinality: 1,356,851
               2 INDEX FAST FULL SCAN INDEX PBM.SXCSTG_IDX1 Cost: 3,859 Bytes: 43,302,882 Cardinality: 2,062,042
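For what it's worth, the reason the PK index drops out seems to be that the extra predicate needs P.NPI_NUMBER, which PBM_PHYSICIAN_PK doesn't carry, so Oracle has to visit the table anyway and a single full scan looks cheaper than 1.3M rowid lookups. A composite index covering both columns might give the optimizer an index-only source for that side of the hash join again. A sketch only, with a made-up index name:
CREATE INDEX PBM_PHYS_ID_NPI_IX
    ON PBM.PBM_PHYSICIANS (PHYSICIAN_ID, NPI_NUMBER);
With both the join column and NPI_NUMBER in one index, the hash join can be fed from an INDEX FAST FULL SCAN on each side, much like the second plan above.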

Sorry for the delay, I was out of the office.
PLAN_TABLE_OUTPUT
SQL_ID  4j270u8fbhwpu, child number 0
SELECT /*+ gather_plan_statistics */          DISTINCT          upper
(rtrim (ltrim (ss.prescdeaid))) prescriber         ,upper (rtrim (ltrim
(ss.npiprescr))) npi_number FROM pbm_sxc_staging ss     ,pbm_physicians
p WHERE p.physician_id = ss.prescdeaid   AND p.npi_number !=
ss.npiprescr   AND ss.npiprescr != ss.prescdeaid
Plan hash value: 2275909877
| Id  | Operation              | Name           | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
|   1 |  HASH UNIQUE           |                |      1 |    125K|     68 |00:00:01.54 |   24466 |  14552 |  1001K|  1001K| 1296K (0)|
|*  2 |   HASH JOIN            |                |      1 |    125K|   6941 |00:00:01.14 |   24466 |  14552 |    47M|  6159K|   68M (0)|
|   3 |    TABLE ACCESS FULL   | PBM_PHYSICIANS |      1 |   1341K|   1341K|00:00:00.01 |   14556 |  14552 |       |       |          |
|*  4 |    INDEX FAST FULL SCAN| SXCSTG_IDX1    |      1 |   1872K|   1887K|00:00:00.01 |    9910 |   0 |          |       |          |
Predicate Information (identified by operation id):
   2 - access("P"."PHYSICIAN_ID"="SS"."PRESCDEAID")
       filter("P"."NPI_NUMBER"<>"SS"."NPIPRESCR")
   4 - filter("SS"."NPIPRESCR"<>"SS"."PRESCDEAID")Edited by: Chris on Jul 12, 2011 8:19 AM

Similar Messages

  • Finding the Text of SQL Query Causing Full Table Scan

    Hi,
    does anyone have a SQL script that shows the complete SQL text of queries that have caused a full table scan?
    Please also let me know when this script needs to be run: does it work only while the query is running, or also after it completes (and if so, for how long, e.g. until the next restart)?
    Your help is appreciated.
    Thx,
    Mayuran

    You might try something like this:
    select sql_text,
           object_name
    from   v$sql s,
           v$sql_plan p
    where  s.address = p.address and
           s.hash_value = p.hash_value and
           s.child_number = p.child_number and
           p.operation = 'TABLE ACCESS' and
           p.options = 'FULL' and
           p.object_owner in ('SCOTT')
    ;
    Please note that this query is just a snapshot of the SQL statements currently in the cache.
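    On 10g and later the same join can also be written against sql_id and child_number, which is a little easier to reuse elsewhere (a sketch along the same lines; it still only sees statements currently in the cache):
    select s.sql_id,
           s.sql_text,
           p.object_owner,
           p.object_name
    from   v$sql s,
           v$sql_plan p
    where  s.sql_id = p.sql_id and
           s.child_number = p.child_number and
           p.operation = 'TABLE ACCESS' and
           p.options = 'FULL' and
           p.object_owner in ('SCOTT');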

  • VARRAY bind parameter in IN clause causes Full Table Scan

    Hi
    My problem is that Oracle elects to perform a full table scan when I want it to use an index.
    The situation is this: I have a single table SQL query with an IN clause such as:
    SELECT EMPNO, ENAME, JOB FROM EMP WHERE ENAME IN (...)
    Since this is running in an application, I want to allow the user to provide a list of ENAMEs to search. Because an IN list can't be bound as a single parameter, I've been using the Tom Kyte workaround, which relies on binding an array-valued scalar and then casting that array to a table that the database can handle in an IN clause:
    SELECT *
    FROM EMP
    WHERE ENAME IN (
    SELECT *
    FROM TABLE(CAST( ? AS TABLE_OF_VARCHAR)))
    This resulted in very slow performance due to a full table scan. To test, I ran the SQL in SQL*Plus and provided the IN clause values in the query itself. The explain plan showed it using my index...ok good. But once I changed the IN clause to the 'select * from table...' syntax Oracle went into Full Scan mode. I added an index hint but it didn't change the plan. Has anyone had success using this technique without a full scan?
    Thanks
    John
    p.s.
    Please let me know if you think this should be posted on a different forum. Even though my context is a Java app developed with JDev this seemed like a SQL question.

    Justin and 3360 - that was great advice and certainly nothing I would have come up with. However, as posted, the performance still wasn't good...but it gave me a term to Google on. I found this Ask Tom page http://asktom.oracle.com/pls/ask/f?p=4950:8:::::F4950_P8_DISPLAYID:3779680732446#15740265481549, where he included a seemingly magical 'where rownum >=0' which, when applied with your suggestions, turned my query from minutes into seconds.
    My plans are as follows:
    1 - Query with standard IN clause
    SQL> explain plan for
    2 select accession_number, protein_name, sequence_id from protein_dim
    3 where accession_number in ('33460', '33458', '33451');
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes | Cost |
    | 0 | SELECT STATEMENT | | 7 | 336 | 4 |
    | 1 | INLIST ITERATOR | | | | |
    | 2 | TABLE ACCESS BY INDEX ROWID| PROTEIN_DIM | 7 | 336 | 4 |
    | 3 | INDEX RANGE SCAN | IDX_PROTEIN_ACCNUM | 7 | | 3 |
    Note: cpu costing is off, 'PLAN_TABLE' is old version
    11 rows selected.
    2 - Standard IN Clause with Index hint
    SQL> explain plan for
    2 select /*+ INDEX(protein_dim IDX_PROTEIN_ACCNUM) */
    3 accession_number, protein_name, sequence_id, taxon_id, organism_name, data_source
    4 from pdssuser.protein_dim
    5 where accession_number in
    6 ('33460', '33458', '33451');
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes | Cost |
    | 0 | SELECT STATEMENT | | 7 | 588 | 4 |
    | 1 | INLIST ITERATOR | | | | |
    | 2 | TABLE ACCESS BY INDEX ROWID| PROTEIN_DIM | 7 | 588 | 4 |
    | 3 | INDEX RANGE SCAN | IDX_PROTEIN_ACCNUM | 7 | | 3 |
    Note: cpu costing is off, 'PLAN_TABLE' is old version
    11 rows selected.
    3 - Using custom TABLE_OF_VARCHAR type
    CREATE TYPE TABLE_OF_VARCHAR AS
    TABLE OF VARCHAR2(50);
    SQL> explain plan for
    2 select
    3 accession_number, protein_name, sequence_id, taxon_id, organism_name, data_source
    4 from pdssuser.protein_dim
    5 where accession_number in
    6 (select * from table(cast(TABLE_OF_VARCHAR('33460', '33458', '33451') as TABLE_OF_VARCHAR)) t);
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes | Cost |
    | 0 | SELECT STATEMENT | | 2 | 168 | 57M|
    | 1 | NESTED LOOPS SEMI | | 2 | 168 | 57M|
    | 2 | TABLE ACCESS FULL | PROTEIN_DIM | 5235K| 419M| 13729 |
    | 3 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | | | |
    Note: cpu costing is off, 'PLAN_TABLE' is old version
    11 rows selected.
    4 - Using custom TABLE_OF_VARCHAR type w/ Index hint
    SQL> explain plan for
    2 select /*+ INDEX(protein_dim IDX_PROTEIN_ACCNUM) */
    3 accession_number, protein_name, sequence_id, taxon_id, organism_name, data_source
    4 from pdssuser.protein_dim
    5 where accession_number in
    6 (select * from table(cast(TABLE_OF_VARCHAR('33460', '33458', '33451') as TABLE_OF_VARCHAR)) t);
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes | Cost |
    | 0 | SELECT STATEMENT | | 2 | 168 | 57M|
    | 1 | NESTED LOOPS SEMI | | 2 | 168 | 57M|
    | 2 | TABLE ACCESS BY INDEX ROWID | PROTEIN_DIM | 5235K| 419M| 252K|
    | 3 | INDEX FULL SCAN | IDX_PROTEIN_ACCNUM | 5235K| | 17255 |
    | 4 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | | | |
    Note: cpu costing is off, 'PLAN_TABLE' is old version
    12 rows selected.
    5 - Using custom TABLE_OF_VARCHAR type w/ cardinality hint
    SQL> explain plan for
    2 select
    3 accession_number, protein_name, sequence_id, taxon_id, organism_name, data_source from protein_dim
    4 where accession_number in (select /*+ cardinality( t 10 ) */
    5 * from TABLE(CAST (TABLE_OF_VARCHAR('33460', '33458', '33451') AS TABLE_OF_VARCHAR)) t);
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes | Cost |
    | 0 | SELECT STATEMENT | | 2 | 168 | 57M|
    | 1 | NESTED LOOPS SEMI | | 2 | 168 | 57M|
    | 2 | TABLE ACCESS FULL | PROTEIN_DIM | 5235K| 419M| 13729 |
    | 3 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | | | |
    Note: cpu costing is off, 'PLAN_TABLE' is old version
    11 rows selected.
    6 - Using custom TABLE_OF_VARCHAR type w/ cardinality hint
    and rownum >= 0 constraint
    SQL> explain plan for
    2 select
    3 accession_number, protein_name, sequence_id, taxon_id, organism_name, data_source from protein_dim
    4 where accession_number in (select /*+ cardinality( t 10 ) */
    5 * from TABLE(CAST (TABLE_OF_VARCHAR('33460', '33458', '33451') AS TABLE_OF_VARCHAR)) t
    6 where rownum >= 0);
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes | Cost |
    | 0 | SELECT STATEMENT | | 25 | 2775 | 43 |
    | 1 | TABLE ACCESS BY INDEX ROWID | PROTEIN_DIM | 2 | 168 | 3 |
    | 2 | NESTED LOOPS | | 25 | 2775 | 43 |
    | 3 | VIEW | VW_NSO_1 | 10 | 270 | 11 |
    | 4 | SORT UNIQUE | | 10 | | |
    | 5 | COUNT | | | | |
    | 6 | FILTER | | | | |
    | 7 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | | | |
    | 8 | INDEX RANGE SCAN | IDX_PROTEIN_ACCNUM | 2 | | 2 |
    Note: cpu costing is off, 'PLAN_TABLE' is old version
    16 rows selected.
    I don't understand why performance improved so dramatically, but I'm happy that it did.
    Thanks a ton!
    John
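    Putting the pieces together, the bind-variable form of the final query comes out roughly like this (the :id_list name is only a placeholder for whatever your JDBC layer binds as a TABLE_OF_VARCHAR):
    select accession_number, protein_name, sequence_id, taxon_id, organism_name, data_source
    from pdssuser.protein_dim
    where accession_number in
          (select /*+ cardinality( t 10 ) */ *
           from table(cast(:id_list as TABLE_OF_VARCHAR)) t
           where rownum >= 0);
    The rownum >= 0 keeps the collection subquery from being merged away, and the cardinality hint stops the optimizer from assuming thousands of rows in it, which seems to be what finally pushed it to the nested-loop plan against the index.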

  • Locate SQL causes full table scans from Statspack

    Hello,
    In my Statspack reports I see a lot of full table scans (1,425,297).
    How can I locate the queries that cause this?
    stats$sql_plan should fit?
    Oracle is 9i
    Thank you

    >
    How can I locate the query that causes this ?
    It can be hard. One idea is to put comments in queries identifying where they come from, something like
    select /* my_package.my_procedure */ *
      from dual;
    The comment should remain with the SQL text, so the various reports showing the SQL text should also indicate where the query came from.
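    Since the real question is which statement is driving those scans, it can also help to rank what is still in the cursor cache by I/O and keep only statements whose plan contains a full table scan. A rough sketch (on 9i, stats$sql_plan is only populated if your snapshot level captures plans, level 6 or higher as far as I recall):
    select s.disk_reads,
           s.executions,
           s.sql_text
    from   v$sqlarea s
    where  exists (select null
                   from   v$sql_plan p
                   where  p.address = s.address and
                          p.hash_value = s.hash_value and
                          p.operation = 'TABLE ACCESS' and
                          p.options = 'FULL')
    order  by s.disk_reads desc;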

  • Entity Framework Generated SQL for paging or using Linq skip take causes full table scans.

    The SQL generated for pagination causes a full table scan. Is there any way to fix this?
    I am using
    ODP.NET ODTwithODAC1120320_32bit
    ASP.NET 4.5
    EF 5
    Oracle 11gR2
    This table has 2 million records. The further into the records you page the longer it takes.
    LINQ
    var cnt = (from errorLog in ctx.ERRORLOGANDSERVICELOG_VIEW
                        select errorLog).Count();
                    var query = (from errorLog in ctx.ERRORLOGANDSERVICELOG_VIEW
                                 orderby errorLog.ERR_LOG_ID
                                 select errorLog).Skip(cnt-10).Take(10).ToList();
    Here are the query and execution plan.
    SELECT *
    FROM   (SELECT "Extent1"."ERR_LOG_ID"  AS "ERR_LOG_ID",
                   "Extent1"."SRV_LOG_ID"  AS "SRV_LOG_ID",
                   "Extent1"."TS"          AS "TS",
                   "Extent1"."MSG"         AS "MSG",
                   "Extent1"."STACK_TRACE" AS "STACK_TRACE",
                   "Extent1"."MTD_NM"      AS "MTD_NM",
                   "Extent1"."PRM"         AS "PRM",
                   "Extent1"."INSN_ID"     AS "INSN_ID",
                   "Extent1"."TS_1"        AS "TS_1",
                   "Extent1"."LOG_ETRY"    AS "LOG_ETRY"
            FROM   (SELECT "Extent1"."ERR_LOG_ID"                                  AS "ERR_LOG_ID",
                           "Extent1"."SRV_LOG_ID"                                  AS "SRV_LOG_ID",
                           "Extent1"."TS"                                          AS "TS",
                           "Extent1"."MSG"                                         AS "MSG",
                           "Extent1"."STACK_TRACE"                                 AS "STACK_TRACE",
                           "Extent1"."MTD_NM"                                      AS "MTD_NM",
                           "Extent1"."PRM"                                         AS "PRM",
                           "Extent1"."INSN_ID"                                     AS "INSN_ID",
                           "Extent1"."TS_1"                                        AS "TS_1",
                           "Extent1"."LOG_ETRY"                                    AS "LOG_ETRY",
                           row_number() OVER (ORDER BY "Extent1"."ERR_LOG_ID" ASC) AS "row_number"
                    FROM   (SELECT "ERRORLOGANDSERVICELOG_VIEW"."ERR_LOG_ID"  AS "ERR_LOG_ID",
                                   "ERRORLOGANDSERVICELOG_VIEW"."SRV_LOG_ID"  AS "SRV_LOG_ID",
                                   "ERRORLOGANDSERVICELOG_VIEW"."TS"          AS "TS",
                                   "ERRORLOGANDSERVICELOG_VIEW"."MSG"         AS "MSG",
                                   "ERRORLOGANDSERVICELOG_VIEW"."STACK_TRACE" AS "STACK_TRACE",
                                   "ERRORLOGANDSERVICELOG_VIEW"."MTD_NM"      AS "MTD_NM",
                                   "ERRORLOGANDSERVICELOG_VIEW"."PRM"         AS "PRM",
                                   "ERRORLOGANDSERVICELOG_VIEW"."INSN_ID"     AS "INSN_ID",
                                   "ERRORLOGANDSERVICELOG_VIEW"."TS_1"        AS "TS_1",
                                   "ERRORLOGANDSERVICELOG_VIEW"."LOG_ETRY"    AS "LOG_ETRY"
                            FROM   "IDS_CORE"."ERRORLOGANDSERVICELOG_VIEW" "ERRORLOGANDSERVICELOG_VIEW") "Extent1") "Extent1"
            WHERE  ("Extent1"."row_number" > 1933849)
            ORDER  BY "Extent1"."ERR_LOG_ID" ASC)
    WHERE  (ROWNUM <= (10))
    | Id  | Operation              | Name                   | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT       |                        |    10 | 31750 |       |   821K  (1)| 02:44:15 |
    |*  1 |  COUNT STOPKEY         |                        |       |       |       |            |          |
    |   2 |   VIEW                 |                        |  1561K|  4728M|       |   821K  (1)| 02:44:15 |
    |*  3 |    VIEW                |                        |  1561K|  4748M|       |   821K  (1)| 02:44:15 |
    |   4 |     WINDOW SORT        |                        |  1561K|  3154M|  4066M|   821K  (1)| 02:44:15 |
    |*  5 |      HASH JOIN OUTER   |                        |  1561K|  3154M|       |   130K  (1)| 00:26:09 |
    |   6 |       TABLE ACCESS FULL| IDS_SERVICES_LOG       |  1047 | 52350 |       |     5   (0)| 00:00:01 |
    |   7 |       TABLE ACCESS FULL| IDS_SERVICES_ERROR_LOG |  1561K|  3080M|       |   130K  (1)| 00:26:08 |
    Predicate Information (identified by operation id):
       1 - filter(ROWNUM<=10)
       3 - filter("Extent1"."row_number">1933849)
       5 - access("T1"."SRV_LOG_ID"(+)="T2"."SRV_LOG_ID")
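    Most of that time is going into the WINDOW SORT: row_number() is computed over the whole 1.5M-row view and spilled to temp before the filter throws away everything but one page. If ERR_LOG_ID is indexed on the underlying error-log table, a plain ORDER BY ... ROWNUM top-N against the view is worth comparing (a sketch only, not what EF emits; whether the index is really used depends on how the view is defined):
    SELECT *
    FROM   (SELECT v.*
            FROM   IDS_CORE.ERRORLOGANDSERVICELOG_VIEW v
            ORDER  BY v.ERR_LOG_ID DESC)
    WHERE  ROWNUM <= 10;
    Because this asks for the last ten rows by ERR_LOG_ID instead of "skip 1,933,849 rows", Oracle can run it as a top-N sort (or an index-driven stopkey) rather than numbering and sorting the entire result.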

    I did try a sample from Stack Overflow that applies it to all string types, but I didn't see any difference in the query results. Please note that I have the problem even without any orderby or where statements; of course the Skip/Take generates them. Please advise how I would implement the EntityFunctions.AsNonUnicode method with this LINQ query.
    LINQ
    var cnt = (from errorLog in ctx.ERRORLOGANDSERVICELOG_VIEW
                        select errorLog).Count();
                    var query = (from errorLog in ctx.ERRORLOGANDSERVICELOG_VIEW
                                 orderby errorLog.ERR_LOG_ID
                                 select errorLog).Skip(cnt-10).Take(10).ToList();
    This is what I inserted into my model to hopefully fix it (from "c# - EF Code First - Globally set varchar mapping over nvarchar" on Stack Overflow):
    /// <summary>
    /// Change the "default" of all string properties for a given entity to varchar instead of nvarchar.
    /// </summary>
    /// <param name="modelBuilder"></param>
    /// <param name="entityType"></param>
    protected void SetAllStringPropertiesAsNonUnicode(
       DbModelBuilder modelBuilder,
       Type entityType)
       var stringProperties = entityType.GetProperties().Where(
      c => c.PropertyType == typeof(string)
       && c.PropertyType.IsPublic
       && c.

  • Simple Query in Oracle Linked Table in MS Access causes full table scan.

    I am running a very simple query in MS ACCESS to a linked Oracle table as follows:
    Select *
    From EXPRESS_SERVICE_EVENTS --(the linked table name refers to EXPRESS.SERVICE_EVENTS)
    Where performed > MyDate()
    or
    Select *
    From EXPRESS_SERVICE_EVENTS --(the linked table name refers to EXPRESS.SERVICE_EVENTS)
    Where performed > [Forms]![MyForm]![Date1]
    We have over 50 machines and this query runs fine on over half of these, using an Oracle Index on the "performed" field. Running exactly the same thing on the other machines causes a full table scan, therefore ignoring the Index (all machines access the same Access DB).
    Strangely, if we write the query as follows:
    Select *
    From EXPRESS_SERVICE_EVENTS
    Where performed > #09/04/2009 08:00#
    it works fast everywhere!
    Any help on this 'phenomenon' would be appreciated.
    Things we've done:
    Checked regional settings, ODBC driver settings, and MS Access settings (as in Tools->Options); we have the latest XP and Office service packs, and re-linked all Access tables on both the slow and fast machines independently.

    First of all, thanks gdarling for your reply. This solved our problem.
    Just a small note to those who may be using this thread.
    Although this might not be the reason, my PC had Oracle 9iR2 installed with Administrative Tools, whereas user machines had the same thing installed but as a Runtime installation. For some reason, my PC did not have 'bind date' etc. as an option in the workarounds, but user machines did have this workaround option. Strangely, although I did not have the option, my (ODBC) query was running as expected, but user queries were not.
    When we set the workaround checkbox accordingly, the queries then run as expected (fast).
    Once again,
    Thanks

  • Trunc causing Full Table Scans

    I have a situtaion here where my query is as follows.
    SQL> select count(1) from HBSM_SM_ACCOUNT_INFO where OPTIONAL_PARM5='MH' and CUST_STATUS in ('UP','UUP') and trunc(FIRST_ACTVN_DATE) = trunc(sysdate);
    COUNT(1)
    6
    PLAN_TABLE_OUTPUT
    Plan hash value: 3951750498
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
    | 0 | SELECT STATEMENT | | 1 | 10 | 13904 (1)| 00:02:47 | | |
    | 1 | SORT AGGREGATE | | 1 | 10 | | | | |
    | 2 | PARTITION LIST SINGLE| | 1 | 10 | 13904 (1)| 00:02:47 | 12 | 12 |
    |* 3 | TABLE ACCESS FULL | HBSM_SM_ACCOUNT_INFO | 1 | 10 | 13904 (1)| 00:02:47 | 12 | 12 |
    Predicate Information (identified by operation id):
    3 - filter(("CUST_STATUS"='UP' OR "CUST_STATUS"='UUP') AND
    TO_DATE(INTERNAL_FUNCTION("FIRST_ACTVN_DATE"))=TO_DATE(TO_CHAR(SYSDATE@!)))
    16 rows selected.
    If I remove the trunc from the query, the performance definitely improves, but the results are wrong.
    SQL> select count(1) from HBSM_SM_ACCOUNT_INFO where OPTIONAL_PARM5='MH' and CUST_STATUS in ('UP','UUP') and FIRST_ACTVN_DATE = trunc(sysdate);
    COUNT(1)
    0
    PLAN_TABLE_OUTPUT
    Plan hash value: 454529511
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
    | 0 | SELECT STATEMENT | | 1 | 40 | 47 (0)| 00:00:01 | | |
    |* 1 | TABLE ACCESS BY GLOBAL INDEX ROWID| HBSM_SM_ACCOUNT_INFO | 1 | 40 | 47 (0)| 00:00:01 | 12 | 12 |
    |* 2 | INDEX RANGE SCAN | IND_FIRST_ACTVN_DATE | 51 | | 4 (0)| 00:00:01 | | |
    Can someone please help me get the right data while also preventing these full table scans?

    Unless you are using a functional index, applying any function to an indexed column prevents the use of the index.
    The way round it in your case is to realise that
    select count(1) from HBSM_SM_ACCOUNT_INFO where OPTIONAL_PARM5='MH' and CUST_STATUS in ('UP','UUP') and trunc(FIRST_ACTVN_DATE) = trunc(sysdate)
    is really asking that FIRST_ACTVN_DATE should be sometime today. You could therefore rewrite it as
    select count(1) from HBSM_SM_ACCOUNT_INFO where OPTIONAL_PARM5='MH' and CUST_STATUS in ('UP','UUP')
    and FIRST_ACTVN_DATE >= trunc(sysdate)
    and FIRST_ACTVN_DATE < trunc(sysdate) + 1
    Note: this still might not use the index, depending on how many rows fall within today's date versus how many are outside it.
    Also, when posting, remember to put your code between code tags and to post create table scripts and sample data inserts.
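    The range rewrite above is usually the cleanest fix. The other option the first sentence alludes to is a function-based index, so the original trunc() predicate itself becomes indexable. A sketch only; the index name is made up, and on a partitioned table like this one you would also want to decide between a local and a global index:
    create index ind_first_actvn_trunc
    on HBSM_SM_ACCOUNT_INFO (trunc(FIRST_ACTVN_DATE)) local;
    After creating it and re-gathering statistics, trunc(FIRST_ACTVN_DATE) = trunc(sysdate) can use the index instead of showing up as a filter on the full scan.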

  • Where columnname like '%somevalue' causing full table scan

    hi,
    10.2.0.4 database
    is it possible to force an index scan over a full table scan if I use a where clause similar to the following:
    where col1 like '%somevalue';
    There is an index with col1 as the first segment of the index and another column as the second segment of the index.
    Thanks
    JOhn

    I have done it for you
    SQL> create index empX on emp(job) ;
    Index created.
    SQL> explain plan for select * from emp where job like '%ERK' ;
    Explained.
    SQL> select * from table(dbms_xplan.display) ;
    PLAN_TABLE_OUTPUT
    Plan hash value: 3956160932
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |      |     1 |    37 |     3   (0)| 00:00:01 |
    |*  1 |  TABLE ACCESS FULL| EMP  |     1 |    37 |     3   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("JOB" LIKE '%ERK')
    13 rows selected.
    SQL> explain plan for select * from emp where job like 'C%ERK' ;
    Explained.
    SQL> select * from table(dbms_xplan.display) ;
    PLAN_TABLE_OUTPUT
    Plan hash value: 140376749
    | Id  | Operation                   | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |      |     4 |   148 |     2   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| EMP  |     4 |   148 |     2   (0)| 00:00:01 |
    |*  2 |   INDEX RANGE SCAN          | EMPX |     4 |       |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("JOB" LIKE 'C%ERK')
           filter("JOB" LIKE 'C%ERK')
    15 rows selected.
    SQL> explain plan for select /*+index (emp,EMPX) */ * from emp where job like '%ERK' ;
    Explained.
    SQL> select * from table(dbms_xplan.display) ;
    PLAN_TABLE_OUTPUT
    Plan hash value: 3745534319
    | Id  | Operation                   | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |      |     1 |    37 |     2   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| EMP  |     1 |    37 |     2   (0)| 00:00:01 |
    |*  2 |   INDEX FULL SCAN           | EMPX |     1 |       |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - filter("JOB" LIKE '%ERK')
    14 rows selected.
    SS
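    Note that even with the hint the INDEX FULL SCAN still reads every entry in EMPX; it just avoids the table blocks. If the trailing-match search is frequent, another approach sometimes used is a function-based index on the reversed string, so the leading wildcard becomes a leading prefix. A sketch only: REVERSE is an undocumented function, so test it on your own version before relying on it, and the index name is made up.
    create index empX_rev on emp(reverse(job)) ;
    select * from emp where reverse(job) like 'KRE%' ;
    Here 'KRE%' is just '%ERK' written backwards; the application would have to reverse the user's search string before binding it.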

  • Prompt on DATE forces FULL TABLE SCAN

    When using a prompt on a datetime field, OBIEE sends SQL to the database with a TIMESTAMP literal.
    Because of the TIMESTAMP literal, the Oracle database does a full table scan. The field ATIST is a date with time on the physical database.
    By default, ATIST was configured as TIMESTAMP in the RPD physical layer. The SQL request is sent to an Oracle 10 database.
    That is the query sent to the database:
    -------------------- Sending query to database named PlantControl1 (id: <<10167>>):
    select distinct T1471.ATIST as c1,
    T1471.GUTMENGEMELD2 as c2
    from
    AGRUECK T1471 /* Fact_ARBEITSGANGMELDUNGEN */
    where ( T1471.ATIST = TIMESTAMP '2005-04-01 13:48:05' )
    order by c1, c2
    The result takes more than half a minute to appear.
    Because OBIEE is using "TIMESTAMP", the database performs a full table scan instead of using the index.
    Using TO_DATE instead of TIMESTAMP, the result appears after a second.
    select distinct T1471.ATIST, T1471.GUTMENGEMELD2 as c2
    from
    AGRUECK T1471 /* Fact_ARBEITSGANGMELDUNGEN */
    where ( T1471.ATIST = to_date('2005.04.01 13:48:05', 'yyyy.mm.dd hh24:mi:ss') );
    Is there any way to resolve the issue?
    PS: When the field ATIST is configured as DATE in the physical layer, the SQL performs well because it uses "to_date" instead of "timestamp". But this cuts off the time part of the date. When it is configured as DATETIME, OBIEE uses TIMESTAMP again.
    What I need is a working date + time field.
    Has anybody encountered a similar problem?

    To be honest I haven't come across many scenarios where the Time has been important. Most of our reporting stops at Day level.
    What is the real world business question being asked here that requires DayTime?
    Incidentally if you change your datatype on the base table you will see it works fine.
    CREATE TABLE daytime( daytime TIMESTAMP );
    CREATE UNIQUE INDEX dt ON daytime  (daytime)
    SQL> set autotrace traceonly
    SQL> SELECT * FROM daytime
      2  WHERE daytime = TIMESTAMP '2007-04-01 13:00:45';
    no rows selected
    Execution Plan
    Plan hash value: 3985400340
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |      |     1 |    13 |     1   (0)| 00:00:01 |
    |*  1 |  INDEX UNIQUE SCAN| DT   |     1 |    13 |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - access("DAYTIME"=TIMESTAMP' 2007-04-01 13:00:45.000000000')
    Statistics
              1  recursive calls
              0  db block gets
              1  consistent gets
              0  physical reads
              0  redo size
            242  bytes sent via SQL*Net to client
            362  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              0  rows processed
    SQL>
    However, if it's a DATE it would appear to do some internal function call, which I guess is the source of the problem ...
    | Id  | Operation         | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |         |     1 |     9 |     2   (0)| 00:00:01 |
    |*  1 |  TABLE ACCESS FULL| DAYTIME |     1 |     9 |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter(INTERNAL_FUNCTION("DAYTIME")=TIMESTAMP' 2007-04-01
                  13:00:45.000000000')
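    That predicate section shows the cause: when a DATE column is compared to a TIMESTAMP literal, the column side gets converted (the INTERNAL_FUNCTION wrapper) and the index on it can no longer be used. If there is any way to influence the comparison OBIEE generates, casting the literal down to DATE leaves the column alone (a sketch against the DATE version of the demo table above):
    SELECT * FROM daytime
    WHERE  daytime = CAST(TIMESTAMP '2007-04-01 13:00:45' AS DATE);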

  • Query doing full table scan

    Hi all,
    10.2.0.4 on solaris 10
    SELECT sum(RechargeForPrepaid/10000), to_date(substr (TIMESTAMP, 1,8),'YYYY/MM/DD')
    FROM medt.crm_t  WHERE to_date(substr (TIMESTAMP, 1,8),'YYYY/MM/DD') >= trunc(sysdate)-1 and tradetype != '0' group by to_date(substr (TIMESTAMP, 1,8),'YYYY/MM/DD');
    The explain plan shows that it performs a full table scan on crm_t. I created indexes on the columns timestamp and tradetype too, and collected stats, but it is still doing a full table scan.
    Please guide
    Thanks
    Kai

    sybrand_b wrote:
    The column is wrongly named --> timestamp is a reserved word.
    True:
    SQL> select * from v$reserved_words where keyword = 'TIMESTAMP'
      2  /
    KEYWORD                                                              LENGTH
    TIMESTAMP                                                                 9
    1 row selected.
    It is also of the wrong type --> dates shouldn't be stored as varchar2.
    All we know is that the column is treated as if it is a VARCHAR2. It doesn't have to be a VARCHAR2, because he might be relying on some implicit datatype conversion.
    The design of this table is a complete mess.
    Drop it and redesign.
    You'd better tell that to Oracle as well then :-) :
    SQL> desc user_objects
    Name                                                                      Null?    Type
    OBJECT_NAME                                                                        VARCHAR2(128)
    SUBOBJECT_NAME                                                                     VARCHAR2(30)
    OBJECT_ID                                                                          NUMBER
    DATA_OBJECT_ID                                                                     NUMBER
    OBJECT_TYPE                                                                        VARCHAR2(18)
    CREATED                                                                            DATE
    LAST_DDL_TIME                                                                      DATE
    TIMESTAMP                                                                          VARCHAR2(19)
    STATUS                                                                             VARCHAR2(7)
    TEMPORARY                                                                          VARCHAR2(1)
    GENERATED                                                                          VARCHAR2(1)
    SECONDARY                                                                          VARCHAR2(1)
    My point is that the TIMESTAMP datatype was introduced in version 9, and versions prior to that were free to use the name "TIMESTAMP". And all those older applications still function. A redesign would be ideal, but maybe not economically feasible. Oracle certainly didn't choose a redesign.
    If you are going to build a new application, then I agree that you'd better not use that reserved word anymore.
    Regards,
    Rob.
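    Back on the performance question: as long as the column stays a VARCHAR2, wrapping it in to_date(substr(...)) hides it from any index on it. One way to keep the predicate indexable is to compare strings on the column itself and only convert for the SELECT and GROUP BY. A sketch; it assumes the first eight characters really are a fixed-width date such as YYYYMMDD, which only the original poster can confirm:
    SELECT sum(RechargeForPrepaid/10000),
           to_date(substr(TIMESTAMP,1,8),'YYYYMMDD')
    FROM   medt.crm_t
    WHERE  TIMESTAMP >= to_char(trunc(sysdate)-1,'YYYYMMDD')
    AND    tradetype != '0'
    GROUP  BY to_date(substr(TIMESTAMP,1,8),'YYYYMMDD');
    Even then the optimizer will only prefer the index if the last day is a small slice of the table.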

  • Select statement in a function does Full Table Scan

    All,
    I have been coding a stored procedure that writes 38K rows in less than a minute. If I add another column, which requires a call to a package and 4 functions within that package, it runs for about 4 hours. I have confirmed that, due to problems in one of the functions, the code does full table scans. The package and all of its functions were written by other contractors who are long gone.
    Please note that case_number_in (VARCHAR2) and effective_date_in (DATE) are parameters sent to the problem function and I have verified through TOAD’s debugger that their values are correct.
    The table ps2_benefit_register has over 40 million rows, but case_number is indexed on that table.
    The table ps1_case_fs has more than 20 million rows and also has an index on case_number.
    Select #1 – causes a full table scan; runs and writes 38K rows in a couple of hours.
    SELECT max(a2.application_date)
    INTO l_app_date
    FROM dwfssd.ps2_benefit_register a1, dwfssd.ps2_case_fs a2
    WHERE a2.case_number = case_number_in and
    a1.case_number = a2.case_number and
    a2.application_date <= effective_date_in and
    a1.DOCUMENT_TYPE = 'F';
    Select #2 – hard-coding the values makes the code write the same 38K rows in a few minutes.
    SELECT max(a2.application_date)
    INTO l_app_date
    FROM dwfssd.ps2_benefit_register a1, dwfssd.ps2_case_fs a2
    WHERE a2.case_number = 'A006438' and
    a1.case_number = a2.case_number and
    a2.application_date <= '01-Apr-2009' and
    a1.DOCUMENT_TYPE = 'F';
    Why does using the passed parameter values in the first select statement cause a full table scan?
    Thank you for your help,
    Seyed
    Edited by: user11117178 on Jul 30, 2009 6:22 AM
    Edited by: user11117178 on Jul 30, 2009 6:23 AM
    Edited by: user11117178 on Jul 30, 2009 6:24 AM

    Hello Dan,
    Thank you for your input. The function is not deterministic; therefore, I am providing you with the explain plan. By version number, if you are referring to the database version, we are running 10g.
    PLAN_TABLE_OUTPUT
    Plan hash value: 2132048964
    | Id  | Operation                     | Name                    | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | SELECT STATEMENT              |                         |   324K|    33M|  3138   (5)| 00:00:38 |       |       |
    |*  1 |  HASH JOIN                    |                         |   324K|    33M|  3138   (5)| 00:00:38 |       |       |
    |   2 |   BITMAP CONVERSION TO ROWIDS |                         |     3 |     9 |     1   (0)| 00:00:01 |       |       |
    |*  3 |    BITMAP INDEX FAST FULL SCAN| IDX_PS2_ACTION_TYPES    |       |       |            |          |       |       |
    |   4 |   PARTITION RANGE ITERATOR    |                         |   866K|    87M|  3121   (4)| 00:00:38 |   154 |   158 |
    |   5 |    TABLE ACCESS FULL          | PS2_FS_TRANSACTION_FACT |   866K|    87M|  3121   (4)| 00:00:38 |   154 |   158 |
    Predicate Information (identified by operation id):
       1 - access("AL1"."ACTION_TYPE_ID"="AL2"."ACTION_TYPE_ID")
       3 - filter("AL2"."ACTION_TYPE"='1' OR "AL2"."ACTION_TYPE"='2' OR "AL2"."ACTION_TYPE"='S')
    Thank you very much,
    Seyed
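    One more thing that would narrow it down: the plan above is for a different statement, so it doesn't show what happens inside the function with the real bind values. Adding /*+ gather_plan_statistics */ to the SELECT inside the function, running the procedure once, and then pulling the cursor's actual plan would show where the time goes (a sketch; &sql_id is whatever V$SQL reports for that inner query):
    select * from table(dbms_xplan.display_cursor('&sql_id', null, 'ALLSTATS LAST'));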
                                                             

  • FASTER THROUGH PUT ON FULL TABLE SCAN

    Product: ORACLE SERVER
    Date written: 1995-04-10
    Subject: Faster through put on Full table scans
    db_file_multiblock_read_count only affects the performance of full table scans.
    Oracle has a maximum I/O size of 64 KB, hence db_block_size *
    db_file_multiblock_read_count must be less than or equal to 64 KB.
    If your query is really doing an index range scan then the performance
    of full scans is irrelevant. In order to improve the performance of this
    type of query it is important to reduce the number of blocks that
    the 'interesting' part of the index is contained within.
    Obviously the db_blocksize has the most impact here.
    Historically Informix has not been able to modify their database block size,
    and has had a fixed 2KB block.
    On most Unix platforms Oracle can use up to 8KBytes.
    (Some eg: Sequent allow 16KB).
    This means that for the same size of B-Tree index Oracle with
    an 8KB blocksize can read its contents in 1/4 of the time that
    Informix with a 2KB block could do.
    You should also consider whether the PCTFREE value used for your index is
    appropriate. If it is too large then you will be wasting space
    in each index block. (It's too large IF you are not going to get any
    entry size extension OR you are not going to get any new rows for existing
    index values. NB: this is usually only a real consideration for large indexes - 10,000 entries is small.)
    db_file_simultaneous_writes has no direct relevance to index re-balancing.
    (PS: In the U.K. we benchmarked against Informix, Sybase, Unify and
    HP/Allbase for the database server application that HP uses internally to
    monitor and control it's Tape drive manufacturing lines. They chose
    Oracle because: We outperformed Informix.
                    Sybase was too slow AND too unreliable.
                    Unify was short on functionality and SLOW.
                    HP/Allbase couldn't match the availability requirements and wasn't as functional.
    Informix had problems demonstrating the ability to do hot backups without
    severely affecting the system throughput.
    HP benchmarked all DB vendors on both 9000/800 and 9000/700 machines with
    different disks (ie: HP-IB and SCSI). Oracle came out ahead in all
    configurations.
    NB: It's always worth throwing in a simulated system failure whilst the
    benchmark is in progress. Informix has a history of not coping gracefully.
    That is, they usually need some manual intervention to perform the database
    recovery.)
    I have a prospective client who is running a stripped-down, souped-up version of
    Informix with no catalytic converter. One of their queries boils down to an
    Index Range Scan on 10000 records. How can I achieve better throughput
    on a single drive single CPU machine (HP/UX) without using raw devices.
    I had heard rebuilding the database with a block size factor greater than
    the OS block size would yield better performance. Also I tried changing
    the db_file_multiblock_read_count to 32 without much improvement.
    Adjusting the db_writers to two did not help either.     
    Also will the adjustment of the db_file_simultaneous_writes help on
    the maintenance of a index during rebalancing operations.

    2) If CBO, how are the stats collected?
    daily (tables with fewer than a million rows) and weekly (all tables)
    There's no need to collect stats so frequently unless it's absolutely necessary, e.g. you have massive updates on tables daily or weekly.
    It will help if you can post your sample explain plan and query.

  • Find out the SQLs which are using a full table scan

    Hello all, how can I find out which queries are using a full table scan? Any ideas?

    In general, though, why would you want to tune SQL statements that aren't causing problems? Statspack will tell you what the most resource-intensive SQL statements on your system are. A SQL*Net trace of sessions that are performing poorly will indicate which statements are the most resource-intensive for that session. If a statement is incorrectly doing a full-table scan, but it is not causing a problem, why spend time tuning it? If you're not focusing your tuning attention on identifying statements that are causing problems, you'll also miss out on 90% of tuning opportunities which involve rewriting (or eliminating) code to make it more efficient. I can simulate a join on two tables with nested cursor loops, which won't generate a single full table scan, but replacing that code with a real join, while it will cause at least one full table scan, will be orders of magnitude faster.
    As an aside, full table scans aren't necessarily a bad thing. If a statement needs to retrieve more than a couple percent of the rows of a table, full table scans are the most efficient way to go.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • Full Table Scans on Auto Ship Confirm Report (WSHRDASC) Causing Performance Issues

    Severe performance issue with Auto Ship Confirm report WSHRDASC.
    From the Statspack reports, a single SQL statement is currently consuming approximately 50% of all physical disk I/O.
    Two full table scans come from this problem query:
    Rows Row Source Operation
    SORT ORDER BY
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    TABLE ACCESS FULL WSH_NEW_DELIVERIES
    TABLE ACCESS BY INDEX ROWID HZ_CUST_ACCOUNTS
    INDEX UNIQUE SCAN HZ_CUST_ACCOUNTS_U1 (object id 81392)
    TABLE ACCESS BY INDEX ROWID HZ_PARTIES
    INDEX UNIQUE SCAN HZ_PARTIES_U1 (object id 172682)
    TABLE ACCESS BY INDEX ROWID WSH_EXCEPTIONS
    INDEX RANGE SCAN WSH_EXCEPTIONS_N9 (object id 133587)
    TABLE ACCESS BY INDEX ROWID WSH_DELIVERY_LEGS
    INDEX RANGE SCAN WSH_DELIVERY_LEGS_N1 (object id 46224)
    TABLE ACCESS BY INDEX ROWID WSH_DOCUMENT_INSTANCES
    INDEX RANGE SCAN WSH_DOCUMENT_INSTANCES_N1 (object id 46405)
    TABLE ACCESS FULL WSH_PICKING_BATCHES
    TABLE ACCESS BY INDEX ROWID FND_LOOKUP_VALUES
    INDEX RANGE SCAN FND_LOOKUP_VALUES_U1 (object id 34010)
    Please help. I have applied one patch for this issue (Patch 5531283), but I still have the same problem.

    Hi;
    What is your EBS version?
    If note "Full Table Scans on Auto Ship Confirm Report (WSHRDASC) Causing Performance Issues" [ID 393014.1] doesn't help, then I suggest raising an SR.
    Regards,
    Helios
