Trunc causing Full Table Scans

I have a situation here; my query is as follows.
SQL> select count(1) from HBSM_SM_ACCOUNT_INFO where OPTIONAL_PARM5='MH' and CUST_STATUS in ('UP','UUP') and trunc(FIRST_ACTVN_DATE) = trunc(sysdate);
COUNT(1)
6
PLAN_TABLE_OUTPUT
Plan hash value: 3951750498
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 1 | 10 | 13904 (1)| 00:02:47 | | |
| 1 | SORT AGGREGATE | | 1 | 10 | | | | |
| 2 | PARTITION LIST SINGLE| | 1 | 10 | 13904 (1)| 00:02:47 | 12 | 12 |
|* 3 | TABLE ACCESS FULL | HBSM_SM_ACCOUNT_INFO | 1 | 10 | 13904 (1)| 00:02:47 | 12 | 12 |
Predicate Information (identified by operation id):
3 - filter(("CUST_STATUS"='UP' OR "CUST_STATUS"='UUP') AND
TO_DATE(INTERNAL_FUNCTION("FIRST_ACTVN_DATE"))=TO_DATE(TO_CHAR(SYSDATE@!)))
16 rows selected.
If I remove the trunc clause from the query, the performance definitely improves, but the results are wrong.
SQL> select count(1) from HBSM_SM_ACCOUNT_INFO where OPTIONAL_PARM5='MH' and CUST_STATUS in ('UP','UUP') and FIRST_ACTVN_DATE = trunc(sysdate);
COUNT(1)
0
PLAN_TABLE_OUTPUT
Plan hash value: 454529511
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 1 | 40 | 47 (0)| 00:00:01 | | |
|* 1 | TABLE ACCESS BY GLOBAL INDEX ROWID| HBSM_SM_ACCOUNT_INFO | 1 | 40 | 47 (0)| 00:00:01 | 12 | 12 |
|* 2 | INDEX RANGE SCAN | IND_FIRST_ACTVN_DATE | 51 | | 4 (0)| 00:00:01 | | |
Can someone please help me get the right data while also preventing these full table scans?

Unless you are using a functional index, applying any function to an indexed column prevents the use of the index.
The way round it in your case is to realise that
select count(1) from HBSM_SM_ACCOUNT_INFO where OPTIONAL_PARM5='MH' and CUST_STATUS in ('UP','UUP') and trunc(FIRST_ACTVN_DATE) = trunc(sysdate)
is really asking that FIRST_ACTVN_DATE should be sometime today. You could therefore rewrite it as
select count(1) from HBSM_SM_ACCOUNT_INFO where OPTIONAL_PARM5='MH' and CUST_STATUS in ('UP','UUP')
and FIRST_ACTVN_DATE >= trunc(sysdate)
and FIRST_ACTVN_DATE < trunc(sysdate) + 1
Note, this still might not use the index, depending on how many rows are within today's date versus how many are outside it.
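If you would rather keep the trunc() in the query, the function-based index mentioned above is the other route; a minimal sketch (the index name is made up, and statistics would need to be gathered before the optimizer considers it):
create index hbsm_sm_acct_fad_trunc_idx on HBSM_SM_ACCOUNT_INFO (trunc(FIRST_ACTVN_DATE));
-- the predicate trunc(FIRST_ACTVN_DATE) = trunc(sysdate) can then be resolved
-- from this index instead of a full scan of the partition.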
Also, when posting, remember to put your code between code tags and to post create table scripts and sample data inserts.

Similar Messages

  • Simple Query in Oracle Linked Table in MS Access causes full table scan.

    I am running a very simple query in MS ACCESS to a linked Oracle table as follows:
    Select *
    From EXPRESS_SERVICE_EVENTS --(the linked table name refers to EXPRESS.SERVICE_EVENTS)
    Where performed > MyDate()
    or
    Select *
    From EXPRESS_SERVICE_EVENTS --(the linked table name refers to EXPRESS.SERVICE_EVENTS)
    Where performed > [Forms]![MyForm]![Date1]
    We have over 50 machines and this query runs fine on over half of these, using an Oracle Index on the "performed" field. Running exactly the same thing on the other machines causes a full table scan, therefore ignoring the Index (all machines access the same Access DB).
    Strangely, if we write the query as follows:
    Select *
    From EXPRESS_SERVICE_EVENTS
    Where performed > #09/04/2009 08:00#
    it works fast everywhere!
    Any help on this 'phenomenon' would be appreciated.
    Things we've done:
    Checked regional settings, ODBC driver settings, MS Access settings (as in Tools->Options), we have the latest XP and Office service packs, and re-linked all Access tables on both the slow and fast machines independently.

    First of all, thanks gdarling for your reply. This solved our problem.
    Just a small note to those who may be using this thread.
    Although this might not be the reason, my PC had Oracle 9iR2 installed with Administrative Tools, whereas user machines had the same thing installed but using the Runtime installation. For some reason, my PC did not have 'bind date' etc. as an option in the workarounds, but user machines did have this workaround option. Strangely, although I did not have the option, my (ODBC) query was running as expected, but user queries were not.
    When we set the workaround checkbox accordingly, the queries then run as expected (fast).
    Once again,
    Thanks

  • Finding the Text of SQL Query Causing Full Table Scan

    Hi,
    does anyone have a SQL script that shows the complete SQL text of queries that have caused a full table scan?
    Please also let me know how soon this script needs to be run: does it work only while the query is running, or does it also work after the query completes (and if so, for how long is it valid, e.g. until the next restart)?
    Your help is appreciated.
    Thx,
    Mayuran

    You might try something like this:
    select sql_text,
           object_name
    from   v$sql s,
           v$sql_plan p
    where  s.address = p.address and
           s.hash_value = p.hash_value and
           s.child_number = p.child_number and
           p.operation = 'TABLE ACCESS' and
           p.options = 'FULL' and
           p.object_owner in ('SCOTT')
    ;
    Please note that this query is just a snapshot of the SQL statements currently in the cache.
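    If the statement has already aged out of the cursor cache and you have AWR available (10g and later, Diagnostics Pack licence required), a similar lookup can be done against the workload repository instead; a rough sketch:
    select t.sql_id,
           t.sql_text,
           p.object_owner,
           p.object_name
    from   dba_hist_sql_plan p,
           dba_hist_sqltext  t
    where  p.sql_id = t.sql_id and
           p.operation = 'TABLE ACCESS' and
           p.options = 'FULL' and
           p.object_owner in ('SCOTT');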

  • Entity Framework Generated SQL for paging or using Linq skip take causes full table scans.

    The SQL generated creates queries that cause a full table scan for pagination. Is there any way to fix this?
    I am using
    ODP.NET ODTwithODAC1120320_32bit
    ASP.NET 4.5
    EF 5
    Oracle 11gR2
    This table has 2 million records. The further into the records you page the longer it takes.
    LINQ
    var cnt = (from errorLog in ctx.ERRORLOGANDSERVICELOG_VIEW
                        select errorLog).Count();
                    var query = (from errorLog in ctx.ERRORLOGANDSERVICELOG_VIEW
                                 orderby errorLog.ERR_LOG_ID
                                 select errorLog).Skip(cnt-10).Take(10).ToList();
    Here are the query and execution plans.
    SELECT *
    FROM   (SELECT "Extent1"."ERR_LOG_ID"  AS "ERR_LOG_ID",
                   "Extent1"."SRV_LOG_ID"  AS "SRV_LOG_ID",
                   "Extent1"."TS"          AS "TS",
                   "Extent1"."MSG"         AS "MSG",
                   "Extent1"."STACK_TRACE" AS "STACK_TRACE",
                   "Extent1"."MTD_NM"      AS "MTD_NM",
                   "Extent1"."PRM"         AS "PRM",
                   "Extent1"."INSN_ID"     AS "INSN_ID",
                   "Extent1"."TS_1"        AS "TS_1",
                   "Extent1"."LOG_ETRY"    AS "LOG_ETRY"
            FROM   (SELECT "Extent1"."ERR_LOG_ID"                                  AS "ERR_LOG_ID",
                           "Extent1"."SRV_LOG_ID"                                  AS "SRV_LOG_ID",
                           "Extent1"."TS"                                          AS "TS",
                           "Extent1"."MSG"                                         AS "MSG",
                           "Extent1"."STACK_TRACE"                                 AS "STACK_TRACE",
                           "Extent1"."MTD_NM"                                      AS "MTD_NM",
                           "Extent1"."PRM"                                         AS "PRM",
                           "Extent1"."INSN_ID"                                     AS "INSN_ID",
                           "Extent1"."TS_1"                                        AS "TS_1",
                           "Extent1"."LOG_ETRY"                                    AS "LOG_ETRY",
                           row_number() OVER (ORDER BY "Extent1"."ERR_LOG_ID" ASC) AS "row_number"
                    FROM   (SELECT "ERRORLOGANDSERVICELOG_VIEW"."ERR_LOG_ID"  AS "ERR_LOG_ID",
                                   "ERRORLOGANDSERVICELOG_VIEW"."SRV_LOG_ID"  AS "SRV_LOG_ID",
                                   "ERRORLOGANDSERVICELOG_VIEW"."TS"          AS "TS",
                                   "ERRORLOGANDSERVICELOG_VIEW"."MSG"         AS "MSG",
                                   "ERRORLOGANDSERVICELOG_VIEW"."STACK_TRACE" AS "STACK_TRACE",
                                   "ERRORLOGANDSERVICELOG_VIEW"."MTD_NM"      AS "MTD_NM",
                                   "ERRORLOGANDSERVICELOG_VIEW"."PRM"         AS "PRM",
                                   "ERRORLOGANDSERVICELOG_VIEW"."INSN_ID"     AS "INSN_ID",
                                   "ERRORLOGANDSERVICELOG_VIEW"."TS_1"        AS "TS_1",
                                   "ERRORLOGANDSERVICELOG_VIEW"."LOG_ETRY"    AS "LOG_ETRY"
                            FROM   "IDS_CORE"."ERRORLOGANDSERVICELOG_VIEW" "ERRORLOGANDSERVICELOG_VIEW") "Extent1") "Extent1"
            WHERE  ("Extent1"."row_number" > 1933849)
            ORDER  BY "Extent1"."ERR_LOG_ID" ASC)
    WHERE  (ROWNUM <= (10))
    | Id  | Operation              | Name                   | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT       |                        |    10 | 31750 |       |   821K  (1)| 02:44:15 |
    |*  1 |  COUNT STOPKEY         |                        |       |       |       |            |          |
    |   2 |   VIEW                 |                        |  1561K|  4728M|       |   821K  (1)| 02:44:15 |
    |*  3 |    VIEW                |                        |  1561K|  4748M|       |   821K  (1)| 02:44:15 |
    |   4 |     WINDOW SORT        |                        |  1561K|  3154M|  4066M|   821K  (1)| 02:44:15 |
    |*  5 |      HASH JOIN OUTER   |                        |  1561K|  3154M|       |   130K  (1)| 00:26:09 |
    |   6 |       TABLE ACCESS FULL| IDS_SERVICES_LOG       |  1047 | 52350 |       |     5   (0)| 00:00:01 |
    |   7 |       TABLE ACCESS FULL| IDS_SERVICES_ERROR_LOG |  1561K|  3080M|       |   130K  (1)| 00:26:08 |
    Predicate Information (identified by operation id):
       1 - filter(ROWNUM<=10)
       3 - filter("Extent1"."row_number">1933849)
       5 - access("T1"."SRV_LOG_ID"(+)="T2"."SRV_LOG_ID")

    I did try a sample from Stack Overflow that would apply it to all string types, but I didn't see any difference in the query results. Please note, I am having the problem without any order by or where statements; of course the Skip/Take generates them. Please advise how I would implement the EntityFunctions.AsNonUnicode method with this LINQ query.
    LINQ
    var cnt = (from errorLog in ctx.ERRORLOGANDSERVICELOG_VIEW
                        select errorLog).Count();
                    var query = (from errorLog in ctx.ERRORLOGANDSERVICELOG_VIEW
                                 orderby errorLog.ERR_LOG_ID
                                 select errorLog).Skip(cnt-10).Take(10).ToList();
    This is what I inserted into my model to hopefully fix it.  FROM:c# - EF Code First - Globally set varchar mapping over nvarchar - Stack Overflow
    /// <summary>
    /// Change the "default" of all string properties for a given entity to varchar instead of nvarchar.
    /// </summary>
    /// <param name="modelBuilder"></param>
    /// <param name="entityType"></param>
    protected void SetAllStringPropertiesAsNonUnicode(
       DbModelBuilder modelBuilder,
       Type entityType)
       var stringProperties = entityType.GetProperties().Where(
      c => c.PropertyType == typeof(string)
       && c.PropertyType.IsPublic
       && c.

  • Locate SQL causes full table scans from Statspack

    Hello,
    In my statspack reports I see a lot of full table scans (1,425,297).
    How can I locate the query that causes this ?
    stats$sql_plan should fit?
    Oracle is 9i
    Thank you

    >
    How can I locate the query that causes this ?
    It can be hard. One idea is to put comments in queries identifying where they come from, something like
    select /* my_package.my_procedure */ *
      from dual;
    The comment should remain with the SQL text, so various reports showing the SQL text should also indicate where the query came from.

  • Where columnname like '%somevalue' causing full table scan

    hi,
    10.2.0.4 database
    is it possible to force an index scan over a full table scan if I use a where clause similar to the following:
    where col1 like '%somevalue';
    There is an index with col1 as the first segment of the index and another column as the second segment of the index.
    Thanks
    JOhn

    I have done it for you
    SQL> create index empX on emp(job) ;
    Index created.
    SQL> explain plan for select * from emp where job like '%ERK' ;
    Explained.
    SQL> select * from table(dbms_xplan.display) ;
    PLAN_TABLE_OUTPUT
    Plan hash value: 3956160932
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |      |     1 |    37 |     3   (0)| 00:00:01 |
    |*  1 |  TABLE ACCESS FULL| EMP  |     1 |    37 |     3   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("JOB" LIKE '%ERK')
    13 rows selected.
    SQL> explain plan for select * from emp where job like 'C%ERK' ;
    Explained.
    SQL> select * from table(dbms_xplan.display) ;
    PLAN_TABLE_OUTPUT
    Plan hash value: 140376749
    | Id  | Operation                   | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |      |     4 |   148 |     2   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| EMP  |     4 |   148 |     2   (0)| 00:00:01 |
    |*  2 |   INDEX RANGE SCAN          | EMPX |     4 |       |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("JOB" LIKE 'C%ERK')
           filter("JOB" LIKE 'C%ERK')
    15 rows selected.
    SQL> explain plan for select /*+index (emp,EMPX) */ * from emp where job like '%ERK' ;
    Explained.
    SQL> select * from table(dbms_xplan.display) ;
    PLAN_TABLE_OUTPUT
    Plan hash value: 3745534319
    | Id  | Operation                   | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |      |     1 |    37 |     2   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| EMP  |     1 |    37 |     2   (0)| 00:00:01 |
    |*  2 |   INDEX FULL SCAN           | EMPX |     1 |       |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - filter("JOB" LIKE '%ERK')
    14 rows selected.
    SS

  • VARRAY bind parameter in IN clause causes Full Table Scan

    Hi
    My problem is that Oracle elects to perform a full table scan when I want it to use an index.
    The situation is this: I have a single table SQL query with an IN clause such as:
    SELECT EMPNO, ENAME, JOB FROM EMP WHERE ENAME IN (...)
    Since this is running in an application, I want to allow the user to provide a list of ENAMES to search. Because IN clauses don't accept bind parameters I've been using the Tom Kyte workaround which relies on setting a bind variable to an array-valued scalar, and then casting this array to be a table of records that the database can handle in an IN clause:
    SELECT *
    FROM EMP
    WHERE ENAME IN (
    SELECT *
    FROM TABLE(CAST( ? AS TABLE_OF_VARCHAR)))
    This resulted in very slow performance due to a full table scan. To test, I ran the SQL in SQL*Plus and provided the IN clause values in the query itself. The explain plan showed it using my index...ok good. But once I changed the IN clause to the 'select * from table...' syntax Oracle went into Full Scan mode. I added an index hint but it didn't change the plan. Has anyone had success using this technique without a full scan?
    Thanks
    John
    p.s.
    Please let me know if you think this should be posted on a different forum. Even though my context is a Java app developed with JDev this seemed like a SQL question.

    Justin and 3360 - that was great advice and certainly nothing I would have come up with. However, as posted, the performance still wasn't good...but it gave me a term to Google on. I found this Ask Tom page http://asktom.oracle.com/pls/ask/f?p=4950:8:::::F4950_P8_DISPLAYID:3779680732446#15740265481549, where he included a seemingly magical 'where rownum >=0' which, when applied with your suggestions, turned my query from minutes into seconds.
    My plans are as follows:
    1 - Query with standard IN clause
    SQL> explain plan for
    2 select accession_number, protein_name, sequence_id from protein_dim
    3 where accession_number in ('33460', '33458', '33451');
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes | Cost |
    | 0 | SELECT STATEMENT | | 7 | 336 | 4 |
    | 1 | INLIST ITERATOR | | | | |
    | 2 | TABLE ACCESS BY INDEX ROWID| PROTEIN_DIM | 7 | 336 | 4 |
    | 3 | INDEX RANGE SCAN | IDX_PROTEIN_ACCNUM | 7 | | 3 |
    Note: cpu costing is off, 'PLAN_TABLE' is old version
    11 rows selected.
    2 - Standard IN Clause with Index hint
    SQL> explain plan for
    2 select /*+ INDEX(protein_dim IDX_PROTEIN_ACCNUM) */
    3 accession_number, protein_name, sequence_id, taxon_id, organism_name, data_source
    4 from pdssuser.protein_dim
    5 where accession_number in
    6 ('33460', '33458', '33451');
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes | Cost |
    | 0 | SELECT STATEMENT | | 7 | 588 | 4 |
    | 1 | INLIST ITERATOR | | | | |
    | 2 | TABLE ACCESS BY INDEX ROWID| PROTEIN_DIM | 7 | 588 | 4 |
    | 3 | INDEX RANGE SCAN | IDX_PROTEIN_ACCNUM | 7 | | 3 |
    Note: cpu costing is off, 'PLAN_TABLE' is old version
    11 rows selected.
    3 - Using custom TABLE_OF_VARCHAR type
    CREATE TYPE TABLE_OF_VARCHAR AS
    TABLE OF VARCHAR2(50);
    SQL> explain plan for
    2 select
    3 accession_number, protein_name, sequence_id, taxon_id, organism_name, data_source
    4 from pdssuser.protein_dim
    5 where accession_number in
    6 (select * from table(cast(TABLE_OF_VARCHAR('33460', '33458', '33451') as TABLE_OF_VARCHAR)) t);
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes | Cost |
    | 0 | SELECT STATEMENT | | 2 | 168 | 57M|
    | 1 | NESTED LOOPS SEMI | | 2 | 168 | 57M|
    | 2 | TABLE ACCESS FULL | PROTEIN_DIM | 5235K| 419M| 13729 |
    | 3 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | | | |
    Note: cpu costing is off, 'PLAN_TABLE' is old version
    11 rows selected.
    4 - Using custom TABLE_OF_VARCHAR type w/ Index hint
    SQL> explain plan for
    2 select /*+ INDEX(protein_dim IDX_PROTEIN_ACCNUM) */
    3 accession_number, protein_name, sequence_id, taxon_id, organism_name, data_source
    4 from pdssuser.protein_dim
    5 where accession_number in
    6 (select * from table(cast(TABLE_OF_VARCHAR('33460', '33458', '33451') as TABLE_OF_VARCHAR)) t);
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes | Cost |
    | 0 | SELECT STATEMENT | | 2 | 168 | 57M|
    | 1 | NESTED LOOPS SEMI | | 2 | 168 | 57M|
    | 2 | TABLE ACCESS BY INDEX ROWID | PROTEIN_DIM | 5235K| 419M| 252K|
    | 3 | INDEX FULL SCAN | IDX_PROTEIN_ACCNUM | 5235K| | 17255 |
    | 4 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | | | |
    PLAN_TABLE_OUTPUT
    Note: cpu costing is off, 'PLAN_TABLE' is old version
    12 rows selected.
    5 - Using custom TABLE_OF_VARCHAR type w/ cardinality hint
    SQL> explain plan for
    2 select
    3 accession_number, protein_name, sequence_id, taxon_id, organism_name, data_source from protein_dim
    4 where accession_number in (select /*+ cardinality( t 10 ) */
    5 * from TABLE(CAST (TABLE_OF_VARCHAR('33460', '33458', '33451') AS TABLE_OF_VARCHAR)) t);
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes | Cost |
    | 0 | SELECT STATEMENT | | 2 | 168 | 57M|
    | 1 | NESTED LOOPS SEMI | | 2 | 168 | 57M|
    | 2 | TABLE ACCESS FULL | PROTEIN_DIM | 5235K| 419M| 13729 |
    | 3 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | | | |
    Note: cpu costing is off, 'PLAN_TABLE' is old version
    11 rows selected.
    6 - Using custom TABLE_OF_VARCHAR type w/ cardinality hint
    and rownum >= 0 constraint
    SQL> explain plan for
    2 select
    3 accession_number, protein_name, sequence_id, taxon_id, organism_name, data_source from protein_dim
    4 where accession_number in (select /*+ cardinality( t 10 ) */
    5 * from TABLE(CAST (TABLE_OF_VARCHAR('33460', '33458', '33451') AS TABLE_OF_VARCHAR)) t
    6 where rownum >= 0);
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes | Cost |
    | 0 | SELECT STATEMENT | | 25 | 2775 | 43 |
    | 1 | TABLE ACCESS BY INDEX ROWID | PROTEIN_DIM | 2 | 168 | 3 |
    | 2 | NESTED LOOPS | | 25 | 2775 | 43 |
    | 3 | VIEW | VW_NSO_1 | 10 | 270 | 11 |
    | 4 | SORT UNIQUE | | 10 | | |
    | 5 | COUNT | | | | |
    | 6 | FILTER | | | | |
    PLAN_TABLE_OUTPUT
    | 7 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | | | |
    | 8 | INDEX RANGE SCAN | IDX_PROTEIN_ACCNUM | 2 | | 2 |
    Note: cpu costing is off, 'PLAN_TABLE' is old version
    16 rows selected.
    I don't understand why performance improved so dramatically, but I'm happy that it did.
    Thanks a ton!
    John

  • SQL Performance causing full table scan

    I have this SQL:
    SELECT DISTINCT
    UPPER (RTRIM (LTRIM (SS.PRESCDEAID))) PRESCRIBER,
    UPPER (RTRIM (LTRIM (SS.NPIPRESCR))) NPI_NUMBER
    FROM
    PBM_SXC_STAGING SS,
    PBM_PHYSICIANS P
    WHERE
    P.PHYSICIAN_ID = SS.PRESCDEAID
    AND P.NPI_NUMBER <> SS.NPIPRESCR
    AND SS.NPIPRESCR <> SS.PRESCDEAID
    Uses this plan:
    SELECT STATEMENT ALL_ROWSCost: 13,843 Bytes: 3,636,232 Cardinality: 106,948                
         4 SORT UNIQUE Cost: 13,843 Bytes: 3,636,232 Cardinality: 106,948           
              3 HASH JOIN Cost: 12,866 Bytes: 3,636,232 Cardinality: 106,948      
                   1 TABLE ACCESS FULL TABLE PBM.PBM_PHYSICIANS Cost: 4,156 Bytes: 17,639,063 Cardinality: 1,356,851
                   2 INDEX FAST FULL SCAN INDEX PBM.SXCSTG_IDX1 Cost: 3,859 Bytes: 43,302,882 Cardinality: 2,062,042
    If I comment out "AND P.NPI_NUMBER <> SS.NPIPRESCR" I get this plan that uses the PK index (PBM.PBM_PHYSICIAN_PK) that is on P.PHYSICIAN_ID. I do have an index on P.NPI_NUMBER
    SELECT STATEMENT ALL_ROWSCost: 27,230 Bytes: 64,514,496 Cardinality: 2,016,078                
         4 SORT UNIQUE Cost: 27,230 Bytes: 64,514,496 Cardinality: 2,016,078           
              3 HASH JOIN Cost: 9,617 Bytes: 64,514,496 Cardinality: 2,016,078      
                   1 INDEX FAST FULL SCAN INDEX (UNIQUE) PBM.PBM_PHYSICIAN_PK Cost: 1,035 Bytes: 14,925,361 Cardinality: 1,356,851
                   2 INDEX FAST FULL SCAN INDEX PBM.SXCSTG_IDX1 Cost: 3,859 Bytes: 43,302,882 Cardinality: 2,062,042

    Sorry for the delay, I was out of the office.
    PLAN_TABLE_OUTPUT
    SQL_ID  4j270u8fbhwpu, child number 0
    SELECT /*+ gather_plan_statistics */          DISTINCT          upper
    (rtrim (ltrim (ss.prescdeaid))) prescriber         ,upper (rtrim (ltrim
    (ss.npiprescr))) npi_number FROM pbm_sxc_staging ss     ,pbm_physicians
    p WHERE p.physician_id = ss.prescdeaid   AND p.npi_number !=
    ss.npiprescr   AND ss.npiprescr != ss.prescdeaid
    Plan hash value: 2275909877
    PLAN_TABLE_OUTPUT
    | Id  | Operation              | Name           | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
    |   1 |  HASH UNIQUE           |                |      1 |    125K|     68 |00:00:01.54 |   24466 |  14552 |  1001K|  1001K| 1296K (0)|
    |*  2 |   HASH JOIN            |                |      1 |    125K|   6941 |00:00:01.14 |   24466 |  14552 |    47M|  6159K|   68M (0)|
    |   3 |    TABLE ACCESS FULL   | PBM_PHYSICIANS |      1 |   1341K|   1341K|00:00:00.01 |   14556 |  14552 |       |       |          |
    |*  4 |    INDEX FAST FULL SCAN| SXCSTG_IDX1    |      1 |   1872K|   1887K|00:00:00.01 |    9910 |   0 |          |       |          |
    Predicate Information (identified by operation id):
    PLAN_TABLE_OUTPUT
       2 - access("P"."PHYSICIAN_ID"="SS"."PRESCDEAID")
           filter("P"."NPI_NUMBER"<>"SS"."NPIPRESCR")
       4 - filter("SS"."NPIPRESCR"<>"SS"."PRESCDEAID")Edited by: Chris on Jul 12, 2011 8:19 AM

  • Select statement in a function does Full Table Scan

    All,
    I have been coding a stored procedure that writes 38K rows in less than a minute. If I add another column which requires call to a package and 4 functions within that package, it runs for about 4 hours. I have confirmed that due to problems in one of the functions, the code does full table scans. The package and all of its functions were written by other contractors who have been long gone.
    Please note that case_number_in (VARCHAR2) and effective_date_in (DATE) are parameters sent to the problem function and I have verified through TOAD’s debugger that their values are correct.
    Table named ps2_benefit_register has over 40 million rows but case_number is an index for that table.
    Table named ps1_case_fs has more than 20 million rows but also uses case_number as an index.
    Select #1 – causes a full table scan; it runs and writes 38K rows in a couple of hours.
    {case}
    SELECT max(a2.application_date)
    INTO l_app_date
    FROM dwfssd.ps2_benefit_register a1, dwfssd.ps2_case_fs a2
    WHERE a2.case_number = case_number_in and
    a1.case_number = a2.case_number and
    a2.application_date <= effective_date_in and
    a1.DOCUMENT_TYPE = 'F';
    {case}
    Select #2 – runs; hard-coding the values makes the code write the same 38K rows in a few minutes.
    {case}
    SELECT max(a2.application_date)
    INTO l_app_date
    FROM dwfssd.ps2_benefit_register a1, dwfssd.ps2_case_fs a2
    WHERE a2.case_number = 'A006438' and
    a1.case_number = a2.case_number and
    a2.application_date <= '01-Apr-2009' and
    a1.DOCUMENT_TYPE = 'F';
    {case}
    Why does using the values in the passed parameters in the first select statement cause a full table scan?
    Thank you for your help,
    Seyed
    Edited by: user11117178 on Jul 30, 2009 6:22 AM
    Edited by: user11117178 on Jul 30, 2009 6:23 AM
    Edited by: user11117178 on Jul 30, 2009 6:24 AM

    Hello Dan,
    Thank you for your input. The function is not deterministic, therefore I am providing you with the explain plan. By version number, if you are referring to the database version, we are running 10g.
    PLAN_TABLE_OUTPUT
    Plan hash value: 2132048964
    | Id  | Operation                     | Name                    | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | SELECT STATEMENT              |                         |   324K|    33M|  3138   (5)| 00:00:38 |       |       |
    |*  1 |  HASH JOIN                    |                         |   324K|    33M|  3138   (5)| 00:00:38 |       |       |
    |   2 |   BITMAP CONVERSION TO ROWIDS |                         |     3 |     9 |     1   (0)| 00:00:01 |       |       |
    |*  3 |    BITMAP INDEX FAST FULL SCAN| IDX_PS2_ACTION_TYPES    |       |       |            |          |       |       |
    |   4 |   PARTITION RANGE ITERATOR    |                         |   866K|    87M|  3121   (4)| 00:00:38 |   154 |   158 |
    |   5 |    TABLE ACCESS FULL          | PS2_FS_TRANSACTION_FACT |   866K|    87M|  3121   (4)| 00:00:38 |   154 |   158 |
    Predicate Information (identified by operation id):
       1 - access("AL1"."ACTION_TYPE_ID"="AL2"."ACTION_TYPE_ID")
       3 - filter("AL2"."ACTION_TYPE"='1' OR "AL2"."ACTION_TYPE"='2' OR "AL2"."ACTION_TYPE"='S')
    Thank you very much,
    Seyed

  • Full Table Scans on Auto Ship Confirm Report (WSHRDASC) Causing Performance Issues

    Severe performance issue with Auto Ship Confirm report WSHRDASC.
    From the statspack reports, a single SQL statement is currently consuming approximately 50% of all physical disk I/O.
    Two full table scans are coming from this problem query:
    Rows Row Source Operation
    SORT ORDER BY
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    TABLE ACCESS FULL WSH_NEW_DELIVERIES
    TABLE ACCESS BY INDEX ROWID HZ_CUST_ACCOUNTS
    INDEX UNIQUE SCAN HZ_CUST_ACCOUNTS_U1 (object id 81392)
    TABLE ACCESS BY INDEX ROWID HZ_PARTIES
    INDEX UNIQUE SCAN HZ_PARTIES_U1 (object id 172682)
    TABLE ACCESS BY INDEX ROWID WSH_EXCEPTIONS
    INDEX RANGE SCAN WSH_EXCEPTIONS_N9 (object id 133587)
    TABLE ACCESS BY INDEX ROWID WSH_DELIVERY_LEGS
    INDEX RANGE SCAN WSH_DELIVERY_LEGS_N1 (object id 46224)
    TABLE ACCESS BY INDEX ROWID WSH_DOCUMENT_INSTANCES
    INDEX RANGE SCAN WSH_DOCUMENT_INSTANCES_N1 (object id 46405)
    TABLE ACCESS FULL WSH_PICKING_BATCHES
    TABLE ACCESS BY INDEX ROWID FND_LOOKUP_VALUES
    INDEX RANGE SCAN FND_LOOKUP_VALUES_U1 (object id 34010)
    Please help. I have applied one patch for this issue (Patch 5531283), but the same issue remains.

    Hi,
    What is your EBS version?
    If note Full Table Scans on Auto Ship Confirm Report (WSHRDASC) Causing Performance Issues [ID 393014.1] doesn't help, then I suggest raising an SR.
    Regards,
    Helios

  • Preventing Discoverer using Full Table Scans with Decode in a View

    Hi Forum,
    Hope you are can help, it involves a performance issues when creating a Report / Query in Discoverer.
    I have a Discoverer report that currently takes less than 5 seconds to run. After I add a condition to bring back Batch Status = ‘Posted’, we cancelled the query after it had been running for 20 minutes, as this is way too long. If I remove the condition, the query time goes back to less than 5 seconds. Changing the condition to Batch Status = ‘Unposted’ returns the query in seconds.
    I’ve been doing some digging and have found the database view that is linked to the Journal Batches folder in Discoverer. See at end of post.
    I think the problem is with the column using DECODE. When querying the column in TOAD, the value of ‘P’ is returned, but in Discoverer the condition is done on the value ‘Posted’. I'm not too sure how DECODE works, but I think this could be causing some sort of issue with full table scans.
    Any idea how we can get around this?
    SELECT
    JOURNAL_BATCH1.JE_BATCH_ID,
    JOURNAL_BATCH1.NAME,
    JOURNAL_BATCH1.SET_OF_BOOKS_ID,
    GL_SET_OF_BOOKS.NAME,
    DECODE( JOURNAL_BATCH1.STATUS,
    '+', 'Unable to validate or create CTA',
    '+*', 'Was unable to validate or create CTA',
    '-','Invalid or inactive rounding differences account in journal entry',
    '-*', 'Modified invalid or inactive rounding differences account in journal entry',
    '<', 'Showing sequence assignment failure',
    '<*', 'Was showing sequence assignment failure',
    '>', 'Showing cutoff rule violation',
    '>*', 'Was showing cutoff rule violation',
    'A', 'Journal batch failed funds reservation',
    'A*', 'Journal batch previously failed funds reservation',
    'AU', 'Showing batch with unopened period',
    'B', 'Showing batch control total violation',
    'B*', 'Was showing batch control total violation',
    'BF', 'Showing batch with frozen or inactive budget',
    'BU', 'Showing batch with unopened budget year',
    'C', 'Showing unopened reporting period',
    'C*', 'Was showing unopened reporting period',
    'D', 'Selected for posting to an unopened period',
    'D*', 'Was selected for posting to an unopened period',
    'E', 'Showing no journal entries for this batch',
    'E*', 'Was showing no journal entries for this batch',
    'EU', 'Showing batch with unopened encumbrance year',
    'F', 'Showing unopened reporting encumbrance year',
    'F*', 'Was showing unopened reporting encumbrance year',
    'G', 'Showing journal entry with invalid or inactive suspense account',
    'G*', 'Was showing journal entry with invalid or inactive suspense account',
    'H', 'Showing encumbrance journal entry with invalid or inactive reserve account',
    'H*', 'Was showing encumbrance journal entry with invalid or inactive reserve account',
    'I', 'In the process of being posted',
    'J', 'Showing journal control total violation',
    'J*', 'Was showing journal control total violation',
    'K', 'Showing unbalanced intercompany journal entry',
    'K*', 'Was showing unbalanced intercompany journal entry',
    'L', 'Showing unbalanced journal entry by account category',
    'L*', 'Was showing unbalanced journal entry by account category',
    'M', 'Showing multiple problems preventing posting of batch',
    'M*', 'Was showing multiple problems preventing posting of batch',
    'N', 'Journal produced error during intercompany balance processing',
    'N*', 'Journal produced error during intercompany balance processing',
    'O', 'Unable to convert amounts into reporting currency',
    'O*', 'Was unable to convert amounts into reporting currency',
    'P', 'Posted',
    'Q', 'Showing untaxed journal entry',
    'Q*', 'Was showing untaxed journal entry',
    'R', 'Showing unbalanced encumbrance entry without reserve account',
    'R*', 'Was showing unbalanced encumbrance entry without reserve account',
    'S', 'Already selected for posting',
    'T', 'Showing invalid period and conversion information for this batch',
    'T*', 'Was showing invalid period and conversion information for this batch',
    'U', 'Unposted',
    'V', 'Journal batch is unapproved',
    'V*', 'Journal batch was unapproved',
    'W', 'Showing an encumbrance journal entry with no encumbrance type',
    'W*', 'Was showing an encumbrance journal entry with no encumbrance type',
    'X', 'Showing an unbalanced journal entry but suspense not allowed',
    'X*', 'Was showing an unbalanced journal entry but suspense not allowed',
    'Z', 'Showing invalid journal entry lines or no journal entry lines',
    'Z*', 'Was showing invalid journal entry lines or no journal entry lines', NULL ),
    DECODE( JOURNAL_BATCH1.ACTUAL_FLAG, 'A', 'Actual', 'B', 'Budget', 'E', 'Encumbrance', NULL ),
    JOURNAL_BATCH1.DEFAULT_PERIOD_NAME,
    JOURNAL_BATCH1.POSTED_DATE,
    JOURNAL_BATCH1.DATE_CREATED,
    JOURNAL_BATCH1.DESCRIPTION,
    DECODE( JOURNAL_BATCH1.AVERAGE_JOURNAL_FLAG, 'N', 'Standard', 'Y', 'Average', NULL ),
    DECODE( JOURNAL_BATCH1.BUDGETARY_CONTROL_STATUS, 'F', 'Failed', 'I', 'In Process', 'N', 'N/A', 'P', 'Passed', 'R', 'Required', NULL ),
    DECODE( JOURNAL_BATCH1.APPROVAL_STATUS_CODE, 'A', 'Approved', 'I', 'In Process', 'J', 'Rejected', 'R', 'Required', 'V','Validation Failed','Z', 'N/A',NULL ),
    JOURNAL_BATCH1.CONTROL_TOTAL,
    JOURNAL_BATCH1.RUNNING_TOTAL_DR,
    JOURNAL_BATCH1.RUNNING_TOTAL_CR,
    JOURNAL_BATCH1.RUNNING_TOTAL_ACCOUNTED_DR,
    JOURNAL_BATCH1.RUNNING_TOTAL_ACCOUNTED_CR,
    JOURNAL_BATCH1.PARENT_JE_BATCH_ID,
    JOURNAL_BATCH2.NAME
    FROM
    GL_JE_BATCHES JOURNAL_BATCH1,
    GL_JE_BATCHES JOURNAL_BATCH2,
    GL_SETS_OF_BOOKS
    GL_SET_OF_BOOKS
    WHERE
    JOURNAL_BATCH1.PARENT_JE_BATCH_ID = JOURNAL_BATCH2.JE_BATCH_ID (+) AND
    JOURNAL_BATCH1.SET_OF_BOOKS_ID = GL_SET_OF_BOOKS.SET_OF_BOOKS_ID AND
    GL_SECURITY_PKG.VALIDATE_ACCESS( JOURNAL_BATCH1.SET_OF_BOOKS_ID ) = 'TRUE' WITH READ ONLY
    Thanks,
    Lance

    Discoverer created its own SQL.
    Please see the SQL Inspector plan below:
    Before Condition
    SELECT STATEMENT
    SORT GROUP BY
    VIEW SYS
    SORT GROUP BY
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
    AND-EQUAL
    INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N2
    INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
    INDEX RANGE SCAN GL.GL_JE_LINES_N1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
    INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
    INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_BATCHES
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
    INDEX RANGE SCAN GL.GL_PERIODS_U1
    After Condition
    SELECT STATEMENT
    SORT GROUP BY
    VIEW SYS
    SORT GROUP BY
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    TABLE ACCESS FULL GL.GL_JE_BATCHES
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
    INDEX RANGE SCAN GL.GL_JE_HEADERS_N1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
    INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
    INDEX RANGE SCAN GL.GL_JE_LINES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
    INDEX UNIQUE SCAN GL.GL_CODE_COMBINATIONS_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
    INDEX RANGE SCAN GL.GL_PERIODS_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    _________________________________

  • Tables in subquery resulting in full table scans

    Hi,
    This is related to a p1 bug 13009447. Customer recently upgraded to 10G. Customer reported this type of problem for the second time.
    Problem Description:
    All the tables in the sub-query are resulting in full table scans, and hence the query executes for hours.
    Here is the query
    SELECT /*+ PARALLEL*/
    act.assignment_action_id
    , act.assignment_id
    , act.tax_unit_id
    , as1.person_id
    , as1.effective_start_date
    , as1.primary_flag
    FROM pay_payroll_actions pa1
    , pay_population_ranges pop
    , per_periods_of_service pos
    , per_all_assignments_f as1
    , pay_assignment_actions act
    , pay_payroll_actions pa2
    , pay_action_classifications pcl
    , per_all_assignments_f as2
    WHERE pa1.payroll_action_id = :b2
    AND pa2.payroll_id = pa1.payroll_id
    AND pa2.effective_date
    BETWEEN pa1.start_date
    AND pa1.effective_date
    AND act.payroll_action_id = pa2.payroll_action_id
    AND act.action_status IN ('C', 'S')
    AND pcl.classification_name = :b3
    AND pa2.consolidation_set_id = pa1.consolidation_set_id
    AND pa2.action_type = pcl.action_type
    AND nvl (pa2.future_process_mode, 'Y') = 'Y'
    AND as1.assignment_id = act.assignment_id
    AND pa1.effective_date
    BETWEEN as1.effective_start_date
    AND as1.effective_end_date
    AND as2.assignment_id = act.assignment_id
    AND pa2.effective_date
    BETWEEN as2.effective_start_date
    AND as2.effective_end_date
    AND as2.payroll_id = as1.payroll_id
    AND pos.period_of_service_id = as1.period_of_service_id
    AND pop.payroll_action_id = :b2
    AND pop.chunk_number = :b1
    AND pos.person_id = pop.person_id
    AND (
    as1.payroll_id = pa1.payroll_id
    OR pa1.payroll_id IS NULL
    AND NOT EXISTS
    SELECT /*+ PARALLEL*/ NULL
    FROM pay_assignment_actions ac2
    , pay_payroll_actions pa3
    , pay_action_interlocks int
    WHERE int.locked_action_id = act.assignment_action_id
    AND ac2.assignment_action_id = int.locking_action_id
    AND pa3.payroll_action_id = ac2.payroll_action_id
    AND pa3.action_type IN ('P', 'U')
    AND NOT EXISTS
    SELECT /*+ PARALLEL*/
    NULL
    FROM per_all_assignments_f as3
    , pay_assignment_actions ac3
    WHERE :b4 = 'N'
    AND ac3.payroll_action_id = pa2.payroll_action_id
    AND ac3.action_status NOT IN ('C', 'S')
    AND as3.assignment_id = ac3.assignment_id
    AND pa2.effective_date
    BETWEEN as3.effective_start_date
    AND as3.effective_end_date
    AND as3.person_id = as2.person_id
    ORDER BY as1.person_id
    , as1.primary_flag DESC
    , as1.effective_start_date
    , act.assignment_id
    FOR UPDATE OF as1.assignment_id
    , pos.period_of_service_id
    Here is the execution plan for this query. We tried adding hints in the sub-queries to force the indexes to be picked up, but it is still doing full table scans.
    We suspect some DB parameter is causing this issue.
    In the plan:
    - Full table scans on tables in the first sub-query: PAY_PAYROLL_ACTIONS, PAY_ASSIGNMENT_ACTIONS, PAY_ACTION_INTERLOCKS
    - Full table scans on tables in the second sub-query: PER_ALL_ASSIGNMENTS_F, PAY_ASSIGNMENT_ACTIONS
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 29 398.80 2192.99 238706 4991924 2383 0
    Fetch 1136 378.38 1921.39 0 4820511 0 1108
    total 1166 777.19 4114.38 238706 9812435 2383 1108
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 41 (APPS) (recursive depth: 1)
    Rows Execution Plan
    0 SELECT STATEMENT MODE: ALL_ROWS
    0 FOR UPDATE
    0 PX COORDINATOR
    0 PX SEND (QC (ORDER)) OF ':TQ10009' [:Q1009]
    0 SORT (ORDER BY) [:Q1009]
    0 PX RECEIVE [:Q1009]
    0 PX SEND (RANGE) OF ':TQ10008' [:Q1008]
    0 HASH JOIN (ANTI BUFFERED) [:Q1008]
    0 PX RECEIVE [:Q1008]
    0 PX SEND (HASH) OF ':TQ10006' [:Q1006]
    0 BUFFER (SORT) [:Q1006]
    0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PER_ALL_ASSIGNMENTS_F' (TABLE) [:Q1006]
    0 NESTED LOOPS [:Q1006]
    0 NESTED LOOPS [:Q1006]
    0 NESTED LOOPS [:Q1006]
    0 HASH JOIN (ANTI) [:Q1006]
    0 BUFFER (SORT) [:Q1006]
    0 PX RECEIVE [:Q1006]
    0 PX SEND (HASH) OF ':TQ10002'
    0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PAY_ASSIGNMENT_ACTIONS' (TABLE)
    0 NESTED LOOPS
    0 NESTED LOOPS
    0 NESTED LOOPS
    0 NESTED LOOPS
    0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PAY_PAYROLL_ACTIONS' (TABLE)
    0 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'PAY_PAYROLL_ACTIONS_PK' (INDEX (UNIQUE)
    0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PAY_POPULATION_RANGES_N4' (INDEX)
    0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PER_PERIODS_OF_SERVICE' (TABLE)
    0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PER_PERIODS_OF_SERVICE_N3' (INDEX)
    0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PER_ALL_ASSIGNMENTS_F' (TABLE)
    0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PER_ASSIGNMENTS_N4' (INDEX)
    0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PAY_ASSIGNMENT_ACTIONS_N51' (INDEX)
    0 PX RECEIVE [:Q1006]
    0 PX SEND (HASH) OF ':TQ10005' [:Q1005]
    0 VIEW OF 'VW_SQ_1' (VIEW) [:Q1005]
    0 HASH JOIN [:Q1005]
    0 BUFFER (SORT) [:Q1005]
    0 PX RECEIVE [:Q1005]
    0 PX SEND (BROADCAST) OF ':TQ10000'
    0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PAY_PAYROLL_ACTIONS' (TABLE)
    0 HASH JOIN [:Q1005]
    0 PX RECEIVE [:Q1005]
    0 PX SEND (HASH) OF ':TQ10004' [:Q1004]
    0 PX BLOCK (ITERATOR) [:Q1004]
    0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PAY_ASSIGNMENT_ACTIONS' (TABLE) [:Q1004]
    0 BUFFER (SORT) [:Q1005]
    0 PX RECEIVE [:Q1005]
    0 PX SEND (HASH) OF ':TQ10001'
    0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PAY_ACTION_INTERLOCKS' (TABLE)
    0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PAY_PAYROLL_ACTIONS' (TABLE) [:Q1006]
    0 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'PAY_PAYROLL_ACTIONS_PK' (INDEX (UNIQUE)) [:Q1006]
    0 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'PAY_ACTION_CLASSIFICATIONS_PK' (INDEX (UNIQUE))[:Q1006]
    0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PER_ASSIGNMENTS_F_PK' (INDEX (UNIQUE)) [:Q1006]
    0 PX RECEIVE [:Q1008]
    0 PX SEND (HASH) OF ':TQ10007' [:Q1007]
    0 VIEW OF 'VW_SQ_2' (VIEW) [:Q1007]
    0 FILTER [:Q1007]
    0 HASH JOIN [:Q1007]
    0 BUFFER (SORT) [:Q1007]
    0 PX RECEIVE [:Q1007]
    0 PX SEND (BROADCAST) OF ':TQ10003'
    0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PER_ALL_ASSIGNMENTS_F' (TABLE)
    0 PX BLOCK (ITERATOR) [:Q1007]
    0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PAY_ASSIGNMENT_ACTIONS' (TABLE) [:Q1007]
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    enq: KO - fast object checkpoint 32 0.02 0.12
    os thread startup 8 0.02 0.19
    PX Deq: Join ACK 198 0.00 0.04
    PX Deq Credit: send blkd 167116 1.95 1103.72
    PX Deq Credit: need buffer 327389 1.95 266.30
    PX Deq: Parse Reply 148 0.01 0.03
    PX Deq: Execute Reply 11531 1.95 1901.50
    PX qref latch 23060 0.00 0.60
    db file sequential read 108199 0.17 22.11
    db file scattered read 9272 0.19 51.74
    PX Deq: Table Q qref 78 0.00 0.03
    PX Deq: Signal ACK 1165 0.10 10.84
    enq: PS - contention 73 0.00 0.00
    reliable message 27 0.00 0.00
    latch free 218 0.00 0.01
    latch: session allocation 11 0.00 0.00
    Thanks in advance
    Suresh PV

    Hi,
    welcome,
    how does the query perform if you delete all the PARALLEL hints? Most of the waits are related to parallel execution.
    Herald ten Dam
    http://htendam.wordpress.com
    PS. Use "{code}" for showing your code and explain plans, it looks nicer

  • Different Cost values for full table scans

    I have a very simple query that I run in two environments (Prod (20 CPU) and Dev (12 CPU)). Both environments are HP-UX, Oracle 9i.
    The query looks like this:
    SELECT prd70.jde_item_n
    FROM gdw.vjda_gdwprd68_bom_cmpnt prd68
    ,gdw.vjda_gdwprd70_gallo_item prd70
    WHERE prd70.jde_item_n = prd68.parnt_jde_item_n
    AND prd68.last_eff_t+nvl(to_number(prd70.auto_hld_dy_n),0)>= trunc(sysdate)
    GROUP BY prd70.jde_item_n
    When I look at the explain plans, there is a significant difference in cost and I can't figure out why they would be different. Both queries do full table scans, both instances have about the same number of rows, statistics on both are fresh.
    Production Plan:
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=18398 Card=14657 Bytes=249169)
    1 0 SORT (GROUP BY) (Cost=18398 Card=14657 Bytes=249169)
    2 1 HASH JOIN (Cost=18304 Card=14657 Bytes=249169)
    3 2 TABLE ACCESS (FULL) OF 'GDWPRD70_GALLO_ITEM' (Cost=949
    4 Card=194733 Bytes=1168398)
    4 2 TABLE ACCESS (FULL) OF 'GDWPRD68_BOM_CMPNT' (Cost=5887 Card=293149 Bytes=3224639)
    Development plan:
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3566 Card=14754 Bytes=259214)
    1 0 HASH GROUP BY (GROUP BY) (Cost=3566 Card=14754 Bytes=259214)
    2 1 HASH JOIN (Cost=3558 Card=14754 Bytes=259214)
    3 2 TABLE ACCESS (FULL) OF 'GDWPRD70_GALLO_ITEM' (Cost=1914
    4 Card=193655 Bytes=1323598)
    4 2 TABLE ACCESS (FULL) OF 'GDWPRD68_BOM_CMPNT' (Cost=1076 Card=295075 Bytes=3169542)
    There seems to be no reason for the costs to be so different, but I'm hoping that someone will be able to lead me in the right direction.
    Thanks,
    Jdelao

    This link may help:
    http://jaffardba.blogspot.com/2007/07/change-behavior-of-group-by-clause-in.html
    But looking at the explain plans one of them uses a SORT (GROUP BY) (higher cost query) and the other uses a HASH GROUP BY (GROUP BY) (lower cost query). From my searches on the `Net the HASH GROUP BY is a more efficient algorithm than the SORT (GROUP BY) which would lead me to believe that this is one of the reasons why the cost values are so different. I can't find which version HASH GROUP BY came in but quick searches indicate 10g for some reason.
    Are your optimizer-related parameters set to the same values in both environments? In general you could compare the relevant parameters to see if there is a difference.
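    If it helps, the optimizer-related settings can be dumped from each database and compared; a quick sketch (the parameter list is only a starting point):
    select name, value
    from   v$parameter
    where  name like 'optimizer%'
       or  name in ('db_file_multiblock_read_count', 'hash_area_size', 'sort_area_size')
    order  by name;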
    Hope this helps!

  • Query is doing full table scan

    Hi All,
    The query below is doing a full table scan. Many threads from the application trigger this query, each doing a full table scan. Can you please tell me how to improve the performance of this query?
    Env is 11.2.0.3 RAC (4 node). Unique index on VZ_ID, LOGGED_IN. The table row count is 2,501,103.
    Query is :-
    select ccagentsta0_.LOGGED_IN as LOGGED1_404_, ccagentsta0_.VZ_ID as VZ2_404_, ccagentsta0_.ACTIVE as ACTIVE404_, ccagentsta0_.AGENT_STATE as AGENT4_404_,
    ccagentsta0_.APPLICATION_CODE as APPLICAT5_404_, ccagentsta0_.CREATED_ON as CREATED6_404_, ccagentsta0_.CURRENT_ORDER as CURRENT7_404_,
    ccagentsta0_.CURRENT_TASK as CURRENT8_404_, ccagentsta0_.HELM_ID as HELM9_404_, ccagentsta0_.LAST_UPDATED as LAST10_404_, ccagentsta0_.LOCATION as LOCATION404_,
    ccagentsta0_.LOGGED_OUT as LOGGED12_404_, ccagentsta0_.SUPERVISOR_VZID as SUPERVISOR13_404_, ccagentsta0_.VENDOR_NAME as VENDOR14_404_
    from AGENT_STATE ccagentsta0_ where ccagentsta0_.VZ_ID='v790531'  and ccagentsta0_.ACTIVE='Y';
    Table Scan                                                       AGENT_STATE                                                2.366666667
    Table Scan                                                       AGENT_STATE                                                0.3666666667
    Table Scan                                                       AGENT_STATE                                                1.633333333
    Table Scan                                                       AGENT_STATE                                                       0.75
    Table Scan                                                       AGENT_STATE                                                1.866666667
    Table Scan                                                       AGENT_STATE                                                2.533333333
    Table Scan                                                       AGENT_STATE                                                0.5333333333
    Table Scan                                                       AGENT_STATE                                                       1.95
    Table Scan                                                       AGENT_STATE                                                        0.8
    Table Scan                                                       AGENT_STATE                                                0.2833333333
    Table Scan                                                       AGENT_STATE                                                1.983333333
    Table Scan                                                       AGENT_STATE                                                        2.5
    Table Scan                                                       AGENT_STATE                                                1.866666667
    Table Scan                                                       AGENT_STATE                                                1.883333333
    Table Scan                                                       AGENT_STATE                                                        0.9
    Table Scan                                                       AGENT_STATE                                                2.366666667
    But the explain plan shows the query is taking the index
    Explain plan output:-
    PLAN_TABLE_OUTPUT
    Plan hash value: 1946142815
    | Id  | Operation                   | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |                 |     1 |   106 |   244   (0)| 00:00:03 |
    |*  1 |  TABLE ACCESS BY INDEX ROWID| AGENT_STATE     |     1 |   106 |   244   (0)| 00:00:03 |
    |*  2 |   INDEX RANGE SCAN          | AGENT_STATE_IDX |   229 |       |     4   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("CCAGENTSTA0_"."ACTIVE"='Y')
       2 - access("CCAGENTSTA0_"."VZ_ID"='v790531')
    The values (VZ_ID) I have given are dummy values picked from the table. I don't get the actual values since the query comes in with bind variables. Please let me know your suggestions on this.
    Thanks,
    Mani

    Hi,
    But I am not getting what the issue is. It's a simple select query and there is an index on the leading column (VZ_ID --- PK). The explain plan says it's using the index and it only selects a fraction of the rows from the table. Then why is it doing an FTS? Why does the optimizer treat it like a query that needs an FTS?
    The rule-based optimizer would have  picked the plan with the index. The cost-based optimizer, however, is picking the plan with the lowest cost. Apparently, the lowest cost plan is the one with the full table scan. And the optimizer isn't necessarily wrong about this.
    Reading data from a table via index probes is only efficient when selecting a relatively small percentage of rows. For larger percentages, a full table scan is generally better.
    Consider a simple example: a query that selects from a table with biographies for all people on the planet. Suppose you are interested in all people from a certain country.
    select * from all_people where country='Vatican'
    would only return only 800 rows (as Vatican is an extremely small country with population of just 800 people). For this case, obviously, using an index would be very efficient.
    Now if we run this query:
    select * from all_people where country = 'India',
    we'd be getting over a billion rows. For this case, a full table scan would be several thousand times faster.
    Now consider the third case:
    select * from all_people where country = :b1
    What plan should the optimizer choose? The value of :b1 bind variable is generally not known during the parse time, it will be passed by the user when the query is already parsed, during run-time.
    In this case, one of two scenarios takes place: either the optimizer relies on some built-in default selectivities (basically, it takes a wild guess), or the optimizer postpones taking the final decision until the
    first time the query is run, 'peeks' the value of the bind, and optimizes the query for this case.
    It means that if, the first time the query is parsed, it was called with :b1 = 'India', a plan with a full table scan will be generated and cached for subsequent usage. And until the cursor is aged out of the library cache
    or invalidated for some reason, this will be the plan for this query.
    If the first time it was called with :b1='Vatican', then an index-based plan will be picked.
    Either way, bind peeking only gives good results if the subsequent usage of the query is of the same kind as the first usage. I.e. the first case will be efficient if the query is always run for countries with big populations.
    And the second case, if it's always run for countries with small populations.
    This mechanism is called 'bind peeking' and it's one of the most common causes of performance problems. In 11g, there are more sophisticated mechanisms, such as cardinality feedback, but they don't always work as expected.
    This mechanism is the most likely explanation for your issue. However, without proper diagnostic information we cannot be 100% sure.
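    If you want to confirm this, one option is to pull the cached cursor and the peeked bind values straight from the library cache; a rough sketch (the LIKE filter is only an illustrative way of locating the statement):
    select sql_id, child_number, plan_hash_value, executions
    from   v$sql
    where  sql_text like 'select ccagentsta0_.LOGGED_IN%';
    select * from table(dbms_xplan.display_cursor('&sql_id', null, 'TYPICAL +PEEKED_BINDS'));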
    Best regards,
      Nikolay
