Explain plan cost

Hi,
I have run an explain plan for a SQL statement using Enterprise Manager. What does the cost indicate? If a query has a cost of 119 and after tuning its cost is 16, what does that indicate? What is the relationship between the time taken by a query to run and its cost?
Regards
Raj

Hi, Raj.
> What does the cost indicate?
In theory, the cost indicates how long a query should take if actually run. A small number means fast, a big number means slow. It's just an arbitrary number computed to indicate how long a query should take to run.
In practice, the cost has little meaning, especially in 10g. It's becoming common enough for a high-cost query to actually run faster than a low-cost one that our DBA group pretty much ignores the cost.
Remember that the cost is computed before the query is actually run, against computed statistics and available system resources. The cost is a guess, and like all guesses is subject to reality checks.
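If you want to see the number for yourself, a minimal sketch (the table and predicate here are placeholders, not from your query) is to generate a plan and read the COST column of the DBMS_XPLAN output:

EXPLAIN PLAN FOR
  SELECT *
    FROM employees e            -- placeholder table
   WHERE e.department_id = 10;  -- placeholder predicate

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);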

Similar Messages

  • 'Identical' schemas on different servers - different explain plan costs

    Hello,
    I have two servers, 1 development and 1 production. I have a query which produces wildly different explain plan costs on the two servers:
    The development server provides a cost of just over 800 and the production server's is over 100000. I have 2-3 different versions of the schema (these are data warehouse schemas) on both servers and the cost numbers are similar regardless of the version used. Whenever I run the query on development, it's around 800. On production the same query is over 100000.
    The data on both servers is (should be) identical - I used impdp and expdp to transfer the data between the servers. I have run:
    DBMS_STATS.GATHER_SCHEMA_STATS ('SCHEMAV26', cascade=>TRUE);
    on the production server after importing the data. As far as I can see, the indices are identical on both servers. The difference in the execution plan is one additional line:
    Filter Predicates CE.ID < 5
    Can anyone help me figure out why the explain plans are different? The servers have similar hardware specs, and are running the same version of Oracle (11.2.0.2.0)
    Thanks,
    Dan Scott
    http://danieljamesscott.org

    Thanks for all the help/suggestions - as you've probably guessed, I'm a little new to all this.
    A little background first:
    We have an items table and events table. itemid is the primary key for the items table and a foreign key in the events table. The events table contains itemids, timestamps and data values (along with a few other IDs). The query I'm running is used to create a materialized view which provides statistics for each itemid to assist users in finding a particular itemid containing the data they're interested in. Generally, we create the view on the full list of itemids (and so the indices are not used, as expected). However, we occasionally run the query for a small number of itemids, and the index on events.itemid is used on one server, but not on the other.
    Here's the SQL (Apologies for the length).
    WITH ChartItems as (
      select distinct ci.itemid, ci.label, ci.category, ci.description,
             case
                when
                   (count(distinct ce.value1) over (partition by ci.itemid) > 0 OR
                    count(distinct ce.value2) over (partition by ci.itemid) > 0)
                   AND
                   (count(distinct ce.value1num) over (partition by ci.itemid) > 0 OR
                    count(distinct ce.value2num) over (partition by ci.itemid) > 0)
                then 'H'
                when
                   count(distinct ce.value1num) over (partition by ci.itemid) > 0 OR
                   count(distinct ce.value2num) over (partition by ci.itemid) > 0
                then 'N'
                when
                   count(distinct ce.value1) over (partition by ci.itemid) > 0 OR
                   count(distinct ce.value2) over (partition by ci.itemid) > 0
                then 'S'
                else
                    'X'
             end as value_type,
             -- The value column
             case
                when
                   (count(distinct ce.value1) over (partition by ci.itemid) > 0 OR
                    count(distinct ce.value1num) over (partition by ci.itemid) > 0)
                   and
                   (count(distinct ce.value2) over (partition by ci.itemid) > 0 OR
                    count(distinct ce.value2num) over (partition by ci.itemid) > 0)
                then 'both'
                when
                   count(distinct ce.value1) over (partition by ci.itemid) > 0 OR
                   count(distinct ce.value1num) over (partition by ci.itemid) > 0
                then 'value1'
                when
                   count(distinct ce.value2) over (partition by ci.itemid) > 0 OR
                   count(distinct ce.value2num) over (partition by ci.itemid) > 0
                then 'value2'
                else
                    'none'
             end as value_column
        from items ci,
             events ce
       where ce.itemid = ci.itemid
         and ci.itemid < 5
    )
    , RawData as (
        select distinct ci.itemid, ci.label, ci.category, ci.description,
              ci.value_type, ci.value_column,
              count(*)
                over (partition by ci.itemid) as rows_num,
              count(distinct ce.subject_id)
                over (partition by ci.itemid) as subjects_num,
              avg(abs(cast(ce.realtime as date) - cast(ce.charttime as date)) * 24 * 60)
                over (partition by ci.itemid) as chart_vs_realtime_delay_mean,
              stddev(abs(cast(ce.realtime as date) - cast(ce.charttime as date)) * 24 * 60)
                over (partition by ci.itemid) as chart_vs_realtime_delay_stddev,
              case
                when ci.value_column in ('value1', 'both') then
                    case
                      when (last_value(ce.value1uom)
                                over (partition by ci.itemid
                                      order by ce.value1uom nulls last
                                      ROWS BETWEEN UNBOUNDED PRECEDING AND
                                      UNBOUNDED FOLLOWING)
                           ) is null then
                          count(distinct ce.value1uom)
                            over (partition by ci.itemid) + 1
                      else
                          count(distinct ce.value1uom)
                            over (partition by ci.itemid)
                    end
                else
                    0
              end as value1_uom_num,
              case
                when ci.value_column in ('value1', 'both') then
                    case
                      when (last_value(ce.value1uom)
                                over (partition by ci.itemid
                                      order by ce.value1uom nulls last
                                      ROWS BETWEEN UNBOUNDED PRECEDING AND
                                      UNBOUNDED FOLLOWING)
                           ) is null then
                          'Y'
                      else
                          'N'
                    end
                else
                    null
              end as value1_uom_has_nulls,
              first_value(ce.value1uom) ignore nulls
                 over (partition by ci.itemid
                       order by ce.charttime ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as value1_uom_sample1,
              last_value(ce.value1uom) ignore nulls
                 over (partition by ci.itemid
                       order by ce.charttime ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as value1_uom_sample2,
              case
                when ci.value_column in ('value1', 'both') then
                    case
                      when (last_value(ce.value1)
                                over (partition by ci.itemid
                                      order by ce.value1 nulls last
                                      ROWS BETWEEN UNBOUNDED PRECEDING AND
                                      UNBOUNDED FOLLOWING)
                           ) is null then
                          count(distinct ce.value1)
                            over (partition by ci.itemid) + 1
                      else
                          count(distinct ce.value1)
                            over (partition by ci.itemid)
                    end
                else
                    0
              end as value1_distinct_num,
              case
                when ci.value_column in ('value1', 'both') then
                    case
                      when (last_value(ce.value1)
                                over (partition by ci.itemid
                                      order by ce.value1 nulls last
                                      ROWS BETWEEN UNBOUNDED PRECEDING AND
                                      UNBOUNDED FOLLOWING)
                           ) is null then
                          'Y'
                      else
                          'N'
                    end
                else
                    null
              end as value1_has_nulls,
             first_value(ce.value1) ignore nulls
                 over (partition by ci.itemid
                       order by ce.charttime ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as value1_sample1,
             last_value(ce.value1) ignore nulls
                 over (partition by ci.itemid
                       order by ce.charttime ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as value1_sample2,
             min(length(ce.value1))
                 over (partition by ci.itemid) as value1_length_min,
             max(length(ce.value1))
                 over (partition by ci.itemid) as value1_length_max,
             avg(length(ce.value1))
                 over (partition by ci.itemid) as value1_length_mean,
             min(ce.value1num)
                 over (partition by ci.itemid) as value1num_min,
             max(ce.value1num)
                 over (partition by ci.itemid) as value1num_max,
             avg(ce.value1num)
                 over (partition by ci.itemid) as value1num_mean,
             stddev(ce.value1num)
                 over (partition by ci.itemid) as value1num_stddev,
              case
                when ci.value_column in ('value2', 'both') then
                    case
                      when (last_value(ce.value2uom)
                                over (partition by ci.itemid
                                      order by ce.value2uom nulls last
                                      ROWS BETWEEN UNBOUNDED PRECEDING AND
                                      UNBOUNDED FOLLOWING)
                           ) is null then
                          count(distinct ce.value2uom)
                            over (partition by ci.itemid) + 1
                      else
                          count(distinct ce.value2uom)
                            over (partition by ci.itemid)
                    end
                else
                    0
              end as value2_uom_num,
              case
                when ci.value_column in ('value2', 'both') then
                    case
                      when (last_value(ce.value2uom)
                                over (partition by ci.itemid
                                      order by ce.value2uom nulls last
                                      ROWS BETWEEN UNBOUNDED PRECEDING AND
                                      UNBOUNDED FOLLOWING)
                           ) is null then
                          'Y'
                      else
                          'N'
                    end
                else
                    null
              end as value2_uom_has_nulls,
             first_value(ce.value2uom) ignore nulls
                 over (partition by ci.itemid
                       order by ce.charttime ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as value2_uom_sample1,
             last_value(ce.value2uom) ignore nulls
                 over (partition by ci.itemid
                       order by ce.charttime ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as value2_uom_sample2,
              case
                when ci.value_column in ('value2', 'both') then
                    case
                      when (last_value(ce.value2)
                                over (partition by ci.itemid
                                      order by ce.value2 nulls last
                                      ROWS BETWEEN UNBOUNDED PRECEDING AND
                                      UNBOUNDED FOLLOWING)
                           ) is null then
                          count(distinct ce.value2)
                            over (partition by ci.itemid) + 1
                      else
                          count(distinct ce.value2)
                            over (partition by ci.itemid)
                    end
                else
                    0
              end as value2_distinct_num,
              case
                when ci.value_column in ('value2', 'both') then
                    case
                      when (last_value(ce.value2)
                                over (partition by ci.itemid
                                      order by ce.value2 nulls last
                                      ROWS BETWEEN UNBOUNDED PRECEDING AND
                                      UNBOUNDED FOLLOWING)
                           ) is null then
                          'Y'
                      else
                          'N'
                    end
                else
                    null
              end as value2_has_nulls,
             first_value(ce.value2)
                 over (partition by ci.itemid
                       order by ce.charttime ROWS BETWEEN  UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as value2_sample1,
             last_value(ce.value2)
                 over (partition by ci.itemid
                       order by ce.charttime ROWS BETWEEN  UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as value2_sample2,
             min(length(ce.value2))
                 over (partition by ci.itemid) as value2_length_min,
             max(length(ce.value2))
                 over (partition by ci.itemid) as value2_length_max,
             avg(length(ce.value2))
                 over (partition by ci.itemid) as value2_length_mean,
             min(ce.value2num)
                 over (partition by ci.itemid) as value2num_min,
             max(ce.value2num)
                 over (partition by ci.itemid) as value2num_max,
             avg(ce.value2num)
                 over (partition by ci.itemid) as value2num_mean,
             stddev(ce.value2num)
                 over (partition by ci.itemid) as value2num_stddev
        from ChartItems ci,
             events ce
       where ce.itemid = ci.itemid
    --   order by ci.itemid, ci.label
    )
    select label, trim(lower(label)) label_lower, itemid, category, description,
           value_type, value_column,
           rows_num, subjects_num,
           round(chart_vs_realtime_delay_mean, 2) as chart_vs_realtime_delay_mean,
           round(chart_vs_realtime_delay_stddev, 2) as chart_vs_realtime_delay_stddev,
           value1_uom_num, value1_uom_has_nulls,
           value1_uom_sample1, value1_uom_sample2,
           value1_distinct_num, value1_has_nulls,
           value1_sample1, value1_sample2,
           value1_length_min, value1_length_max,
           round(value1_length_mean, 2) as value1_length_mean,
           round(value1num_min, 2) as value1num_min,
           round(value1num_max, 2) as value1num_max,
           round(value1num_mean, 2) as value1num_mean,
           round(value1num_stddev, 2) as value1num_stddev,
           value2_uom_num, value2_uom_has_nulls,
           value2_uom_sample1, value2_uom_sample2,
           value2_distinct_num, value2_has_nulls,
           value2_sample1, value2_sample2,
           value2_length_min, value2_length_max,
           round(value2_length_mean, 2) as value2_length_mean,
           round(value2num_min, 2) as value2num_min,
           round(value2num_max, 2) as value2num_max,
           round(value2num_mean, 2) as value2num_mean,
           round(value2num_stddev, 2) as value2num_stddev
      from RawData
    order by label, itemid;
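    One way to chase a difference like this down is to compare the statistics the CBO actually sees on each server. A minimal sketch (run it on both servers and diff the output; DBA_TAB_STATISTICS and DBA_TAB_COL_STATISTICS are the standard dictionary views, and the schema name is the one used above):
    SELECT table_name, num_rows, blocks, avg_row_len, last_analyzed
      FROM dba_tab_statistics
     WHERE owner = 'SCHEMAV26'
     ORDER BY table_name;
    SELECT table_name, column_name, num_distinct, num_nulls, histogram
      FROM dba_tab_col_statistics
     WHERE owner = 'SCHEMAV26'
     ORDER BY table_name, column_name;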

  • Explain plan cost it increases after migrating Oracle 9.2.0 to 10.2.0.3

    Hi:
    Recently the migration was done, and we are testing (I'm a developer) the performance of some queries, but the majority are slower. I do not understand the reason, because the new equipment has much more processing capacity.
    For example, a report that executes in 45 seconds in 9i takes 10 minutes in 10g!
    Another example: the following explain plans are for the same query with the same parameters. In Oracle 9i COST=799, in Oracle 10g COST=2337.
    What happens? Any idea?
    In oracle 9i COST=799
    SELECT STATEMENT Hint=HINT: FIRST_ROWS 1 799
    SORT GROUP BY 1 74 799
    VIEW 1 74 786
    SORT GROUP BY 1 100 786
    TABLE ACCESS BY INDEX ROWID SMRPCMT 1 27 1
    NESTED LOOPS 1 100 774
    NESTED LOOPS 3 219 773
    NESTED LOOPS 3 177 772
    NESTED LOOPS 3 171 771
    NESTED LOOPS 3 123 770
    TABLE ACCESS BY INDEX ROWID SFBETRM 7 K 73 K 22
    INDEX RANGE SCAN SFBETRM_KEY_INDEX2 7 K 24
    TABLE ACCESS BY INDEX ROWID SGBSTDN 1 31 1
    INDEX UNIQUE SCAN PK_SGBSTDN 1
    TABLE ACCESS BY INDEX ROWID STVMAJR 1 16 1
    INDEX UNIQUE SCAN PK_STVMAJR 1
    TABLE ACCESS BY INDEX ROWID STVCAMP 1 2 1
    INDEX UNIQUE SCAN PK_STVCAMP 1
    INDEX RANGE SCAN SMRPATR_KEY_INDEX 1 14 1
    INDEX RANGE SCAN SMRPCMT_KEY_INDEX 1 1
    In Oracle 10G COST=2337
    SELECT STATEMENT Hint=FIRST_ROWS 1 2337
    SORT GROUP BY 1 66 2337
    VIEW 1 66 2336
    SORT GROUP BY 1 158 2336
    NESTED LOOPS 1 158 2335
    NESTED LOOPS 1 142 2334
    NESTED LOOPS 1 140 2333
    NESTED LOOPS 1 126 2332
    NESTED LOOPS 3 297 2330
    TABLE ACCESS BY INDEX ROWID SFBETRM 7 K 73 K 67
    INDEX RANGE SCAN SFBETRM_KEY_INDEX2 7 K 6
    TABLE ACCESS BY INDEX ROWID SGBSTDN 1 89 1
    INDEX UNIQUE SCAN PK_SGBSTDN 1 1
    TABLE ACCESS BY INDEX ROWID SMRPCMT 1 27 1
    INDEX RANGE SCAN SMRPCMT_KEY_INDEX 1 1
    INDEX RANGE SCAN SMRPATR_KEY_INDEX 1 14 1
    TABLE ACCESS BY INDEX ROWID STVCAMP 1 2 1
    INDEX UNIQUE SCAN PK_STVCAMP 1 1
    TABLE ACCESS BY INDEX ROWID STVMAJR 1 16 1
    INDEX UNIQUE SCAN PK_STVMAJR 1 1
    Thank you so much in advance.
    (Sorry for my bad English.)

    "Cost", as determined by the CBO, is a relative number, not an absolute number. A "cost" of 799 in 9i is not the same as a "cost" of 799 in 10g.
    The CBO in 10g is substantially different compared to 9i. Performance issues encountered after the upgrade are not uncommon. Please refer to these MOS Docs to help troubleshoot your issue -
    754931.1 - Cost Based Optimizer - Common Misconceptions and Issues - 10g and Above
    466181.1 - 10g Upgrade Companion
    I would also recommend you upgrade to 10.2.0.4, as it is the latest and most stable version of 10g.
    HTH
    Srini
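    One quick diagnostic (a sketch for comparison only, not a long-term fix) is to ask the 10g optimizer to cost the statement with the 9i model and compare the resulting plan:
    ALTER SESSION SET optimizer_features_enable = '9.2.0';
    EXPLAIN PLAN FOR
      SELECT ...;   -- the report query from the post
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);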

  • Cost in explain plan vs elapsed time

    hi gurus,
    I have two tables with identical data volume (same database/schema/tablespace), and the only difference between the two is that one is partitioned on a date field.
    Statistics are up to date.
    The same query is executed against both tables, with no partition pruning involved.
    select ....... from non-partitioned
    execution plan cost=92
    elapsed time=117612
    select ... from partitioned
    execution plan cost=3606
    elapsed time=19559
    Though the plan cost of the query against the non-partitioned table is much lower than for the partitioned one, its elapsed time in v$sqlarea is higher.
    what could be the reason please?
    thanks,
    charles

    user570138 wrote:
    if elapsed time value is very volatile (with the difference in explain plan cost), then how can i compare the performance of the query?
    Note that the same query with the same execution plan and same data can provide different execution times - due to a number of factors. The primary one being that the first time the query is executed, it performs physical I/O and loads the data into the buffer cache - where the same subsequent query finds this data via cheaper and faster logical I/O in the cache.
    in my case i want to compare the performance change before and after table partitioning.
    Then look at the actual utilisation cost of the query and not (variant) elapsed execution time. The most expensive operation on a db platform is typically I/O. The more I/O, the slower the execution time.
    I/O can be decreased in a number of ways. The usual approach in the database environment is to provide better or alternative data structures, in order to create more optimal I/O paths.
    So instead of using a hash table and separate PK b+tree index, using an index organised table. Instead of a single large table with indexes, a partitioned table with local indexes. Instead of joining plain tables, joining tables that have been clustered. Etc.
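    As a concrete illustration of one of those structure changes (a sketch only - the table and column names are made up, not from the poster's schema), a range-partitioned table with a local index looks like:
    CREATE TABLE sales_part (
      sale_id    NUMBER,
      sale_date  DATE,
      amount     NUMBER
    )
    PARTITION BY RANGE (sale_date) (
      PARTITION p2009 VALUES LESS THAN (DATE '2010-01-01'),
      PARTITION p2010 VALUES LESS THAN (DATE '2011-01-01'),
      PARTITION pmax  VALUES LESS THAN (MAXVALUE)
    );
    CREATE INDEX sales_part_id_ix ON sales_part (sale_id) LOCAL;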
    In most cases, when done correctly, these physical data structure changes do not even impact the application layer. The SQL and code in this layer should be blissfully unaware of the physical structure changes done to improve performance.
    Likewise, changes in the logical layer (data model changes) can also improve performance. This of course does impact the application layer - and requires a proper design and implementation in the physical layer. Often proper data modeling is overlooked and flaws in it is attempted to be fixed by hacking the physical layer.
    my aim is to measure the query performance improvements, if any, by partitioning an existing table
    Why not measure current I/O before an operation, and then after the operation - and use the difference to determine the amount of I/O performed by that operation?
    This can be implemented in a start/stop watch fashion (instead of measuring time, measuring I/O) using the v$sesstat virtual performance view.
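    A minimal sketch of that start/stop-watch idea (the statistic names are the standard ones in V$STATNAME; snapshot, run the query, snapshot again, and subtract):
    SELECT sn.name, ss.value
      FROM v$sesstat ss, v$statname sn
     WHERE ss.statistic# = sn.statistic#
       AND ss.sid = SYS_CONTEXT('USERENV', 'SID')
       AND sn.name IN ('session logical reads', 'physical reads');
    -- run the query being measured, repeat the snapshot above,
    -- and subtract the before values from the after values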

  • Same query, same dataset, same ddl setup, but wildly different explain plan

    Hello o fountains of oracle knowledge!
    We have a problem that caused us a full stop when rolling out a new version of our system to a customer and a whole Sunday to boot.
    The scenario is as follows:
    1. A previous version database schema
    2. The current version database schema
    3. A migration script to migrate the old schema to the new
    So we perform the following migration:
    1. Export the previous version database schema
    2. Import into a new schema called schema_old
    3. Create a new schema called schema_new
    4. Run migration script which creates objects, copies data, creates indexes etc etc in schema_new
    The migration runs fine in all environments (development, test and production)
    In our development and test environments performance is stellar, on the customer production server the performance is terrible.
    This using the exact same export file (from the production environment) and performing the exact same steps with the exact same migration script.
    Database version is 10.2.0.1.0 EE on all databases. OS is Microsoft Windows Server 2003 EE SP2 on all servers.
    The system is not in any sense under a heavy load (we have tested with no other load than ourselves).
    Looking at the explain plan for a query that is run frequently and does not use bind variables we see wildly different explain plans.
    The explain plan cost on our development and test servers is estimated to *7* for this query and there are no full table scans.
    On the production server the cost is *8433* and there are two full table scans of which one is on the largest table.
    We have tried to run analyse on all objects with very little effect. The plan changed very slightly, but still includes the two full table scans on the problem server and the cost is still the same.
    All tables and indexes are identical (including storage options), created from the same migration script.
    I am currently at a loss for where to look. What could be causing this? I assume this could be caused by some parameter that is set on the server, but I don't know what to look for.
    I would be very grateful for any pointers.
    Thanks,
    Håkon

    Thank you for your answer.
    We collected statistics only after we determined that the production server was not behaving according to expectations.
    In this case we used TOAD and the tool within to collect statistics for all objects. We used 'Analyze' and 'Compute Statistics' options.
    I am not an expert, so sorry if this is too naive an approach.
    Here is the query:
    SELECT count(0)
    FROM score_result sr, web_scorecard sc, product p
    WHERE sr.score_final_decision like 'VENT%'  
    AND sc.CREDIT_APPLICATION_ID = sr.CREDIT_APPLICATION_ID  
    AND sc.application_complete='Y'   
    AND p.product = sc.web_product   
    AND p.inactive_product = '2';
    I use this as an example, but the problem exists for virtually all queries.
    The output from the 'good' server:
    | Id  | Operation                      | Name                  | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT               |                       |     1 |    39 |     7   (0)|
    |   1 |  SORT AGGREGATE                |                       |     1 |    39 |            |
    |   2 |   NESTED LOOPS                 |                       |     1 |    39 |     7   (0)|
    |   3 |    NESTED LOOPS                |                       |     1 |    30 |     6   (0)|
    |   4 |     TABLE ACCESS BY INDEX ROWID| SCORE_RESULT          |     1 |    17 |     4   (0)|
    |   5 |      INDEX RANGE SCAN          | SR_FINAL_DECISION_IDX |     1 |       |     3   (0)|
    |   6 |     TABLE ACCESS BY INDEX ROWID| WEB_SCORECARD         |     1 |    13 |     2   (0)|
    |   7 |      INDEX UNIQUE SCAN         | WEB_SCORECARD_PK      |     1 |       |     1   (0)|
    |   8 |    TABLE ACCESS BY INDEX ROWID | PRODUCT               |     1 |     9 |     1   (0)|
    |   9 |     INDEX UNIQUE SCAN          | PK_PRODUCT            |     1 |       |     0   (0)|
    ---------------------------------------------------------------------------------------------
    The output from the 'bad' server:
    | Id  | Operation                 | Name                  | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT          |                       |     1 |    32 |  8344   (3)|
    |   1 |  SORT AGGREGATE           |                       |     1 |    32 |            |
    |   2 |   HASH JOIN               |                       | 10887 |   340K|  8344   (3)|
    |   3 |    TABLE ACCESS FULL      | PRODUCT               |     6 |    42 |     3   (0)|
    |   4 |    HASH JOIN              |                       | 34381 |   839K|  8340   (3)|
    |   5 |     VIEW                  | index$_join$_001      | 34381 |   503K|  2193   (3)|
    |   6 |      HASH JOIN            |                       |       |       |            |
    |   7 |       INDEX RANGE SCAN    | SR_FINAL_DECISION_IDX | 34381 |   503K|   280   (3)|
    |   8 |       INDEX FAST FULL SCAN| SCORE_RESULT_PK       | 34381 |   503K|  1371   (2)|
    |   9 |     TABLE ACCESS FULL     | WEB_SCORECARD         |   489K|  4782K|  6137   (4)|
    ----------------------------------------------------------------------------------------
    I hope the formatting makes this readable.
    Stats (from SQL Developer), good table:
    NUM_ROWS     489716
    BLOCKS     27198
    AVG_ROW_LEN     312
    SAMPLE_SIZE     489716
    LAST_ANALYZED     15.12.2009
    LAST_ANALYZED_SINCE     15.12.2009
    Stats (from SQL Developer), bad table:
    NUM_ROWS     489716
    BLOCKS     27199
    AVG_ROW_LEN     395
    SAMPLE_SIZE     489716
    LAST_ANALYZED     17.12.2009
    LAST_ANALYZED_SINCE     17.12.2009
    I'm unsure what would cause the difference in average row length.
    I could obviously try to tune our SQL statements to work better on the misbehaving server, but I would rather understand why they are different and make sure that we can expect similar behaviour between environments.
    Thank you again for trying to help me.
    Håkon
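    For what it's worth, a minimal sketch of gathering statistics with DBMS_STATS rather than TOAD's 'Analyze'/'Compute Statistics' options (the schema name is the schema_new from the migration steps above, and the table is one of those in the plan):
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => 'SCHEMA_NEW',
        tabname          => 'WEB_SCORECARD',
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
        cascade          => TRUE);   -- also gathers index statistics
    END;
    /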

  • Tuning : Inconsistent result between Explain plan VS Execution Time

    Dear Experts,
    I need your suggestions regarding a contradictory result between explain plan cost and execution time.
    Environment:
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
    PL/SQL Release 10.2.0.4.0 - Production
    CORE     10.2.0.4.0     Production
    TNS for Linux: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    Red Hat Enterprise Linux 5.4
    It's the same query in both cases, but the 1st accesses Loan_Account and the 2nd accesses Loan_Account_Han.
    *1st.*
    Query Access Via : Loan_account (Partition Type : Hash (5) with column Contract_Number)
    Explain Plan : cost: 4,432, bytes: 716, cardinality: 2
    Execution Time : 13 seconds
    *2nd.*
    Query Access Via : Loan_Account_Han (Partition Type : List(5) with column Loan_Status)
    Explain Plan : cost:188,447 bytes: 1,661,088, cardinality: 4,719
    Execution Time : 10 seconds
    Note:
    All tables, and all indexes belonging to the tables included in the query, have been analyzed.
    My questions:
    1. Why could it turn out like this? I am confused, given the theory in Jonathan Lewis's Cost-Based Oracle Fundamentals book.
    With this result, I no longer trust the result from the explain plan.
    2. If analyzing tables and indexes to update statistics (which helps the CBO choose the best path) is part of daily performance tuning,
    is there a way to do that in a 24x7 environment?
    Note: if the original query is needed, I'll post it here.
    Any help is very appreciated and thanks very much.
    Regards,
    Sigcle

    The DBMS_XPLAN.DISPLAY_CURSOR output
    Query no 1
    PLAN_TABLE_OUTPUT
    SQL_ID  bq7avs72xvmkv, child number 0
    SELECT /*+ gather_plan_statistics */ la.office_code, la.currency_code AS currency, mf.NAME multifinance_id,        la.contract_number, mc.customer_name,       
    dco.os_principal_on_schedule AS os_principal_on_schedule,        f_get_param_value (la.financing_type,                           'FinancingType'                  
           ) AS financing_type,        CASE la.financing_type           WHEN 11                    -- Asset Purchase              THEN NVL (tp.os_principal_cust,     
                      la.os_principal_cust                       )           WHEN 12                                            -- JF Channelling              THEN
    NVL (tp.os_principal_mf, la.os_principal)        END AS os_principal_actual,        CASE           WHEN dco.bi_collectibility_with_gp >= 3              THEN 0    
          ELSE CASE           WHEN la.financing_type = '12'              THEN   NVL (ia.accrue_interest_pl, 0)                   + NVL (ia.accrue_interest_npl, 0)    
          ELSE   NVL (ia.accrue_
    Plan hash value: 4011856754
    | Id  | Operation                                  | Name                           | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
    |   1 |  TABLE ACCESS BY INDEX ROWID               | COLLECTIBILITY_CODE            |      3 |      1 |      3 |00:00:00.01 |       9 |      2 |       |       |          |
    |*  2 |   INDEX SKIP SCAN                          | UNQ_COLLECTIBILITY_CODE        |      3 |      1 |      3 |00:00:00.01 |       6 |      1 |       |       |          |
    |   3 |  TABLE ACCESS BY GLOBAL INDEX ROWID        | MF_SCH_INSTALLMENT             |    501 |      1 |    501 |00:00:00.03 |    2004 |    796 |       |       |          |
    |*  4 |   INDEX UNIQUE SCAN                        | UNQ_MF_SCH_INSTALLMENT1        |    501 |      1 |    501 |00:00:00.02 |    1503 |    377 |       |       |          |
    |   5 |  TABLE ACCESS BY GLOBAL INDEX ROWID        | MF_SCH_INSTALLMENT             |    333 |      1 |    333 |00:00:00.10 |    3220 |   1074 |       |       |          |
    |*  6 |   INDEX UNIQUE SCAN                        | UNQ_MF_SCH_INSTALLMENT1        |    333 |      1 |    333 |00:00:00.09 |    2887 |   1074 |       |       |          |
    |   7 |   TABLE ACCESS BY GLOBAL INDEX ROWID       | CUST_SCH_INSTALLMENT           |    168 |      1 |    167 |00:00:00.05 |    1464 |    495 |       |       |          |
    |*  8 |    INDEX UNIQUE SCAN                       | UNQ_CUST_SCH_INSTALLMENT1      |    168 |      1 |    167 |00:00:00.05 |    1297 |    495 |       |       |          |
    |   9 |  TABLE ACCESS BY GLOBAL INDEX ROWID        | MF_SCH_INSTALLMENT             |    333 |      1 |    333 |00:00:00.06 |    3167 |      0 |       |       |          |
    |* 10 |   INDEX UNIQUE SCAN                        | UNQ_MF_SCH_INSTALLMENT1        |    333 |      1 |    333 |00:00:00.06 |    2834 |      0 |       |       |          |
    |  11 |   TABLE ACCESS BY GLOBAL INDEX ROWID       | CUST_SCH_INSTALLMENT           |    168 |      1 |    167 |00:00:00.03 |    1447 |      0 |       |       |          |
    |* 12 |    INDEX UNIQUE SCAN                       | UNQ_CUST_SCH_INSTALLMENT1      |    168 |      1 |    167 |00:00:00.03 |    1280 |      0 |       |       |          |
    |  13 |  TABLE ACCESS BY GLOBAL INDEX ROWID        | MF_SCH_INSTALLMENT             |    333 |      1 |    333 |00:00:00.06 |    3167 |      0 |       |       |          |
    |* 14 |   INDEX UNIQUE SCAN                        | UNQ_MF_SCH_INSTALLMENT1        |    333 |      1 |    333 |00:00:00.06 |    2834 |      0 |       |       |          |
    |  15 |   TABLE ACCESS BY GLOBAL INDEX ROWID       | CUST_SCH_INSTALLMENT           |    168 |      1 |    167 |00:00:00.03 |    1447 |      0 |       |       |          |
    |* 16 |    INDEX UNIQUE SCAN                       | UNQ_CUST_SCH_INSTALLMENT1      |    168 |      1 |    167 |00:00:00.03 |    1280 |      0 |       |       |          |
    |  17 |  TABLE ACCESS BY GLOBAL INDEX ROWID        | MF_SCH_INSTALLMENT             |    333 |      1 |    333 |00:00:00.06 |    3167 |      0 |       |       |          |
    |* 18 |   INDEX UNIQUE SCAN                        | UNQ_MF_SCH_INSTALLMENT1        |    333 |      1 |    333 |00:00:00.06 |    2834 |      0 |       |       |          |
    |  19 |   TABLE ACCESS BY GLOBAL INDEX ROWID       | CUST_SCH_INSTALLMENT           |    168 |      1 |    167 |00:00:00.03 |    1447 |      0 |       |       |          |
    |* 20 |    INDEX UNIQUE SCAN                       | UNQ_CUST_SCH_INSTALLMENT1      |    168 |      1 |    167 |00:00:00.03 |    1280 |      0 |       |       |          |
    |* 21 |  TABLE ACCESS FULL                         | COLLECTIBILITY_CODE            |      3 |      1 |      3 |00:00:00.01 |      12 |      1 |       |       |          |
    |  22 |  NESTED LOOPS OUTER                        |                                |      1 |      2 |    501 |00:00:01.02 |   96112 |  20091 |       |       |          |
    |  23 |   NESTED LOOPS                             |                                |      1 |      2 |    501 |00:00:00.28 |   13445 |   5358 |       |       |          |
    |  24 |    NESTED LOOPS                            |                                |      1 |      2 |    501 |00:00:00.26 |   11441 |   5071 |       |       |          |
    |  25 |     NESTED LOOPS OUTER                     |                                |      1 |      2 |    501 |00:00:00.24 |    9433 |   5014 |       |       |          |
    |* 26 |      HASH JOIN                             |                                |      1 |      2 |    501 |00:00:00.03 |     329 |    325 |  1206K|  1206K|  341K (0)|
    |* 27 |       TABLE ACCESS FULL                    | CURRENCY                       |      1 |      1 |      1 |00:00:00.01 |       3 |      2 |       |       |          |
    |* 28 |       HASH JOIN                            |                                |      1 |     61 |    501 |00:00:00.02 |     326 |    323 |   868K|   868K|  947K (0)|
    |* 29 |        HASH JOIN                           |                                |      1 |     10 |     13 |00:00:00.01 |       7 |      6 |   947K|   947K| 1030K (0)|
    |  30 |         TABLE ACCESS BY INDEX ROWID        | MULTIFINANCE                   |      1 |      5 |      9 |00:00:00.01 |       2 |      2 |       |       |          |
    |* 31 |          INDEX RANGE SCAN                  | IDX_STATUS_MF                  |      1 |      5 |      9 |00:00:00.01 |       1 |      1 |       |       |          |
    |* 32 |         TABLE ACCESS FULL                  | AGREEMENT                      |      1 |     18 |     13 |00:00:00.01 |       5 |      4 |       |       |          |
    |  33 |        VIEW                                |                                |      1 |    110 |    501 |00:00:00.02 |     319 |    317 |       |       |          |
    |* 34 |         HASH JOIN RIGHT OUTER              |                                |      1 |    110 |    501 |00:00:00.02 |     319 |    317 |  1011K|  1011K|  317K (0)|
    |  35 |          TABLE ACCESS BY INDEX ROWID       | TENANT_PARAMETER               |      1 |      1 |      1 |00:00:00.01 |       2 |      2 |       |       |          |
    |* 36 |           INDEX UNIQUE SCAN                | PK_TENANT_PARAMETER            |      1 |      1 |      1 |00:00:00.01 |       1 |      1 |       |       |          |
    |* 37 |          TABLE ACCESS BY GLOBAL INDEX ROWID| LOAN_ACCOUNT                   |      1 |    110 |    501 |00:00:00.02 |     317 |    315 |       |       |          |
    |* 38 |           INDEX RANGE SCAN                 | IDX_STATUS_LA1                 |      1 |   4394 |   3025 |00:00:00.01 |      15 |     14 |       |       |          |
    |* 39 |      TABLE ACCESS BY INDEX ROWID           | TX_PAYMENT                     |    501 |      1 |      0 |00:00:00.16 |    9104 |   4689 |       |       |          |
    |* 40 |       INDEX RANGE SCAN                     | FK_TX_PAY_LOAN_ACCT            |    501 |     12 |   8799 |00:00:00.02 |    1038 |    207 |       |       |          |
    |  41 |     TABLE ACCESS BY INDEX ROWID            | DL_CL_OUTSTANDING              |    501 |      1 |    501 |00:00:00.02 |    2008 |     57 |       |       |          |
    |* 42 |      INDEX RANGE SCAN                      | IDXLO_CNUM                     |    501 |      1 |    501 |00:00:00.01 |    1507 |     37 |       |       |          |
    |* 43 |    TABLE ACCESS BY INDEX ROWID             | MF_CUSTOMER                    |    501 |      1 |    501 |00:00:00.02 |    2004 |    287 |       |       |          |
    |* 44 |     INDEX UNIQUE SCAN                      | MF_CUSTOMER_PK                 |    501 |      1 |    501 |00:00:00.01 |    1503 |     24 |       |       |          |
    |* 45 |   TABLE ACCESS BY INDEX ROWID              | TX_INTEREST_ACCRUE             |    501 |      1 |      0 |00:00:00.73 |   82667 |  14733 |       |       |          |
    |* 46 |    INDEX RANGE SCAN                        | FK_TX_INTEREST_ACCRUE_LOAN_ACC |    501 |     67 |  40581 |00:00:00.14 |   42084 |    451 |       |       |          |
    Predicate Information (identified by operation id):
       2 - access("XX"."COLLECTIBILITY_CODE"=:B1)
           filter("XX"."COLLECTIBILITY_CODE"=:B1)
       4 - access("LOAN_ACCOUNT_ID"=:B1 AND "INSTALLMENT_NUMBER"=1)
       6 - access("MFSCH"."LOAN_ACCOUNT_ID"=:B1 AND "MFSCH"."INSTALLMENT_NUMBER"="F_NEXT_INSTALLMENT_NUMBER"(:B2,TO_DATE(' 2013-02-18 00:00:00', 'syyyy-mm-dd
                  hh24:mi:ss'),:B3))
       8 - access("CUSTSCH"."LOAN_ACCOUNT_ID"=:B1 AND "CUSTSCH"."INSTALLMENT_NUMBER"="F_NEXT_INSTALLMENT_NUMBER"(:B2,TO_DATE(' 2013-02-18 00:00:00', 'syyyy-mm-dd
                  hh24:mi:ss'),:B3))
      10 - access("MFSCH"."LOAN_ACCOUNT_ID"=:B1 AND "MFSCH"."INSTALLMENT_NUMBER"="F_NEXT_INSTALLMENT_NUMBER"(:B2,TO_DATE(' 2013-02-18 00:00:00', 'syyyy-mm-dd
                  hh24:mi:ss'),:B3))
      12 - access("CUSTSCH"."LOAN_ACCOUNT_ID"=:B1 AND "CUSTSCH"."INSTALLMENT_NUMBER"="F_NEXT_INSTALLMENT_NUMBER"(:B2,TO_DATE(' 2013-02-18 00:00:00', 'syyyy-mm-dd
                  hh24:mi:ss'),:B3))
      14 - access("MFSCH"."LOAN_ACCOUNT_ID"=:B1 AND "MFSCH"."INSTALLMENT_NUMBER"="F_NEXT_INSTALLMENT_NUMBER"(:B2,TO_DATE(' 2013-02-18 00:00:00', 'syyyy-mm-dd
                  hh24:mi:ss'),:B3))
      16 - access("CUSTSCH"."LOAN_ACCOUNT_ID"=:B1 AND "CUSTSCH"."INSTALLMENT_NUMBER"="F_NEXT_INSTALLMENT_NUMBER"(:B2,TO_DATE(' 2013-02-18 00:00:00', 'syyyy-mm-dd
                  hh24:mi:ss'),:B3))
      18 - access("MFSCH"."LOAN_ACCOUNT_ID"=:B1 AND "MFSCH"."INSTALLMENT_NUMBER"="F_NEXT_INSTALLMENT_NUMBER"(:B2,TO_DATE(' 2013-02-18 00:00:00', 'syyyy-mm-dd
                  hh24:mi:ss'),:B3))
      20 - access("CUSTSCH"."LOAN_ACCOUNT_ID"=:B1 AND "CUSTSCH"."INSTALLMENT_NUMBER"="F_NEXT_INSTALLMENT_NUMBER"(:B2,TO_DATE(' 2013-02-18 00:00:00', 'syyyy-mm-dd
                  hh24:mi:ss'),:B3))
      21 - filter(TO_NUMBER("COL"."COLLECTIBILITY_CODE")=:B1)
      26 - access("from$_subquery$_016"."CURRENCY_CODE"="CURR"."CURRENCY_CODE")
      27 - filter(UPPER("CURR"."STATUS")='A')
      28 - access("from$_subquery$_016"."AGREEMENT_ID"="A"."AGREEMENT_ID")
      29 - access("A"."MULTIFINANCE_ID"="MF"."MULTIFINANCE_ID")
      31 - access("MF"."SYS_NC00052$"='A')
      32 - filter(UPPER("A"."STATUS")='A')
      34 - access("LA"."TENANT_ID"="TENANT_ID")
      36 - access("TENANT_PARAMETER_ID"=23)
      37 - filter((UPPER("LA"."LOAN_STATUS")='AC' OR ("LA"."CLOSED_DATE"=TO_DATE(' 2013-02-18 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
                  UPPER("LA"."LOAN_STATUS")='CN')))
      38 - access("LA"."SYS_NC00118$"='A')
      39 - filter(("TP"."APPROVAL_DATE"=TO_DATE(' 2013-02-18 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "TP"."DATA_SOURCE"=152 AND UPPER("TP"."APPROVAL_STATUS")='A'))
      40 - access("TP"."LOAN_ACCOUNT_ID"="from$_subquery$_016"."LOAN_ACCOUNT_ID")
      42 - access("DCO"."LOAN_CONTRACT_NUMBER"="from$_subquery$_016"."CONTRACT_NUMBER")
      43 - filter(UPPER("MC"."STATUS")='A')
      44 - access("from$_subquery$_016"."MF_CUSTOMER_ID"="MC"."MF_CUSTOMER_ID")
      45 - filter("IA"."ACCRUE_DATE"="XX"."PREV_DATE")
      46 - access("LA"."LOAN_ACCOUNT_ID"="IA"."LOAN_ACCOUNT_ID")
    The DBMS_XPLAN.DISPLAY_CURSOR output after : alter system flush buffer_cache; alter system flush shared_pool;
    Query no 2
    PLAN_TABLE_OUTPUT
    SQL_ID  cxmg4jfvr9pz0, child number 0
    SELECT /*+ gather_plan_statistics */ la.office_code, la.currency_code AS currency, mf.NAME multifinance_id,        la.contract_number, mc.customer_name,       
    dco.os_principal_on_schedule AS os_principal_on_schedule,        f_get_param_value (la.financing_type,                           'FinancingType'                  
           ) AS financing_type,        CASE la.financing_type           WHEN 11                    -- Asset Purchase              THEN NVL (tp.os_principal_cust,     
                      la.os_principal_cust                       )           WHEN 12                                            -- JF Channelling              THEN
    NVL (tp.os_principal_mf, la.os_principal)        END AS os_principal_actual,        CASE           WHEN dco.bi_collectibility_with_gp >= 3              THEN 0    
          ELSE CASE           WHEN la.financing_type = '12'              THEN   NVL (ia.accrue_interest_pl, 0)                   + NVL (ia.accrue_interest_npl, 0)    
          ELSE   NVL (ia.accrue_
    Plan hash value: 2072372033
    | Id  | Operation                            | Name                           | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | SELECT STATEMENT                     |                                |  4719 |  1622K|   188K (32)| 00:37:42 |       |       |
    |   1 |  TABLE ACCESS BY INDEX ROWID         | COLLECTIBILITY_CODE            |     1 |    12 |     3   (0)| 00:00:01 |       |       |
    |*  2 |   INDEX SKIP SCAN                    | UNQ_COLLECTIBILITY_CODE        |     1 |       |     2   (0)| 00:00:01 |       |       |
    |   3 |  TABLE ACCESS BY GLOBAL INDEX ROWID  | MF_SCH_INSTALLMENT             |     1 |    17 |     3   (0)| 00:00:01 | ROWID | ROWID |
    |*  4 |   INDEX UNIQUE SCAN                  | UNQ_MF_SCH_INSTALLMENT1        |     1 |       |     2   (0)| 00:00:01 |       |       |
    |   5 |  TABLE ACCESS BY GLOBAL INDEX ROWID  | MF_SCH_INSTALLMENT             |     1 |    15 |     3   (0)| 00:00:01 | ROWID | ROWID |
    |*  6 |   INDEX UNIQUE SCAN                  | UNQ_MF_SCH_INSTALLMENT1        |     1 |       |     2   (0)| 00:00:01 |       |       |
    |   7 |   TABLE ACCESS BY GLOBAL INDEX ROWID | CUST_SCH_INSTALLMENT           |     1 |    15 |     3   (0)| 00:00:01 | ROWID | ROWID |
    |*  8 |    INDEX UNIQUE SCAN                 | UNQ_CUST_SCH_INSTALLMENT1      |     1 |       |     2   (0)| 00:00:01 |       |       |
    |   9 |  TABLE ACCESS BY GLOBAL INDEX ROWID  | MF_SCH_INSTALLMENT             |     1 |    15 |     3   (0)| 00:00:01 | ROWID | ROWID |
    |* 10 |   INDEX UNIQUE SCAN                  | UNQ_MF_SCH_INSTALLMENT1        |     1 |       |     2   (0)| 00:00:01 |       |       |
    |  11 |   TABLE ACCESS BY GLOBAL INDEX ROWID | CUST_SCH_INSTALLMENT           |     1 |    15 |     3   (0)| 00:00:01 | ROWID | ROWID |
    |* 12 |    INDEX UNIQUE SCAN                 | UNQ_CUST_SCH_INSTALLMENT1      |     1 |       |     2   (0)| 00:00:01 |       |       |
    |  13 |  TABLE ACCESS BY GLOBAL INDEX ROWID  | MF_SCH_INSTALLMENT             |     1 |    17 |     3   (0)| 00:00:01 | ROWID | ROWID |
    |* 14 |   INDEX UNIQUE SCAN                  | UNQ_MF_SCH_INSTALLMENT1        |     1 |       |     2   (0)| 00:00:01 |       |       |
    |  15 |   TABLE ACCESS BY GLOBAL INDEX ROWID | CUST_SCH_INSTALLMENT           |     1 |    17 |     3   (0)| 00:00:01 | ROWID | ROWID |
    |* 16 |    INDEX UNIQUE SCAN                 | UNQ_CUST_SCH_INSTALLMENT1      |     1 |       |     2   (0)| 00:00:01 |       |       |
    |  17 |  TABLE ACCESS BY GLOBAL INDEX ROWID  | MF_SCH_INSTALLMENT             |     1 |    17 |     3   (0)| 00:00:01 | ROWID | ROWID |
    |* 18 |   INDEX UNIQUE SCAN                  | UNQ_MF_SCH_INSTALLMENT1        |     1 |       |     2   (0)| 00:00:01 |       |       |
    |  19 |   TABLE ACCESS BY GLOBAL INDEX ROWID | CUST_SCH_INSTALLMENT           |     1 |    17 |     3   (0)| 00:00:01 | ROWID | ROWID |
    |* 20 |    INDEX UNIQUE SCAN                 | UNQ_CUST_SCH_INSTALLMENT1      |     1 |       |     2   (0)| 00:00:01 |       |       |
    |* 21 |  TABLE ACCESS FULL                   | COLLECTIBILITY_CODE            |     1 |    12 |     3   (0)| 00:00:01 |       |       |
    |  22 |  NESTED LOOPS                        |                                |  4719 |  1622K|   188K (32)| 00:37:42 |       |       |
    |  23 |   NESTED LOOPS                       |                                |  4719 |  1460K|   183K (33)| 00:36:45 |       |       |
    |  24 |    NESTED LOOPS OUTER                |                                |  4719 |  1354K|   180K (33)| 00:36:07 |       |       |
    |  25 |     NESTED LOOPS OUTER               |                                |  4719 |  1225K|   130K (46)| 00:26:01 |       |       |
    |* 26 |      HASH JOIN                       |                                |  4719 |  1106K| 23591   (2)| 00:04:44 |       |       |
    |  27 |       TABLE ACCESS BY INDEX ROWID    | MULTIFINANCE                   |     5 |   130 |     2   (0)| 00:00:01 |       |       |
    |* 28 |        INDEX RANGE SCAN              | IDX_STATUS_MF                  |     5 |       |     1   (0)| 00:00:01 |       |       |
    |* 29 |       HASH JOIN                      |                                |  8494 |  1775K| 23589   (2)| 00:04:44 |       |       |
    |* 30 |        TABLE ACCESS FULL             | AGREEMENT                      |    18 |   360 |     3   (0)| 00:00:01 |       |       |
    |* 31 |        HASH JOIN                     |                                |  8494 |  1609K| 23585   (2)| 00:04:44 |       |       |
    |* 32 |         TABLE ACCESS FULL            | CURRENCY                       |     1 |     4 |     3   (0)| 00:00:01 |       |       |
    |  33 |         VIEW                         |                                |   212K|    38M| 23579   (2)| 00:04:43 |       |       |
    |* 34 |          HASH JOIN RIGHT OUTER       |                                |   212K|    32M| 23579   (2)| 00:04:43 |       |       |
    |  35 |           TABLE ACCESS BY INDEX ROWID| TENANT_PARAMETER               |     1 |    74 |     1   (0)| 00:00:01 |       |       |
    |* 36 |            INDEX UNIQUE SCAN         | PK_TENANT_PARAMETER            |     1 |       |     0   (0)| 00:00:01 |       |       |
    |  37 |           PARTITION LIST ALL         |                                |   212K|    17M| 23575   (2)| 00:04:43 |     1 |     5 |
    |* 38 |            TABLE ACCESS FULL         | LOAN_ACCOUNT_HAN               |   212K|    17M| 23575   (2)| 00:04:43 |     1 |     5 |
    |  39 |      TABLE ACCESS BY INDEX ROWID     | TX_INTEREST_ACCRUE             |     1 |    26 |   130K (46)| 00:26:01 |       |       |
    |  40 |       BITMAP CONVERSION TO ROWIDS    |                                |       |       |            |          |       |       |
    |  41 |        BITMAP AND                    |                                |       |       |            |          |       |       |
    |  42 |         BITMAP CONVERSION FROM ROWIDS|                                |       |       |            |          |       |       |
    |* 43 |          INDEX RANGE SCAN            | FK_TX_INTEREST_ACCRUE_LOAN_ACC |    67 |       |     0   (0)| 00:00:01 |       |       |
    |  44 |         BITMAP CONVERSION FROM ROWIDS|                                |       |       |            |          |       |       |
    |* 45 |          INDEX RANGE SCAN            | IDX_TX_INTEREST_ACCRUE         |    67 |       |    12 (100)| 00:00:01 |       |       |
    |* 46 |     TABLE ACCESS BY INDEX ROWID      | TX_PAYMENT                     |     1 |    28 |    11   (0)| 00:00:01 |       |       |
    |* 47 |      INDEX RANGE SCAN                | FK_TX_PAY_LOAN_ACCT            |    12 |       |     0   (0)| 00:00:01 |       |       |
    |  48 |    TABLE ACCESS BY INDEX ROWID       | DL_CL_OUTSTANDING              |     1 |    23 |     1   (0)| 00:00:01 |       |       |
    |* 49 |     INDEX RANGE SCAN                 | IDXLO_CNUM                     |     1 |       |     0   (0)| 00:00:01 |       |       |
    |* 50 |   TABLE ACCESS BY INDEX ROWID        | MF_CUSTOMER                    |     1 |    35 |     1   (0)| 00:00:01 |       |       |
    |* 51 |    INDEX UNIQUE SCAN                 | MF_CUSTOMER_PK                 |     1 |       |     0   (0)| 00:00:01 |       |       |
    Predicate Information (identified by operation id):
       2 - access("XX"."COLLECTIBILITY_CODE"=:B1)
           filter("XX"."COLLECTIBILITY_CODE"=:B1)
       4 - access("LOAN_ACCOUNT_ID"=:B1 AND "INSTALLMENT_NUMBER"=1)
       6 - access("MFSCH"."LOAN_ACCOUNT_ID"=:B1 AND "MFSCH"."INSTALLMENT_NUMBER"="F_NEXT_INSTALLMENT_NUMBER"(:B2,TO_DATE('
                  2013-02-18 00:00:00', 'syyyy-mm-dd hh24:mi:ss'),:B3))
       8 - access("CUSTSCH"."LOAN_ACCOUNT_ID"=:B1 AND "CUSTSCH"."INSTALLMENT_NUMBER"="F_NEXT_INSTALLMENT_NUMBER"(:B2,TO_DATE('
                  2013-02-18 00:00:00', 'syyyy-mm-dd hh24:mi:ss'),:B3))
      10 - access("MFSCH"."LOAN_ACCOUNT_ID"=:B1 AND "MFSCH"."INSTALLMENT_NUMBER"="F_NEXT_INSTALLMENT_NUMBER"(:B2,TO_DATE('
                  2013-02-18 00:00:00', 'syyyy-mm-dd hh24:mi:ss'),:B3))
      12 - access("CUSTSCH"."LOAN_ACCOUNT_ID"=:B1 AND "CUSTSCH"."INSTALLMENT_NUMBER"="F_NEXT_INSTALLMENT_NUMBER"(:B2,TO_DATE('
                  2013-02-18 00:00:00', 'syyyy-mm-dd hh24:mi:ss'),:B3))
      14 - access("MFSCH"."LOAN_ACCOUNT_ID"=:B1 AND "MFSCH"."INSTALLMENT_NUMBER"="F_NEXT_INSTALLMENT_NUMBER"(:B2,TO_DATE('
                  2013-02-18 00:00:00', 'syyyy-mm-dd hh24:mi:ss'),:B3))
      16 - access("CUSTSCH"."LOAN_ACCOUNT_ID"=:B1 AND "CUSTSCH"."INSTALLMENT_NUMBER"="F_NEXT_INSTALLMENT_NUMBER"(:B2,TO_DATE('
                  2013-02-18 00:00:00', 'syyyy-mm-dd hh24:mi:ss'),:B3))
      18 - access("MFSCH"."LOAN_ACCOUNT_ID"=:B1 AND "MFSCH"."INSTALLMENT_NUMBER"="F_NEXT_INSTALLMENT_NUMBER"(:B2,TO_DATE('
                  2013-02-18 00:00:00', 'syyyy-mm-dd hh24:mi:ss'),:B3))
      20 - access("CUSTSCH"."LOAN_ACCOUNT_ID"=:B1 AND "CUSTSCH"."INSTALLMENT_NUMBER"="F_NEXT_INSTALLMENT_NUMBER"(:B2,TO_DATE('
                  2013-02-18 00:00:00', 'syyyy-mm-dd hh24:mi:ss'),:B3))
      21 - filter(TO_NUMBER("COL"."COLLECTIBILITY_CODE")=:B1)
      26 - access("A"."MULTIFINANCE_ID"="MF"."MULTIFINANCE_ID")
      28 - access(UPPER("STATUS")='A')
      29 - access("from$_subquery$_016"."AGREEMENT_ID"="A"."AGREEMENT_ID")
      30 - filter(UPPER("A"."STATUS")='A')
      31 - access("from$_subquery$_016"."CURRENCY_CODE"="CURR"."CURRENCY_CODE")
      32 - filter(UPPER("CURR"."STATUS")='A')
      34 - access("LA"."TENANT_ID"="TENANT_ID"(+))
      36 - access("TENANT_PARAMETER_ID"(+)=23)
      38 - filter((UPPER("LA"."LOAN_STATUS")='AC' OR "LA"."CLOSED_DATE"=TO_DATE(' 2013-02-18 00:00:00', 'syyyy-mm-dd hh24:mi:ss')
                  AND UPPER("LA"."LOAN_STATUS")='CN') AND UPPER("LA"."STATUS")='A')
      43 - access("LA"."LOAN_ACCOUNT_ID"="IA"."LOAN_ACCOUNT_ID"(+))
      45 - access("IA"."ACCRUE_DATE"(+)="XX"."PREV_DATE")
      46 - filter("TP"."APPROVAL_DATE"(+)=TO_DATE(' 2013-02-18 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "TP"."DATA_SOURCE"(+)=152
                  AND UPPER("TP"."APPROVAL_STATUS"(+))='A')
      47 - access("TP"."LOAN_ACCOUNT_ID"(+)="from$_subquery$_016"."LOAN_ACCOUNT_ID")
      49 - access("DCO"."LOAN_CONTRACT_NUMBER"="from$_subquery$_016"."CONTRACT_NUMBER")
      50 - filter(UPPER("MC"."STATUS")='A')
      51 - access("from$_subquery$_016"."MF_CUSTOMER_ID"="MC"."MF_CUSTOMER_ID")
    Typically both queries run in 10-12 seconds (about the same execution time).
    And I can't provide autotrace and tkprof output because I cannot access the database via SQL*Plus.
    Thanks very much for the link HOW TO: Post a SQL statement tuning request - template posting
    Regards,
    Sigcle
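    For reference, a minimal sketch of how DBMS_XPLAN.DISPLAY_CURSOR output with actual row counts (like the first plan above) is produced; the statement is whichever query is being investigated:
    SELECT /*+ gather_plan_statistics */ ...;   -- run the statement first
    SELECT *
      FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));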

  • What is the significance of "cost" column in explain plan

    Hi,
    Can anyone explain the meaning of the values we get in the cost column of the explain plan? For example: Cost: 4500. What does this value mean, and in which units is it measured (seconds, nanoseconds, etc.)?

    kingfisher,
    Ok one more link for you but I shall quote the text also here,
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/optimops.htm#i82005
    The cost is an estimated value proportional to the expected resource use needed to execute the statement with a particular plan. The optimizer calculates the cost of access paths and join orders based on the estimated computer resources, which includes I/O, CPU, and memory.
    And few paragraphs down,
    13.4.1.3.3 Cost
    The cost represents units of work or resource used. The query optimizer uses disk I/O, CPU usage, and memory usage as units of work. So, the cost used by the query optimizer represents an estimate of the number of disk I/Os and the amount of CPU and memory used in performing an operation. The operation can be scanning a table, accessing rows from a table by using an index, joining two tables together, or sorting a row set. The cost of a query plan is the number of work units that are expected to be incurred when the query is executed and its result produced.
    The access path determines the number of units of work required to get data from a base table. The access path can be a table scan, a fast full index scan, or an index scan. During table scan or fast full index scan, multiple blocks are read from the disk in a single I/O operation. Therefore, the cost of a table scan or a fast full index scan depends on the number of blocks to be scanned and the multiblock read count value. The cost of an index scan depends on the levels in the B-tree, the number of index leaf blocks to be scanned, and the number of rows to be fetched using the rowid in the index keys. The cost of fetching rows using rowids depends on the index clustering factor. See "Assessing I/O for Blocks, not Rows".
    The join cost represents the combination of the individual access costs of the two row sets being joined, plus the cost of the join operation.
    Now I guess if you read this part, it should be pretty clear what cost is. Cost is an evaluation of the resources that Oracle estimates for every step of the query execution. There are a number of steps, and each may involve doing I/O, consuming CPU and/or using memory. Cost is a factor that Oracle uses to combine all three of these (depending on the version) to represent the work done, or expected to be done, in executing a query. So ideally the cost should represent exactly the time spent on each and every step - that is the objective of cost. But at the moment it isn't there yet. There may be situations where the cost is shown as very high but the query works fine. There are workarounds which can bring the cost down; for example, by tweaking the optimizer_index_cost_adj parameter we can change how Oracle costs our indexes, and it may arrive at a lower cost. What is mentioned is that this model is becoming more and more mature, and in future releases we may see the cost representing exactly the time we would spend in the query. At the moment it's not there. So, as Chris mentioned, tuning for low cost only is not a good approach.
    I suggest you grab a copy of Jonathan Lewis's book; he has explained it much better there.
    Hope I said some thing useful.
    Aman....
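    For readers who want to see the cost Aman describes, a minimal sketch (the table and predicate are made up; the parameter change is the session-level tweak mentioned above, not a general recommendation):
        -- Generate a plan and read its COST column.
        EXPLAIN PLAN FOR
          SELECT * FROM emp WHERE deptno = 10;
        SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
        -- Session-level knob referred to above; it changes how index access
        -- paths are costed, it does not by itself make the query faster.
        ALTER SESSION SET optimizer_index_cost_adj = 30;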

  • Explain me about cost in explain plan.

    Hi All,
    Please explain to me what the cost in an explain plan means.
    Below are my explain plans:
    1. The first plan shows a higher cost, but the query returns the result very fast (564 msecs) --- local database.
    2. The second plan shows a lower cost, but the result is very slow (14 secs).
    1. local database.
    PLAN_TABLE_OUTPUT 
    | ID  | Operation                         | NAME                      | ROWS  | Bytes | COST  |
    |   0 | SELECT STATEMENT                  |                           |    17 |  2312 |    60 |
    |   1 |  SORT UNIQUE                      |                           |    17 |  2312 |    59 |
    |   2 |   COUNT STOPKEY                   |                           |       |       |       |
    |   3 |    NESTED LOOPS                   |                           |    17 |  2312 |    58 |
    |   4 |     MAT_VIEW ACCESS BY INDEX ROWID| MV_EDECO_ESONGS           |    17 |  1326 |    24 |
    |   5 |      INDEX RANGE SCAN             | IDX_ESNG_REG_TITLE        |    21 |       |     3 |
    |   6 |     TABLE ACCESS BY INDEX ROWID   | MV_EDECO_ESONGS_TERR_CTRY |     1 |    58 |     2 |
    |   7 |      INDEX UNIQUE SCAN            | IDX_ESNG_TERR_ESNG_ID_1   |     1 |       |     1 |
    PLAN_TABLE_OUTPUT
    | ID  | Operation                         | NAME                       | ROWS  | Bytes | COST  |
    |   0 | SELECT STATEMENT                  |                            |     9 |  1260 |    34 |
    |   1 |  SORT UNIQUE                      |                            |     9 |  1260 |    33 |
    |   2 |   COUNT STOPKEY                   |                            |       |       |       |
    |   3 |    NESTED LOOPS                   |                            |     9 |  1260 |    32 |
    |   4 |     MAT_VIEW ACCESS BY INDEX ROWID| MV_EDECO_ESONGS            |     9 |   720 |    14 |
    |   5 |      INDEX RANGE SCAN             | IDX_ESNG_REG_TITLE         |    11 |       |     3 |
    |   6 |     MAT_VIEW ACCESS BY INDEX ROWID| MV_EDECO_ESONGS_TERR_CTRY  |     1 |    60 |     2 |
    |   7 |      INDEX UNIQUE SCAN            | PK_EDESONGSTERCTRY_ESONGID |     1 |       |     1 |
    ------------------------------------------------------------------------------------------------
    Regards,
    Rajasekhar

    rajasekhar_n wrote:
    Hoek, I have cross-checked the results; both queries are processing the same records and returning the same result *(both are using DB links).*
    But you said the first query was on a local database?
    If you're using dblinks then it's not local, is it? :|

  • Understanding the COST column of an explain plan

    Hello,
    I executed the following query, and obtained the corresponding explain plan:
    select * from isis.clas_rost where cour_off_# = 28
    Description                                           COST   Cardinality   Bytes
    SELECT STATEMENT, GOAL = FIRST_ROWS                      2            10    1540
     TABLE ACCESS BY INDEX ROWID   ISIS   CLAS_ROST          2            10    1540
      INDEX RANGE SCAN             ISIS   CLAS_ROST_N2       1            10
    I don't understand how these cost values add up. What is the significance of the cost in each row of the explain plan output?
    By comparison, here is another plan output for the following query:
    select * from isis.clas_rost where clas_rost_# = 28
    Description                                           COST   Cardinality   Bytes
    SELECT STATEMENT, GOAL = FIRST_ROWS                      1             1     154
     TABLE ACCESS BY INDEX ROWID   ISIS   CLAS_ROST          1             1     154
      INDEX UNIQUE SCAN            ISIS   CLAS_ROST_U1       1             1
    Thanks!

    For the most part, you probably want to ignore the cost column. The cardinality column is generally what you want to pay attention to.
    Ideally, the cost column is Oracle's estimate of the amount of work that will be required to execute a query. It is a unitless value that attempts to combine the cost of I/O and CPU (depending on the Oracle version and whether CPU costing is enabled) and to scale physical and logical I/O appropriately. As a unitless number, it doesn't really relate to something "real" like the expected number of buffer gets. It is also determined in part by initialization parameters, session settings, system statistics, etc. that may artificially increase or decrease the cost of certain operations.
    Beyond that, however, cost is problematic because it is only as accurate as the optimizer's estimates. If the optimizer's estimates are accurate, that implies that the cost is reasonably representative (in the sense that a query with a cost of 200 will run in less time than a query with a cost of 20000). But if you're looking at a query plan, it's generally because you believe there may be a problem which means that you are inherently suspicious that some of the optimizer's estimates are incorrect. If that's the case, you should generally distrust the cost.
    Justin
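    A minimal sketch of the cardinality check Justin suggests, reusing the table from the question (it requires running the statement once with the gather_plan_statistics hint):
        SELECT /*+ gather_plan_statistics */ *
        FROM   isis.clas_rost
        WHERE  cour_off_# = 28;
        -- E-Rows is the optimizer's estimate, A-Rows is what actually came back;
        -- large gaps between the two point to the step worth investigating.
        SELECT *
        FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));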

  • Explain Plan and  COST

    Hi,
    Is there any relationship between the cost factor in the explain plan and query execution time? I have come across situations both ways, i.e. high cost and low execution time, and low cost and high execution time. What should we consider when tuning a query - high cost or low cost? My assumption is that a lower cost does not guarantee a faster response time, yet I have seen many people try to reduce the cost first. Can anyone help me with this issue, please?
    Thanks - Bhaskar

    Cost is a metric that Oracle uses to estimate the runtime of a query. However, it is not something that you probably ought to pay a lot of attention to. If you are looking at the performance of a query, you are presumably operating on the assumption that the optimizer may have chosen an incorrect plan. If the optimizer chose an incorrect plan then by definition the cost is incorrect. If the cost is correct then, by definition, Oracle chose the correct plan.
    It makes much more sense to focus on things like the cardinality of each step (the number of rows each step in the plan is expected to return). If the cardinality estimates are correct (or reasonably close), then the optimizer probably chose the correct plan. If the cardinality estimates are way off, the optimizer probably chose a less than optimal plan. And when you find the first step where the cardinality estimates are wildly incorrect, you'll know where you need to start focusing your tuning efforts.
    Justin
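    A quick illustration of checking a single step's estimate against reality (the table and predicate here are hypothetical):
        EXPLAIN PLAN FOR SELECT * FROM orders WHERE status = 'OPEN';
        -- The optimizer's row estimate for each step...
        SELECT id, operation, object_name, cardinality FROM plan_table ORDER BY id;
        -- ...versus the real row count for the same predicate.
        SELECT COUNT(*) FROM orders WHERE status = 'OPEN';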

  • What is the "cost" in an explain plan

    We are using Oracle 10 with the cost based optimizer in our PeopleSoft system. When we come across long running SQL statements we run explain plans. I have heard different explanations of what cost means. The latest I heard is disk access time, with a cost of one being 15 milliseconds for one disk access. So if a select statement has a cost of, say, 1000, does that mean it is taking .015 x 1000, or 15 seconds, to bring in all of the rows? Is this accurate?
    Thanks,
    Allen Cunningham
    DBA, Sonoma State University

    That question is not directly linked to Peoplesoft where you initially posted your thread.
    Anyway, the situation is not that simple as you said, have a look to the Jonathan Lewis' blog entry :
    http://jonathanlewis.wordpress.com/2006/12/11/cost-is-time/
    Nicolas.
    <Thread moved to Database General Forum: General Database Discussions>
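    As background to the "cost is time" point in the link above: in 10g the Time column shown by DBMS_XPLAN is essentially the cost scaled by the system statistics, so any "1 cost unit = 15 ms" rule of thumb only holds for whatever read times your own system records. A sketch of how to inspect those figures (it assumes SELECT privilege on the SYS table):
        SELECT pname, pval1
        FROM   sys.aux_stats$
        WHERE  sname = 'SYSSTATS_MAIN';
        -- SREADTIM / MREADTIM (ms per single/multiblock read), CPUSPEED, MBRC, etc.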

  • Cost in Explain Plan

    Hi all,
    Is the "COST" column in explain plan CPU or MEMORY usage cost? or both
    Thanks

    I am not sure whether this is documented anywhere else with the same level of detail.
    http://download.oracle.com/docs/cd/E11882_01/server.112/e16638/optimops.htm#sthref860 says:
    >
    The cost is an estimated value proportional to the expected resource use needed to execute the statement with a particular plan. The optimizer calculates the cost of access paths and join orders based on the estimated computer resources, which includes I/O, CPU, and memory.
    >
    and http://download.oracle.com/docs/cd/E11882_01/server.112/e16638/ex_plan.htm#sthref1110 says
    >
    Cost of the operation as estimated by the optimizer's query approach. Cost is not determined for table access operations. The value of this column does not have any particular unit of measurement; it is merely a weighted value used to compare costs of execution plans. The value of this column is a function of the CPU_COST and IO_COST columns.
    >
    Edited by: P. Forstmann on 6 avr. 2011 08:38
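    For completeness, a small sketch showing where the CPU_COST and IO_COST components mentioned in the documentation live (the statement and schema are just examples):
        EXPLAIN PLAN FOR SELECT * FROM hr.employees WHERE department_id = 50;
        -- COST is a weighted combination of the I/O and CPU components below.
        SELECT id, operation, cost, io_cost, cpu_cost
        FROM   plan_table
        ORDER  BY id;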

  • Explain plan - lower cost but higher response time in 11g compared to 10g

    Hello,
    I have a strange scenario where I'm migrating a DB from a standalone Sun FS running a 10g RDBMS to a 2-node Sun/ASM 11g RAC environment. The issue is with the response time of queries -
    In 11g Env:
    SQL> select last_analyzed, num_rows from dba_tables where owner='MARKETHEALTH' and table_name='NCP_DETAIL_TAB';
    LAST_ANALYZED NUM_ROWS
    11-08-2012 18:21:12 3413956
    Elapsed: 00:00:00.30
    In 10g Env:
    SQL> select last_analyzed, num_rows from dba_tables where owner='MARKETHEALTH' and table_name='NCP_DETAIL_TAB';
    LAST_ANAL NUM_ROWS
    07-NOV-12 3502160
    Elapsed: 00:00:00.04
    If you look at the response times, even a simple query on dba_tables takes ~8 times longer. Any ideas what might be causing this? I have compared the XPlans and they are exactly the same; moreover, the cost is lower in the 11g env compared to the 10g env, but the response time is still higher.
    BTW - I'm running the queries directly on the server, so there's no network latency in play here.
    Thanks in advance
    aBBy.
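    One possible check, offered here only as a hedged sketch and not as a diagnosis: after a migration or upgrade, slow queries against dictionary views such as DBA_TABLES are sometimes caused by stale dictionary or fixed-object statistics.
        -- Run as a suitably privileged user; both are documented DBMS_STATS procedures.
        EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
        EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;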

    *11g Env:*
    PLAN_TABLE_OUTPUT
    Plan hash value: 4147636274
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1104 | 376K| 394 (1)| 00:00:05 |
    | 1 | SORT ORDER BY | | 1104 | 376K| 394 (1)| 00:00:05 |
    | 2 | TABLE ACCESS BY INDEX ROWID| NCP_DETAIL_TAB | 1104 | 376K| 393 (1)| 00:00:05 |
    |* 3 | INDEX RANGE SCAN | IDX_NCP_DET_TAB_US | 1136 | | 15 (0)| 00:00:01 |
    PLAN_TABLE_OUTPUT
    Predicate Information (identified by operation id):
    3 - access("UNIT_ID"='ten03.burien.wa.seattle.comcast.net')
    15 rows selected.
    *10g Env:*
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 4147636274
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1137 | 373K| 389 (1)| 00:00:05 |
    | 1 | SORT ORDER BY | | 1137 | 373K| 389 (1)| 00:00:05 |
    | 2 | TABLE ACCESS BY INDEX ROWID| NCP_DETAIL_TAB | 1137 | 373K| 388 (1)| 00:00:05 |
    |* 3 | INDEX RANGE SCAN | IDX_NCP_DET_TAB_US | 1137 | | 15 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    3 - access("UNIT_ID"='ten03.burien.wa.seattle.comcast.net')
    15 rows selected.
    The query used is:
    explain plan for
    select
    NCP_DETAIL_ID ,
    NCP_ID ,
    STATUS_ID ,
    FIBER_NODE ,
    NODE_DESC ,
    GL ,
    FTA_ID ,
    OLD_BUS_ID ,
    VIRTUAL_NODE_IND ,
    SERVICE_DELIVERY_TYPE ,
    HHP_AUDIT_QTY ,
    COMMUNITY_SERVED ,
    CMTS_CARD_ID ,
    OPTICAL_TRANSMITTER ,
    OPTICAL_RECEIVER ,
    LASER_GROUP_ID ,
    UNIT_ID ,
    DS_SLOT ,
    DOWNSTREAM_PORT_ID ,
    DS_PORT_OR_MOD_RF_CHAN ,
    DOWNSTREAM_FREQ ,
    DOWNSTREAM_MODULATION ,
    UPSTREAM_PORT_ID ,
    UPSTREAM_PORT ,
    UPSTREAM_FREQ ,
    UPSTREAM_MODULATION ,
    UPSTREAM_WIDTH ,
    UPSTREAM_LOGICAL_PORT ,
    UPSTREAM_PHYSICAL_PORT ,
    NCP_DETAIL_COMMENTS ,
    ROW_CHANGE_IND ,
    STATUS_DATE ,
    STATUS_USER ,
    MODEM_COUNT ,
    NODE_ID ,
    NODE_FIELD_ID ,
    CREATE_USER ,
    CREATE_DT ,
    LAST_CHANGE_USER ,
    LAST_CHANGE_DT ,
    UNIT_ID_IP ,
    US_SLOT ,
    MOD_RF_CHAN_ID ,
    DOWNSTREAM_LOGICAL_PORT ,
    STATE
    from markethealth.NCP_DETAIL_TAB
    WHERE UNIT_ID = :B1
    ORDER BY UNIT_ID, DS_SLOT, DS_PORT_OR_MOD_RF_CHAN, FIBER_NODE
    This is the query used for Query 1.
    Stats differences are:
    1. The row count differs by approximately 90K - more rows in the 10g env.
    2. The RAC env has 4 additional columns (excluded from the select statement for analysis purposes).
    3. Gather Stats was performed with estimate_percent = 20 in 10g and estimate_percent = 50 in 11g.
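    Given point 3, a sketch of gathering the statistics the same way in both environments so the plans are compared on equal footing (owner/table names are taken from the post; AUTO_SAMPLE_SIZE is the usual 11g choice):
        BEGIN
          DBMS_STATS.GATHER_TABLE_STATS(
            ownname          => 'MARKETHEALTH',
            tabname          => 'NCP_DETAIL_TAB',
            estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
            cascade          => TRUE);
        END;
        /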

  • Explain Plan RULE o COST?

    Hello, I want to know the best way to execute the following query.
    The explain plan without the hint shows the costs, and they are low, but with the RULE hint it doesn't show any cost statistics for the query.
    SELECT *
    FROM encmov e,
    detdoc d
    WHERE e.cod_tip_com = d.cod_tip_com
    AND d.cod_tip_con = '16550'
    AND d.sec_com = e.sec_com
    AND d.cod_per >= 200601
    AND d.cod_tip_com <> '982'
    AND -e.cod_est NOT IN (2, 8)
    AND -e.cod_ofi = 2
    AND ROWNUM < 2
    SELECT /*+ RULE*/ *
    FROM encmov e,
    detdoc d
    WHERE e.cod_tip_com = d.cod_tip_com
    AND d.cod_tip_con = '16550'
    AND d.sec_com = e.sec_com
    AND d.cod_per >= 200601
    AND d.cod_tip_com <> '982'
    AND -e.cod_est NOT IN (2, 8)
    AND -e.cod_ofi = 2
    AND ROWNUM < 2
    ...... RULE or COST ??
    Sorry for my English.
    Thanks.

    Also bear in mind that RULE is the 'old' way of Oracle determining the best optimization plan (and I think it has been removed from 11g onwards).
    Rule-based optimization basically has a list of rules that it compares the query against to determine the best way to execute it.
    Cost-based optimization makes an intelligent decision based on statistics on the tables and the cardinality and selectivity of the data in those tables.
    IMHO, cost-based should be used. If you are having performance issues and find it's quicker using rule-based, then you need to track down the cause of the slowness under cost-based and fix that, because ultimately cost-based will be able to go faster.
    ;)
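    A short sketch of the comparison being discussed, using the tables from the question (note that a RULE-hinted plan carries no cost figures at all, which is why no statistics appear with the hint):
        -- What optimizer mode is the session actually using?
        SELECT value FROM v$parameter WHERE name = 'optimizer_mode';
        -- Cost-based plan for the unhinted statement.
        EXPLAIN PLAN SET STATEMENT_ID = 'cbo' FOR
          SELECT * FROM encmov e, detdoc d
          WHERE  e.cod_tip_com = d.cod_tip_com
          AND    d.sec_com = e.sec_com
          AND    ROWNUM < 2;
        SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE', 'cbo'));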

  • Problems with explain plan and statement

    Hi community,
    I have migrated a j2ee application from DB2 to Oracle.
    First some facts of our application and database instance:
    We are using oracle version 10.2.0.3 and driver version 10.2.0.3. It runs with charset Unicode 3.0 UTF-8.
    Our application is using Tomcat as web container and jboss as application server. We are only using prepared statements. So if I talk about statements I always mean prepared statements. Also our application is setting the defaultNChar property to true because every char and varchar field has been created as an nchar and nvarchar.
    We have some JSP pages that contain lists with search forms. Every time I enter a value into a form that returns a filled result set, the lists perform great. But every time I enter a value that returns an empty result set, the lists are 100 times slower. The JSP pages run in the Tomcat environment and submit their statements directly to the database. The connections are pooled by DBCP. So what can cause this behaviour?
    To analyze this problem I started logging all statements and the filled-in search field values and combinations executed by the lists described above. I also developed a standalone helper tool that reads the logged statements, executes them against the database and generates an explain plan for every statement. But now a strange situation appears: every statement that performs really fast within our application is executed extremely slowly by the helper tool. So I edited some JSP pages within our application to force an explain plan from there (Tomcat env). When executing the same statement with exactly the same code, I'm getting two completely different explain plans.
    First the statement itself:
    select LINVIN.BBASE , INVINNUM , INVINNUMALT , LINVIN.LSUPPLIERNUM , LSUPPLIERNUMEXT , LINVIN.COMPANYCODE , ACCOUNT , INVINTXT , INVINSTS , INVINTYP , INVINDAT , RECEIPTDAT , POSTED , POSTINGDATE , CHECKCOSTCENTER , WORKFLOWIDEXT , INVINREFERENCE , RESPONSIBLEPERS , INVINSUM_V , INVINSUMGROSS_V , VOUCHERNUM , HASPOSITIONS , PROCESSINSTANCEID , FCURISO_V , LSUPPLIER.AADDRLINE1 from LINVIN, LSUPPLIER where LINVIN.BBASE = LSUPPLIER.BBASE and LINVIN.LSUPPLIERNUM = LSUPPLIER.LSUPPLIERNUM and LINVIN.BBASE = ? order by LINVIN.BBASE, INVINDAT DESC
    Now the explain plan from our application:
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 101 | 28583 | 55 (0)| 00:00:01 |
    | 1 | NESTED LOOPS | | 101 | 28583 | 55 (0)| 00:00:01 |
    | 2 | TABLE ACCESS BY INDEX ROWID| LINVIN | 93709 | 12M| 25 (0)| 00:00:01 |
    |* 3 | INDEX RANGE SCAN | LINV_INVDAT | 101 | | 1 (0)| 00:00:01 |
    | 4 | TABLE ACCESS BY INDEX ROWID| LSUPPLIER | 1 | 148 | 1 (0)| 00:00:01 |
    |* 5 | INDEX UNIQUE SCAN | PK_177597 | 1 | | 1 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    3 - access("LINVIN"."BBASE"=:1)
    filter("LINVIN"."BBASE"=:1)
    5 - access("LSUPPLIER"."BBASE"=:1 AND "LINVIN"."LSUPPLIERNUM"="LSUPPLIER"."LSUPPLIERNUM")
    Now the one from the standalone tool:
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 93773 | 25M| | 12898 (1)| 00:02:35 |
    | 1 | SORT ORDER BY | | 93773 | 25M| 61M| 12898 (1)| 00:02:35 |
    |* 2 | HASH JOIN | | 93773 | 25M| 2592K| 7185 (1)| 00:01:27 |
    | 3 | TABLE ACCESS BY INDEX ROWID| LSUPPLIER | 16540 | 2390K| | 332 (0)| 00:00:04 |
    |* 4 | INDEX RANGE SCAN | LSUPPLIER_HAS_BASE_FK | 16540 | | | 11 (0)| 00:00:01 |
    | 5 | TABLE ACCESS BY INDEX ROWID| LINVIN | 93709 | 12M| | 6073 (1)| 00:01:13 |
    |* 6 | INDEX RANGE SCAN | LINVOICE_BMDT_FK | 93709 | | | 84 (2)| 00:00:02 |
    Predicate Information (identified by operation id):
    2 - access("LINVIN"."BBASE"="LSUPPLIER"."BBASE" AND "LINVIN"."LSUPPLIERNUM"="LSUPPLIER"."LSUPPLIERNUM")
    4 - access("LSUPPLIER"."BBASE"=:1)
    6 - access("LINVIN"."BBASE"=:1)
    The size of the tables are: LINVIN - 383.692 Rows, LSUPPLIER - 115.782 Rows
    As you can see, the one executed from our application is much faster than the one from the helper tool. So why does Oracle pick a completely different explain plan for the same statement? And why is the hash join much slower than the nested loop? Because, if I'm right, a nested loop should only be used when the tables are pretty small.
    I also tried to play with some parameters:
    I set optimizer_index_caching to 100 and optimizer_index_cost_adj to 30. I also changed optimizer_mode to FIRST_ROWS_100.
    I would really appreciate it if somebody could help me with this issue, because I'm really getting more and more distressed...
    Thanks in advance,
    Tobias
    Edited by: tobiwan on Sep 4, 2008 12:07 AM

    tobiwan wrote:
    Hi again,
    Here ist the answer:
    The problem, because I got two different explain plans, was that the external tool uses the NLS session parameters coming from the OS, which are in my case "de/DE".
    Within our application these parameters are changed to "en/US"!! So if I call the Java function Locale.setDefault(new Locale("en","US")) in my external tool before connecting to the database, the explain plans are finally equal.
    That might explain why you got two different execution plans: one plan was obviously able to avoid a SORT ORDER BY operation, whereas the second plan had to run a SORT ORDER BY operation, obviously because of the different NLS_SORT settings. An index by default uses the NLS_SORT = 'binary' order, whereas ORDER BY obeys the NLS_SORT setting, which was probably set to 'GERMAN' in your "external tool" case. You can check the "NLS_SESSION_PARAMETERS" view to see your current NLS_SORT setting.
    For more information regarding this issue, see my blog note I've written about this some time ago:
    http://oracle-randolf.blogspot.com/2008/09/getting-first-rows-of-large-sorted.html
    Now let me make a guess why you observe the behaviour that it takes so long if your result set is empty:
    The plan avoiding the SORT ORDER BY is able to return the first rows of the result set very quickly, but could take quite a while until all rows are processed, since it potentially requires many iterations of the loop until everything has been processed. Your front end probably by default only displays the first n rows of the result set and therefore works fine with this execution plan.
    Now if the result set is empty, depending on your data, indexes and search criteria, Oracle has to work through all the data using the inefficient NESTED LOOP approach only to find out that no data has been found, and since your application attempts to fetch the first n records, but no records will be found, it has to wait until all data has been processed.
    You can try to reproduce this by deliberately fetching all records of a query that returns data and that uses the NESTED LOOP approach... It probably takes as long as in the case when no records are found.
    Note that you seem to use bind variables and 10g, therefore you might be interested that due to the "bind variable peeking" functionality you might potentially end up with "unstable" plans depending on the values "peeked" when the statement is parsed.
    For more information, see this comprehensive description of the issue:
    http://www.pythian.com/blogs/867/stabilize-oracle-10gs-bind-peeking-behaviour-by-cutting-histograms
    Note that this changes in 11g with the introduction of the "Adaptive Cursor Sharing".
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/
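    A minimal sketch of the NLS check Randolf describes (the views and parameters are as documented; GERMAN is just the example sort from this thread):
        -- What is the session sorting and comparing with?
        SELECT parameter, value
        FROM   nls_session_parameters
        WHERE  parameter IN ('NLS_SORT', 'NLS_COMP');
        -- With a binary sort the optimizer can use the index order and skip the
        -- SORT ORDER BY step; a linguistic sort such as GERMAN cannot.
        ALTER SESSION SET NLS_SORT = BINARY;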
