CBO performance

I'm trying to figure out how the CBO works and which parameters I should change to get it to work without "surprises". My Oracle version is 11.1.0.7, and this test was run both single-instance and on RAC, on SUSE 10.
Here's the query:
SELECT * FROM
RAPIPAGO.RP_TRANSACCION_ITEM I
JOIN
RAPIPAGO.RP_TRANSACCION_ITEM_COMISION IT
ON I.ID_TRANSACCION_ITEM = IT.ID_TRANSACCION_ITEM
WHERE MES_PRESENTACION = '2009-05';
Or
SELECT * FROM
RAPIPAGO.RP_TRANSACCION_ITEM I,
RAPIPAGO.RP_TRANSACCION_ITEM_COMISION IT
WHERE I.ID_TRANSACCION_ITEM = IT.ID_TRANSACCION_ITEM AND MES_PRESENTACION = '2009-05';
Its cost is 156,175 and it takes 46 seconds to complete.
If I use a hint to force the optimizer to use a combined index (on ID_TRANSACCION_ITEM and MES_PRESENTACION, in that order), this is the result:
SELECT /*+ INDEX (I IX4_RP_TRANSACCION_ITEM) */ * FROM
RAPIPAGO.RP_TRANSACCION_ITEM I,
RAPIPAGO.RP_TRANSACCION_ITEM_COMISION IT
WHERE I.ID_TRANSACCION_ITEM = IT.ID_TRANSACCION_ITEM AND MES_PRESENTACION = '2009-05';
Its cost is 2,697,283 but it takes only 1 second to complete...
This behavior is causing me trouble in the production environment, as it is unpredictable and inefficient.
Is there any configuration I can use to avoid or control this?
Thanks in advance.

If MES_PRESENTACION is a VARCHAR2, Oracle's ability to get accurate cardinality estimates will be greatly affected. By default the optimizer expects, for example, that a DATE column with values from date '2008-01-01' through date '2009-08-01' represents 20 months, so any one month will hold roughly 5% of the data. If you store that data as a string, however, the optimizer's ability to predict how selective a filter will be is dramatically reduced.
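To illustrate, here is a sketch of the kind of predicate that keeps the optimizer's date arithmetic intact. The column name MES_PRESENTACION_DT is hypothetical (it assumes the month were stored as a DATE holding the first day of the month, and that the column lives on RP_TRANSACCION_ITEM, which the questions below ask to confirm):

```sql
-- Hypothetical: with the month stored as a DATE column, a range predicate
-- lets the optimizer interpolate between the column's low and high values
-- when estimating cardinality.
SELECT i.*, it.*
FROM   rapipago.rp_transaccion_item i
JOIN   rapipago.rp_transaccion_item_comision it
       ON i.id_transaccion_item = it.id_transaccion_item
WHERE  i.mes_presentacion_dt >= DATE '2009-05-01'
AND    i.mes_presentacion_dt <  DATE '2009-06-01';
```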
Can you generate the query plans using the DBMS_XPLAN package and include the filter and access predicates? When you do, can you enclose them in code tags to preserve white space? DBMS_XPLAN provides a lot of information that can be useful.
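A minimal way to produce such a plan, predicates included, is standard DBMS_XPLAN usage (the query below is the poster's own):

```sql
EXPLAIN PLAN FOR
SELECT *
FROM   rapipago.rp_transaccion_item i
JOIN   rapipago.rp_transaccion_item_comision it
       ON i.id_transaccion_item = it.id_transaccion_item
WHERE  mes_presentacion = '2009-05';

-- The default TYPICAL format includes the predicate section.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```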
Does the query really return 2.5 million rows?
How many rows are in RP_TRANSACCION_ITEM?
How many rows are in RP_TRANSACCION_ITEM_COMISION?
Which table is the MES_PRESENTACION column in?  How many rows have a MES_PRESENTACION value of '2009-05'?
Is there a histogram on the MES_PRESENTACION column?
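A sketch of how to answer the row-count and histogram questions from the data dictionary (this assumes the column is on RP_TRANSACCION_ITEM, which is exactly what is being asked):

```sql
-- Table sizes as the optimizer sees them
SELECT table_name, num_rows
FROM   all_tables
WHERE  owner = 'RAPIPAGO'
AND    table_name IN ('RP_TRANSACCION_ITEM', 'RP_TRANSACCION_ITEM_COMISION');

-- Column statistics, including any histogram
SELECT column_name, num_distinct, histogram, num_buckets
FROM   all_tab_col_statistics
WHERE  owner = 'RAPIPAGO'
AND    table_name = 'RP_TRANSACCION_ITEM'
AND    column_name = 'MES_PRESENTACION';
```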
Justin

Similar Messages

  • CBO Performing Bad as compared to RBO any suggestions

    Hi
I am migrating from the RBO to the CBO. I have gathered statistics on the schema's tables and indexes. The response times under the CBO are really bad compared to the RBO. Can anyone suggest where I should start to make the performance better?
    Thanks

In order to get a list of the names and values of the optimizer parameters your session is using, you can use the 10053 event. Here is an example:
HR on 25/03/2006 19:55:33 at XE > alter session set events '10053 trace name context forever, level 1';
Session altered.
HR on 25/03/2006 19:55:51 at XE > SELECT TO_CHAR(sysdate,'DD-MON-YY HH24:MI:SS') "Start" FROM dual;
Start
25-MAR-06 19:55:59
1 row selected.
HR on 25/03/2006 19:55:59 at XE > ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT OFF';
A file with a .trc extension will be created in the user_dump_dest directory on your OS.
    Hope this helps, kind regards.
    Tonguç

  • CBO: OWB Dimension Performance Isssue (DIMENSION_KEY=DIM_LEVEL_ID)

    Hi
In my opinion the OWB dimensions are very useful, but sometimes there are performance issues.
I work with OWB dimensions quite a lot, and with big dimensions (> 100,000 rows) I often get performance problems when OWB generates the code to load (merge step) or look up these dimensions.
OWB dimensions have a PK on DIMENSION_KEY, and level surrogate IDs which are equal to the DIMENSION_KEY if the row is an element of that level (and not a parent hierarchy element).
I have hunted the problem down to the condition DIMENSION_KEY = (DETAIL_)LEVEL_SURROGATE_ID. OWB does that to get only the rows with (detail-)level attributes.
But it seems that the CBO isn't able to predict the cardinality correctly: it always assumes that the result cardinality of that condition is 1 row. So I assume that condition is the reason for the "bad" execution plans, namely a
"NESTED LOOPS OUTER" with the inline view at cardinality = 1.
    Example:
    SELECT COUNT(*) FROM DIM_KONTO_TAB  WHERE DIMENSION_KEY= KONTO_ID;
    --2506194
    Explain Plan for:
    SELECT DIMENSION_KEY, KONTO_ID
    FROM DIM_KONTO_TAB where DIMENSION_KEY= KONTO_ID;
    +| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |+
    +| 0 | SELECT STATEMENT | | 1 | 12 | 12568 (3)| 00:00:01 | | |+
    +| 1 | PARTITION HASH ALL | | 1 | 12 | 12568 (3)| 00:00:01 | 1 | 8 |+
    +|* 2 | TABLE ACCESS STORAGE FULL| DIM_KONTO_TAB | 1 | 12 | 12568 (3)| 00:00:01 | 1 | 8 |+
    Predicate Information (identified by operation id):
    +2 - STORAGE("DIMENSION_KEY"="KONTO_ID")+
    filter("DIMENSION_KEY"="KONTO_ID")
    Or: For Loading an SCD2 Dimension:
    +|* 12 | FILTER | | | | | | Q1,01 | PCWC | |+
    +| 13 | NESTED LOOPS OUTER | | 328K| 3792M| 3968 (2)| 00:00:01 | Q1,01 | PCWP | |+
    +| 14 | PX BLOCK ITERATOR | | | | | | Q1,01 | PCWC | |+
    +| 15 | TABLE ACCESS STORAGE FULL | OWB$KONTO_STG_D35414 | 328K| 2136M| 27 (4)| 00:00:01 | Q1,01 | PCWP | |+
    +| 16 | VIEW | | 1 | 5294 | | | Q1,01 | PCWP | |+
    +|* 17 | TABLE ACCESS STORAGE FULL | DIM_KONTO_TAB | 1 | 247 | 489 (2)| 00:00:01 | Q1,01 | PCWP | |+
I have tried a lot:
- statistics are gathered often, with monitoring information and (frequency) histograms on the condition columns
- created extended statistics: DBMS_STATS.CREATE_EXTENDED_STATS(USER, 'DIM_KONTO_TAB', '(DIMENSION_KEY, KONTO_ID)')
- created a combined index on DIMENSION_KEY, LEVEL_SURROGATE_ID
- read a lot
- hinted the queries in OWB (but it seems the inline view is too complex to use a hash join)
    Next Step:
    -Tracing the Optimizer CBO Events.
Does someone have an idea how to help the CBO get the cardinality right?
If you need more information, please tell me.
    Thanks a lot.
    Moritz
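One technique sometimes suggested for column-to-column equality predicates like DIMENSION_KEY = KONTO_ID, and not among the steps tried above, is a dynamic sampling hint, which lets the optimizer sample the table at parse time instead of relying on per-column statistics; a sketch:

```sql
-- Level 4+ dynamic sampling applies to single-table predicates that
-- reference two or more columns of the same table, which is this case.
SELECT /*+ dynamic_sampling(t 4) */ dimension_key, konto_id
FROM   dim_konto_tab t
WHERE  dimension_key = konto_id;
```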

    Hi Patrick,
For a relational dimension, these values must be unique within the level. A numeric ID is not required (although a numeric ID best follows the best practices for surrogate keys).
If you use the same sequence for the whole dimension, you have ensured that each entry in the entire dimension is unique, which means that you can move your data as-is into OLAP solutions. We will do this as well in the next major release.
    Hope that helps,
    Jean-Pierre

  • Report Performance with Bind Variable

    Getting some very odd behaviour with a report in APEX v 3.2.1.00.10
    I have a complex query that takes 5 seconds to return via TOAD, but takes from 5 to 10 minutes in an APEX report.
I've narrowed it down to one particular bind. If I hard-code the date it returns in 6 seconds, but if I let the date be passed in from a parameter it takes 5+ minutes again.
    Relevant part of the query (an inline view) is:
    ,(select rglr_lect lect
    ,sum(tpm) mtr_tpm
    ,sum(enrols) mtr_enrols
    from ops_dash_meetings_report
    where meet_ev_date between to_date(:P35_END_DATE,'DD/MM/YYYY') - 363 and to_date(:P35_END_DATE,'DD/MM/YYYY')
    group by rglr_lect) RPV
    I've tried replacing the "to_date(:P35_END_DATE,'DD/MM/YYYY') - 363" with another item which is populated with the date required (and verified by checking session state). If I replace the :P35_END_DATE with an actual date the performance is fine again.
    The weird thing is that a trace file shows me exactly the same Explain Plan as the TOAD Explain where it runs in 5 seconds.
    Another odd thing is that another page in my application has the same inline view and doesn't hit the performance problem.
The trace file did show some control characters (^M, i.e. carriage returns) after each line of this report's query, which weren't anywhere else in the traced queries. I wondered if there was some sort of corruption in the source?
No problems due to pagination, as the result set is only 31 records and all are being displayed.
    Really stumped here. Any advice or pointers would be most welcome.
    Jon.
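One way to see the plan APEX actually executed, rather than an explain plan, is DBMS_XPLAN.DISPLAY_CURSOR against the cursor cache. A sketch (the LIKE filter is just one way to locate the statement's sql_id):

```sql
-- Find the statement's sql_id and child cursors in the shared pool
SELECT sql_id, child_number, plan_hash_value, executions,
       ROUND(elapsed_time / 1e6) AS elapsed_secs
FROM   v$sql
WHERE  sql_text LIKE '%ops_dash_meetings_report%';

-- Then display the actual plan for a given sql_id and child cursor
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'TYPICAL'));
```

If the APEX page and TOAD produce different plan_hash_value entries for the same statement, that confirms the CBO is choosing differently for the bind-variable case.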

Don't worry about the Time column; the cost and cardinality are more important for seeing whether the CBO is making different decisions for whatever reason.
Remember that the explain plan shows the expected execution plan, while a trace shows the actual execution plan. So what you want to do is compare a trace of the query with bind variables from an APEX page against a trace from TOAD (or SQL*Plus or whatever). You can do this outside APEX like this...
ALTER SESSION SET EVENTS '10046 trace name context forever, level 1';
Enter and run your SQL statement...
ALTER SESSION SET sql_trace=FALSE;
This will create a trace file in the directory returned by...
SELECT value FROM v$parameter WHERE name = 'user_dump_dest';
...which you can use tkprof to format.
I am assuming that you're not going over DB links or anything else slightly unusual?
    Cheers
    Ben

  • Disc Report Performance

    Hi Friends,
I have two PCs with the same configuration/OS/Discoverer version. These two machines are on the same network and access the same database. On one PC the report comes back in 2 minutes, whereas on the other PC it takes more than 15 minutes to execute.
Is there any Discoverer-level or machine-level setting that might be hampering performance on the second PC?
    Waiting for your response.
    Pankaj

    Pankaj.
The following are notes from my website that I posted long ago for version 3.x.
They still apply today for 4.x and 10.x Disco Desktop registry changes.
    1. Database\QPPEnable
    type - DWORD, default 1
    To Do: Turn off query prediction.
    This can be done by specifying the following registry key:
    HKEY_CURRENT_USER\Software\Oracle\Discoverer 3.1\Database\QPPEnable
    It should be set to a DWORD value of 0 (zero). To re-enable query prediction at some later point in time, either remove the registry key or set it to 1.
    2. Database\QPPCBOEnforced
    type - DWORD, default 1
    To Do: Stop query prediction forcing the use of the cost-based optimizer.
    This can be done by specifying the following registry key:
    HKEY_CURRENT_USER\Software\Oracle\Discoverer 3.1\Database\QPPCBOEnforced
    It should be set to a DWORD value of 0 (zero) which means use of the Cost-based Optimizer (CBO) is not enforced. The CBO will follow the normal rules of the database server.
    3. Database\ObjectsAlwaysAccessible
    type - DWORD, default 0
To Do: Stop validating that the tables/views the folders refer to exist and that the user has access to them.
    This can be done by specifying the following registry key:
    HKEY_CURRENT_USER\Software\Oracle\Discoverer 3.1\Database\ObjectsAlwaysAccessible
It should be set to a DWORD value > 0 (i.e. 1), which means validation will not take place.
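The three keys above can be applied at once with a .reg file; a sketch (back up the registry first, and adjust the "Discoverer 3.1" path to your installed version):

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Oracle\Discoverer 3.1\Database]
"QPPEnable"=dword:00000000
"QPPCBOEnforced"=dword:00000000
"ObjectsAlwaysAccessible"=dword:00000001
```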
    -----------------------------------------------------------------------------------------------------------------------

  • How to resolve most of the Oracle SQL , PL/SQL Performance issues with help of quick Checklist/guidelines ?

Please go through the checklist/guidelines below to identify and resolve most performance issues quickly.
Checklist for quick performance problem resolution
·         Get trace, code and other information for the given PE case:
          - latest code from the production env
          - trace (SQL queries, statistics, row source operations with row counts, explain plan, all wait events)
          - program parameters and their frequently used values
          - run frequency of the program
          - existing run-time/response time in production
          - business purpose
·         Identify the most time-consuming SQL, taking more than 60% of program time, using trace and code analysis
·         Check that all mandatory parameters/bind variables map directly to index columns of large transaction tables, without any functions
·         Identify the most time-consuming operation(s) using the Row Source Operation section
·         Study the program parameter inputs directly mapped into the SQL
·         Identify all input bind parameters being used by the SQL
·         Is the SQL query returning a large number of records for the given inputs?
·         What are the large tables, and which of their columns are mapped to input parameters?
·         Which operation is scanning the highest number of records in the Row Source Operation/Explain Plan?
·         Is the Oracle Cost Based Optimizer using the right driving table for the given SQL?
·         Check the time-consuming indexes on large tables and measure index selectivity
·         Study the WHERE clause for input parameters mapped to tables and their columns to find the correct/optimal usage of indexes
·         Is the correct index being used for all large tables?
·         Is there any full table scan on large tables?
·         Is there any unwanted table being used in the SQL?
·         Evaluate the join conditions on large tables and their columns
·         Is an FTS on a large table happening because non-indexed columns are used?
·         Is there any implicit or explicit conversion preventing an index from being used?
·         Are the statistics on all large tables up to date?
    Quick Resolution tips
1) Use Bulk Processing feature BULK COLLECT with LIMIT and FORALL for DML instead of row by row processing
    2) Use Data Caching Technique/Options to cache static data
    3) Use Pipe Line Table Functions whenever possible
    4) Use Global Temporary Table, Materialized view to process complex records
    5) Try avoiding multiple network trips for every row between two database using dblink, Use Global temporary table or set operator to reduce network trip
6) Use EXTERNAL Table to build interface rather than creating custom table and program to Load and validate the data
    7) Understand Oracle's Cost based Optimizer and Tune most expensive SQL queries with help of Explain plan
    8) Follow Oracle PL/SQL Best Practices
    9) Review tables and their indexes being used in the SQL queries and avoid unnecessary Table scanning
    10) Avoid costly Full Table Scan on Big Transaction tables with Huge data volume,
    11) Use appropriate filtration condition on index columns of seeded Oracle tables directly mapped to program parameters
    12) Review Join condition on existing query explain plan
    13) Use Oracle hint to guide Oracle Cost based optimizer to choose best plan for your custom queries
    14) Avoid applying SQL functions on index columns
    15) Use appropriate hint to guide Oracle CBO to choose best plan to reduce response time
    Thanks
    Praful

    I understand you were trying to post something helpful to people, but sorry, this list is appalling.
1) Use Bulk Processing feature BULK COLLECT with LIMIT and FORALL for DML instead of row by row processing
    No, use pure SQL.
    2) Use Data Caching Technique/Options to cache static data
    No, use pure SQL, and the database and operating system will handle caching.
    3) Use Pipe Line Table Functions whenever possible
    No, use pure SQL
    4) Use Global Temporary Table, Materialized view to process complex records
    No, use pure SQL
    5) Try avoiding multiple network trips for every row between two database using dblink, Use Global temporary table or set operator to reduce network trip
    No, use pure SQL
6) Use EXTERNAL Table to build interface rather than creating custom table and program to Load and validate the data
    Makes no sense.
    7) Understand Oracle's Cost based Optimizer and Tune most expensive SQL queries with help of Explain plan
    What about using the execution trace?
    8) Follow Oracle PL/SQL Best Practices
    Which are?
    9) Review tables and their indexes being used in the SQL queries and avoid unnecessary Table scanning
    You mean design your database and queries properly?  And table scanning is not always bad.
    10) Avoid costly Full Table Scan on Big Transaction tables with Huge data volume,
    It depends if that is necessary or not.
    11) Use appropriate filtration condition on index columns of seeded Oracle tables directly mapped to program parameters
    No, consider that too many indexes can have an impact on overall performance and can prevent the CBO from picking the best plan.  There's far more to creating indexes than just picking every column that people are likely to search on; you have to consider the cardinality and selectivity of data, as well as the volumes of data being searched and the most common search requirements.
    12) Review Join condition on existing query explain plan
    Well, if you don't have your join conditions right then your query won't work, so that's obvious.
    13) Use Oracle hint to guide Oracle Cost based optimizer to choose best plan for your custom queries
    No.  Oracle recommends you do not use hints for query optimization (it says so in the documentation).  Only certain hints such as APPEND etc. which are more related to certain operations such as inserting data etc. are acceptable in general.  Oracle recommends you use the query optimization tools to help optimize your queries rather than use hints.
    14) Avoid applying SQL functions on index columns
    Why?  If there's a need for a function based index, then it should be used.
    15) Use appropriate hint to guide Oracle CBO to choose best plan to reduce response time
    See 13.
    In short, there are no silver bullets for dealing with performance.  Each situation is different and needs to be evaluated on its own merits.
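To make the "use pure SQL" point concrete, here is a sketch contrasting row-by-row PL/SQL with a single set-based statement (the table and column names are hypothetical):

```sql
-- Row-by-row (slow): one SQL/PL-SQL context switch per row
BEGIN
  FOR r IN (SELECT order_id
            FROM   orders
            WHERE  order_date < ADD_MONTHS(SYSDATE, -24)) LOOP
    UPDATE orders SET status = 'ARCHIVED' WHERE order_id = r.order_id;
  END LOOP;
END;
/

-- Pure SQL (fast): one statement, one pass over the data
UPDATE orders
SET    status = 'ARCHIVED'
WHERE  order_date < ADD_MONTHS(SYSDATE, -24);
```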

  • Performance issue on 1 SQL request

    Hi,
We have a performance problem. We have two systems: PRD and QAS (QAS is a copy of PRD as of September 2nd).
The SQL request is identical.
The table structures are identical.
The indexes are identical.
The views are identical.
    DB stats have all been recalculated on both systems
    initSID.ora values are almost identical. only memory related parameters (and SID) are different.
    Obviously, data is different
For your info, the view ZBW_VIEW_EKPO fetches its info from tables EIKP, LFA1, EKKO and EKPO.
    Starting on September 15th, a query that used to take 10 minutes started taking over 120 minutes.
    I compared explain plans on both system and they are really different:
    SQL request:
    SELECT
      "MANDT" , "EBELN" , "EBELP" , "SAISO" , "SAISJ" , "AEDAT" , "AUREL" , "LOEKZ" , "INCO2" ,
      "ZZTRANSPORT" , "PRODA" , "ZZPRDHA" , "ZZMEM_DATE" , "KDATE" , "ZZHERKL" , "KNUMV" , "KTOKK"
    FROM
      "ZBW_VIEW_EKPO"
    WHERE
      "MANDT" = :A0#
    Explain plan for PRD:
    SELECT STATEMENT ( Estimated Costs = 300,452 , Estimated #Rows = 0 )
            8 HASH JOIN
              ( Estim. Costs = 300,451 , Estim. #Rows = 4,592,525 )
              Estim. CPU-Costs = 9,619,870,571 Estim. IO-Costs = 299,921
              Access Predicates
                1 TABLE ACCESS FULL EIKP
                  ( Estim. Costs = 353 , Estim. #Rows = 54,830 )
                  Estim. CPU-Costs = 49,504,995 Estim. IO-Costs = 350
                  Filter Predicates
                7 HASH JOIN
                  ( Estim. Costs = 300,072 , Estim. #Rows = 4,592,525 )
                  Estim. CPU-Costs = 9,093,820,218 Estim. IO-Costs = 299,571
                  Access Predicates
                    2 TABLE ACCESS FULL LFA1
                      ( Estim. Costs = 63 , Estim. #Rows = 812 )
                      Estim. CPU-Costs = 7,478,316 Estim. IO-Costs = 63
                      Filter Predicates
                    6 HASH JOIN
                      ( Estim. Costs = 299,983 , Estim. #Rows = 4,592,525 )
                      Estim. CPU-Costs = 8,617,899,244 Estim. IO-Costs = 299,508
                      Access Predicates
                        3 TABLE ACCESS FULL EKKO
                          ( Estim. Costs = 2,209 , Estim. #Rows = 271,200 )
                          Estim. CPU-Costs = 561,938,609 Estim. IO-Costs = 2,178
                          Filter Predicates
                        5 TABLE ACCESS BY INDEX ROWID EKPO
                          ( Estim. Costs = 290,522 , Estim. #Rows = 4,592,525 )
                          Estim. CPU-Costs = 6,913,020,784 Estim. IO-Costs = 290,141
                            4 INDEX SKIP SCAN EKPO~Z02
                              ( Estim. Costs = 5,144 , Estim. #Rows = 4,592,525 )
                              Search Columns: 2
                              Estim. CPU-Costs = 789,224,817 Estim. IO-Costs = 5,101
                             Access Predicates Filter Predicates
    Explain plan for QAS:
    SELECT STATEMENT ( Estimated Costs = 263,249 , Estimated #Rows = 13,842,540 )
            7 HASH JOIN
              ( Estim. Costs = 263,249 , Estim. #Rows = 13,842,540 )
              Estim. CPU-Costs = 59,041,893,935 Estim. IO-Costs = 260,190
              Access Predicates
                1 TABLE ACCESS FULL LFA1
                  ( Estim. Costs = 63 , Estim. #Rows = 812 )
                  Estim. CPU-Costs = 7,478,316 Estim. IO-Costs = 63
                  Filter Predicates
                6 HASH JOIN
                  ( Estim. Costs = 263,113 , Estim. #Rows = 13,842,540 )
                  Estim. CPU-Costs = 57,640,387,953 Estim. IO-Costs = 260,127
                  Access Predicates
                    4 HASH JOIN
                      ( Estim. Costs = 2,127 , Estim. #Rows = 194,660 )
                      Estim. CPU-Costs = 513,706,489 Estim. IO-Costs = 2,100
                      Access Predicates
                        2 TABLE ACCESS FULL EIKP
                          ( Estim. Costs = 351 , Estim. #Rows = 54,830 )
                          Estim. CPU-Costs = 49,504,995 Estim. IO-Costs = 348
                          Filter Predicates
                        3 TABLE ACCESS FULL EKKO
                          ( Estim. Costs = 1,534 , Estim. #Rows = 194,660 )
                          Estim. CPU-Costs = 401,526,622 Estim. IO-Costs = 1,513
                          Filter Predicates
                    5 TABLE ACCESS FULL EKPO
                      ( Estim. Costs = 255,339 , Estim. #Rows = 3,631,800 )
                      Estim. CPU-Costs = 55,204,047,516 Estim. IO-Costs = 252,479
                      Filter Predicates
One more bit of information: PRD was copied to TST about a month ago, and that copy is also slow.
I have tried almost everything I could think of.

    > DB stats have all been recalculated on both systems
    > initSID.ora values are almost identical. only memory related parameters (and SID) are different.
    > Obviously, data is different
    Ok, so you say: the parameters are different, the data is different and the statistics are different.
    I'm surprised that you still expect the plans to be the same...
> For your info, the view ZBW_VIEW_EKPO fetches its info from tables EIKP, LFA1, EKKO and EKPO.
    We will need to see the view definition !
    > Starting on September 15th, a query that used to take 10 minutes started taking over 120 minutes.
    Oh - Sep. 15th - that explains it ... just kiddin'.
Ok, so it appears obvious that from that day on, the execution plan for the query changed.
If you're on Oracle 10g you can look it up again, and also recall the CBO stats that were used back then.
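On 10g and later, past plans can be pulled from AWR, for example (a sketch; this requires the Diagnostics Pack license, and the sql_id must be looked up first):

```sql
-- All plans captured by AWR snapshots for a given statement,
-- including those used before the plan change
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_AWR('&sql_id'));
```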
    > I compared explain plans on both system and they are really different:
    >
    > SQL request:
    >
    SELECT
    >   "MANDT" , "EBELN" , "EBELP" , "SAISO" , "SAISJ" , "AEDAT" , "AUREL" , "LOEKZ" , "INCO2" ,
    >   "ZZTRANSPORT" , "PRODA" , "ZZPRDHA" , "ZZMEM_DATE" , "KDATE" , "ZZHERKL" , "KNUMV" , "KTOKK"
    > FROM
    >   "ZBW_VIEW_EKPO"
    > WHERE
    >   "MANDT" = :A0#
Ok - basically you fetch all rows from this view, as MANDT is usually not a selective criterion at all.
    > Explain plan for PRD:

    SELECT STATEMENT ( Estimated Costs = 300,452 , Estimated #Rows = 0 )
    >
    >         8 HASH JOIN
    >           ( Estim. Costs = 300,451 , Estim. #Rows = 4,592,525 )
    >           Estim. CPU-Costs = 9,619,870,571 Estim. IO-Costs = 299,921
    >           Access Predicates
    >
    >             1 TABLE ACCESS FULL EIKP
    >               ( Estim. Costs = 353 , Estim. #Rows = 54,830 )
    >               Estim. CPU-Costs = 49,504,995 Estim. IO-Costs = 350
    >               Filter Predicates
    >             7 HASH JOIN
    >               ( Estim. Costs = 300,072 , Estim. #Rows = 4,592,525 )
    >               Estim. CPU-Costs = 9,093,820,218 Estim. IO-Costs = 299,571
    >               Access Predicates
    >
    >                 2 TABLE ACCESS FULL LFA1
    >                   ( Estim. Costs = 63 , Estim. #Rows = 812 )
    >                   Estim. CPU-Costs = 7,478,316 Estim. IO-Costs = 63
    >                   Filter Predicates
    >                 6 HASH JOIN
    >                   ( Estim. Costs = 299,983 , Estim. #Rows = 4,592,525 )
    >                   Estim. CPU-Costs = 8,617,899,244 Estim. IO-Costs = 299,508
    >                   Access Predicates
    >
    >                     3 TABLE ACCESS FULL EKKO
    >                       ( Estim. Costs = 2,209 , Estim. #Rows = 271,200 )
    >                       Estim. CPU-Costs = 561,938,609 Estim. IO-Costs = 2,178
    >                       Filter Predicates
    >                     5 TABLE ACCESS BY INDEX ROWID EKPO
    >                       ( Estim. Costs = 290,522 , Estim. #Rows = 4,592,525 )
    >                       Estim. CPU-Costs = 6,913,020,784 Estim. IO-Costs = 290,141
    >
    >                         4 INDEX SKIP SCAN EKPO~Z02
    >                           ( Estim. Costs = 5,144 , Estim. #Rows = 4,592,525 )
    >                           Search Columns: 2
    >                           Estim. CPU-Costs = 789,224,817 Estim. IO-Costs = 5,101
    >                          Access Predicates Filter Predicates
Ok, we have no restriction on the data, so Oracle chooses the access methods it thinks are best for large volumes of data: full table scans and hash joins. The index skip scan is quite odd - maybe it is due to one of the join conditions.
    > Explain plan for QAS:

    SELECT STATEMENT ( Estimated Costs = 263,249 , Estimated #Rows = 13,842,540 )
    >
    >         7 HASH JOIN
    >           ( Estim. Costs = 263,249 , Estim. #Rows = 13,842,540 )
    >           Estim. CPU-Costs = 59,041,893,935 Estim. IO-Costs = 260,190
    >           Access Predicates
    >
    >             1 TABLE ACCESS FULL LFA1
    >               ( Estim. Costs = 63 , Estim. #Rows = 812 )
    >               Estim. CPU-Costs = 7,478,316 Estim. IO-Costs = 63
    >               Filter Predicates
    >             6 HASH JOIN
    >               ( Estim. Costs = 263,113 , Estim. #Rows = 13,842,540 )
    >               Estim. CPU-Costs = 57,640,387,953 Estim. IO-Costs = 260,127
    >               Access Predicates
    >
    >                 4 HASH JOIN
    >                   ( Estim. Costs = 2,127 , Estim. #Rows = 194,660 )
    >                   Estim. CPU-Costs = 513,706,489 Estim. IO-Costs = 2,100
    >                   Access Predicates
    >
    >                     2 TABLE ACCESS FULL EIKP
    >                       ( Estim. Costs = 351 , Estim. #Rows = 54,830 )
    >                       Estim. CPU-Costs = 49,504,995 Estim. IO-Costs = 348
    >                       Filter Predicates
    >                     3 TABLE ACCESS FULL EKKO
    >                       ( Estim. Costs = 1,534 , Estim. #Rows = 194,660 )
    >                       Estim. CPU-Costs = 401,526,622 Estim. IO-Costs = 1,513
    >                       Filter Predicates
    >
    >                 5 TABLE ACCESS FULL EKPO
    >                   ( Estim. Costs = 255,339 , Estim. #Rows = 3,631,800 )
    >                   Estim. CPU-Costs = 55,204,047,516 Estim. IO-Costs = 252,479
    >                   Filter Predicates
    Ok, we see significantly different table sizes here, but at least this second plan leaves out the superfluous Index Skip Scan.
    How to move on from here?
1. Check whether you've installed all the current patches. Not every bug present in the system is hit all the time, so it may very well be that after new CBO stats were calculated you simply began to hit one of them.
    2. Make sure that all parameter recommendations are implemented on the systems. This is crucial for the CBO.
    3. Provide a description of the Indexes and the view definition.
    The easiest would be: perform an Oracle CBO trace and provide a download link to it.
    regards,
    Lars

  • URGENT: Migrating from SQL to Oracle results in very poor performance!

    *** IMPORTANT, NEED YOUR HELP ***
Dear all, I have migrated a banking business solution from Windows/SQL Server 2000 to Sun Solaris/Oracle 10g. In the test environment everything was working fine. On the production system we have very poor DB performance - about 100 times slower than SQL Server 2000!
Environment at customer server side:
Hardware: Sun Fire with 4 CPUs; OS: Solaris 5.8; DB: Oracle 8 and 10
Data Storage: Em2
DB access through OCCI [Environment: OBJECT, Connection Pool, Create Connection]
Because of older applications it is necessary to run Oracle 8 as well on the same server. Since we have been running the new solution, which uses Oracle 10, the listener for Oracle 8 frequently disappears (or is killed by someone?). The performance of the whole Oracle 10 environment is very poor. As a result of my analysis I figured out that the process of creating a connection in the connection pool takes up to 14 seconds. Now I am wondering whether it is a problem to run different Oracle versions on the same server. The customer has installed/created the new Oracle 10 DB with the same user account (oracle) as the older version. To run the new solution we have to change the Oracle environment settings manually. All hints/suggestions to solve this problem are welcome. Thanks in advance.
    Anton

"On the production system we have very poor DB performance"
Have you verified that the cause of the poor performance is not the queries and the plans the database is generating for them?
    Do you know if some of the queries appear to take more time than what it used to be on old system? Did you analyze such queries to see what might be the problem?
    Are you running RBO or CBO?
If stats are generated, how are they generated and how often?
    Did you see what autotrace and tkprof has to tell you about problem queries (if in fact such queries have been identified)?
    http://download-west.oracle.com/docs/cd/B14117_01/server.101/b10752/sqltrace.htm#1052
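As a rough sketch of the approach described in the linked guide (the trace and report file names are placeholders):

```sql
-- Enable SQL trace for the current session, run the slow query, stop tracing.
ALTER SESSION SET TIMED_STATISTICS = TRUE;
ALTER SESSION SET SQL_TRACE = TRUE;
-- ... run the slow query here ...
ALTER SESSION SET SQL_TRACE = FALSE;
-- Then format the raw trace file with tkprof from the OS prompt, e.g.:
--   tkprof <tracefile>.trc report.txt sys=no
```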

  • SQL Area Usage, Performance Problem

    Hi,
    I'm a software engineer, not a DBA, with a lot of doubts about our production environment, but I think that I have found a problem at our production Database.
At our development database, when I execute the same SQL statement more than once, I see this behaviour (for example):
* First execution: 580 milliseconds.
* Second execution: 21 milliseconds.
As far as I know, the compiled SQL statement is stored in the SQL Area, and for that reason the second execution is faster than the first one. Is that assumption correct?
If it is correct, I have a problem on our production database, because it does not behave as expected. I have done a lot of trials, and SQL statements do not reduce their execution time on consecutive executions.
Could you help me? I think that the value of the shared_pool_size parameter is too low for our production server.
    Thanks in advance.
    Best Regards,
    Joan

Just a comment about performance tuning and troubleshooting in general, Joan.
    It is very dangerous to base your conclusions on observation only. Consider the following example:
    SQL> set timing on
    SQL> select count(*) from all_objects;
    COUNT(*)
    10296
    Elapsed: 00:00:18.38
    SQL>
    SQL> -- now using the magically warp speed hint
    SQL> select /*+ WARP_SPEED */ count(*) from all_objects;
    COUNT(*)
    10296
    Elapsed: 00:00:00.32
    SQL>
    From 18 seconds to less than half a second. It does look, based on pure observation of the results, that there is a WARP_SPEED hint in Oracle and it does increase performance dramatically. And we can also infer that this is an undocumented feature added by one of the Oracle kernel developers that is also a Star Trek fan. ;-)
But if we turn on SQL tracing (as suggested), we will see that the first SELECT did a lot of physical I/O. The 2nd SELECT did the same work, but without having to do the very expensive physical I/O, as the data blocks it needed to hit (again) were now in memory (inside Oracle's buffer cache).
    It had nothing to do with an invalid and non-existing CBO hint called WARP_SPEED.
    The critical bit is KNOWING exactly what you are measuring when using this type of approach. If you do not know that, you are in no position to determine a sound and valid conclusion.
Side note on shared pool size: one of the worst mistakes can be to increase it. It can do incredible damage to performance on an instance that deals with bindless/non-sharable SQL, as the pool Oracle needs to search to determine whether a soft parse is possible grows huge, without the benefit of being able to soft parse, so it is forced to hard parse anyway. And each of those hard parses then adds further to the size of the pool.
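One quick, hedged way to check whether an instance suffers from bindless SQL is to look for large families of statements in v$sql that differ only in their literals. A sketch; the prefix length and cut-off are arbitrary:

```sql
-- Statements sharing the same first 40 characters but parsed many times
-- are usually literal-only variations of one logical statement.
SELECT SUBSTR(sql_text, 1, 40) AS sql_start,
       COUNT(*)                AS versions
FROM   v$sql
GROUP  BY SUBSTR(sql_text, 1, 40)
HAVING COUNT(*) > 10
ORDER  BY COUNT(*) DESC;
```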

  • Query rewrite by CBO

    Hello,
Some days back, I came across a blog entry in which the author concluded that when a = b and b = c, Oracle does not conclude that a = c. He also provided a test case to prove his point. The URL is [http://sandeepredkar.blogspot.com/2009/09/query-performance-join-conditions.html]
Now, I thought that this could not be true. So I executed his test case (on 10.2.0.4) and the outcome indeed proved his point. Initially, I thought it might be due to the absence of a PK-FK relationship, but even after adding the PK-FK relationship there was no change in the outcome. However, when I modified the subquery to use a list of values, both queries performed equally. I tried asking the author on his blog, but it seems he has not yet seen my comment.
    I am pasting his test case below. Can somebody please help me understand why CBO does not/can not use optimal plan here?
    SQL> create table cu_all (custid number, addr varchar2(200), ph number, cano number, acctype varchar2(10));
    Table created.
    SQL> create table ca_receipt (custid number, caamt number, cadt date, totbal number);
    Table created.
    SQL>
    SQL> insert into cu_all
      2  select     lvl,
      3          dbms_random.string('A',30),
      4          round(dbms_random.value(1,100000)),
      5          round(dbms_random.value(1,10000)),
      6          dbms_random.string('A',10)
      7  from       (select level "LVL" from dual connect by level <=200000);
    200000 rows created.
    SQL> insert into ca_receipt
      2  select     round(dbms_random.value(1,10000)),
      3          round(dbms_random.value(1,100000)),
      4          sysdate - round(dbms_random.value(1,100000)),
      5          round(dbms_random.value(1,100000))
      6  from       (select level "LVL" from dual connect by level <=500000);
    500000 rows created.
    SQL> create unique index pk_cu_all_ind on cu_all(custid);
    Index created.
    SQL> create index ind2_cu_all on cu_all(CANO);
    Index created.
    SQL> create index ind_ca_receipt_custid on ca_receipt(custid);
    Index created.
    SQL> exec dbms_stats.gather_table_stats(user,'CU_ALL', cascade=>true);
    PL/SQL procedure successfully completed.
    SQL> exec dbms_stats.gather_table_stats(user,'CA_RECEIPT', cascade=>true);
    PL/SQL procedure successfully completed.
Now let us execute the query with trace on. This is similar to the query that was provided to me.
    SQL> set autot trace
    SQL> SELECT     ca.*, cu.*
      2  FROM ca_receipt CA,
      3       cu_all CU
      4  WHERE       CA.CUSTID = CU.CUSTID
      5  AND         CA.CUSTID IN (SELECT CUSTID FROM cu_all START WITH custid = 2353
      6                CONNECT BY PRIOR CUSTID = CANO)
      7  ORDER BY ACCTYPE DESC;
    289 rows selected.
    Execution Plan
    Plan hash value: 3186098611
    | Id  | Operation                           | Name                  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                    |                       |  1000 | 81000 |   504   (2)| 00:00:07 |
    |   1 |  SORT ORDER BY                      |                       |  1000 | 81000 |   504   (2)| 00:00:07 |
    |*  2 |   HASH JOIN                         |                       |  1000 | 81000 |   503   (2)| 00:00:07 |
    |   3 |    NESTED LOOPS                     |                       |       |       |            |          |
    |   4 |     NESTED LOOPS                    |                       |  1000 | 26000 |   112   (1)| 00:00:02 |
    |   5 |      VIEW                           | VW_NSO_1              |    20 |   100 |    21   (0)| 00:00:01 |
    |   6 |       HASH UNIQUE                   |                       |    20 |   180 |            |          |
    |*  7 |        CONNECT BY WITH FILTERING    |                       |       |       |            |          |
    |   8 |         TABLE ACCESS BY INDEX ROWID | CU_ALL                |     1 |     9 |     2   (0)| 00:00:01 |
    |*  9 |          INDEX UNIQUE SCAN          | PK_CU_ALL_IND         |     1 |       |     1   (0)| 00:00:01 |
    |  10 |         NESTED LOOPS                |                       |       |       |            |          |
    |  11 |          CONNECT BY PUMP            |                       |       |       |            |          |
    |  12 |          TABLE ACCESS BY INDEX ROWID| CU_ALL                |    20 |   180 |    21   (0)| 00:00:01 |
    |* 13 |           INDEX RANGE SCAN          | IND2_CU_ALL           |    20 |       |     1   (0)| 00:00:01 |
    |* 14 |      INDEX RANGE SCAN               | IND_CA_RECEIPT_CUSTID |    50 |       |     2   (0)| 00:00:01 |
    |  15 |     TABLE ACCESS BY INDEX ROWID     | CA_RECEIPT            |    50 |  1050 |    52   (0)| 00:00:01 |
    |  16 |    TABLE ACCESS FULL                | CU_ALL                |   200K|    10M|   389   (1)| 00:00:05 |
    Predicate Information (identified by operation id):
       2 - access("CA"."CUSTID"="CU"."CUSTID")
       7 - access("CANO"=PRIOR "CUSTID")
       9 - access("CUSTID"=2353)
      13 - access("CANO"=PRIOR "CUSTID")
      14 - access("CA"."CUSTID"="CUSTID")
    Statistics
              1  recursive calls
              0  db block gets
           2249  consistent gets
             25  physical reads
              0  redo size
          11748  bytes sent via SQL*Net to client
            729  bytes received via SQL*Net from client
             21  SQL*Net roundtrips to/from client
              7  sorts (memory)
              0  sorts (disk)
            289  rows processed
If you look at the query, it seems to be a normal one.
But the problem is here:
The query has two tables, CA and CU. The subquery fetches records from the CU table and joins to the CA table, and the CA table then joins to the CU table using the same column.
Here the subquery joins to the CA table and the cardinality estimate of the query changes, so the optimizer opts for a full table scan when joining to the CU table again.
This was causing the performance bottleneck, so to resolve the issue I changed the join condition.
Now if we check, the following is the proper execution plan. Also, the consistent gets have been reduced to 797, against 2249 for the original query.
    SQL> SELECT     ca.*, cu.*
      2  FROM ca_receipt CA,
      3       cu_all CU
      4  WHERE       CA.CUSTID = CU.CUSTID
      5  AND         CU.CUSTID IN (SELECT CUSTID FROM cu_all START WITH custid = 2353
      6                CONNECT BY PRIOR CUSTID = CANO)
      7  ORDER BY ACCTYPE DESC;
    289 rows selected.
    Execution Plan
    Plan hash value: 3713271440
    | Id  | Operation                           | Name                  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                    |                       |  1000 | 81000 |   133   (2)| 00:00:02 |
    |   1 |  SORT ORDER BY                      |                       |  1000 | 81000 |   133   (2)| 00:00:02 |
    |   2 |   NESTED LOOPS                      |                       |       |       |            |          |
    |   3 |    NESTED LOOPS                     |                       |  1000 | 81000 |   132   (1)| 00:00:02 |
    |   4 |     NESTED LOOPS                    |                       |    20 |  1200 |    42   (3)| 00:00:01 |
    |   5 |      VIEW                           | VW_NSO_1              |    20 |   100 |    21   (0)| 00:00:01 |
    |   6 |       HASH UNIQUE                   |                       |    20 |   180 |            |          |
    |*  7 |        CONNECT BY WITH FILTERING    |                       |       |       |            |          |
    |   8 |         TABLE ACCESS BY INDEX ROWID | CU_ALL                |     1 |     9 |     2   (0)| 00:00:01 |
    |*  9 |          INDEX UNIQUE SCAN          | PK_CU_ALL_IND         |     1 |       |     1   (0)| 00:00:01 |
    |  10 |         NESTED LOOPS                |                       |       |       |            |          |
    |  11 |          CONNECT BY PUMP            |                       |       |       |            |          |
    |  12 |          TABLE ACCESS BY INDEX ROWID| CU_ALL                |    20 |   180 |    21   (0)| 00:00:01 |
    |* 13 |           INDEX RANGE SCAN          | IND2_CU_ALL           |    20 |       |     1   (0)| 00:00:01 |
    |  14 |      TABLE ACCESS BY INDEX ROWID    | CU_ALL                |     1 |    55 |     1   (0)| 00:00:01 |
    |* 15 |       INDEX UNIQUE SCAN             | PK_CU_ALL_IND         |     1 |       |     0   (0)| 00:00:01 |
    |* 16 |     INDEX RANGE SCAN                | IND_CA_RECEIPT_CUSTID |    50 |       |     2   (0)| 00:00:01 |
    |  17 |    TABLE ACCESS BY INDEX ROWID      | CA_RECEIPT            |    50 |  1050 |    52   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       7 - access("CANO"=PRIOR "CUSTID")
       9 - access("CUSTID"=2353)
      13 - access("CANO"=PRIOR "CUSTID")
      15 - access("CU"."CUSTID"="CUSTID")
      16 - access("CA"."CUSTID"="CU"."CUSTID")
    Statistics
              1  recursive calls
              0  db block gets
            797  consistent gets
              1  physical reads
              0  redo size
          11748  bytes sent via SQL*Net to client
            729  bytes received via SQL*Net from client
             21  SQL*Net roundtrips to/from client
              7  sorts (memory)
              0  sorts (disk)
            289  rows processed

    user503699 wrote:
    Hello,
Some days back, I came across a blog entry in which the author concluded that when a = b and b = c, Oracle does not conclude that a = c. He also provided a test case to prove his point. The URL is [http://sandeepredkar.blogspot.com/2009/09/query-performance-join-conditions.html]
Now, I thought that this could not be true. So I executed his test case (on 10.2.0.4) and the outcome indeed proved his point. Initially, I thought it might be due to the absence of a PK-FK relationship, but even after adding the PK-FK relationship there was no change in the outcome. However, when I modified the subquery to use a list of values, both queries performed equally. I tried asking the author on his blog, but it seems he has not yet seen my comment.
I see that Jonathan provided a helpful reply to you while I was in the process of setting up a test case.
    Is it possible that the optimizer is correct? What if... the optimizer transformed the SQL statement? What if... the original SQL statement actually executes faster than the modified SQL statement? What if... the autotrace plans do not match the plans shown on that web page?
    The first execution with the original SQL statement:
    ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';
    ALTER SESSION SET TRACEFILE_IDENTIFIER = 'test_case';
    SET AUTOTRACE TRACE
    SELECT     ca.*, cu.*
    FROM ca_receipt CA,
         cu_all CU
    WHERE       CA.CUSTID = CU.CUSTID
    AND         CA.CUSTID IN (SELECT CUSTID FROM cu_all START WITH custid = 2353
                  CONNECT BY PRIOR CUSTID = CANO)
    ORDER BY ACCTYPE DESC;
    Execution Plan
    Plan hash value: 2794552689
    | Id  | Operation                          | Name                  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                   |                       |  1001 | 81081 |  1125   (2)| 00:00:01 |
    |   1 |  SORT ORDER BY                     |                       |  1001 | 81081 |  1125   (2)| 00:00:01 |
    |   2 |   NESTED LOOPS                     |                       |  1001 | 81081 |  1123   (1)| 00:00:01 |
    |   3 |    NESTED LOOPS                    |                       |  1001 | 26026 |   114   (2)| 00:00:01 |
    |   4 |     VIEW                           | VW_NSO_1              |    20 |   100 |    22   (0)| 00:00:01 |
    |   5 |      HASH UNIQUE                   |                       |    20 |   180 |            |          |
    |*  6 |       CONNECT BY WITH FILTERING    |                       |       |       |            |          |
    |   7 |        TABLE ACCESS BY INDEX ROWID | CU_ALL                |     1 |    19 |     2   (0)| 00:00:01 |
    |*  8 |         INDEX UNIQUE SCAN          | PK_CU_ALL_IND         |     1 |       |     1   (0)| 00:00:01 |
    |   9 |        NESTED LOOPS                |                       |       |       |            |          |
    |  10 |         CONNECT BY PUMP            |                       |       |       |            |          |
    |  11 |         TABLE ACCESS BY INDEX ROWID| CU_ALL                |    20 |   180 |    22   (0)| 00:00:01 |
    |* 12 |          INDEX RANGE SCAN          | IND2_CU_ALL           |    20 |       |     1   (0)| 00:00:01 |
    |  13 |     TABLE ACCESS BY INDEX ROWID    | CA_RECEIPT            |    50 |  1050 |    52   (0)| 00:00:01 |
    |* 14 |      INDEX RANGE SCAN              | IND_CA_RECEIPT_CUSTID |    50 |       |     2   (0)| 00:00:01 |
    |  15 |    TABLE ACCESS BY INDEX ROWID     | CU_ALL                |     1 |    55 |     1   (0)| 00:00:01 |
    |* 16 |     INDEX UNIQUE SCAN              | PK_CU_ALL_IND         |     1 |       |     0   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       6 - access("CANO"=PRIOR "CUSTID")
       8 - access("CUSTID"=2353)
      12 - access("CANO"=PRIOR "CUSTID")
      14 - access("CA"."CUSTID"="$nso_col_1")
      16 - access("CA"."CUSTID"="CU"."CUSTID")
    Statistics
              1  recursive calls
              0  db block gets
            232  consistent gets
              7  physical reads
              0  redo size
           2302  bytes sent via SQL*Net to client
            379  bytes received via SQL*Net from client
              5  SQL*Net roundtrips to/from client
              5  sorts (memory)
              0  sorts (disk)
          52  rows processed
The second SQL statement, which was modified:
    ALTER SESSION SET TRACEFILE_IDENTIFIER = 'test_case2';
    SELECT     ca.*, cu.*
    FROM ca_receipt CA,
         cu_all CU
    WHERE       CA.CUSTID = CU.CUSTID
    AND         CU.CUSTID IN (SELECT CUSTID FROM cu_all START WITH custid = 2353
                  CONNECT BY PRIOR CUSTID = CANO)
    ORDER BY ACCTYPE DESC;
    Execution Plan
    Plan hash value: 497148844
    | Id  | Operation                           | Name                  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                    |                       |  1001 | 81081 |   136   (3)| 00:00:01 |
    |   1 |  SORT ORDER BY                      |                       |  1001 | 81081 |   136   (3)| 00:00:01 |
    |   2 |   TABLE ACCESS BY INDEX ROWID       | CA_RECEIPT            |    50 |  1050 |    52   (0)| 00:00:01 |
    |   3 |    NESTED LOOPS                     |                       |  1001 | 81081 |   134   (2)| 00:00:01 |
    |   4 |     NESTED LOOPS                    |                       |    20 |  1200 |    43   (3)| 00:00:01 |
    |   5 |      VIEW                           | VW_NSO_1              |    20 |   100 |    22   (0)| 00:00:01 |
    |   6 |       HASH UNIQUE                   |                       |    20 |   180 |            |          |
    |*  7 |        CONNECT BY WITH FILTERING    |                       |       |       |            |          |
    |   8 |         TABLE ACCESS BY INDEX ROWID | CU_ALL                |     1 |    19 |     2   (0)| 00:00:01 |
    |*  9 |          INDEX UNIQUE SCAN          | PK_CU_ALL_IND         |     1 |       |     1   (0)| 00:00:01 |
    |  10 |         NESTED LOOPS                |                       |       |       |            |          |
    |  11 |          CONNECT BY PUMP            |                       |       |       |            |          |
    |  12 |          TABLE ACCESS BY INDEX ROWID| CU_ALL                |    20 |   180 |    22   (0)| 00:00:01 |
    |* 13 |           INDEX RANGE SCAN          | IND2_CU_ALL           |    20 |       |     1   (0)| 00:00:01 |
    |  14 |      TABLE ACCESS BY INDEX ROWID    | CU_ALL                |     1 |    55 |     1   (0)| 00:00:01 |
    |* 15 |       INDEX UNIQUE SCAN             | PK_CU_ALL_IND         |     1 |       |     0   (0)| 00:00:01 |
|* 16 |     INDEX RANGE SCAN                | IND_CA_RECEIPT_CUSTID |    50 |       |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       7 - access("CANO"=PRIOR "CUSTID")
       9 - access("CUSTID"=2353)
      13 - access("CANO"=PRIOR "CUSTID")
      15 - access("CU"."CUSTID"="$nso_col_1")
      16 - access("CA"."CUSTID"="CU"."CUSTID")
    Statistics
              1  recursive calls
              0  db block gets
            162  consistent gets
              0  physical reads
              0  redo size
           2302  bytes sent via SQL*Net to client
            379  bytes received via SQL*Net from client
              5  SQL*Net roundtrips to/from client
              5  sorts (memory)
              0  sorts (disk)
             52  rows processed
ALTER SESSION SET EVENTS '10053 trace name context off';
The question might be asked: does the final SQL statement that is actually executed look the same as the original? Slightly reformatted:
    The first SQL statement:
    SELECT
      CA.*,
      CU.*
    FROM
      CA_RECEIPT CA,
      CU_ALL CU
    WHERE
      CA.CUSTID = CU.CUSTID
      AND CA.CUSTID IN (
        SELECT
          CUSTID
        FROM
          CU_ALL
        START WITH
          CUSTID = 2353
        CONNECT BY PRIOR
          CUSTID = CANO)
    ORDER BY
      ACCTYPE DESC;
    Final Transformation:
    SELECT
      "CA"."CUSTID" "CUSTID",
      "CA"."CAAMT" "CAAMT",
      "CA"."CADT" "CADT",
      "CA"."TOTBAL" "TOTBAL",
      "CU"."CUSTID" "CUSTID",
      "CU"."ADDR" "ADDR",
      "CU"."PH" "PH",
      "CU"."CANO" "CANO",
      "CU"."ACCTYPE" "ACCTYPE"
    FROM
      (SELECT DISTINCT
        "CU_ALL"."CUSTID" "$nso_col_1"
      FROM
        "TESTUSER"."CU_ALL" "CU_ALL"
      WHERE
        "CU_ALL"."CANO"=PRIOR "CU_ALL"."CUSTID"
      CONNECT BY
        "CU_ALL"."CANO"=PRIOR "CU_ALL"."CUSTID") "VW_NSO_1",
      "TESTUSER"."CA_RECEIPT" "CA",
      "TESTUSER"."CU_ALL" "CU"
    WHERE
      "CA"."CUSTID"="VW_NSO_1"."$nso_col_1"
      AND "CA"."CUSTID"="CU"."CUSTID"
    ORDER BY
  "CU"."ACCTYPE" DESC;
The second SQL statement:
    SELECT
      CA.*,
      CU.*
    FROM
      CA_RECEIPT CA,
      CU_ALL CU
    WHERE
      CA.CUSTID = CU.CUSTID
      AND CU.CUSTID IN (
        SELECT
          CUSTID
        FROM
          CU_ALL
        START WITH
          CUSTID = 2353
        CONNECT BY PRIOR
          CUSTID = CANO)
    ORDER BY
      ACCTYPE DESC;
    Final Transformation:
    SELECT
      "CA"."CUSTID" "CUSTID",
      "CA"."CAAMT" "CAAMT",
      "CA"."CADT" "CADT",
      "CA"."TOTBAL" "TOTBAL",
      "CU"."CUSTID" "CUSTID",
      "CU"."ADDR" "ADDR",
      "CU"."PH" "PH",
      "CU"."CANO" "CANO",
      "CU"."ACCTYPE" "ACCTYPE"
    FROM
      (SELECT DISTINCT
        "CU_ALL"."CUSTID" "$nso_col_1"
      FROM
        "TESTUSER"."CU_ALL" "CU_ALL"
      WHERE
        "CU_ALL"."CANO"=PRIOR "CU_ALL"."CUSTID"
      CONNECT BY
        "CU_ALL"."CANO"=PRIOR "CU_ALL"."CUSTID") "VW_NSO_1",
      "TESTUSER"."CA_RECEIPT" "CA",
      "TESTUSER"."CU_ALL" "CU"
    WHERE
      "CA"."CUSTID"="CU"."CUSTID"
      AND "CU"."CUSTID"="VW_NSO_1"."$nso_col_1"
    ORDER BY
  "CU"."ACCTYPE" DESC;
Now, let's take a look at performance, flushing the buffer cache to force physical reads:
    SET AUTOTRACE OFF
    SET TIMING ON
    SET AUTOTRACE TRACEONLY STATISTICS
    SET ARRAYSIZE 100
    ALTER SYSTEM FLUSH BUFFER_CACHE;
    ALTER SYSTEM FLUSH BUFFER_CACHE;
    SELECT
      CA.*,
      CU.*
    FROM
      CA_RECEIPT CA,
      CU_ALL CU
    WHERE
      CA.CUSTID = CU.CUSTID
      AND CA.CUSTID IN (
        SELECT
          CUSTID
        FROM
          CU_ALL
        START WITH
          CUSTID = 2353
        CONNECT BY PRIOR
          CUSTID = CANO)
    ORDER BY
      ACCTYPE DESC;
    ALTER SYSTEM FLUSH BUFFER_CACHE;
    ALTER SYSTEM FLUSH BUFFER_CACHE;
    SELECT
      CA.*,
      CU.*
    FROM
      CA_RECEIPT CA,
      CU_ALL CU
    WHERE
      CA.CUSTID = CU.CUSTID
      AND CU.CUSTID IN (
        SELECT
          CUSTID
        FROM
          CU_ALL
        START WITH
          CUSTID = 2353
        CONNECT BY PRIOR
          CUSTID = CANO)
    ORDER BY
  ACCTYPE DESC;
The output:
    /* (with AND CA.CUSTID IN...) */
    52 rows selected.
    Elapsed: 00:00:00.64
    Statistics
              0  recursive calls
              0  db block gets
            232  consistent gets
            592  physical reads
              0  redo size
           2044  bytes sent via SQL*Net to client
            346  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              5  sorts (memory)
              0  sorts (disk)
             52  rows processed
    /* (with AND CU.CUSTID IN...) */
    52 rows selected.
    Elapsed: 00:00:00.70
    Statistics
              0  recursive calls
              0  db block gets
            162  consistent gets
            712  physical reads
              0  redo size
           2044  bytes sent via SQL*Net to client
            346  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              5  sorts (memory)
              0  sorts (disk)
          52  rows processed
The original SQL statement completed in 0.64 seconds, and the second completed in 0.70 seconds.
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • CBO not picking correct indexes or doing Full Scans

    Database version - 10.2.0.4
    OS : Solaris 5.8
    Storage: SAN
    Application : PeopleSoft Financials
    DB size : 450 gb
    DB server : 12 CPU ( 900 Mghz each ), 36 GB RAM
    ASMM - sga_target_size = 5 gb
    Locally managed tablespaces - MANUAL
    - db_file_multiblock_read_count - not set explicitly in spfile ( implicitly defaulted to 128 )
    - other optimizer related parameters are set to default
    - system_statistics - CPUSPEEDNW=456.282722513089, IOSEEKTIM =10, IOTFRSPEED=4096     
    - dictionary object system stats were last gather in Nov 09
    - stats on schema objs are gathered every night using custom script, but I have to say there are some histograms on some tables which were gathered by PS admins
    begin
    dbms_stats.gather_schema_stats(
    ownname=> 'SCHEMANM' ,
    cascade=> DBMS_STATS.AUTO_CASCADE,
    estimate_percent=> DBMS_STATS.AUTO_SAMPLE_SIZE,
    degree=> 10,
    no_invalidate=> DBMS_STATS.AUTO_INVALIDATE,
    granularity=> 'AUTO',
    method_opt=> 'FOR ALL COLUMNS SIZE 1',
    options=> 'GATHER STALE');
    end;
    Details :
We are experiencing erratic database performance. The database was upgraded from 9i to 10g along with a PeopleSoft software upgrade. A job that runs in 12 hrs one day (which is high in itself) would take more than 24 hrs another day.
We have Test and Staging envs on other servers which do not have performance issues of production magnitude. The test and staging dbs are clones of prod, created by ftp'ing files from one location to another; not sure if that makes any difference, but just pointing it out.
    On Prod db some symptoms which baffle me are :
1) SQL executing for over 40 minutes; check the plan and it is using an "incorrect" index, idx1. Hint the SQL with the "correct" index, idx2, and it runs in a few seconds (result set < 400 rows). The cost with the idx2 hint is HIGHER. This scenario is still understandable, as the CBO is likely to pick the lower cost.
- But why is there so much discrepancy between cost and response time?
- What is influencing the CBO to pick an index which is obviously not the fastest in response time?
2) SQL plan shows an FTS; the execution runs forever. But a hint with an index in this case shows the cost is LOWER and the response time is in seconds, so why is the CBO not even evaluating this index path? Because, as I understand the CBO, it will always pick the lower-cost plan.
What is influencing the CBO? Is it system stats? Some other database parameter? The hardware? The large SGA? Histograms? Stats gathering? Is there any known issue with DBMS_STATS.AUTO_SAMPLE_SIZE and cardinality?
    Where should I put my focus?
    What am I looking for ?
    I do have Jonathan Lewis's Cost-Based Oracle Fundamentals which I have just started so perhaps I will be enlightened but while I read it, it would be of immense help to get some advice from forum gurus.
    At this time I am not posting any exec plan or code, but I can do so if required.
    Thanks,
    JC

    Picking up just a couple of points - the ones about: why can performance change so much from one day to the next, and how can the optimizer miss a plan which (when hinted) shows a lower cost.
    NOTE: These are only suggestions that may allow you to look at critical code and recognise the pattern of behaviour
    Performance change:
You are using "gather stale", which means tables won't get new statistics if the volume of change since the last collection is "small". But PeopleSoft tables can be big, so some tables may need a lot of data to change (which might take several days or weeks) before they get new stats. The 10g optimizer is able to compare the high values stored with the predicate values supplied in queries, and decide that since the predicate is higher than the known highest value, it should scale down its estimates of the data returned. After a few days without new stats, the optimizer suddenly gets to the point where the estimated data size is much too small and the plan changes to something that you can see is bad. The typical example here is a date column that increases with time, and a query on "recent data" - the optimizer basically says: "you want 9th Feb - the high value says 28th Jan, there's virtually no data". Keeping date and sequence-based high values up to date is important. (See dbms_stats.set_column_stats.)
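A hedged sketch of refreshing such a high value with dbms_stats; the table and column names here are hypothetical, and the low value is simply preserved as whatever date you choose:

```sql
-- Read the existing column stats, push the high value up to "now",
-- and write the stats back without touching the other figures.
DECLARE
  l_distcnt  NUMBER;
  l_density  NUMBER;
  l_nullcnt  NUMBER;
  l_avgclen  NUMBER;
  l_srec     DBMS_STATS.STATREC;
  l_datevals DBMS_STATS.DATEARRAY;
BEGIN
  DBMS_STATS.GET_COLUMN_STATS(
    ownname => USER, tabname => 'MY_TABLE', colname => 'TRADE_DATE',
    distcnt => l_distcnt, density => l_density, nullcnt => l_nullcnt,
    srec    => l_srec,    avgclen => l_avgclen);
  -- two end points, no histogram buckets
  l_srec.epc    := 2;
  l_srec.bkvals := NULL;
  l_datevals := DBMS_STATS.DATEARRAY(DATE '2009-01-01', SYSDATE);
  DBMS_STATS.PREPARE_COLUMN_VALUES(l_srec, l_datevals);
  DBMS_STATS.SET_COLUMN_STATS(
    ownname => USER, tabname => 'MY_TABLE', colname => 'TRADE_DATE',
    distcnt => l_distcnt, density => l_density, nullcnt => l_nullcnt,
    srec    => l_srec,    avgclen => l_avgclen);
END;
/
```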
    Lower cost plans available:
Sometimes this is a bug. Sometimes it's an unlucky side effect. When a query has a large number of tables in a single query block, the optimizer can only examine a limited subset of the possible join orders (for example, a 10-table join has over 3 million join orders, and the default limit is 2,000 checked). Because of this limit, Oracle starts working through a few join orders in a methodical manner, then does a "big jump" to a different set of join orders where it checks a few more join orders, then does another "big jump". It may be that it never gets to your nice plan with the lower cost because it doesn't have time. However, by putting in the hint, you may make some of the orders the optimizer would have examined drop out of the "methodical list" - so it examines a few more in one sequence, or possibly "jumps" to a different sequence that it would not otherwise have reached. (I wrote a related blog item some time ago that might make this clearer, even though it's not about exactly the same phenomenon: http://jonathanlewis.wordpress.com/2006/11/03/table-order/ ).
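As a syntax illustration only (using the demo DEPT/EMP tables, not your PeopleSoft schema), a LEADING hint pins the start of the join order, which changes the set of permutations the optimizer works through:

```sql
-- The optimizer must start the join order with DEPT then EMP,
-- and spends its permutation budget on any remaining tables.
SELECT /*+ LEADING(d e) */
       d.dname, e.ename
FROM   dept d,
       emp  e
WHERE  e.deptno = d.deptno;
```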
Your recent comment about dynamic sampling is correct, by the way - by default (which for your version is level 2 as a system parameter) it will only apply to tables which don't have any statistics. It's only at higher levels, or when explicitly hinted for specific tables, that it can apply to tables with existing statistics.
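For example (hypothetical table), a table-level hint raises the sampling level for that one table even though it already has statistics:

```sql
-- Level 4 sampling for alias t only; useful when column values
-- are correlated and the stored stats mislead the cardinality estimate.
SELECT /*+ DYNAMIC_SAMPLING(t 4) */
       COUNT(*)
FROM   my_table t
WHERE  col1 = 1
AND    col2 = 1;
```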
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    "Science is more than a body of knowledge; it is a way of thinking"
    Carl Sagan
To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   

  • 8.1.7.4 Performance Issue

    Hi, I'm not sure I'm posting this question in the right place, but here is better than nowhere. We recently upgraded all our DB environments (in the last month) from 8.0.6 to 8.1.7.4 and we took a performance hit. SQL that used to do rather well is doing very poorly. We're doing full table scans instead of using indexes. Here's one example of a SQL statement that is doing badly:
    SELECT ALL COMPANY.ED_NO, COMPANY.CURR_IND, COMPANY.NAME, COMPANY.CITY,
    COMPANY.STATE, COMPANY.ZIP, COMPANY.ACCT_TYPE, COMPANY.VERSION_NO,
    COMPANY.AUDIT_DATE, COMPANY.AUDIT_USER, CONTRACT.CONTRACT_NO,
    EDITORNO.SOURCE, EDITORNO.ACCT_AUDIT_DATE, EDITORNO.ACCT_AUDIT_USER,
    COMPANY.SEARCH_KEY, EDITORNO.DONT_PUB_IND
    FROM COMPANY, CONTRACT, EDITORNO, TACS_CONTRACT
    WHERE (COMPANY.SEARCH_KEY LIKE 'DAWN%' OR COMPANY.SEARCH_KEY LIKE 'DAWN%')
    AND COMPANY.ED_NO = CONTRACT.ED_NO(+) AND COMPANY.ED_NO = EDITORNO.ED_NO(+)
    AND COMPANY.ED_NO = TACS_CONTRACT.ED_NO(+) AND TACS_CONTRACT.PUB_CODE(+) = '01'
    AND CONTRACT.PUB_CODE(+) = '01' AND COMPANY.CURR_IND = '1'
    ORDER BY COMPANY.NAME
    The explain on this is:
    SELECT STATEMENT Cost = 1380
    2.1 SORT ORDER BY
    3.1 HASH JOIN OUTER
    4.1 HASH JOIN OUTER
    5.1 HASH JOIN OUTER
    6.1 TABLE ACCESS FULL COMPANY
    6.2 TABLE ACCESS FULL EDITORNO
    5.2 TABLE ACCESS FULL TACS_CONTRACT
    4.2 TABLE ACCESS FULL CONTRACT
    Low cost but our database has never done hash joins very well. This can take up to 3 minutes to return a result.
    If we disable the hash joins then we get:
    SELECT STATEMENT Cost = 2546
    2.1 SORT ORDER BY
    3.1 MERGE JOIN OUTER
    4.1 MERGE JOIN OUTER
    5.1 MERGE JOIN OUTER
    6.1 SORT JOIN
    7.1 TABLE ACCESS FULL COMPANY
    6.2 SORT JOIN
    7.1 TABLE ACCESS FULL TACS_CONTRACT
    5.2 SORT JOIN
    6.1 TABLE ACCESS FULL CONTRACT
    4.2 SORT JOIN
    5.1 TABLE ACCESS FULL EDITORNO
    This query runs in about the same amount of time as the one above (3 mins).
    So we go the hint route and add a hint:
    SELECT /*+ INDEX (company company_ie6) USE_NL(contract) USE_NL(editorno) USE_NL(tacs_contract)*/
    ALL COMPANY.ED_NO, COMPANY.CURR_IND, COMPANY.NAME, COMPANY.CITY,
    COMPANY.STATE, COMPANY.ZIP, COMPANY.ACCT_TYPE, COMPANY.VERSION_NO,
    COMPANY.AUDIT_DATE, COMPANY.AUDIT_USER, CONTRACT.CONTRACT_NO,
    EDITORNO.SOURCE, EDITORNO.ACCT_AUDIT_DATE, EDITORNO.ACCT_AUDIT_USER,
    COMPANY.SEARCH_KEY, EDITORNO.DONT_PUB_IND
    FROM COMPANY, CONTRACT, EDITORNO, TACS_CONTRACT
    WHERE (COMPANY.SEARCH_KEY LIKE 'DAWN%' OR COMPANY.SEARCH_KEY LIKE 'DAWN%')
    AND COMPANY.ED_NO = CONTRACT.ED_NO(+) AND COMPANY.ED_NO = EDITORNO.ED_NO(+)
    AND COMPANY.ED_NO = TACS_CONTRACT.ED_NO(+) AND TACS_CONTRACT.PUB_CODE(+) = '01'
    AND CONTRACT.PUB_CODE(+) = '01' AND COMPANY.CURR_IND = '1'
    ORDER BY COMPANY.NAME;
    Here is the explain on this:
    SELECT STATEMENT Cost = 50743
    2.1 SORT ORDER BY
    3.1 CONCATENATION
    4.1 NESTED LOOPS OUTER
    5.1 NESTED LOOPS OUTER
    6.1 NESTED LOOPS OUTER
    7.1 TABLE ACCESS BY INDEX ROWID COMPANY
    8.1 INDEX RANGE SCAN COMPANY_IE6 NON-UNIQUE
    7.2 TABLE ACCESS BY INDEX ROWID TACS_CONTRACT
    8.1 INDEX RANGE SCAN TACS_CONTRACT_IE1 NON-UNIQUE
    6.2 TABLE ACCESS BY INDEX ROWID CONTRACT
    7.1 INDEX UNIQUE SCAN CONTRACT_PK UNIQUE
    5.2 TABLE ACCESS BY INDEX ROWID EDITORNO
    6.1 INDEX UNIQUE SCAN EDITORNO_PK UNIQUE
    4.2 NESTED LOOPS OUTER
    5.1 NESTED LOOPS OUTER
    6.1 NESTED LOOPS OUTER
    7.1 TABLE ACCESS BY INDEX ROWID COMPANY
    8.1 INDEX RANGE SCAN COMPANY_IE6 NON-UNIQUE
    7.2 TABLE ACCESS BY INDEX ROWID EDITORNO
    8.1 INDEX UNIQUE SCAN EDITORNO_PK UNIQUE
    6.2 TABLE ACCESS BY INDEX ROWID CONTRACT
    7.1 INDEX UNIQUE SCAN CONTRACT_PK UNIQUE
    5.2 TABLE ACCESS BY INDEX ROWID TACS_CONTRACT
    6.1 INDEX RANGE SCAN TACS_CONTRACT_IE1 NON-UNIQUE
    This query runs in a few seconds. So why does the query with the worst cost run the best? I'm concerned that we are going to alter our production application to add hints and I'm not even sure how to evaluate those hints because "Cost" no longer seems as reliable as before. Is anyone else experiencing this?
    Thank you for any help you can provide.
    Dawn
    [email protected]

    You can ignore the cost= part of an explain plan. It is something used internally by Oracle when calculating execution plans and doesn't indicate which plan is better; I don't know why it's included in the output except to confuse people. (Really? This indicator, while not perfect, has always worked pretty well for me in the past.) Having now read the 8.1.7 documentation, I think I may have been wrong about this: I'd seen other posts saying to ignore the cost of explain plans and took those posts as being right.
    Anyway, here's what the 8.1.7 documentation says about analyzing tables. Maybe you should try analyzing your tables using the DBMS_STATS package mentioned below. There's also a procedure, DBMS_UTILITY.ANALYZE_SCHEMA, that we use and haven't had any problems with.
    From Oracle 8i Designing and Tuning for Performance Ch. 4:
    The CBO consists of the following steps:
    1. The optimizer generates a set of potential plans for the SQL statement based on its available access paths and hints.
    2. The optimizer estimates the cost of each plan based on statistics in the data dictionary for the data distribution and storage characteristics of the tables, indexes, and partitions accessed by the statement.
    The cost is an estimated value proportional to the expected resource use needed to execute the statement with a particular plan. The optimizer calculates the cost of each possible access method and join order based on the estimated computer resources, including (but not limited to) I/O and memory, that are required to execute the statement using the plan.
    Serial plans with greater costs take more time to execute than those with smaller costs. When using a parallel plan, however, resource use is not directly related to elapsed time.
    3. The optimizer compares the costs of the plans and chooses the one with the smallest cost.
    To maintain the effectiveness of the CBO, you must gather statistics and keep them current. Gather statistics on your objects using either of the following:
    For releases prior to Oracle8i, use the ANALYZE statement.
    For Oracle8i releases, use the DBMS_STATS package.
    For table columns which contain skewed data (i.e., values with large variations in number of duplicates), you must collect histograms.
    The resulting statistics provide the CBO with information about data uniqueness and distribution. Using this information, the CBO is able to compute plan costs with a
    high degree of accuracy. This enables the CBO to choose the best execution plan based on the least cost.
    See Also:
    For detailed information on gathering statistics, see Chapter 8, "Gathering Statistics".
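    As a concrete sketch of the advice above (the owner and table names are placeholders for your own schema), gathering statistics with DBMS_STATS on 8i, including histograms on the indexed columns, might look like this:

    ```sql
    -- Hypothetical owner/table names; adjust to your environment.
    -- Gathers table and column statistics, builds histograms on indexed
    -- columns (useful for skewed data), and cascades to the indexes.
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => 'APP_OWNER',
        tabname          => 'COMPANY',
        estimate_percent => 10,   -- sample 10% of the rows
        method_opt       => 'FOR ALL INDEXED COLUMNS SIZE 75',
        cascade          => TRUE  -- also gather index statistics
      );
    END;
    /
    ```

    Repeating this for each table referenced in the problem query (COMPANY, CONTRACT, EDITORNO, TACS_CONTRACT) gives the CBO the cardinality information it needs to cost the hash joins realistically.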

  • Is there any performance difference in the order of columns referencing index?

    I would like to find out whether there is any performance or efficiency difference in listing the columns that reference an index first in the WHERE clause of a SQL statement. That is, does the order in which the indexed columns appear matter?
    E.g. id is the column that is indexed
    SELECT * FROM a where a.id='1' and a.name='John';
    SELECT * FROM a where a.name='John' and a.id='1';
    Is there any difference in efficiency between the two statements?
    Please advise. Thanks.

    There is no difference between the two statements under either the RBO or the CBO.
    sql>create table a as select * from all_objects;
    Table created.
    sql>create index a_index on a(object_id);
    Index created.
    sql>analyze table a compute statistics;
    Table analyzed.
    sql>select count(*)
      2    from a
      3   where object_id = 1
      4     and object_name = 'x';
    COUNT(*)
            0
    1 row selected.
    Execution Plan
       0      SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=29)
       1    0   SORT (AGGREGATE)
       2    1     TABLE ACCESS (BY INDEX ROWID) OF 'A' (Cost=1 Card=1 Bytes=29)
       3    2       INDEX (RANGE SCAN) OF 'A_INDEX' (NON-UNIQUE) (Cost=1 Card=1)
    sql>select count(*)
      2    from a
      3   where object_name = 'x'   
      4     and object_id = 1;
    COUNT(*)
            0
    1 row selected.
    Execution Plan
       0      SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=29)
       1    0   SORT (AGGREGATE)
       2    1     TABLE ACCESS (BY INDEX ROWID) OF 'A' (Cost=1 Card=1 Bytes=29)
       3    2       INDEX (RANGE SCAN) OF 'A_INDEX' (NON-UNIQUE) (Cost=1 Card=1)
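    One related point worth noting: while the order of predicates in the WHERE clause is irrelevant, the order of columns *within* a composite index does matter. A hedged sketch (hypothetical index names, same test table as above):

    ```sql
    -- Column order inside a composite index matters, unlike predicate order.
    -- An index leading on OBJECT_ID can service a filter on OBJECT_ID alone
    -- with a range scan; an index leading on OBJECT_NAME generally cannot
    -- (index skip scans only arrived in 9i).
    CREATE INDEX a_idx1 ON a (object_id, object_name);
    CREATE INDEX a_idx2 ON a (object_name, object_id);

    SELECT COUNT(*) FROM a WHERE object_id = 1;  -- a_idx1 is usable here
    ```

    So the design question to ask is not "which predicate comes first?" but "which column should lead the index, given the queries I run?"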

  • Bloom filter and performance

    I have the following query:
    SELECT 
                    obst.obst_id obstructionId
                   ,oost.comment_clob CommentClob
                   ,chp1.ptcar_no StartPtcar
                   ,chp2.ptcar_no EndPtcar
                   ,oost.track_code Track
                   ,oost.start_date StartPeriod
                   ,oost.end_date EndPeriod
                   ,oost.doc_no RelaasId
                   ,obst.status_code Status
            FROM   T1 obst
                 , T2 oost
                 , T3 chp1
                 , T3 chp2
              where obst.oost_id      = oost.oost_id
              and oost.chp_id_start = chp1.chp_id
              and oost.chp_id_end   = chp2.chp_id
              and obst.obs_stat     = 2
       and add_months(obst.mod_date,13) >= :ld_curr_date
       and oost.start_date between :ld_from_date and :ld_to_date         
              and exists (select 1
                            from T4  obsv
                               , T5  veh
                            where  obsv.veh_id = veh.veh_id
                            and (:ln_opr_id is null
                                   OR veh.opr_id = :ln_opr_id)
                            and  obsv.obst_id = obst.obst_id
                            and  veh.ac_number between :ln_min_number and :ln_max_number
                             and  obsv.start_date > obst.start_date)
              order by obst.obst_id;
    Giving the following execution plan where a bloom filter has been used.
    | Id  | Operation                           | Name                    | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
    |   0 | SELECT STATEMENT                    |                         |      1 |        |     10 |00:02:13.09 |     128K|  94376 |
    |   1 |  SORT ORDER BY                      |                         |      1 |      8 |     10 |00:02:13.09 |     128K|  94376 |
    |   2 |   NESTED LOOPS                      |                         |      1 |        |     10 |00:02:13.06 |     128K|  94376 |
    |   3 |    NESTED LOOPS                     |                         |      1 |      8 |     10 |00:02:13.06 |     128K|  94376 |
    |   4 |     NESTED LOOPS                    |                         |      1 |      8 |     10 |00:02:13.06 |     128K|  94376 |
    |*  5 |      HASH JOIN SEMI                 |                         |      1 |      8 |     10 |00:02:13.06 |     128K|  94376 |
    |   6 |       JOIN FILTER CREATE            | :BF0000                 |      1 |        |   1469 |00:00:02.06 |   12430 |     76 |
    |   7 |        NESTED LOOPS                 |                         |      1 |        |   1469 |00:00:00.17 |   12430 |     76 |
    |   8 |         NESTED LOOPS                |                         |      1 |    307 |   5522 |00:00:00.11 |   10545 |     52 |
    |*  9 |          TABLE ACCESS BY INDEX ROWID| T2                      |      1 |    316 |   5522 |00:00:00.07 |    3383 |     46 |
    |* 10 |           INDEX RANGE SCAN          | T2_OOST_START_DATE_I    |      1 |   1033 |   8543 |00:00:00.03 |      33 |      3 |
    |* 11 |          INDEX RANGE SCAN           | T1_OBST_OOST_DK_I       |   5522 |      1 |   5522 |00:00:00.08 |    7162 |      6 |
    |* 12 |         TABLE ACCESS BY INDEX ROWID | T1                      |   5522 |      1 |   1469 |00:00:00.13 |    1885 |     24 |
    |  13 |       VIEW                          | VW_SQ_1                 |      1 |  64027 |   1405 |00:00:07.82 |     115K|  94300 |
    |* 14 |        FILTER                       |                         |      1 |        |   1405 |00:00:07.82 |     115K|  94300 |
    |  15 |         JOIN FILTER USE             | :BF0000                 |      1 |  64027 |   1405 |00:00:07.82 |     115K|  94300 |
    |  16 |          PARTITION REFERENCE ALL    |                         |      1 |  64027 |  64027 |00:01:48.22 |     115K|  94300 |
    |* 17 |           HASH JOIN                 |                         |     52 |  64027 |  64027 |00:02:03.37 |     115K|  94300 |
    |  18 |            TABLE ACCESS FULL        | T4                      |     52 |  64027 |  64027 |00:00:00.34 |    1408 |    280 |
    |* 19 |            TABLE ACCESS FULL        | T5                      |     41 |    569K|   5555K|00:02:08.32 |     114K|  94020 |
    |  20 |      TABLE ACCESS BY INDEX ROWID    | T3                      |     10 |      1 |     10 |00:00:00.01 |      22 |      0 |
    |* 21 |       INDEX UNIQUE SCAN             | T3_CHP_PK               |     10 |      1 |     10 |00:00:00.01 |      12 |      0 |
    |* 22 |     INDEX UNIQUE SCAN               | T3_CHP_PK               |     10 |      1 |     10 |00:00:00.01 |      12 |      0 |
    |  23 |    TABLE ACCESS BY INDEX ROWID      | T3                      |     10 |      1 |     10 |00:00:00.01 |      10 |      0 |
    Predicate Information (identified by operation id):
       5 - access("ITEM_1"="OBST"."OBST_ID")
           filter("ITEM_2">"OBST"."START_DATE")
       9 - filter(("OOST"."CHP_ID_START" IS NOT NULL AND "OOST"."CHP_ID_END" IS NOT NULL))
      10 - access("OOST"."START_DATE">=:LD_FROM_DATE AND "OOST"."START_DATE"<=:LD_TO_DATE)
      11 - access("OBST"."OOST_ID"="OOST"."OOST_ID")
      12 - filter(("OBST"."OBS_STAT"=2 AND ADD_MONTHS(INTERNAL_FUNCTION("OBST"."MOD_DATE"),13)>=:LD_CURR_DATE))
      14 - filter((:LN_MIN_ac_number<=:ln_max_number AND TO_DATE(:LD_FROM_DATE)<=TO_DATE(:LD_TO_DATE)))
      17 - access("OBSV"."VEH_ID"="VEH"."VEH_ID")
      19 - filter(((:LN_OPR_ID IS NULL OR "VEH"."OPR_ID"=:LN_OPR_ID) AND "VEH"."ac_number">=:LN_MIN_ac_number AND
                  "VEH"."ac_number"<=:ln_max_number))
      21 - access("OOST"."CHP_ID_START"="CHP1"."CHP_ID")
      22 - access("OOST"."CHP_ID_END"="CHP2"."CHP_ID")
    This query completes in an unacceptable response time of more than 2 minutes, which is why it was handed to me for possible improvement.
    As you might already have spotted in the above execution plan, operation 19 is the most time-consuming operation (TABLE ACCESS FULL of T5). The query without the EXISTS part represents our data; that part of the query is very fast and returns about 1500 rows. However, when we want to limit those 1500 records to only the records satisfying the EXISTS clause (10 records), the query starts performing poorly because of this full table scan on T5 (5.5 million rows).
    The ideal situation would have been to:
    (a) first join table T5 (alias veh) with table T4 (alias obsv) using the join condition (predicate 17), limiting the number of records from T5;
    (b) and then filter the T5 rows (predicate 19).
    But in this case the CBO starts with a costly full scan of the entire T5 table (5.5 million rows, operation 19) before sending this enormous amount of data to the HASH JOIN at operation 17, which finally throws away the majority of T5 rows that do not satisfy the join condition: access("OBSV"."VEH_ID"="VEH"."VEH_ID").
    I have already verified that the table statistics and their join column statistics are up to date and reflect reality, but I have failed to find how to get the CBO to do automatically what I want it to do, i.e. join first and then filter.
    When I discussed this with the client we agreed to refactor the query a little, and it now completes in a few seconds.
    The new plan does not use a bloom filter, and the veh table is accessed via its unique index on VEH_ID (18 - access("OBSV"."VEH_ID"="VEH"."VEH_ID")) before the filter operation is applied, exactly as I wanted it to be in the original query.
    Have you any idea how to change this join method/order so that the full table scan is postponed, of course without refactoring the query?
    Do you think that the bloom filter is the culprit here?
    Thanks in advance
    Mohamed Houri
    PS: and veh.ac_number between :ln_min_number and :ln_max_number
    These two input parameters are beyond the known low and high values. But even when I used min and max values taken from the table (min = low_value from the stats and max = high_value from the stats), the plan didn't change.

    Jonathan,
    I got the plan I was looking for.
    It is not exactly the same as the execution plan that corresponds to the re-factored query but it is what I wanted to have.
    SELECT  /*+ gather_plan_statistics */
                   obst.obst_id obstructionId
                   ,oost.comment_clob CommentClob
                   ,chp1.ptcar_no StartPtcar
                   ,chp2.ptcar_no EndPtcar
                   ,oost.track_code Track
                   ,oost.start_date StartPeriod
                   ,oost.end_date EndPeriod
                   ,oost.doc_no RelaasId
                   ,obst.status_code Status
            FROM   T1 obst
                 , T2 oost
                 , T3 chp1
                 , T3 chp2
            where obst.oost_id = oost.oost_id
              and oost.chp_id_start = chp1.chp_id
              and oost.chp_id_end = chp2.chp_id
              and obst.obs_stat = 2 -- only reeel       
                    and exists (select /*+ no_unnest push_subq */ 1
                            from T4 obsv
                               , T5 veh
                            where obsv.veh_id = veh.veh_id
                              and (null is null
                                    OR veh.opr_id = null)
                               and  obsv.obst_id = obst.obst_id
                               and veh.ac_number between 1 and 99999
                               and obsv.start_date > obst.start_date)
              and oost.start_date between to_date ('20/11/2013','DD/MM/YYYY') and to_date ('27/11/2013','DD/MM/YYYY')
              and add_months(obst.mod_date,13) >= sysdate
             order by obst.obst_id
    | Id  | Operation                                 | Name                    | Starts | E-Rows | A-Rows |   A-Time   |
    |   0 | SELECT STATEMENT                          |                         |      1 |        |      6 |00:00:00.56 |
    |   1 |  SORT ORDER BY                            |                         |      1 |    254 |      6 |00:00:00.56 |
    |*  2 |   HASH JOIN                               |                         |      1 |    254 |      6 |00:00:00.11 |
    |   3 |    TABLE ACCESS FULL                      | T3                      |      1 |   2849 |   2849 |00:00:00.01 |
    |*  4 |    HASH JOIN                              |                         |      1 |    254 |      6 |00:00:00.11 |
    |   5 |     TABLE ACCESS FULL                     | T3                      |      1 |   2849 |   2849 |00:00:00.01 |
    |   6 |     NESTED LOOPS                          |                         |      1 |        |      6 |00:00:00.10 |
    |   7 |      NESTED LOOPS                         |                         |      1 |    254 |   5012 |00:00:00.09 |
    |*  8 |       TABLE ACCESS BY INDEX ROWID         | T2                      |      1 |    262 |   5012 |00:00:00.06 |
    |*  9 |        INDEX RANGE SCAN                   | T2_OOST_START_DATE_I    |      1 |    857 |   7722 |00:00:00.01 |
    |* 10 |       INDEX RANGE SCAN                    | T1_OBST_OOST_DK_I       |   5012 |      1 |   5012 |00:00:00.03 |
    |* 11 |      TABLE ACCESS BY INDEX ROWID          | T1                      |   5012 |      1 |      6 |00:00:00.48 |
    |  12 |       NESTED LOOPS                        |                         |   1277 |        |      6 |00:00:00.46 |
    |  13 |        NESTED LOOPS                       |                         |   1277 |      2 |      6 |00:00:00.46 |
    |  14 |         PARTITION REFERENCE ALL           |                         |   1277 |      4 |      6 |00:00:00.46 |
    |* 15 |          TABLE ACCESS BY LOCAL INDEX ROWID| T4                      |  66380 |      4 |      6 |00:00:00.43 |
    |* 16 |           INDEX RANGE SCAN                | T4_OBSV_OBST_FK_I       |  66380 |     86 |      6 |00:00:00.28 |
    |* 17 |         INDEX UNIQUE SCAN                 | T5_VEH_PK               |      6 |      1 |      6 |00:00:00.01 |
    |* 18 |        TABLE ACCESS BY GLOBAL INDEX ROWID | T5                      |      6 |      1 |      6 |00:00:00.01 |
    Predicate Information (identified by operation id):
       2 - access("OOST"."CHP_ID_END"="CHP2"."CHP_ID")
       4 - access("OOST"."CHP_ID_START"="CHP1"."CHP_ID")
       8 - filter(("OOST"."CHP_ID_START" IS NOT NULL AND "OOST"."CHP_ID_END" IS NOT NULL))
       9 - access("OOST"."START_DATE">=TO_DATE(' 2013-11-20 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "OOST"."START_DATE"<=TO_DATE(' 2013-11-27
                  00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
      10 - access("OBST"."OOST_ID"="OOST"."OOST_ID")
      11 - filter(("OBST"."OBS_STAT"=2 AND ADD_MONTHS(INTERNAL_FUNCTION("OBST"."MOD_DATE"),13)>=SYSDATE@! AND  IS NOT NULL))
      15 - filter("OBSV"."START_DATE">:B1)
      16 - access("OBSV"."OBST_ID"=:B1)
      17 - access("OBSV"."VEH_ID"="VEH"."VEH_ID")
      18 - filter(("VEH"."ac_number">=1 AND "VEH"."ac_number"<=99999))
    Note how predicates 17 and 18 are now in the position I wanted them to be, i.e. probe the T5 unique index first using the join condition and then filter table T5.
    But (there is always a but) why am I obliged to hint?
    Where did I get something wrong (statistics, optimizer parameters, etc.) such that the CBO did not opt for this execution plan without a hint?
    Thanks
    Mohamed Houri
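    One way to answer "why didn't the CBO pick this plan on its own?" is to dump the optimizer's own cost calculations for the hard parse and compare the costs it assigned to each join order it considered. A sketch using the well-known (though unsupported) 10053 event:

    ```sql
    -- Tag the trace file so it is easy to find in user_dump_dest,
    -- then capture the CBO's costing decisions for one hard parse.
    ALTER SESSION SET tracefile_identifier = 'cbo_10053';
    ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';

    -- Run the problem query here (alter it trivially, e.g. add a comment,
    -- to force a hard parse; 10053 produces no output for a soft parse).

    ALTER SESSION SET EVENTS '10053 trace name context off';
    ```

    The resulting trace lists the single-table access paths and join orders with their computed costs, which usually reveals whether a cardinality misestimate (such as the one on the VW_SQ_1 view, 64027 estimated vs 1405 actual) is what steered the CBO away from the nested-loop plan.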

  • RBO X CBO

    Hi everybody.
    I would like to understand what we are missing here.
    We have several installations running 10g, where the default value for OPTIMIZER_MODE is ALL_ROWS. Well, if there are no statistics for the application tables, the RDBMS by default uses the RULE-BASED OPTIMIZER. OK.
    After the statistics are generated (automatic Oracle job), the RDBMS switches to the COST-BASED OPTIMIZER. I can understand that.
    The problem is: why do several queries run much slower when using the CBO? When we analyze the execution plans, we see the wrong indexes being used.
    The solution I have for now is to set OPTIMIZER_MODE=RULE. Then everything runs smoothly again.
    Why does this happen? Shouldn't the CBO, after the statistics are generated, find the best execution plan possible? I really can't use the CBO on our sites, because performance is so much worse...
    Thanks in advance.
    Carlos Inglez

    Hi Carlos,
    "The solution I have for now is to set OPTIMIZER_MODE=RULE. Then everything runs smoothly again." It's almost always an issue with CBO parameters or CBO statistics.
    There are several issues in 10g CBO, and here are my notes:
    http://www.dba-oracle.com/t_slow_performance_after_upgrade.htm
    Oracle has improved the cost-based Oracle optimizer in 9.0.5 and again in 10g, so you need to take a close look at your environmental parameter settings (init.ora parms) and your optimizer statistics.
    - Check optimizer parameters - Ensure that you are using the proper optimizer_mode (default is all_rows) and check optimal settings for optimizer_index_cost_adj (lower from the default of 100) and optimizer_index_caching (set to a higher value than the default).
    - Re-set optimizer costing - Consider unsetting your CPU-based optimizer costing (the 10g default, a change from 9i). CPU costing is best if you see CPU in your top-5 timed events in your STATSPACK/AWR report, and the 10g default of "_optimizer_cost_model"=cpu will try to minimize CPU by invoking more full scans, especially in tablespaces with large blocksizes. To return to your 9i CBO I/O-based costing, set the hidden parameter "_optimizer_cost_model"=io
    - Verify deprecated parameters - you need to set optimizer_features_enable = 10.2.0.2 and optimizer_mode = FIRST_ROWS_n (or ALL_ROWS for a warehouse, but remove the 9i CHOOSE default).
    - Verify quality of CBO statistics - Oracle 10g does automatic statistics collection and your original customized dbms_stats job (with your customized parameters) will be overlaid. You may also see a statistics deficiency (i.e. not enough histograms) causing performance issues. Re-analyze object statistics using dbms_stats and make sure that you collect system statistics.
    Hope this helps. . .
    Donald K. Burleson
    Oracle Press author
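    Before falling back to RULE, it is usually worth re-gathering statistics with histograms and collecting system statistics, as suggested above. A minimal sketch (the schema name is a placeholder):

    ```sql
    -- Re-gather schema statistics with automatic sampling and automatic
    -- histogram selection on skewed columns, then collect system statistics
    -- so the 10g CPU cost model has real CPU/IO figures to work with.
    BEGIN
      DBMS_STATS.GATHER_SCHEMA_STATS(
        ownname          => 'APP_OWNER',
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
        cascade          => TRUE);
      DBMS_STATS.GATHER_SYSTEM_STATS('NOWORKLOAD');
    END;
    /
    ```

    If the CBO still picks the wrong indexes after that, comparing the estimated and actual row counts in the plans (e.g. with DBMS_XPLAN) will show which statistic is misleading it.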
