EXPLAIN PLANS

Hi,
There are two procedures with similar SQL statements hitting the same set of tables. The queries are almost similar, with slight modifications. One procedure takes 2 hours to complete whereas the other takes 5 minutes. The explain plans are different for each of these SQL statements. Is there a way I can force the first procedure to use the same explain plan as the SQL statement in the other procedure? Is there a way to tell the optimiser to use the same execution plan as the other one?
database version 10.2.0.3

"The queries are almost similar"
Therein lies the difference.
(You'll need to post the queries and explain plans.)
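A hedged sketch of how to capture the plans worth posting (it assumes you can reproduce each run from SQL*Plus; on 10.2, DBMS_XPLAN.DISPLAY_CURSOR shows the plan a cursor actually used rather than just an estimate):
SQL> -- in the session that has just run the slow statement:
SQL> select * from table(dbms_xplan.display_cursor(null, null, 'TYPICAL'));
SQL> -- or for a statement still in the shared pool, identified by its sql_id:
SQL> select * from table(dbms_xplan.display_cursor('&sql_id', null, 'TYPICAL'));
Comparing the two outputs, including the predicate sections underneath them, is the starting point for understanding why the optimizer treats the two statements differently.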

Similar Messages

  • Query tunning in Oracle using Explain Plan

    Adding to my question below: I have now modified the query and the path shown by 'Explain Plan' has shortened. The 'Time' column of PLAN_TABLE is also showing a much lower value. However, some people are suggesting I consider the time the query takes to execute in Toad. Is that practical? Please help!!
    Hi, I am using Oracle 11g. I need to optimize a SELECT query (minimize its execution time) and I need to know how 'Explain Plan' would help me. I know how to use the EXPLAIN PLAN command and I refer to PLAN_TABLE to see the details of the plan. Please guide me on which columns of PLAN_TABLE should be considered while modifying the query for optimization. Some people say the 'Time' column should be considered, some say 'Bytes', etc. Some suggest minimizing full table scans, while others say I should minimize the total number of operations (fewer rows displayed in PLAN_TABLE). An experienced friend of mine says full table scans should be reduced (e.g. if there are 5 full table scans in the plan, try to reduce them to fewer than 5). However, for any full table scan operation in PLAN_TABLE, the 'Time' column shows a value of only 1, which is very low. Does this mean the full scan is actually taking very little time? If yes, then full table scans are very fast in my case and there is no need to work on them. Some articles suggest that the plan shown by the EXPLAIN PLAN command is not necessarily the one followed while executing the query. So what should I look for then? How should I optimize the query, and how will I come to know that it is optimized? Please help!!

    885901 wrote:
    ... So what should I look for then? How should I optimize the query and how will I come to know that it's optimized?? Please help!!
    How fast is fast enough?
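    One way to see which plan was actually used and where the time really goes, rather than reading PLAN_TABLE columns in isolation (a hedged sketch; it assumes you can re-run the statement from SQL*Plus on your 11g database):
    SQL> select /*+ gather_plan_statistics */ ... your query ... ;
    SQL> select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
    Comparing the estimated rows (E-Rows) with the actual rows (A-Rows) for each step shows where the optimizer's assumptions are wrong, which is usually more informative than the 'Time' or 'Bytes' columns on their own.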

  • Problems with explain plan and statement

    Hi community,
    I have migrated a j2ee application from DB2 to Oracle.
    First some facts of our application and database instance:
    We are using oracle version 10.2.0.3 and driver version 10.2.0.3. It runs with charset Unicode 3.0 UTF-8.
    Our application is using Tomcat as web container and jboss as application server. We are only using prepared statements. So if I talk about statements I always mean prepared statements. Also our application is setting the defaultNChar property to true because every char and varchar field has been created as an nchar and nvarchar.
    We have some jsp sites that contain lists with search forms. Every time I enter a value into a form that returns a filled result set, the lists perform great. But every time I enter a value that returns an empty result set, the lists are 100 times slower. The jsp sites are running in the Tomcat environment and submitting their statements directly to the database. The connections are pooled by dbcp. So what can cause this behaviour?
    To analyze this problem I started logging all statements and the filled-in search field values and combinations that are executed by the lists described above. I also developed a standalone helper tool that reads the logged statements, executes them against the database and generates an explain plan for every statement. But now a strange situation appears: every statement that performs really fast within our application is executed extremely slowly by the helper tool. So I edited some jsp pages within our application to force an explain plan from there (Tomcat env). So when executing the same statement with exactly the same code I am getting two completely different explain plans.
    First the statement itself:
    select LINVIN.BBASE , INVINNUM , INVINNUMALT , LINVIN.LSUPPLIERNUM , LSUPPLIERNUMEXT , LINVIN.COMPANYCODE , ACCOUNT , INVINTXT , INVINSTS , INVINTYP , INVINDAT , RECEIPTDAT , POSTED , POSTINGDATE , CHECKCOSTCENTER , WORKFLOWIDEXT , INVINREFERENCE , RESPONSIBLEPERS , INVINSUM_V , INVINSUMGROSS_V , VOUCHERNUM , HASPOSITIONS , PROCESSINSTANCEID , FCURISO_V , LSUPPLIER.AADDRLINE1 from LINVIN, LSUPPLIER where LINVIN.BBASE = LSUPPLIER.BBASE and LINVIN.LSUPPLIERNUM = LSUPPLIER.LSUPPLIERNUM and LINVIN.BBASE = ? order by LINVIN.BBASE, INVINDAT DESC
    Now the explain plan from our application:
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 101 | 28583 | 55 (0)| 00:00:01 |
    | 1 | NESTED LOOPS | | 101 | 28583 | 55 (0)| 00:00:01 |
    | 2 | TABLE ACCESS BY INDEX ROWID| LINVIN | 93709 | 12M| 25 (0)| 00:00:01 |
    |* 3 | INDEX RANGE SCAN | LINV_INVDAT | 101 | | 1 (0)| 00:00:01 |
    | 4 | TABLE ACCESS BY INDEX ROWID| LSUPPLIER | 1 | 148 | 1 (0)| 00:00:01 |
    |* 5 | INDEX UNIQUE SCAN | PK_177597 | 1 | | 1 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    3 - access("LINVIN"."BBASE"=:1)
    filter("LINVIN"."BBASE"=:1)
    5 - access("LSUPPLIER"."BBASE"=:1 AND "LINVIN"."LSUPPLIERNUM"="LSUPPLIER"."LSUPPLIERNUM")
    Now the one from the standalone tool:
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 93773 | 25M| | 12898 (1)| 00:02:35 |
    | 1 | SORT ORDER BY | | 93773 | 25M| 61M| 12898 (1)| 00:02:35 |
    |* 2 | HASH JOIN | | 93773 | 25M| 2592K| 7185 (1)| 00:01:27 |
    | 3 | TABLE ACCESS BY INDEX ROWID| LSUPPLIER | 16540 | 2390K| | 332 (0)| 00:00:04 |
    |* 4 | INDEX RANGE SCAN | LSUPPLIER_HAS_BASE_FK | 16540 | | | 11 (0)| 00:00:01 |
    | 5 | TABLE ACCESS BY INDEX ROWID| LINVIN | 93709 | 12M| | 6073 (1)| 00:01:13 |
    |* 6 | INDEX RANGE SCAN | LINVOICE_BMDT_FK | 93709 | | | 84 (2)| 00:00:02 |
    Predicate Information (identified by operation id):
    2 - access("LINVIN"."BBASE"="LSUPPLIER"."BBASE" AND "LINVIN"."LSUPPLIERNUM"="LSUPPLIER"."LSUPPLIERNUM")
    4 - access("LSUPPLIER"."BBASE"=:1)
    6 - access("LINVIN"."BBASE"=:1)
    The sizes of the tables are: LINVIN - 383.692 rows, LSUPPLIER - 115.782 rows
    As you can see, the one executed from our application is much faster than the one from the helper tool. So why does Oracle pick a completely different explain plan for the same statement? And why is a hash join much slower than a nested loop? Because, if I'm right, a nested loop should only be used when the tables are pretty small.
    I also tried to play with some parameters:
    I set optimizer_index_caching to 100 and optimizer_index_cost_adj to 30. I also changed optimizer_mode to FIRST_ROWS_100.
    I would really appreciate it if somebody could help me with this issue, because I'm getting more and more distressed...
    Thanks in advance,
    Tobias

    tobiwan wrote:
    Hi again,
    Here is the answer:
    The problem, because I got two different explain plans, was that the external tool uses the NLS session parameters coming from the OS, which in my case are "de/DE".
    Within our application these parameters are changed to "en/US"!! So if I call the Java function Locale.setDefault(new Locale("en","US")) in my external tool before connecting to the database, the explain plans are finally equal.
    That might explain why you got two different execution plans: one plan was obviously able to avoid a SORT ORDER BY operation, whereas the second plan was required to run a SORT ORDER BY operation, obviously because of the different NLS_SORT settings. An index by default uses the NLS_SORT = 'binary' order, whereas ORDER BY obeys the NLS_SORT setting, which probably was set to 'GERMAN' in your "external tool" case. You can check the "NLS_SESSION_PARAMETERS" view to see your current NLS_SORT setting.
    For more information regarding this issue, see my blog note I've written about this some time ago:
    http://oracle-randolf.blogspot.com/2008/09/getting-first-rows-of-large-sorted.html
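    A quick check for the setting involved (a minimal sketch; it works in any session):
    SQL> select parameter, value
      2    from nls_session_parameters
      3   where parameter in ('NLS_SORT', 'NLS_COMP');
    If NLS_SORT is anything other than BINARY, a normal (binary-ordered) index cannot return the rows in the requested linguistic order, which is exactly why the second plan needs the extra SORT ORDER BY step.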
    Now let me make a guess why you observe the behaviour that it takes so long if your result set is empty:
    The plan avoiding the SORT ORDER BY is able to return the first rows of the result set very quickly, but could take quite a while until all rows are processed, since it potentially requires a lot of iterations of the loop until everything has been processed. Your front end probably by default only displays the first n rows of the result set and therefore works fine with this execution plan.
    Now if the result set is empty, depending on your data, indexes and search criteria, Oracle has to work through all the data using the inefficient NESTED LOOP approach only to find out that no data has been found, and since your application attempts to fetch the first n records, but no records will be found, it has to wait until all data has been processed.
    You can try to reproduce this by deliberately fetching all records of a query that returns data and that uses the NESTED LOOP approach... It probably takes as long as in the case when no records are found.
    Note that you seem to use bind variables and 10g, therefore you might be interested that due to the "bind variable peeking" functionality you might potentially end up with "unstable" plans depending on the values "peeked" when the statement is parsed.
    For more information, see this comprehensive description of the issue:
    http://www.pythian.com/blogs/867/stabilize-oracle-10gs-bind-peeking-behaviour-by-cutting-histograms
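    To get an idea of the bind values a particular cursor has been parsed and executed with, you can look at the captured binds (a sketch; it assumes the cursor is still in the shared pool, and bind capture is sampled, so the values may be stale):
    SQL> select child_number, name, position, datatype_string, value_string
      2    from v$sql_bind_capture
      3   where sql_id = '&sql_id'
      4   order by child_number, position;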
    Note that this changes in 11g with the introduction of the "Adaptive Cursor Sharing".
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Cpu time is not getting displayed in explain plan

    Hi All,
    I am trying to analyze one query using explain plan, like below:
    1) explain plan for
    SELECT /*+ parallel(tsp,8) use_hash( tsp tp) */ count(1)
    FROM router tp,
    receiver tsp
    WHERE tp.rs = tsp.rp
    AND creater_date >=to_date('04032009000000','ddmmyyyyhh24miss')
    and tsp.XVF is not null
    and tp.XVF is not null
    and tp.role_name='BR';
    2)@$ORACLE_HOME/rdbms/admin/utlxpls.sql
    But I am getting only the following columns in the result:
    | Id | Operation | Name | Rows | Bytes | Cost |
    No CPU time is present.
    How can I estimate CPU time?
    Please help
    Thanks

    am_73798 wrote:
    I am trying to analyze one query using explain plan .like below
    But i am getting only following columns in result .
    | Id | Operation | Name | Rows | Bytes | Cost |
    No Cpu time preset .
    How can i extimate CPU time ?
    You need to mention your database version (4 digits, e.g. 9.2.0.8).
    In Oracle 9i CPU costing is disabled by default, you need to gather WORKLOAD system statistics to enable the CPU costing.
    In 10g CPU costing is enabled by default and uses default NOWORKLOAD system statistics if no WORKLOAD system statistics have been gathered. It can only be disabled by setting an undocumented parameter or by changing the OPTIMIZER_FEATURES_ENABLE parameter back to 9i compatibility.
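    For completeness, gathering WORKLOAD system statistics looks roughly like this (a hedged sketch; it needs DBA privileges and should bracket a representative workload window):
    SQL> exec dbms_stats.gather_system_stats('START');
    SQL> -- ...let a representative workload run for a while...
    SQL> exec dbms_stats.gather_system_stats('STOP');
    SQL> -- or, in a single call, sample for a fixed interval (in minutes):
    SQL> exec dbms_stats.gather_system_stats(gathering_mode => 'INTERVAL', interval => 60);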
    You can check the status of your system statistics by running the following query in SQL*Plus:
    column sname format a20
    column pname format a20
    column pval2 format a20
    select
    sname
    , pname
    , pval1
    , pval2
    from
    sys.aux_stats$;
    Can you show us the actual (complete) output you get from "utlxpls.sql"? Please wrap the output in the forum's code tag so the formatting is preserved.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Not Understanding the filter in Explain Plan - filter(NULL IS NOT NULL)

    Hi All,
    Request your help in understanding the below scenario. (I am not aware of the application and table details. Just trying to help my friend.)
    SQL> conn
    Enter user-name: [email protected]
    Enter password:
    Connected.
    SQL> select * from v$version;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bi
    PL/SQL Release 10.2.0.3.0 - Production
    CORE    10.2.0.3.0      Production
    TNS for Linux: Version 10.2.0.3.0 - Production
    NLSRTL Version 10.2.0.3.0 - Production
    --Checking the count in PO_LINES
    SQL> select count(*) from po_lines;
      COUNT(*)
             0
    --PO_LINES is a synonym
    SQL> select object_type,owner from dba_objects where object_name = 'PO_LINES';
    OBJECT_TYPE         OWNER
    SYNONYM             APPS
    --The synonym is pointing to PO.PO_LINES_ALL
    SQL> select * from user_synonyms where synonym_name = 'PO_LINES';
    SYNONYM_NAME                   TABLE_OWNER                    TABLE_NAME                     DB_LINK
    PO_LINES                       PO                             PO_LINES_ALL
    --But when counting PO.PO_LINES_ALL I am getting a different result
    SQL> select count(*) c from po.po_lines_all;
             C
          8828
    --Explain plan of the original query is
    SQL> explain plan for
      2  select
      3  * from po_lines;
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    | Id  | Operation          | Name         | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT   |              |     1 |   252 |     0   (0)|
    |*  1 |  FILTER            |              |       |       |            |
    |   2 |   TABLE ACCESS FULL| PO_LINES_ALL |  8796 |  2164K|   106   (4)|
    Predicate Information (identified by operation id):
       1 - filter(NULL IS NOT NULL)
    --Now the object PO.PO_LINES_ALL is TABLE, not an mview.
    SQL> select object_type,owner from dba_objects where object_name = 'PO_LINES_ALL';
    OBJECT_TYPE         OWNER
    TABLE               PO
    Seeking your help in understanding what is happening here.
    Thanks in Advance,
    jeneesh

    Next time, prefix with APPS. when you show us the explain plan:
    SQL> explain plan for
      2  select
      3  * from apps.po_lines;  -- added the prefix of owner.
    Just like you prefixed with PO. when you showed us the query on PO_LINES_ALL. It ensures that you are using the synonym which you showed us.
    Btw. PO_LINES_ALL, could still be a VIEW given your overview of the situation.
    Anyway a filter "NULL IS NOT NULL" is indicative that the optimizer performed something called semantic query optimization (SQO).
    SQO is the process of deducing new predicates based upon a) existing predicates in your query (of which there are none), b) predicates added to your query (e.g. by a VPD policy function), and c) declared constraints on the tables involved in your query.
    A typical example of when a "NULL IS NOT NULL" predicate will show up is when, for instance, the EMP table has a declared constraint on EMPNO like this:
    check(EMPNO > 0)
    And your query holds a predicate that is inconsistent with the constraint, for instance like this:
    select *
    from EMP
    where EMPNO <= 0
    Oracle will deduce that EMPNO cannot be both greater than zero (constraint) and smaller than or equal to zero (your query predicate), and will transform the query into:
    select *
    from EMP
    where EMPNO <= 0
      and NULL is NOT NULL
    thus preventing access to the EMP table altogether, and immediately returning the query with no data found.
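    If you want to see this for yourself, here is a throwaway example (table and constraint names are invented for illustration; on 10.2 this typically produces the same filter):
    SQL> create table emp_demo (empno number constraint emp_demo_chk check (empno > 0));
    SQL> explain plan for select * from emp_demo where empno <= 0;
    SQL> select * from table(dbms_xplan.display);
    The plan shows a FILTER step with "filter(NULL IS NOT NULL)" above the TABLE ACCESS FULL, so the scan underneath it is never actually executed.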

  • Filter(NULL IS NOT NULL) in Explain Plan ??

    Hi All,
    Can someone please explain what this explain plan statement means? I see a filter(NULL IS NOT NULL) as the first filter step and could not figure out from googling why it came up.
    My Query Used:
    EXPLAIN PLAN FOR
    MERGE INTO summary_bysrccd
    USING
    (SELECT LAST_DAY(TRUNC(to_timestamp(os.requestdatetime, 'yyyymmddhh24:mi:ss.ff4'))) AS SUMMARY_DATE,
    os.acctnum,
    ol.sourcecode AS sourcecode,
    ol.sourcename AS sourcename,
    count(1) cnt_articleview
    FROM article_views os , master_sourcecode ol
    where os.sourcecode = ol.sourcecode
    AND os.acctnum IS NOT NULL
    AND ol.sourcecode IS NOT NULL
    AND os.requestdatetime IS NOT NULL
    AND UPPER(os.success_ind) = 'S'
         AND (
              ('INCR'  = 'FULL'
              AND  (get_date_timestamp(os.requestdatetime) BETWEEN TO_DATE('23-AUG-2011 00:00:00','DD-MON-YYYY HH24:MI:SS') AND TO_DATE('27-AUG-2011 23:59:59','DD-MON-YYYY HH24:MI:SS')
              AND   os.entry_CreatedDate BETWEEN TO_DATE('22-AUG-2011 00:00:00','DD-MON-YYYY HH24:MI:SS') AND TO_DATE('28-AUG-2011 00:00:00','DD-MON-YYYY HH24:MI:SS')
              OR ('INCR' = 'FULL'
              AND os.entry_createddate BETWEEN TO_DATE('23-AUG-2011 00:00:00','DD-MON-YYYY HH24:MI:SS') AND TO_DATE('27-AUG-2011 23:59:59','DD-MON-YYYY HH24:MI:SS') )
    group by LAST_DAY(TRUNC(to_timestamp(os.requestdatetime, 'yyyymmddhh24:mi:ss.ff4'))),
    os.acctnum,ol.sourcecode,ol.sourcename) mrg_query
    ON (ods_av_summary_bysrccd.acctnum = mrg_query.acctnum AND
    ods_av_summary_bysrccd.summary_date=mrg_query.summary_date AND
    ods_av_summary_bysrccd.sourcecode=mrg_query.sourcecode)
    WHEN NOT MATCHED THEN
    INSERT (SUMMARY_date,ACCTNUM,SOURCECODE,SOURCENAME,CNT_ARTICLEVIEW,ENTRY_LASTUPDATEDDATE)
    VALUES(mrg_query.summary_date,mrg_query.acctnum,mrg_query.sourcecode,mrg_query.sourcename,
    mrg_query.cnt_articleview,sysdate)
    WHEN MATCHED THEN
    UPDATE SET ods_av_summary_bysrccd.cnt_articleview=
    CASE WHEN NVL('INCR','INCR') = 'FULL' THEN mrg_query.cnt_articleview
    ELSE ods_av_summary_bysrccd.cnt_articleview+mrg_query.cnt_articleview
    END,
    ods_av_summary_bysrccd.entry_lastupdateddate=sysdate;
    My Explain Plan:
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 268591246
    | Id  | Operation                                 | Name                      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | MERGE STATEMENT                           |                           |     1 |   456 |       |     3   (0)| 00:00:01 |       |       |
    |   1 |  MERGE                                    | ODS_AV_SUMMARY_BYSRCCD    |       |       |       |            |          |       |       |
    |   2 |   VIEW                                    |                           |       |       |       |            |          |       |       |
    |   3 |    NESTED LOOPS OUTER                     |                           |     1 |   417 |       |     3   (0)| 00:00:01 |       |       |
    |   4 |     VIEW                                  |                           |     1 |   360 |       |     5 (100)| 00:00:01 |       |       |
    |   5 |      SORT GROUP BY                        |                           |     1 |    73 |   595M|            |          |       |       |
    PLAN_TABLE_OUTPUT
    |*  6 |       FILTER                              |                           |       |       |       |            |          |       |       |
    |*  7 |        HASH JOIN                          |                           |  6975K|   485M|  3944K| 17594   (1)| 00:03:32 |       |       |
    |   8 |         TABLE ACCESS FULL                 | ODS_MASTER_SOURCECODE     | 84021 |  2953K|       |   273   (1)| 00:00:04 |       |       |
    |*  9 |         TABLE ACCESS BY GLOBAL INDEX ROWID| ODS_ARTICLE_VIEWS         |  7007K|   247M|       |   826   (0)| 00:00:10 |    33 |    33 |
    |* 10 |          INDEX FULL SCAN                  | IDX_AV_ACCTNUM            |    25M|       |       |    26   (0)| 00:00:01 |       |       |
    |  11 |     TABLE ACCESS BY GLOBAL INDEX ROWID    | ODS_AV_SUMMARY_BYSRCCD    |     1 |    57 |       |     3   (0)| 00:00:01 | ROWID | ROWID |
    |* 12 |      INDEX UNIQUE SCAN                    | ODS_AV_SUMMARY_BYSRCCD_PK |     1 |       |       |     2   (0)| 00:00:01 |       |       |
    Predicate Information (identified by operation id):
    PLAN_TABLE_OUTPUT
       6 - filter(NULL IS NOT NULL)
       7 - access("OS"."SOURCECODE"="OL"."SOURCECODE")
       9 - filter("OS"."REQUESTDATETIME" IS NOT NULL AND "OS"."ENTRY_CREATEDDATE">=TO_DATE(' 2011-08-23 00:00:00', 'syyyy-mm-dd
                  hh24:mi:ss') AND "OS"."ENTRY_CREATEDDATE"<=TO_DATE(' 2011-08-27 23:59:59', 'syyyy-mm-dd hh24:mi:ss') AND UPPER("OS"."SUCCESS_IND")='S')
      10 - filter("OS"."ACCTNUM" IS NOT NULL)
      12 - access("ODS_AV_SUMMARY_BYSRCCD"."SUMMARY_DATE"(+)=INTERNAL_FUNCTION("MRG_QUERY"."SUMMARY_DATE") AND
                  "ODS_AV_SUMMARY_BYSRCCD"."ACCTNUM"(+)="MRG_QUERY"."ACCTNUM" AND "ODS_AV_SUMMARY_BYSRCCD"."SOURCECODE"(+)="MRG_QUERY"."SOURCECODE")
    Note
    PLAN_TABLE_OUTPUT
       - dynamic sampling used for this statement

    Hi Toon,
    Thanks for the quick resolution. I went back and verified the table's column details and it has a NOT NULL constraint.
    Regards,
    Chaitanya
    P.S.: Is it OK if I ask you for some help regarding a production issue I have been encountering for 15 days with no clear resolution yet about what or why it happens (the issue is neither uniform nor regular: it affects some modules and happens on some days; I shall give the full details if you are willing to have a look)? I shall start a new post or email you directly, at your convenience.

  • Explain plan results are different in SQL Developer than SQL Plus

    My Environment:
    SQL Developer 1.0.0.15.27
    Platform where SQL Developer is running: Windows XP 2002 SP2
    Oracle Database and Client 9.2.0.7
    Optimizer_mode: FIRST_ROWS
    I have the following SQL statement:
    SELECT a1.comp_id
    FROM temp_au_company a0, au_company a1
    WHERE :b2 = a0.temp_emp_code
    AND a0.comp_id = a1.comp_id
    AND a0.sls_terr_code != a1.sls_terr_code
    AND a1.last_mdfy_date > :b1
    When I run an Explain in SQL Developer I get the following access path (which is the one I really want):
    SELECT STATEMENT
      TABLE ACCESS (BY INDEX ROWID) FEDLINK.AU_COMPANY
        NESTED LOOPS
          INDEX (RANGE SCAN) FEDLINK.UX2_TEMP_AU_COMPANY
          INDEX (RANGE SCAN) FEDLINK.PX1_COMPANY
    However, when I execute the statement with sql_trace turned on and use tkprof to generate the actual access path, the statement executes as follows (which is WAY more expensive):
    call       count       cpu    elapsed       disk      query    current       rows
    Parse          1      0.00       0.00          0          0          0          0
    Execute        1      0.00       0.00          0          0          0          0
    Fetch          1      3.58       6.68      28136      29232          0          0
    total          3      3.58       6.69      28136      29232          0          0
    Misses in library cache during parse: 1
    Optimizer goal: FIRST_ROWS
    Parsing user id: 979 (FEDLINK) (recursive depth: 1)
    Rows Row Source Operation
    0 NESTED LOOPS
    0 TABLE ACCESS FULL AU_COMPANY
    0 INDEX RANGE SCAN UX2_TEMP_AU_COMPANY (object id 49783)
    Notice the FULL access of au_company.
    I understand that SQL Developer has nothing to do with why the statement executed the way it did, but why is the Explain in SQL Developer different from the actual execution plan?
    Added note: when I run the explain in SQL*Plus it is the same as the actual execution. Here is the explain from SQL*Plus:
    explain plan for SELECT a1.comp_id
    FROM temp_au_company a0, au_company a1
    WHERE '1' = a0.temp_emp_code
    AND a0.comp_id = a1.comp_id
    AND a0.sls_terr_code != a1.sls_terr_code
    AND a1.last_mdfy_date > '01-MAY-2006';
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes | Cost |
    | 0 | SELECT STATEMENT | | 2 | 76 | 2597 |
    | 1 | NESTED LOOPS | | 2 | 76 | 2597 |
    | 2 | TABLE ACCESS FULL | AU_COMPANY | 2 | 42 | 2595 |
    | 3 | INDEX RANGE SCAN | UX2_TEMP_AU_COMPANY | 1 | 17 | 2
    Thanks,
    Brenda

    The explain is different (full scan of au_company in SQL Plus / index access in SQL Developer) even when I use variables in SQL Plus. Here is the output for SQL Plus using variables instead of literals:
    SQL> variable b1 varchar2
    SQL> variable b2 char
    SQL> explain plan for SELECT a1.comp_id
    2 FROM temp_au_company a0, au_company a1
    3 WHERE :b2 = a0.temp_emp_code
    4 AND a0.comp_id = a1.comp_id
    5 AND a0.sls_terr_code != a1.sls_terr_code
    6 AND a1.last_mdfy_date > :b1
    7 /
    Explained.
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes | Cost |
    | 0 | SELECT STATEMENT | | 3184 | 118K| 2995 |
    | 1 | HASH JOIN | | 3184 | 118K| 2995 |
    | 2 | INDEX RANGE SCAN | UX2_TEMP_AU_COMPANY | 3187 | 54179 | 3 |
    | 3 | TABLE ACCESS FULL | AU_COMPANY | 24009 | 492K| 2983 |
    Any other ideas? They should be the same.
    Brenda
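    One more thing worth comparing (a sketch for 9.2; it assumes the slow execution is still in the shared pool): EXPLAIN PLAN only predicts a plan, while V$SQL_PLAN records the plan the cursor really used, so pulling that out usually explains this kind of mismatch.
    SQL> select hash_value, child_number, sql_text
      2    from v$sql
      3   where sql_text like 'SELECT a1.comp_id%';
    SQL> select id, lpad(' ', depth) || operation || ' ' || options operation, object_name, cost
      2    from v$sql_plan
      3   where hash_value = &hash_value
      4   order by child_number, id;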

  • Performance problem: Query explain plan changes in pl/sql vs. literal args

    I have a complex query with 5+ table joins on large (million+ row) tables. In its most simplified form, it's essentially
    select * from largeTable large
    join anotherLargeTable anothr on (anothr.id_2 = large.pk_id)
    join...(other aux tables)
    where large.pk_id between 123 and 456;
    Its performance was excellent with literal arguments (1 sec per execution).
    But when I used PL/SQL bind argument variables instead of 123 and 456 as literals, the explain plan changes drastically and the query runs for 10+ minutes.
    Ex:
    CREATE PROCEDURE runQuery(param1 INTEGER, param2 INTEGER) IS
    CURSOR LT_CURSOR IS
    select * from largeTable large
    join anotherLargeTable anothr on (anothr.id_2 = large.pk_id)
    join...(other aux tables)
    where large.pk_id between param1 AND param2;
    BEGIN
    FOR aRecord IN LT_CURSOR
    LOOP
    (print timestamp...)
    END LOOP;
    END runQuery;
    Rewriting the query 5 different ways was unfruitful. DB hints were also unfruitful in this particular case. LargeTable.pk_id was an indexed field as were all other join fields.
    Solution:
    Lacking other options, I wrote a literal query that concatenated the variable args and opened a cursor for the literal query.
    Upside: It changed the explain plan to the only really fast option and performed at 1 second instead of 10mins.
    Downside: Query not cached for future use. Perfectly fine for this query's purpose.
    Other suggestions are welcome.

    AmandaSoosai wrote:
    I have a complex query with 5+ table joins on large (million+ row) tables. ... Its performance was excellent with literal arguments (1 sec per execution). But, when I used pl/sql bind argument variables instead of 123 and 456 as literals, the explain plan changes drastically, and runs 10+ minutes. ... Rewriting the query 5 different ways was unfruitful. DB hints were also unfruitful in this particular case. LargeTable.pk_id was an indexed field as were all other join fields. ... Other suggestions are welcome.
    Best wild guess based on what you've posted is a bind variable mismatch (your column is declared as a NUMBER data type and your bind variable is declared as a VARCHAR, for example). Unless you have histograms on the columns in question ... which, if you're using bind variables, is usually a really bad idea.
    A basic illustration of my guess
    http://blogs.oracle.com/optimizer/entry/how_do_i_get_sql_executed_from_an_application_to_uses_the_same_execution_plan_i_get_from_sqlplus
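    A minimal sketch of how to rule that guess out (table, column and procedure names are taken from the pseudo-code above, so treat them as placeholders): anchoring the parameter datatypes to the indexed column with %TYPE guarantees the binds and the column have the same datatype, so no implicit conversion can change the plan.
    create or replace procedure runQuery_demo(
      param1 in largeTable.pk_id%type,   -- anchored to the indexed column's datatype
      param2 in largeTable.pk_id%type)
    is
    begin
      for aRecord in (select large.pk_id
                        from largeTable large
                        join anotherLargeTable anothr on anothr.id_2 = large.pk_id
                       where large.pk_id between param1 and param2)
      loop
        null;  -- print timestamp / process the row, as in the original loop
      end loop;
    end;
    /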

  • How to improve the query performance or tune query from Explain Plan

    Hi
    The following is my explain plan for a SQL query (the plan is generated by Toad v9.7). How can I fix the query?
    SELECT STATEMENT ALL_ROWSCost: 4,160 Bytes: 25,296 Cardinality: 204                                         
         8 NESTED LOOPS Cost: 3 Bytes: 54 Cardinality: 1                                    
              5 NESTED LOOPS Cost: 2 Bytes: 23 Cardinality: 1                               
                   2 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 13 Cardinality: 1                          
                        1 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                     
                   4 TABLE ACCESS BY INDEX ROWID TABLE AR.HZ_CUST_ACCOUNTS Cost: 1 Bytes: 10 Cardinality: 1                          
                        3 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.HZ_CUST_ACCOUNTS_U1 Cost: 1 Cardinality: 1                     
              7 TABLE ACCESS BY INDEX ROWID TABLE AR.HZ_PARTIES Cost: 1 Bytes: 31 Cardinality: 1                               
                   6 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.HZ_PARTIES_U1 Cost: 1 Cardinality: 1                          
         10 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1                                    
              9 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                               
         15 NESTED LOOPS Cost: 2 Bytes: 29 Cardinality: 1                                    
              12 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1                               
                   11 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                          
              14 TABLE ACCESS BY INDEX ROWID TABLE ONT.OE_ORDER_HEADERS_ALL Cost: 1 Bytes: 17 Cardinality: 1                               
                   13 INDEX RANGE SCAN INDEX (UNIQUE) ONT.OE_ORDER_HEADERS_U2 Cost: 1 Cardinality: 1                          
         21 FILTER                                    
              16 TABLE ACCESS FULL TABLE ONT.OE_TRANSACTION_TYPES_TL Cost: 2 Bytes: 1,127 Cardinality: 49                               
              20 NESTED LOOPS Cost: 2 Bytes: 21 Cardinality: 1                               
                   18 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1                          
                        17 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                     
                   19 INDEX RANGE SCAN INDEX (UNIQUE) ONT.OE_ORDER_HEADERS_U2 Cost: 1 Bytes: 9 Cardinality: 1                          
         23 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1                                    
              22 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                               
         45 NESTED LOOPS Cost: 4,160 Bytes: 25,296 Cardinality: 204                                    
              42 NESTED LOOPS Cost: 4,150 Bytes: 23,052 Cardinality: 204                               
                   38 NESTED LOOPS Cost: 4,140 Bytes: 19,992 Cardinality: 204                          
                        34 NESTED LOOPS Cost: 4,094 Bytes: 75,850 Cardinality: 925                     
                             30 NESTED LOOPS Cost: 3,909 Bytes: 210,843 Cardinality: 3,699                
                                  26 PARTITION LIST ALL Cost: 2,436 Bytes: 338,491 Cardinality: 14,717 Partition #: 29 Partitions accessed #1 - #18          
                                       25 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_AE_HEADERS Cost: 2,436 Bytes: 338,491 Cardinality: 14,717 Partition #: 29 Partitions accessed #1 - #18     
                                            24 INDEX SKIP SCAN INDEX XLA.XLA_AE_HEADERS_N1 Cost: 264 Cardinality: 1,398,115 Partition #: 29 Partitions accessed #1 - #18
                                  29 PARTITION LIST ITERATOR Cost: 1 Bytes: 34 Cardinality: 1 Partition #: 32           
                                       28 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_AE_LINES Cost: 1 Bytes: 34 Cardinality: 1 Partition #: 32      
                                            27 INDEX RANGE SCAN INDEX (UNIQUE) XLA.XLA_AE_LINES_U1 Cost: 1 Cardinality: 1 Partition #: 32
                             33 PARTITION LIST ITERATOR Cost: 1 Bytes: 25 Cardinality: 1 Partition #: 35                
                                  32 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_DISTRIBUTION_LINKS Cost: 1 Bytes: 25 Cardinality: 1 Partition #: 35           
                                       31 INDEX RANGE SCAN INDEX XLA.XLA_DISTRIBUTION_LINKS_N3 Cost: 1 Cardinality: 1 Partition #: 35      
                        37 PARTITION LIST SINGLE Cost: 1 Bytes: 16 Cardinality: 1 Partition #: 38                     
                             36 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_EVENTS Cost: 1 Bytes: 16 Cardinality: 1 Partition #: 39 Partitions accessed #2               
                                  35 INDEX UNIQUE SCAN INDEX (UNIQUE) XLA.XLA_EVENTS_U1 Cost: 1 Cardinality: 1 Partition #: 40 Partitions accessed #2          
                   41 PARTITION LIST SINGLE Cost: 1 Bytes: 15 Cardinality: 1 Partition #: 41                          
                        40 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_TRANSACTION_ENTITIES Cost: 1 Bytes: 15 Cardinality: 1 Partition #: 42 Partitions accessed #2                    
                             39 INDEX UNIQUE SCAN INDEX (UNIQUE) XLA.XLA_TRANSACTION_ENTITIES_U1 Cost: 1 Cardinality: 1 Partition #: 43 Partitions accessed #2               
              44 TABLE ACCESS BY INDEX ROWID TABLE GL.GL_CODE_COMBINATIONS Cost: 1 Bytes: 11 Cardinality: 1                               
                   43 INDEX UNIQUE SCAN INDEX (UNIQUE) GL.GL_CODE_COMBINATIONS_U1 Cost: 1 Cardinality: 1
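    It is usually easier to discuss a plan when the predicate section is visible, which the Toad cost tree above does not show. A sketch of how to produce that output (the statement itself is a placeholder for the query behind this plan):
    explain plan for
      select ... ;   -- the statement behind the plan above
    select * from table(dbms_xplan.display(null, null, 'ALL'));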

    damorgan wrote:
    Tuning is NOT about reducing the cost of i/o.
    i/o is only one of many contributors to cost and only one of many contributors to waits.
    Any time you would like to explore this further run this code:
    SELECT 1 FROM dual
    WHERE regexp_like(' ','^*[ ]*a');
    but not on a production box because you are going to experience an extreme tuning event with zero i/o.
    And when I say "extreme" I mean "EXTREME!"
    You've been warned.
    I think you just need a faster server.
    SQL> set autotrace traceonly statistics
    SQL> set timing on
    SQL> select 1 from dual
      2  where
      3  regexp_like   (' ','^*[ ]*a');
    no rows selected
    Elapsed: 00:00:00.00
    Statistics
              1  recursive calls
              0  db block gets
              0  consistent gets
              0  physical reads
              0  redo size
            243  bytes sent via SQL*Net to client
            349  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              0  rows processed
    Repeated from an Oracle 10.2.0.x instance:
    SQL> SELECT DISTINCT SID FROM V$MYSTAT;
           SID
           310
    SQL> ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    Session altered.
    SQL> select 1 from dual
      2  where
      3  regexp_like   (' ','^*[ ]*a');
    The session is hung. Wait a little while and connect to the database using a different session:
    COLUMN STAT_NAME FORMAT A35 TRU
    SET PAGESIZE 200
    SELECT
      STAT_NAME,
      VALUE
    FROM
      V$SESS_TIME_MODEL
    WHERE
      SID=310;
    STAT_NAME                                VALUE
    DB time                                   9247
    DB CPU                                    9247
    background elapsed time                      0
    background cpu time                          0
    sequence load elapsed time                   0
    parse time elapsed                        6374
    hard parse elapsed time                   5997
    sql execute elapsed time                  2939
    connection management call elapsed        1660
    failed parse elapsed time                    0
    failed parse (out of shared memory)          0
    hard parse (sharing criteria) elaps          0
    hard parse (bind mismatch) elapsed           0
    PL/SQL execution elapsed time               95
    inbound PL/SQL rpc elapsed time              0
    PL/SQL compilation elapsed time              0
    Java execution elapsed time                  0
    repeated bind elapsed time                  48
    RMAN cpu time (backup/restore)               0
    Seems to be using a bit of time for the hard parse (hard parse elapsed time). Wait a little while, then re-execute the query:
    STAT_NAME                                VALUE
    DB time                                   9247
    DB CPU                                    9247
    background elapsed time                      0
    background cpu time                          0
    sequence load elapsed time                   0
    parse time elapsed                        6374
    hard parse elapsed time                   5997
    sql execute elapsed time                  2939
    connection management call elapsed        1660
    failed parse elapsed time                    0
    failed parse (out of shared memory)          0
    hard parse (sharing criteria) elaps          0
    hard parse (bind mismatch) elapsed           0
    PL/SQL execution elapsed time               95
    inbound PL/SQL rpc elapsed time              0
    PL/SQL compilation elapsed time              0
    Java execution elapsed time                  0
    repeated bind elapsed time                  48
    RMAN cpu time (backup/restore)               0
    The session is not reporting additional CPU usage or parse time.
    Let's check one of the session's statistics:
    SELECT
      SS.VALUE
    FROM
      V$SESSTAT SS,
      V$STATNAME SN
    WHERE
      SN.NAME='consistent gets'
      AND SN.STATISTIC#=SS.STATISTIC#
      AND SS.SID=310;
         VALUE
           163
    Not many consistent gets after 20+ minutes.
    Let's take a look at the plan:
    SQL> SELECT SQL_ID,CHILD_NUMBER FROM V$SQL WHERE SQL_TEXT LIKE 'select 1 from du
    al%';
    SQL_ID        CHILD_NUMBER
    04mpgrzhsv72w            0
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('04mpgrzhsv72w',0,'TYPICAL'))
    select 1 from dual where regexp_like   (' ','^*[ ]*a')
    NOTE: cannot fetch plan for SQL_ID: 04mpgrzhsv72w, CHILD_NUMBER: 0
          Please verify value of SQL_ID and CHILD_NUMBER;
          It could also be that the plan is no longer in cursor cache (check v$sql_plan)
    No plan...
    Let's take a look at the 10053 trace file:
    Registered qb: SEL$1 0x19157f38 (PARSER)
      signature (): qb_name=SEL$1 nbfros=1 flg=0
        fro(0): flg=4 objn=258 hint_alias="DUAL"@"SEL$1"
    Predicate Move-Around (PM)
    PM: Considering predicate move-around in SEL$1 (#0).
    PM:   Checking validity of predicate move-around in SEL$1 (#0).
    CBQT: Validity checks failed for 7uqx4guu04x3g.
    CVM: Considering view merge in query block SEL$1 (#0)
    CBQT: Validity checks failed for 7uqx4guu04x3g.
    Subquery Unnest
    SU: Considering subquery unnesting in query block SEL$1 (#0)
    Set-Join Conversion (SJC)
    SJC: Considering set-join conversion in SEL$1 (#0).
    Predicate Move-Around (PM)
    PM: Considering predicate move-around in SEL$1 (#0).
    PM:   Checking validity of predicate move-around in SEL$1 (#0).
    PM:     PM bypassed: Outer query contains no views.
    FPD: Considering simple filter push in SEL$1 (#0)
    FPD:   Current where clause predicates in SEL$1 (#0) :
              REGEXP_LIKE (' ','^*[ ]*a')
    kkogcp: try to generate transitive predicate from check constraints for SEL$1 (#0)
    predicates with check contraints:  REGEXP_LIKE (' ','^*[ ]*a')
    after transitive predicate generation:  REGEXP_LIKE (' ','^*[ ]*a')
    finally:  REGEXP_LIKE (' ','^*[ ]*a')
    apadrv-start: call(in-use=592, alloc=16344), compile(in-use=37448, alloc=42256)
    kkoqbc-start
                : call(in-use=592, alloc=16344), compile(in-use=38336, alloc=42256)
    kkoqbc-subheap (create addr=000000001915C238)
    Looks like the query never had a chance to start executing - it is still parsing after 20 minutes.
    I am not sure that this is a good example - the query either executes very fast, or never has a chance to start executing. But it might still make your point that physical I/O is not always the problem when performance problems are experienced.
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • Same query, same dataset, same ddl setup, but wildly different explain plan

    Hello, O fountains of Oracle knowledge!
    We have a problem that caused us a full stop when rolling out a new version of our system to a customer and a whole Sunday to boot.
    The scenario is as follows:
    1. A previous version database schema
    2. The current version database schema
    3. A migration script to migrate the old schema to the new
    So we perform the following migration:
    1. Export the previous version database schema
    2. Import into a new schema called schema_old
    3. Create a new schema called schema_new
    4. Run migration script which creates objects, copies data, creates indexes etc etc in schema_new
    The migration runs fine in all environments (development, test and production)
    In our development and test environments performance is stellar, on the customer production server the performance is terrible.
    This using the exact same export file (from the production environment) and performing the exact same steps with the exact same migration script.
    Database version is 10.2.0.1.0 EE on all databases. OS is Microsoft Windows Server 2003 EE SP2 on all servers.
    The system is not in any sense under a heavy load (we have tested with no other load than ourselves).
    Looking at the explain plan for a query that is run frequently and does not use bind variables we see wildly different explain plans.
    The explain plan cost on our development and test servers is estimated at 7 for this query and there are no full table scans.
    On the production server the cost is 8433 and there are two full table scans, of which one is on the largest table.
    We have tried to run analyse on all objects with very little effect. The plan changed very slightly, but still includes the two full table scans on the problem server and the cost is still the same.
    All tables and indexes are identical (including storage options), created from the same migration script.
    I am currently at a loss for where to look. What can be causing this? I assume this could be caused by some parameter that is set on the server, but I don't know what to look for.
    I would be very grateful for any pointers.
    Thanks,
    Håkon

    Thank you for your answer.
    We collected statistics only after we determined that the production server was not behaving according to expectations.
    In this case we used TOAD and the tool within to collect statistics for all objects. We used 'Analyze' and 'Compute Statistics' options.
    I am not an expert, so sorry if this is too naive an approach.
    Here is the query:
    SELECT count(0)
    FROM score_result sr, web_scorecard sc, product p
    WHERE sr.score_final_decision like 'VENT%'  
    AND sc.CREDIT_APPLICATION_ID = sr.CREDIT_APPLICATION_ID  
    AND sc.application_complete='Y'   
    AND p.product = sc.web_product   
    AND p.inactive_product = '2';
    I use this as an example, but the problem exists for virtually all queries.
    The output from the 'good' server:
    | Id  | Operation                      | Name                  | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT               |                       |     1 |    39 |     7   (0)|
    |   1 |  SORT AGGREGATE                |                       |     1 |    39 |            |
    |   2 |   NESTED LOOPS                 |                       |     1 |    39 |     7   (0)|
    |   3 |    NESTED LOOPS                |                       |     1 |    30 |     6   (0)|
    |   4 |     TABLE ACCESS BY INDEX ROWID| SCORE_RESULT          |     1 |    17 |     4   (0)|
    |   5 |      INDEX RANGE SCAN          | SR_FINAL_DECISION_IDX |     1 |       |     3   (0)|
    |   6 |     TABLE ACCESS BY INDEX ROWID| WEB_SCORECARD         |     1 |    13 |     2   (0)|
    |   7 |      INDEX UNIQUE SCAN         | WEB_SCORECARD_PK      |     1 |       |     1   (0)|
    |   8 |    TABLE ACCESS BY INDEX ROWID | PRODUCT               |     1 |     9 |     1   (0)|
    |   9 |     INDEX UNIQUE SCAN          | PK_PRODUCT            |     1 |       |     0   (0)|
    ---------------------------------------------------------------------------------------------
    The output from the 'bad' server:
    | Id  | Operation                 | Name                  | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT          |                       |     1 |    32 |  8344   (3)|
    |   1 |  SORT AGGREGATE           |                       |     1 |    32 |            |
    |   2 |   HASH JOIN               |                       | 10887 |   340K|  8344   (3)|
    |   3 |    TABLE ACCESS FULL      | PRODUCT               |     6 |    42 |     3   (0)|
    |   4 |    HASH JOIN              |                       | 34381 |   839K|  8340   (3)|
    |   5 |     VIEW                  | index$_join$_001      | 34381 |   503K|  2193   (3)|
    |   6 |      HASH JOIN            |                       |       |       |            |
    |   7 |       INDEX RANGE SCAN    | SR_FINAL_DECISION_IDX | 34381 |   503K|   280   (3)|
    |   8 |       INDEX FAST FULL SCAN| SCORE_RESULT_PK       | 34381 |   503K|  1371   (2)|
    |   9 |     TABLE ACCESS FULL     | WEB_SCORECARD         |   489K|  4782K|  6137   (4)|
    ----------------------------------------------------------------------------------------
    I hope the formatting makes this readable.
    Stats (from SQL Developer), good table:
    NUM_ROWS     489716
    BLOCKS     27198
    AVG_ROW_LEN     312
    SAMPLE_SIZE     489716
    LAST_ANALYZED     15.12.2009
    LAST_ANALYZED_SINCE     15.12.2009
    Stats (from SQL Developer), bad table:
    NUM_ROWS     489716
    BLOCKS     27199
    AVG_ROW_LEN     395
    SAMPLE_SIZE     489716
    LAST_ANALYZED     17.12.2009
    LAST_ANALYZED_SINCE     17.12.2009
    I'm unsure what would cause the difference in average row length.
    I could obviously try to tune our SQL statements to run better on the misbehaving server, but I would rather understand why the plans are different and make sure that we can expect similar behaviour between environments.
    Thank you again for trying to help me.
    Håkon
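    Since the row counts match but the average row length differs, comparing the column-level statistics and histograms between the two databases may show where the optimizer's picture diverges (a sketch; the table name is taken from the plans above and an owner filter may need adding):
    select column_name, num_distinct, num_nulls, density, histogram, last_analyzed
      from dba_tab_col_statistics
     where table_name = 'WEB_SCORECARD'
     order by column_name;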

  • How to change the explain plan for currently running query?

    Hi All,
    I am using Oracle 9i Enterprise Edition. I have a query which is framed dynamically and is running in the database. I noticed a table with 31147758 rows
    in the query which has no indexes and is taking more time to process. I tried to create an index on that table. I know the query is already running with a FULL table scan. Is it possible to change the explain plan for the currently running query to consider the index?
    SELECT /*+ USE_HASH (c,e,b,a) */
    d.att_fcc extrt_prod_dim_id,
    d.att_fcc compr_prod_dim_id,
      a.glbl_uniq_id glbl_uniq_id,
      to_date(c.dit_code,'RRRRMMDD')STRT_DT,
      (to_date(c.dit_code,'RRRRMMDD')+150)END_DT,
      a.pat_nbr pat_id,
      a.rxer_id       rxer_id,
      e.rxer_geog_id  rxer_geog_id,
      a.pharmy_id pharmy_id,
      a.pscr_pack_id pscr_pack_id,
      a.dspnsd_pack_id dspnsd_pack_id,
      DENSE_RANK () OVER (PARTITION BY a.pat_nbr ORDER BY c.dit_code) daterank,
      COUNT( DISTINCT d.att_fcc ) OVER (PARTITION BY a.pat_nbr, c.dit_code) event_cnt,
      DENSE_RANK () OVER (PARTITION BY a.pat_nbr,
    d.att_fcc
      ORDER BY c.dit_code) prodrank,
      DENSE_RANK () OVER (PARTITION BY a.pat_nbr,
    d.att_fcc
      ORDER BY c.dit_code DESC) stoprank
      FROM
      pd_dimitems c,
       pd_pack_attribs   d ,
        lrx_tmp_rxer_geog e,
        lrx_tmp_pat_daterank p,
        lrx_tmp_valid_fact_link     a
        WHERE c.dit_id = a.tm_id
        AND   e.rxer_id = a.rxer_id
        AND   a.glbl_uniq_id = p.glbl_uniq_id
        AND   p.daterank > 1
      AND   a.pscr_pack_id = d.att_dit_id
    The table lrx_tmp_pat_daterank has those 31147758 rows. So I am wondering how to make the query use the newly created index on the table?

    Why do you think using indexes will improve the performance of the query? How many rows does this query return? The optimizer might choose a full table scan when it finds that an index plan would not be useful. Why are you using the /*+ USE_HASH (c,e,b,a) */ hint? This hint will force Oracle to use full table scans instead of using the index. Try removing it and see if the plan changes.
    Regards,
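    Note that a statement which is already executing keeps the plan it was parsed with; a new index can only influence the next parse. To see whether the optimizer would pick the index at all, a quick check (a sketch; the statement text is the query above with the hint removed) is to explain it both ways and compare:
    explain plan set statement_id = 'no_hint' for
      select d.att_fcc extrt_prod_dim_id, ... ;   -- the query above, without the USE_HASH hint
    select * from table(dbms_xplan.display('PLAN_TABLE', 'no_hint'));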

  • Explain plan before and after ??

    Is there a way to find out what the SQL plan was before and what the SQL plan is right now?
    I know we can look at dba_hist_sql_plan, but I am not sure how to put it all together...
    I am on 10.2.0.3.
    I have a SQL statement that is running slow right now, and I have its sql_id... it looks like it is running
    slow now but last week it was fine... I need to know what the explain plan was last week.

    If you are licensed to use AWR and the history tables, you can do this:
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_AWR('&SQL_ID')); ---to get the SQL execution plan as it was run from history
    For the current one, you know how to get it
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&SQL_ID'));
    and, if you have explained the statement yourself with EXPLAIN PLAN FOR:
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    You can also query dba_hist_sql_plan and format the results.
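    To pull the historical plan(s) straight from the AWR tables (a sketch; like DISPLAY_AWR it requires the Diagnostics Pack licence mentioned above):
    select plan_hash_value, id, operation, options, object_name, cost
      from dba_hist_sql_plan
     where sql_id = '&sql_id'
     order by plan_hash_value, id;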

  • What are the permissions needed to run explain plans via sql develeper?

    Are the permissions needed to run explain plans in SQL Developer the same as when you run them via SQL*Plus?

    Yes, the same permissions; explain plan does not depend on the tool.

  • What are the privileges required for explain plan in Oracle 11g database

    I am facing a problem doing an explain plan for a view in an Oracle 11g database. When I select from the view like this:
    select * from zonewisearpu
    it does a select on the view, but when I run an explain plan like
    explain plan for
    select * from zonewisearpu
    I get an 'insufficient privileges' error.
    Please let me know if something is being missed, as I guess system-level privileges are required to execute this.
    I hope my question is clear.
    It is a humble request to revert urgently if possible, as I need to complete a task and do not know the way out.
    Regards

    975148 wrote:
    Thanks for your reply. I have found out that an explain plan is possible on the user's own objects and is not possible on objects granted from a different schema. For example, if I do an explain plan on a view querying tables from a different schema, it does not allow the explain plan to proceed. This could mean that explain plan needs different privileges than just a select.
    Requesting a reply to this.
    Here is a simple test case that I have performed
    SQL> create user test1 identified by test1;
    User created.
    SQL> create user test2 identified by test1;
    User created.
    SQL> grant connect, resource to test1,test2;
    Grant succeeded.
    SQL> create table test1.tab1 as select * from v$session;
    Table created.
    SQL> connect test2/test1
    Connected.
    SQL> show user
    USER is "TEST2"
    SQL>
    SQL> explain plan for
      2  select sid,serial#,status,username from test1.tab1 where username<> '';
    Explained.
    SQL>
    So, as can be seen, I am able to do an explain plan from user test2 for a table belonging to user test1.
    As far as privileges are concerned, the following is the list:
    SQL> select * from dba_role_privs where grantee in ('TEST1','TEST2') order by 1;
    GRANTEE                        GRANTED_ROLE                   ADM DEF
    TEST1                          CONNECT                        NO  YES
    TEST1                          RESOURCE                       NO  YES
    TEST2                          CONNECT                        NO  YES
    TEST2                          RESOURCE                       NO  YES
    SQL>
    SQL> select grantee,owner,table_name,privilege from dba_tab_privs where grantee in ('TEST1','TEST2') order by 1;
    GRANTEE    OWNER      TABLE_NAME           PRIVILEGE
    TEST2      TEST1      TAB1                 SELECT
    SQL>
    SQL>  select * from dba_sys_privs where grantee in ('TEST1','TEST2') order by 1;
    GRANTEE    PRIVILEGE                      ADM
    TEST1      UNLIMITED TABLESPACE           NO
    TEST2      UNLIMITED TABLESPACE           NO
    SQL>

  • What is the significance of the "cost" column in an explain plan

    Hi,
    Can anyone explain the meaning of the values we get in the cost column of the explain plan? For example: Cost: 4500. What does this value mean, and in which units is it measured, i.e. seconds, nanoseconds, etc.?

    kingfisher,
    OK, one more link for you, but I shall quote the text here as well:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/optimops.htm#i82005
    The cost is an estimated value proportional to the expected resource use needed to execute the statement with a particular plan. The optimizer calculates the cost of access paths and join orders based on the estimated computer resources, which includes I/O, CPU, and memory.
    And few paragraphs down,
    13.4.1.3.3 Cost
    The cost represents units of work or resource used. The query optimizer uses disk I/O, CPU usage, and memory usage as units of work. So, the cost used by the query optimizer represents an estimate of the number of disk I/Os and the amount of CPU and memory used in performing an operation. The operation can be scanning a table, accessing rows from a table by using an index, joining two tables together, or sorting a row set. The cost of a query plan is the number of work units that are expected to be incurred when the query is executed and its result produced.
    The access path determines the number of units of work required to get data from a base table. The access path can be a table scan, a fast full index scan, or an index scan. During table scan or fast full index scan, multiple blocks are read from the disk in a single I/O operation. Therefore, the cost of a table scan or a fast full index scan depends on the number of blocks to be scanned and the multiblock read count value. The cost of an index scan depends on the levels in the B-tree, the number of index leaf blocks to be scanned, and the number of rows to be fetched using the rowid in the index keys. The cost of fetching rows using rowids depends on the index clustering factor. See "Assessing I/O for Blocks, not Rows".
    The join cost represents the combination of the individual access costs of the two row sets being joined, plus the cost of the join operation.
    Now I guess if you read this part, it should be pretty clear what cost is. Cost is an evaluation of the resources that Oracle estimates for every step incurred in the query execution. There are a number of steps, and each may involve doing I/O, consuming CPU and/or using memory. Cost is a factor which Oracle uses, combining all three of these (depending on the version), to represent the work done, or expected to be done, in executing a query. Ideally this should represent the time spent on each and every step; that is the objective of the cost figure, but at the moment it is not there. There may be a situation where the cost is shown as very high but the query is working fine. There are workarounds which can bring the cost down, for example tweaking the optimizer_index_cost_adj parameter, which tells Oracle how to weight our indexes and may lead it to pick a lower-cost evaluation. The cost model is becoming more and more mature, and in the next releases we may see the cost representing exactly the time that we would spend in the query; at the moment, it is not there. So, as Chris mentioned, tuning for low cost only is not a good way.
    I suggest you grab a copy of JL's book. He has explained it much better there.
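    For reference, the cost appears directly in the plan output. A minimal sketch follows (the table name is hypothetical): the "Cost (%CPU)" column shown per step is the unitless estimate being discussed here, and the "Time" column next to it is only derived from that estimate:
    [code]
    -- Hypothetical table; the DBMS_XPLAN output includes a "Cost (%CPU)" column per step.
    EXPLAIN PLAN FOR
      SELECT * FROM sales WHERE cust_id = 100;

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY(NULL, NULL, 'TYPICAL'));
    [/code]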
    Hope I said something useful.
    Aman....

  • How to create an explain plan with row source statistics for a complex query that includes multiple table joins?

    1. How do I create an explain plan with row source statistics for a complex query that includes multiple table joins?
    When multiple tables are involved and the actual number of rows returned is more than what the explain plan predicts, how can I find out what change is needed in the statistics/plan?
    2. Do row source statistics give some kind of insight into extended statistics?

    You can get row source statistics only *after* the SQL has been executed; an explain plan on its own cannot give you row source statistics.
    To get row source statistics, either set STATISTICS_LEVEL='ALL' in the session that executes the SQL, or use the hint "gather_plan_statistics" in the SQL being executed.
    Then use dbms_xplan.display_cursor, as in the sketch below.
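    A minimal sketch of that approach (emp/dept are hypothetical tables, used only for illustration); the 'ALLSTATS LAST' format shows estimated rows (E-Rows) next to actual rows (A-Rows) for the last execution:
    [code]
    ALTER SESSION SET statistics_level = ALL;   -- or rely on the hint below instead

    SELECT /*+ gather_plan_statistics */ d.dname, COUNT(*)
    FROM   emp e, dept d
    WHERE  e.deptno = d.deptno
    GROUP  BY d.dname;

    -- NULL, NULL picks up the last statement executed in this session.
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
    [/code]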
    Hemant K Chitale
