Explain plan issue

Oracle 11.2.0.1
Windows
SQL> show user;
USER is "SCOTT"
SQL> set autotrace on;
ERROR:
ORA-06575: Package or function DBMS_XPLAN is in an invalid state
SP2-0611: Error enabling EXPLAIN report
SQL> connect hr/hr
Connected.
SQL> set autotrace on;
SQL>
Why is the SCOTT user getting the above error, and how do I correct it? Please help me.
Thanks.

Thank you so much sir. You are 100% correct.
SQL> select object_type,object_name
  2  from user_objects
  3  where object_name like '%PLAN%';
OBJECT_TYPE         OBJECT_NAME
PACKAGE             DBMS_XPLAN
LIBRARY             DBMS_XPLAN_LIB
TABLE               PLAN_TABLE
SQL> drop table plan_table purge;
Table dropped.
SQL> drop package dbms_xplan;
Package dropped.
SQL> select object_type,object_name
  2  from user_objects
  3  where object_name like '%PLAN%';
OBJECT_TYPE         OBJECT_NAME
LIBRARY             DBMS_XPLAN_LIB
SQL> drop library dbms_xplan_lib;
Library dropped.
SQL> set autotrace on;
SQL> explain plan for select * from emp;
Explained.
SQL> SET LINESIZE 130
SQL> SET PAGESIZE 0
SQL> SELECT *
  2  FROM   TABLE(DBMS_XPLAN.DISPLAY);
Plan hash value: 3956160932
and I got the plan.
Thanks again.
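
For anyone else hitting the same ORA-06575: the root cause here was local copies of DBMS_XPLAN (plus a matching PLAN_TABLE and library) in the SCOTT schema shadowing the SYS objects behind the public synonyms. A minimal sketch for confirming that before dropping anything - the query itself is not from the original thread, only the object names are:

SELECT owner, object_type, object_name, status
FROM   all_objects
WHERE  object_name IN ('DBMS_XPLAN', 'PLAN_TABLE', 'DBMS_XPLAN_LIB')
ORDER  BY owner, object_name;

-- If any of these rows are owned by the current schema (especially with STATUS = 'INVALID'),
-- the local copy shadows the SYS object and AUTOTRACE/DBMS_XPLAN can fail with ORA-06575
-- until it is dropped, as done in the transcript above.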

Similar Messages

  • Problems with explain plan and statement

    Hi community,
    I have migrated a j2ee application from DB2 to Oracle.
    First some facts of our application and database instance:
    We are using oracle version 10.2.0.3 and driver version 10.2.0.3. It runs with charset Unicode 3.0 UTF-8.
    Our application uses Tomcat as the web container and JBoss as the application server. We are only using prepared statements, so if I talk about statements I always mean prepared statements. Also, our application sets the defaultNChar property to true because every char and varchar field has been created as an nchar and nvarchar.
    We have some jsp sites that contain lists with search forms. Every time I enter a value into a form that returns a filled result set, the lists perform great. But every time I enter a value that returns an empty result set, the lists are 100 times slower. The jsp sites are running in the Tomcat environment and submitting their statements directly to the database. The connections are pooled by dbcp. So what can cause this behaviour?
    To analyze this problem I started logging all statements and filled-in search field values and combinations that are executed by the lists described above. I also developed a standalone helper tool that reads the logged statements, executes them against the database and generates an explain plan for every statement. But now a strange situation appears. Every statement that performs really fast within our application is executed extremely slowly by the helper tool. So I edited some jsp pages within our application to force an explain plan from there (Tomcat env). So when executing the same statement with exactly the same code, I get two completely different explain plans.
    First the statement itself:
    select LINVIN.BBASE , INVINNUM , INVINNUMALT , LINVIN.LSUPPLIERNUM , LSUPPLIERNUMEXT , LINVIN.COMPANYCODE , ACCOUNT , INVINTXT , INVINSTS , INVINTYP , INVINDAT , RECEIPTDAT , POSTED , POSTINGDATE , CHECKCOSTCENTER , WORKFLOWIDEXT , INVINREFERENCE , RESPONSIBLEPERS , INVINSUM_V , INVINSUMGROSS_V , VOUCHERNUM , HASPOSITIONS , PROCESSINSTANCEID , FCURISO_V , LSUPPLIER.AADDRLINE1 from LINVIN, LSUPPLIER where LINVIN.BBASE = LSUPPLIER.BBASE and LINVIN.LSUPPLIERNUM = LSUPPLIER.LSUPPLIERNUM and LINVIN.BBASE = ? order by LINVIN.BBASE, INVINDAT DESC
    Now the explain plan from our application:
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 101 | 28583 | 55 (0)| 00:00:01 |
    | 1 | NESTED LOOPS | | 101 | 28583 | 55 (0)| 00:00:01 |
    | 2 | TABLE ACCESS BY INDEX ROWID| LINVIN | 93709 | 12M| 25 (0)| 00:00:01 |
    |* 3 | INDEX RANGE SCAN | LINV_INVDAT | 101 | | 1 (0)| 00:00:01 |
    | 4 | TABLE ACCESS BY INDEX ROWID| LSUPPLIER | 1 | 148 | 1 (0)| 00:00:01 |
    |* 5 | INDEX UNIQUE SCAN | PK_177597 | 1 | | 1 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    3 - access("LINVIN"."BBASE"=:1)
    filter("LINVIN"."BBASE"=:1)
    5 - access("LSUPPLIER"."BBASE"=:1 AND "LINVIN"."LSUPPLIERNUM"="LSUPPLIER"."LSUPPLIERNUM")
    Now the one from the standalone tool:
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 93773 | 25M| | 12898 (1)| 00:02:35 |
    | 1 | SORT ORDER BY | | 93773 | 25M| 61M| 12898 (1)| 00:02:35 |
    |* 2 | HASH JOIN | | 93773 | 25M| 2592K| 7185 (1)| 00:01:27 |
    | 3 | TABLE ACCESS BY INDEX ROWID| LSUPPLIER | 16540 | 2390K| | 332 (0)| 00:00:04 |
    |* 4 | INDEX RANGE SCAN | LSUPPLIER_HAS_BASE_FK | 16540 | | | 11 (0)| 00:00:01 |
    | 5 | TABLE ACCESS BY INDEX ROWID| LINVIN | 93709 | 12M| | 6073 (1)| 00:01:13 |
    |* 6 | INDEX RANGE SCAN | LINVOICE_BMDT_FK | 93709 | | | 84 (2)| 00:00:02 |
    Predicate Information (identified by operation id):
    2 - access("LINVIN"."BBASE"="LSUPPLIER"."BBASE" AND "LINVIN"."LSUPPLIERNUM"="LSUPPLIER"."LSUPPLIERNUM")
    4 - access("LSUPPLIER"."BBASE"=:1)
    6 - access("LINVIN"."BBASE"=:1)
    The size of the tables are: LINVIN - 383.692 Rows, LSUPPLIER - 115.782 Rows
    As you can see, the one executed from our application is much faster than the one from the helper tool. So why does Oracle pick a completely different explain plan for the same statement? And why is the hash join much slower than the nested loop? Because, if I'm right, a nested loop should only be used when the tables are pretty small...
    I also tried to play with some parameters:
    I set optimizer_index_caching to 100 and optimizer_index_cost_adj to 30. I also changed optimizer_mode to FIRST_ROWS_100.
    I would really appreciate it if somebody could help me with this issue, because I'm really getting more and more distressed...
    Thanks in advance,
    Tobias

    tobiwan wrote:
    Hi again,
    Here is the answer:
    The problem, because I got two different explain plans, was that the external tool uses the NLS session parameters coming from the OS, which are in my case "de/DE".
    Within our application these parameters are changed to "en/US"!! So if I call in my external tool the java function Locale.setDefault(new Locale("en","US")) before connecting to the database, the explain plans are finally equal.

    That might explain why you got two different execution plans: one plan was obviously able to avoid a SORT ORDER BY operation, whereas the second plan required a SORT ORDER BY operation, obviously because of the different NLS_SORT settings. An index by default uses the NLS_SORT = 'binary' order, whereas ORDER BY obeys the NLS_SORT setting, which probably was set to 'GERMAN' in your "external tool" case. You can check the NLS_SESSION_PARAMETERS view to see your current NLS_SORT setting.
    For more information regarding this issue, see my blog note I've written about this some time ago:
    http://oracle-randolf.blogspot.com/2008/09/getting-first-rows-of-large-sorted.html
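    For example, a minimal sketch (not from the original reply) for comparing the relevant settings between the application session and the external tool's session:

    SELECT parameter, value
    FROM   nls_session_parameters
    WHERE  parameter IN ('NLS_SORT', 'NLS_COMP', 'NLS_LANGUAGE');

    -- To reproduce the "external tool" behaviour from SQL*Plus you could, for example:
    ALTER SESSION SET NLS_SORT = GERMAN;
    -- ...then re-run the EXPLAIN PLAN and check whether the SORT ORDER BY step reappears.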
    Now let me make a guess why you observe the behaviour that it takes so long if your result set is empty:
    The plan avoiding the SORT ORDER BY is able to return the first rows of the result set very quickly, but could take quite a while until all rows are processed, since it potentially requires a lot of iterations of the loop until everything has been processed. Your front end probably by default only displays the first n rows of the result set and therefore works fine with this execution plan.
    Now if the result set is empty, depending on your data, indexes and search criteria, Oracle has to work through all the data using the inefficient NESTED LOOP approach only to find out that no data has been found, and since your application attempts to fetch the first n records, but no records will be found, it has to wait until all data has been processed.
    You can try to reproduce this by deliberately fetching all records of a query that returns data and that uses the NESTED LOOP approach... It probably takes as long as in the case when no records are found.
    Note that you seem to use bind variables and 10g, therefore you might be interested that due to the "bind variable peeking" functionality you might potentially end up with "unstable" plans depending on the values "peeked" when the statement is parsed.
    For more information, see this comprehensive description of the issue:
    http://www.pythian.com/blogs/867/stabilize-oracle-10gs-bind-peeking-behaviour-by-cutting-histograms
    Note that this changes in 11g with the introduction of the "Adaptive Cursor Sharing".
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Filter(NULL IS NOT NULL) in Explain Plan ??

    Hi All,
    Can someone please explain what this explain plan statement means? I see filter(NULL IS NOT NULL) as the first filter step and could not figure out from googling why it shows up.
    My Query Used:
    EXPLAIN PLAN FOR
    MERGE INTO summary_bysrccd
    USING
    (SELECT LAST_DAY(TRUNC(to_timestamp(os.requestdatetime, 'yyyymmddhh24:mi:ss.ff4'))) AS SUMMARY_DATE,
    os.acctnum,
    ol.sourcecode AS sourcecode,
    ol.sourcename AS sourcename,
    count(1) cnt_articleview
    FROM article_views os , master_sourcecode ol
    where os.sourcecode = ol.sourcecode
    AND os.acctnum IS NOT NULL
    AND ol.sourcecode IS NOT NULL
    AND os.requestdatetime IS NOT NULL
    AND UPPER(os.success_ind) = 'S'
         AND (
              ('INCR'  = 'FULL'
              AND  (get_date_timestamp(os.requestdatetime) BETWEEN TO_DATE('23-AUG-2011 00:00:00','DD-MON-YYYY HH24:MI:SS') AND TO_DATE('27-AUG-2011 23:59:59','DD-MON-YYYY HH24:MI:SS')
              AND   os.entry_CreatedDate BETWEEN TO_DATE('22-AUG-2011 00:00:00','DD-MON-YYYY HH24:MI:SS') AND TO_DATE('28-AUG-2011 00:00:00','DD-MON-YYYY HH24:MI:SS')
              OR ('INCR' = 'FULL'
              AND os.entry_createddate BETWEEN TO_DATE('23-AUG-2011 00:00:00','DD-MON-YYYY HH24:MI:SS') AND TO_DATE('27-AUG-2011 23:59:59','DD-MON-YYYY HH24:MI:SS') )
    group by LAST_DAY(TRUNC(to_timestamp(os.requestdatetime, 'yyyymmddhh24:mi:ss.ff4'))),
    os.acctnum,ol.sourcecode,ol.sourcename) mrg_query
    ON (ods_av_summary_bysrccd.acctnum = mrg_query.acctnum AND
    ods_av_summary_bysrccd.summary_date=mrg_query.summary_date AND
    ods_av_summary_bysrccd.sourcecode=mrg_query.sourcecode)
    WHEN NOT MATCHED THEN
    INSERT (SUMMARY_date,ACCTNUM,SOURCECODE,SOURCENAME,CNT_ARTICLEVIEW,ENTRY_LASTUPDATEDDATE)
    VALUES(mrg_query.summary_date,mrg_query.acctnum,mrg_query.sourcecode,mrg_query.sourcename,
    mrg_query.cnt_articleview,sysdate)
    WHEN MATCHED THEN
    UPDATE SET ods_av_summary_bysrccd.cnt_articleview=
    CASE WHEN NVL('INCR','INCR') = 'FULL' THEN mrg_query.cnt_articleview
    ELSE ods_av_summary_bysrccd.cnt_articleview+mrg_query.cnt_articleview
    END,
    ods_av_summary_bysrccd.entry_lastupdateddate=sysdate;
    My Explain Plan:
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 268591246
    | Id  | Operation                                 | Name                      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | MERGE STATEMENT                           |                           |     1 |   456 |       |     3   (0)| 00:00:01 |       |       |
    |   1 |  MERGE                                    | ODS_AV_SUMMARY_BYSRCCD    |       |       |       |            |          |       |       |
    |   2 |   VIEW                                    |                           |       |       |       |            |          |       |       |
    |   3 |    NESTED LOOPS OUTER                     |                           |     1 |   417 |       |     3   (0)| 00:00:01 |       |       |
    |   4 |     VIEW                                  |                           |     1 |   360 |       |     5 (100)| 00:00:01 |       |       |
    |   5 |      SORT GROUP BY                        |                           |     1 |    73 |   595M|            |          |       |       |
    PLAN_TABLE_OUTPUT
    |*  6 |       FILTER                              |                           |       |       |       |            |          |       |       |
    |*  7 |        HASH JOIN                          |                           |  6975K|   485M|  3944K| 17594   (1)| 00:03:32 |       |       |
    |   8 |         TABLE ACCESS FULL                 | ODS_MASTER_SOURCECODE     | 84021 |  2953K|       |   273   (1)| 00:00:04 |       |       |
    |*  9 |         TABLE ACCESS BY GLOBAL INDEX ROWID| ODS_ARTICLE_VIEWS         |  7007K|   247M|       |   826   (0)| 00:00:10 |    33 |    33 |
    |* 10 |          INDEX FULL SCAN                  | IDX_AV_ACCTNUM            |    25M|       |       |    26   (0)| 00:00:01 |       |       |
    |  11 |     TABLE ACCESS BY GLOBAL INDEX ROWID    | ODS_AV_SUMMARY_BYSRCCD    |     1 |    57 |       |     3   (0)| 00:00:01 | ROWID | ROWID |
    |* 12 |      INDEX UNIQUE SCAN                    | ODS_AV_SUMMARY_BYSRCCD_PK |     1 |       |       |     2   (0)| 00:00:01 |       |       |
    Predicate Information (identified by operation id):
    PLAN_TABLE_OUTPUT
       6 - filter(NULL IS NOT NULL)
       7 - access("OS"."SOURCECODE"="OL"."SOURCECODE")
       9 - filter("OS"."REQUESTDATETIME" IS NOT NULL AND "OS"."ENTRY_CREATEDDATE">=TO_DATE(' 2011-08-23 00:00:00', 'syyyy-mm-dd
                  hh24:mi:ss') AND "OS"."ENTRY_CREATEDDATE"<=TO_DATE(' 2011-08-27 23:59:59', 'syyyy-mm-dd hh24:mi:ss') AND UPPER("OS"."SUCCESS_IND")='S')
      10 - filter("OS"."ACCTNUM" IS NOT NULL)
      12 - access("ODS_AV_SUMMARY_BYSRCCD"."SUMMARY_DATE"(+)=INTERNAL_FUNCTION("MRG_QUERY"."SUMMARY_DATE") AND
                  "ODS_AV_SUMMARY_BYSRCCD"."ACCTNUM"(+)="MRG_QUERY"."ACCTNUM" AND "ODS_AV_SUMMARY_BYSRCCD"."SOURCECODE"(+)="MRG_QUERY"."SOURCECODE")
    Note
    PLAN_TABLE_OUTPUT
       - dynamic sampling used for this statement

    Hi Toon,
    Thanks for the quick resolution. I went back and verified the table's colunm details and it has a NOT NULL constraint.
    Regards,
    Chaitanya
    P.S.: Is it OK if I ask you for some help regarding a production issue I have been encountering for the last 15 days but have no clear resolution yet about what the reason is (the said issue is neither uniform nor regular - it affects some modules and happens on some days - I shall give the full details if you are willing to have a look)? I shall start a new post or email you directly, at your convenience.
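    For reference, the filter(NULL IS NOT NULL) step is how the optimizer marks a branch whose predicates can never be true, so that branch returns no rows; in the MERGE above, both sides of the OR require the literal comparison 'INCR' = 'FULL', which is always false. A minimal sketch (not from the thread) that reproduces the same filter line:

    EXPLAIN PLAN FOR
    SELECT * FROM dual WHERE 'INCR' = 'FULL';

    SELECT * FROM TABLE(dbms_xplan.display);

    -- The plan typically shows a FILTER operation whose predicate is
    --   filter(NULL IS NOT NULL)
    -- i.e. the optimizer has folded the constantly false literal predicate.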

  • Oracle not using its own explain plan

    When I run a simple select query on an indexed column on a large (30 million records) table, Oracle creates a plan using the indexed column at a cost of 4. However, what it actually does is a table scan (I can see this in the 'Long Operations' tab in OEM).
    The funny thing is that I have the same query in an ADO application and when the query is run from there, the same plan is created but no table scan is done - and the query returns in less than a second. However, with the table scan it is over a minute.
    When run through SQL plus Oracle creates a plan including the table scan at a cost of 19030.
    In another (dot net) application I used the: "Alter session set optimizer_index_caching=100" and "Alter session set optimizer_index_cost_adj=10" to try to force the optimizer to use the index. It creates the expected plan, but still does the table scan.
    The query is in the form of:
    "Select * from tab where indexedcol = something"
    I'm using Oracle 9i 9.2.0.1.0.
    Any ideas, as I'm completely at a loss?

    Hello
    It sounds to me like this has something to do with bind variable peeking, which was introduced in 9i. If the predicate is
    indexedcolumn = :bind_variable
    the first time the query is parsed by Oracle, it will "peek" at the value in the bind variable, see what it is, and generate an execution plan based on this. That same plan will be used for matching SQL.
    If you use a literal, it will generate the plan based on that, and will generate a separate plan for each literal you use (depending on the value of the cursor_sharing initialisation parameter).
    This can cause a difference between the execution plan seen when issuing EXPLAIN PLAN FOR and the actual execution plan used when the query is run.
    Have a look at the following example:
    tylerd@DEV2> CREATE TABLE dt_test_bvpeek(id number, col1 number)
      2  /
    Table created.
    Elapsed: 00:00:00.14
    tylerd@DEV2> INSERT
      2  INTO
      3      dt_test_bvpeek
      4  SELECT
      5      rownum,
      6      CASE
      7          WHEN MOD(rownum, 5) IN (0,1,2,3) THEN
      8              1
      9          ELSE
    10              MOD(rownum, 5)
    11          END
    13  FROM
    14      dual
    15  CONNECT BY
    16      LEVEL <= 100000
    17  /
    100000 rows created.
    Elapsed: 00:00:00.81
    tylerd@DEV2> select count(*), col1 from dt_test_bvpeek group by col1
      2  /
      COUNT(*)       COL1
         80000          1
         20000          4
    2 rows selected.
    Elapsed: 00:00:00.09
    tylerd@DEV2> CREATE INDEX dt_test_bvpeek_i1 ON dt_test_bvpeek(col1)
      2  /
    Index created.
    Elapsed: 00:00:00.40
    tylerd@DEV2> EXEC dbms_stats.gather_table_stats( ownname=>USER,-
    tabname=>'DT_TEST_BVPEEK',-
    method_opt=>'FOR ALL INDEXED COLUMNS SIZE 254',-
    cascade=>TRUE -
    );
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.73
    tylerd@DEV2> EXPLAIN PLAN FOR
      2  SELECT
      3      *
      4  FROM
      5      dt_test_bvpeek
      6  WHERE
      7      col1 = 1
      8  /
    Explained.
    Elapsed: 00:00:00.01
    tylerd@DEV2> SELECT * FROM TABLE(DBMS_XPLAN.display)
      2  /
    PLAN_TABLE_OUTPUT
    Plan hash value: 2611346395
    | Id  | Operation         | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |                | 78728 |   538K|    82  (52)| 00:00:01 |
    |*  1 |  TABLE ACCESS FULL| DT_TEST_BVPEEK | 78728 |   538K|    82  (52)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("COL1"=1)
    13 rows selected.
    Elapsed: 00:00:00.06
    The execution plan for col1=1 was chosen because Oracle was able to see that, based on the statistics, col1=1 would result in most of the rows from the table being returned.
    tylerd@DEV2> EXPLAIN PLAN FOR
      2  SELECT
      3      *
      4  FROM
      5      dt_test_bvpeek
      6  WHERE
      7      col1 = 4
      8  /
    Explained.
    Elapsed: 00:00:00.00
    tylerd@DEV2> SELECT * FROM TABLE(DBMS_XPLAN.display)
      2  /
    PLAN_TABLE_OUTPUT
    Plan hash value: 3223879139
    | Id  | Operation                   | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |                   | 21027 |   143K|    74  (21)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| DT_TEST_BVPEEK    | 21027 |   143K|    74  (21)| 00:00:01 |
    |*  2 |   INDEX RANGE SCAN          | DT_TEST_BVPEEK_I1 | 21077 |       |    29  (28)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("COL1"=4)
    14 rows selected.
    Elapsed: 00:00:00.04
    This time, the optimiser was able to see that col1=4 would result in far fewer rows, so it chose to use an index. Look what happens, however, when we use a bind variable with EXPLAIN PLAN FOR - especially the number of rows the optimiser estimates to be returned from the table.
    tylerd@DEV2> var an_col1 NUMBER
    tylerd@DEV2> exec :an_col1:=1;
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.00
    tylerd@DEV2>
    tylerd@DEV2> EXPLAIN PLAN FOR
      2  SELECT
      3      *
      4  FROM
      5      dt_test_bvpeek
      6  WHERE
      7      col1 = :an_col1
      8  /
    Explained.
    Elapsed: 00:00:00.01
    tylerd@DEV2> SELECT * FROM TABLE(DBMS_XPLAN.display)
      2  /
    PLAN_TABLE_OUTPUT
    Plan hash value: 2611346395
    | Id  | Operation         | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |                | 49882 |   340K|   100  (60)| 00:00:01 |
    |*  1 |  TABLE ACCESS FULL| DT_TEST_BVPEEK | 49882 |   340K|   100  (60)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("COL1"=TO_NUMBER(:AN_COL1))
    13 rows selected.
    Elapsed: 00:00:00.04
    tylerd@DEV2>
    tylerd@DEV2> exec :an_col1:=4;
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.01
    tylerd@DEV2>
    tylerd@DEV2> EXPLAIN PLAN FOR
      2  SELECT
      3      *
      4  FROM
      5      dt_test_bvpeek
      6  WHERE
      7      col1 = :an_col1
      8  /
    Explained.
    Elapsed: 00:00:00.01
    tylerd@DEV2> SELECT * FROM TABLE(DBMS_XPLAN.display)
      2  /
    PLAN_TABLE_OUTPUT
    Plan hash value: 2611346395
    | Id  | Operation         | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |                | 49882 |   340K|   100  (60)| 00:00:01 |
    |*  1 |  TABLE ACCESS FULL| DT_TEST_BVPEEK | 49882 |   340K|   100  (60)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("COL1"=TO_NUMBER(:AN_COL1))
    13 rows selected.
    Elapsed: 00:00:00.07
    For both values of the bind variable, the optimiser has no idea what the value will be, so it has to make a calculation based on a formula, which results in it estimating that the query will return roughly half of the rows in the table, and so it chooses a full scan.
    Now when we actually run the query, the optimiser can take advantage of bind variable peeking and have a look at the value the first time round and base the execution plan on that:
    tylerd@DEV2> exec :an_col1:=1;
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.00
    tylerd@DEV2> SELECT
      2      *
      3  FROM
      4      dt_test_bvpeek
      5  WHERE
      6      col1 = :an_col1
      7  /
    80000 rows selected.
    Elapsed: 00:00:10.98
    tylerd@DEV2> SELECT prev_sql_id FROM v$session WHERE audsid=SYS_CONTEXT('USERENV','SESSIONID')
      2  /
    PREV_SQL_ID
    9t52uyyq67211
    1 row selected.
    Elapsed: 00:00:00.00
    tylerd@DEV2> SELECT
      2      operation,
      3      options,
      4      object_name
      5  FROM
      6      v$sql_plan
      7  WHERE
      8      sql_id = '9t52uyyq67211'
      9  /
    OPERATION                      OPTIONS                        OBJECT_NAME
    SELECT STATEMENT
    TABLE ACCESS                   FULL                           DT_TEST_BVPEEK
    2 rows selected.
    Elapsed: 00:00:00.03
    It saw that the bind variable value was 1 and that this would return most of the rows in the table, so it chose a full scan.
    tylerd@DEV2> exec :an_col1:=4
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.00
    tylerd@DEV2> SELECT
      2      *
      3  FROM
      4      dt_test_bvpeek
      5  WHERE
      6      col1 = :an_col1
      7  /
    20000 rows selected.
    Elapsed: 00:00:03.50
    tylerd@DEV2> SELECT prev_sql_id FROM v$session WHERE audsid=SYS_CONTEXT('USERENV','SESSIONID')
      2  /
    PREV_SQL_ID
    9t52uyyq67211
    1 row selected.
    Elapsed: 00:00:00.00
    tylerd@DEV2> SELECT
      2      operation,
      3      options,
      4      object_name
      5  FROM
      6      v$sql_plan
      7  WHERE
      8      sql_id = '9t52uyyq67211'
      9  /
    OPERATION                      OPTIONS                        OBJECT_NAME
    SELECT STATEMENT
    TABLE ACCESS                   FULL                           DT_TEST_BVPEEK
    2 rows selected.
    Elapsed: 00:00:00.01
    Even though the value of the bind variable changed, the optimiser saw that it already had a cached version of the sql statement along with an execution plan, so it used that rather than regenerating the plan. We can check the reverse of this by causing the statement to be invalidated and re-parsed - there are lots of ways, but I'm just going to rename the table:
    Elapsed: 00:00:00.03
    tylerd@DEV2> alter table dt_test_bvpeek rename to dt_test_bvpeek1
      2  /
    Table altered.
    Elapsed: 00:00:00.01
    tylerd@DEV2>
    20000 rows selected.
    Elapsed: 00:00:04.81
    tylerd@DEV2> SELECT prev_sql_id FROM v$session WHERE audsid=SYS_CONTEXT('USERENV','SESSIONID')
      2  /
    PREV_SQL_ID
    6ztnn4fyt6y5h
    1 row selected.
    Elapsed: 00:00:00.00
    tylerd@DEV2> SELECT
      2      operation,
      3      options,
      4      object_name
      5  FROM
      6      v$sql_plan
      7  WHERE
      8      sql_id = '6ztnn4fyt6y5h'
      9  /
    OPERATION                      OPTIONS                        OBJECT_NAME
    SELECT STATEMENT
    TABLE ACCESS                   BY INDEX ROWID                 DT_TEST_BVPEEK1
    INDEX                          RANGE SCAN                     DT_TEST_BVPEEK_I1
    3 rows selected.
    80000 rows selected.
    Elapsed: 00:00:10.61
    tylerd@DEV2> SELECT prev_sql_id FROM v$session WHERE audsid=SYS_CONTEXT('USERENV','SESSIONID')
      2  /
    PREV_SQL_ID
    6ztnn4fyt6y5h
    1 row selected.
    Elapsed: 00:00:00.01
    tylerd@DEV2> SELECT
      2      operation,
      3      options,
      4      object_name
      5  FROM
      6      v$sql_plan
      7  WHERE
      8      sql_id = '6ztnn4fyt6y5h'
      9  /
    OPERATION                      OPTIONS                        OBJECT_NAME
    SELECT STATEMENT
    TABLE ACCESS                   BY INDEX ROWID                 DT_TEST_BVPEEK1
    INDEX                          RANGE SCAN                     DT_TEST_BVPEEK_I1
    3 rows selected.
    This time round, the optimiser peeked at the bind variable the first time the statement was executed and found it to be 4, so it based the execution plan on that and chose an index range scan. When the statement was executed again, it used the plan it had already executed.
    HTH
    David

  • Hi when I run this in Toad for explain plan it's showing an error ORA-22905

    Hi, when I run this query in Toad for explain plan it shows the error ORA-22905: cannot access rows from a non-nested table item. Can anyone help me out?
    SELECT
    DISTINCT SERVICE_TBL.SERVICE_ID , SERVICE_TBL.CON_TYPE, SERVICE_TBL.S_DESC || '(' || SERVICE_TBL.CON_TYPE || ')' AS SERVICE_DESC ,SERVICE_TBL.CON_STAT
    FROM
    TABLE(:B1 )SERVICE_TBL
    WHERE
    CON_NUM = :B2
    thanks

    Note the name of this forum is SQL Developer *(Not for general SQL/PLSQL questions)* (so for issues with the SQL Developer tool). Please post these questions under the dedicated SQL And PL/SQL forum.
    Regards,
    K.
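    For what it's worth, ORA-22905 in this situation usually means the parser cannot work out the collection type behind TABLE(:B1), and the common workaround is to CAST the bind to the concrete SQL collection type. A minimal sketch, where the type name service_tab_type is purely hypothetical:

    SELECT DISTINCT service_tbl.service_id,
           service_tbl.con_type,
           service_tbl.s_desc || '(' || service_tbl.con_type || ')' AS service_desc,
           service_tbl.con_stat
    FROM   TABLE(CAST(:B1 AS service_tab_type)) service_tbl   -- hypothetical collection type
    WHERE  con_num = :B2;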

  • Oem explain plan produced doesn't correspond to explain plan with tkprof

    Hello all,
    I am running OEM on a 10.2.0.3 database and I have a very slow query with cognos 8 BI on my data warehouse.
    I went to the dbconsole through OEM, connected to the target database, looked at the query execution and then got an explain plan.
    Then I did a trace file and ran it through tkprof.
    When I look at both produced explain plans, I can see the tree looks the same except for the corresponding values. In OEM, it says I am going through 18000 rows and in the tkprof result it says more like millions of records.
    Has anybody had this kind of result?
    Should I have more confidence in TKPROF than in OEM?
    It is very important to me since I am being challenged by an external DBA.

    I would recommend you to get Christian Antognini's book "Troubleshooting Oracle Performance" (http://www.antognini.ch/top/), which explains everything you need to know when analyzing Oracle SQL performance and explain plans.
    If you properly trace your SQL statement, you will get "STAT" lines in the trace file. These stat lines show you the actual number of rows processed per row source operation. Explain plan by default only shows you the "estimated" row counts for the row source operations, no matter whether you use "explain=" in the tkprof call or OEM to get the explain plan. Tkprof reads the STAT lines from the trace and outputs a section which is similar to an execution plan but contains the "real" number of rows.
    However, if you want to troubleshoot Oracle Performance, I would recommend you to run the statement with the hint /*+ GATHER_PLAN_STATISTICS */ or set statistics_level=ALL in your session (not database-wide!).
    If you do, you can get an excellent execution plan with DBMS_XPLAN.DISPLAY_CURSOR containing both estimated rows as well as actual rows combined with the "number of starts" for e.g. nested loop joins.
    Get the book, read it, and you will be able to discuss this issue with your external dba in a professional way. ;-)
    Regards,
    Martin
    www.ora-solutions.net
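    A minimal sketch of the approach Martin describes (the format options are common choices, not quoted from the post or the book):

    -- Run the statement of interest with the hint (shown here on a trivial query):
    SELECT /*+ GATHER_PLAN_STATISTICS */ COUNT(*) FROM all_objects;

    -- Estimated vs. actual rows for the last cursor executed in this session:
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));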

  • Missing images in explain plan export 2.1.1

    Hi,
    the export of explain plan is creating an images folder, but without images in there so the html result is very very hard to read, without the images. XP SP3.
    Juergen

    I'm seeing the same issue. If one of the devs doesn't pick it up in here, you may want to submit an SR.

  • How to proceed further once the explain plan and trace files are generated?

    Hi Friends,
    I need to improve the performance of one of the views that I am working on.
    As suggested in the thread - http://forums.oracle.com/forums/thread.jspa?threadID=863295&tstart=0 , I have generated the explain plan and the trace file.
    From the explain plan, we can see the expensive operations for the query.
    Can anyone please tell how to proceed further from here, i.e. how to make these expensive operations less expensive?
    For example: a FULL TABLE SCAN might be an expensive operation when the table has indexes. In such cases, how can we avoid such operations to make the query faster?
    Regards,
    Sreekanth Munagala.


  • Explain plan different for same query

    Hi all,
    I have a query, which basically selects some columns from a remote database view. The query is as follows:
    select * from tab1@remotedb, tab2@remotedb
    where tab1.cash_id = tab2.id
    and tab1.date = '01-JAN-2003'
    and tab2.country_code = 'GB';
    Now, i am working on two environments, one is production and other is development. Production environment has following specification:
    1. Remotedb = Oracle9i, Linux OS
    2. Database on which query is running = Oracle10g, Linux OS
    Development environment has following specification:
    1. Remotedb = Oracle10g, Windows OS
    2. Database on which query is running = Oracle10g, Linux OS
    Both databases in development and production environments are on different machines.
    When I execute the above query on production, I see full table scans on both tables in the execution plan (TOAD), but when I execute the query in development, I see that both remote database tables are using indexes.
    Why am I getting different execution plans on the two databases? Is there any difference in user rights/privileges, or a difference in statistics between the two databases? I have checked the statistics for both tables on the Production and Development databases; they are up to date.
    This issue is creating a performance disaster in our Production system. Any kind of help or knowledge sharing is appreciated.
    Thank you and Best Regards.

    We ran into a similar situation yesterday morning, though our implementation was easier than yours. Different plans in development and production, though both systems were 10gR2 at the time. Production was doing a Merge Join Cartesian (!) instead of nested loop joins. Our DBA figured out that the production stats had been locked for some tables, preventing stats refresh; she unlocked them and re-analyzed, which fixed our problem.
    Of some interest was discovering that I got different execution plans from the same UPDATE via EXPLAIN PLAN and SQL*PLUS AUTOTRACE in development. Issue appears to have been bind peeking. Converting bind variables to constants yielded the AUTOTRACE plan, as did turning bind peeking off while using the bind variables. CURSOR_SHARING was set to EXACT too.
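    In case it helps the original poster, a minimal sketch of checking for and unlocking locked statistics as described above (the owner and table names are placeholders):

    -- Find tables whose statistics are locked:
    SELECT owner, table_name, stattype_locked
    FROM   dba_tab_statistics
    WHERE  owner = 'APP_OWNER'
    AND    stattype_locked IS NOT NULL;

    -- Unlock and re-gather for an affected table:
    EXEC DBMS_STATS.UNLOCK_TABLE_STATS('APP_OWNER', 'TAB1');
    EXEC DBMS_STATS.GATHER_TABLE_STATS('APP_OWNER', 'TAB1', cascade => TRUE);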

  • Explain plan shows 300T of TempSpc and 999 hours - tuning request

    Hi,
    I have a query which obtains summary statistics. There is an items table which contains the dictionary of items which can be recorded. There is an events table which contains the item ID, timestamp and a value. The query summarizes the data for each item. e.g. Mean, stddev, sample values, length. I have trimmed the query down as simply as possible and am having a problem with large temp space and runtime estimates in the explain plan. Here is the query:
    WITH ChartItems AS (
        SELECT
          ci.itemid,
          ci.label,
          ci.category,
          ci.description,
          COUNT(*),
          COUNT(distinct subject_id)
        FROM mimic2v26.d_chartitems ci
        JOIN mimic2v26.chartevents ce ON ce.itemid = ci.itemid
        --WHERE ci.itemid = 51
        GROUP BY ci.itemid,
                 ci.label,
                 ci.category,
             ci.description
    )
    --select * from ChartItems;
    , RawData AS (
        SELECT DISTINCT
          ci.itemid,
          ci.label,
          ci.category,
          ci.description,
          last_value(ce.value1) ignore nulls over (partition BY ci.itemid order by
          ce.charttime ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS
          value1_last
        FROM ChartItems ci
        JOIN mimic2v26.chartevents ce ON ce.itemid = ci.itemid
    )
    select * from RawData;
    Which gives this explain plan:
    Plan hash value: 642453121
    | Id  | Operation                    | Name           | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT             |                |  4811G|  1238T|       |  3686M (13)|999:59:59 |
    |   1 |  VIEW                        |                |  4811G|  1238T|       |  3686M (13)|999:59:59 |
    |   2 |   HASH UNIQUE                |                |  4811G|   258T|       |  3686M (13)|999:59:59 |
    |   3 |    WINDOW BUFFER             |                |  4811G|   258T|       |  3686M (13)|999:59:59 |
    |   4 |     SORT GROUP BY            |                |  4811G|   258T|   317T|  3686M (13)|999:59:59 |
    |*  5 |      HASH JOIN               |                |  4811G|   258T|  4943M|    25M (90)| 85:14:28 |
    |   6 |       VIEW                   | VW_DAG_0       |   152M|  3199M|       |  1366K  (1)| 04:33:18 |
    |   7 |        HASH GROUP BY         |                |   152M|  4216M|  5839M|  1366K  (1)| 04:33:18 |
    |*  8 |         HASH JOIN            |                |   152M|  4216M|       |   147K  (2)| 00:29:36 |
    |   9 |          TABLE ACCESS FULL   | D_CHARTITEMS   |  4832 | 96640 |       |     7   (0)| 00:00:01 |
    |  10 |          INDEX FAST FULL SCAN| CHARTEVENTS_O2 |   196M|  1683M|       |   147K  (1)| 00:29:25 |
    |  11 |       TABLE ACCESS FULL      | CHARTEVENTS    |   196M|  6922M|       |   616K  (1)| 02:03:19 |
    Predicate Information (identified by operation id):
        5 - access("CE"."ITEMID"="ITEM_2")
    300T of temp space! Ouch.
    TKPROF output (I let the query run for a short while. I let it run for ages earlier, but wasn't tracing the session. Should I let it run for longer?):
    TKPROF: Release 11.2.0.3.0 - Development on Tue Jul 10 16:54:27 2012
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    Trace file: /oracle/11gr2/app/oracle/diag/rdbms/mimic2/mimic2/trace/mimic2_ora_6507.trc
    Sort options: default
    count    = number of times OCI procedure was executed
    cpu      = cpu time in seconds executing
    elapsed  = elapsed time in seconds executing
    disk     = number of physical reads of buffers from disk
    query    = number of buffers gotten for consistent read
    current  = number of buffers gotten in current mode (usually for update)
    rows     = number of rows processed by the fetch or execute call
    WITH ChartItems AS (
        SELECT
          ci.itemid,
          ci.label,
          ci.category,
          ci.description,
          COUNT(*),
          COUNT(distinct subject_id)
        FROM mimic2v26.d_chartitems ci
        JOIN mimic2v26.chartevents ce ON ce.itemid = ci.itemid
        GROUP BY ci.itemid,
                 ci.label,
                 ci.category,
                 ci.description
    , RawData AS (
        SELECT DISTINCT
          ci.itemid,                                                                                                                                                                                           
          ci.label,                                                                                                                                                                                            
          ci.category,                                                                                                                                                                                         
          ci.description,                                                                                                                                                                                      
          last_value(ce.value1) ignore nulls over (partition BY ci.itemid order by                                                                                                                             
          ce.charttime ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS                                                                                                                            
          value1_last                                                                                                                                                                                          
        FROM ChartItems ci                                                                                                                                                                                     
        JOIN mimic2v26.chartevents ce ON ce.itemid = ci.itemid                                                                                                                                                 
    select * from RawData                                                                                                                                                                                      
    call     count       cpu    elapsed       disk      query    current        rows                                                                                                                           
    Parse        1      0.02       0.02          0          0          0           0                                                                                                                           
    Execute      1      0.00       0.00          0          0          0           0                                                                                                                           
    Fetch        1    582.40     712.23     705238     737351          0           0                                                                                                                           
    total        3    582.43     712.25     705238     737351          0           0                                                                                                                           
    Misses in library cache during parse: 1                                                                                                                                                                    
    Optimizer mode: ALL_ROWS                                                                                                                                                                                   
    Parsing user id: 85 
    Number of plan statistics captured: 1
    Rows (1st) Rows (avg) Rows (max)  Row Source Operation
             0          0          0  VIEW  (cr=0 pr=0 pw=0 time=184 us cost=3686387254 size=1361581801333882 card=4811243114254)
             0          0          0   HASH UNIQUE (cr=0 pr=0 pw=0 time=180 us cost=3686387254 size=283863343740986 card=4811243114254)
             0          0          0    WINDOW BUFFER (cr=0 pr=0 pw=0 time=74 us cost=3686387254 size=283863343740986 card=4811243114254)
             0          0          0     SORT GROUP BY (cr=0 pr=0 pw=0 time=59 us cost=3686387254 size=283863343740986 card=4811243114254)
    178073889  178073889  178073889      HASH JOIN  (cr=737351 pr=705238 pw=124635 time=476372451 us cost=25572251 size=283863343740986 card=4811243114254)
       6211631    6211631    6211631       VIEW  VW_DAG_0 (cr=613768 pr=581413 pw=35805 time=286546567 us cost=1366486 size=3354399576 card=152472708)
       6211631    6211631    6211631        HASH GROUP BY (cr=613768 pr=581413 pw=35805 time=281271878 us cost=1366486 size=4421708532 card=152472708)
    196182740  196182740  196182740         HASH JOIN  (cr=613768 pr=545608 pw=0 time=557485047 us cost=147987 size=4421708532 card=152472708)
          4832       4832       4832          TABLE ACCESS FULL D_CHARTITEMS (cr=18 pr=16 pw=0 time=13666 us cost=7 size=96640 card=4832)
    196182740  196182740  196182740          INDEX FAST FULL SCAN CHARTEVENTS_O2 (cr=613750 pr=545592 pw=0 time=148428499 us cost=147046 size=1765644660 card=196182740)(object id 96501)
      10683044   10683044   10683044       TABLE ACCESS FULL CHARTEVENTS (cr=123583 pr=123825 pw=0 time=25175510 us cost=616507 size=7258761380 card=196182740)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       1        0.00          0.00
      db file sequential read                         2        0.01          0.01
      db file scattered read                          3        0.02          0.03
      direct path read                             5265        0.75         77.13
      asynch descriptor resize                        1        0.00          0.00
      direct path write temp                      15077        0.71         45.54
      direct path read temp                        2387        0.29         13.54
      SQL*Net break/reset to client                   1        0.66          0.66
      SQL*Net message from client                     1        0.00          0.00
    SQL ID: 7jby2dxrpkkm9 Plan Hash: 1388734953
    select SYS_CONTEXT('USERENV','SESSIONID')
    from
    dual
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.01          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      0.00       0.00          0          0          0           1
    total        3      0.00       0.01          0          0          0           1
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 85 
    Number of plan statistics captured: 1
    Rows (1st) Rows (avg) Rows (max)  Row Source Operation
             1          1          1  FAST DUAL  (cr=0 pr=0 pw=0 time=4 us cost=2 size=0 card=1)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       1        0.00          0.00
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        2      0.02       0.04          0          0          0           0
    Execute      2      0.00       0.00          0          0          0           0
    Fetch        2    582.40     712.23     705238     737351          0           1
    total        6    582.43     712.27     705238     737351          0           1
    Misses in library cache during parse: 2
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       4        0.00          0.00
      db file sequential read                         2        0.01          0.01
      db file scattered read                          3        0.02          0.03
      direct path read                             5265        0.75         77.13
      asynch descriptor resize                        1        0.00          0.00
      direct path write temp                      15077        0.71         45.54
      direct path read temp                        2387        0.29         13.54
      SQL*Net break/reset to client                   1        0.66          0.66
      SQL*Net message from client                     3       14.99         15.01
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        0      0.00       0.00          0          0          0           0
    Execute      0      0.00       0.00          0          0          0           0
    Fetch        0      0.00       0.00          0          0          0           0
    total        0      0.00       0.00          0          0          0           0
    Misses in library cache during parse: 0
        2  user  SQL statements in session.
        0  internal SQL statements in session.
        2  SQL statements in session.
    Trace file: /oracle/11gr2/app/oracle/diag/rdbms/mimic2/mimic2/trace/mimic2_ora_6507.trc
    Trace file compatibility: 11.1.0.7
    Sort options: default
           1  session in tracefile.
           2  user  SQL statements in trace file.
           0  internal SQL statements in trace file.
           2  SQL statements in trace file.
           2  unique SQL statements in trace file.
       24245  lines in trace file.
          728  elapsed seconds in trace file.
    Optimizer parameters:
    NAME                                               TYPE        VALUE                                                                                               
    optimizer_capture_sql_plan_baselines               boolean     FALSE                                                                                               
    optimizer_dynamic_sampling                         integer     2                                                                                                   
    optimizer_features_enable                          string      11.2.0.3                                                                                            
    optimizer_index_caching                            integer     0                                                                                                   
    optimizer_index_cost_adj                           integer     100                                                                                                 
    optimizer_mode                                     string      ALL_ROWS                                                                                            
    optimizer_secure_view_merging                      boolean     TRUE                                                                                                
    optimizer_use_invisible_indexes                    boolean     FALSE                                                                                               
    optimizer_use_pending_statistics                   boolean     FALSE                                                                                               
    optimizer_use_sql_plan_baselines                   boolean     TRUE
    Can anyone help me figure out what's wrong?
    This is a data warehouse, with up to date statistics. The DB is version 11.2.0.3.0
    Thanks,
    Dan

    rp0428 wrote:
    > I trimmed the query down to the simplest that I could which still exhibited the problem
    The DISTINCT isn't the issue. The issue is that the query you posted isn't necessary and doesn't appear to be represented in the plan that you posted.
    That makes it hard to tell where the cardinality misestimates are coming from. How many records does the actual first query (the one with the group by) return? You have a SELECT * commented out - how many rows?

    No, it's not necessary, but then it isn't the full query. The estimates are still wrong when I pass the count columns through to the second query. The full query contains many more analytic functions in the second query, and some additions to the first. It could probably be cleaned up a little, but the current structure makes logical sense. Should I change the aggregates to analytics and move them into the second query? In any case, there definitely seems to be something wrong with the plan.
    I don't know what you mean when you say "doesn't appear to be represented in the plan that you posted".
    I just checked the first query and while the temp space has dropped to a reasonable amount, the number of rows in the hash join is still way off (I didn't notice before because I was looking at the temp space). The rows for d_chartitems and the index on chartevents are correct:
    Plan hash value: 1249235674
    | Id  | Operation               | Name           | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT        |                |   152M|    35G|       |  1366K  (1)| 04:33:18 |
    |   1 |  VIEW                   |                |   152M|    35G|       |  1366K  (1)| 04:33:18 |
    |   2 |   WINDOW SORT           |                |   152M|  4216M|  5839M|  1366K  (1)| 04:33:18 |
    |*  3 |    HASH JOIN            |                |   152M|  4216M|       |   147K  (2)| 00:29:36 |
    |   4 |     TABLE ACCESS FULL   | D_CHARTITEMS   |  4832 | 96640 |       |     7   (0)| 00:00:01 |
    |   5 |     INDEX FAST FULL SCAN| CHARTEVENTS_O2 |   196M|  1683M|       |   147K  (1)| 00:29:25 |
    Predicate Information (identified by operation id):
       3 - access("CE"."ITEMID"="CI"."ITEMID")

  • Explain Plan from TKPROF trace file.

    Hello,
    My procedure is taking time, so I have enabled tracing in the procedure to find the explain plan of a particular query.
    Tracing is turned on using the statements below.
       EXECUTE IMMEDIATE 'ALTER SESSION SET SQL_TRACE = TRUE';
       EXECUTE IMMEDIATE 'ALTER SESSION SET TIMED_STATISTICS = TRUE';
    Now, to get the explain plan from TKPROF, I have used the statement below, but for some queries I find an explain plan and in some cases I cannot find one.
    tkprof mf_ora_23773.trc mf_ora_23773.txt explain =abc/abc
    Can you please help me analyze where I am going wrong?
    Thanks.

    First of all, you should best avoid using the explain= clause on the tkprof command line.
    This will run explain plan on the statement in the trace file, and that explain plan can even be wrong, as there is no information on datatypes in the trace file.
    The real explain plan data is flushed to the trace file, when the program issues commit or rollback. Oracle always issues an implicit commit when the program disconnects, so when you run the program to completion you should have explain plan output in your trace files.
    You won't get explain plan output if you don't have access to the objects in the SQL statement. Also recursive SQL won't produce explain plan result.
    One would need to see part of your trace file to verify your assertion the explain clause doesn't always work.
    Sybrand Bakker
    Senior Oracle DBA
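    A minimal sketch of a tkprof invocation that relies on the row source (STAT) lines instead of the explain= clause (the sort options are just a common choice, not from the thread):

    tkprof mf_ora_23773.trc mf_ora_23773.txt sys=no sort=prsela,exeela,fchela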

  • Explain Plan not working with 919, was working with 804

    Hi,
    Explain Plan (F6) was working correctly with Raptor 804 and is not working any more on 919. This is exactly the same query (select * from mytable). The server is 9.2.0.5.0. The result with 919 is "Invalid Column Name".
    Regards
    Eric

    I get this too.
    The plan table structure is
    CREATE TABLE "ELX"."PLAN_TABLE"
       (     "STATEMENT_ID" VARCHAR2(30),
         "TIMESTAMP" DATE,
         "REMARKS" VARCHAR2(80),
         "OPERATION" VARCHAR2(30),
         "OPTIONS" VARCHAR2(30),
         "OBJECT_NODE" VARCHAR2(128),
         "OBJECT_OWNER" VARCHAR2(30),
         "OBJECT_NAME" VARCHAR2(30),
         "OBJECT_INSTANCE" NUMBER,
         "OBJECT_TYPE" VARCHAR2(30),
         "OPTIMIZER" VARCHAR2(255),
         "SEARCH_COLUMNS" NUMBER,
         "ID" NUMBER,
         "PARENT_ID" NUMBER,
         "POSITION" NUMBER,
         "COST" NUMBER,
         "CARDINALITY" NUMBER,
         "BYTES" NUMBER,
         "OTHER_TAG" VARCHAR2(255),
         "PARTITION_START" VARCHAR2(255),
         "PARTITION_STOP" VARCHAR2(255),
         "PARTITION_ID" NUMBER,
         "OTHER" LONG,
         "DISTRIBUTION" VARCHAR2(30)
       )
    And Raptor seems to be issuing this
    PARSING IN CURSOR #2 len=82 dep=0 uid=80 oct=50 lid=80 tim=3808125826 hv=1796199517 ad='17ebc98c'
    EXPLAIN PLAN SET STATEMENT_ID ='4363' INTO PLAN_TABLE FOR select * from collection
    END OF STMT
    PARSE #2:c=0,e=742,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=4,tim=3808125815
    BINDS #2:
    =====================
    PARSE ERROR #3:len=480 dep=1 uid=80 oct=2 lid=80 tim=3808128369 err=904
    insert into "PLAN_TABLE" (statement_id, timestamp, operation, options,object_node, object_owner, object_name, object_instance, object_type,search_columns, id, parent_id, position, other,optimizer, cost, cardinality, bytes, other_tag, partition_start, partition_stop, partition_id, distribution, cpu_cost, io_cost, temp_space, access_predicates, filter_predicates ) values(:1,SYSDATE,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,:16,:17,:18,:19,:20,:21,:22,:23,:24,:25,:26,:27)
    =====================
    cpu_cost, io_cost, temp_space, access_predicates, filter_predicates seem to be invalid

  • Explain plan changing for the same sql

    Hi All,
    In a E Business suite application, we have the 10.2.0.4 Database.
    One of the programs runs a select statement which uses a different explain plan about once a month, which causes the program to run for a longer time.
    Ex: When it uses index A, it runs fine. When it uses index B, it runs for a longer time.
    Can you please advise on the possible reasons for the same SQL to sometimes choose index B instead of index A?
    Thanks,
    Rakesh

    It could be that the SQL in question got aged out of the shared pool, and when it came to be reparsed the values in the bind variables were such that access via index B was more attractive than access via index A.
    Could you please send the query, the good and bad plans, and any other information that might help diagnose the problem.
    Note: we had a similar case where plans suddenly changed for no apparent reason (on 10.2.0.2) - we found that under certain circumstances the optimizer would not peek into the bind variables to derive the execution path.
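    If it helps with the diagnosis, a minimal sketch for checking whether the statement has been running with more than one plan (the sql_id literal is a placeholder):

    SELECT sql_id, plan_hash_value, child_number, executions,
           ROUND(elapsed_time / 1e6) AS elapsed_secs
    FROM   v$sql
    WHERE  sql_id = 'abcd1234efgh5'   -- placeholder sql_id
    ORDER  BY child_number;

    -- With AWR licensed, dba_hist_sqlstat keeps a longer history of
    -- plan_hash_value per snapshot for the same sql_id.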

  • Explain plan, slow update

    We have two schemas in our database, one general and one history. When a record in the general schema is modified, a trigger updates a record in the history schema and inserts one new row there. There are indexes on the columns used in the updates (the WHERE part :)) and, as I can see using EXPLAIN PLAN FOR, the indexes are used. The problem is that when the client application fires the update, it takes too long (6 sec) to execute. Actually there is very strange behavior: it executes 5 updates in one second, then the next one takes 6 seconds. The tables have only around 1000 rows. Is there any way to see the whole explain plan including the trigger execution? When I issue EXPLAIN PLAN FOR the update on the general schema I get only the plan for the update - there is no information about the trigger. Also, if anybody has some advice I am more than willing to hear it :)

    You might want to consider enabling a SQL Trace via DBMS_MONITOR for more recent versions or setting the SQL_TRACE parameter in past versions. Once you have the output you can use TKPROF. Once you have the TKPROF report you can try and find the queries that are taking the most time to execute.
    Hope that helps!
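    A minimal sketch of that approach for the current session (DBMS_MONITOR on 10g and later; the trigger's recursive SQL ends up in the same trace file):

    -- Enable tracing with waits and bind values:
    EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => TRUE);

    -- ...run the slow UPDATE here so the trigger fires...

    EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE;

    -- Then, on the server, format the trace file:
    -- tkprof <tracefile>.trc report.txt sys=no sort=exeela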

  • Error with Explain Plan

    Hello,
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
    PL/SQL Release 10.2.0.4.0 - Production
    CORE     10.2.0.4.0     Production
    TNS for IBM/AIX RISC System/6000: Version 10.2.0.4.0 - Productio
    NLSRTL Version 10.2.0.4.0 - Production
    I have this issue:
    On the database cited above, first I execute a SELECT query, no matter what it is, with EXPLAIN PLAN FOR:
    EXPLAIN PLAN FOR SELECT <some query comes here>
    This executes successfully.
    Next, I do this to see the explain plan:
    SELECT *
    FROM TABLE (dbms_xplan.display());
    This generates the following error, which I read from the column "PLAN_TABLE_OUTPUT" of the result set:
    ERROR: an uncaught error in function display has happened; please contact Oracle support
    Please provide also a DMP file of the used plan table PLAN_TABLE
    ORA-00904: "OTHER_TAG": invalid identifier
    I see that it says to contact Oracle support, but unfortunately in this firm I am not in a position to contact Oracle when there is an issue.
    Probably it is obvious to most of you, but since I am receiving this error for the first time, I am wondering what the reason for the error could be.
    The table PLAN_TABLE exists, which I know is needed to hold the output of an EXPLAIN PLAN statement.
    Generally, in this database, when I try to see the explain plan for any query, the plan shows no values for any of the parameters: Cost, CPU Cost, I/O Cost, Cardinality, whatever.
    Could anyone suggest what could be changed in order to fix the problem?
    What else... the reason is not the tool I am using (PL/SQL Developer, version 7.1.5), because for other databases there is no problem with EXPLAIN PLAN.
    Thanks.

    You have an invalid PLAN_TABLE that has been created by some utility or has come from a script from a lower version.
    See the script $ORACLE_HOME/rdbms/admin/catplan.sql for the correct 10.2.0.4 PLAN_TABLE (script executed by SYS AS SYSDBA)
    Alternatively, use $ORACLE_HOME/rdbms/admin/utlxplan.sql to create a private PLAN_TABLE
    Hemant K Chitale
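    A minimal sketch of the fix described above (the script paths are the standard ones shipped with the database; adjust for your platform):

    -- As the owner of the outdated PLAN_TABLE:
    DROP TABLE plan_table PURGE;

    -- Recreate a private plan table from the shipped script:
    @?/rdbms/admin/utlxplan.sql

    -- Or have SYS run @?/rdbms/admin/catplan.sql to rebuild the shared plan table.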
