Explain Plan across a DBLink

We are using Oracle 8.1.7.2, but this problem seems pretty generic across most of the versions.
The situation: We have several queries that access data in two different databases, across a dblink. These queries both do joins across the dblink, and also utilize subselects to pull or push data across the link.
The problem: When we run Explain against these queries, it only states "Remote" and shows the cardinality of rows returned for that portion of the query that is run on the remote database through the dblink.
Obviously, for those queries using sub-selects we could manually split the query and run each half in the database where the objects reside. But for the complex queries doing joins across the link, we cannot split the query to explain it in this way.
The question: Is there a way to make the Explain function provide an access path map for these kinds of queries? If not, are there any other tools that would perform explains (or similar functionality) on queries accessing two databases?
Thanks for the assistance.
Scott

You're right. My guess is that the current database is not able to retrieve the details of the remote table from its data dictionary, and hence only REMOTE is displayed instead of the table name.
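
If it helps: the SQL text that Oracle ships to the remote site is recorded in the OTHER column of PLAN_TABLE for the REMOTE operation, so you can pull it out and explain it on the remote database yourself. A rough sketch (table, column and link names below are made up):

explain plan set statement_id = 'DBL' for
  select l.col1, r.col2
    from local_tab l, remote_tab@remote_db r
   where l.id = r.id;

-- OBJECT_NODE names the dblink, OTHER holds the SQL sent across it
select id, operation, options, object_node, other_tag, other
  from plan_table
 where statement_id = 'DBL'
 order by id;

-- take the statement from the OTHER column and run EXPLAIN PLAN for it
-- while connected to the remote database to see that side's access path

For joins executed remotely you can also experiment with the DRIVING_SITE hint, which controls which database performs the join and therefore what ends up in the REMOTE step.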

Similar Messages

  • Explain Plan in pl/sql

    Dear All.
    DBMS_XPLAN.DISPLAY_CURSOR(
       sql_id        IN  VARCHAR2  DEFAULT  NULL,
       child_number  IN  NUMBER    DEFAULT  NULL,
       format        IN  VARCHAR2  DEFAULT  'TYPICAL');
    SQL> SELECT * FROM table (
      2     DBMS_XPLAN.DISPLAY_CURSOR('b7jn4mf49n569'));
    PLAN_TABLE_OUTPUT
    SQL_ID  b7jn4mf49n569, child number 0
    select o.name, u.name from obj$ o, type$ t, user$ u  where o.oid$ = t.tvoid and
    u.user#=o.owner# and  bitand(t.properties,8388608) = 8388608 and
    (sysdate-o.ctime) > 0.0007
    Plan hash value: 4266358741
    | Id  | Operation                     | Name    | Rows  | Bytes | Cost (%CPU)| T
    |   0 | SELECT STATEMENT              |         |       |       |    94 (100)|
    |   1 |  NESTED LOOPS                 |         |     1 |    72 |    94   (2)| 0
    |   2 |   NESTED LOOPS                |         |     1 |    56 |    93   (2)| 0
    |*  3 |    TABLE ACCESS FULL          | OBJ$    |    71 |  2414 |    37   (3)| 0
    |*  4 |    TABLE ACCESS BY INDEX ROWID| TYPE$   |     1 |    22 |     1   (0)| 0
    |*  5 |     INDEX UNIQUE SCAN         | I_TYPE2 |     1 |       |     0   (0)|
    |   6 |   TABLE ACCESS CLUSTER        | USER$   |     1 |    16 |     1   (0)| 0
    |*  7 |    INDEX UNIQUE SCAN          | I_USER# |     1 |       |     0   (0)|
    PLAN_TABLE_OUTPUT
    Predicate Information (identified by operation id):
       3 - filter(("O"."OID$" IS NOT NULL AND SYSDATE@!-"O"."CTIME">.0007))
       4 - filter(BITAND("T"."PROPERTIES",8388608)=8388608)
       5 - access("O"."OID$"="T"."TVOID")
       7 - access("U"."USER#"="O"."OWNER#")
    29 rows selected
    As you can see, using DBMS_XPLAN.DISPLAY_CURSOR I can display the explain plan of any query in SQL*Plus.
    But I want to write a PL/SQL function that generates an explain plan for any query and displays it in an HTML page.
    How can I do the same thing in PL/SQL? I need your advice.
    Thanks in advance!

    Generate the plan like so:
    begin
    execute immediate 'explain plan for select * from dual';
    end;
    Then you can put the dbms_xplan.display bit into a ref cursor and pass that across to the front end.
    Eg:
    SQL> variable rc refcursor
    SQL> begin
      2  execute immediate 'explain plan for select * from dual';
      3  open :rc for select * from table(dbms_xplan.display);
      4* end;
    SQL> /
    PL/SQL procedure successfully completed.
    SQL> print rc;
    PLAN_TABLE_OUTPUT
    Plan hash value: 3543395131
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |      |     1 |     2 |     5   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS FULL| DUAL |     1 |     2 |     5   (0)| 00:00:01 |
    8 rows selected.
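
    If the goal really is an HTML page, one way (a sketch only - the function name and the simple <pre> wrapping are my own, not a built-in) is to wrap the same idea in a function that returns the plan as a CLOB, which the front end can then embed:
    create or replace function plan_as_html(p_stmt in varchar2)
      return clob
    is
      l_html clob := '<pre>' || chr(10);
    begin
      -- EXPLAIN PLAN writes into PLAN_TABLE, so call this from PL/SQL,
      -- not from inside a SELECT (DML inside a query raises ORA-14551)
      execute immediate 'explain plan for ' || p_stmt;
      for r in (select plan_table_output from table(dbms_xplan.display)) loop
        l_html := l_html || r.plan_table_output || chr(10);
      end loop;
      return l_html || '</pre>';
    end plan_as_html;
    /
    A JSP or mod_plsql page can then simply print the returned CLOB, or you can keep the ref cursor approach above and add the <pre> tags on the front end.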

  • Explain me about cost in explain plan.

    Hi All,
    Please explain me about cost in explain plan.
    Below are my explain plans:
    1. The first plan shows a higher cost but the query returns the result very fast (564 msecs) --- local database.
    2. The second plan shows a lower cost but the result is very slow (14 secs).
    1. local database.
    PLAN_TABLE_OUTPUT 
    | ID  | Operation                         | NAME                      | ROWS  | Bytes | COST  |
    |   0 | SELECT STATEMENT                  |                           |    17 |  2312 |    60 |
    |   1 |  SORT UNIQUE                      |                           |    17 |  2312 |    59 |
    |   2 |   COUNT STOPKEY                   |                           |       |       |       |
    |   3 |    NESTED LOOPS                   |                           |    17 |  2312 |    58 |
    |   4 |     MAT_VIEW ACCESS BY INDEX ROWID| MV_EDECO_ESONGS           |    17 |  1326 |    24 |
    |   5 |      INDEX RANGE SCAN             | IDX_ESNG_REG_TITLE        |    21 |       |     3 |
    |   6 |     TABLE ACCESS BY INDEX ROWID   | MV_EDECO_ESONGS_TERR_CTRY |     1 |    58 |     2 |
    |   7 |      INDEX UNIQUE SCAN            | IDX_ESNG_TERR_ESNG_ID_1   |     1 |       |     1 |
    PLAN_TABLE_OUTPUT
    | ID  | Operation                         | NAME                       | ROWS  | Bytes | COST  |
    |   0 | SELECT STATEMENT                  |                            |     9 |  1260 |    34 |
    |   1 |  SORT UNIQUE                      |                            |     9 |  1260 |    33 |
    |   2 |   COUNT STOPKEY                   |                            |       |       |       |
    |   3 |    NESTED LOOPS                   |                            |     9 |  1260 |    32 |
    |   4 |     MAT_VIEW ACCESS BY INDEX ROWID| MV_EDECO_ESONGS            |     9 |   720 |    14 |
    |   5 |      INDEX RANGE SCAN             | IDX_ESNG_REG_TITLE         |    11 |       |     3 |
    |   6 |     MAT_VIEW ACCESS BY INDEX ROWID| MV_EDECO_ESONGS_TERR_CTRY  |     1 |    60 |     2 |
    |   7 |      INDEX UNIQUE SCAN            | PK_EDESONGSTERCTRY_ESONGID |     1 |       |     1 |
    ------------------------------------------------------------------------------------------------
    Regards,
    Rajasekhar

    rajasekhar_n wrote:
    Hoek, I have cross-checked the results; both queries are processing the same records and returning the same result (both are using DB links).
    But you said the first query was on a local database? If you're using dblinks then it's not local, is it! :|

  • EXPLAIN PLAN estimate is WAY off!

    I used Explain Plan to estimate the time it will take to run a SQL query.
    This is the query:
    select to_char (a.calldate,'yyyy')as "date", a.CALL_TYPE_ABBR_NAME, b.name, sum(a.call_price) as "monthly revenue"
    from allcalls a, call_types b
    where a.CALL_TYPE_ABBR_NAME= b.code
    group by to_char (a.calldate,'yyyy'), a.CALL_TYPE_ABBR_NAME, b.name
    order by to_char (a.calldate,'yyyy'),a.CALL_TYPE_ABBR_NAME, b.name;
    allcalls table has 12,669,348 records and call_types has only 3.
    Both tables have been analyzed.
    The query actually takes only 2 minutes and 21 seconds to complete execution.
    But according to EXPLAIN PLAN and SELECT plan_table_output FROM table(dbms_xplan.display) it will take 2 hours. The output of the table(dbms_xplan.display) is:
    Plan hash value: 3083213950
    ------------------------------------------------------------------------------------------
    | Id  | Operation           | Name       | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    ------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT    |            |    12M|   241M|       |   215K  (1)| 00:43:05 |
    |   1 |  SORT GROUP BY      |            |    12M|   241M|   776M|   215K  (1)| 00:43:05 |
    |*  2 |   HASH JOIN         |            |    12M|   241M|       |   138K  (1)| 00:27:40 |
    |   3 |    TABLE ACCESS FULL| CALL_TYPES |     3 |    30 |       |     3   (0)| 00:00:01 |
    |   4 |    TABLE ACCESS FULL| ALLCALLS   |    12M|   120M|       |   138K  (1)| 00:27:39 |
    ------------------------------------------------------------------------------------------
    Predicate Information (identified by operation id):
    ---------------------------------------------------
       2 - access("A"."CALL_TYPE_ABBR_NAME"="B"."CODE")
    So, is it basically useless to rely on EXPLAIN PLAN to estimate the time it will take to run a query? Or is there something else we can do to get a better estimate?

    Hello Channa,
    You are really going about this thing the wrong way... but... you have amused a few :)
    As you've been told (a number of times and, a number of different ways :) ) any attempt at pegging the execution time of a SQL statement is quite futile and rather irrelevant to boot.
    This is what I suggest you do:
    1. Whatever application you are selling, it must handle a certain volume of data. Since I don't know anything about the application you're selling, I'm going to give you an example while pulling numbers out of my magic hat ;) . Let's presume that your application should handle reasonably well 12,000,000 rows spread across 100 tables and, that there are a number of reports that are run with greater frequency than others. (since you know your application, you'll need to adjust these numbers for whatever are representative numbers for your app).
    1.1 As an aside, I would not tell a prospective customer that you've tested the system using 14 rows. That could turn a few of them off ;) In other words, would you buy a database product from anyone that told you "Oh yeah! we tested it with 14 rows, it did great! ... runs like champ!" ? probably not. That's why you need larger numbers as pointed out in 1. above.
    2. Once you have a setup with a reasonable number of rows ;) you run whatever reports are important to your customers and you time them. The timings don't have to be exact, they'll vary but, everything being kept equal, the variation won't be as dramatic as using 14 rows ;)
    2.1 Given a reasonable test sample - meaning: more rows than there are donuts in a cop's car at 7:00 a.m - you customarily throw away the best and worst run times because those are often not representative.
    2.2 You compute an average and a standard deviation (this is not hard to do)
    2.3 You tell your customers that in order to get similar results to what you've presented they need xyz hardware configuration. In other words, you qualify the environment in which those results can be obtained.
    If someone asks for an exact figure, you tell them that there are refreshments on the table ;) or, if you are so inclined, you explain that exact is not possible.
    HTH,
    John.
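
    For step 2.2, assuming the run times are logged into a table of your own (REPORT_TIMINGS and its columns below are hypothetical), the arithmetic is a one-liner:
    select report_name,
           count(*)                as runs,
           avg(elapsed_seconds)    as avg_secs,
           stddev(elapsed_seconds) as stddev_secs
      from report_timings
     group by report_name;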

  • Explain Plan and  COST

    Hi,
    Is there any relationship between the cost factor in the explain plan and query execution time? I have come across situations both ways, i.e. high cost and low execution time, low cost and high execution time. What parameters do we need to consider when tuning a query, high cost or low cost? My assumption is that a lower cost does not guarantee a faster response time, yet I have seen many people try to reduce the cost first. Can anyone help me on this issue please?
    Thanks-Bhaskar

    Cost is a metric that Oracle uses to estimate the runtime of a query. However, it is not something that you probably ought to pay a lot of attention to. If you are looking at the performance of a query, you are presumably operating on the assumption that the optimizer may have chosen an incorrect plan. If the optimizer chose an incorrect plan then by definition the cost is incorrect. If the cost is correct then, by definition, Oracle chose the correct plan.
    It makes much more sense to focus on things like the cardinality of each step (the number of rows each step in the plan is expected to return). If the cardinality estimates are correct (or reasonably close), then the optimizer probably chose the correct plan. If the cardinality estimates are way off, the optimizer probably chose a less than optimal plan. And when you find the first step where the cardinality estimates are wildly incorrect, you'll know where you need to start focusing your tuning efforts.
    Justin
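
    A practical way to make that comparison (assuming 10g or later, where DBMS_XPLAN.DISPLAY_CURSOR and the GATHER_PLAN_STATISTICS hint exist; the tables below are hypothetical) is to execute the statement once with rowsource statistics switched on and read E-Rows next to A-Rows:
    -- run the statement once, collecting actual rowsource statistics
    select /*+ gather_plan_statistics */ count(*)
      from orders o
      join customers c on c.cust_id = o.cust_id;

    -- estimated rows (E-Rows) vs. actual rows (A-Rows) for every plan step
    select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
    Large discrepancies between the two columns point at the step where the optimizer's picture of the data went wrong.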

  • What is the "cost" in an explain plan

    We are using Oracle 10 with the cost-based optimizer in our PeopleSoft system. When we come across long-running SQL statements we run explain plans. I have heard different explanations of what cost means. The latest I heard is that it is disk access time, with a cost of one being 15 milliseconds for one disk access. So if a select statement has a cost of, say, 1000, then that means it is taking .015 x 1000 or 15 seconds to bring in all of the rows - is this accurate?
    Thanks,
    Allen Cunningham
    DBA, Sonoma State University

    That question is not directly linked to PeopleSoft, where you initially posted your thread.
    Anyway, the situation is not as simple as you said; have a look at Jonathan Lewis' blog entry:
    http://jonathanlewis.wordpress.com/2006/12/11/cost-is-time/
    Nicolas.
    <Thread moved to Database General Forum: General Database Discussions>
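
    For reference: with CPU costing, the cost is (roughly) the optimizer's estimated run time expressed in units of single-block read time, so any conversion to seconds depends on the SREADTIM value in your system statistics rather than on a fixed 15 ms per unit. A quick look, assuming you can query the SYS-owned statistics table:
    select pname, pval1
      from sys.aux_stats$
     where sname = 'SYSSTATS_MAIN'
       and pname in ('SREADTIM', 'MREADTIM', 'CPUSPEED');
    -- estimated seconds ~= cost * sreadtim / 1000  (an approximation, not a promise)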

  • Explain plan changes by result size from contains clause

    I use 10.2.0.3 Std Edition and have a query like this
    select t1.id from table1 t1
    where t1.col99 = 123
    and t1.id in (select ttxt.id from fulltexttable ttxt where contains (ttxt.thetext, 'word1 & word2'));
    (note: for each row in table1 exists at least one corresponding row in fulltexttable)
    Now I came across a surprising change in execution plans depending on the values of word1 and word2:
    - if the number of result rows from the subquery is low compared to all rows the full text index is used (table access by rowid/domain index)
    - if the number of result rows is high explain plan does not indicate any use of the domain index (full text index) but only a full text table scan. And the slow execution proves this plan.
    But: if I create explain plan for the subquery only there is no difference whether the number of result rows is high or low: the full text index is always used.
    Any clue for this change in execution strategy?
    Michael

    hi michael,
    this is expected behaviour: because you have a query incorporating more than just a text index and, furthermore, multiple tables, the optimizer may choose different access paths according to the cardinality of your where-clause terms. in fact, everything you see is vanilla behaviour.
    however, i suppose you probably have not yet heard about the "post filter" characteristic of a context index; see the tuning chapter of the Text developer's guide for more info. also note that the optimizer has no choice but to access the context index directly if you execute the subquery on its own (the "post filter" characteristic is not applicable there, because a post filter always needs some input to filter). and finally, be aware that oracle may unnest your subquery at its own discretion; that is, do not try to force a direct context index access through a subquery, it will not work (a hint is the only thing that works reliably).
    the only thing i cannot follow is the full table scan in your second example. don't you have indexes on the join columns table1.id and fulltexttable.id, respectively?
    p
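
    If you want to pin the subquery to the domain index while experimenting, an INDEX hint naming the CONTEXT index is the usual approach (the index name below is hypothetical, and as noted above the optimizer may still transform the subquery):
    select t1.id
      from table1 t1
     where t1.col99 = 123
       and t1.id in (select /*+ INDEX(ttxt my_text_idx) */ ttxt.id
                       from fulltexttable ttxt
                      where contains(ttxt.thetext, 'word1 & word2') > 0);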

  • Query tuning in Oracle using Explain Plan

    Adding to my question below: I have now modified the query and the path shown by 'Explain Plan' has been reduced. The 'Time' column of PLAN_TABLE is also showing a much lower value. However, some people are suggesting that I consider the time the query takes to execute in Toad. Is that practical? Please help!
    Hi, I am using Oracle 11g. I need to optimize a SELECT query (need to minimize the execution time) and I need to know how 'Explain Plan' would help me. I know how to use the Explain Plan command, and I refer to the PLAN_TABLE table to see the details of the plan.
    Please guide me regarding which columns of PLAN_TABLE should be considered while modifying the query for optimization. Some people say the 'Time' column should be considered, some say 'Bytes', etc. Some suggest minimizing the full table scans, while some people say that I should minimize the total number of operations (fewer rows should be displayed in PLAN_TABLE). As per an experienced friend of mine, full table scans should be reduced (e.g. if there are 5 full table scans in the plan, then try to reduce them to fewer than 5). However, when I look at any full table scan operation in PLAN_TABLE, it shows a 'Time' value of only 1, which is very small. Does this mean the full scan is actually taking very little time? If yes, then full table scans are very fast in my case and there is no need to work on them.
    Some articles suggest that the plan shown by the 'Explain Plan' command is not necessarily followed while executing the query. So what should I look for then? How should I optimize the query, and how will I come to know that it's optimized? Please help!

    885901 wrote:
    ... So what should I look for then? How should I optimize the query and how will I come to know that it's optimized?? Please help!!...
    How fast is fast enough?

  • Problems with explain plan and statement

    Hi community,
    I have migrated a j2ee application from DB2 to Oracle.
    First some facts of our application and database instance:
    We are using oracle version 10.2.0.3 and driver version 10.2.0.3. It runs with charset Unicode 3.0 UTF-8.
    Our application is using Tomcat as web container and jboss as application server. We are only using prepared statements. So if I talk about statements I always mean prepared statements. Also our application is setting the defaultNChar property to true because every char and varchar field has been created as an nchar and nvarchar.
    We have some JSP sites that contain lists with search forms. Every time I enter a value into the form that returns a filled resultset, the lists perform great. But every time I enter a value that returns an empty resultset, the lists are 100 times slower. The JSP sites are running in the Tomcat environment and submitting their statements directly to the database. The connections are pooled by dbcp. So what can cause this behaviour?
    To analyze this problem I started logging all statements and the filled-in search field values and combinations that are executed by the lists described above. I also developed a standalone helper tool that reads the logged statements, executes them against the database and generates an explain plan for every statement. But now a strange situation appears: every statement that performs really fast within our application is executed extremely slowly by the helper tool. So I edited some JSP pages within our application to force an explain plan from there (Tomcat env). So when I'm executing the same statement I'm getting, with exactly the same code, two completely different explain plans.
    First the statement itself:
    select LINVIN.BBASE , INVINNUM , INVINNUMALT , LINVIN.LSUPPLIERNUM , LSUPPLIERNUMEXT , LINVIN.COMPANYCODE , ACCOUNT , INVINTXT , INVINSTS , INVINTYP , INVINDAT , RECEIPTDAT , POSTED , POSTINGDATE , CHECKCOSTCENTER , WORKFLOWIDEXT , INVINREFERENCE , RESPONSIBLEPERS , INVINSUM_V , INVINSUMGROSS_V , VOUCHERNUM , HASPOSITIONS , PROCESSINSTANCEID , FCURISO_V , LSUPPLIER.AADDRLINE1 from LINVIN, LSUPPLIER where LINVIN.BBASE = LSUPPLIER.BBASE and LINVIN.LSUPPLIERNUM = LSUPPLIER.LSUPPLIERNUM and LINVIN.BBASE = ? order by LINVIN.BBASE, INVINDAT DESC
    Now the explain plan from our application:
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 101 | 28583 | 55 (0)| 00:00:01 |
    | 1 | NESTED LOOPS | | 101 | 28583 | 55 (0)| 00:00:01 |
    | 2 | TABLE ACCESS BY INDEX ROWID| LINVIN | 93709 | 12M| 25 (0)| 00:00:01 |
    |* 3 | INDEX RANGE SCAN | LINV_INVDAT | 101 | | 1 (0)| 00:00:01 |
    | 4 | TABLE ACCESS BY INDEX ROWID| LSUPPLIER | 1 | 148 | 1 (0)| 00:00:01 |
    |* 5 | INDEX UNIQUE SCAN | PK_177597 | 1 | | 1 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    3 - access("LINVIN"."BBASE"=:1)
    filter("LINVIN"."BBASE"=:1)
    5 - access("LSUPPLIER"."BBASE"=:1 AND "LINVIN"."LSUPPLIERNUM"="LSUPPLIER"."LSUPPLIERNUM")
    Now the one from the standalone tool:
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 93773 | 25M| | 12898 (1)| 00:02:35 |
    | 1 | SORT ORDER BY | | 93773 | 25M| 61M| 12898 (1)| 00:02:35 |
    |* 2 | HASH JOIN | | 93773 | 25M| 2592K| 7185 (1)| 00:01:27 |
    | 3 | TABLE ACCESS BY INDEX ROWID| LSUPPLIER | 16540 | 2390K| | 332 (0)| 00:00:04 |
    |* 4 | INDEX RANGE SCAN | LSUPPLIER_HAS_BASE_FK | 16540 | | | 11 (0)| 00:00:01 |
    | 5 | TABLE ACCESS BY INDEX ROWID| LINVIN | 93709 | 12M| | 6073 (1)| 00:01:13 |
    |* 6 | INDEX RANGE SCAN | LINVOICE_BMDT_FK | 93709 | | | 84 (2)| 00:00:02 |
    Predicate Information (identified by operation id):
    2 - access("LINVIN"."BBASE"="LSUPPLIER"."BBASE" AND "LINVIN"."LSUPPLIERNUM"="LSUPPLIER"."LSUPPLIERNUM")
    4 - access("LSUPPLIER"."BBASE"=:1)
    6 - access("LINVIN"."BBASE"=:1)
    The sizes of the tables are: LINVIN - 383,692 rows, LSUPPLIER - 115,782 rows.
    As you can see, the one executed from our application is much faster than the one from the helper tool. So why does Oracle pick a completely different explain plan for the same statement? And why is a hash join much slower than a nested loop? Because, if I'm right, a nested loop should only be used when the tables are pretty small.
    I also tried to play with some parameters:
    I set optimizer_index_caching to 100 and optimizer_index_cost_adj to 30. I also changed optimizer_mode to FIRST_ROWS_100.
    I would really appreciated, if somebody can help me with this issue, because I'm really getting more and more distressed...
    Thanks in advance,
    Tobias

    tobiwan wrote:
    Hi again,
    Here is the answer:
    The problem, because of which I got two different explain plans, was that the external tool uses the NLS session parameters coming from the OS, which in my case are "de/DE".
    Within our application these parameters are changed to "en/US"!! So if I call the Java function Locale.setDefault(new Locale("en","US")) in my external tool before connecting to the database, the explain plans are finally equal.
    That might explain why you got two different execution plans: one plan was obviously able to avoid a SORT ORDER BY operation, whereas the second plan had to run a SORT ORDER BY operation, obviously because of the different NLS_SORT settings. An index by default uses NLS_SORT = 'binary' order, whereas ORDER BY obeys the NLS_SORT setting, which was probably set to 'GERMAN' in your "external tool" case. You can check the "NLS_SESSION_PARAMETERS" view to see your current NLS_SORT setting.
    For more information regarding this issue, see the blog note I wrote about this some time ago:
    http://oracle-randolf.blogspot.com/2008/09/getting-first-rows-of-large-sorted.html
    Now let me make a guess why you observe the behaviour that it takes so long if your result set is empty:
    The plan avoiding the SORT ORDER BY is able to return the first rows of the result set very quickly, but could take quite a while until all rows are processed, since it potentially requires a lot of iterations of the loop until everything has been processed. Your front end probably by default only displays the first n rows of the result set and therefore works fine with this execution plan.
    Now if the result set is empty, depending on your data, indexes and search criteria, Oracle has to work through all the data using the inefficient NESTED LOOP approach only to find out that no data has been found, and since your application attempts to fetch the first n records, but no records will be found, it has to wait until all data has been processed.
    You can try to reproduce this by deliberately fetching all records of a query that returns data and that uses the NESTED LOOP approach... It probably takes as long as in the case when no records are found.
    Note that you seem to use bind variables and 10g, therefore you might be interested that due to the "bind variable peeking" functionality you might potentially end up with "unstable" plans depending on the values "peeked" when the statement is parsed.
    For more information, see this comprehensive description of the issue:
    http://www.pythian.com/blogs/867/stabilize-oracle-10gs-bind-peeking-behaviour-by-cutting-histograms
    Note that this changes in 11g with the introduction of the "Adaptive Cursor Sharing".
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/
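
    To see and reproduce the effect Randolf describes, a couple of session-level statements are enough (a sketch, run in the session you are testing):
    -- what the session is currently sorting with
    select parameter, value
      from nls_session_parameters
     where parameter in ('NLS_LANGUAGE', 'NLS_SORT', 'NLS_COMP');

    -- mimic the external tool, then the application
    alter session set nls_sort = 'GERMAN';   -- ORDER BY now needs a SORT ORDER BY step
    alter session set nls_sort = 'BINARY';   -- index order can satisfy the ORDER BY again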

  • CPU time is not getting displayed in explain plan

    Hi All,
    I am trying to analyze one query using explain plan, like below:
    1) explain plan for
    SELECT /*+ parallel(tsp,8) use_hash( tsp tp) */ count(1)
    FROM router tp,
    receiver tsp
    WHERE tp.rs = tsp.rp
    AND creater_date >=to_date('04032009000000','ddmmyyyyhh24miss')
    and tsp.XVF is not null
    and tp.XVF is not null
    and tp.role_name='BR';
    2)@$ORACLE_HOME/rdbms/admin/utlxpls.sql
    But I am getting only the following columns in the result:
    | Id | Operation | Name | Rows | Bytes | Cost |
    No CPU time present.
    How can I estimate CPU time?
    Pls help
    Thanks

    am_73798 wrote:
    I am trying to analyze one query using explain plan .like below
    But i am getting only following columns in result .
    | Id | Operation | Name | Rows | Bytes | Cost |
    No Cpu time preset .
    How can I estimate CPU time?
    You need to mention your database version (4 digits, e.g. 9.2.0.8).
    In Oracle 9i CPU costing is disabled by default, you need to gather WORKLOAD system statistics to enable the CPU costing.
    In 10g CPU costing is enabled by default and uses default NOWORKLOAD system statistics if no WORKLOAD system statistics have been gathered. It can only be disabled by setting an undocumented parameter or by changing the OPTIMIZER_FEATURES_ENABLE parameter back to 9i compatibility.
    You can check the status of your system statistics by running the following query in SQL*Plus:
    column sname format a20
    column pname format a20
    column pval2 format a20
    select
    sname
    , pname
    , pval1
    , pval2
    from
    sys.aux_stats$;
    Can you show us the actual (complete) output you get from "utlxpls.sql"? Use the code tag to preserve formatting here: wrapping your output in code tags will show it with the formatting preserved.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/
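
    If the query against SYS.AUX_STATS$ shows that only NOWORKLOAD statistics are present, a sketch of how WORKLOAD system statistics are gathered (run it during a representative load window):
    begin
      -- one-shot capture over a 60-minute interval
      dbms_stats.gather_system_stats(gathering_mode => 'INTERVAL', interval => 60);
    end;
    /
    -- or bracket the window yourself
    begin dbms_stats.gather_system_stats('START'); end;
    /
    -- ... let the normal workload run for a while ...
    begin dbms_stats.gather_system_stats('STOP'); end;
    /
    With CPU costing active, the plan output should then include the CPU component of the cost (and, from 10g on, a Time column).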

  • Not Understanding the filter in Explain Plan - filter(NULL IS NOT NULL)

    Hi All,
    Request your help in understanding the below scenario. (I am not aware of the application and table details; just trying to help my friend.)
    SQL> conn
    Enter user-name: [email protected]
    Enter password:
    Connected.
    SQL> select * from v$version;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bi
    PL/SQL Release 10.2.0.3.0 - Production
    CORE    10.2.0.3.0      Production
    TNS for Linux: Version 10.2.0.3.0 - Production
    NLSRTL Version 10.2.0.3.0 - Production
    --Checking the count in PO_LINES
    SQL> select count(*) from po_lines;
      COUNT(*)
             0
    --PO_LINES is a synonym
    SQL> select object_type,owner from dba_objects where object_name = 'PO_LINES';
    OBJECT_TYPE         OWNER
    SYNONYM             APPS
    --The synonym is pointing to PO.PO_LINES_ALL
    SQL> select * from user_synonyms where synonym_name = 'PO_LINES';
    SYNONYM_NAME                   TABLE_OWNER                    TABLE_NAME                     DB_LINK
    PO_LINES                       PO                             PO_LINES_ALL
    --But when counting PO.PO_LINES_ALL I am getting a different result
    SQL> select count(*) c from po.po_lines_all;
             C
          8828
    --Explain plan of the original query is
    SQL> explain plan for
      2  select
      3  * from po_lines;
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    | Id  | Operation          | Name         | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT   |              |     1 |   252 |     0   (0)|
    |*  1 |  FILTER            |              |       |       |            |
    |   2 |   TABLE ACCESS FULL| PO_LINES_ALL |  8796 |  2164K|   106   (4)|
    Predicate Information (identified by operation id):
       1 - filter(NULL IS NOT NULL)
    --Now the object PO.PO_LINES_ALL is TABLE, not an mview.
    SQL> select object_type,owner from dba_objects where object_name = 'PO_LINES_ALL';
    OBJECT_TYPE         OWNER
    TABLE               PO
    Seek your help in understanding what is happening here.
    Thanks in Advance,
    jeneesh

    Next time, prefix with APPS. when you show us the explain plan:
    SQL> explain plan for
      2  select
      3  * from apps.po_lines;  -- added the prefix of owner
    Just like you prefixed with PO. when you showed us the query on PO_LINES_ALL. It ensures that you are using the synonym which you showed us.
    Btw, PO_LINES_ALL could still be a VIEW, given your overview of the situation.
    Anyway, a filter "NULL IS NOT NULL" is indicative that the optimizer performed something called semantic query optimization (SQO).
    SQO is the process of deducing new predicates based upon a) existing predicates in your query (of which there are none here), b) predicates added to your query (e.g. by a VPD policy function), and c) declared constraints on the tables involved in your query.
    A typical example of when a "NULL IS NOT NULL" predicate will show up is when, for instance, the EMP table has a declared constraint on EMPNO like this:
    check(EMPNO > 0)
    And your query holds a predicate that is inconsistent with the constraint, for instance like this:
    select *
    from EMP
    where EMPNO <= 0
    Oracle will deduce that EMPNO cannot be both greater than zero (constraint) and smaller than or equal to zero (your query predicate), and will transform the query into:
    select *
    from EMP
    where EMPNO <= 0
      and NULL is NOT NULL
    thus preventing access to the EMP table altogether, and immediately returning from this query with no data found.
    Edited by: Toon Koppelaars on Mar 15, 2010 7:17 AM

  • Filter(NULL IS NOT NULL) in Explain Plan ??

    Hi All,
    Can someone please explain what this explain plan statement means? I see filter(NULL IS NOT NULL) as the first filter predicate, and I could not figure out from googling why it shows up.
    My Query Used:
    EXPLAIN PLAN FOR
    MERGE INTO summary_bysrccd
    USING
    (SELECT LAST_DAY(TRUNC(to_timestamp(os.requestdatetime, 'yyyymmddhh24:mi:ss.ff4'))) AS SUMMARY_DATE,
    os.acctnum,
    ol.sourcecode AS sourcecode,
    ol.sourcename AS sourcename,
    count(1) cnt_articleview
    FROM article_views os , master_sourcecode ol
    where os.sourcecode = ol.sourcecode
    AND os.acctnum IS NOT NULL
    AND ol.sourcecode IS NOT NULL
    AND os.requestdatetime IS NOT NULL
    AND UPPER(os.success_ind) = 'S'
         AND (
              ('INCR'  = 'FULL'
              AND  (get_date_timestamp(os.requestdatetime) BETWEEN TO_DATE('23-AUG-2011 00:00:00','DD-MON-YYYY HH24:MI:SS') AND TO_DATE('27-AUG-2011 23:59:59','DD-MON-YYYY HH24:MI:SS')
              AND   os.entry_CreatedDate BETWEEN TO_DATE('22-AUG-2011 00:00:00','DD-MON-YYYY HH24:MI:SS') AND TO_DATE('28-AUG-2011 00:00:00','DD-MON-YYYY HH24:MI:SS')
              OR ('INCR' = 'FULL'
              AND os.entry_createddate BETWEEN TO_DATE('23-AUG-2011 00:00:00','DD-MON-YYYY HH24:MI:SS') AND TO_DATE('27-AUG-2011 23:59:59','DD-MON-YYYY HH24:MI:SS') )
    group by LAST_DAY(TRUNC(to_timestamp(os.requestdatetime, 'yyyymmddhh24:mi:ss.ff4'))),
    os.acctnum,ol.sourcecode,ol.sourcename) mrg_query
    ON (ods_av_summary_bysrccd.acctnum = mrg_query.acctnum AND
    ods_av_summary_bysrccd.summary_date=mrg_query.summary_date AND
    ods_av_summary_bysrccd.sourcecode=mrg_query.sourcecode)
    WHEN NOT MATCHED THEN
    INSERT (SUMMARY_date,ACCTNUM,SOURCECODE,SOURCENAME,CNT_ARTICLEVIEW,ENTRY_LASTUPDATEDDATE)
    VALUES(mrg_query.summary_date,mrg_query.acctnum,mrg_query.sourcecode,mrg_query.sourcename,
    mrg_query.cnt_articleview,sysdate)
    WHEN MATCHED THEN
    UPDATE SET ods_av_summary_bysrccd.cnt_articleview=
    CASE WHEN NVL('INCR','INCR') = 'FULL' THEN mrg_query.cnt_articleview
    ELSE ods_av_summary_bysrccd.cnt_articleview+mrg_query.cnt_articleview
    END,
    ods_av_summary_bysrccd.entry_lastupdateddate=sysdate;
    My Explain Plan:
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 268591246
    | Id  | Operation                                 | Name                      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | MERGE STATEMENT                           |                           |     1 |   456 |       |     3   (0)| 00:00:01 |       |       |
    |   1 |  MERGE                                    | ODS_AV_SUMMARY_BYSRCCD    |       |       |       |            |          |       |       |
    |   2 |   VIEW                                    |                           |       |       |       |            |          |       |       |
    |   3 |    NESTED LOOPS OUTER                     |                           |     1 |   417 |       |     3   (0)| 00:00:01 |       |       |
    |   4 |     VIEW                                  |                           |     1 |   360 |       |     5 (100)| 00:00:01 |       |       |
    |   5 |      SORT GROUP BY                        |                           |     1 |    73 |   595M|            |          |       |       |
    PLAN_TABLE_OUTPUT
    |*  6 |       FILTER                              |                           |       |       |       |            |          |       |       |
    |*  7 |        HASH JOIN                          |                           |  6975K|   485M|  3944K| 17594   (1)| 00:03:32 |       |       |
    |   8 |         TABLE ACCESS FULL                 | ODS_MASTER_SOURCECODE     | 84021 |  2953K|       |   273   (1)| 00:00:04 |       |       |
    |*  9 |         TABLE ACCESS BY GLOBAL INDEX ROWID| ODS_ARTICLE_VIEWS         |  7007K|   247M|       |   826   (0)| 00:00:10 |    33 |    33 |
    |* 10 |          INDEX FULL SCAN                  | IDX_AV_ACCTNUM            |    25M|       |       |    26   (0)| 00:00:01 |       |       |
    |  11 |     TABLE ACCESS BY GLOBAL INDEX ROWID    | ODS_AV_SUMMARY_BYSRCCD    |     1 |    57 |       |     3   (0)| 00:00:01 | ROWID | ROWID |
    |* 12 |      INDEX UNIQUE SCAN                    | ODS_AV_SUMMARY_BYSRCCD_PK |     1 |       |       |     2   (0)| 00:00:01 |       |       |
    Predicate Information (identified by operation id):
    PLAN_TABLE_OUTPUT
       6 - filter(NULL IS NOT NULL)
       7 - access("OS"."SOURCECODE"="OL"."SOURCECODE")
       9 - filter("OS"."REQUESTDATETIME" IS NOT NULL AND "OS"."ENTRY_CREATEDDATE">=TO_DATE(' 2011-08-23 00:00:00', 'syyyy-mm-dd
                  hh24:mi:ss') AND "OS"."ENTRY_CREATEDDATE"<=TO_DATE(' 2011-08-27 23:59:59', 'syyyy-mm-dd hh24:mi:ss') AND UPPER("OS"."SUCCESS_IND")='S')
      10 - filter("OS"."ACCTNUM" IS NOT NULL)
      12 - access("ODS_AV_SUMMARY_BYSRCCD"."SUMMARY_DATE"(+)=INTERNAL_FUNCTION("MRG_QUERY"."SUMMARY_DATE") AND
                  "ODS_AV_SUMMARY_BYSRCCD"."ACCTNUM"(+)="MRG_QUERY"."ACCTNUM" AND "ODS_AV_SUMMARY_BYSRCCD"."SOURCECODE"(+)="MRG_QUERY"."SOURCECODE")
    Note
    PLAN_TABLE_OUTPUT
       - dynamic sampling used for this statement

    Hi Toon,
    Thanks for the quick resolution. I went back and verified the table's colunm details and it has a NOT NULL constraint.
    Regards,
    Chaitanya
    P.S.: Is it OK if I ask you for some help regarding a production issue I have been encountering for 15 days, with no clear resolution yet about what the reason is or why (the said issue is neither uniform nor regular - it is affecting some modules and happening only on some days - I shall give the full details if you are willing to have a look)? I shall start a new post or email you directly, at your convenience.
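
    For what it's worth, the same filter appears whenever a predicate can never be true regardless of the data, which is what the literal 'INCR' = 'FULL' comparisons in the posted MERGE appear to boil down to. A one-line way to see it:
    explain plan for select * from dual where 'INCR' = 'FULL';
    select * from table(dbms_xplan.display);
    -- FILTER ... filter(NULL IS NOT NULL) sits above the table access,
    -- so the branch underneath it is never executed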

  • Explain plan results are different in SQL Developer than SQL Plus

    My Environment:
    SQL Developer 1.0.0.15.27
    Platform where SQL Developer is running: Windows XP 2002 SP2
    Oracle Database and Client 9.2.0.7
    Optimizer_mode: FIRST_ROWS
    I have the following SQL statement:
    SELECT a1.comp_id
    FROM temp_au_company a0, au_company a1
    WHERE :b2 = a0.temp_emp_code
    AND a0.comp_id = a1.comp_id
    AND a0.sls_terr_code != a1.sls_terr_code
    AND a1.last_mdfy_date > :b1
    When I run an Explain in SQL Developer I get the following access path (which is the one I really want):
    SELECT STATEMENT
      TABLE ACCESS (BY INDEX ROWID) FEDLINK.AU_COMPANY
        NESTED LOOPS
          INDEX (RANGE SCAN) FEDLINK.UX2_TEMP_AU_COMPANY
          INDEX (RANGE SCAN) FEDLINK.PX1_COMPANY
    However, when I execute the statement with sql_trace turned on and use tkprof to generate the actual access path, the statement executes as follows (which is WAY more expensive):
    call     count    cpu    elapsed    disk     query    current    rows
    Parse        1   0.00       0.00       0         0          0       0
    Execute      1   0.00       0.00       0         0          0       0
    Fetch        1   3.58       6.68   28136     29232          0       0
    total        3   3.58       6.69   28136     29232          0       0
    Misses in library cache during parse: 1
    Optimizer goal: FIRST_ROWS
    Parsing user id: 979 (FEDLINK) (recursive depth: 1)
    Rows Row Source Operation
    0 NESTED LOOPS
    0 TABLE ACCESS FULL AU_COMPANY
    0 INDEX RANGE SCAN UX2_TEMP_AU_COMPANY (object id 49783)
    Notice the FULL access of au_company.
    I understand that SQL Developer has nothing to do with why the statement executed the way it did, but why is the Explain in SQL Developer different than the actual execution plan?
    Added note....when I run the explain in SQL Plus it is the same as the actual execution. Here is the explain from SQL Plus:
    explain plan for SELECT a1.comp_id
    FROM temp_au_company a0, au_company a1
    WHERE '1' = a0.temp_emp_code
    AND a0.comp_id = a1.comp_id
    AND a0.sls_terr_code != a1.sls_terr_code
    AND a1.last_mdfy_date > '01-MAY-2006';
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes | Cost |
    | 0 | SELECT STATEMENT | | 2 | 76 | 2597 |
    | 1 | NESTED LOOPS | | 2 | 76 | 2597 |
    | 2 | TABLE ACCESS FULL | AU_COMPANY | 2 | 42 | 2595 |
    | 3 | INDEX RANGE SCAN | UX2_TEMP_AU_COMPANY | 1 | 17 | 2
    Thanks,
    Brenda

    The explain is different (full scan of au_company in SQL Plus / index access in SQL Developer) even when I use variables in SQL Plus. Here is the output for SQL Plus using variables instead of literals:
    SQL> variable b1 varchar2
    SQL> variable b2 char
    SQL> explain plan for SELECT a1.comp_id
    2 FROM temp_au_company a0, au_company a1
    3 WHERE :b2 = a0.temp_emp_code
    4 AND a0.comp_id = a1.comp_id
    5 AND a0.sls_terr_code != a1.sls_terr_code
    6 AND a1.last_mdfy_date > :b1
    7 /
    Explained.
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes | Cost |
    | 0 | SELECT STATEMENT | | 3184 | 118K| 2995 |
    | 1 | HASH JOIN | | 3184 | 118K| 2995 |
    | 2 | INDEX RANGE SCAN | UX2_TEMP_AU_COMPANY | 3187 | 54179 | 3 |
    | 3 | TABLE ACCESS FULL | AU_COMPANY | 24009 | 492K| 2983 |
    Any other ideas? They should be the same.
    Brenda
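
    Since this is 9.2, DBMS_XPLAN.DISPLAY_CURSOR is not available yet, but the plan that was actually used at run time is visible in V$SQL_PLAN, which removes the guesswork of EXPLAIN PLAN. A rough sketch (the substitution variables are just for the lookup):
    -- locate the cursor
    select address, hash_value, child_number
      from v$sql
     where sql_text like 'SELECT a1.comp_id%';

    -- show the runtime plan for that child cursor
    select lpad(' ', 2 * depth) || operation || ' ' || options as plan_step,
           object_name
      from v$sql_plan
     where address      = hextoraw('&address')
       and hash_value   = &hash_value
       and child_number = 0
     order by id;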

  • Performance problem: Query explain plan changes in pl/sql vs. literal args

    I have a complex query with 5+ table joins on large (million+ row) tables. In its most simplified form, it's essentially:
    select * from largeTable large
    join anotherLargeTable anothr on (anothr.id_2 = large.pk_id)
    join...(other aux tables)
    where large.pk_id between 123 and 456;
    Its performance was excellent with literal arguments (1 sec per execution).
    But when I used PL/SQL bind argument variables instead of the literals 123 and 456, the explain plan changed drastically and the query ran for 10+ minutes.
    Ex:
    CREATE PROCEDURE runQuery(param1 INTEGER, param2 INTEGER) IS
    CURSOR LT_CURSOR IS
    select * from largeTable large
    join anotherLargeTable anothr on (anothr.id_2 = large.pk_id)
    join...(other aux tables)
    where large.pk_id between param1 AND param2;
    BEGIN
    FOR aRecord IN LT_CURSOR
    LOOP
    (print timestamp...)
    END LOOP;
    END runQuery;
    Rewriting the query 5 different ways was unfruitful. DB hints were also unfruitful in this particular case. LargeTable.pk_id was an indexed field as were all other join fields.
    Solution:
    Lacking other options, I wrote a literal query that concatenated the variable args and opened a cursor for that literal query.
    Upside: It changed the explain plan to the only really fast option and performed at 1 second instead of 10mins.
    Downside: Query not cached for future use. Perfectly fine for this query's purpose.
    Other suggestions are welcome.

    AmandaSoosai wrote:
    I have a complex query with 5+ table joins on large (million+ row) tables. In it's most simplified form, it's essentially
    select * from largeTable large
    join anotherLargeTable anothr on (anothr.id_2 = large.pk_id)
    join...(other aux tables)
    where large.pk_id between 123 and 456;
    Its performance was excellent with literal arguments (1 sec per execution).
    But, when I used pl/sql bind argument variables instead of 123 and 456 as literals, the explain plan changes drastically, and runs 10+ minutes.
    Ex:
    CREATE PROCEDURE runQuery(param1 INTEGER, param2 INTEGER){
    CURSOR LT_CURSOR IS
    select * from largeTable large
    join anotherLargeTable anothr on (anothr.id_2 = large.pk_id)
    join...(other aux tables)
    where large.pk_id between param1 AND param2;
    BEGIN
    FOR aRecord IN LT_CURSOR
    LOOP
    (print timestamp...)
    END LOOP;
    END runQuery;
    Rewriting the query 5 different ways was unfruitful. DB hints were also unfruitful in this particular case. LargeTable.pk_id was an indexed field as were all other join fields.
    Solution:
    Lacking other options, I wrote a literal query that concatenated the variable args. Open a cursor for the literal query.
    Upside: It changed the explain plan to the only really fast option and performed at 1 second instead of 10mins.
    Downside: Query not cached for future use. Perfectly fine for this query's purpose.
    Other suggestions are welcome.
    Best wild guess based on what you've posted is a bind variable mismatch (your column is declared as a NUMBER data type and your bind variable is declared as a VARCHAR, for example). Unless you have histograms on the columns in question ... which, if you're using bind variables, is usually a really bad idea.
    A basic illustration of my guess
    http://blogs.oracle.com/optimizer/entry/how_do_i_get_sql_executed_from_an_application_to_uses_the_same_execution_plan_i_get_from_sqlplus
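
    One way to rule the datatype guess in or out is to anchor the parameters to the column they are compared against; a sketch using the names from the post (the extra joins are left out, and the loop body is reduced to a placeholder):
    create or replace procedure runQuery(
      param1 in largeTable.pk_id%type,
      param2 in largeTable.pk_id%type
    ) is
      cursor lt_cursor is
        select *
          from largeTable large
          join anotherLargeTable anothr on (anothr.id_2 = large.pk_id)
         where large.pk_id between param1 and param2;
    begin
      for arecord in lt_cursor loop
        null;  -- print timestamp / process the row, as in the original
      end loop;
    end runQuery;
    /
    If the plan is still bad with correctly typed binds, the remaining suspects are bind peeking and histograms, as described in the linked article.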

  • How to improve the query performance or tune query from Explain Plan

    Hi
    The following is my explain plan for sql query. (The plan is generated by Toad v9.7). How to fix the query?
    SELECT STATEMENT ALL_ROWSCost: 4,160 Bytes: 25,296 Cardinality: 204                                         
         8 NESTED LOOPS Cost: 3 Bytes: 54 Cardinality: 1                                    
              5 NESTED LOOPS Cost: 2 Bytes: 23 Cardinality: 1                               
                   2 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 13 Cardinality: 1                          
                        1 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                     
                   4 TABLE ACCESS BY INDEX ROWID TABLE AR.HZ_CUST_ACCOUNTS Cost: 1 Bytes: 10 Cardinality: 1                          
                        3 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.HZ_CUST_ACCOUNTS_U1 Cost: 1 Cardinality: 1                     
              7 TABLE ACCESS BY INDEX ROWID TABLE AR.HZ_PARTIES Cost: 1 Bytes: 31 Cardinality: 1                               
                   6 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.HZ_PARTIES_U1 Cost: 1 Cardinality: 1                          
         10 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1                                    
              9 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                               
         15 NESTED LOOPS Cost: 2 Bytes: 29 Cardinality: 1                                    
              12 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1                               
                   11 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                          
              14 TABLE ACCESS BY INDEX ROWID TABLE ONT.OE_ORDER_HEADERS_ALL Cost: 1 Bytes: 17 Cardinality: 1                               
                   13 INDEX RANGE SCAN INDEX (UNIQUE) ONT.OE_ORDER_HEADERS_U2 Cost: 1 Cardinality: 1                          
         21 FILTER                                    
              16 TABLE ACCESS FULL TABLE ONT.OE_TRANSACTION_TYPES_TL Cost: 2 Bytes: 1,127 Cardinality: 49                               
              20 NESTED LOOPS Cost: 2 Bytes: 21 Cardinality: 1                               
                   18 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1                          
                        17 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                     
                   19 INDEX RANGE SCAN INDEX (UNIQUE) ONT.OE_ORDER_HEADERS_U2 Cost: 1 Bytes: 9 Cardinality: 1                          
         23 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1                                    
              22 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                               
         45 NESTED LOOPS Cost: 4,160 Bytes: 25,296 Cardinality: 204                                    
              42 NESTED LOOPS Cost: 4,150 Bytes: 23,052 Cardinality: 204                               
                   38 NESTED LOOPS Cost: 4,140 Bytes: 19,992 Cardinality: 204                          
                        34 NESTED LOOPS Cost: 4,094 Bytes: 75,850 Cardinality: 925                     
                             30 NESTED LOOPS Cost: 3,909 Bytes: 210,843 Cardinality: 3,699                
                                  26 PARTITION LIST ALL Cost: 2,436 Bytes: 338,491 Cardinality: 14,717 Partition #: 29 Partitions accessed #1 - #18          
                                       25 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_AE_HEADERS Cost: 2,436 Bytes: 338,491 Cardinality: 14,717 Partition #: 29 Partitions accessed #1 - #18     
                                            24 INDEX SKIP SCAN INDEX XLA.XLA_AE_HEADERS_N1 Cost: 264 Cardinality: 1,398,115 Partition #: 29 Partitions accessed #1 - #18
                                  29 PARTITION LIST ITERATOR Cost: 1 Bytes: 34 Cardinality: 1 Partition #: 32           
                                       28 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_AE_LINES Cost: 1 Bytes: 34 Cardinality: 1 Partition #: 32      
                                            27 INDEX RANGE SCAN INDEX (UNIQUE) XLA.XLA_AE_LINES_U1 Cost: 1 Cardinality: 1 Partition #: 32
                             33 PARTITION LIST ITERATOR Cost: 1 Bytes: 25 Cardinality: 1 Partition #: 35                
                                  32 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_DISTRIBUTION_LINKS Cost: 1 Bytes: 25 Cardinality: 1 Partition #: 35           
                                       31 INDEX RANGE SCAN INDEX XLA.XLA_DISTRIBUTION_LINKS_N3 Cost: 1 Cardinality: 1 Partition #: 35      
                        37 PARTITION LIST SINGLE Cost: 1 Bytes: 16 Cardinality: 1 Partition #: 38                     
                             36 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_EVENTS Cost: 1 Bytes: 16 Cardinality: 1 Partition #: 39 Partitions accessed #2               
                                  35 INDEX UNIQUE SCAN INDEX (UNIQUE) XLA.XLA_EVENTS_U1 Cost: 1 Cardinality: 1 Partition #: 40 Partitions accessed #2          
                   41 PARTITION LIST SINGLE Cost: 1 Bytes: 15 Cardinality: 1 Partition #: 41                          
                        40 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_TRANSACTION_ENTITIES Cost: 1 Bytes: 15 Cardinality: 1 Partition #: 42 Partitions accessed #2                    
                             39 INDEX UNIQUE SCAN INDEX (UNIQUE) XLA.XLA_TRANSACTION_ENTITIES_U1 Cost: 1 Cardinality: 1 Partition #: 43 Partitions accessed #2               
              44 TABLE ACCESS BY INDEX ROWID TABLE GL.GL_CODE_COMBINATIONS Cost: 1 Bytes: 11 Cardinality: 1                               
                   43 INDEX UNIQUE SCAN INDEX (UNIQUE) GL.GL_CODE_COMBINATIONS_U1 Cost: 1 Cardinality: 1

    damorgan wrote:
    Tuning is NOT about reducing the cost of i/o.
    i/o is only one of many contributors to cost and only one of many contributors to waits.
    Any time you would like to explore this further run this code:
    SELECT 1 FROM dual
    WHERE regexp_like(' ','^*[ ]*a');
    but not on a production box because you are going to experience an extreme tuning event with zero i/o.
    And when I say "extreme" I mean "EXTREME!"
    You've been warned.
    I think you just need a faster server.
    SQL> set autotrace traceonly statistics
    SQL> set timing on
    SQL> select 1 from dual
      2  where
      3  regexp_like   (' ','^*[ ]*a');
    no rows selected
    Elapsed: 00:00:00.00
    Statistics
              1  recursive calls
              0  db block gets
              0  consistent gets
              0  physical reads
              0  redo size
            243  bytes sent via SQL*Net to client
            349  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              0  rows processed
    Repeated from an Oracle 10.2.0.x instance:
    SQL> SELECT DISTINCT SID FROM V$MYSTAT;
           SID
           310
    SQL> ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    Session altered.
    SQL> select 1 from dual
      2  where
      3  regexp_like   (' ','^*[ ]*a');
    The session is hung. Wait a little while and connect to the database using a different session:
    COLUMN STAT_NAME FORMAT A35 TRU
    SET PAGESIZE 200
    SELECT
      STAT_NAME,
      VALUE
    FROM
      V$SESS_TIME_MODEL
    WHERE
      SID=310;
    STAT_NAME                                VALUE
    DB time                                   9247
    DB CPU                                    9247
    background elapsed time                      0
    background cpu time                          0
    sequence load elapsed time                   0
    parse time elapsed                        6374
    hard parse elapsed time                   5997
    sql execute elapsed time                  2939
    connection management call elapsed        1660
    failed parse elapsed time                    0
    failed parse (out of shared memory)          0
    hard parse (sharing criteria) elaps          0
    hard parse (bind mismatch) elapsed           0
    PL/SQL execution elapsed time               95
    inbound PL/SQL rpc elapsed time              0
    PL/SQL compilation elapsed time              0
    Java execution elapsed time                  0
    repeated bind elapsed time                  48
    RMAN cpu time (backup/restore)               0
    Seems to be using a bit of time for the hard parse (hard parse elapsed time). Wait a little while, then re-execute the query:
    STAT_NAME                                VALUE
    DB time                                   9247
    DB CPU                                    9247
    background elapsed time                      0
    background cpu time                          0
    sequence load elapsed time                   0
    parse time elapsed                        6374
    hard parse elapsed time                   5997
    sql execute elapsed time                  2939
    connection management call elapsed        1660
    failed parse elapsed time                    0
    failed parse (out of shared memory)          0
    hard parse (sharing criteria) elaps          0
    hard parse (bind mismatch) elapsed           0
    PL/SQL execution elapsed time               95
    inbound PL/SQL rpc elapsed time              0
    PL/SQL compilation elapsed time              0
    Java execution elapsed time                  0
    repeated bind elapsed time                  48
    RMAN cpu time (backup/restore)               0
    The session is not reporting additional CPU usage or parse time.
    Let's check one of the session's statistics:
    SELECT
      SS.VALUE
    FROM
      V$SESSTAT SS,
      V$STATNAME SN
    WHERE
      SN.NAME='consistent gets'
      AND SN.STATISTIC#=SS.STATISTIC#
      AND SS.SID=310;
         VALUE
           163
    Not many consistent gets after 20+ minutes.
    Let's take a look at the plan:
    SQL> SELECT SQL_ID,CHILD_NUMBER FROM V$SQL WHERE SQL_TEXT LIKE 'select 1 from dual%';
    SQL_ID        CHILD_NUMBER
    04mpgrzhsv72w            0
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('04mpgrzhsv72w',0,'TYPICAL'))
    select 1 from dual where regexp_like   (' ','^*[ ]*a')
    NOTE: cannot fetch plan for SQL_ID: 04mpgrzhsv72w, CHILD_NUMBER: 0
          Please verify value of SQL_ID and CHILD_NUMBER;
          It could also be that the plan is no longer in cursor cache (check v$sql_plan)
    No plan...
    Let's take a look at the 10053 trace file:
    Registered qb: SEL$1 0x19157f38 (PARSER)
      signature (): qb_name=SEL$1 nbfros=1 flg=0
        fro(0): flg=4 objn=258 hint_alias="DUAL"@"SEL$1"
    Predicate Move-Around (PM)
    PM: Considering predicate move-around in SEL$1 (#0).
    PM:   Checking validity of predicate move-around in SEL$1 (#0).
    CBQT: Validity checks failed for 7uqx4guu04x3g.
    CVM: Considering view merge in query block SEL$1 (#0)
    CBQT: Validity checks failed for 7uqx4guu04x3g.
    Subquery Unnest
    SU: Considering subquery unnesting in query block SEL$1 (#0)
    Set-Join Conversion (SJC)
    SJC: Considering set-join conversion in SEL$1 (#0).
    Predicate Move-Around (PM)
    PM: Considering predicate move-around in SEL$1 (#0).
    PM:   Checking validity of predicate move-around in SEL$1 (#0).
    PM:     PM bypassed: Outer query contains no views.
    FPD: Considering simple filter push in SEL$1 (#0)
    FPD:   Current where clause predicates in SEL$1 (#0) :
              REGEXP_LIKE (' ','^*[ ]*a')
    kkogcp: try to generate transitive predicate from check constraints for SEL$1 (#0)
    predicates with check contraints:  REGEXP_LIKE (' ','^*[ ]*a')
    after transitive predicate generation:  REGEXP_LIKE (' ','^*[ ]*a')
    finally:  REGEXP_LIKE (' ','^*[ ]*a')
    apadrv-start: call(in-use=592, alloc=16344), compile(in-use=37448, alloc=42256)
    kkoqbc-start
                : call(in-use=592, alloc=16344), compile(in-use=38336, alloc=42256)
    kkoqbc-subheap (create addr=000000001915C238)
    Looks like the query never had a chance to start executing - it is still parsing after 20 minutes.
    I am not sure that this is a good example - the query either executes very fast, or never has a chance to start executing. But it might still make your point that physical I/O is not always the problem when performance problems are experienced.
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.
