Explain Plan and Cost

Hi,
Is there any relationship between the cost figure in the explain plan and query execution time? I have come across situations both ways, i.e. high cost with low execution time, and low cost with high execution time. What should we pay attention to when tuning a query: high cost or low cost? My assumption is that a lower cost does not guarantee a faster response time, yet I have seen many people try to reduce the cost first. Can anyone help me with this issue, please?
Thanks, Bhaskar

Cost is a metric that Oracle uses to estimate the runtime of a query. However, it is probably not something you ought to pay much attention to. If you are looking at the performance of a query, you are presumably operating on the assumption that the optimizer may have chosen an incorrect plan. If the optimizer chose an incorrect plan, then by definition the cost is incorrect. If the cost is correct then, by definition, Oracle chose the correct plan.
It makes much more sense to focus on things like the cardinality of each step (the number of rows each step in the plan is expected to return). If the cardinality estimates are correct (or reasonably close), then the optimizer probably chose the correct plan. If the cardinality estimates are way off, the optimizer probably chose a less than optimal plan. And when you find the first step where the cardinality estimates are wildly incorrect, you'll know where you need to start focusing your tuning efforts.
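One practical way to make that comparison is to run the statement once with row-source statistics enabled and then put the estimated and actual row counts side by side. A minimal sketch (the COUNT query is just a stand-in for the statement being tuned):

-- Execute once with row-source statistics collection enabled
SELECT /*+ GATHER_PLAN_STATISTICS */ COUNT(*) FROM all_objects;

-- E-Rows (estimated) vs. A-Rows (actual) for each step of the last cursor
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));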
Justin

Similar Messages

  • General queries regarding explain plan and query

    Hello Oracle buddies,
    I have a few badly formed queries with plenty of nested loops, merge join Cartesians, plenty of sorting, and many subqueries. The costs of these queries are extremely high;
    some even show a cost of 130 crore (about 1.3 billion).
    When I got the chance to look into those queries, I tested them on a non-prod system that holds roughly 90-95% of the prod data, as it was refreshed from PROD a few weeks back.
    I found that some queries have the same explain plan, but the cost is far lower, for example 5000 or 6000.
    When I checked for the possibility of wrong statistics, I found they are just collected with the default settings.
    On non-prod I saw that only the auto stats job runs, and most tables show LAST_ANALYZED as the day of the refresh.
    Now, what could be the differentiating factor that makes a query's cost lower than on the prod system when the data is almost the same? Also, if prod gathers statistics only with gather_schema_stats('SCHEMA_NAME'), then the same stats should carry over to non-prod during the refresh. I know people do not gather stats on test; only the auto job runs.
    I need clear proof before I can have a clear understanding.
    Please help me understand what factors could be different.
    -Regards,
    J_DBA_Sourav

    j_DBA_sourav wrote:
    Hello Jonathan,
    Thanks for the reply. The team refreshed it by the expdp/impdp method, where statistics are included by default. In that case?
    Is this problem likely to be caused by statistics alone, or could something else also be responsible? Even the explain plan is the same.
    Please throw some light on it.
    The auto job is on, as I stated earlier, but on the test system most tables show the refresh date as LAST_ANALYZED.
    -Regards
    JDS
    Anything that puts the stats out of sync with each other may be sufficient to cause problems. Any queries that depend on sysdate may cause a problem.
    As a simple check:  select table_name, last_analyzed from user_tables on the two systems, with some order by clause (e.g. table_name), and see how many tables were analysed at different times - anything on either system after the exp/imp could be part of your problem.
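    Spelled out, that check might look like this (a sketch of exactly the query described above):

    -- Run on both systems and compare; anything analysed after the
    -- expdp/impdp refresh is a candidate for the stats mismatch
    SELECT table_name, last_analyzed
    FROM   user_tables
    ORDER  BY table_name;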
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    Now on Twitter: @jloracle

  • Table or Function module to get Internal order planning and Cost element pl

    Dear All,
    Table or Function module to get Internal order planning and Cost element planning.
    Internal order planning from T-code KO13.
    Thanks in advance.
    Regards,
    Ravi

    BPEJ, BPEG, BPEP

  • How to proceed further once the explain plan and trace files are generated?

    Hi Friends,
    I need to improve the performance of one of the views that I am working on.
    As suggested in the thread http://forums.oracle.com/forums/thread.jspa?threadID=863295&tstart=0 , I have generated the explain plan and the trace file.
    From the explain plan, we can see the expensive operations for the query.
    Can anyone please tell me how to proceed from here, i.e. how to make these expensive operations less expensive?
    For example, a FULL TABLE SCAN might be an expensive operation when the table has indexes. In such cases, how can we avoid such operations to make the query faster?
    Regards,
    Sreekanth Munagala.

    Hi Veena,
    An earlier post by you regarding P45 is below:
    Starter report P45(3) / P46 efiling for UK
    From my understanding (though I have not worked on GB Payroll), you said that you deleted the IT 65 details of the leaver; however, there must be clusters generated in the system from which the earlier data needs to be deleted, and that may be why you are facing the issue.
    In Indian payroll, when we execute the text file for e-filing of tax after challan mapping, all the data compiles and sits in the PCL cluster, and therefore we are unable to generate Form 16 with proper output; here we delete the clusters, rerun the mappings, and then check Form 16.
    Hope this might help you. Experts have suggested this to you earlier as well; they may correct me on this.
    Salil

  • Efficient way to read through a big explain plan and generate HTML explain plans for SQL IDs executed in the past

    Hi,
    I have a SQL statement that has recently been having performance problems in production. I have generated an explain plan for it, trying to find out what it is doing, but the plan itself is close to 1000 lines. I want to check:
    1) Is there an efficient way to go through a big plan like this one and quickly find the damaging areas?
    2) Is there a way to generate explain plans in HTML format for statements executed in the past that have an entry in dba_hist_sqltext?
    3) I also have two SQL Monitor reports which I want to compare. Is there an efficient way to do that as well?
    Please share your thoughts!
    Thanks in advance!
    Regards,
    Suman-

    Hi,
    I suggest you try running the SQL Access Advisor on the query; maybe something fruitful comes up:
    http://www.oracle-base.com/articles/11g/sql-access-advisor-11gr1.php
    I am not sure about printing the explain plan in HTML format, but
    you may also want to try the sqlhistory.sql script from the page below:
    http://evdbt.com/scripts/
    I have used it many times to check on executions and explain plans that may have changed over a period.
    I have faced this many times: the query picks up a bad explain plan and performs poorly.
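    On point 2: DBMS_XPLAN.DISPLAY_AWR can at least pull the historical plans for a sql_id out of AWR (text rather than HTML), and DBMS_SQLTUNE.REPORT_SQL_MONITOR can produce an HTML report for monitored executions. A sketch, with &sql_id as a placeholder (both require the appropriate diagnostics/tuning pack licences):

    -- Historical plans recorded in AWR for one sql_id (text output)
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_AWR('&sql_id'));

    -- HTML SQL Monitor report for the last monitored execution
    SELECT DBMS_SQLTUNE.REPORT_SQL_MONITOR(sql_id => '&sql_id', type => 'HTML')
    FROM   dual;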

  • Difference between COPC planning and cost object controlling

    I want to know why we have to calculate the cost of a manufactured product by both Product Cost Planning (COPC planning) and Cost Object Controlling by order.
    Will the cost of a manufactured product be different between the COPC planning method and the cost object controlling method?
    If we use only cost object controlling to calculate the cost of a manufactured product, without using COPC planning, what will be the implications?

    Raja,
    Yes, the cost can be different. The cost calculated through Product Cost Planning is static; you calculate it by period, and it is more for planning purposes and to set a standard. The cost calculated through Cost Object Controlling results from your actual transactions, and there are a few variables that can increase or decrease the cost of the product. For example, overhead costs (an increase in salaries) can directly affect your activity rate at the work center and impact the cost of your product when the production order is complete. You need both costs (Product Cost Planning and Cost Object Controlling) so that at the end of the period you can calculate the variance between your plan cost and your actual costs.
    Hope this clarifies your doubts.
    GG

  • Problems with explain plan and statement

    Hi community,
    I have migrated a j2ee application from DB2 to Oracle.
    First some facts of our application and database instance:
    We are using Oracle version 10.2.0.3 and driver version 10.2.0.3. It runs with charset Unicode 3.0 UTF-8.
    Our application uses Tomcat as the web container and JBoss as the application server. We are using only prepared statements, so if I talk about statements I always mean prepared statements. Our application also sets the defaultNChar property to true, because every char and varchar field has been created as nchar and nvarchar.
    We have some JSP pages that contain lists with search forms. Every time I enter a value into a form that returns a filled result set, the lists perform great. But every time I enter a value that returns an empty result set, the lists are 100 times slower. The JSP pages run in the Tomcat environment and submit their statements directly to the database. The connections are pooled by DBCP. So what can cause this behaviour?
    To analyze this problem I started logging all statements and the filled-in search field values and combinations executed by the lists described above. I also developed a standalone helper tool that reads the logged statements, executes them against the database, and generates an explain plan for every statement. But now a strange situation appears. Every statement that performs really fast within our application is executed extremely slowly by the helper tool. So I edited some JSP pages within our application to force an explain plan from there (Tomcat env). When executing the same statement with exactly the same code, I get two completely different explain plans.
    First the statement itself:
    select LINVIN.BBASE , INVINNUM , INVINNUMALT , LINVIN.LSUPPLIERNUM , LSUPPLIERNUMEXT , LINVIN.COMPANYCODE , ACCOUNT , INVINTXT , INVINSTS , INVINTYP , INVINDAT , RECEIPTDAT , POSTED , POSTINGDATE , CHECKCOSTCENTER , WORKFLOWIDEXT , INVINREFERENCE , RESPONSIBLEPERS , INVINSUM_V , INVINSUMGROSS_V , VOUCHERNUM , HASPOSITIONS , PROCESSINSTANCEID , FCURISO_V , LSUPPLIER.AADDRLINE1 from LINVIN, LSUPPLIER where LINVIN.BBASE = LSUPPLIER.BBASE and LINVIN.LSUPPLIERNUM = LSUPPLIER.LSUPPLIERNUM and LINVIN.BBASE = ? order by LINVIN.BBASE, INVINDAT DESC
    Now the explain plan from our application:
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 101 | 28583 | 55 (0)| 00:00:01 |
    | 1 | NESTED LOOPS | | 101 | 28583 | 55 (0)| 00:00:01 |
    | 2 | TABLE ACCESS BY INDEX ROWID| LINVIN | 93709 | 12M| 25 (0)| 00:00:01 |
    |* 3 | INDEX RANGE SCAN | LINV_INVDAT | 101 | | 1 (0)| 00:00:01 |
    | 4 | TABLE ACCESS BY INDEX ROWID| LSUPPLIER | 1 | 148 | 1 (0)| 00:00:01 |
    |* 5 | INDEX UNIQUE SCAN | PK_177597 | 1 | | 1 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    3 - access("LINVIN"."BBASE"=:1)
    filter("LINVIN"."BBASE"=:1)
    5 - access("LSUPPLIER"."BBASE"=:1 AND "LINVIN"."LSUPPLIERNUM"="LSUPPLIER"."LSUPPLIERNUM")
    Now the one from the standalone tool:
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 93773 | 25M| | 12898 (1)| 00:02:35 |
    | 1 | SORT ORDER BY | | 93773 | 25M| 61M| 12898 (1)| 00:02:35 |
    |* 2 | HASH JOIN | | 93773 | 25M| 2592K| 7185 (1)| 00:01:27 |
    | 3 | TABLE ACCESS BY INDEX ROWID| LSUPPLIER | 16540 | 2390K| | 332 (0)| 00:00:04 |
    |* 4 | INDEX RANGE SCAN | LSUPPLIER_HAS_BASE_FK | 16540 | | | 11 (0)| 00:00:01 |
    | 5 | TABLE ACCESS BY INDEX ROWID| LINVIN | 93709 | 12M| | 6073 (1)| 00:01:13 |
    |* 6 | INDEX RANGE SCAN | LINVOICE_BMDT_FK | 93709 | | | 84 (2)| 00:00:02 |
    Predicate Information (identified by operation id):
    2 - access("LINVIN"."BBASE"="LSUPPLIER"."BBASE" AND "LINVIN"."LSUPPLIERNUM"="LSUPPLIER"."LSUPPLIERNUM")
    4 - access("LSUPPLIER"."BBASE"=:1)
    6 - access("LINVIN"."BBASE"=:1)
    The table sizes are: LINVIN - 383,692 rows; LSUPPLIER - 115,782 rows.
    As you can see, the one executed from our application is much faster than the one from the helper tool. So why does Oracle pick a completely different explain plan for the same statement? And why is the hash join much slower than the nested loop? Because, if I'm right, a nested loop should only be used when the tables are pretty small.
    I also tried to play with some parameters:
    I set optimizer_index_caching to 100 and optimizer_index_cost_adj to 30. I also changed optimizer_mode to FIRST_ROWS_100.
    I would really appreciate it if somebody could help me with this issue, because I'm getting more and more distressed...
    Thanks in advance,
    Tobias

    tobiwan wrote:
    Hi again,
    Here is the answer:
    The problem, because of which I got two different explain plans, was that the external tool uses the NLS session parameters coming from the OS, which in my case are "de/DE".
    Within our application these parameters are changed to "en/US"! So if, in my external tool, I call the Java function Locale.setDefault(new Locale("en","US")) before connecting to the database, the explain plans are finally equal.

    That might explain why you got two different execution plans: one plan was obviously able to avoid a SORT ORDER BY operation, whereas the second plan was required to run a SORT ORDER BY operation, obviously because of the different NLS_SORT settings. An index by default uses the NLS_SORT = 'binary' order, whereas ORDER BY obeys the NLS_SORT setting, which was probably set to 'GERMAN' in your "external tool" case. You can query the NLS_SESSION_PARAMETERS view to check your current NLS_SORT setting.
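    A minimal check of the setting mentioned above:

    SELECT parameter, value
    FROM   nls_session_parameters
    WHERE  parameter IN ('NLS_SORT', 'NLS_COMP', 'NLS_LANGUAGE');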
    For more information regarding this issue, see my blog note I've written about this some time ago:
    http://oracle-randolf.blogspot.com/2008/09/getting-first-rows-of-large-sorted.html
    Now let me make a guess why you observe the behaviour that it takes so long if your result set is empty:
    The plan avoiding the SORT ORDER BY is able to return the first rows of the result set very quickly, but could take quite a while until all rows are processed, since it potentially requires a lot of iterations of the loop until everything has been processed. Your front end probably displays only the first n rows of the result set by default and therefore works fine with this execution plan.
    Now if the result set is empty, depending on your data, indexes and search criteria, Oracle has to work through all the data using the inefficient NESTED LOOP approach only to find out that no data has been found, and since your application attempts to fetch the first n records, but no records will be found, it has to wait until all data has been processed.
    You can try to reproduce this by deliberately fetching all records of a query that returns data and that uses the NESTED LOOP approach... It probably takes as long as in the case when no records are found.
    Note that you seem to use bind variables and 10g, therefore you might be interested that due to the "bind variable peeking" functionality you might potentially end up with "unstable" plans depending on the values "peeked" when the statement is parsed.
    For more information, see this comprehensive description of the issue:
    http://www.pythian.com/blogs/867/stabilize-oracle-10gs-bind-peeking-behaviour-by-cutting-histograms
    Note that this changes in 11g with the introduction of the "Adaptive Cursor Sharing".
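    A quick way to spot peeking-related plan instability is to look for multiple child cursors with different plans for the same statement (a sketch; '&sql_id' is a placeholder):

    SELECT sql_id, child_number, plan_hash_value, executions
    FROM   v$sql
    WHERE  sql_id = '&sql_id'
    ORDER  BY child_number;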
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Explain plan and SQL profile.

    Hi,
    I got a SQL tuning result for one query, and I need to understand the fields below in the explain plan:
    operation, lineid, object, object_type, order, rows, bytes, cost, time, cpucost, iocost
    Any document to help understand them would be appreciated.
    In the recommendation I got "Consider accepting the recommended SQL profile." Suppose it turns out not to be feasible after accepting the SQL profile: can I revert back?
    Thanks

    user12266475 wrote:
    Hi,
    I got a SQL tuning result for one query, and I need to understand the fields below in the explain plan:
    operation, lineid, object, object_type, order, rows, bytes, cost, time, cpucost, iocost
    Any document to help understand them would be appreciated.
    In the recommendation I got "Consider accepting the recommended SQL profile." Suppose it turns out not to be feasible after accepting the SQL profile: can I revert back?
    Thanks

    http://docs.oracle.com/cd/E11882_01/server.112/e16638/toc.htm
    Thread: HOW TO: Post a SQL statement tuning request - template posting
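    On the revert question: yes, an accepted SQL profile can be dropped again, so the change is reversible. A minimal sketch, where the task and profile names are placeholders:

    -- Accept the profile recommended by a SQL Tuning Advisor task
    BEGIN
       DBMS_SQLTUNE.ACCEPT_SQL_PROFILE(
          task_name => 'my_tuning_task',   -- placeholder task name
          name      => 'my_sql_profile');  -- placeholder profile name
    END;
    /
    -- Revert later by dropping the profile
    BEGIN
       DBMS_SQLTUNE.DROP_SQL_PROFILE(name => 'my_sql_profile');
    END;
    /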

  • Explain plan - lower cost but higher response time in 11g compared to 10g

    Hello,
    I have a strange scenario where I'm migrating a db from a standalone Sun FS server running the 10g RDBMS to a 2-node Sun/ASM 11g RAC environment. The issue is with the response time of queries -
    In 11g Env:
    SQL> select last_analyzed, num_rows from dba_tables where owner='MARKETHEALTH' and table_name='NCP_DETAIL_TAB';
    LAST_ANALYZED NUM_ROWS
    11-08-2012 18:21:12 3413956
    Elapsed: 00:00:00.30
    In 10g Env:
    SQL> select last_analyzed, num_rows from dba_tables where owner='MARKETHEALTH' and table_name='NCP_DETAIL_TAB';
    LAST_ANAL NUM_ROWS
    07-NOV-12 3502160
    Elapsed: 00:00:00.04

    If you look at the response times, even a simple query on dba_tables takes about 8 times longer. Any ideas what might be causing this? I have compared the plans and they are exactly the same; moreover, the cost is lower in the 11g env than in the 10g env, but the response time is still higher.
    BTW, I'm running the queries directly on the server, so there is no network latency in play here.
    Thanks in advance
    aBBy.

    11g Env:
    PLAN_TABLE_OUTPUT
    Plan hash value: 4147636274
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1104 | 376K| 394 (1)| 00:00:05 |
    | 1 | SORT ORDER BY | | 1104 | 376K| 394 (1)| 00:00:05 |
    | 2 | TABLE ACCESS BY INDEX ROWID| NCP_DETAIL_TAB | 1104 | 376K| 393 (1)| 00:00:05 |
    |* 3 | INDEX RANGE SCAN | IDX_NCP_DET_TAB_US | 1136 | | 15 (0)| 00:00:01 |
    PLAN_TABLE_OUTPUT
    Predicate Information (identified by operation id):
    3 - access("UNIT_ID"='ten03.burien.wa.seattle.comcast.net')
    15 rows selected.
    10g Env:
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 4147636274
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1137 | 373K| 389 (1)| 00:00:05 |
    | 1 | SORT ORDER BY | | 1137 | 373K| 389 (1)| 00:00:05 |
    | 2 | TABLE ACCESS BY INDEX ROWID| NCP_DETAIL_TAB | 1137 | 373K| 388 (1)| 00:00:05 |
    |* 3 | INDEX RANGE SCAN | IDX_NCP_DET_TAB_US | 1137 | | 15 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    3 - access("UNIT_ID"='ten03.burien.wa.seattle.comcast.net')
    15 rows selected.
    The query used is:
    explain plan for
    select
    NCP_DETAIL_ID ,
    NCP_ID ,
    STATUS_ID ,
    FIBER_NODE ,
    NODE_DESC ,
    GL ,
    FTA_ID ,
    OLD_BUS_ID ,
    VIRTUAL_NODE_IND ,
    SERVICE_DELIVERY_TYPE ,
    HHP_AUDIT_QTY ,
    COMMUNITY_SERVED ,
    CMTS_CARD_ID ,
    OPTICAL_TRANSMITTER ,
    OPTICAL_RECEIVER ,
    LASER_GROUP_ID ,
    UNIT_ID ,
    DS_SLOT ,
    DOWNSTREAM_PORT_ID ,
    DS_PORT_OR_MOD_RF_CHAN ,
    DOWNSTREAM_FREQ ,
    DOWNSTREAM_MODULATION ,
    UPSTREAM_PORT_ID ,
    UPSTREAM_PORT ,
    UPSTREAM_FREQ ,
    UPSTREAM_MODULATION ,
    UPSTREAM_WIDTH ,
    UPSTREAM_LOGICAL_PORT ,
    UPSTREAM_PHYSICAL_PORT ,
    NCP_DETAIL_COMMENTS ,
    ROW_CHANGE_IND ,
    STATUS_DATE ,
    STATUS_USER ,
    MODEM_COUNT ,
    NODE_ID ,
    NODE_FIELD_ID ,
    CREATE_USER ,
    CREATE_DT ,
    LAST_CHANGE_USER ,
    LAST_CHANGE_DT ,
    UNIT_ID_IP ,
    US_SLOT ,
    MOD_RF_CHAN_ID ,
    DOWNSTREAM_LOGICAL_PORT ,
    STATE
    from markethealth.NCP_DETAIL_TAB
    WHERE UNIT_ID = :B1
    ORDER BY UNIT_ID, DS_SLOT, DS_PORT_OR_MOD_RF_CHAN, FIBER_NODE
    This is the query used for Query 1.
    The stats differences are:
    1. Row counts differ by approximately 90K (more rows in the 10g env).
    2. The RAC env has 4 additional columns (excluded from the select statement for analysis purposes).
    3. Gather stats was performed with estimate_percent = 20 in 10g and estimate_percent = 50 in 11g.
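    Difference 3 alone can move costs and row estimates around. One way to level the field, a sketch reusing the owner and table from the post, is to gather with Oracle's automatic sample size on both systems:

    BEGIN
       DBMS_STATS.GATHER_TABLE_STATS(
          ownname          => 'MARKETHEALTH',
          tabname          => 'NCP_DETAIL_TAB',
          estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
          method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
          cascade          => TRUE);
    END;
    /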

  • Explain Plan and Execution Plan in 10gR2.

    Hi,
    Version 10.2.0.1.0.
    I have two questions:
    1) If the explain plan differs from the execution path in this version, then, is it safe to assume that the statistics are stale (or not gathered at all) on the underlying tables?
    2) Can you in any way make a query use RBO instead of CBO? (I know it doesn't make any sense since CBO is lot smarter, but, for purely academic reasons).
    Thank you,
    Rahul.

    The rule based optimizer is most definitely present in 10gR2. It might not be in the documentation, but it is still there.
    C:\sql>sqlplus test/test
    SQL*Plus: Release 10.2.0.2.0 - Production on Tue Oct 10 15:43:34 2006
    Copyright (c) 1982, 2005, Oracle. All Rights Reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
    With the Partitioning, OLAP and Data Mining options
    test@SVTEST> set autotrace traceonly
    test@SVTEST> alter session set optimizer_mode=rule;
    Session altered.
    Elapsed: 00:00:00.01
    test@SVTEST> select * from dual;
    Elapsed: 00:00:00.03
    Execution Plan
    Plan hash value: 272002086
    | Id | Operation | Name |
    | 0 | SELECT STATEMENT | |
    | 1 | TABLE ACCESS FULL| DUAL |
    Note
    - rule based optimizer used (consider using cbo)
    Statistics
    1 recursive calls
    0 db block gets
    3 consistent gets
    2 physical reads
    0 redo size
    407 bytes sent via SQL*Net to client
    381 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed
    test@SVTEST>
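    For question 2 in the original post, the session setting demonstrated above can also be scoped to a single statement with the (equally deprecated) RULE hint; a minimal sketch:

    -- Forces the rule-based optimizer for this statement only.
    -- Deprecated and unsupported; useful for academic comparison at most.
    SELECT /*+ RULE */ * FROM dual;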

  • Is there any tool to analyze an explain plan and give a report

    Hi All,
    Are there any tools/scripts to analyze an explain plan from PLAN_TABLE and produce an output report?
    Thanks,
    Sankar

    Hi Jaffar,
    Thank you!!!
    The below query will generate the execution tree:
    SELECT OPERATION,
           OPTIONS,
           DECODE(TO_CHAR(ID), '0', '---', OBJECT_NAME) OBJECT_NAME,
           (ID || '-' || NVL(PARENT_ID, 0) || '-' || NVL(POSITION, 0)) "ID**PARENT_ID**EXECUTION_STEP",
           SUBSTR(OPTIMIZER, 1, 8) OPT
    FROM   PLAN_TABLE
    START WITH ID = 0
           AND STATEMENT_ID = 'Q1'
    CONNECT BY PRIOR ID = PARENT_ID
           AND STATEMENT_ID = 'Q1';
    Thanks,
    Sankar
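    For comparison, DBMS_XPLAN.DISPLAY formats the same PLAN_TABLE rows (operations, predicates, costs) without hand-rolling the CONNECT BY; a sketch using the same statement id, with a placeholder statement:

    EXPLAIN PLAN SET STATEMENT_ID = 'Q1' FOR
       SELECT * FROM dual;  -- stand-in for the statement under analysis

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE', 'Q1', 'ALL'));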

  • Problem creating an Explain Plan and using XML Indexes. Please follow the scenario below.

    Hi,
    Oracle Version - Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit
    I have been able to reproduce the error as below:
    Please run the following code in Schema1:
    CREATE TABLE TNAME1
    (
       DB_ID            VARCHAR2 (10 BYTE),
       DATA_ID          VARCHAR2 (10 BYTE),
       DATA_ID2         VARCHAR2 (10 BYTE),
       IDENTIFIER1      NUMBER (19) NOT NULL,
       ID1              NUMBER (10) NOT NULL,
       STATUS1          NUMBER (10) NOT NULL,
       TIME_STAMP       NUMBER (19) NOT NULL,
       OBJECT_ID        VARCHAR2 (40 BYTE) NOT NULL,
       OBJECT_NAME      VARCHAR2 (80 BYTE) NOT NULL,
       UNIQUE_ID        VARCHAR2 (255 BYTE),
       DATA_LIVE        CHAR (1 BYTE) NOT NULL,
       XML_MESSAGE      SYS.XMLTYPE,
       ID2              VARCHAR2 (255 BYTE) NOT NULL,
       FLAG1            CHAR (1 BYTE) NOT NULL,
       KEY1             VARCHAR2 (255 BYTE),
       HEADER1          VARCHAR2 (2000 BYTE) NOT NULL,
       VERSION2         VARCHAR2 (255 BYTE) NOT NULL,
       TYPE1            VARCHAR2 (15 BYTE),
       TIMESTAMP1       TIMESTAMP (6),
       SOURCE_NUMBER    NUMBER
    )
    XMLTYPE XML_MESSAGE STORE AS BINARY XML
    PARTITION BY RANGE (TIMESTAMP1)
       (PARTITION MAX
           VALUES LESS THAN (MAXVALUE)
           NOCOMPRESS
           NOCACHE)
    ENABLE ROW MOVEMENT;
    begin
       app_utils.drop_parameter('TNAME1_PAR');
    end;
    /
    BEGIN
       DBMS_XMLINDEX.REGISTERPARAMETER(
          'TNAME1_PAR',
          'PATH TABLE TNAME1_RP_PT
           PATHS (INCLUDE (/abc:Msg/product/productType
                           /abc:Msg/Products/Owner)
                  NAMESPACE MAPPING (xmlns:abc="Abc:Set"))');
    END;
    /
    CREATE INDEX Indx_XPATH_TNAME1
       ON "TNAME1" (XML_MESSAGE)
       INDEXTYPE IS XDB.XMLINDEX PARAMETERS ( 'PARAM TNAME1_PAR' )
    LOCAL;

    Then in SCHEMA2, create:
    CREATE SYNONYM TNAME1 FOR SCHEMA1.TNAME1;
    And in SCHEMA1:
    GRANT ALL ON TNAME1 TO SCHEMA2;
    Now in SCHEMA2, if we try:
    EXPLAIN PLAN FOR
    SELECT xmltype.getclobval (XML_MESSAGE)
    FROM TNAME1 t
    WHERE XMLEXISTS (
       'declare namespace abc="Abc:Set"; /abc:Msg/product/productType = ("1", "2")'
       PASSING XML_MESSAGE);

    we get -> ORA-00942: table or view does not exist,
    whereas this works:

    EXPLAIN PLAN FOR
    SELECT xmltype.getclobval (XML_MESSAGE)
    FROM TNAME1 t;

    Please tell me what the reason behind this is and how I can overcome it. It's causing all my views based on this condition to fail in the other schema, i.e. they do not pick up the XML indexes.
    Also:
    SELECT * FROM DBA_XML_TAB_COLS WHERE TABLE_NAME LIKE 'TNAME1';

    The output is like:
    OWNER   || TABLE_NAME || COLUMN_NAME  || XMLSCHEMA || SCHEMA_OWNER || ELEMENT_NAME || STORAGE_TYPE || ANYSCHEMA || NONSCHEMA
    SCHEMA1 || TNAME1     || XML_MESSAGE  ||           ||              ||              || BINARY       || NO        || YES
    SCHEMA1 || TNAME1     || SYS_NC00025$ ||           ||              ||              || CLOB         ||           ||

    - Can I change ANYSCHEMA from NO to YES for COLUMN_NAME = XML_MESSAGE? Maybe that would solve my problem.
    - SYS_NC00025$ belongs to the XML index. Why don't I get any values for ANYSCHEMA/NONSCHEMA on it? Is this what is causing the problem?
    Kindly suggest. Thanks.

    The problem sounds familiar. Please create a SR on http://support.oracle.com for this one.

  • Samples to be considered in planning and costing

    Hi, as per ISO standards we need to test certain samples, e.g. from 200 parts we need to test 1 sample. So we cut a small piece from the raw material and test it. Now our requirement is for the system to consider that sample quantity in MRP planning and capture the testing cost. Please advise how we can map this in SAP.

    Hi,
    I could not fully understand your question related to MRP. As per my understanding, you have one metal sheet of 2.4 x 4.8 m in dimensions, and from it you are going to cut a small portion of the steel piece for inspection, right? In that case MRP will not consider it,
    but when you do the production order confirmation, at that point you maintain the yield and scrap quantities to adjust and capture the cost.
    As per your second question:
    In the Quality Management view you have to maintain the order inspection type QM01 (and QM02 as well), with the settlement process to follow as per your requirement (individual/collective).
    I hope this gives you an overview for your issue.
    Thanks
    Shaik

  • Query Performance and reading an Explain Plan

    Hi,
    Below I have posted a query that is running slowly for me - upwards of 10 minutes, which I would not expect. I have also supplied the explain plan. I'm fairly new to explain plans and not sure what the danger signs are that I should be looking out for.
    I have added indexes to these tables, a lot of which are used in the JOIN and so I expected this to be quicker.
    Any help or pointers in the right direction would be very much appreciated -
    SELECT a.lot_id, a.route, a.route_rev
    FROM wlos_owner.tbl_current_lot_status_dim a, wlos_owner.tbl_last_seq_num b, wlos_owner.tbl_hist_metrics_at_op_lkp c
    WHERE a.fw_ver = '2'
    AND a.route = b.route
    AND a.route_rev = b.route_rev
    AND a.fw_ver = b.fw_ver
    AND a.route = c.route
    AND a.route_rev = c.route_rev
    AND a.fw_ver = c.fw_ver
    AND a.prod = c.prod
    AND a.lot_type = c.lot_type
    AND c.step_seq_num >= a.step_seq_num
    PLAN_TABLE_OUTPUT
    Plan hash value: 2447083104

    | Id  | Operation           | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT    |                            |   333 | 33633 |  1347   (8)| 00:00:17 |
    |*  1 |  HASH JOIN          |                            |   333 | 33633 |  1347   (8)| 00:00:17 |
    |*  2 |   HASH JOIN         |                            |   561 | 46002 |  1333   (7)| 00:00:17 |
    |*  3 |    TABLE ACCESS FULL| TBL_CURRENT_LOT_STATUS_DIM | 11782 |   517K|   203   (5)| 00:00:03 |
    |*  4 |    TABLE ACCESS FULL| TBL_HIST_METRICS_AT_OP_LKP |   178K|  6455K|  1120   (7)| 00:00:14 |
    |*  5 |   TABLE ACCESS FULL | TBL_LAST_SEQ_NUM           |  8301 |   154K|    13  (16)| 00:00:01 |

    Predicate Information (identified by operation id):
       1 - access("A"."ROUTE"="B"."ROUTE" AND "A"."ROUTE_REV"="B"."ROUTE_REV" AND
                  "A"."FW_VER"=TO_NUMBER("B"."FW_VER"))
       2 - access("A"."ROUTE"="C"."ROUTE" AND "A"."ROUTE_REV"="C"."ROUTE_REV" AND
                  "A"."FW_VER"="C"."FW_VER" AND "A"."PROD"="C"."PROD" AND "A"."LOT_TYPE"="C"."LOT_TYPE")
           filter("C"."STEP_SEQ_NUM">="A"."STEP_SEQ_NUM")
       3 - filter("A"."FW_VER"=2)
       4 - filter("C"."FW_VER"=2)
       5 - filter(TO_NUMBER("B"."FW_VER")=2)

    24 rows selected.
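    A side note on the predicate section above: the TO_NUMBER("B"."FW_VER") wrapped around the column in steps 1 and 5 means FW_VER in TBL_LAST_SEQ_NUM is stored as a character type, so Oracle implicitly converts the column to compare it with the numeric values, which keeps a plain index on that column from being used. A minimal sketch of the difference (assuming a VARCHAR2 fw_ver column):

    -- fw_ver VARCHAR2: the predicate silently becomes TO_NUMBER(fw_ver) = 2,
    -- so a normal index on fw_ver cannot be range-scanned
    SELECT route, route_rev FROM tbl_last_seq_num WHERE fw_ver = 2;

    -- Matching the literal to the column's datatype keeps the column bare and
    -- the index usable (fixing the column's type, as done below, also works)
    SELECT route, route_rev FROM tbl_last_seq_num WHERE fw_ver = '2';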

    Guys thank you for your help.
    I changed the type of the offending column and the plan looks a lot better and results seem a lot quicker.
    However I have added to my SELECT, quite substantially, and have a new explain plan.
    There are two sections in particular that have a high cost, and I was wondering if you see anything inherently wrong, or can explain more fully what the PLAN_TABLE_OUTPUT descriptions are telling me - in particular the
    INDEX FULL SCAN
    PLAN_TABLE_OUTPUT
    Plan hash value: 3665357134

    | Id  | Operation                        | Name                           | Rows | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                 |                                |    4 |   316 |    52   (2)| 00:00:01 |
    |*  1 |  VIEW                            |                                |    4 |   316 |    52   (2)| 00:00:01 |
    |   2 |   WINDOW SORT                    |                                |    4 |   600 |    52   (2)| 00:00:01 |
    |*  3 |    TABLE ACCESS BY INDEX ROWID   | TBL_HIST_METRICS_AT_OP_LKP     |    1 |    71 |     1   (0)| 00:00:01 |
    |   4 |     NESTED LOOPS                 |                                |    4 |   600 |    51   (0)| 00:00:01 |
    |   5 |      NESTED LOOPS                |                                |   75 |  5925 |    32   (0)| 00:00:01 |
    |*  6 |       INDEX FULL SCAN            | UNIQUE_LAST_SEQ                |   89 |  2492 |    10   (0)| 00:00:01 |
    |   7 |       TABLE ACCESS BY INDEX ROWID| TBL_CURRENT_LOT_STATUS_DIM     |    1 |    51 |     1   (0)| 00:00:01 |
    |*  8 |        INDEX RANGE SCAN          | TBL_CUR_LOT_STATUS_DIM_IDX1    |    1 |       |     1   (0)| 00:00:01 |
    |*  9 |      INDEX RANGE SCAN            | TBL_HIST_METRIC_AT_OP_LKP_IDX1 |   29 |       |     1   (0)| 00:00:01 |

    Predicate Information (identified by operation id):
       1 - filter("SEQ"=1)
       3 - filter("C"."FW_VER"=2 AND "A"."PROD"="C"."PROD" AND "A"."LOT_TYPE"="C"."LOT_TYPE" AND
                  "C"."STEP_SEQ_NUM">="A"."STEP_SEQ_NUM")
       6 - access("B"."FW_VER"=2)
           filter("B"."FW_VER"=2)
       8 - access("A"."ROUTE"="B"."ROUTE" AND "A"."ROUTE_REV"="B"."ROUTE_REV" AND "A"."FW_VER"=2)
       9 - access("A"."ROUTE"="C"."ROUTE" AND "A"."ROUTE_REV"="C"."ROUTE_REV")

  • Explain plan changing between 9i and 11g

    Hi
    Recently we migrated the database from 9i to 11g. In the 11g environment one query was running for a long time, and when we compared the explain plans between 9i and 11g, we found that one table is accessed via "INDEX RANGE SCAN NON UNIQUE" in 9i, but in 11g it is accessed via "INDEX RANGE SCAN INDEX".
    Is there any hint to add so that it will access the table via "INDEX RANGE SCAN NON UNIQUE" in 11g?
    Please Help.

    I agree with Paul.
    Why are you assuming that the optimizer in 11g, which has had bug fixes and vast improvements since 9i, is optimizing the query badly just because you think the query is running slowly?
    Before making assumptions, you need to find the cause of the issue, not just look for differences in two non-comparable explain plans and assume that's the cause. Adding hints to force indexes and suchlike is not the answer (even though some idiots may suggest it is).
    As it says in the documentation in relation to hints:
    Comments
    Hints were introduced in Oracle7, when users had little recourse if the optimizer generated suboptimal plans. Now Oracle provides a number of tools, including the SQL Tuning Advisor, SQL plan management, and SQL Performance Analyzer, to help you address performance problems that are not solved by the optimizer. Oracle strongly recommends that you use those tools rather than hints. The tools are far superior to hints, because when used on an ongoing basis, they provide fresh solutions as your data and database environment change.
