Plan table for a running SQL statement

Hi There,
How can I look at the explain plan for a SQL query that is being run by a particular user?
For example:
User A is running: select * from table x, table y where x.id = y.id;
User B logs in and wants to run an explain plan on that query to get the total cost, etc.
Thanks

user5545873 wrote:
More specifically, I want to know how to connect the session id SID and the SQL_ID (we can get both from v$session_longops) with the v$sql_plan.. or plan_table.
Thanks
Edited by: user5545873 on Oct 2, 2009 12:10 PM
hoek's idea of using Oracle trace and tkprof to generate execution plans should work.
Aside from that, you might be able to trace through the V$ tables to get the info you are looking for. I personally find the V$ tables to be a bit erratic regarding what is there and what is not so you may or may not find everything you are looking for.
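For the trace-and-tkprof route on 10g and later, a minimal sketch (the SID and serial# below are placeholders you would look up in v$session first):

-- Enable SQL trace in the other user's session, let the statement run, then disable it.
BEGIN
  DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123,
                                    serial_num => 4567,
                                    waits      => TRUE,
                                    binds      => FALSE);
END;
/
-- ...while the statement runs...
BEGIN
  DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 4567);
END;
/
-- Then format the trace file written to user_dump_dest, e.g.:
-- tkprof <tracefile>.trc plan.txt sys=no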
Are you asking for the join columns? (A sketch query tying them together follows the table.)
Table                 Column            Datatype       Join Column
-----                 ------            --------       -----------
v$session_longops     sql_id            varchar2(13)   v$sql.sql_id
                      sql_address       raw(8)         v$sql.address
                      sql_hash_value    number         v$sql.hash_value
v$sql_plan            address           raw(8)         v$sql.address
                      child_address     raw(8)         v$sql.child_address
                      child_number      number         v$sql.child_number
                      hash_value        number         v$sql.hash_value
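A hedged sketch using those joins for a running session on 10g or later (SID 123 is a placeholder; v$session exposes the same sql_address / sql_hash_value / sql_child_number columns for the statement the session is currently executing):

-- Read the cached plan rows for exactly the child cursor that session 123 is running.
SELECT pl.id, pl.operation, pl.options, pl.object_name, pl.cardinality, pl.cost
FROM   v$session  s,
       v$sql_plan pl
WHERE  s.sid           = 123
AND    pl.address      = s.sql_address
AND    pl.hash_value   = s.sql_hash_value
AND    pl.child_number = s.sql_child_number
ORDER  BY pl.id;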

Similar Messages

  • Explain plan output for a SQL statement in a JSP page

    Hello all,
    I have a requirement to take a SQL query as input and show both its result and its explain plan in the same JSP page. I can get the SQL result, but I also want to get the EXPLAIN PLAN.
    Can anyone help me with this?
    Thanks
    Kiran

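    A hedged sketch of the SQL the JSP page could issue over JDBC (the statement id 'jsp_req_1' and the query against dual are placeholders; substitute the user-supplied query):

    -- 1) Explain the user's query under a per-request statement id.
    EXPLAIN PLAN SET STATEMENT_ID = 'jsp_req_1' FOR
    SELECT * FROM dual;

    -- 2) Fetch the formatted plan as rows of text to render in the page.
    SELECT plan_table_output
    FROM   TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE', 'jsp_req_1'));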

  • What is the need for the planning table?

    Can anybody explain the planning table to me, and how and where to use it?

    Dear
    Use of the planning table:
    1. Capacity Requirements Planning in discrete manufacturing and PP-PI industries.
    2. For demand management, you can create PIRs and also generate MTS production orders based on the planned orders for those PIRs.
    3. CRP decision making in the planning table (operation dispatching, work centre assignment, scheduling, detailed scheduling, and evaluation of capacity dates and times) is possible under a particular overall profile.
    Refer to: Use of Planning Table (MF50) in Capacity Levelling with Production Orders
    http://help.sap.com/printdocu/core/Print46c/en/data/pdf/PPCRPPPPI/PPCRP_PPPI.pdf
    Regards
    JH

  • Explain plan for a running query

    Hi everyone,
    I know how to generate an explain plan for a given query by running:
    explain plan for select * from emp;
    Now consider a query that has been running for 5 hours in a session; I want to generate the execution plan for that query in its 4th hour, and I don't know the SQL text either.
    All the steps, step by step, would be much appreciated,
    like finding the current SQL and so on.
    Thanks
    Shareef

    You can also use dbms_xplan to generate the plan used in v$sql. For example:
    SQL> SELECT SQL_ID, CHILD_NUMBER FROM V$SQL WHERE SQL_TEXT LIKE 'select * from em%';
    SQL_ID        CHILD_NUMBER
    6kd5fkqdjb8fu            0
    SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('6kd5fkqdjb8fu',0,'ALLSTATS'));
    If you need the actual runtime statistics used by the SQL statement, then you need to put the hint /*+ gather_plan_statistics */ in the SQL statement, something like:
    select /*+ gather_plan_statistics */ * from emp;
    and then generate the explain plan for this
    Have a look
    http://hoopercharles.wordpress.com/2010/03/01/dbms_xplan-format-parameters/
    select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST'));
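    Putting the steps together for a statement that is still running in another session (a sketch; SID 123 is a placeholder, and actual row counts appear only if statistics_level=ALL or the gather_plan_statistics hint was in effect):

    -- 1) Find the sql_id and child cursor of the running statement.
    SELECT sid, sql_id, sql_child_number
    FROM   v$session
    WHERE  sid = 123;

    -- 2) Format its plan, including runtime statistics where available.
    SELECT *
    FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', &child_no, 'ALLSTATS LAST'));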

  • How to delete the line for 300SAP* in table USR02 in SQL Management studio

    Hello
    I used to delete the row for client 300 / user SAP* in table USR02 in SQL Enterprise Manager. After that I could log on with "pass".
    I wonder how to delete it in SQL Server Management Studio. When I expand the database it takes a very long time and becomes unresponsive.

    Hello,
    you have to delete the row with an SQL statement.
    Open a new query and run a script like this:
    use <Your SID DB>                -- e.g. use PRD
    setuser 'your sid in lowercase'  -- e.g. setuser 'prd'
    delete from USR02 where MANDT = '300' and BNAME = 'sap*'
    go
    Run a complete backup before deleting data manually.
    Regards
      Clas
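    A cautious variant of the same script (a sketch only, using the example values from above): confirm the row first, then delete exactly that row.

    use PRD                                            -- your SID database
    setuser 'prd'                                      -- your SID in lowercase
    go
    SELECT MANDT, BNAME FROM USR02 WHERE MANDT = '300' AND BNAME = 'SAP*'   -- BNAME is normally stored in uppercase; adjust if this returns no row
    go
    DELETE FROM USR02 WHERE MANDT = '300' AND BNAME = 'SAP*'
    go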

  • Different 'execution plans' for same sql in 10R2

    DB=10.2.0.5
    OS=RHEL 3
    I'm not sure about this, but I am seeing different execution plans for the same SQL.
    select sql_text from v$sqlarea where sql_id='92mb4z83fg4st'; <---TOP SQL from AWR
    SELECT /*+ OPAQUE_TRANSFORM */ "ENDUSERID","LASTLOGINATTEMPTTIMESTAMP","LOGINSOURCECD","LOGINSUCCESSFLG",
    "ENDUSERLOGINATTEMPTHISTORYID","VERSION_NUM","CREATEDATE"
    FROM "BOMB"."ENDUSERLOGINATTEMPTHISTORY" "ENDUSERLOGINATTEMPTHISTORY";
    SQL> set autotrace traceonly
    SQL> SELECT /*+ OPAQUE_TRANSFORM */ "ENDUSERID","LASTLOGINATTEMPTTIMESTAMP","LOGINSOURCECD","LOGINSUCCESSFLG",
    "ENDUSERLOGINATTEMPTHISTORYID","VERSION_NUM","CREATEDATE"
    FROM "BOMB"."ENDUSERLOGINATTEMPTHISTORY" "ENDUSERLOGINATTEMPTHISTORY"; 2 3
    1822203 rows selected.
    Execution Plan
    Plan hash value: 568996432
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1803K| 75M| 2919 (2)| 00:00:36 |
    | 1 | TABLE ACCESS FULL| ENDUSERLOGINATTEMPTHISTORY | 1803K| 75M| 2919 (2)| 00:00:36 |
    Statistics
    0 recursive calls
    0 db block gets
    133793 consistent gets
    0 physical reads
    0 redo size
    76637183 bytes sent via SQL*Net to client
    1336772 bytes received via SQL*Net from client
    121482 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1822203 rows processed
    ===================================== another plan ===============
    SQL> select * from TABLE(dbms_xplan.display_awr('92mb4z83fg4st'));
    15 rows selected.
    Execution Plan
    Plan hash value: 3015018810
    | Id | Operation | Name |
    | 0 | SELECT STATEMENT | |
    | 1 | COLLECTION ITERATOR PICKLER FETCH| DISPLAY_AWR |
    Note
    - rule based optimizer used (consider using cbo)
    Statistics
    24 recursive calls
    24 db block gets
    49 consistent gets
    0 physical reads
    0 redo size
    1529 bytes sent via SQL*Net to client
    492 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    15 rows processed
    ========= the second one shows only 15 rows...
    Which one is correct?

    Understood, the second plan is for the dbms_xplan query itself.
    Anyhow, I opened a new session where I did NOT turn autotrace on, but the plan is still somewhat different from the original.
    SQL> /
    PLAN_TABLE_OUTPUT
    SQL_ID 92mb4z83fg4st
    SELECT /*+ OPAQUE_TRANSFORM */ "ENDUSERID","LASTLOGINATTEMPTTIMESTAMP","LOGINSOURCECD","
    LOGINSUCCESSFLG","ENDUSERLOGINATTEMPTHISTORYID","VERSION_NUM","CREATEDATE" FROM
    "BOMB"."ENDUSERLOGINATTEMPTHISTORY" "ENDUSERLOGINATTEMPTHISTORY"
    Plan hash value: 568996432
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    PLAN_TABLE_OUTPUT
    | 0 | SELECT STATEMENT | | | | 2919 (100)| |
    | 1 | TABLE ACCESS FULL| ENDUSERLOGINATTEMPTHISTORY | 1803K| 75M| 2919 (2)| 00:00:36 |
    15 rows selected.
    I am just wondering which plan is accurate and which one I should believe.
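    A hedged note: with autotrace enabled, the second output is the plan of the dbms_xplan query itself (hence the COLLECTION ITERATOR PICKLER FETCH), not the plan of sql_id 92mb4z83fg4st. With autotrace off, the stored plans for that sql_id can be listed like this:

    SET AUTOTRACE OFF

    -- Plan(s) captured in AWR for the statement of interest:
    SELECT * FROM TABLE(dbms_xplan.display_awr('92mb4z83fg4st'));

    -- Plan(s) still in the cursor cache, all child cursors:
    SELECT * FROM TABLE(dbms_xplan.display_cursor('92mb4z83fg4st', NULL));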

  • Looking for an SQL query to retreive callvariables + ECC from a RUN SCRIPT RESULT (Translation to VRU)

    Hi Team,
    I am looking for an SQL query to check the data (ECC + call variables) received following a RUN SCRIPT RESULT when requesting an external VRU via a Translation Route to VRU with a "Run External Script".
    I believe the data is split between Termination_Call_Detail and Termination_Call_Variable.
    If you already have such an SQL query I would very much appreciate to have it.
    Thank you and Regards
    Nick

    Omar,
    with all due respect, shortening a one-day interval might not be an option for a historical report ;-)
    I would recommend taking a look at the following SQL query:
    DECLARE @dateFrom DATETIME, @dateTo DATETIME
    SET @dateFrom = '2014-01-24 00:00:00'
    SET @dateTo   = '2014-01-25 00:00:00'
    SELECT
    tcv.DateTime,
    tcd.RecoveryKey,
    tcd.RouterCallKeyDay,
    tcd.RouterCallKey,
    ecv.EnterpriseName AS [ECVEnterpriseName],
    tcv.ArrayIndex,
    tcv.ECCValue
    FROM Termination_Call_Variable tcv
    JOIN
    (SELECT RouterCallKeyDay,RouterCallKey,RecoveryKey FROM Termination_Call_Detail WHERE DateTime > @dateFrom AND DateTime < @dateTo) tcd
    ON tcv.TCDRecoveryKey = tcd.RecoveryKey
    LEFT OUTER JOIN Expanded_Call_Variable ecv ON tcv.ExpandedCallVariableID = ecv.ExpandedCallVariableID
    WHERE tcv.DateTime > @dateFrom AND tcv.DateTime < @dateTo
    With variables, you can parametrize your code (for instance, you could write SET @dateFrom = ? and let the calling application fill in the datetime value for you).
    Also, joining two large tables (TCD-TCV) across all their rows like you did is never a good option.
    Another aspect to consider: all ECCs are actually arrays, so it's not good to leave out the index value (tcv.ArrayIndex).
    G.

  • Managing statistics for object collections used as table types in SQL

    Hi All,
    Is there a way to manage statistics for collections used as table types in SQL.
    Below is my test case
    Oracle Version :
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE    11.2.0.3.0      Production
    TNS for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    SQL>
    Original Query:
    SELECT
         9999,
         tbl_typ.FILE_ID,
         tf.FILE_NM ,
         tf.MIME_TYPE ,
         dbms_lob.getlength(tfd.FILE_DATA)
    FROM
         TG_FILE tf,
         TG_FILE_DATA tfd,
          (
               SELECT *
               FROM
                    TABLE(
                         SELECT
                              CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                              OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
                         FROM
                              dual
                    )
          )     tbl_typ
    WHERE
         tf.FILE_ID     = tfd.FILE_ID
    AND tf.FILE_ID  = tbl_typ.FILE_ID
    AND tfd.FILE_ID = tbl_typ.FILE_ID;
    Elapsed: 00:00:02.90
    Execution Plan
    Plan hash value: 3970072279
    | Id  | Operation                                | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                         |              |     1 |   194 |  4567   (2)| 00:00:55 |
    |*  1 |  HASH JOIN                               |              |     1 |   194 |  4567   (2)| 00:00:55 |
    |*  2 |   HASH JOIN                              |              |  8168 |   287K|   695   (3)| 00:00:09 |
    |   3 |    VIEW                                  |              |  8168 |   103K|    29   (0)| 00:00:01 |
    |   4 |     COLLECTION ITERATOR CONSTRUCTOR FETCH|              |  8168 | 16336 |    29   (0)| 00:00:01 |
    |   5 |      FAST DUAL                           |              |     1 |       |     2   (0)| 00:00:01 |
    |   6 |    TABLE ACCESS FULL                     | TG_FILE      |   565K|    12M|   659   (2)| 00:00:08 |
    |   7 |   TABLE ACCESS FULL                      | TG_FILE_DATA |   852K|   128M|  3863   (1)| 00:00:47 |
    Predicate Information (identified by operation id):
       1 - access("TF"."FILE_ID"="TFD"."FILE_ID" AND "TFD"."FILE_ID"="TBL_TYP"."FILE_ID")
       2 - access("TF"."FILE_ID"="TBL_TYP"."FILE_ID")
    Statistics
              7  recursive calls
              0  db block gets
          16783  consistent gets
          16779  physical reads
              0  redo size
            916  bytes sent via SQL*Net to client
            524  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
               2  rows processed
    Indexes are present on column FILE_ID in both tables (TG_FILE, TG_FILE_DATA).
    select
         index_name,blevel,leaf_blocks,DISTINCT_KEYS,clustering_factor,num_rows,sample_size
    from
         all_indexes
    where table_name in ('TG_FILE','TG_FILE_DATA');
    INDEX_NAME                     BLEVEL LEAF_BLOCKS DISTINCT_KEYS CLUSTERING_FACTOR     NUM_ROWS SAMPLE_SIZE
    TG_FILE_PK                          2        2160        552842             21401       552842      285428
    TG_FILE_DATA_PK                     2        3544        852297             61437       852297      852297
    Ideally the view should have used a NESTED LOOP so that the indexes are used, since the number of rows coming from the object collection is only 2.
    But it takes the default of 8168 rows, leading to a HASH join between the tables and full table scans.
    So my question is: is there any way I can change the statistics while using collections in SQL?
    I can use hints to force the indexes, but I am trying to avoid that for now. Currently the time shown in the explain plan is not accurate.
    Modified query with hints :
    SELECT    
        /*+ index(tf TG_FILE_PK ) index(tfd TG_FILE_DATA_PK) */
        9999,
        tbl_typ.FILE_ID,
        tf.FILE_NM ,
        tf.MIME_TYPE ,
        dbms_lob.getlength(tfd.FILE_DATA)
    FROM
        TG_FILE tf,
        TG_FILE_DATA tfd,
        (
             SELECT *
             FROM
                 TABLE(
                      SELECT
                           CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                           OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
                      FROM
                           dual
                 )
        ) tbl_typ
    WHERE
        tf.FILE_ID     = tfd.FILE_ID
    AND tf.FILE_ID  = tbl_typ.FILE_ID
    AND tfd.FILE_ID = tbl_typ.FILE_ID;
    Elapsed: 00:00:00.01
    Execution Plan
    Plan hash value: 1670128954
    | Id  | Operation                                 | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                          |                 |     1 |   194 | 29978   (1)| 00:06:00 |
    |   1 |  NESTED LOOPS                             |                 |       |       |            |          |
    |   2 |   NESTED LOOPS                            |                 |     1 |   194 | 29978   (1)| 00:06:00 |
    |   3 |    NESTED LOOPS                           |                 |  8168 |  1363K| 16379   (1)| 00:03:17 |
    |   4 |     VIEW                                  |                 |  8168 |   103K|    29   (0)| 00:00:01 |
    |   5 |      COLLECTION ITERATOR CONSTRUCTOR FETCH|                 |  8168 | 16336 |    29   (0)| 00:00:01 |
    |   6 |       FAST DUAL                           |                 |     1 |       |     2   (0)| 00:00:01 |
    |   7 |     TABLE ACCESS BY INDEX ROWID           | TG_FILE_DATA    |     1 |   158 |     2   (0)| 00:00:01 |
    |*  8 |      INDEX UNIQUE SCAN                    | TG_FILE_DATA_PK |     1 |       |     1   (0)| 00:00:01 |
    |*  9 |    INDEX UNIQUE SCAN                      | TG_FILE_PK      |     1 |       |     1   (0)| 00:00:01 |
    |  10 |   TABLE ACCESS BY INDEX ROWID             | TG_FILE         |     1 |    23 |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       8 - access("TFD"."FILE_ID"="TBL_TYP"."FILE_ID")
       9 - access("TF"."FILE_ID"="TBL_TYP"."FILE_ID")
           filter("TF"."FILE_ID"="TFD"."FILE_ID")
    Statistics
              0  recursive calls
              0  db block gets
             16  consistent gets
              8  physical reads
              0  redo size
            916  bytes sent via SQL*Net to client
            524  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              2  rows processed
    Thanks,
    B

    Thanks Tubby,
    While searching I had found out that we can use CARDINALITY hint to set statistics for TABLE funtion.
    But I preferred not to mention it, as it is currently an undocumented hint. I now think I should have mentioned it in my first post.
    http://www.oracle-developer.net/display.php?id=427
    Going through that article, it mentions three hints, plus one further approach, to set the statistics:
    1) CARDINALITY (Undocumented)
    2) OPT_ESTIMATE ( Undocumented )
    3) DYNAMIC_SAMPLING ( Documented )
    4) Extensible Optimiser
    I tried it out with the different hints and they work as expected,
    i.e. cardinality and opt_estimate use the value that I set,
    but the dynamic_sampling hint provides the most accurate estimate of the rows (which is 2 in this particular case).
    With CARDINALITY hint
    SELECT
        /*+ cardinality( e, 5) */ *
    FROM
         TABLE(
              SELECT
                   CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                   OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
              FROM
                   dual
         ) e ;
    Elapsed: 00:00:00.00
    Execution Plan
    Plan hash value: 1467416936
    | Id  | Operation                             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                      |      |     5 |    10 |    29   (0)| 00:00:01 |
    |   1 |  COLLECTION ITERATOR CONSTRUCTOR FETCH|      |     5 |    10 |    29   (0)| 00:00:01 |
    |   2 |   FAST DUAL                           |      |     1 |       |     2   (0)| 00:00:01 |
    With OPT_ESTIMATE hint
    SELECT
         /*+ opt_estimate(table, e, scale_rows=0.0006) */ *
    FROM
         TABLE(
              SELECT
                   CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                   OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
              FROM
                   dual
         ) e ;
    Execution Plan
    Plan hash value: 4043204977
    | Id  | Operation                              | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                       |      |     5 |   485 |    29   (0)| 00:00:01 |
    |   1 |  VIEW                                  |      |     5 |   485 |    29   (0)| 00:00:01 |
    |   2 |   COLLECTION ITERATOR CONSTRUCTOR FETCH|      |     5 |    10 |    29   (0)| 00:00:01 |
    |   3 |    FAST DUAL                           |      |     1 |       |     2   (0)| 00:00:01 |
    With DYNAMIC_SAMPLING hint
    SELECT
        /*+ dynamic_sampling( e, 5) */ *
    FROM
         TABLE(
              SELECT
                   CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                   OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
              FROM
                   dual
         ) e ;
    Elapsed: 00:00:00.00
    Execution Plan
    Plan hash value: 1467416936
    | Id  | Operation                             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                      |      |     2 |     4 |    11   (0)| 00:00:01 |
    |   1 |  COLLECTION ITERATOR CONSTRUCTOR FETCH|      |     2 |     4 |    11   (0)| 00:00:01 |
    |   2 |   FAST DUAL                           |      |     1 |       |     2   (0)| 00:00:01 |
    Note
       - dynamic sampling used for this statement (level=2)
    I will be testing the last option, the Extensible Optimizer, and will put my findings here.
    I hope Oracle, in future releases, improves statistics gathering for collections used in DML instead of just deriving the default from the block size.
    By the way, are you aware why it uses the default block size? Is it because it is the smallest granular unit which Oracle provides?
    Regards,
    B
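    For completeness, a sketch (untested, using the same tables and types as above) of the dynamic_sampling hint applied inside the inline view of the original three-table join, so the optimizer sees roughly 2 rows for the collection and can favour the indexed nested-loop plan:

    SELECT
         9999,
         tbl_typ.FILE_ID,
         tf.FILE_NM,
         tf.MIME_TYPE,
         dbms_lob.getlength(tfd.FILE_DATA)
    FROM
         TG_FILE tf,
         TG_FILE_DATA tfd,
         (SELECT /*+ dynamic_sampling(t, 5) */ *
          FROM TABLE(
                   SELECT CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                                                OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
                   FROM dual
               ) t
         ) tbl_typ
    WHERE tf.FILE_ID  = tfd.FILE_ID
    AND   tf.FILE_ID  = tbl_typ.FILE_ID
    AND   tfd.FILE_ID = tbl_typ.FILE_ID;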

  • Currently running SQL statement's execution plan?

    Hi,
    Is there any way to find out the execution plan of a currently running SQL statement,
    without using EXPLAIN PLAN or autotrace?
    Thanks in advance,
    Thomas.

    I'm using this code. You just have to give the session identifier (&SID):
    SELECT '| Operation                                     |  Object  | Rows  | Bytes|  Cost  | Pstart| Pstop |' AS "Plan Table" FROM DUAL
    UNION ALL
    SELECT '----------------------------------------------------------------------------------------------------' FROM DUAL
    UNION ALL
    SELECT * FROM
             (SELECT /*+ NO_MERGE */
              RPAD('| '||
                   SUBSTR(
                        LPAD(' ',1*(LEVEL-1)) || OPERATION || DECODE(OPTIONS, NULL,'',' '||OPTIONS), 1, 47
                         ), 48, ' '
              )||'|'||
              RPAD(
                   SUBSTR(OBJECT_NAME||' ',1, 9), 10, ' '
              )||'|'||
              LPAD(
                   DECODE(CARDINALITY,
                        NULL,'  ',
                        DECODE(SIGN(CARDINALITY-1000),
                             -1, CARDINALITY||' ',
                             DECODE(SIGN(CARDINALITY-1000000),
                                  -1, TRUNC(CARDINALITY/1000)||'K',
                                  DECODE(SIGN(CARDINALITY-1000000000),
                                       -1, TRUNC(CARDINALITY/1000000)||'M',
                                       TRUNC(CARDINALITY/1000000000)||'G')))
                   ), 7, ' '
              )||'|'||
              LPAD(
                   DECODE(BYTES,
                        NULL,' ',
                        DECODE(SIGN(BYTES-1024),
                             -1, BYTES||' ',
                             DECODE(SIGN(BYTES-1048576),
                                  -1, TRUNC(BYTES/1024)||'K',
                                  DECODE(SIGN(BYTES-1073741824),
                                       -1, TRUNC(BYTES/1048576)||'M',
                                       TRUNC(BYTES/1073741824)||'G')))
                   ), 6, ' '
              )||'|'||
              LPAD(
                   DECODE(COST,
                        NULL,' ',
                        DECODE(SIGN(COST-10000000),
                             -1, COST||' ',
                             DECODE(SIGN(COST-1000000000),
                                  -1, TRUNC(COST/1000000)||'M',
                                  TRUNC(COST/1000000000)||'G'))
                   ), 8, ' '
              )||'|'||
              LPAD(
                   DECODE(PARTITION_START,
                        'ROW LOCATION', 'ROWID',
                        DECODE(PARTITION_START,
                             'KEY', 'KEY',
                             DECODE(PARTITION_START,
                                  'KEY(INLIST)', 'KEY(I)',
                                  DECODE(SUBSTR(PARTITION_START, 1, 6),
                                       'NUMBER', SUBSTR(SUBSTR(PARTITION_START, 8, 10), 1, LENGTH(SUBSTR(PARTITION_START, 8, 10))-1),
                                       DECODE(PARTITION_START,
                                            NULL,' ',
                                            PARTITION_START)))))
                   ||' ', 7, ' '
              )||'|'||
              LPAD(
                   DECODE(PARTITION_STOP,
                        'ROW LOCATION', 'ROW L',
                        DECODE(PARTITION_STOP,
                             'KEY', 'KEY',
                             DECODE(PARTITION_STOP,
                                  'KEY(INLIST)', 'KEY(I)',
                                  DECODE(SUBSTR(PARTITION_STOP, 1, 6),
                                       'NUMBER', SUBSTR(SUBSTR(PARTITION_STOP, 8, 10), 1, LENGTH(SUBSTR(PARTITION_STOP, 8, 10))-1),
                                       DECODE(PARTITION_STOP,
                                            NULL,' ',
                                            PARTITION_STOP)))))
                   ||' ', 7, ' '
              )||'|' AS "Explain plan"
         FROM V$SQL_PLAN
         START WITH (ADDRESS = (SELECT SQL_ADDRESS FROM V$SESSION WHERE SID = &SID)
                   AND HASH_VALUE = (SELECT SQL_HASH_VALUE FROM V$SESSION WHERE SID = &SID)
                   AND CHILD_NUMBER = 0
                   AND ID = 0)
         CONNECT BY PRIOR ID = PARENT_ID
                        AND PRIOR ADDRESS = ADDRESS
                        AND PRIOR HASH_VALUE = HASH_VALUE
                        AND PRIOR CHILD_NUMBER = CHILD_NUMBER
         ORDER BY ID, POSITION)
    UNION ALL
    SELECT '----------------------------------------------------------------------------------------------------' FROM DUAL;
    Regards,
    Yoann.
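    On 10g and later, a shorter alternative (a sketch, not from the reply above) is to let DBMS_XPLAN format the plan of the cursor the session is currently executing, again driven by &SID:

    -- Format the cached plan for the child cursor that session &SID is running right now.
    SELECT p.*
    FROM   v$session s,
           TABLE(dbms_xplan.display_cursor(s.sql_id, s.sql_child_number)) p
    WHERE  s.sid = &SID;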

  • Explain plans for PL/SQL code?

    Hi!
    I am pulling SQL statements (select, insert, update, delete, etc.) from PL/SQL code and producing explain plans. Some of the delete statements have "WHERE CURRENT OF" in them, which produces an ORA-03001 (unimplemented feature) error when explained. How can I do an explain plan of these statements? Can I replace "WHERE CURRENT OF" with "=" to get a plan? How far off will the plan be? Any suggestions, ideas, etc. gratefully appreciated!
    This is Oracle 9.2.0.4/5 on AIX if it makes a difference.
    Thanks!
    Dave Venus

    WHERE CURRENT OF shouldn't be a problem. Here is an example on 9.2.0.4 on AIX 5.2:
    Connected to:
    Oracle9i Enterprise Edition Release 9.2.0.4.0 - 64bit Production
    With the Partitioning option
    JServer Release 9.2.0.4.0 - Production
    SQL> create table TBL_USER_PROFILE_CATEGORY
      2  as
      3  select 654 id_category, 1103 id_user from dual union all
      4  select 654 id_category, 1104 id_user from dual union all
      5  select 18  id_category, 1103 id_user from dual union all
      6  select 629 id_category, 1103 id_user from dual union all
      7  select 110 id_category, 1103 id_user from dual union all
      8  select 110 id_category, 1104 id_user from dual union all
      9  select 18  id_category, 1104 id_user from dual union all
    10  select 37  id_category, 1103 id_user from dual union all
    11  select 24  id_category, 1103 id_user from dual union all
    12  select 7   id_category, 104  id_user from dual union all
    13  select 37  id_category, 1104 id_user from dual union all
    14  select 22  id_category, 1103 id_user from dual union all
    15  select 22  id_category, 1104 id_user from dual union all
    16  select 25  id_category, 1104 id_user from dual union all
    17  select 25  id_category, 1103 id_user from dual ;
    Table created.
    SQL>
    SQL> alter table TBL_USER_PROFILE_CATEGORY add primary key (id_category, id_user);
    Table altered.
    SQL>
    SQL> CREATE OR REPLACE
      2  PROCEDURE P$UPDATE_TBL_USER_PROFILE_CAT(p_id_cat_old IN NUMBER) AS
      3    p_id_category number;
      4    p_id_user     number;
      5    cursor mycur is select id_category, id_user
      6                    from TBL_USER_PROFILE_CATEGORY
      7                    where  id_category = p_id_cat_old
      8                    for update of id_category;
      9  BEGIN
    10    open mycur;
    11    LOOP
    12       FETCH mycur INTO p_id_category, p_id_user;
    13       EXIT WHEN mycur%NOTFOUND;
    14       BEGIN
    15 DELETE FROM TBL_USER_PROFILE_CATEGORY
    16 WHERE CURRENT OF mycur;
    17       END;
    18     END LOOP;
    19     CLOSE mycur;
    20     COMMIT;
    21  EXCEPTION WHEN OTHERS THEN rollback;
    22  END;
    23  /
    Procedure created.
    SQL>
    SQL> exec P$UPDATE_TBL_USER_PROFILE_CAT(654)
    PL/SQL procedure successfully completed.
    SQL> select * from TBL_USER_PROFILE_CATEGORY;
    ID_CATEGORY    ID_USER
             18       1103
            629       1103
            110       1103
            110       1104
             18       1104
             37       1103
             24       1103
              7        104
             37       1104
             22       1103
             22       1104
             25       1104
             25       1103
    13 rows selected.
    SQL>
    Please paste your code here...
    Nicolas.
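    If a plan for the DELETE itself is still wanted, a hedged workaround (my sketch, not from the posts above) is to explain an equivalent statement with a rowid bind, since WHERE CURRENT OF positions the delete on the fetched row's rowid:

    -- Approximate DELETE ... WHERE CURRENT OF mycur for explain purposes.
    EXPLAIN PLAN FOR
    DELETE FROM TBL_USER_PROFILE_CATEGORY
    WHERE  ROWID = :rid;

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);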

  • Explain Plan for a SQL

    Hi
    I have run an explain plan on one of my SQL statements and I got the following result:
    SELECT STATEMENT Cost = 502
    SORT AGGREGATE
    HASH JOIN
    NESTED LOOPS
    HASH JOIN
    TABLE ACCESS FULL CMC_NWPE_RELATION
    HASH JOIN
    TABLE ACCESS FULL CMC_NWST_NET_SET
    NESTED LOOPS
    TABLE ACCESS FULL CMC_NWPR_RELATION
    INDEX UNIQUE SCAN SYS_C006513
    INDEX UNIQUE SCAN SYS_C006316
    TABLE ACCESS FULL CMC_MEPR_PRIM_PROV
    I need to know whether I have gone wrong with any of the joins in the SQL. In fact, I need help interpreting the result of the explain plan for my SQL.
    Thanks in advance
    Chandra Sekhar

    Hi
    First of all, you haven't posted the complete explain plan output. From what I can see in your result, you are doing full table scans on CMC_NWPE_RELATION, CMC_NWST_NET_SET, CMC_NWPR_RELATION and one more table.
    Check in your query why these tables are accessed with full scans. When the data in these tables becomes voluminous, this query will become very slow, as it may have to process (rows in table 1) x (rows in table 2) x (rows in table 3) x (rows in table 4). Create indexes on those tables on the columns used in the WHERE clause of your query.
    Amit
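    To post the complete plan, including the Predicate Information section that makes the joins much easier to judge, a generic sketch on 9.2 and later is (replace the query on dual with your own statement):

    EXPLAIN PLAN FOR
    SELECT * FROM dual;

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);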

  • Possible for Native SQL to read Pooled Tables?

    Hi Experts,
    Is it possible to run native SQL against SAP's internal pool table, the table that stores the many "pooled tables" across SAP?  I realize that it is much preferred to access a pooled table using Open SQL via ABAP functions, but that is not an option for me. I am writing native SQL against a copy of the SAP ECC underlying tables.
    If it is possible, what is the technical name of the table?
    Thanks,
    Kevin

    Hello Kevin,
    quite a strange question. As you mentioned, the preferred way is to use Open SQL. The pooled tables are handled by the application server, so they are not accessible by native SQL.
    Independently of that, consider two things:
    1.) The ABAP DDIC sometimes uses different sizes and field lengths, so you should not read LSTR or LRAW fields with native SQL.
    2.) You have to be more careful about transaction handling and parameter settings via SQL, because your application server does not get any information about what you have done before.
    Kind regards,
    Hendrik

  • UDM for long running sql

    Hi
    I am planning to use Enterprise Manager Grid Control to create a UDM based on the following SQL, which should alert me about any SQL that has been running for more than an hour, for all databases. Any ideas on how to do this?
    SELECT substr(swn.sql_text,1,40),
           'SQL is Running on Instance '|| s.inst_id ||' Since '|| ROUND(sl.elapsed_seconds/60) elapsed_mins
    FROM   gv$session_longops sl,
           gv$session s,
           gv$sql swn
    WHERE  s.sid     = sl.sid
    AND    s.inst_id = sl.inst_id
    AND    s.serial# = sl.serial#
    AND    s.inst_id = swn.inst_id
    AND    s.sql_address = swn.address
    AND    s.sql_hash_value = swn.hash_value
    AND    sl.sofar  <> sl.totalwork
    AND    sl.totalwork <> 0
    AND    round((sl.elapsed_seconds)/60,0) > 60
    order by 2
    Edited by: user9243284 on Jun 7, 2010 3:48 AM

    I think you should specify:
    SQL query output: two columns
    Metric Type: String
    and the following query:
    SELECT 'NA',0
    from dual
    union
    SELECT distinct '( ' ||i.instance_name ||','|| sl.sid ||','|| sl.serial# ||', ) ' || substr(s.sql_text,1,1000) sql, ROUND(sl.elapsed_seconds/60) mins
    FROM gv$session_longops sl,
    gv$sql s,
    gv$instance i
    WHERE sl.sofar <> sl.totalwork
    AND sl.totalwork <> 0
    AND sl.inst_id = s.inst_id (+)
    AND sl.sql_address = s.address (+)
    AND sl.sql_hash_value = s.hash_value (+)
    AND sl.inst_id = i.inst_id
    Use the SELECT from dual to make sure your query always returns at least one row.
    BTW, you will find some examples of UDM creation on my blog.
    Regards
    Rob
    http://oemgc.wordpress.com

  • IMG setting for REM Planning Table

    Hi Gurus,
    Can any one throw light on following IMG settings-
    Path -
    IMG --> Production --> REM --> Planning --> Planning table --> Maintain Row Selection --> Receipts: Other version
    What is the relevance of "Other version" in the IMG setting for visible rows in the planning table?
    The planning table output does not change whether the version indicator is checked or unchecked.
    With either setting, the planning table considers all the production versions maintained for the materials.
    What is the purpose of having the check box if the system gives the same result either way?
    Regards,
    Srini

    Hi Srini,
    We create production versions to OPTIMIZE production. We mention the routing and the BOM in the production version. Actually speaking, in the REM scenario we don't even need a planned order; a production order is absolutely not needed. So the planning table shows the production data for all the versions that you have created, irrespective of whether or not you check the indicator. During the planning run, the system will take the first production version by default. So in the planning table you can see the optimum production, and you can also do levelling, which is possible through the row selection option in IMG. Hope this sheds some light on your concern.
    Regards,
    Sreekant.

  • Capacity Reservation Table for a planned order

    Dear experts,
    When I look at MD13, I can see the capacity requirement for a planned order.  I can also see the requirements build up in my CM01 screen.  We would like to pull the data into our BW system, but we are having issues finding where the capacity reservation information resides.  When I look at table KBEZ, I can only find values that correspond to production orders.  Is there a similar table for planned orders?  Does SAP save the detailed scheduling information for planned orders in a table somewhere, or is this calculated at the time MD13 or CM01 is run?
    Thanks,
    Matthew Bruckner

    Have you tried with KBED? I can see with SE16 that you have a field for selection with the planned order number....
