SQL Query analysis plan

Hi,
Please explain how to analyze the plan below to improve performance.
The PLAN_TABLE rows (STATEMENT_ID 31, 08/14/2008 12:44:11, optimizer CHOOSE, cost 826), indented by ID/PARENT_ID; POS is the POSITION column, "inst" the OBJECT_INSTANCE and "search cols" the SEARCH_COLUMNS value:
ID  PID  POS  OPERATION
 0       826  SELECT STATEMENT (CHOOSE)
 1   0     1  . NESTED LOOPS
 2   1     1  . . NESTED LOOPS
 3   2     1  . . . HASH JOIN
 4   3     1  . . . . TABLE ACCESS FULL test.PARTSOFMEASURES (inst 6, ANALYZED)
 5   3     2  . . . . HASH JOIN
 6   5     1  . . . . . TABLE ACCESS FULL test.COMP_PROFILE (inst 4, ANALYZED)
 7   5     2  . . . . . NESTED LOOPS
 8   7     1  . . . . . . NESTED LOOPS
 9   8     1  . . . . . . . TABLE ACCESS FULL test.PARTSOFMEASURES (inst 5, ANALYZED)
10   8     2  . . . . . . . TABLE ACCESS BY INDEX ROWID test.ORDERS (inst 1, ANALYZED)
11  10     1  . . . . . . . . INDEX RANGE SCAN test.ORDERS_FKEY4_IDX (NON-UNIQUE, search cols 1)
12   7     2  . . . . . . INDEX UNIQUE SCAN test.test_PKEY (UNIQUE, ANALYZED, search cols 2)
13   2     2  . . . TABLE ACCESS BY INDEX ROWID test.ORDERSESHIPMENT (inst 2, ANALYZED)
14  13     1  . . . . INDEX RANGE SCAN test.ORDERSSHIPMENT_FKEY1_IDX (NON-UNIQUE, ANALYZED, search cols 1)
15   1     2  . . TABLE ACCESS BY INDEX ROWID test1.COMPANY (inst 7, ANALYZED)
16  15     1  . . . INDEX UNIQUE SCAN test1.COMPANY_PKEY (UNIQUE, ANALYZED, search cols 1)
Thanks,
Sekhar

Without knowing the database version, the underlying tables and indexes, without knowing a bit about the data volumes or statistics, and without seeing the SQL statement, it's impossible for anyone to provide any feedback based solely on a query plan. The CBO thinks this is the most efficient plan, and it has a lot more information than we do, so we'd have no information to argue otherwise.
When you provide all that information, can you also post a nicely formatted query plan (generated via DBMS_XPLAN and formatted to preserve white space so that things line up vertically)?
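Something along these lines keeps the formatting intact (shown with the STATEMENT_ID from the PLAN_TABLE dump above; a plain EXPLAIN PLAN FOR without the id works just as well):
EXPLAIN PLAN SET STATEMENT_ID = '31' FOR
<your query>;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE', '31'));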
Justin

Similar Messages

  • SQL Query on Planning Repository

    Hi all,
    Let's say that I have an Entity Hierarchy as follows:
    - Entity
    + E1
    + E2
    + E3
    E31
    E32
    + E4
    E41
    E42
    + E5
    E6
    E7
    + E8
    E31 (Shared)
    and we have access control set at group level:
    - G1 has write access to Idesc(E1)
    - G2 has write access to Idesc(E2)
    - G3 has write access to Idesc(E3)
    - G4 has write access to Idesc(E4)
    - G5 has write access to Idesc(E5)
    - G8 has write access to Idesc(E8)
    I have one user U2 who belongs to G2.
    How could I build a SQL query to determine if U2 has access to Base Entity E31?
I have taken a look at http://camerons-blog-for-essbase-hackers.blogspot.nl/2011/10/stupid-planning-queries-6-security.html
but we would still have to check whether the base entity belongs to any of the entities to which U2 has access.
    Thanks for any help.
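For what it's worth, against the Planning relational repository the check usually amounts to: find the access rows granted to U2 (directly or via its groups) and see whether the secured member is E31 or one of E31's ancestors. This is a rough sketch only; the HSP_OBJECT / HSP_ACCESS_CONTROL / HSP_USERSINGROUP names follow the usual repository layout and should be verified against your release, and the FLAGS decoding for Idescendants is left out:
SELECT ac.object_id, ac.access_mode, ac.flags, o.object_name AS secured_member
FROM   hsp_access_control ac
       JOIN hsp_object o ON o.object_id = ac.object_id
WHERE  ac.user_id IN (                     -- U2 itself, or any group containing U2
           SELECT u.object_id
           FROM   hsp_object u
           WHERE  u.object_name = 'U2'
           UNION ALL
           SELECT ug.group_id
           FROM   hsp_usersingroup ug
                  JOIN hsp_object u ON u.object_id = ug.user_id
           WHERE  u.object_name = 'U2')
AND    ac.object_id IN (                   -- E31 or any of its ancestors
           SELECT e.object_id
           FROM   hsp_object e
           START WITH e.object_name = 'E31'
           CONNECT BY PRIOR e.parent_id = e.object_id);
If this returns a row whose secured member is E31 itself, or an ancestor with a descendant-type function in FLAGS, then U2 has access.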

    I believe it's commonly known that it's case sensitive. e.g. it's documented here:
    http://livedocs.adobe.com/coldfusion/8/htmldocs/help.html?content=using_recordsets_7.html
    "Unlike the rest of ColdFusion, Query of Queries is case-sensitive. However, Query of Queries supports two string functions, UPPER() and LOWER(), which you can use to achieve case-insensitive matching."

  • Explain SQL Query execution plan: Oracle

    Dear Masters,
Kindly help me understand the execution plan for an SQL statement. I have the following execution plan for a query in the system. How should I interpret it? Thank you in advance for your guidance.
    SELECT STATEMENT ( Estimated Costs = 1.372.413 , Estimated #Rows = 0 )
           5 NESTED LOOPS
             ( Estim. Costs = 1.372.413 , Estim. #Rows = 3.125 )
             Estim. CPU-Costs = 55.798.978.498 Estim. IO-Costs = 1.366.482
               2 TABLE ACCESS BY INDEX ROWID MSEG
                 ( Estim. Costs = 1.326.343 , Estim. #Rows = 76.717 )
                 Estim. CPU-Costs = 55.429.596.575 Estim. IO-Costs = 1.320.451
                 Filter Predicates
                   1 INDEX RANGE SCAN MSEG~R
                     ( Estim. Costs = 89.322 , Estim. #Rows = 60.069.500 )
                     Search Columns: 1
                     Estim. CPU-Costs = 2.946.739.229 Estim. IO-Costs = 89.009
                     Access Predicates
               4 TABLE ACCESS BY INDEX ROWID MKPF
                 ( Estim. Costs = 1 , Estim. #Rows = 1 )
                 Estim. CPU-Costs = 4.815 Estim. IO-Costs = 1
                 Filter Predicates
                   3 INDEX UNIQUE SCAN MKPF~0
                     Search Columns: 3
                     Estim. CPU-Costs = 3.229 Estim. IO-Costs = 0
                     Access Predicates

Hi Panjak,
Yes, that is a badly performing SQL statement. What I can see from this access plan is:
1. The optimizer decided to start the query on index MSEG~R, using only part of the index (a single column) with poor selectivity, paying a large I/O cost for it (about 60 million expected rows) and then doing a lot of work in memory to filter them (see the CPU costs).
So with the parameters passed to the SQL, the plan starts in a very bad way.
2. After that, the program accesses the MSEG table rows identified in the first step, again with heavy database load and filtering (further WHERE criteria on MSEG columns that are not in index R), reducing the result set to 76.717 rows.
3/4. From there, the program goes straight to the primary key index of MKPF (a unique, optimized access) and then reads the corresponding rows from table MKPF.
5. Finally, it loops over the result sets from MSEG and MKPF, joining the rows to produce the final result set.
Do you want to share the SQL, the parameters you are passing, and the code that generates it?
Regards, Fernando Da Ró
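To see why the range scan on MSEG~R is costed so badly, a quick selectivity check on the index's leading columns can help. A sketch only (assumes the DBA views are accessible; adjust the index name and owner to your system):
SELECT ic.column_name, ic.column_position, tc.num_distinct, tc.num_nulls
FROM   dba_ind_columns ic
       JOIN dba_tab_col_statistics tc
         ON  tc.owner       = ic.table_owner
         AND tc.table_name  = ic.table_name
         AND tc.column_name = ic.column_name
WHERE  ic.index_name = 'MSEG~R'
ORDER  BY ic.column_position;
A leading column with very few distinct values would explain both the 60-million-row estimate and the heavy filtering afterwards.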

  • How to create a Matrix table using this data in SQL Query Analyzer

    Hello all,
    I have a problem while I am trying to represent my Sql Table namely table1 in Matrix form
    my table Format is
city1 city2 Distance
--------------------------------------------------------
    Mumbai Delhi 100
    Delhi Banaras 50
    Mumbai Rajasthan 70
    Banaras haryana 40
    Mumbai Mumbai 0
    784 entries
there are 784 cities, each linked to the others.
Now I want my output as:
    Mumbai Delhi Banaras haryana
    Mumbai 0 100 -- --
    Delhi 100 0 50 --
    Banaras
    haryana
    respective distance from one city to other should be shown
    final Matrix would be 784*784
    I am using SQL Query Analyser for this
    Please help me in this regard

    I'm pretty much certain that you don't want to do this in pure SQL. So that means that you want to do it with a reporting tool. I'm not familiar with SQL Query Analyzer, but if it is in fact a reporting tool you'll want to consult its documentation looking for the terms "pivot" or perhaps "cross tab."
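For a small, fixed set of cities the cross-tab idea looks roughly like this in plain SQL (SQL Server syntax, using the sample city names and the table1 name from the question); for all 784 cities you would have to generate the column list dynamically or, as suggested, use a reporting tool:
SELECT city1,
       MAX(CASE WHEN city2 = 'Mumbai'  THEN Distance END) AS Mumbai,
       MAX(CASE WHEN city2 = 'Delhi'   THEN Distance END) AS Delhi,
       MAX(CASE WHEN city2 = 'Banaras' THEN Distance END) AS Banaras,
       MAX(CASE WHEN city2 = 'haryana' THEN Distance END) AS haryana
FROM   table1
GROUP BY city1
ORDER BY city1;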

  • SQL query with Bind variable with slower execution plan

    I have a 'normal' sql select-insert statement (not using bind variable) and it yields the following execution plan:-
    Execution Plan
    0 INSERT STATEMENT Optimizer=CHOOSE (Cost=7 Card=1 Bytes=148)
    1 0 HASH JOIN (Cost=7 Card=1 Bytes=148)
    2 1 TABLE ACCESS (BY INDEX ROWID) OF 'TABLEA' (Cost=4 Card=1 Bytes=100)
    3 2 INDEX (RANGE SCAN) OF 'TABLEA_IDX_2' (NON-UNIQUE) (Cost=3 Card=1)
    4 1 INDEX (FAST FULL SCAN) OF 'TABLEB_IDX_003' (NON-UNIQUE)
    (Cost=2 Card=135 Bytes=6480)
    Statistics
    0 recursive calls
    18 db block gets
    15558 consistent gets
    47 physical reads
    9896 redo size
    423 bytes sent via SQL*Net to client
    1095 bytes received via SQL*Net from client
    3 SQL*Net roundtrips to/from client
    1 sorts (memory)
    0 sorts (disk)
    55 rows processed
I have the same query but run with bind variables (I tested it with both Oracle Forms and SQL*Plus); it takes considerably longer, with a different execution plan:
    Execution Plan
    0 INSERT STATEMENT Optimizer=CHOOSE (Cost=407 Card=1 Bytes=148)
    1 0 TABLE ACCESS (BY INDEX ROWID) OF 'TABLEA' (Cost=3 Card=1 Bytes=100)
    2 1 NESTED LOOPS (Cost=407 Card=1 Bytes=148)
3 2 INDEX (FAST FULL SCAN) OF 'TABLEB_IDX_003' (NON-UNIQUE) (Cost=2 Card=135 Bytes=6480)
    4 2 INDEX (RANGE SCAN) OF 'TABLEA_IDX_2' (NON-UNIQUE) (Cost=2 Card=1)
    Statistics
    0 recursive calls
    12 db block gets
    3003199 consistent gets
    54 physical reads
    9448 redo size
    423 bytes sent via SQL*Net to client
    1258 bytes received via SQL*Net from client
    3 SQL*Net roundtrips to/from client
    1 sorts (memory)
    0 sorts (disk)
    55 rows processed
TABLEA has around 3 million records while TABLEB has 300 records. Is there any way I can improve the speed of the SQL query with bind variables? I have DBA access to the database.
    Regards
    Ivan

    Many thanks for your reply.
I have already gathered statistics for both TABLEA and TABLEB as well as for all the indexes associated with both tables (using dbms_stats; I am on a 9i database), but not for the indexed columns.
    for table I use:-
    begin
    dbms_stats.gather_table_stats(ownname=> 'IVAN', tabname=> 'TABLEA', partname=> NULL);
    end;
    for index I use:-
    begin
    dbms_stats.gather_index_stats(ownname=> 'IVAN', indname=> 'TABLEB_IDX_003', partname=> NULL);
    end;
Could you show me a sample of how to collect statistics for the indexed columns?
    regards
    Ivan
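For what it's worth, a 9i-style gather that also collects column-level (histogram) statistics on the indexed columns and cascades to the indexes might look like this; the same IVAN schema is assumed and the method_opt choice is just an example:
begin
  dbms_stats.gather_table_stats(
    ownname    => 'IVAN',
    tabname    => 'TABLEA',
    cascade    => TRUE,
    method_opt => 'FOR ALL INDEXED COLUMNS SIZE AUTO');
end;
/
With column statistics in place, the optimizer can tell how selective TABLEA_IDX_2 really is for a peeked bind value, which often explains the plan difference between literals and binds.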

  • Can users see the query plan of a SQL query in Oracle?

    Hi,
I wonder, for a given SQL query, whether I can see the query plan Oracle chose after optimization. If yes, how do I do that? Thank you.
    Xing

You can use EXPLAIN PLAN in SQL*Plus:
    SQL>  explain plan for select * from user_tables;
    Explained.
    Elapsed: 00:00:01.63
    SQL> select * from table(dbms_xplan.display());
    PLAN_TABLE_OUTPUT
    Plan hash value: 806004009
    | Id  | Operation                       | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                |          |  2014 |  1123K|   507   (6)| 00:00:07 |
    |*  1 |  HASH JOIN RIGHT OUTER          |          |  2014 |  1123K|   507   (6)| 00:00:07 |
    |   2 |   TABLE ACCESS FULL             | SEG$     |  4809 |   206K|    34   (3)| 00:00:01 |
    |*  3 |   HASH JOIN RIGHT OUTER         |          |  1697 |   873K|   472   (6)| 00:00:06 |
    |   4 |    TABLE ACCESS FULL            | USER$    |    74 |  1036 |     3   (0)| 00:00:01 |
    |*  5 |    HASH JOIN OUTER              |          |  1697 |   850K|   468   (6)| 00:00:06 |
    |   6 |     NESTED LOOPS OUTER          |          |  1697 |   836K|   315   (6)| 00:00:04 |
    |*  7 |      HASH JOIN                  |          |  1697 |   787K|   226   (8)| 00:00:03 |
    |   8 |       TABLE ACCESS FULL         | TS$      |    13 |   221 |     5   (0)| 00:00:01 |
    |   9 |       NESTED LOOPS              |          |  1697 |   759K|   221   (8)| 00:00:03 |
    |  10 |        MERGE JOIN CARTESIAN     |          |  1697 |   599K|   162  (10)| 00:00:02 |
    |* 11 |         HASH JOIN               |          |     1 |   326 |     1 (100)| 00:00:01 |
    |* 12 |          FIXED TABLE FULL       | X$KSPPI  |     1 |    55 |     0   (0)| 00:00:01 |
    |  13 |          FIXED TABLE FULL       | X$KSPPCV |   100 | 27100 |     0   (0)| 00:00:01 |
    |  14 |         BUFFER SORT             |          |  1697 | 61092 |   162  (10)| 00:00:02 |
    |* 15 |          TABLE ACCESS FULL      | OBJ$     |  1697 | 61092 |   161  (10)| 00:00:02 |
    |* 16 |        TABLE ACCESS CLUSTER     | TAB$     |     1 |    96 |     1   (0)| 00:00:01 |
    |* 17 |         INDEX UNIQUE SCAN       | I_OBJ#   |     1 |       |     0   (0)| 00:00:01 |
    |  18 |      TABLE ACCESS BY INDEX ROWID| OBJ$     |     1 |    30 |     1   (0)| 00:00:01 |
    |* 19 |       INDEX UNIQUE SCAN         | I_OBJ1   |     1 |       |     0   (0)| 00:00:01 |
    |  20 |     TABLE ACCESS FULL           | OBJ$     | 52728 |   411K|   151   (4)| 00:00:02 |
    Predicate Information (identified by operation id):
       1 - access("T"."FILE#"="S"."FILE#"(+) AND "T"."BLOCK#"="S"."BLOCK#"(+) AND
                  "T"."TS#"="S"."TS#"(+))
       3 - access("CX"."OWNER#"="CU"."USER#"(+))
       5 - access("T"."DATAOBJ#"="CX"."OBJ#"(+))
       7 - access("T"."TS#"="TS"."TS#")
      11 - access("KSPPI"."INDX"="KSPPCV"."INDX")
      12 - filter("KSPPI"."KSPPINM"='_dml_monitoring_enabled')
      15 - filter("O"."OWNER#"=USERENV('SCHEMAID') AND BITAND("O"."FLAGS",128)=0)
      16 - filter(BITAND("T"."PROPERTY",1)=0)
      17 - access("O"."OBJ#"="T"."OBJ#")
      19 - access("T"."BOBJ#"="CO"."OBJ#"(+))
    42 rows selected.
    Elapsed: 00:00:03.61
If your plan table does not exist, execute the script $ORACLE_HOME/RDBMS/ADMIN/utlxplan.sql to create the table.

  • SQL Query C# Using Execution Plan Cache Without SP

I have a situation where I am executing a SQL query through C# code. I cannot use a stored procedure because the database is hosted by another company and I'm not allowed to create any new procedures. If I run my query in SQL Server Management Studio, the first time takes approximately 3 seconds and every run after that is instant. My query looks up date ranges and accounts, so when I loop through accounts each one takes approximately 3 seconds in my code. If I close the program and run it again, the accounts that originally took 3 seconds are now instant in my code. So my conclusion was that it is using a cached execution plan. I cannot find how to get the execution plan reused for non-stored-procedure code. I have created a SqlCommand object with my query and 3 parameters, and I loop through, keeping the same command object and only changing the 3 parameter values. It seems that each version with different parameter values gets its own cached execution plan, so each becomes fast only for that particular query. My question is: how can I get SQL Server to avoid this, either by loading the execution plan or by making it treat my query as the same plan as the previous one? I have found multiple questions on this that pertain to stored procedures, but nothing for direct text query code.
Bob;
     

I ran the query with different accounts and different dates and got instant results AFTER the very first query, which took the expected 3 seconds. I changed all 3 fields that I have parameters for in the code, and it still remains instant in Management Studio but still remains slow in my code. I'm providing a sample of the base query I'm using.
select i.Field1, i.Field2,
       d.Field3  'Field3',
       ip.Field4 'Field4',
       k.Field5  'Field5'
from SampleDataTable1 i,
     SampleDataTable2 k,
     SampleDataTable3 ip,
     SampleDataTable4 d
where i.Field1 = k.Field1
  and i.Field4 = ip.Field4
  and i.FieldDate between '<fromdate>' and '<thrudate>'
  and k.Field6 = <Account>
Obviously the field names have been altered because the database is not mine, but other than the actual names it is accurate. It works; it just takes too long in code, as described in the initial post.
My parameters are set up during the init of the connection and the command:
    sqlCmd.Parameters.Add("@FromDate", SqlDbType.DateTime);
            sqlCmd.Parameters.Add("@ThruDate", SqlDbType.DateTime);
            sqlCmd.Parameters.Add("@Account", SqlDbType.Decimal);
    Each loop thru the code changes these 3 fields.
        sqlCommand.Parameters["@FromDate"].Value = dtFrom;
        sqlCommand.Parameters["@ThruDate"].Value = dtThru;
        sqlCommand.Parameters["@Account"].Value = sAccountNumber;
SqlDataReader reader = sqlCommand.ExecuteReader();
while (reader.Read())
{
    // process the row
}
reader.Close();
One thing I have noticed is that the account field is decimal(20,0) and the init I'm using defaults to decimal(10), so I'm going to change the init to
       sqlCmd.Parameters["@Account"].Precision = 20;
       sqlCmd.Parameters["@Account"].Scale = 0;
I don't believe this will change anything, but at this point I'm ready to try anything to get the query running faster.
    Bob;
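One thing worth double-checking (an assumption on my part, since the code that builds the SQL text isn't shown): if the '<fromdate>', '<thrudate>' and <Account> placeholders are being substituted into the query string rather than left as the @FromDate/@ThruDate/@Account markers that the parameters above are bound to, then every distinct value compiles its own ad-hoc plan. Keeping the markers in the text lets SQL Server reuse one cached plan for all accounts and dates, e.g. the WHERE clause becomes:
where i.Field1 = k.Field1
  and i.Field4 = ip.Field4
  and i.FieldDate between @FromDate and @ThruDate
  and k.Field6 = @Account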

  • Sql Query need to extract the Work Flow information from Hyperion Planning

Can anyone give me the SQL query to extract the workflow from Hyperion Planning 11.1.2.1?
I can extract it from Hyperion Planning, but it is not in the required format and I have to do a lot of reformatting. I need the information laid out the way the workflow structure is shown for one particular planning unit, in column format, which is why only a SQL query will let me extract this kind of information. Could anyone provide the SQL query for this kind of request?
    Thanks in Advance.
    Edited by: 987185 on Feb 9, 2013 10:57 PM

If you have a look at the data models in http://www.oracle.com/technetwork/middleware/bi-foundation/epm-data-models-11121-354684.zip
you will find the structure of the planning application tables; have a look at the HSP_PM_* tables.
    Alternatively you can look at extracting the information using LCM.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • SQL Query to get Project Plan Name and Resource Name from Reporting database of Project Server 2007

    Can you please help me to write an SQL Query to get Project Plan Name and Resource Name from Reporting database of Project Server 2007. Thanks!!

    Refer
    http://gallery.technet.microsoft.com/projectserver/Server-20072010-SQL-Get-a99d4bc6
    SELECT
    dbo.MSP_EpmAssignment_UserView.ProjectUID,
    dbo.MSP_EpmAssignment_UserView.TaskUID,
    dbo.MSP_EpmProject_UserView.ProjectName,
    dbo.MSP_EpmTask_UserView.TaskName,
    dbo.MSP_EpmAssignment_UserView.ResourceUID,
    dbo.MSP_EpmResource_UserView.ResourceName,
    dbo.MSP_EpmResource_UserView.ResourceInitials
    INTO #TempTable
    FROM dbo.MSP_EpmAssignment_UserView INNER JOIN
    dbo.MSP_EpmProject_UserView ON dbo.MSP_EpmAssignment_UserView.ProjectUID = dbo.MSP_EpmProject_UserView.ProjectUID INNER JOIN
    dbo.MSP_EpmTask_UserView ON dbo.MSP_EpmAssignment_UserView.TaskUID = dbo.MSP_EpmTask_UserView.TaskUID INNER JOIN
    dbo.MSP_EpmResource_UserView ON dbo.MSP_EpmAssignment_UserView.ResourceUID = dbo.MSP_EpmResource_UserView.ResourceUID
    SELECT
    ProjectUID,
    TaskUID,
    ProjectName,
    TaskName,
    STUFF((
    SELECT ', ' + ResourceInitials
    FROM #TempTable
    WHERE (TaskUID = Results.TaskUID)
    FOR XML PATH (''))
    ,1,2,'') AS ResourceInitialsCombined,
    STUFF((
    SELECT ', ' + ResourceName
    FROM #TempTable
    WHERE (TaskUID = Results.TaskUID)
    FOR XML PATH (''))
    ,1,2,'') AS ResourceNameCombined
    FROM #TempTable Results
    GROUP BY TaskUID,ProjectUID,ProjectName,TaskName
    DROP TABLE #TempTable
    -Prashanth

  • Disable SQL query to celltext content in web form of Hyperion Planning

We are using Hyperion Planning version 11.1.1.1. We found that a SQL query is executed for each cell in a web form against the SQL Server metadata database to check whether any cell text is associated with that cell.
As we have not exposed the cell text function to our end users, we would like to disable this check. Since some of our web forms have more than 10,000 cells, generating 10,000 SQL queries against the metadata database is definitely a performance issue.
We have asked the Oracle team whether we can disable the SQL query, but the answer is that no such option exists.
Has anyone experienced this and found a workaround?
Thanks!

Hyperion user wrote:
Alp Burak wrote:
Hi,
We faced the same issue a few years ago. One of our geeks made a change in either Enterdata.js or Enterdata.jsp which disabled form cell validation. I don't have the code with me at the moment, but it wasn't a big change really; commenting out a function could do the trick.
I don't think this is officially recommended by Oracle, though.
Alp
Thanks for your advice. We will try to locate Enterdata.jsp and Enterdata.js and find out where the SQL is being executed. We found Enterdata.js under the WebLogic deployment directory. However, it is over 400 KB with many, many lines of code, so we think it will be very difficult to locate what should be customized to remove the SQL check on cell content.
\\Hqsws04\hyperion\deployments\WebLogic9\servers\HyperionPlanning\webapps\HyperionPlanning

  • Different execution plans for the same SQL query

    Good afternoon,
My customer is running an SQL query that gets slower after some time.
He has checked the execution plan, and it seems to change after a while.
    At first (when the database has just been restarted), it looks like this:
    Rows Row Source Operation
    3 TABLE ACCESS BY GLOBAL INDEX ROWID <TABLE1> PARTITION: 1 1
    22 NESTED LOOPS
    3 VIEW
    3 MINUS
    14215 SORT UNIQUE
    14215 NESTED LOOPS
    14215 TABLE ACCESS FULL <TABLE1> PARTITION: 1 1
    14215 TABLE ACCESS BY GLOBAL INDEX ROWID <TABLE2> PARTITION: 1 1
    14215 INDEX UNIQUE SCAN <INDEX2> (object id 26024)
    14212 SORT UNIQUE
    14212 REMOTE
    18 INDEX RANGE SCAN <INDEX1> (object id 25911)
    After a while, this becomes:
    Rows Row Source Operation
    8 NESTED LOOPS
    14218 TABLE ACCESS FULL <TABLE1> PARTITION: 1 1
    8 VIEW
    113744 MINUS
    202151524 SORT UNIQUE
    14218 NESTED LOOPS
    14218 TABLE ACCESS FULL <TABLE1> PARTITION: 1 1
    14218 TABLE ACCESS BY GLOBAL INDEX ROWID <TABLE2> PARTITION: 1 1
    14218 INDEX UNIQUE SCAN <INDEX_2> (object id 26024)
    202037780 SORT UNIQUE
    14210 REMOTE
At regular intervals a "coalesce" of <INDEX2> is scheduled.
Do you have any idea what might cause these different plans? (It's heavily impacting performance, because the second execution plan takes 5 times longer.)
    Thanks
    Dominique
    Edited by: scampsd on May 17, 2011 1:42 PM

scampsd wrote:
Good afternoon,
My customer is running an SQL query that gets slower after some time.
He has checked the execution plan, and it seems to change after a while.
At first (when the database has just been restarted), it looks like this:
At regular intervals a "coalesce" of <INDEX2> is scheduled.
Do you have any idea what might cause these different plans? (It's heavily impacting performance, because the second execution plan takes 5 times longer.)
Something on your system is changing. Unfortunately, if finding out what were easy, you would have done it already.
You have isolated a couple of things. What happens if you disable the index maintenance?
It is odd that performance, as you describe it, degrades merely because of the time the database has been up.
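One concrete thing to check (a generic sketch, not specific to this system; DBA_TAB_STATS_HISTORY needs 10g or later): whether the optimizer statistics on the tables involved change around the time the plan flips, for example because an automatic statistics job runs alongside the scheduled coalesce.
SELECT owner, table_name, stats_update_time
FROM   dba_tab_stats_history
WHERE  table_name IN ('TABLE1', 'TABLE2')
ORDER  BY stats_update_time;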

  • Error while executing a sql query for select

    HI All,
I'm getting the error "ORA-01652: unable to extend temp segment by 128 in tablespace PSTEMP" while executing the SQL query below to select the data.

I have 44 GB of temp space, and while executing the query below the temp space fills up. Experts, please let us know how the issue can be resolved:
1. I don't want to increase the temp space.
2. I need to tune the query; please provide your recommendations.
    insert /*+APPEND*/ into CST_DSA.HIERARCHY_MISMATCHES
    (REPORT_NUM,REPORT_TYPE,REPORT_DESC,GAP,CARRIED_ITEMS,CARRIED_ITEM_TYPE,NO_OF_ROUTE_OF_CARRIED_ITEM,CARRIED_ITEM_ROUTE_NO,CARRIER_ITEMS,CARRIER_ITEM_TYPE,CARRIED_ITEM_PROTECTION_TYPE,SOURCE_SYSTEM)
    select
    REPORTNUMBER,REPORTTYPE,REPORTDESCRIPTION ,NULL,
    carried_items,carried_item_type,no_of_route_of_carried_item,carried_item_route_no,carrier_items,
    carrier_item_type,carried_item_protection_type,'PACS'
    from
    (select distinct
    c.REPORTNUMBER,c.REPORTTYPE,c.REPORTDESCRIPTION ,NULL,
    a.carried_items,a.carried_item_type,a.no_of_route_of_carried_item,a.carried_item_route_no,a.carrier_items,
    a.carrier_item_type,a.carried_item_protection_type,'PACS'
    from CST_ASIR.HIERARCHY_asir a,CST_DSA.M_PB_CIRCUIT_ROUTING b ,CST_DSA.REPORT_METADATA c
    where a.carrier_item_type in('Connection') and a.carried_item_type in('Service')
    AND a.carrier_items=b.mux
    and c.REPORTNUMBER=(case
    when a.carrier_item_type in ('ServicePackage','Service','Connection') then 10
    else 20
    end)
    and a.carrier_items not in (select carried_items from CST_ASIR.HIERARCHY_asir where carried_item_type in('Connection') ))A
    where not exists
    (select *
    from CST_DSA.HIERARCHY_MISMATCHES B where
    A.REPORTNUMBER=B.REPORT_NUM and
    A.REPORTTYPE=B.REPORT_TYPE and
    A.REPORTDESCRIPTION=B.REPORT_DESC and
    A.CARRIED_ITEMS=B.CARRIED_ITEMS and
    A.CARRIED_ITEM_TYPE=B.CARRIED_ITEM_TYPE and
    A.NO_OF_ROUTE_OF_CARRIED_ITEM=B.NO_OF_ROUTE_OF_CARRIED_ITEM and
    A.CARRIED_ITEM_ROUTE_NO=B.CARRIED_ITEM_ROUTE_NO and
    A.CARRIER_ITEMS=B.CARRIER_ITEMS and
    A.CARRIER_ITEM_TYPE=B.CARRIER_ITEM_TYPE and
    A.CARRIED_ITEM_PROTECTION_TYPE=B.CARRIED_ITEM_PROTECTION_TYPE
AND B.SOURCE_SYSTEM='PACS')
    Explain Plan
    ==========
    Plan
INSERT STATEMENT ALL_ROWS  Cost: 129  Bytes: 1,103  Cardinality: 1
  20 LOAD AS SELECT CST_DSA.HIERARCHY_MISMATCHES
    19 PX COORDINATOR
      18 PX SEND QC (RANDOM) PARALLEL_TO_SERIAL SYS.:TQ10002 :Q1002  Cost: 129  Bytes: 1,103  Cardinality: 1
        17 NESTED LOOPS PARALLEL_COMBINED_WITH_PARENT :Q1002  Cost: 129  Bytes: 1,103  Cardinality: 1
          15 HASH JOIN RIGHT ANTI NA PARALLEL_COMBINED_WITH_PARENT :Q1002  Cost: 129  Bytes: 1,098  Cardinality: 1
            4 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1002  Cost: 63  Bytes: 359,283  Cardinality: 15,621
              3 PX SEND BROADCAST PARALLEL_TO_PARALLEL SYS.:TQ10001 :Q1001  Cost: 63  Bytes: 359,283  Cardinality: 15,621
                2 PX BLOCK ITERATOR PARALLEL_COMBINED_WITH_CHILD :Q1001  Cost: 63  Bytes: 359,283  Cardinality: 15,621
                  1 MAT_VIEW ACCESS FULL MAT_VIEW PARALLEL_COMBINED_WITH_PARENT CST_ASIR.HIERARCHY :Q1001  Cost: 63  Bytes: 359,283  Cardinality: 15,621
            14 NESTED LOOPS ANTI PARALLEL_COMBINED_WITH_PARENT :Q1002  Cost: 65  Bytes: 40,256,600  Cardinality: 37,448
              11 HASH JOIN PARALLEL_COMBINED_WITH_PARENT :Q1002  Cost: 65  Bytes: 6,366,160  Cardinality: 37,448
                8 BUFFER SORT PARALLEL_COMBINED_WITH_CHILD :Q1002
                  7 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1002  Cost: 1  Bytes: 214  Cardinality: 2
                    6 PX SEND BROADCAST PARALLEL_FROM_SERIAL SYS.:TQ10000  Cost: 1  Bytes: 214  Cardinality: 2
                      5 INDEX FULL SCAN INDEX CST_DSA.IDX$$_06EF0005  Cost: 1  Bytes: 214  Cardinality: 2
                10 PX BLOCK ITERATOR PARALLEL_COMBINED_WITH_CHILD :Q1002  Cost: 63  Bytes: 2,359,224  Cardinality: 37,448
                  9 MAT_VIEW ACCESS FULL MAT_VIEW PARALLEL_COMBINED_WITH_PARENT CST_ASIR.HIERARCHY :Q1002  Cost: 63  Bytes: 2,359,224  Cardinality: 37,448
              13 TABLE ACCESS BY INDEX ROWID TABLE PARALLEL_COMBINED_WITH_PARENT CST_DSA.HIERARCHY_MISMATCHES :Q1002  Cost: 0  Bytes: 905  Cardinality: 1
                12 INDEX RANGE SCAN INDEX PARALLEL_COMBINED_WITH_PARENT SYS.HIERARCHY_MISMATCHES_IDX3 :Q1002  Cost: 0  Cardinality: 1
          16 INDEX RANGE SCAN INDEX PARALLEL_COMBINED_WITH_PARENT CST_DSA.IDX$$_06EF0001 :Q1002  Cost: 1  Bytes: 5  Cardinality: 1
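While the statement runs, it can help to watch where the temp space is actually going before deciding what to change. A generic sketch (v$tempseg_usage is called v$sort_usage on older releases, and sql_id needs 10g or later):
SELECT s.sid, s.serial#, s.sql_id, u.tablespace, u.segtype,
       ROUND(u.blocks * t.block_size / 1024 / 1024) AS temp_mb
FROM   v$tempseg_usage u
       JOIN v$session s       ON s.saddr = u.session_addr
       JOIN dba_tablespaces t ON t.tablespace_name = u.tablespace
ORDER  BY temp_mb DESC;
Large hash joins and sorts are usually the main temp consumers, so knowing which step overflows tells you where to focus the tuning.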

  • HELP!   SQL Query:  Other ways to reorder column display?

    I have a SQL query report with a large number of columns (users can hide/show columns as desired). It would be great if the column display order could be changed by changing the order of the columns in the SELECT list in the Report Definition, but that doesn't work -- it puts changed or added columns at the end regardless of the order in the SELECT list of the query.
    Is there some other way to reorder the columns displayed without using the Report Attributes page? It's extremely tedious to move columns around using the up/down arrows which redisplays the page each time. Am I missing a way to change display order, or does anyone have a "trick" to do this? It's so painful....
When defining forms you can reorder columns by specifying a sequence number for each column. Just curious as to why reports were not done the same way; are there any plans to address this in a future release?
    Karen

    Yes, reordering columns is extremely painful.
    It is supposed to be much improved in the next version.
    See
    Re: Re-ordering columns on reports
    Moving columns up/down in Report  Attributes
    See my example at
    http://htmldb.oracle.com/pls/otn/f?p=24317:141
    Basically, let the users move columns around until they are blue in the face, provide a Save button to save the column order in a user preference and reorder the columns when the page reloads.
    Or you can use Carl's PL/SQL shuttle as the widget to specify the columns shown and their order. The shuttle is at http://htmldb.oracle.com/pls/otn/f?p=11933:27
    Hope this helps.
    Message was edited by:
    Vikas

  • URGENT HELP Required: Solution to avoid Full table scan for a PL/SQL query

    Hi Everyone,
When I checked the EXPLAIN PLAN for the SQL query below, I saw that full table scans are being done on both tables, TABLE_A and TABLE_B.
    UPDATE TABLE_A a
    SET a.current_commit_date =
    (SELECT MAX (b.loading_date)
    FROM TABLE_B b
    WHERE a.sales_order_id = b.sales_order_id
    AND a.sales_order_line_id = b.sales_order_line_id
    AND b.confirmed_qty > 0
    AND b.data_flag IS NULL
    OR b.schedule_line_delivery_date >= '23 NOV 2008')
Though TABLE_A is a small table with nearly 1 lakh (100,000) records, TABLE_B is a huge table with nearly 2.5 crore (25 million) records.
I created an index on TABLE_B covering all of its columns used in the WHERE clause, but the explain plan still shows only a FULL TABLE SCAN.
When I run the query it takes a very long time to execute (more than 1 day), and each time I have to kill the session.
    Please please help me in optimizing this.
    Thanks,
    Sudhindra

    Check the instruction again, you're leaving out information we need in order to help you, like optimizer information.
    - Post your exact database version, that is: the result of select * from v$version;
    - Don't use TOAD's execution plan, but use
    SQL> explain plan for <your_query>;
SQL> select * from table(dbms_xplan.display);
(You can execute that in TOAD as well.)
    Don't forget you need to use the {noformat}{noformat} tag in order to post formatted code/output/execution plans etc.
    It's also explained in the instruction.
    When was the last time statistics were gathered for table_a and table_b?
You can find out by issuing the following query:
select table_name
     , last_analyzed
     , num_rows
from user_tables
where table_name in ('TABLE_A', 'TABLE_B');
Can you also post the results of these counts?
select count(*)
from table_b
where confirmed_qty > 0;

select count(*)
from table_b
where data_flag is null;

select count(*)
from table_b
where schedule_line_delivery_date >= /* assuming you're using a date, and not a string */ to_date('23 NOV 2008', 'dd mon yyyy');

  • How to view the sql query?

    hi,
How do I view the SQL query formed from the XML structure in the receiver JDBC adapter?

You can view the SAP Note at
http://service.sap.com/notes
but you need an SMP login ID for this, which you should get from your company. The content of the note is as follows:
    Reason and Prerequisites
    You are looking for additional parameter settings. There are two possible reasons why a feature is available via the "additional parameters" table in the "advanced mode" section of the configuration, but not as documented parameter in the configuration UI itself:
    Category 1: The parameter has been introduced for a patch or a SP upgrade where no UI upgrade and/or documentation upgrade was possible. In this case, the parameter will be moved to the UI and the documentation as soon as possible. The parameter in the "additional parameters" table will be deprecated after this move, but still be working. The parameter belongs to the supported adapter functionality and can be used in all, also productive, scenarios.
    Category 2. The parameter has been introduced for testing purposes, proof-of-concept scenarios, as workaround or as pre-released functionality. In this case, the parameter may or may not be moved to the UI and documentation, and the functionality may be changed, replaced or removed. For this parameter category there is no guaranteed support and usage in productive scenarios is not supported.
    When you want to use a parameter documented here, please be aware to which category it belongs!
    Solution
    The following list shows all available parameters of category 1 or 2. Please note:
    Parameter names are always case-sensitive! Parameter values may be case-sensitive, this is documented for each parameter.
Parameter names and values as documented below must always be used without quotation marks ("), unless explicitly stated otherwise.
The default value of a parameter is always chosen so that it does not change the standard functionality.
    JDBC Receiver Adapter Parameters
    1. Parameter name: "logSQLStatement"
                  Parameter type: boolean
                  Parameter value: true for any string value, false only for empty string
                  Parameter value default: false (empty String)
                  Available with: SP9
                  Category: 2
                  Description:
                  When implementing a scenario with the JDBC receiver adapter, it may be helpful to see which SQL statement is generated by the JDBC adapter from the XI message content for error analysis. Before SP9, this can only be found in the trace of the JDBC adapter if trace level DEBUG is activated. With SP9, the generated SQL statement will be shown in the details page (audit protocol) of the message monitor for each message directly.
                  This should be used only during the test phase and not in productive scenarios.
    Regards,
    Prateek
